Science.gov

Sample records for autonomous vision system

  1. Compact Autonomous Hemispheric Vision System

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.

    2012-01-01

Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV (azimuth). A seventh camera, also with a FOV of 92°, is installed normal to the plane of the other 6 cameras, giving the system a > 90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
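As a quick check on the geometry this abstract describes, the six-camera azimuth ring can be verified arithmetically. The spacing and overlap below are derived from the abstract's stated figures (six cameras, 92° FOV each, 360° total azimuth coverage), not from the NASA report itself:

```python
# Back-of-envelope check of the hemispheric camera ring described above:
# six cameras, each with a 92-degree FOV, spaced evenly around 360 degrees.
n_cameras = 6
fov_deg = 92.0
spacing_deg = 360.0 / n_cameras           # angular spacing between camera axes: 60 deg

total_coverage = n_cameras * fov_deg      # 552 deg of combined FOV
overlap_per_pair = fov_deg - spacing_deg  # 32 deg overlap between adjacent cameras

assert total_coverage >= 360.0            # the ring fully covers the azimuth
print(spacing_deg, overlap_per_pair)      # 60.0 32.0
```

The 32° pairwise overlap is what allows seam-free 360° azimuth coverage without any moving parts.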

  2. Spherical vision cameras in a semi-autonomous wheelchair system.

    PubMed

    Nguyen, Jordan S; Su, Steven W; Nguyen, Hung T

    2010-01-01

This paper is concerned with methods developed to extend the capabilities of a spherical vision camera system so that surrounding objects can be detected and assessed for whether or not they pose a danger for movement in a given direction during autonomous navigation of a power wheelchair. A Point Grey Research (PGR) Ladybug2 spherical vision camera system was attached to the power wheelchair for surrounding vision. The objective is to use this Ladybug2 system to provide information about obstacles all around the wheelchair and aid the automated decision-making process involved during navigation. Through instantaneous neural network classification of individual camera images to determine whether obstacles are present, detection of obstacles has been successfully achieved with accuracies reaching 96%. This assistive technology has the purpose of automated obstacle detection, navigational path planning and decision-making, and collision avoidance during navigation. PMID:21097098
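The per-camera obstacle classification described above can be sketched minimally as follows. This is purely illustrative, not the authors' trained network: the pooled-intensity features, the logistic output, and the hand-set weights (which simply respond to overall frame darkness) are all invented for the example.

```python
import numpy as np

# Illustrative sketch: a minimal logistic classifier mapping a downsampled
# grayscale frame to an obstacle / no-obstacle decision, in the spirit of the
# per-camera classification described above. Weights are invented, not trained.

def features(img):
    """Pool a grayscale image into an 8x8 grid of mean intensities."""
    h, w = img.shape
    img = img[: h - h % 8, : w - w % 8]
    return img.reshape(8, img.shape[0] // 8, 8, img.shape[1] // 8).mean(axis=(1, 3)).ravel()

def predict(img, weights, bias):
    """Return True when the (toy) model scores an obstacle as present."""
    z = float(np.clip(features(img) @ weights + bias, -60.0, 60.0))
    return bool(1.0 / (1.0 + np.exp(-z)) > 0.5)

weights = -0.5 * np.ones(64)        # darker frame -> higher obstacle score
bias = 16.0

dark = np.zeros((64, 64))           # frame dominated by a nearby obstacle
bright = 200.0 * np.ones((64, 64))  # unobstructed frame
print(predict(dark, weights, bias), predict(bright, weights, bias))   # True False
```

In practice such a classifier would be trained per camera on labeled frames; the structure above only shows the feature-then-threshold flow.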

  3. Intelligent vision system for autonomous vehicle operations

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1991-01-01

A complex optical system is described, consisting of a 4f optical correlator with programmable filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management.

  4. SUMO/FREND: vision system for autonomous satellite grapple

    NASA Astrophysics Data System (ADS)

    Obermark, Jerome; Creamer, Glenn; Kelm, Bernard E.; Wagner, William; Henshaw, C. Glen

    2007-04-01

    SUMO/FREND is a risk reduction program for an advanced servicing spacecraft sponsored by DARPA and executed by the Naval Center for Space Technology at the Naval Research Laboratory in Washington, DC. The overall program will demonstrate the integration of many techniques needed in order to autonomously rendezvous and capture customer satellites at geosynchronous orbits. A flight-qualifiable payload is currently under development to prove out challenging aspects of the mission. The grappling process presents computer vision challenges to properly identify and guide the final step in joining the pursuer craft to the customer. This paper will provide an overview of the current status of the project with an emphasis on the challenges, techniques, and directions of the machine vision processes to guide the grappling.

  5. New vision system and navigation algorithm for an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Tann, Hokchhay; Shakya, Bicky; Merchen, Alex C.; Williams, Benjamin C.; Khanal, Abhishek; Zhao, Jiajia; Ahlgren, David J.

    2013-12-01

Improvements were made to the intelligence algorithms of an autonomously operating ground vehicle, Q, which competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle to first navigate between two white lines on a grassy obstacle course, then pass through eight GPS waypoints, and finally traverse an obstacle field. Modifications to Q included a new vision system with a more effective image-processing algorithm for white line extraction. The path-planning algorithm was adapted to the new vision system, yielding smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of over 50 teams.
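One simple way to realize the white-line extraction step mentioned above is intensity thresholding followed by a centroid computation. This is a hedged sketch, not team Q's actual algorithm (which is not detailed in the abstract); the threshold value is arbitrary:

```python
import numpy as np

# Illustrative white-line extraction: threshold bright pixels and return their
# centroid, which a path planner could then steer relative to.

def extract_line_centroid(gray, thresh=200):
    """Return the (row, col) centroid of pixels brighter than `thresh`, or None."""
    ys, xs = np.nonzero(gray > thresh)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Toy frame: dark "grass" with a vertical white line at column 12
frame = np.full((20, 30), 60, dtype=np.uint8)
frame[:, 12] = 255
print(extract_line_centroid(frame))   # (9.5, 12.0)
```

On grass, a real system would typically filter in a color space (e.g. low saturation plus high value) rather than raw intensity, to reject sun glare.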

  6. Street Viewer: An Autonomous Vision Based Traffic Tracking System.

    PubMed

    Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano

    2016-01-01

    The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing the traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows one to exploit multi-threading intensively and, on the other side, allows one to improve the overall accuracy and robustness of the system, since each layer is aimed at refining for the following layers the information it receives as input. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and running the system for long periods of time. PMID:27271627
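The learning/on-line mode switching described above can be sketched as a small state machine. This is an illustrative reduction, not the Street Viewer implementation: the flow "model" here is just a mean vehicle count, and the learning length and drift threshold are invented:

```python
# Illustrative sketch of learn/on-line mode switching around a flow model.
# Hypothetical parameters; the real system models per-lane flow patterns.

class FlowMonitor:
    def __init__(self, learn_frames=5, drift_limit=0.5):
        self.mode = "learning"
        self.learn_frames = learn_frames
        self.drift_limit = drift_limit
        self.samples = []
        self.baseline = None
        self.vehicle_count = 0

    def observe(self, vehicles_in_frame):
        if self.mode == "learning":
            self.samples.append(vehicles_in_frame)
            if len(self.samples) >= self.learn_frames:
                self.baseline = sum(self.samples) / len(self.samples)
                self.samples = []
                self.mode = "online"
        else:
            self.vehicle_count += vehicles_in_frame
            # Fall back to learning if the observed flow drifts far from the model
            if abs(vehicles_in_frame - self.baseline) > self.drift_limit * max(self.baseline, 1):
                self.mode = "learning"

m = FlowMonitor()
for v in [2, 2, 2, 2, 2]:
    m.observe(v)                     # learning phase builds the baseline
print(m.mode)                        # online
m.observe(2)                         # consistent with the model: counted
m.observe(9)                         # large deviation: autonomously relearn
print(m.mode, m.vehicle_count)       # learning 11
```

The key property mirrored here is the autonomous fallback: counting continues only while observations remain consistent with the learned model.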

  7. Street Viewer: An Autonomous Vision Based Traffic Tracking System

    PubMed Central

    Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano

    2016-01-01

    The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing the traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows one to exploit multi-threading intensively and, on the other side, allows one to improve the overall accuracy and robustness of the system, since each layer is aimed at refining for the following layers the information it receives as input. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and running the system for long periods of time. PMID:27271627

  8. Semi-autonomous wheelchair developed using a unique camera system configuration biologically inspired by equine vision.

    PubMed

    Nguyen, Jordan S; Tran, Yvonne; Su, Steven W; Nguyen, Hung T

    2011-01-01

This paper is concerned with the design and development of a semi-autonomous wheelchair system using cameras in a system configuration modeled on the vision system of a horse. This new camera configuration utilizes stereoscopic vision for 3-Dimensional (3D) depth perception and mapping ahead of the wheelchair, combined with a spherical camera system for 360 degrees of monocular vision. This unique combination allows for static components of an unknown environment to be mapped and any surrounding dynamic obstacles to be detected, during real-time autonomous navigation, minimizing blind-spots and preventing accidental collisions with people or obstacles. This novel vision system combined with shared control strategies provides intelligent assistive guidance during wheelchair navigation and can accompany any hands-free wheelchair control technology. Leading up to experimental trials with patients at the Royal Rehabilitation Centre (RRC) in Ryde, results have displayed the effectiveness of this system to assist the user in navigating safely within the RRC whilst avoiding potential collisions. PMID:22255649

  9. Vision-based semi-autonomous outdoor robot system to reduce soldier workload

    NASA Astrophysics Data System (ADS)

    Richardson, Al; Rodgers, Michael H.

    2001-09-01

Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions, greatly reducing the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.

  10. Autonomous navigation vehicle system based on robot vision and multi-sensor fusion

    NASA Astrophysics Data System (ADS)

    Wu, Lihong; Chen, Yingsong; Cui, Zhouping

    2011-12-01

The architecture of an autonomous navigation vehicle based on robot vision and multi-sensor fusion technology is described in this paper. Accurate real-time collection and processing of information give the vehicle greater intelligence and robustness. The method used to achieve robot vision and multi-sensor fusion is discussed in detail. Results simulated in several operating modes show that this intelligent vehicle performs well in obstacle identification and avoidance and in path planning, providing higher reliability during vehicle operation.

  11. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    NASA Astrophysics Data System (ADS)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms applied to the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems based exclusively on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm assesses the corners of field lines using the omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than are the color models of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing the extracted features. The process is described as follows: First, radial scan-lines were used to process omni-directional images, reducing the computational load and improving system efficiency. The lines were arranged radially around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is a distorted image, which makes it difficult to recognize the

  12. Application of a hybrid digital-optical cross-correlator as a semi-autonomous vision system

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1993-01-01

We describe a complex optical system consisting of a 4f optical correlator with programmable filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management. It gives intelligent vision to a semi-autonomous vehicle, with the ability to recognize immediate danger to its survival in the near term and to pursue navigational goals by tracking previously identified features.

  13. Research on an autonomous vision-guided helicopter

    NASA Technical Reports Server (NTRS)

    Amidi, Omead; Mesaki, Yuji; Kanade, Takeo

    1994-01-01

    Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.

  14. MMW radar enhanced vision systems: the Helicopter Autonomous Landing System (HALS) and Radar-Enhanced Vision System (REVS) are rotary and fixed wing enhanced flight vision systems that enable safe flight operations in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Cross, Jack; Schneider, John; Cariani, Pete

    2013-05-01

Sierra Nevada Corporation (SNC) has developed rotary- and fixed-wing millimeter-wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar-Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cueing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.

  15. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers

    PubMed Central

    Olivares-Mendez, Miguel A.; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F.; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual

    2015-01-01

Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade of global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods to fight against poachers are not enough, hence the new demands for more efficient approaches. In this context, the use of new technologies on sensors and algorithms, as well as aerial platforms, is crucial to face the sharp increase of poaching activities in the last few years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing. PMID:26703597

  16. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers.

    PubMed

    Olivares-Mendez, Miguel A; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual

    2015-01-01

Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade of global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods to fight against poachers are not enough, hence the new demands for more efficient approaches. In this context, the use of new technologies on sensors and algorithms, as well as aerial platforms, is crucial to face the sharp increase of poaching activities in the last few years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing. PMID:26703597

  17. Computer vision for autonomous robotics in space

    NASA Astrophysics Data System (ADS)

    Wong, Andrew K. C.

    1993-08-01

This paper presents a computer vision system being developed at the Pattern Analysis and Machine Intelligence (PAMI) Lab of the University of Waterloo and at the Vision, Intelligence and Robotics Technologies Corporation (VIRTEK) in support of the Canadian Space Autonomous Robotics Project. This system was originally developed for flexible manufacturing and guidance of autonomous roving vehicles. In the last few years, it has been engineered to support the operations of the Mobile Service System (MSS) (or its equivalent) for the Space Station Project. In the near term, this vision system will provide vision capability for the recognition, location and tracking of payloads as well as for relating the spatial information to the manipulator for capturing, manipulating and berthing payloads. In the long term, it will serve in the role of inspection, surveillance and servicing of the Station. Its technologies will be continually expanded and upgraded to meet the demand as the needs of the Space Station evolve and grow. Its spin-off technologies will benefit the industrial sectors as well.

  18. Design of a dynamic test platform for autonomous robot vision systems

    NASA Technical Reports Server (NTRS)

    Rich, G. C.

    1980-01-01

The concept and design of a dynamic test platform for the development and evaluation of a robot vision system are discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi-laser/multi-detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform and can then be subjected to a wide variety of simulated Rover motions, so that it can be examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process, such as the structure, driving linkages, and motors and transmissions, are treated separately.

  19. The Autonomous Helicopter System

    NASA Astrophysics Data System (ADS)

    Gilmore, John F.

    1984-06-01

    This paper describes an autonomous airborne vehicle being developed at the Georgia Tech Engineering Experiment Station. The Autonomous Helicopter System (AHS) is a multi-mission system consisting of three distinct sections: vision, planning and control. Vision provides the local and global scene analysis which is symbolically represented and passed to planning as the initial route planning constraints. Planning generates a task dependent path for the vehicle to traverse which assures maximum mission system success as well as safety. Control validates the path and either executes the given route or feeds back to previous sections in order to resolve conflicts.

  20. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  1. The study of stereo vision technique for the autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Li, Pei; Wang, Xi; Wang, Jiang-feng

    2015-08-01

Stereo vision technology using two or more cameras can recover 3D information from the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle judge the pavement conditions within the field of view and measure the obstacles on the road. In this paper, stereo vision for obstacle measurement and avoidance on an autonomous vehicle is studied, and the key techniques are analyzed and discussed. The system hardware was built and the software debugged, and the measurement performance is illustrated with measured data. Experiments show that the 3D structure within the field of view can be reconstructed effectively by stereo vision, providing a basis for pavement-condition judgment. Compared with the radar used in unmanned-vehicle navigation and measurement systems, the stereo vision system has advantages such as low cost, and it has good application prospects.
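The core relation behind the 3D recovery described above is the rectified-stereo depth equation Z = f·B/d (focal length f in pixels, baseline B in meters, disparity d in pixels). The sketch below illustrates it with made-up camera parameters; it is not the paper's system:

```python
import numpy as np

# Rectified stereo: depth Z = f * B / d. Zero or negative disparity means no
# match, mapped to infinite depth here. Parameter values are illustrative.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-9), np.inf)

f, B = 700.0, 0.12                       # hypothetical: 700 px focal, 12 cm baseline
print(depth_from_disparity(42.0, f, B))  # 2.0  (an object 2 m away)
```

The inverse relationship (depth resolution degrades quadratically with distance) is why stereo is strongest at short range, matching the near-field obstacle-measurement use case in the abstract.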

  2. A survey of autonomous vision-based See and Avoid for Unmanned Aircraft Systems

    NASA Astrophysics Data System (ADS)

    Mcfadyen, Aaron; Mejias, Luis

    2016-01-01

    This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.

  3. Infrared sensors and systems for enhanced vision/autonomous landing applications

    NASA Technical Reports Server (NTRS)

    Kerr, J. Richard

    1993-01-01

    There exists a large body of data spanning more than two decades, regarding the ability of infrared imagers to 'see' through fog, i.e., in Category III weather conditions. Much of this data is anecdotal, highly specialized, and/or proprietary. In order to determine the efficacy and cost effectiveness of these sensors under a variety of climatic/weather conditions, there is a need for systematic data spanning a significant range of slant-path scenarios. These data should include simultaneous video recordings at visible, midwave (3-5 microns), and longwave (8-12 microns) wavelengths, with airborne weather pods that include the capability of determining the fog droplet size distributions. Existing data tend to show that infrared is more effective than would be expected from analysis and modeling. It is particularly more effective for inland (radiation) fog as compared to coastal (advection) fog, although both of these archetypes are oversimplifications. In addition, as would be expected from droplet size vs wavelength considerations, longwave outperforms midwave, in many cases by very substantial margins. Longwave also benefits from the higher level of available thermal energy at ambient temperatures. The principal attraction of midwave sensors is that staring focal plane technology is available at attractive cost-performance levels. However, longwave technology such as that developed at FLIR Systems, Inc. (FSI), has achieved high performance in small, economical, reliable imagers utilizing serial-parallel scanning techniques. In addition, FSI has developed dual-waveband systems particularly suited for enhanced vision flight testing. These systems include a substantial, embedded processing capability which can perform video-rate image enhancement and multisensor fusion. This is achieved with proprietary algorithms and includes such operations as real-time histograms, convolutions, and fast Fourier transforms.

  4. Insect-Based Vision for Autonomous Vehicles: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Srinivasan, Mandyam V.

    1999-01-01

The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) to explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; and (2) to study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.

  5. Insect-Based Vision for Autonomous Vehicles: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Srinivasan, Mandyam V.

    1999-01-01

    The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.

  6. Multisensorial Vision For Autonomous Vehicle Driving

    NASA Astrophysics Data System (ADS)

    Giusto, Daniele D.; Regazzoni, Carlo S.; Vernazza, Gianni L.

    1989-03-01

A multisensorial vision system for autonomous vehicle driving is presented that operates in outdoor natural environments. The system, currently under development in our laboratories, will be able to integrate data provided by different sensors in order to achieve a more reliable description of a scene and to meet safety requirements. We chose to perform a high-level symbolic fusion of the data to better accomplish the recognition task. A knowledge-based approach is followed, which provides a more accurate solution; in particular, it will be possible to integrate both physical data, furnished by each channel, and different fusion strategies, by using an appropriate control structure. The high complexity of data integration is reduced by acquiring, filtering, segmenting and extracting features from each sensor channel. Production rules, divided into groups according to specific goals, drive the fusion process, linking to a symbolic frame all the segmented regions characterized by similar properties. As a first application, road and obstacle detection is performed. A particular fusion strategy is tested that integrates results separately obtained by applying the recognition module to each different sensor according to the related model description. Preliminary results are very promising and confirm the validity of the proposed approach.

  7. INL Autonomous Navigation System

    Energy Science and Technology Software Center (ESTSC)

    2005-03-30

The INL Autonomous Navigation System provides instructions for autonomously navigating a robot. The system permits high-speed autonomous navigation, including obstacle avoidance, waypoint navigation, and path planning in both indoor and outdoor environments.

  8. Autonomic Nervous System Disorders

    MedlinePlus

    Your autonomic nervous system is the part of your nervous system that controls involuntary actions, such as the beating of your heart ... breathing and swallowing Erectile dysfunction in men Autonomic nervous system disorders can occur alone or as the result ...

  9. Autonomic Nervous System Disorders

    MedlinePlus

    Your autonomic nervous system is the part of your nervous system that controls involuntary actions, such as the beating ... with breathing and swallowing Erectile dysfunction in men Autonomic nervous system disorders can occur alone or as ...

  10. Autonomous robot vision software design using Matlab toolboxes

    NASA Astrophysics Data System (ADS)

    Tedder, Maurice; Chung, Chan-Jin

    2004-10-01

    The purpose of this paper is to introduce a cost-effective way to design robot vision and control software using Matlab for an autonomous robot designed to compete in the 2004 Intelligent Ground Vehicle Competition (IGVC). The goal of the autonomous challenge event is for the robot to autonomously navigate an outdoor obstacle course bounded by solid and dashed lines on the ground. Visual input data is provided by a DV camcorder at 160 x 120 pixel resolution. The design of this system involved writing an image-processing algorithm using hue, saturation, and brightness (HSB) color filtering and Matlab image processing functions to extract the centroid, area, and orientation of the connected regions from the scene. These feature vectors are then mapped to linguistic variables that describe the objects in the world environment model. The linguistic variables act as inputs to a fuzzy logic controller designed using the Matlab fuzzy logic toolbox, which provides the knowledge and intelligence component necessary to achieve the desired goal. Java provides the central interface to the robot motion control and image acquisition components. Field test results indicate that the Matlab-based solution allows for rapid software design, development and modification of our robot system.
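The HSB color filtering and region-feature extraction described above can be illustrated with a minimal pure-Python sketch (the paper uses Matlab's image processing toolbox; the synthetic image, hue window, and helper below are hypothetical stand-ins):

```python
import colorsys

def segment_centroid(image, hue_range, min_val=0.2):
    """HSB-filter an RGB image on hue and brightness, then return the
    centroid and area (pixel count) of the matching pixels."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            if hue_range[0] <= h <= hue_range[1] and v >= min_val:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None, 0
    area = len(xs)
    return (sum(xs) / area, sum(ys) / area), area

# 4x4 synthetic frame: an orange "obstacle" patch on dark ground.
DARK, ORANGE = (40, 40, 40), (255, 128, 0)
image = [[ORANGE if 1 <= x <= 2 and 1 <= y <= 2 else DARK for x in range(4)]
         for y in range(4)]

centroid, area = segment_centroid(image, hue_range=(0.05, 0.12))
print(centroid, area)
```

A feature vector like this (centroid, area) is what would then be mapped to linguistic variables for the fuzzy controller.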

  11. An autonomous vision-based mobile robot

    NASA Astrophysics Data System (ADS)

    Baumgartner, Eric Thomas

    This dissertation describes estimation and control methods for use in the development of an autonomous mobile robot for structured environments. The navigation of the mobile robot is based on precise estimates of the position and orientation of the robot within its environment. The extended Kalman filter algorithm is used to combine information from the robot's drive wheels with periodic observations of small, wall-mounted, visual cues to produce the precise position and orientation estimates. The visual cues are reliably detected by at least one video camera mounted on the mobile robot. Typical position estimates are accurate to within one inch. A path tracking algorithm is also developed to follow desired reference paths which are taught by a human operator. Because of the time-independence of the tracking algorithm, the speed that the vehicle travels along the reference path is specified independent from the tracking algorithm. The estimation and control methods have been applied successfully to two experimental vehicle systems. Finally, an analysis of the linearized closed-loop control system is performed to study the behavior and the stability of the system as a function of various control parameters.
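The odometry-plus-visual-cue estimator described above is an extended Kalman filter; a scalar (one-dimensional) Kalman cycle conveys the core predict/update idea, with all noise values chosen purely for illustration:

```python
def kf_step(x, p, u, z, q=0.04, r=0.01):
    """One predict/update cycle of a scalar Kalman filter.
    x, p: position estimate and its variance; u: odometry increment;
    z: position observed from a wall-mounted visual cue."""
    x, p = x + u, p + q          # predict: dead-reckon, uncertainty grows
    k = p / (p + r)              # Kalman gain
    x = x + k * (z - x)          # update: landmark observation corrects x
    p = (1 - k) * p              # and shrinks the uncertainty
    return x, p

x, p = 0.0, 1.0                  # start from a vague prior
for u, z in [(1.0, 1.05), (1.0, 2.02), (1.0, 2.98)]:
    x, p = kf_step(x, p, u, z)
print(round(x, 2), round(p, 3))
```

Between cue sightings the variance `p` grows with each odometry step, which is why the robot's position stays accurate only with periodic visual observations.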

  12. Coherent laser vision system

    SciTech Connect

    Sebastion, R.L.

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semi-autonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
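FMCW laser radar recovers range from the beat frequency between the outgoing and returned chirp. A minimal sketch of that standard relation (the sweep parameters below are illustrative, not CLVS specifications):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat, bandwidth, duration):
    """Range from the FMCW beat frequency: the round-trip delay
    t = 2R/c shifts the returned chirp by f_beat = (bandwidth/duration) * t,
    so R = c * f_beat * duration / (2 * bandwidth)."""
    return C * f_beat * duration / (2 * bandwidth)

# Illustrative linear sweep: 1 GHz of bandwidth over 1 ms.
r = fmcw_range(f_beat=66_620.0, bandwidth=1e9, duration=1e-3)
print(round(r, 2))
```

Because the range is encoded in frequency rather than return intensity, the estimate is insensitive to lighting and surface shading, as the abstract notes.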

  13. Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick

    2012-01-01

    Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.

  14. Highly Autonomous Systems Workshop

    NASA Technical Reports Server (NTRS)

    Doyle, R.; Rasmussen, R.; Man, G.; Patel, K.

    1998-01-01

    It is our aim by launching a series of workshops on the topic of highly autonomous systems to reach out to the larger community interested in technology development for remotely deployed systems, particularly those for exploration.

  15. FPGA implementation of vision algorithms for small autonomous robots

    NASA Astrophysics Data System (ADS)

    Anderson, J. D.; Lee, D. J.; Archibald, J. K.

    2005-10-01

    The use of on-board vision with small autonomous robots has been made possible by the advances in the field of Field Programmable Gate Array (FPGA) technology. By connecting a CMOS camera to an FPGA board, on-board vision has been used to reduce the computation time inherent in vision algorithms. The FPGA board allows the user to create custom hardware in a faster, safer, and more easily verifiable manner that decreases the computation time and allows the vision to be done in real-time. Real-time vision tasks for small autonomous robots include object tracking, obstacle detection and avoidance, and path planning. Competitions were created to demonstrate that our algorithms work with our small autonomous vehicles in dealing with these problems. These competitions include Mouse-Trapped-in-a-Box, where the robot has to detect the edges of a box that it is trapped in and move towards them without touching them; Obstacle Avoidance, where an obstacle is placed at any arbitrary point in front of the robot and the robot has to navigate itself around the obstacle; Canyon Following, where the robot has to move to the center of a canyon and follow the canyon walls trying to stay in the center; the Grand Challenge, where the robot had to navigate a hallway and return to its original position in a given amount of time; and Stereo Vision, where a separate robot had to catch tennis balls launched from an air powered cannon. Teams competed on each of these competitions that were designed for a graduate-level robotic vision class, and each team had to develop their own algorithm and hardware components. This paper discusses one team's approach to each of these problems.

  16. Autonomous Flight Safety System

    NASA Technical Reports Server (NTRS)

    Simpson, James

    2010-01-01

    The Autonomous Flight Safety System (AFSS) is an independent self-contained subsystem mounted onboard a launch vehicle. AFSS was developed by, and is owned by, the US Government. It autonomously makes flight termination/destruct decisions using configurable software-based rules implemented on redundant flight processors, using data from redundant GPS/IMU navigation sensors. AFSS implements rules determined by the appropriate Range Safety officials.
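The rule-plus-redundancy idea can be sketched as follows; the rule names, thresholds, and median-vote scheme are hypothetical, not the actual AFSS logic:

```python
# Hypothetical sketch: configurable rules evaluated against navigation
# state, with redundant sensor strings combined by a median vote.

def vote(readings):
    """Median across redundant GPS/IMU strings tolerates one bad sensor."""
    return sorted(readings)[len(readings) // 2]

def evaluate(rules, state):
    """Return the violated rule names; any violation means terminate."""
    return [name for name, check in rules if not check(state)]

rules = [
    ("inside_corridor", lambda s: abs(s["crosstrack_m"]) <= 500.0),
    ("below_max_alt",   lambda s: s["altitude_m"] <= 30_000.0),
]

state = {
    "crosstrack_m": vote([120.0, 125.0, 9_999.0]),  # one faulty string
    "altitude_m":   vote([12_000.0, 12_050.0, 11_980.0]),
}

violations = evaluate(rules, state)
print(violations, "TERMINATE" if violations else "CONTINUE")
```

The vote masks the single faulty cross-track reading, so no rule fires and the flight continues.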

  17. Towards an Autonomic Cluster Management System (ACMS) with Reflex Autonomicity

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Hinchey, Mike; Sterritt, Roy

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of providing a fault-tolerant environment and achieving significant computational capabilities for high-performance computing applications. However, the task of manually managing and configuring a cluster quickly becomes daunting as the cluster grows in size. Autonomic computing, with its vision to provide self-management, can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management and its evolution to include reflex reactions via pulse monitoring.

  18. Architecture of autonomous systems

    NASA Technical Reports Server (NTRS)

    Dikshit, Piyush; Guimaraes, Katia; Ramamurthy, Maya; Agrawala, Ashok; Larsen, Ronald L.

    1989-01-01

    Automation of Space Station functions and activities, particularly those involving robotic capabilities with interactive or supervisory human control, is a complex, multi-disciplinary systems design problem. A wide variety of applications using autonomous control can be found in the literature, but none of them seem to address the problem in general. All of them are designed with a specific application in mind. In this report, an abstract model is described which unifies the key concepts underlying the design of automated systems such as those studied by the aerospace contractors. The model has been kept as general as possible. The attempt is to capture all the key components of autonomous systems. With a little effort, it should be possible to map the functions of any specific autonomous system application to the model presented here.

  19. Architecture of autonomous systems

    NASA Technical Reports Server (NTRS)

    Dikshit, Piyush; Guimaraes, Katia; Ramamurthy, Maya; Agrawala, Ashok; Larsen, Ronald L.

    1986-01-01

    Automation of Space Station functions and activities, particularly those involving robotic capabilities with interactive or supervisory human control, is a complex, multi-disciplinary systems design problem. A wide variety of applications using autonomous control can be found in the literature, but none of them seem to address the problem in general. All of them are designed with a specific application in mind. In this report, an abstract model is described which unifies the key concepts underlying the design of automated systems such as those studied by the aerospace contractors. The model has been kept as general as possible. The attempt is to capture all the key components of autonomous systems. With a little effort, it should be possible to map the functions of any specific autonomous system application to the model presented here.

  20. Autonomous Vision-Based Tethered-Assisted Rover Docking

    NASA Technical Reports Server (NTRS)

    Tsai, Dorian; Nesnas, Issa A.D.; Zarzhitsky, Dimitri

    2013-01-01

    Many intriguing science discoveries on planetary surfaces, such as the seasonal flows on crater walls and skylight entrances to lava tubes, are at sites that are currently inaccessible to state-of-the-art rovers. The in situ exploration of such sites is likely to require a tethered platform both for mechanical support and for providing power and communication. Mother/daughter architectures have been investigated where a mother deploys a tethered daughter into extreme terrains. Deploying and retracting a tethered daughter requires undocking and re-docking of the daughter to the mother, with the latter being the challenging part. In this paper, we describe a vision-based tether-assisted algorithm for the autonomous re-docking of a daughter to its mother following an extreme terrain excursion. The algorithm uses fiducials mounted on the mother to improve the reliability and accuracy of estimating the pose of the mother relative to the daughter. The tether that is anchored by the mother helps the docking process and increases the system's tolerance to pose uncertainties by mechanically aligning the mating parts in the final docking phase. A preliminary version of the algorithm was developed and field-tested on the Axel rover in the JPL Mars Yard. The algorithm achieved an 80% success rate in 40 experiments in both firm and loose soils, starting from up to 6 m away at up to 40 deg radial angle and 20 deg relative heading. The algorithm does not rely on an initial estimate of the relative pose. The preliminary results are promising and help retire the risk associated with the autonomous docking process, enabling its consideration for future Martian and lunar missions.

  1. Control System Validation In The Autonomous Helicopter

    NASA Astrophysics Data System (ADS)

    Gilmore, John F.; Fugedy, John; Friedel, Thomas

    1989-03-01

    Autonomous systems require the ability to analyze their environment and develop responsive plans of action. Autonomous vehicle research has led to the development of several land, sea, and air vehicle prototypes. These systems integrate vision, diagnostics, planning, situation assessment, tactical reasoning, and intelligent control at a variety of levels to function in limited environments or computer simulation. Route planning in these systems has historically focused on pure numerical computations unable to adapt to the dynamic nature of the world. This paper describes a knowledge-based system for autonomous route planning that has been applied to airborne vehicles. Specific focus is the vehicle model knowledge source that validates routes based upon the physical capabilities of the helicopter system. An overview of the autonomous helicopter is given to establish system context, with specific results in validated route planning presented.

  2. Autonomous power expert system

    NASA Technical Reports Server (NTRS)

    Walters, Jerry L.; Petrik, Edward J.; Roth, Mary Ellen; Truong, Long Van; Quinn, Todd; Krawczonek, Walter M.

    1990-01-01

    The Autonomous Power Expert (APEX) system was designed to monitor and diagnose fault conditions that occur within the Space Station Freedom Electrical Power System (SSF/EPS) Testbed. APEX is designed to interface with SSF/EPS testbed power management controllers to provide enhanced autonomous operation and control capability. The APEX architecture consists of three components: (1) a rule-based expert system, (2) a testbed data acquisition interface, and (3) a power scheduler interface. Fault detection, fault isolation, justification of probable causes, recommended actions, and incipient fault analysis are the main functions of the expert system component. The data acquisition component requests and receives pertinent parametric values from the EPS testbed and asserts the values into a knowledge base. Power load profile information is obtained from a remote scheduler through the power scheduler interface component. The current APEX design and development work is discussed. Operation and use of APEX by way of the user interface screens is also covered.

  3. Autonomous power expert system

    NASA Technical Reports Server (NTRS)

    Ringer, Mark J.; Quinn, Todd M.

    1990-01-01

    The goal of the Autonomous Power System (APS) program is to develop and apply intelligent problem solving and control technologies to the Space Station Freedom Electrical Power Systems (SSF/EPS). The objectives of the program are to establish artificial intelligence/expert system technology paths, to create knowledge based tools with advanced human-operator interfaces, and to integrate and interface knowledge-based and conventional control schemes. This program is being developed at the NASA-Lewis. The APS Brassboard represents a subset of a 20 KHz Space Station Power Management And Distribution (PMAD) testbed. A distributed control scheme is used to manage multiple levels of computers and switchgear. The brassboard is comprised of a set of intelligent switchgear used to effectively switch power from the sources to the loads. The Autonomous Power Expert System (APEX) portion of the APS program integrates a knowledge based fault diagnostic system, a power resource scheduler, and an interface to the APS Brassboard. The system includes knowledge bases for system diagnostics, fault detection and isolation, and recommended actions. The scheduler autonomously assigns start times to the attached loads based on temporal and power constraints. The scheduler is able to work in a near real time environment for both scheduling and dynamic replanning.

  4. Autonomous power expert system

    NASA Technical Reports Server (NTRS)

    Ringer, Mark J.; Quinn, Todd M.

    1990-01-01

    The goal of the Autonomous Power System (APS) program is to develop and apply intelligent problem solving and control technologies to the Space Station Freedom Electrical Power Systems (SSF/EPS). The objectives of the program are to establish artificial intelligence/expert system technology paths, to create knowledge based tools with advanced human-operator interfaces, and to integrate and interface knowledge-based and conventional control schemes. This program is being developed at the NASA-Lewis. The APS Brassboard represents a subset of a 20 KHz Space Station Power Management And Distribution (PMAD) testbed. A distributed control scheme is used to manage multiple levels of computers and switchgear. The brassboard is comprised of a set of intelligent switchgear used to effectively switch power from the sources to the loads. The Autonomous Power Expert System (APEX) portion of the APS program integrates a knowledge based fault diagnostic system, a power resource scheduler, and an interface to the APS Brassboard. The system includes knowledge bases for system diagnostics, fault detection and isolation, and recommended actions. The scheduler autonomously assigns start times to the attached loads based on temporal and power constraints. The scheduler is able to work in a near real time environment for both scheduling and dynamic replanning.
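The scheduler's task of assigning start times under temporal and power constraints can be sketched with a simple greedy strategy (the load tuples, capacity, and greedy policy are illustrative assumptions, not the APEX implementation):

```python
# Greedy sketch: assign each load the earliest start time that respects
# its allowed start window and the total power capacity.

def schedule(loads, capacity, horizon):
    """loads: (name, power, duration, earliest_start, latest_start)."""
    profile = [0.0] * horizon          # committed power per time slot
    plan = {}
    for name, power, dur, earliest, latest in loads:
        for start in range(earliest, latest + 1):
            window = profile[start:start + dur]
            if len(window) == dur and all(p + power <= capacity for p in window):
                for t in range(start, start + dur):
                    profile[t] += power
                plan[name] = start
                break
    return plan

loads = [("heater", 3.0, 4, 0, 6),
         ("pump", 2.0, 3, 0, 6),
         ("experiment", 4.0, 2, 0, 6)]
plan = schedule(loads, capacity=6.0, horizon=10)
print(plan)
```

Here the experiment's 4 kW cannot fit alongside the heater and pump, so its start slides to the first slot where the capacity constraint holds.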

  5. Vision-based sensing for autonomous in-flight refueling

    NASA Astrophysics Data System (ADS)

    Scott, D.; Toal, M.; Dale, J.

    2007-04-01

    A significant capability of unmanned airborne vehicles (UAVs) is that they can operate tirelessly and at maximum efficiency in comparison to their human pilot counterparts. However, a major limiting factor preventing ultra-long endurance missions is that they must land to refuel. Development effort has been directed at allowing UAVs to refuel automatically in the air using current refueling systems and procedures. The 'hose & drogue' refueling system was targeted as it is considered the more difficult case. Recent flight trials resulted in the first-ever fully autonomous airborne refueling operation. Development has gone into precision GPS-based navigation sensors to maneuver the aircraft into the station-keeping position and onwards to dock with the refueling drogue. However, in the terminal phases of docking, the GPS is operating at the limit of its accuracy, and disturbances on the flexible hose and basket are not predictable using an open-loop model. Hence there is significant uncertainty in the position of the refueling drogue relative to the aircraft, which in practical operation is too great to achieve a successful and safe docking. A solution is to augment the GPS-based system with a vision-based sensor component through the terminal phase to visually acquire and track the drogue in 3D space. The higher bandwidth and resolution of camera sensors give significantly better estimates of the drogue position. Disturbances in the actual drogue position caused by subtle aircraft maneuvers and wind gusting can be visually tracked and compensated for, providing an accurate estimate. This paper discusses the issues involved in visually detecting a refueling drogue, selecting an optimum camera viewpoint, and acquiring and tracking the drogue throughout widely varying operating ranges and conditions.
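One simple way to combine the coarse GPS solution with the higher-bandwidth vision estimate in the terminal phase is a fixed-weight blend; this complementary-filter sketch is an assumption for illustration, not the system described in the paper:

```python
# Illustrative terminal-phase fusion: weight the higher-bandwidth,
# higher-resolution vision estimate of the drogue position more heavily
# than the GPS solution operating near its accuracy limit.

def blend(gps_pos, vision_pos, w_vision=0.9):
    """Per-axis weighted average of the two relative-position estimates."""
    return tuple(w_vision * v + (1 - w_vision) * g
                 for g, v in zip(gps_pos, vision_pos))

gps = (10.0, 0.5, -0.3)       # metres, coarse GPS-relative solution
vision = (9.6, 0.42, -0.18)   # metres, camera-tracked drogue centre
fused = blend(gps, vision)
print(fused)
```

A real system would weight by the time-varying covariances of each sensor rather than a fixed constant.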

  6. Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  7. Autonomous landing and ingress of micro-air-vehicles in urban environments based on monocular vision

    NASA Astrophysics Data System (ADS)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-06-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.

  8. Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision

    NASA Technical Reports Server (NTRS)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-01-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
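The planar homography at the heart of the landing/ingress method maps points on a world plane to image pixels up to scale; a minimal sketch of that mapping (the matrix values below are illustrative):

```python
# A 3x3 homography H maps plane coordinates to image pixels in
# homogeneous coordinates: [u, v, 1] ~ H [x, y, 1].

def apply_homography(H, pt):
    """Map (x, y) through H and renormalise by the third coordinate."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v

# Illustrative scale-plus-translation homography: 100 px per metre on the
# plane, principal point at (320, 240).
H = [[100.0, 0.0, 320.0],
     [0.0, 100.0, 240.0],
     [0.0, 0.0, 1.0]]
uv = apply_homography(H, (1.5, -0.5))
print(uv)
```

Decomposing an estimated H (as the paper does) recovers the camera's rotation and scaled translation relative to the plane, from which approach waypoints can be produced.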

  9. Nemesis Autonomous Test System

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin J.; Lee, Cin-Young; Horvath, Gregory A.; Clement, Bradley J.

    2012-01-01

    A generalized framework has been developed for systems validation that can be applied to both traditional and autonomous systems. The framework consists of an automated test case generation and execution system called Nemesis that rapidly and thoroughly identifies flaws or vulnerabilities within a system. By applying genetic optimization and goal-seeking algorithms on the test equipment side, a "war game" is conducted between a system and its complementary nemesis. The end result of the war games is a collection of scenarios that reveals any undesirable behaviors of the system under test. The software provides a reusable framework to evolve test scenarios with genetic algorithms, using an operational model of the system under test. It can automatically generate and execute test cases that reveal flaws in behaviorally complex systems. Genetic algorithms focus the exploration of tests on the set of test cases that most effectively reveals the flaws and vulnerabilities of the system under test. It leverages advances in state- and model-based engineering, which are essential in defining the behavior of autonomous systems. It also uses goal networks to describe test scenarios.

  10. Low Vision Enhancement System

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate software originally developed by NASA to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.

  11. A design approach for small vision-based autonomous vehicles

    NASA Astrophysics Data System (ADS)

    Edwards, Barrett B.; Fife, Wade S.; Archibald, James K.; Lee, Dah-Jye; Wilde, Doran K.

    2006-10-01

    This paper describes the design of a small autonomous vehicle based on the Helios computing platform, a custom FPGA-based board capable of supporting on-board vision. Target applications for the Helios computing platform are those that require lightweight equipment and low power consumption. To demonstrate the capabilities of FPGAs in real-time control of autonomous vehicles, a 16-inch-long R/C monster truck was outfitted with a Helios board. The platform provided by such a small vehicle is ideal for testing and development. The proof of concept application for this autonomous vehicle was a timed race through an environment with obstacles. Given the size restrictions of the vehicle and its operating environment, the only feasible on-board sensor is a small CMOS camera. The single video feed is therefore the only source of information from the surrounding environment. The image is then segmented and processed by custom logic in the FPGA that also controls direction and speed of the vehicle based on visual input.
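The segment-then-steer loop can be sketched in a few lines; the per-column free/blocked mask and widest-gap policy below are hypothetical stand-ins for the FPGA logic:

```python
# Reactive steering sketch: mark each image column blocked or free from
# the segmented frame, then steer toward the centre of the widest gap.

def steer(free_columns):
    """Return a steering command in [-1, 1] from a per-column free mask."""
    best_start, best_len, start = 0, 0, None
    for i, free in enumerate(free_columns + [False]):
        if free and start is None:
            start = i                        # gap opens
        elif not free and start is not None:
            if i - start > best_len:         # gap closes; keep the widest
                best_start, best_len = start, i - start
            start = None
    centre = best_start + best_len / 2
    mid = len(free_columns) / 2
    return (centre - mid) / mid              # negative = steer left

# 16-column frame: obstacle on the left, open space to the right.
mask = [False] * 6 + [True] * 10
cmd = steer(mask)
print(round(cmd, 2))
```

This kind of fixed-latency, per-column computation is exactly what maps well onto FPGA pipelines.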

  12. Autonomous Flight Safety System

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Santuro, Steve; Simpson, James; Zoerner, Roger; Bull, Barton; Lanzi, Jim

    2004-01-01

    Autonomous Flight Safety System (AFSS) is an independent flight safety system designed for small- to medium-sized expendable launch vehicles launching from, or needing range safety protection while overflying, relatively remote locations. AFSS replaces the need for a man-in-the-loop to make decisions for flight termination. AFSS could also serve as the prototype for an autonomous manned flight crew escape advisory system. AFSS utilizes onboard sensors and processors to emulate the human decision-making process using rule-based software logic and can dramatically reduce safety response time during critical launch phases. The Range Safety flight path nominal trajectory, its deviation allowances, limit zones and other flight safety rules are stored in the onboard computers. Position, velocity and attitude data obtained from onboard global positioning system (GPS) and inertial navigation system (INS) sensors are compared with these rules to determine the appropriate action to ensure that people and property are not jeopardized. The final system will be fully redundant and independent with multiple processors, sensors, and dead man switches to prevent inadvertent flight termination. AFSS is currently in Phase III which includes updated algorithms, integrated GPS/INS sensors, large scale simulation testing and initial aircraft flight testing.
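The stored-rules comparison can be sketched as a nominal trajectory with a deviation allowance; the sample trajectory, allowance, and nearest-sample lookup are illustrative, not actual Range Safety rules:

```python
import math

# Hypothetical stored nominal trajectory: time (s) -> (x, y) position in km,
# with a single deviation allowance for simplicity.
NOMINAL = {0: (0.0, 0.0), 10: (0.0, 8.0), 20: (0.0, 20.0)}
ALLOWANCE_KM = 1.5

def deviation_ok(t, pos):
    """Compare the measured GPS/INS position against the nominal point
    stored for the nearest time sample."""
    nx, ny = NOMINAL[min(NOMINAL, key=lambda k: abs(k - t))]
    return math.dist(pos, (nx, ny)) <= ALLOWANCE_KM

ok_nominal = deviation_ok(10, (0.4, 7.6))    # within the allowance
ok_stray = deviation_ok(20, (3.0, 20.0))     # 3 km cross-track deviation
print(ok_nominal, ok_stray)
```

A real rule set would interpolate the nominal path, carry time-varying allowances, and feed violations into redundant termination logic.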

  13. Merged Vision and GPS Control of a Semi-Autonomous, Small Helicopter

    NASA Technical Reports Server (NTRS)

    Rock, Stephen M.

    1999-01-01

    This final report documents the activities performed during the research period from April 1, 1996 to September 30, 1997. It contains three papers: Carrier Phase GPS and Computer Vision for Control of an Autonomous Helicopter; A Contestant in the 1997 International Aerospace Robotics Laboratory Stanford University; and Combined CDGPS and Vision-Based Control of a Small Autonomous Helicopter.

  14. Design of a Vision-Based Sensor for Autonomous Pig House Cleaning

    NASA Astrophysics Data System (ADS)

    Braithwaite, Ian; Blanke, Mogens; Zhang, Guo-Qiang; Carstensen, Jens Michael

    2005-12-01

    Current pig house cleaning procedures are hazardous to the health of farm workers, and yet necessary if the spread of disease between batches of animals is to be satisfactorily controlled. Autonomous cleaning using robot technology offers salient benefits. This paper addresses the feasibility of designing a vision-based system to locate dirty areas and subsequently direct a cleaning robot to remove dirt. Novel results include the characterisation of the spectral properties of real surfaces and dirt in a pig house and the design of illumination to obtain discrimination of clean from dirty areas with a low probability of misclassification. A Bayesian discriminator is shown to be efficient in this context and implementation of a prototype tool demonstrates the feasibility of designing a low-cost vision-based sensor for autonomous cleaning.
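A Bayesian discriminator for clean versus dirty surfaces can be sketched with scalar Gaussian class models; the means, spreads, and prior below are invented for illustration, not the paper's measured spectra:

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(x, clean=(0.7, 0.08), dirty=(0.35, 0.10), p_dirty=0.5):
    """Bayes decision on a scalar spectral feature (e.g. reflectance in
    one illumination band): pick the class with the larger posterior."""
    l_dirty = gaussian(x, *dirty) * p_dirty
    l_clean = gaussian(x, *clean) * (1 - p_dirty)
    return "dirty" if l_dirty > l_clean else "clean"

a = classify(0.30)
b = classify(0.68)
print(a, b)
```

The paper's point about illumination design corresponds to choosing a band in which the two class distributions overlap as little as possible, keeping the misclassification probability low.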

  15. Visual navigation system for autonomous indoor blimps

    NASA Astrophysics Data System (ADS)

    Campos, Mario F.; de Souza Coelho, Lucio

    1999-07-01

    Autonomous dirigibles - aerial robots in which a blimp is controlled by computer based on information gathered by sensors - are a new and promising research field in robotics, offering several original research opportunities. One of them is the study of visual navigation of UAVs. In the work described in this paper, a computer vision and control system was developed to automatically perform very simple navigation tasks for a small indoor blimp. The vision system is able to track artificial visual beacons - objects with known geometrical properties - from which a geometrical method can extract information about the orientation of the blimp. The tracking of natural landmarks is also a possibility for the vision technique developed. The control system uses these data to keep the dirigible on a programmed orientation. Experimental results demonstrating the correct and efficient functioning of the system are presented, and their implications and future possibilities are discussed.

  16. [Quality system Vision 2000].

    PubMed

    Pasini, Evasio; Pitocchi, Oreste; de Luca, Italo; Ferrari, Roberto

    2002-12-01

A recent document of the Italian Ministry of Health points out that all structures which provide services to the National Health System should implement a Quality System according to the ISO 9000 standards. Vision 2000 is the new version of the ISO standard. Vision 2000 is less bureaucratic than the old version. The specific requests of Vision 2000 are: a) to identify, monitor and analyze the processes of the structure, b) to measure the results of the processes so as to ensure that they are effective, c) to implement actions necessary to achieve the planned results and the continual improvement of these processes, d) to identify customer requests and to measure customer satisfaction. Specific attention should also be dedicated to the competence and training of the personnel involved in the processes. The principles of Vision 2000 agree with the principles of total quality management. The present article illustrates the Vision 2000 standard and provides practical examples of the implementation of this standard in cardiological departments. PMID:12611210

  17. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    NASA Astrophysics Data System (ADS)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

This paper proposes a new incremental inverse-kinematics-based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using an integrated photogrammetry and EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics, and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple-solution ambiguity of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
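The incremental idea can be illustrated on a planar 2-link arm, a simplified stand-in for the paper's manipulator; the link lengths, target, and per-step joint speed limit below are assumptions for the sketch:

```python
import math

# Illustrative sketch (not the paper's implementation) of one incremental
# inverse-kinematics step for a planar 2-link arm: move the end-effector a
# small amount toward a desired position while clamping joint speeds.
L1, L2 = 1.0, 1.0          # link lengths (assumed)
Q_DOT_MAX = 0.2            # joint speed limit per step, in radians (assumed)

def fk(q1, q2):
    # forward kinematics of the planar 2-link arm
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def ik_step(q, target):
    # Jacobian of the planar arm
    q1, q2 = q
    j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
    j12 = -L2 * math.sin(q1 + q2)
    j21 = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    j22 = L2 * math.cos(q1 + q2)
    x, y = fk(q1, q2)
    ex, ey = target[0] - x, target[1] - y       # Cartesian error
    det = j11 * j22 - j12 * j21                 # assume away from singularity
    # invert the 2x2 Jacobian to get the joint increment
    dq1 = ( j22 * ex - j12 * ey) / det
    dq2 = (-j21 * ex + j11 * ey) / det
    # clamp to joint speed limits -> incremental, unique update
    dq1 = max(-Q_DOT_MAX, min(Q_DOT_MAX, dq1))
    dq2 = max(-Q_DOT_MAX, min(Q_DOT_MAX, dq2))
    return q1 + dq1, q2 + dq2

q = (0.3, 0.8)
target = (1.2, 0.9)
for _ in range(50):
    q = ik_step(q, target)
x, y = fk(*q)
print(round(x, 3), round(y, 3))  # converges near the target (1.2, 0.9)
```

Because each update starts from the current configuration and is speed-limited, the arm tracks one continuous branch of the inverse kinematics rather than jumping between the multiple closed-form solutions.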

  18. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

Food quality is of paramount consideration for all consumers, and its importance is perhaps second only to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development in both academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - the external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved and many practical systems are already in place in the food industry.

  19. Bird Vision System

    NASA Technical Reports Server (NTRS)

    2008-01-01

The Bird Vision system is a multicamera photogrammetry software application that runs on a Microsoft Windows XP platform and was developed at Kennedy Space Center by ASRC Aerospace. This software system collects data about the locations of birds within a volume centered on the Space Shuttle and transmits it in real time to the laptop computer of a test director in the Launch Control Center (LCC) Firing Room.

  20. Cybersecurity for aerospace autonomous systems

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

High-profile breaches have occurred across numerous information systems. One area where attacks are particularly problematic is autonomous control systems. This paper considers the aerospace information system, focusing on elements that interact with autonomous control systems (e.g., onboard UAVs). It discusses the trust placed in autonomous systems and supporting systems (e.g., navigational aids) and how this trust can be validated. Approaches to remotely detecting UAV compromise, without relying on the onboard software (on a potentially compromised system) as part of the process, are discussed. How different levels of autonomy (task-based, goal-based, mission-based) impact this remote characterization is also considered.

  1. Algorithmic solution for autonomous vision-based off-road navigation

    NASA Astrophysics Data System (ADS)

    Kolesnik, Marina; Paar, Gerhard; Bauer, Arnold; Ulm, Michael

    1998-07-01

A vision-based navigation system is a basic tool for providing autonomous operation of unmanned vehicles. For off-road navigation this means that a vehicle equipped with a stereo vision system, and perhaps a laser ranging device, must be able to maintain a high level of autonomy under various illumination conditions and with little a priori information about the underlying scene. The task becomes particularly important for unmanned planetary exploration with the help of autonomous rovers. For example, in the LEDA Moon exploration project currently under study by the European Space Agency (ESA), the vehicle (rover) should perform the following operations during the autonomous mode: on-board absolute localization, digital elevation model (DEM) generation, obstacle detection and relative localization, and global path planning and execution. The focus of this article is a computational solution for fully autonomous path planning and path execution. An operational DEM generation method based on stereoscopy is introduced. Self-localization on the DEM and robust natural feature tracking are used as basic navigation steps, supported by inertial sensor systems. The following operations are performed on the basis of stereo image sequences: 3D scene reconstruction, risk map generation, local path planning, camera position update during motion on the basis of landmark tracking, and obstacle avoidance. Experimental verification is done with the help of a laboratory terrain mockup and a high-precision camera mounting device. It is shown that standalone tracking using automatically identified landmarks is robust enough to provide navigation data for further stereoscopic reconstruction of the surrounding terrain. Iterative tracking and reconstruction lead to a complete description of the vehicle path and its surroundings with an accuracy high enough to meet the specifications for autonomous outdoor navigation.
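The risk-map-generation step can be illustrated with a toy elevation grid; the DEM values and the rover's climbing limit below are assumptions, not data from the article:

```python
# Toy sketch of risk-map generation from a DEM (elevation values assumed):
# mark a cell as risky when the elevation step to any 4-neighbour exceeds
# the rover's climbing capability.
DEM = [
    [0.0, 0.1, 0.1, 0.2],
    [0.1, 0.1, 0.9, 0.2],   # a 0.9 m rock in otherwise gentle terrain
    [0.1, 0.2, 0.2, 0.3],
]
MAX_STEP = 0.3  # metres the rover can climb between adjacent cells (assumed)

def risk_map(dem, max_step):
    rows, cols = len(dem), len(dem[0])
    risky = set()
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and \
                        abs(dem[r][c] - dem[rr][cc]) > max_step:
                    risky.add((r, c))
    return risky

print(sorted(risk_map(DEM, MAX_STEP)))  # the rock and the cells bordering it
```

A local path planner would then search for a route through the cells not flagged as risky.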

  2. Autonomous power system brassboard

    NASA Astrophysics Data System (ADS)

    Merolla, Anthony

    1992-10-01

The Autonomous Power System (APS) brassboard is a 20 kHz power distribution system which has been developed at NASA Lewis Research Center, Cleveland, Ohio. The brassboard exists to provide a realistic hardware platform capable of testing artificially intelligent (AI) software. The brassboard's power circuit topology is based upon a Power Distribution Control Unit (PDCU), which is a subset of an advanced development 20 kHz electrical power system (EPS) testbed, originally designed for Space Station Freedom (SSF). The APS program is designed to demonstrate the application of intelligent software as a fault detection, isolation, and recovery methodology for space power systems. This report discusses both the hardware and software elements used to construct the present configuration of the brassboard. The brassboard power components are described. These include the solid-state switches (herein referred to as switchgear), transformers, sources, and loads. Closely linked to this power portion of the brassboard is the first level of embedded control. Hardware used to implement this control and its associated software is discussed. An Ada software program, developed by Lewis Research Center's Space Station Freedom Directorate for their 20 kHz testbed, is used to control the brassboard's switchgear, as well as monitor key brassboard parameters through sensors located within these switches. The Ada code is downloaded from a PC/AT, and is resident within the 8086 microprocessor-based embedded controllers. The PC/AT is also used for smart terminal emulation, capable of controlling the switchgear as well as displaying data from them. Intelligent control is provided through use of a TI Explorer and the Autonomous Power Expert (APEX) LISP software. Real-time load scheduling is implemented through use of a 'C' program-based scheduling engine. The methods of communication between these computers and the brassboard are explored. In order to evaluate the features of both the

  3. Autonomous power system brassboard

    NASA Technical Reports Server (NTRS)

    Merolla, Anthony

    1992-01-01

The Autonomous Power System (APS) brassboard is a 20 kHz power distribution system which has been developed at NASA Lewis Research Center, Cleveland, Ohio. The brassboard exists to provide a realistic hardware platform capable of testing artificially intelligent (AI) software. The brassboard's power circuit topology is based upon a Power Distribution Control Unit (PDCU), which is a subset of an advanced development 20 kHz electrical power system (EPS) testbed, originally designed for Space Station Freedom (SSF). The APS program is designed to demonstrate the application of intelligent software as a fault detection, isolation, and recovery methodology for space power systems. This report discusses both the hardware and software elements used to construct the present configuration of the brassboard. The brassboard power components are described. These include the solid-state switches (herein referred to as switchgear), transformers, sources, and loads. Closely linked to this power portion of the brassboard is the first level of embedded control. Hardware used to implement this control and its associated software is discussed. An Ada software program, developed by Lewis Research Center's Space Station Freedom Directorate for their 20 kHz testbed, is used to control the brassboard's switchgear, as well as monitor key brassboard parameters through sensors located within these switches. The Ada code is downloaded from a PC/AT, and is resident within the 8086 microprocessor-based embedded controllers. The PC/AT is also used for smart terminal emulation, capable of controlling the switchgear as well as displaying data from them. Intelligent control is provided through use of a TI Explorer and the Autonomous Power Expert (APEX) LISP software. Real-time load scheduling is implemented through use of a 'C' program-based scheduling engine. The methods of communication between these computers and the brassboard are explored. In order to evaluate the features of both the

  4. Space environment robot vision system

    NASA Technical Reports Server (NTRS)

    Wood, H. John; Eichhorn, William L.

    1990-01-01

A prototype twin-camera stereo vision system for autonomous robots has been developed at Goddard Space Flight Center. Standard charge-coupled device (CCD) imagers are interfaced with commercial frame buffers and direct memory access to a computer. The overlapping portions of the images are analyzed using photogrammetric techniques to obtain information about the position and orientation of objects in the scene. The camera head consists of two 510 x 492 x 8-bit CCD cameras mounted on individually adjustable mounts. The 16 mm efl lenses are designed for minimum geometric distortion. The cameras can be rotated in the pitch, roll, and yaw (pan angle) directions with respect to their optical axes. Calibration routines have been developed which automatically determine the lens focal lengths and the pan angle between the two cameras. The calibration utilizes observations of a calibration structure with known geometry. Test results show the precision attainable is plus or minus 0.8 mm in range at 2 m distance using a camera separation of 171 mm. To demonstrate a task needed on Space Station Freedom, a target structure with a movable I-beam was built. The camera head can autonomously direct actuators to dock the I-beam to another one so that the two can be bolted together.
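The reported precision can be sanity-checked with the standard stereo range-error relation; the disparity-error figure below is back-solved from the quoted numbers and is purely an assumption:

```python
# Back-of-the-envelope check (assumptions, not the paper's calibration) of
# stereo range precision: dZ = Z^2 * d_disp / (f * B), where d_disp is the
# disparity measurement error expressed in metres on the sensor.
def range_error(z_m, baseline_m, focal_m, disparity_err_m):
    return z_m ** 2 * disparity_err_m / (focal_m * baseline_m)

# Assumed numbers: 2 m range, 171 mm baseline, 16 mm lens, and a ~0.55 um
# effective disparity error (subpixel matching, back-solved) reproduce the
# reported plus-or-minus 0.8 mm figure.
dz = range_error(2.0, 0.171, 0.016, 0.55e-6)
print(round(dz * 1000, 2), "mm")  # -> 0.8 mm
```

The quadratic dependence on range Z explains why precision degrades quickly beyond the 2 m working distance unless the baseline or focal length is increased.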

  5. Asteroid Exploration with Autonomic Systems

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Rash, James; Rouff, Christopher; Hinchey, Mike

    2004-01-01

NASA is studying advanced technologies for a future robotic exploration mission to the asteroid belt. The prospective ANTS (Autonomous Nano Technology Swarm) mission comprises autonomous agents including worker agents (small spacecraft) designed to cooperate in asteroid exploration under the overall authority of at least one ruler agent (a larger spacecraft) whose goal is to cause science data to be returned to Earth. The ANTS team (ruler plus workers and messenger agents), but not necessarily any individual on the team, will exhibit behaviors that qualify it as an autonomic system, where an autonomic system is defined as a system that self-reconfigures, self-optimizes, self-heals, and self-protects. Autonomic system concepts lead naturally to realistic, scalable architectures rich in capabilities and behaviors. In-depth consideration of a major mission like ANTS in terms of autonomic systems brings new insights into alternative definitions of autonomic behavior. This paper gives an overview of the ANTS mission and discusses the autonomic properties of the mission.

  6. Near real-time stereo vision system

    NASA Astrophysics Data System (ADS)

    Matthies, Larry H.; Anderson, Charles H.

    1991-12-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
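The pipeline described (bandpass pyramid levels, then least-squares correlation) can be sketched in one dimension; the signals, the simple binomial smoother, and the single pyramid level below are illustrative assumptions:

```python
# Toy 1-D sketch of the pipeline described above (signals assumed): build a
# bandpass (Laplacian-like) level by subtracting a Gaussian-smoothed copy,
# then find disparity by least-squares (SSD) correlation.
def smooth(sig):
    # simple [1 2 1]/4 binomial smoothing; endpoints left unchanged
    out = list(sig)
    for i in range(1, len(sig) - 1):
        out[i] = (sig[i - 1] + 2 * sig[i] + sig[i + 1]) / 4.0
    return out

def laplacian(sig):
    # bandpass level = signal minus its smoothed version
    s = smooth(sig)
    return [a - b for a, b in zip(sig, s)]

def disparity(left, right, max_d):
    # least-squares correlation: disparity minimising sum of squared diffs
    best_d, best_cost = 0, float("inf")
    for d in range(max_d + 1):
        cost = sum((l - r) ** 2
                   for l, r in zip(left[d:], right[:len(right) - d]))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

left = [0, 0, 0, 1, 5, 9, 5, 1, 0, 0, 0, 0]
right = [0, 1, 5, 9, 5, 1, 0, 0, 0, 0, 0, 0]   # same feature shifted by 2
print(disparity(laplacian(left), laplacian(right), 5))  # -> 2
```

In the real system this matching runs over 2-D windows at a coarse pyramid level, and Bayes' theorem turns the residual correlation costs into confidence ranges on each disparity estimate.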

  7. Near real-time stereo vision system

    NASA Astrophysics Data System (ADS)

    Anderson, Charles H.; Matthies, Larry H.

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.

  8. Industrial robot's vision systems

    NASA Astrophysics Data System (ADS)

    Iureva, Radda A.; Raskin, Evgeni O.; Komarov, Igor I.; Maltseva, Nadezhda K.; Fedosovsky, Michael E.

    2016-03-01

Due to the improved economic situation in the high-technology sectors, work on the creation of industrial robots and special mobile robotic systems has resumed. Despite this, robotic control systems have mostly remained unchanged, with all the advantages and disadvantages of those systems. This is due to the lack of means that could greatly facilitate the work of the operator and, in some cases, completely replace it. The paper is concerned with the complex machine vision of a robotic system for monitoring underground pipelines, which collects and analyzes up to 90% of the necessary information. Vision systems are used to identify obstacles to movement along a trajectory and to determine their origin, dimensions, and character. The object is illuminated with structured light, and a TV camera records the projected structure. Distortions of the structure uniquely determine the shape of the object in the camera's view. The reference illumination is synchronized with the camera. The main parameters of the system are the base distance between the illumination generator and the camera and the parallax angle (the angle between the optical axes of the projection unit and the camera).
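The role of the base distance and parallax angle can be made concrete with the standard triangulation relation; the simplified geometry below (both ray angles measured from the baseline) is an assumption, not the paper's calibration model:

```python
import math

# Geometry sketch (assumed, simplified) of structured-light ranging: with a
# base distance B between projector and camera, projection angle alpha and
# camera viewing angle beta (both measured from the baseline), the range to
# the illuminated point follows from triangulating the two rays.
def structured_light_range(base_m, alpha_rad, beta_rad):
    # perpendicular distance from the baseline to the ray intersection
    return (base_m * math.sin(alpha_rad) * math.sin(beta_rad)
            / math.sin(alpha_rad + beta_rad))

# Symmetric 45-degree rays over a 1 m base meet 0.5 m from the baseline.
z = structured_light_range(1.0, math.radians(45), math.radians(45))
print(round(z, 3))  # -> 0.5
```

Shifts of the projected pattern on the sensor change the recovered beta, which is how distortions of the structure encode the object's shape.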

  9. vSLAM: vision-based SLAM for autonomous vehicle navigation

    NASA Astrophysics Data System (ADS)

    Goncalves, Luis; Karlsson, Niklas; Ostrowski, Jim; Di Bernardo, Enrico; Pirjanian, Paolo

    2004-09-01

Among the numerous challenges of building autonomous/unmanned vehicles is that of reliable and autonomous localization in an unknown environment. In this paper we present a system that can efficiently and autonomously solve the robotics 'SLAM' problem, in which a robot placed in an unknown environment must simultaneously localize itself and make a map of the environment. The system is vision-based, and makes use of Evolution Robotics' powerful object recognition technology. As the robot explores the environment, it continuously performs four tasks, using information from acquired images and the drive system odometry. The robot: (1) recognizes previously created 3-D visual landmarks; (2) builds new 3-D visual landmarks; (3) updates the current estimate of its location, using the map; (4) updates the landmark map. In indoor environments, the system can build a map of a 5 m by 5 m area in approximately 20 minutes, and can localize itself with an accuracy of approximately 15 cm in position and 3 degrees in orientation relative to the global reference frame of the landmark map. The same system can be adapted for outdoor, vehicular use.
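The predict-with-odometry, correct-with-landmark loop at the heart of such systems can be sketched in one dimension; the noise variances and landmark fix below are assumed values, not Evolution Robotics' implementation:

```python
# Minimal 1-D sketch of the SLAM-style update loop described above (assumed
# noise values): odometry predicts and accumulates drift, a recognised
# visual landmark corrects the estimate.
def predict(x, var, u, odo_var):
    # dead-reckoning step: move by odometry reading u, uncertainty grows
    return x + u, var + odo_var

def correct(x, var, z_landmark, meas_var):
    # fuse a landmark-based position fix using the Kalman gain k
    k = var / (var + meas_var)
    return x + k * (z_landmark - x), (1 - k) * var

x, var = 0.0, 0.0
for _ in range(10):                     # drift accumulates while exploring
    x, var = predict(x, var, 1.0, 0.04)
x, var = correct(x, var, 9.6, 0.01)     # landmark says we are at 9.6 m
print(round(x, 2), round(var, 3))       # estimate snaps back, variance drops
```

In the full system the state is the robot pose plus the 3-D landmark map, but the same predict/correct structure applies.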

  10. Autonomous power system: Integrated scheduling

    NASA Technical Reports Server (NTRS)

    Ringer, Mark J.

    1992-01-01

    The Autonomous Power System (APS) project at NASA Lewis Research Center is designed to demonstrate the abilities of integrated intelligent diagnosis, control and scheduling techniques to space power distribution hardware. The project consists of three elements: the Autonomous Power Expert System (APEX) for fault diagnosis, isolation, and recovery (FDIR), the Autonomous Intelligent Power Scheduler (AIPS) to determine system configuration, and power hardware (Brassboard) to simulate a space-based power system. Faults can be introduced into the Brassboard and in turn, be diagnosed and corrected by APEX and AIPS. The Autonomous Intelligent Power Scheduler controls the execution of loads attached to the Brassboard. Each load must be executed in a manner that efficiently utilizes available power and satisfies all load, resource, and temporal constraints. In the case of a fault situation on the Brassboard, AIPS dynamically modifies the existing schedule in order to resume efficient operation conditions. A database is kept of the power demand, temporal modifiers, priority of each load, and the power level of each source. AIPS uses a set of heuristic rules to assign start times and resources to each load based on load and resource constraints. A simple improvement engine based upon these heuristics is also available to improve the schedule efficiency. This paper describes the operation of the Autonomous Intelligent Power Scheduler as a single entity, as well as its integration with APEX and the Brassboard. Future plans are discussed for the growth of the Autonomous Intelligent Power Scheduler.
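The heuristic assignment of start times under a power limit can be sketched with a greedy rule; the loads, priorities, and limit below are illustrative, not the AIPS rule set:

```python
# Toy sketch of the heuristic scheduling idea (illustrative loads, not the
# AIPS rules): assign each load, highest priority first, the earliest start
# slot where the source power limit is never exceeded.
POWER_LIMIT = 10.0
HORIZON = 8  # number of time slots

# (name, power demand, duration in slots, priority: higher runs first)
loads = [("heater", 6.0, 3, 2), ("pump", 5.0, 2, 3), ("camera", 4.0, 4, 1)]

def schedule(loads, limit, horizon):
    usage = [0.0] * horizon     # power already committed in each slot
    plan = {}
    for name, power, dur, _ in sorted(loads, key=lambda l: -l[3]):
        for start in range(horizon - dur + 1):
            if all(usage[t] + power <= limit for t in range(start, start + dur)):
                for t in range(start, start + dur):
                    usage[t] += power
                plan[name] = start
                break
    return plan, usage

plan, usage = schedule(loads, POWER_LIMIT, HORIZON)
print(plan)
print(max(usage) <= POWER_LIMIT)  # the source limit is respected everywhere
```

Rescheduling after a fault amounts to rerunning the same rule with the reduced source power level, which is the dynamic-modification behaviour the abstract describes.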

  11. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  12. Progress towards autonomous, intelligent systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry; Heer, Ewald

    1987-01-01

An aggressive program has been initiated to develop, integrate, and implement autonomous systems technologies, starting with today's expert systems and evolving to autonomous, intelligent systems by the end of the 1990s. This program includes core technology developments and demonstration projects for technology evaluation and validation. This paper discusses key operational frameworks in the context of systems autonomy applications and then identifies major technological challenges, primarily in artificial intelligence areas. Program content and progress made towards critical technologies and demonstrations that have been initiated to achieve the required future capabilities in the year-2000 era are discussed.

  13. Gas House Autonomous System Monitoring

    NASA Technical Reports Server (NTRS)

    Miller, Luke; Edsall, Ashley

    2015-01-01

    Gas House Autonomous System Monitoring (GHASM) will employ Integrated System Health Monitoring (ISHM) of cryogenic fluids in the High Pressure Gas Facility at Stennis Space Center. The preliminary focus of development incorporates the passive monitoring and eventual commanding of the Nitrogen System. ISHM offers generic system awareness, adept at using concepts rather than specific error cases. As an enabler for autonomy, ISHM provides capabilities inclusive of anomaly detection, diagnosis, and abnormality prediction. Advancing ISHM and Autonomous Operation functional capabilities enhances quality of data, optimizes safety, improves cost effectiveness, and has direct benefits to a wide spectrum of aerospace applications.
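The passive-monitoring starting point can be sketched as a simple limit check; the tag names and bands below are hypothetical, not the High Pressure Gas Facility's actual telemetry:

```python
# Illustrative sketch (hypothetical tags and limits) of the passive
# monitoring idea: flag an anomaly when a sensed value leaves its expected
# band, the first step toward ISHM-style detection and diagnosis.
LIMITS = {"n2_pressure_psi": (2000, 4000), "n2_temp_K": (80, 120)}

def check(readings):
    anomalies = []
    for tag, value in readings.items():
        lo, hi = LIMITS[tag]
        if not lo <= value <= hi:
            anomalies.append((tag, value))
    return anomalies

print(check({"n2_pressure_psi": 3500, "n2_temp_K": 95}))   # nominal -> []
print(check({"n2_pressure_psi": 4500, "n2_temp_K": 95}))   # flags pressure
```

ISHM goes well beyond fixed limits, reasoning over concepts rather than enumerated error cases, but a concept-level monitor still bottoms out in checks of this shape.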

  14. Autonomous vision networking: miniature wireless sensor networks with imaging technology

    NASA Astrophysics Data System (ADS)

    Messinger, Gioia; Goldberg, Giora

    2006-09-01

The recent emergence of integrated PicoRadio technology, the rise of low-power, low-cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), created a unique opportunity to achieve the goal of deploying large-scale, low-cost, intelligent, ultra-low-power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low-power vision networking has been proven and its applications are countless: from security and chemical analysis to industrial monitoring, asset tracking and visual recognition, vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous - specifically PicoRadios, CMOS imagers, imaging DSP, networking and overall wireless sensor network (WSN) system concepts. The paradigm shift, from large, centralized and expensive sensor platforms to small, low-cost, distributed sensor networks, is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic sensors, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before they are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes. These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor.

  15. VISION Digital Video Library System.

    ERIC Educational Resources Information Center

    Rusk, Michael D.

    2001-01-01

    Describes the VISION Digital Library System, a project implemented by the University of Kansas that uses locally developed applications to segment and automatically index video clips. Explains that the focus of VISION is to make possible the gathering and indexing of large amounts of video material, storing material on a database system, and…

  16. Autonomous navigation system for the Marsokhod rover project

    NASA Technical Reports Server (NTRS)

    Proy, C.; Lamboley, M.; Rastel, L.

    1994-01-01

This paper presents a general overview of the Marsokhod rover mission. The autonomous navigation of a Mars exploration rover is controlled by a vision system which has been developed on the basis of two CCD cameras, stereovision, and path planning algorithms. Its performance has been tested on a Mars-like experimentation site.

  17. Contingency Software in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn; Patterson-Hine, Ann

    2006-01-01

This viewgraph presentation reviews the development of contingency software for autonomous systems. Autonomous vehicles currently have a limited capacity to diagnose and mitigate failures. There is a need to be able to handle a broader range of contingencies. The goals of the project are: 1. Speed up diagnosis and mitigation of anomalous situations. 2. Automatically handle contingencies, not just failures. 3. Enable projects to select a degree of autonomy consistent with their needs and to incrementally introduce more autonomy. 4. Augment on-board fault protection with verified contingency scripts.

  18. Intelligent, autonomous systems in space

    NASA Technical Reports Server (NTRS)

    Lum, H.; Heer, E.

    1988-01-01

The Space Station is expected to be equipped with intelligent, autonomous capabilities; to achieve and incorporate these capabilities, the required technologies need to be identified, developed and validated within realistic application scenarios. The critical technologies for the development of intelligent, autonomous systems are discussed in the context of a generalized functional architecture. The present state of this technology implies that it be introduced and applied in an evolutionary process which must start during the Space Station design phase. An approach is proposed to accomplish design information acquisition and management for knowledge-base development.

  19. A Robust Compositional Architecture for Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Deney, Ewen; Farrell, Kimberley; Giannakopoulos, Dimitra; Jonsson, Ari; Frank, Jeremy; Bobby, Mark; Carpenter, Todd; Estlin, Tara

    2006-01-01

Space exploration applications can benefit greatly from autonomous systems. Great distances, limited communications and high costs make direct operations impossible while mandating operations reliability and efficiency beyond what traditional commanding can provide. Autonomous systems can improve reliability and enhance spacecraft capability significantly. However, there is reluctance to utilize autonomous systems. In part this is due to general hesitation about new technologies, but a more tangible concern is that of the reliability and predictability of autonomous software. In this paper, we describe ongoing work aimed at increasing the robustness and predictability of autonomous software, with the ultimate goal of building trust in such systems. The work combines state-of-the-art technologies and capabilities in autonomous systems with advanced validation and synthesis techniques. The focus of this paper is on the autonomous system architecture that has been defined, and on how it enables the application of validation techniques for resulting autonomous systems.

  20. Autonomous spacecraft landing through human pre-attentive vision.

    PubMed

    Schiavone, Giuseppina; Izzo, Dario; Simões, Luís F; de Croon, Guido C H E

    2012-06-01

In this work, we exploit a computational model of human pre-attentive vision to guide the descent of a spacecraft on extraterrestrial bodies. Providing the spacecraft with high degrees of autonomy is a challenge for future space missions. Up to the present, major effort in this research field has been concentrated on hazard avoidance algorithms and landmark detection, often by reference to a priori maps ranked by scientists according to specific scientific criteria. Here, we present a bio-inspired approach based on the human ability to quickly select intrinsically salient targets in the visual scene; this ability is fundamental for fast decision-making processes in unpredictable and unknown circumstances. The proposed system integrates a simple model of the spacecraft and optimality principles which guarantee minimum fuel consumption during the landing procedure; detected salient sites are used for retargeting the spacecraft trajectory, under safety and reachability conditions. We compare the decisions taken by the proposed algorithm with those of a number of human subjects tested under the same conditions. Our results show how the developed algorithm is indistinguishable from the human subjects with respect to areas, occurrence and timing of the retargeting. PMID:22617300
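The intuition behind pre-attentive saliency can be illustrated with a centre-surround contrast measure; the terrain grid below is illustrative and the measure is a stand-in for the paper's full model:

```python
# Toy sketch of the centre-surround idea behind pre-attentive saliency
# (illustrative grid, not the paper's model): a cell is salient when its
# intensity differs strongly from the mean of its neighbourhood.
GRID = [
    [5, 5, 5, 5, 5],
    [5, 5, 5, 5, 5],
    [5, 5, 9, 5, 5],   # one bright region stands out from the terrain
    [5, 5, 5, 5, 5],
    [5, 5, 5, 5, 5],
]

def saliency(grid, r, c):
    # absolute difference between a cell and the mean of its 8-neighbourhood
    neigh = [grid[i][j]
             for i in range(max(0, r - 1), min(len(grid), r + 2))
             for j in range(max(0, c - 1), min(len(grid[0]), c + 2))
             if (i, j) != (r, c)]
    return abs(grid[r][c] - sum(neigh) / len(neigh))

best = max(((r, c) for r in range(5) for c in range(5)),
           key=lambda rc: saliency(GRID, *rc))
print(best)  # -> (2, 2), the bright cell
```

A retargeting step would then test whether the most salient site is reachable within the fuel-optimal descent envelope before diverting the trajectory toward it.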

  1. Monocular-Vision-Based Autonomous Hovering for a Miniature Flying Ball

    PubMed Central

    Lin, Junqin; Han, Baoling; Luo, Qingsheng

    2015-01-01

    This paper presents a method for detecting and controlling the autonomous hovering of a miniature flying ball (MFB) based on monocular vision. A camera is employed to estimate the three-dimensional position of the vehicle relative to the ground without auxiliary sensors, such as inertial measurement units (IMUs). An image of the ground captured by the camera mounted directly under the miniature flying ball is set as a reference. The position variations between the subsequent frames and the reference image are calculated by comparing their correspondence points. The Kalman filter is used to predict the position of the miniature flying ball to handle situations, such as a lost or wrong frame. Finally, a PID controller is designed, and the performance of the entire system is tested experimentally. The results show that the proposed method can keep the aircraft in a stable hover. PMID:26057040
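The control side of such a system can be sketched as a PID loop on a single axis; the gains, timestep, and toy point-mass dynamics below are assumptions, not the paper's tuning:

```python
# Simplified altitude-only sketch (assumed gains and dynamics, not the
# paper's tuning): a PID controller drives the vehicle's height toward a
# hover setpoint estimated by the vision system.
KP, KI, KD = 2.0, 0.1, 1.0   # assumed PID gains
DT = 0.05                    # control timestep in seconds

def simulate(setpoint, steps):
    z, vz, integral, prev_err = 0.0, 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - z
        integral += err * DT
        deriv = (err - prev_err) / DT
        prev_err = err
        thrust = KP * err + KI * integral + KD * deriv
        vz += thrust * DT          # toy point-mass dynamics, gravity ignored
        z += vz * DT
    return z

z = simulate(1.0, 2000)
print(round(z, 2))  # settles near the 1 m setpoint
```

In the full system the height fed to the controller comes from the monocular position estimate, with the Kalman filter bridging lost or corrupted frames so the PID input stays continuous.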

  3. Semi autonomous mine detection system

    SciTech Connect

    Douglas Few; Roelof Versteeg; Herman Herman

    2010-04-01

    CMMAD is a risk-reduction effort for the AMDS program. As part of CMMAD, multiple instances of semi-autonomous robotic mine detection systems were created. Each instance consists of a robotic vehicle equipped with sensors required for navigation and marking, countermine sensors, and a number of integrated software packages that provide real-time processing of the countermine sensor data as well as integrated control of the robotic vehicle, the sensor actuator, and the sensor. These systems were used to investigate critical interest functions (CIF) related to countermine robotic systems. To address the autonomy CIF, the INL-developed RIK was extended to allow interaction with a mine sensor processing code (MSPC). In limited field testing, this system performed well in detecting, marking, and avoiding both anti-tank (AT) and anti-personnel (AP) mines. Based on the results of the CMMAD investigation, we conclude that autonomous robotic mine detection is feasible. In addition, CMMAD contributed critical technical advances with regard to sensing, data processing, and sensor manipulation that will improve the performance of future fieldable systems. As a result, no substantial technical barriers exist that would preclude, from an autonomous robotics perspective, the rapid development and deployment of fieldable systems.

  4. Stereo-vision framework for autonomous vehicle guidance and collision avoidance

    NASA Astrophysics Data System (ADS)

    Scott, Douglas A.

    2003-08-01

    During a pre-programmed course to a particular destination, an autonomous vehicle may encounter environments that are unknown at the time of operation. Some regions may contain objects or vehicles that were not anticipated during the mission-planning phase, and user intervention is often impossible or undesirable under these circumstances. The onboard navigation system is therefore required to automatically make short-term adjustments to the flight plan and apply the necessary course corrections, visually navigating a suitable path through the environment that reliably avoids obstacles without significant deviation from the original course. This paper describes a general low-cost stereo-vision sensor framework for passively estimating the range map between a forward-looking autonomous vehicle and its environment. Typical vehicles may be unmanned ground or airborne vehicles. The range-map image describes the relative distance from the vehicle to the observed environment and contains information that could be used to compute a navigable flight plan, as well as visual and geometric detail about the environment for other onboard processes or future missions. Aspects relating to information flow through the framework are discussed, along with issues such as robustness and implementation, and other advantages and disadvantages of the framework. An outline of the physical structure of the system is presented, and an overview of the algorithms and applications of the framework is given.
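    The core passive-ranging relation behind such a stereo range map is depth Z = f·B/d for focal length f (in pixels), baseline B, and disparity d. A minimal sketch, with illustrative camera parameters:

```python
# Toy sketch of converting a disparity map into a range map via Z = f * B / d.
# Focal length (pixels), baseline (meters), and the disparities are illustrative.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        return float("inf")       # zero disparity: point at infinity
    return focal_px * baseline_m / disparity_px

def range_map(focal_px, baseline_m, disparity_map):
    return [[depth_from_disparity(focal_px, baseline_m, d) for d in row]
            for row in disparity_map]

# A 720-pixel focal length and 30 cm baseline: a 27-pixel disparity -> 8 m.
disparities = [[27.0, 54.0], [13.5, 0.0]]
print(range_map(720.0, 0.30, disparities))
# -> [[8.0, 4.0], [16.0, inf]]
```

    The inverse relation between disparity and depth is why range resolution degrades quadratically with distance, one of the robustness issues such a framework must account for.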

  5. Knowledge acquisition for autonomous systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry; Heer, Ewald

    1988-01-01

    Knowledge-based capabilities for autonomous aerospace systems, such as the NASA Space Station, must encompass conflict-resolution functions comparable to those of human operators, with all elements of the system working toward system goals in a concurrent, asynchronous-but-coordinated fashion. Knowledge extracted from a design database will support robotic systems by furnishing geometric, structural, and causal descriptions required for repair, disassembly, and assembly. The factual knowledge for these databases will be obtained from a master database through a technical management information system, and it will in many cases have to be augmented by domain-specific heuristic knowledge acquired from domain experts.

  6. Expert system modeling of a vision system

    NASA Astrophysics Data System (ADS)

    Reihani, Kamran; Thompson, Wiley E.

    1992-05-01

    The proposed artificial intelligence-based vision model incorporates natural recognition processes depicted as a visual pyramid and a hierarchical representation of objects in the database. The visual pyramid, whose base and apex represent pixels and the image, respectively, is used as an analogy for a vision system. This paper provides an overview of recognition activities and states in the framework of an inductive model. It also presents a natural vision system and a counterpart expert system model that incorporates the described operations.

  7. Multi-agent autonomous system

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor); Dohm, James (Inventor); Tarbell, Mark A. (Inventor)

    2010-01-01

    A multi-agent autonomous system for exploration of hazardous or inaccessible locations. The multi-agent autonomous system includes simple surface-based agents or craft controlled by an airborne tracking and command system. The airborne tracking and command system includes an instrument suite used to image an operational area and any craft deployed within the operational area. The image data is used to identify the craft, targets for exploration, and obstacles in the operational area. The tracking and command system determines paths for the surface-based craft using the identified targets and obstacles and commands the craft using simple movement commands to move through the operational area to the targets while avoiding the obstacles. Each craft includes its own instrument suite to collect information about the operational area that is transmitted back to the tracking and command system. The tracking and command system may be further coupled to a satellite system to provide additional image information about the operational area and provide operational and location commands to the tracking and command system.

  8. Integrated System for Autonomous Science

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Sherwood, Robert; Tran, Daniel; Cichy, Benjamin; Davies, Ashley; Castano, Rebecca; Rabideau, Gregg; Frye, Stuart; Trout, Bruce; Shulman, Seth; Doggett, Thomas; Ip, Felipe; Greeley, Ron; Baker, Victor; Dohn, James; Boyer, Darrell

    2006-01-01

    The New Millennium Program Space Technology 6 Project Autonomous Sciencecraft software implements an integrated system for autonomous planning and execution of scientific, engineering, and spacecraft-coordination actions. A prior version of this software was reported in "The TechSat 21 Autonomous Sciencecraft Experiment" (NPO-30784), NASA Tech Briefs, Vol. 28, No. 3 (March 2004), page 33. This software is now in continuous use aboard the Earth Orbiter 1 (EO-1) spacecraft mission and is being adapted for use in the Mars Odyssey and Mars Exploration Rovers missions. This software enables EO-1 to detect and respond to such events of scientific interest as volcanic activity, flooding, and freezing and thawing of water. It uses classification algorithms to analyze imagery onboard to detect changes, including events of scientific interest. Detection of such events triggers acquisition of follow-up imagery. The mission-planning component of the software develops a response plan that accounts for visibility of targets and operational constraints. The plan is then executed under the control of a task-execution component of the software that is capable of responding to anomalies.

  9. A vision system for an unmanned nonlethal weapon

    NASA Astrophysics Data System (ADS)

    Kogut, Greg; Drymon, Larry

    2004-10-01

    Unmanned weapons remove humans from deadly situations. However, some systems, such as unmanned guns, are difficult to control remotely: it is difficult for a soldier to identify and aim at specific points on targets from a remote location. This paper describes a computer vision and control system, developed at Space and Naval Warfare Systems Center, San Diego (SSC San Diego), for providing autonomous control of unmanned guns. The test platform, consisting of a non-lethal gun mounted on a pan-tilt mechanism, can be used as an unattended device or mounted on a robot for mobility. The system operates with a degree of autonomy determined by a remote user, ranging from teleoperated to fully autonomous. The teleoperated mode consists of remote joystick control over all aspects of the weapon, including aiming, arming, and firing; visual feedback is provided by near-real-time video feeds from bore-sight and wide-angle cameras. The semi-autonomous mode overlays tracking information on the real-time video, giving the user information on all targets detected and tracked by the vision system. The user selects a target with a mouse, and the system automatically aims the gun at it; arming and firing are still performed by teleoperation. In fully autonomous mode, all aspects of gun control are performed by the vision system.

  10. The Autonomous Pathogen Detection System

    SciTech Connect

    Dzenitis, J M; Makarewicz, A J

    2009-01-13

    We developed, tested, and now operate a civilian biological defense capability that continuously monitors the air for biological threat agents. The Autonomous Pathogen Detection System (APDS) collects, prepares, reads, analyzes, and reports results of multiplexed immunoassays and multiplexed PCR assays using Luminex® xMAP technology and a flow cytometer. The mission we conduct is particularly demanding: continuous monitoring, multiple threat agents, high sensitivity, challenging environments, and ultimately extremely low false positive rates. Here, we introduce the mission requirements and metrics, show the system engineering and analysis framework, and describe the progress to date, including early development and current status.

  11. Coherent laser vision system (CLVS)

    SciTech Connect

    1997-02-13

    The purpose of the CLVS research project is to develop a prototype fiber-optic-based Coherent Laser Vision System suitable for DOE's EM Robotics program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update geometric data on the order of once per second. The CLVS project plan required implementation in two phases: a Base Contract and a continuance option. This is the Base Program Interim Phase Topical Report, presenting the results of Phase 1 of the CLVS research project. Test and demonstration results provide a proof of concept for a system delivering 3D vision with the performance required to update geometric data on the order of once per second.

  12. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system, and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  13. APDS: Autonomous Pathogen Detection System

    SciTech Connect

    Langlois, R G; Brown, S; Burris, L; Colston, B; Jones, L; Makarewicz, T; Mariella, R; Masquelier, D; McBride, M; Milanovich, F; Masarabadi, S; Venkateswaran, K; Marshall, G; Olson, D; Wolcott, D

    2002-02-14

    An early warning system to counter bioterrorism, the Autonomous Pathogen Detection System (APDS) continuously monitors the environment for the presence of biological pathogens (e.g., anthrax) and once detected, it sounds an alarm much like a smoke detector warns of a fire. Long before September 11, 2001, this system was being developed to protect domestic venues and events including performing arts centers, mass transit systems, major sporting and entertainment events, and other high profile situations in which the public is at risk of becoming a target of bioterrorist attacks. Customizing off-the-shelf components and developing new components, a multidisciplinary team developed APDS, a stand-alone system for rapid, continuous monitoring of multiple airborne biological threat agents in the environment. The completely automated APDS samples the air, prepares fluid samples in-line, and performs two orthogonal tests: immunoassay and nucleic acid detection. When compared to competing technologies, APDS is unprecedented in terms of flexibility and system performance.

  14. Real-time vision systems

    SciTech Connect

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee

    1994-11-15

    Many industrial and defense applications require an ability to make instantaneous decisions based on sensor input of a time-varying process. Such systems are referred to as "real-time systems" because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  15. Autonomic Computing for Spacecraft Ground Systems

    NASA Technical Reports Server (NTRS)

    Li, Zhenping; Savkli, Cetin; Jones, Lori

    2007-01-01

    Autonomic computing for spacecraft ground systems increases system reliability and reduces the cost of spacecraft operations and software maintenance. In this paper, we present an autonomic computing solution for spacecraft ground systems at NASA Goddard Space Flight Center (GSFC), which consists of an open standard for a message-oriented architecture referred to as the GMSEC (Goddard Mission Services Evolution Center) architecture, and an autonomic computing tool, the Criteria Action Table (CAT). This solution has been used in many upgraded ground systems for NASA's missions, and provides a framework for developing solutions with higher autonomic maturity.

  16. Testbed for an autonomous system

    NASA Technical Reports Server (NTRS)

    Dikshit, Piyush; Guimaraes, Katia; Ramamurthy, Maya; Agrawala, Ashok K.; Larsen, Ronald L.

    1989-01-01

    In previous works, we defined a general architectural model for autonomous systems that can easily be mapped onto the functions of any automated system (SDAG-86-01), and we illustrated that model by applying it to the thermal management system of a space station (SDAG-87-01). In this note, we further develop that application and design the details of the implementation of the model. First, we present the environment of our application by describing the thermal management problem and an abstraction, called TESTBED, that includes a specific function for each module in the architecture and the nature of the interfaces between each pair of blocks.

  17. An Autonomous Flight Safety System

    NASA Technical Reports Server (NTRS)

    Bull, James B.; Lanzi, Raymond J.

    2007-01-01

    The Autonomous Flight Safety System (AFSS) being developed by NASA's Goddard Space Flight Center's Wallops Flight Facility and Kennedy Space Center has completed two successful developmental flights and is preparing for a third. AFSS has been demonstrated to be a viable architecture for implementation of a completely vehicle-based system capable of protecting life and property in the event of an errant vehicle by terminating the flight or initiating other actions. It is capable of replacing current human-in-the-loop systems or acting in parallel with them. AFSS is configured prior to flight in accordance with a specific rule set agreed upon by the range safety authority and the user to protect the public and assure mission success. This paper discusses the motivation for the project, describes the method of development, and presents an overview of the evolving architecture and the current status.

  18. Research in computer vision for autonomous systems

    NASA Astrophysics Data System (ADS)

    Kak, Avi; Yoder, Mark; Andress, Keith; Blask, Steve; Underwood, Tom

    1988-09-01

    This report addresses FLIR processing, LADAR processing, and electronic terrain board modeling. In the discussion of FLIR processing, issues analyzed include the classifiability of FLIR features, computationally efficient algorithms for target segmentation, metrics, etc. The discussion of LADAR includes a comparison of a number of different approaches to the segmentation of target surfaces from range images, extraction of silhouettes at different ranges, and reasoning strategies for the recognition of targets and estimation of their aspects. Regarding electronic terrain board modeling, it was shown how the readily available wire-frame data for strategic targets can be converted into volumetric models using the concepts of constructive solid geometry; it was then shown how synthetic range images that are very similar to real LADAR images can be generated from the resulting volumetric models. Also shown is how sensor noise can be added to these synthetic images to make them even more realistic.

  19. Information for Successful Interaction with Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Johnson, Kathy A.

    2003-01-01

    Interaction in heterogeneous mission operations teams is not well matched to classical models of coordination with autonomous systems. We describe methods of loose coordination and information management in mission operations. We describe an information agent and information management tool suite for managing information from many sources, including autonomous agents. We present an integrated model of levels of complexity of agent and human behavior, which shows types of information processing and points of potential error in agent activities. We discuss the types of information needed for diagnosing problems and planning interactions with an autonomous system. We discuss types of coordination for which designs are needed for autonomous system functions.

  20. A stereoscopic vision system

    NASA Astrophysics Data System (ADS)

    Kiraly, Zsolt

    In this investigation, an optical system is introduced that is suitable for inspecting the interiors of confined spaces, such as the walls of containers, cavities, reservoirs, fuel tanks, pipelines, and the gastrointestinal tract. The optical system wirelessly transmits stereoscopic (three-dimensional) video to a computer, which displays the video on the screen, where it can be viewed with shutter glasses. To minimize space requirements, the video from the two cameras (required to produce stereoscopic images) is multiplexed into a single stream for transmission. The video is demultiplexed inside the computer, corrected for fisheye distortion and lens misalignment, and cropped to the proper size. Algorithms were developed that enable the system to perform these tasks. A proof-of-concept device was constructed that demonstrates the operation and the practicality of the optical system. Using this device, tests were performed validating the concepts and the algorithms.

  1. Stereoscopic vision system

    NASA Astrophysics Data System (ADS)

    Király, Zsolt; Springer, George S.; Van Dam, Jacques

    2006-04-01

    In this investigation, an optical system is introduced for inspecting the interiors of confined spaces, such as the walls of containers, cavities, reservoirs, fuel tanks, pipelines, and the gastrointestinal tract. The optical system wirelessly transmits stereoscopic video to a computer that displays the video in realtime on the screen, where it is viewed with shutter glasses. To minimize space requirements, the videos from the two cameras (required to produce stereoscopic images) are multiplexed into a single stream for transmission. The video is demultiplexed inside the computer, corrected for fisheye distortion and lens misalignment, and cropped to the proper size. Algorithms are developed that enable the system to perform these tasks. A proof-of-concept device is constructed that demonstrates the operation and the practicality of the optical system. Using this device, tests are performed assessing the validity of the concepts and the algorithms.

  2. Drogue detection for vision-based autonomous aerial refueling via low rank and sparse decomposition with multiple features

    NASA Astrophysics Data System (ADS)

    Gao, Shibo; Cheng, Yongmei; Song, Chunhua

    2013-09-01

    Vision-based probe-and-drogue autonomous aerial refueling is a demanding task in modern aviation for both manned and unmanned aircraft. A key issue is accurately determining the relative position and orientation of the drogue and the probe for the relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is challenging due to the disorderly motion of the drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, drogue detection is treated as a moving-object detection problem, and a drogue detection algorithm based on low-rank and sparse decomposition with local multiple features is proposed. Global and local information about the drogue is introduced into the detection model in a unified way. Experimental results on real autonomous aerial refueling videos show that the proposed algorithm is effective.

  3. Autonomous navigation system and method

    SciTech Connect

    Bruemmer, David J; Few, Douglas A

    2009-09-08

    A robot platform includes perceptors, locomotors, and a system controller, which executes instructions for autonomously navigating a robot. The instructions repeat, on each iteration through an event timing loop, the acts of defining an event horizon based on the robot's current velocity, detecting a range to obstacles around the robot, testing for an event horizon intrusion by determining if any range to the obstacles is within the event horizon, and adjusting rotational and translational velocity of the robot accordingly. If the event horizon intrusion occurs, rotational velocity is modified by a proportion of the current rotational velocity reduced by a proportion of the range to the nearest obstacle and translational velocity is modified by a proportion of the range to the nearest obstacle. If no event horizon intrusion occurs, translational velocity is set as a ratio of a speed factor relative to a maximum speed.
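    One plausible reading of the velocity-update rules in the abstract can be sketched as follows; the horizon gain, speed factor, and scale factors are illustrative guesses, not the patent's values.

```python
# Hedged sketch of one iteration of the event-timing loop described above.
# The event horizon grows with translational speed; on an intrusion the robot
# slows in proportion to the nearest range and turns harder, otherwise it
# cruises at a fraction of maximum speed. All constants are illustrative.

def event_horizon(trans_vel, horizon_gain=1.5):
    return horizon_gain * trans_vel          # meters the robot "claims" ahead

def navigate_step(trans_vel, rot_vel, ranges,
                  max_speed=2.0, speed_factor=0.8,
                  rot_scale=0.5, trans_scale=0.4):
    nearest = min(ranges)
    if nearest < event_horizon(trans_vel):   # intrusion: slow down, turn harder
        rot_vel = rot_vel + rot_scale * rot_vel / max(nearest, 0.1)
        trans_vel = trans_scale * nearest
    else:                                    # clear: cruise at the speed setting
        trans_vel = speed_factor * max_speed
    return trans_vel, rot_vel

v, w = 1.0, 0.2
print(navigate_step(v, w, ranges=[4.0, 6.0]))   # clear path -> (1.6, 0.2)
print(navigate_step(v, w, ranges=[0.5, 6.0]))   # intrusion  -> (0.2, 0.4)
```

    Coupling the horizon to the current velocity gives the behavior the patent describes: the faster the robot moves, the earlier an obstacle counts as an intrusion.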

  4. Vision systems for manned and robotic ground vehicles

    NASA Astrophysics Data System (ADS)

    Sanders-Reed, John N.; Koon, Phillip L.

    2010-04-01

    A Distributed Aperture Vision System for ground vehicles is described. An overview of the hardware including sensor pod, processor, video compression, and displays is provided. This includes a discussion of the choice between an integrated sensor pod and individually mounted sensors, open architecture design, and latency issues as well as flat panel versus head mounted displays. This technology is applied to various ground vehicle scenarios, including closed-hatch operations (operator in the vehicle), remote operator tele-operation, and supervised autonomy for multi-vehicle unmanned convoys. In addition, remote vision for automatic perimeter surveillance using autonomous vehicles and automatic detection algorithms is demonstrated.

  5. System Engineering of Autonomous Space Vehicles

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Johnson, Stephen B.; Trevino, Luis

    2014-01-01

    Human exploration of the solar system requires fully autonomous systems when travelling more than 5 light minutes from Earth. This autonomy is necessary to manage a large, complex spacecraft with limited crew members and skills available. The communication latency requires the vehicle to deal with events with only limited crew interaction in most cases. The engineering of these systems requires an extensive knowledge of the spacecraft systems, information theory, and autonomous algorithm characteristics. The characteristics of the spacecraft systems must be matched with the autonomous algorithm characteristics to reliably monitor and control the system. This presents a large system engineering problem. Recent work on product-focused, elegant system engineering will be applied to this application, looking at the full autonomy stack, the matching of autonomous systems to spacecraft systems, and the integration of different types of algorithms. Each of these areas will be outlined and a general approach defined for system engineering to provide the optimal solution to the given application context.

  6. Autonomous intelligent cruise control system

    NASA Astrophysics Data System (ADS)

    Baret, Marc; Bomer, Thierry T.; Calesse, C.; Dudych, L.; L'Hoist, P.

    1995-01-01

    Autonomous intelligent cruise control (AICC) systems do more than control a vehicle's speed: by acting on the throttle and, when necessary, the brakes, they can automatically maintain the relative speed and distance between two vehicles in the same lane. Beyond comfort, these new systems should improve safety on highways. By applying a technique derived from space research carried out by MATRA, a sensor based on a charge coupled device (CCD) was designed to acquire the light from pulsed laser diodes reflected off standard-mounted car reflectors. The CCD works in a unique mode called flash during transfer (FDT), which allows identification of target patterns in severe optical environments. It provides high accuracy for the distance and angular position of targets, and the absence of moving mechanical parts ensures high reliability. The large field of view and the high measurement rate give a global situation assessment and a short reaction time. Tracking and filtering algorithms have been developed to select the target from which the equipped vehicle determines its safety distance and speed, taking into account its own maneuvering and the behavior of other vehicles.
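    A back-of-envelope version of the safety-distance rule such a controller might enforce combines a reaction-time headway with the difference in braking distances between the two vehicles; all parameters below are illustrative assumptions, not MATRA's design values.

```python
# Illustrative safety-gap rule for a following-distance controller:
# gap = (own speed x reaction time) + (own braking distance - lead braking distance).
# Reaction time and deceleration limits are assumed values.

def safe_gap(own_speed, lead_speed, reaction_time=1.0,
             own_decel=6.0, lead_decel=6.0):
    """Minimum gap (m) to stop without hitting a hard-braking lead vehicle."""
    reaction = own_speed * reaction_time
    braking = own_speed**2 / (2 * own_decel) - lead_speed**2 / (2 * lead_decel)
    return reaction + max(braking, 0.0)

# Both cars at 30 m/s (108 km/h) with equal braking: gap is pure reaction headway.
print(safe_gap(30.0, 30.0))            # -> 30.0
# Closing on a car doing 20 m/s: the speed difference adds braking margin.
print(round(safe_gap(30.0, 20.0), 1))  # -> 71.7
```

    A real AICC controller would compare the measured range against such a threshold at the sensor's update rate and modulate throttle and brakes to hold the gap.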

  7. Basic design principles of colorimetric vision systems

    NASA Astrophysics Data System (ADS)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper, and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations; in many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain, and the few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles must be applied to vision systems and how the electro-optical design features of colorimeters must be modified for implementation in vision systems. The subject far exceeds the limits of a journal paper, so only the most important aspects are discussed, along with an overview of the major application areas for colorimetric vision systems. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.
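    As a concrete instance of the colorimetric principles the paper insists on, the standard pipeline a vision-based color gauge must implement is sRGB linearization, conversion to CIE XYZ, and then to the perceptual CIELAB space. The constants below are the published sRGB/D65 values; the pipeline structure, not any particular product, is the point.

```python
# Standard colorimetric pipeline: gamma-decode sRGB, convert to CIE XYZ (D65),
# then to CIELAB, the space in which color differences are roughly perceptual.

def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_xyz(r, g, b):
    r, g, b = (srgb_to_linear(v) for v in (r, g, b))
    return (0.4124 * r + 0.3576 * g + 0.1805 * b,
            0.2126 * r + 0.7152 * g + 0.0722 * b,
            0.0193 * r + 0.1192 * g + 0.9505 * b)

def xyz_to_lab(x, y, z, white=(0.9505, 1.0, 1.089)):
    def f(t):  # CIELAB compression with the linear toe for dark values
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((x, y, z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b = xyz_to_lab(*rgb_to_xyz(1.0, 1.0, 1.0))
print(round(L), round(a), round(b))   # sRGB white -> 100 0 0
```

    Differences between two Lab triples (a Euclidean ΔE) are what a production tolerance would be written against, which is exactly what raw camera RGB values cannot provide.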

  8. Cooperative Autonomic Management in Dynamic Distributed Systems

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Zhao, Ming; Fortes, José A. B.

    The centralized management of large distributed systems is often impractical, particularly when both the topology and the status of the system change dynamically. This paper proposes an approach to application-centric self-management in large distributed systems consisting of a collection of autonomic components that join and leave the system dynamically. Cooperative autonomic components self-organize into a dynamically created overlay network. Through local information sharing with neighbors, each component gains access to global information as needed for optimizing the performance of applications. The approach has been validated and evaluated by developing a decentralized autonomic system consisting of multiple autonomic application managers previously developed for the In-VIGO grid-computing system. Using analytical results from complex random network theory and measurements made on a prototype system, we demonstrate the robustness, self-organization, and adaptability of our approach, both theoretically and experimentally.

  9. Vision inspection system and method

    NASA Technical Reports Server (NTRS)

    Huber, Edward D. (Inventor); Williams, Rick A. (Inventor)

    1997-01-01

    An optical vision inspection system (4) and method for multiplexed illuminating, viewing, analyzing and recording a range of characteristically different kinds of defects, depressions, and ridges in a selected material surface (7) with first and second alternating optical subsystems (20, 21) illuminating and sensing successive frames of the same material surface patch. To detect the different kinds of surface features including abrupt as well as gradual surface variations, correspondingly different kinds of lighting are applied in time-multiplexed fashion to the common surface area patches under observation.

  10. The Secure, Transportable, Autonomous Reactor System

    SciTech Connect

    Brown, N.W.; Hassberger, J.A.; Smith, C.; Carelli, M.; Greenspan, E.; Peddicord, K.L.; Stroh, K.; Wade, D.C.; Hill, R.N.

    1999-05-27

    The Secure, Transportable, Autonomous Reactor (STAR) system is a development architecture for implementing a small nuclear power system, specifically aimed at meeting the growing energy needs of much of the developing world. It simultaneously provides very high standards for safety, proliferation resistance, ease and economy of installation, operation, and ultimate disposition. The STAR system accomplishes these objectives through a combination of modular design, factory manufacture, long lifetime without refueling, autonomous control, and high reliability.

  11. Development of an autonomous target tracking system

    NASA Astrophysics Data System (ADS)

    Gidda, Venkata Ramaiah

    In recent years, surveillance and border patrol have become key areas of UAV research. Increases in the computational capability of computers and embedded electronics, coupled with the compatibility of various commercial vision algorithms with commercial off-the-shelf (COTS) embedded electronics, have further fuelled this research. The basic task in these applications is perception of the environment through available visual sensors such as cameras. Visual tracking, as the name implies, is the tracking of objects using a camera. The process of autonomous target tracking starts with the selection of the target in a sequence of video frames transmitted from the on-board camera. We use an improved fast dynamic template matching algorithm coupled with a Kalman filter to track the selected target in consecutive video frames. The selected target is saved as a reference template. On the ground station computer, the reference template is overlaid on the live streaming video from the on-board system, starting from the upper left corner of the video frame. The template is slid pixel by pixel over the entire source image, and a comparison of the pixels is performed between the template and the source image. A confidence value R of the match is calculated at each pixel. Depending on the matching method used, the best-match pixel location corresponds to either the highest or the lowest confidence value R. The best-match pixel location is communicated to the on-board gimbal controller over the wireless XBee network. The software on the controller actuates the pan-tilt servos to continuously hold the selected target at the center of the video frame. The complete system is a portable control system assembled from commercial off-the-shelf parts. The tracking system is tested on a target exhibiting several motion patterns.
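    As a minimal illustration of the sliding-template comparison described above, the sketch below scans a template over a small grayscale image and scores each offset with a sum of squared differences. The paper's actual algorithm is an improved fast dynamic template matching coupled with a Kalman filter; this naive scan, with made-up pixel values, only shows the basic idea of the per-pixel confidence score R.

```python
# Naive sliding-window template matching using sum of squared differences (SSD).
# Lower SSD means a better match, so the best match is the minimum-score offset.

def match_template(image, template):
    """Slide `template` over `image`; return (row, col) of the lowest-SSD match."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_score, best_pos = float("inf"), (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum(
                (image[r + i][c + j] - template[i][j]) ** 2
                for i in range(th) for j in range(tw)
            )
            if ssd < best_score:
                best_score, best_pos = ssd, (r, c)
    return best_pos

image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 8], [7, 9]]
print(match_template(image, template))  # -> (1, 1)
```

    A real system would compute this with an optimized library routine and restrict the search window using the Kalman filter's prediction rather than scanning the whole frame.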

  12. VISION 21 SYSTEMS ANALYSIS METHODOLOGIES

    SciTech Connect

    G.S. Samuelsen; A. Rao; F. Robson; B. Washom

    2003-08-11

    Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into power plant systems that meet performance and emission goals of the Vision 21 program. The study efforts have narrowed down the myriad of fuel processing, power generation, and emission control technologies to selected scenarios that identify those combinations having the potential to achieve the Vision 21 program goals of high efficiency and minimized environmental impact while using fossil fuels. The technology levels considered are based on projected technical and manufacturing advances being made in industry and on advances identified in current and future government-supported research. Included in these advanced systems are solid oxide fuel cells and advanced-cycle gas turbines. The results of this investigation will serve as a guide for the U.S. Department of Energy in identifying the research areas and technologies that warrant further support.

  13. A Vision-Based Relative Navigation Approach for Autonomous Multirotor Aircraft

    NASA Astrophysics Data System (ADS)

    Leishman, Robert C.

    Autonomous flight in unstructured, confined, and unknown GPS-denied environments is a challenging problem. Solutions could be tremendously beneficial for scenarios that require information about areas that are difficult to access and that present a great amount of risk. The goal of this research is to develop a new framework that enables improved solutions to this problem and to validate the approach with experiments using a hardware prototype. In Chapter 2 we examine the consequences and practical aspects of using an improved dynamic model for multirotor state estimation, using only IMU measurements. The improved model correctly explains the measurements available from the accelerometers on a multirotor. We provide hardware results demonstrating the improved attitude, velocity and even position estimates that can be achieved through the use of this model. We propose a new architecture to simplify some of the challenges that constrain GPS-denied aerial flight in Chapter 3. At the core, the approach combines visual graph-SLAM with a multiplicative extended Kalman filter (MEKF). More importantly, we depart from the common practice of estimating global states and instead keep the position and yaw states of the MEKF relative to the current node in the map. This relative navigation approach provides a tremendous benefit compared to maintaining estimates with respect to a single global coordinate frame. We discuss the architecture of this new system and provide important details for each component. We verify the approach with goal-directed autonomous flight-test results. The MEKF is the basis of the new relative navigation approach and is detailed in Chapter 4. We derive the relative filter and show how the states must be augmented and marginalized each time a new node is declared. The relative estimation approach is verified using hardware flight test results accompanied by comparisons to motion capture truth. Additionally, flight results with estimates in the control

  14. An introduction to autonomous control systems

    NASA Technical Reports Server (NTRS)

    Antsaklis, Panos J.; Passino, Kevin M.; Wang, S. J.

    1991-01-01

    The functions, characteristics, and benefits of autonomous control are outlined. An autonomous control functional architecture for future space vehicles that incorporates the concepts and characteristics described is presented. The controller is hierarchical, with an execution level (the lowest level), coordination level (middle level), and management and organization level (highest level). The general characteristics of the overall architecture, including those of the three levels, are explained, and an example to illustrate their functions is given. Mathematical models for autonomous systems, including 'logical' discrete event system models, are discussed. An approach to the quantitative, systematic modeling, analysis, and design of autonomous controllers is also discussed. It is a hybrid approach since it uses conventional analysis techniques based on difference and differential equations and new techniques for the analysis of the systems described with a symbolic formalism such as finite automata. Some recent results from the areas of planning and expert systems, machine learning, artificial neural networks, and the area of restructurable controls are briefly outlined.

  15. Musca domestica inspired machine vision system with hyperacuity

    NASA Astrophysics Data System (ADS)

    Riley, Dylan T.; Harman, William M.; Tomberlin, Eric; Barrett, Steven F.; Wilcox, Michael; Wright, Cameron H. G.

    2005-05-01

    Musca domestica, the common house fly, has a simple yet powerful and accessible vision system. Cajal indicated in 1885 that the fly's vision system is the same as in the human retina. The house fly has some intriguing vision system features such as fast, analog, parallel operation. Furthermore, it has the ability to detect movement and objects at far better resolution than predicted by photoreceptor spacing, termed hyperacuity. We are investigating the mechanisms behind these features and incorporating them into next-generation vision systems. We have developed a prototype sensor that employs a fly-inspired arrangement of photodetectors sharing a common lens. The Gaussian-shaped acceptance profile of each sensor, coupled with overlapping sensor fields of view, provides the necessary configuration for obtaining hyperacuity data. The sensor is able to detect object movement with far greater resolution than that predicted by photoreceptor spacing. We have exhaustively tested and characterized the sensor to determine its practical resolution limit. Our tests, coupled with theory from Bucklew and Saleh (1985), indicate that the limit to the hyperacuity response may only be related to target contrast. We have also implemented an array of these prototype sensors which will allow for two-dimensional position location. These high-resolution, low-contrast-capable sensors are being developed for use as a vision system for an autonomous robot and the next generation of smart wheelchairs. However, they are easily adapted for biological endoscopy, downhole monitoring in oil wells, and other applications.

  16. Computer vision for driver assistance systems

    NASA Astrophysics Data System (ADS)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, progress has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach combines sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking, and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is the integrative coupling of different algorithms providing partly redundant information.

  17. Stereo-vision-based terrain mapping for off-road autonomous navigation

    NASA Astrophysics Data System (ADS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-05-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single-frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
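    A hypothetical sketch of the per-cell temporal filtering described above: a single-frame cell is merged into the corresponding world-map cell, no-go labels propagate, and elevation and cost are blended by confidence. The field names and weighting scheme are illustrative assumptions, not JPL's actual implementation.

```python
# Merge one single-frame terrain cell into the world map (illustrative only).
# No-go labels are sticky; scalar fields are blended by confidence weight.

def merge_cell(world, frame):
    if frame["no_go"]:
        world["no_go"] = True  # an obstacle seen in any frame marks the cell no-go
    w_conf, f_conf = world["conf"], frame["conf"]
    total = w_conf + f_conf
    # Confidence-weighted temporal filtering of elevation and traversability cost.
    world["elev"] = (world["elev"] * w_conf + frame["elev"] * f_conf) / total
    world["cost"] = (world["cost"] * w_conf + frame["cost"] * f_conf) / total
    world["conf"] = min(total, 1.0)  # confidence grows with repeated observations
    return world

world = {"no_go": False, "elev": 1.0, "cost": 0.2, "conf": 0.5}
frame = {"no_go": True, "elev": 1.2, "cost": 0.6, "conf": 0.5}
print(merge_cell(world, frame))
```

    A real world map would hold a grid of such cells plus the terrain classification and roughness fields the abstract mentions.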

  18. Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-01-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single-frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  19. Autonomous control systems - Architecture and fundamental issues

    NASA Technical Reports Server (NTRS)

    Antsaklis, P. J.; Passino, K. M.; Wang, S. J.

    1988-01-01

    A hierarchical functional autonomous controller architecture is introduced. In particular, the architecture for the control of future space vehicles is described in detail; it is designed to ensure the autonomous operation of the control system and it allows interaction with the pilot and crew/ground station, and the systems on board the autonomous vehicle. The fundamental issues in autonomous control system modeling and analysis are discussed. It is proposed to utilize a hybrid approach to modeling and analysis of autonomous systems. This will incorporate conventional control methods based on differential equations and techniques for the analysis of systems described with a symbolic formalism. In this way, the theory of conventional control can be fully utilized. It is stressed that autonomy is the design requirement, and intelligent control methods appear, at present, to offer some of the necessary tools to achieve autonomy. A conventional approach may evolve and replace some or all of the 'intelligent' functions. It is shown that in addition to conventional controllers, the autonomous control system incorporates planning, learning, and FDI (fault detection and identification).

  20. Autonomous underwater pipeline monitoring navigation system

    NASA Astrophysics Data System (ADS)

    Mitchell, Byrel; Mahmoudian, Nina; Meadows, Guy

    2014-06-01

    This paper details the development of an autonomous motion-control and navigation algorithm for an autonomous underwater vehicle, the Ocean Server IVER3, to track long linear features such as underwater pipelines. As part of this work, the Nonlinear and Autonomous Systems Laboratory (NAS Lab) developed an algorithm that utilizes inputs from the vehicle's state-of-the-art sensor package, which includes digital imaging, digital 3-D sidescan sonar, and acoustic Doppler current profilers. The resulting algorithms should tolerate real-world waterways with episodic strong currents, low visibility, high sediment content, and a variety of small and large vessel traffic.

  1. Precise calibration of binocular vision system used for vision measurement.

    PubMed

    Cui, Yi; Zhou, Fuqiang; Wang, Yexin; Liu, Liu; Gao, He

    2014-04-21

    Binocular vision calibration is of great importance in 3D machine vision measurement. With respect to binocular vision calibration, nonlinear optimization is a crucial step in improving accuracy. Existing optimization methods mostly aim at minimizing the sum of reprojection errors for the two cameras based on their respective 2D image pixel coordinates. However, the subsequent measurement process is conducted in a 3D coordinate system, which is not consistent with the optimization coordinate system. Moreover, the error criteria for optimization and measurement differ: an equal pixel-distance error in the 2D image plane leads to different 3D metric distance errors at different positions in front of the camera. To address these issues, we propose a precise calibration method for binocular vision systems devoted to minimizing the metric distance error between the point reconstructed through optimal triangulation and the ground truth in the 3D measurement coordinate system. In addition, the inherent epipolar constraint and a constant distance constraint are combined to enhance the optimization process. To evaluate the performance of the proposed method, both simulated and real experiments have been carried out, and the results show that the proposed method is reliable and efficient in improving measurement accuracy compared with the conventional method. PMID:24787804
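    The method above minimizes the 3D metric distance error between triangulated points and ground truth. As a simplified stand-in for the optimal triangulation step, the sketch below triangulates a point with the midpoint method from two camera rays, each given as an origin and a direction vector; all coordinates are illustrative.

```python
# Midpoint triangulation: find the point halfway between the closest points
# of two 3D rays o + t*d. A simplified stand-in for optimal triangulation.

def midpoint_triangulate(o1, d1, o2, d2):
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # approaches 0 for (near-)parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [o + t1 * v for o, v in zip(o1, d1)]  # closest point on ray 1
    p2 = [o + t2 * v for o, v in zip(o2, d2)]  # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two rays from a stereo pair converging on the point (0, 0, 5):
left  = ([-0.1, 0.0, 0.0], [0.02, 0.0, 1.0])
right = ([0.1, 0.0, 0.0], [-0.02, 0.0, 1.0])
p = midpoint_triangulate(left[0], left[1], right[0], right[1])
print(p)
```

    The calibration in the paper would then compare such reconstructed points against ground-truth 3D positions and adjust the camera parameters to shrink that metric error.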

  2. Towards autonomic computing in machine vision applications: techniques and strategies for in-line 3D reconstruction in harsh industrial environments

    NASA Astrophysics Data System (ADS)

    Molleda, Julio; Usamentiaga, Rubén; García, Daniel F.; Bulnes, Francisco G.

    2011-03-01

    Nowadays, machine vision applications require skilled users to configure, tune, and maintain them. Because such users are scarce, the robustness and reliability of applications are usually significantly affected. Autonomic computing offers a set of principles such as self-monitoring, self-regulation, and self-repair which can be used to partially overcome those problems. Systems which include self-monitoring observe their internal states and extract features about them. Systems with self-regulation are capable of regulating their internal parameters to provide the best quality of service depending on the operational conditions and environment. Finally, self-repairing systems are able to detect anomalous working behavior and to provide strategies to deal with such conditions. Machine vision applications are the perfect field in which to apply autonomic computing techniques. This type of application has strong constraints on reliability and robustness, especially when working in industrial environments, and must provide accurate results even under changing conditions such as luminance or noise. In order to exploit the autonomic approach in a machine vision application, we believe the architecture of the system must be designed as a set of orthogonal modules. In this paper, we describe how autonomic computing techniques can be applied to machine vision systems, using as an example a real application: 3D reconstruction in harsh industrial environments based on laser range finding. The application is based on modules with different responsibilities at three layers: image acquisition and processing (low level), monitoring (middle level), and supervision (high level). High-level modules supervise the execution of low-level modules. Based on the information gathered by mid-level modules, they regulate low-level modules in order to optimize the global quality of service, and tune the module parameters based on operational conditions and on the environment. Regulation actions involve

  3. Position estimation and driving of an autonomous vehicle by monocular vision

    NASA Astrophysics Data System (ADS)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Real-time automatic adaptive tracking for target recognition provided autonomous control of a scale-model electric truck. The two-wheel-drive truck was modified as an autonomous rover test-bed for vision-based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent, relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.

  4. Autonomous Attitude Determination System (AADS). Volume 1: System description

    NASA Technical Reports Server (NTRS)

    Saralkar, K.; Frenkel, Y.; Klitsch, G.; Liu, K. S.; Lefferts, E.; Tasaki, K.; Snow, F.; Garrahan, J.

    1982-01-01

    Information necessary to understand the Autonomous Attitude Determination System (AADS) is presented. Topics include AADS requirements, program structure, algorithms, and system generation and execution.

  5. 77 FR 2342 - Seventeenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-17

    ... Federal Aviation Administration Seventeenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision... Transportation (DOT). ACTION: Notice of RTCA Special Committee 213, Enhanced Flight Vision/ Synthetic Vision... meeting of RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS)....

  6. Concurrent algorithms for a mobile robot vision system

    SciTech Connect

    Jones, J.P.; Mann, R.C.

    1988-01-01

    The application of computer vision to mobile robots has generally been hampered by insufficient on-board computing power. The advent of VLSI-based general-purpose concurrent multiprocessor systems promises to give mobile robots an increasing amount of on-board computing capability, and to allow computation-intensive data analysis to be performed without high-bandwidth communication with a remote system. This paper describes the integration of robot vision algorithms on a 3-dimensional hypercube system on board a mobile robot developed at Oak Ridge National Laboratory. The vision system is interfaced to navigation and robot control software, enabling the robot to maneuver in a laboratory environment, to find a known object of interest, and to recognize the object's status based on visual sensing. We first present the robot system architecture and the principles followed in the vision system implementation. We then provide some benchmark timings for low-level image processing routines, describe a concurrent algorithm with load balancing for the Hough transform, a new algorithm for binary component labeling, and an algorithm for the concurrent extraction of region features from labeled images. This system analyzes a scene in less than 5 seconds and has proven to be a valuable experimental tool for research in mobile autonomous robots. 9 refs., 1 fig., 3 tabs.
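    The Hough transform mentioned above can be sketched in its basic serial form: edge points vote in a (rho, theta) accumulator, and the strongest bin identifies the dominant line. This toy version does not show the paper's actual contribution, a load-balanced concurrent implementation on the hypercube; the point set is made up for illustration.

```python
import math

# Serial Hough transform for line detection on a small binary point set.
# Each point votes for every (rho, theta) bin it could lie on; the bin
# with the most votes corresponds to the dominant line.

def hough_lines(points, n_theta=180, rho_res=1.0):
    votes = {}
    for x, y in points:
        for k in range(n_theta):
            theta = k * math.pi / n_theta
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_res)
            votes[(rho, k)] = votes.get((rho, k), 0) + 1
    return max(votes, key=votes.get)  # (rho bin, theta index) of strongest line

# Collinear points on the vertical line x = 3 all vote for theta = 0, rho = 3:
rho, k = hough_lines([(3, 0), (3, 10), (3, 20), (3, 40)])
print(rho, k)  # -> 3 0
```

    A concurrent version would partition either the points or the accumulator across processors, which is where the load-balancing problem the paper addresses arises.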

  7. CYCLOPS: A mobile robotic platform for testing and validating image processing and autonomous navigation algorithms in support of artificial vision prostheses.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2009-12-01

    While artificial vision prostheses are quickly becoming a reality, actual testing time with visual prosthesis carriers is at a premium. Moreover, it is helpful to have a more realistic functional approximation of a blind subject. Instead of a normal subject with a healthy retina looking at a low-resolution (pixelated) image on a computer monitor or head-mounted display, a more realistic approximation is achieved by employing a subject-independent mobile robotic platform that uses a pixelated view as its sole visual input for navigation purposes. We introduce CYCLOPS: an AWD, remotely controllable mobile robotic platform that serves as a testbed for real-time image processing and autonomous navigation systems for the purpose of enhancing the visual experience afforded by visual prosthesis carriers. Complete with wireless Internet connectivity and a fully articulated digital camera with wireless video link, CYCLOPS supports both interactive tele-commanding via joystick and autonomous self-commanding. Due to its onboard computing capabilities and extended battery life, CYCLOPS can perform complex and numerically intensive calculations, such as image processing and autonomous navigation algorithms, in addition to interfacing to additional sensors. Its Internet connectivity renders CYCLOPS a worldwide-accessible testbed for researchers in the field of artificial vision systems. CYCLOPS enables subject-independent evaluation and validation of image processing and autonomous navigation systems with respect to the utility and efficiency of supporting and enhancing visual prostheses, while potentially reducing to a necessary minimum the need for valuable testing time with actual visual prosthesis carriers. PMID:19651459

  8. Sensorpedia: Information Sharing Across Autonomous Sensor Systems

    SciTech Connect

    Gorman, Bryan L; Resseguie, David R; Tomkins-Tinch, Christopher H

    2009-01-01

    The concept of adapting social media technologies is introduced as a means of achieving information sharing across autonomous sensor systems. Historical examples of interoperability as an underlying principle in loosely coupled systems are compared and contrasted with corresponding tightly coupled, integrated systems. Examples of ad hoc information sharing solutions based on Web 2.0 social networks, mashups, blogs, wikis, and data tags are presented and discussed. The underlying technologies of these solutions are isolated and defined, and Sensorpedia is presented as a formalized application for implementing sensor information sharing across large-scale enterprises with incompatible autonomous sensor systems.

  9. Part identification in robotic assembly using vision system

    NASA Astrophysics Data System (ADS)

    Balabantaray, Bunil Kumar; Biswal, Bibhuti Bhusan

    2013-12-01

    A machine vision system plays an important role in making a robotic assembly system autonomous. Identification of the correct part is an important task that must be performed carefully by the vision system to feed the robot correct information for further processing. This process consists of many sub-processes, wherein image capturing, digitizing, enhancing, etc. account for reconstructing the part for subsequent operations. Interest-point detection in the grabbed image therefore plays an important role in the entire image processing activity, and the correct tool must be chosen for the process with respect to the given environment. In this paper, three major corner detection algorithms are analyzed on the basis of their accuracy, speed, and robustness to noise. The work is performed in Matlab R2012a. An attempt has been made to find the best algorithm for the problem.

  10. Vision guided landing of an autonomous helicopter in hazardous terrain

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew E.; Montgomery, Jim

    2005-01-01

    Future robotic space missions will employ a precision soft-landing capability that will enable exploration of previously inaccessible sites that have strong scientific significance. To enable this capability, a fully autonomous onboard system that identifies and avoids hazardous features such as steep slopes and large rocks is required. Such a system will also provide greater functionality in unstructured terrain to unmanned aerial vehicles. This paper describes an algorithm for landing hazard avoidance based on images from a single moving camera. The core of the algorithm is an efficient application of structure from motion to generate a dense elevation map of the landing area. Hazards are then detected in this map and a safe landing site is selected. The algorithm has been implemented on an autonomous helicopter testbed and demonstrated four times, resulting in the first autonomous landing of an unmanned helicopter in unknown and hazardous terrain.
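    A toy version of the hazard-detection step described above, using local relief on a dense elevation map as a stand-in for the slope and rock checks; the grid values and threshold are illustrative assumptions, not the paper's algorithm.

```python
# Mark elevation-map cells as safe landing candidates if their elevation
# differs from every 4-neighbor by at most `max_relief` (a crude proxy
# for "no steep slopes or large rocks nearby").

def find_safe_cells(elev, max_relief=0.3):
    rows, cols = len(elev), len(elev[0])
    safe = []
    for r in range(rows):
        for c in range(cols):
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if all(
                abs(elev[r][c] - elev[i][j]) <= max_relief
                for i, j in nbrs if 0 <= i < rows and 0 <= j < cols
            ):
                safe.append((r, c))
    return safe

elev = [
    [0.0, 0.1, 1.5],   # 1.5 models a large rock
    [0.1, 0.1, 0.2],
    [0.0, 0.2, 0.2],
]
print(find_safe_cells(elev))
```

    A real landing-site selector would further rank the safe cells, for example by distance from hazards and by the vehicle's footprint.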

  11. Development of a vision system for an intelligent ground vehicle

    NASA Astrophysics Data System (ADS)

    Nagel, Robert L.; Perry, Kenneth; Stone, Robert B.; McAdams, Daniel A.

    2009-01-01

    The development of a vision system for an autonomous ground vehicle designed and constructed for the Intelligent Ground Vehicle Competition (IGVC) is discussed. The requirements for the vision system of the autonomous vehicle are explored via functional analysis, considering the flows (materials, energies, and signals) into the vehicle and the changes required of each flow within the vehicle system. Functional analysis leads to a vision system based on a laser range finder (LIDAR) and a camera. Input from the vision system is processed via a ray-casting algorithm whereby the camera data and the LIDAR data are analyzed as a single array of points representing obstacle locations, which, for the IGVC, consist of white lines on the horizontal plane and construction markers on the vertical plane. Functional analysis also leads to a multithreaded application where the ray-casting algorithm is a single thread of the vehicle's software, which consists of multiple threads controlling motion, providing feedback, and processing the data from the camera and LIDAR. LIDAR data is collected as distances and angles from the front of the vehicle to obstacles. Camera data is processed using an adaptive threshold algorithm to identify color changes within the collected image; the image is also corrected for camera-angle distortion, adjusted to the global coordinate system, and processed using a least-squares method to identify white boundary lines. Our IGVC robot, MAX, serves as the running example for all methods discussed in the paper, and all testing and results are based on it.
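    The white-boundary-line identification step above uses a least-squares fit. A generic ordinary-least-squares fit of y = m*x + b to detected line pixels might look like the following sketch; the point coordinates are made up for illustration.

```python
# Ordinary least-squares fit of a line y = m*x + b to a set of 2D points,
# using the closed-form normal equations for one predictor.

def fit_line(points):
    """Least-squares fit y = m*x + b; returns (m, b)."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Noiseless pixels along y = 2x + 1:
m, b = fit_line([(0, 1), (1, 3), (2, 5), (3, 7)])
print(m, b)  # -> 2.0 1.0
```

    With real thresholded-camera pixels the points are noisy, so a robust variant (for example, discarding outliers before refitting) would typically be layered on top of this.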

  12. Shared vision and autonomous motivation vs. financial incentives driving success in corporate acquisitions.

    PubMed

    Clayton, Byron C

    2014-01-01

    Successful corporate acquisitions require their managers to achieve substantial performance improvements in order to sufficiently cover acquisition premiums, the expected return of debt and equity investors, and the additional resources needed to capture synergies and accelerate growth. Acquirers understand that achieving the performance improvements necessary to cover these costs and create value for investors will most likely require a significant effort from mergers and acquisitions (M&A) management teams. This understanding drives the common and longstanding practice of offering hefty performance incentive packages to key managers, assuming that financial incentives will induce in-role and extra-role behaviors that drive organizational change and growth. The present study debunks the assumptions of this common M&A practice, providing quantitative evidence that shared vision and autonomous motivation are far more effective drivers of managerial performance than financial incentives. PMID:25610406

  14. Environmental Recognition and Guidance Control for Autonomous Vehicles using Dual Vision Sensor and Applications

    NASA Astrophysics Data System (ADS)

    Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki

    We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor and navigation control based on binocular images. As an application of these techniques, we consider the development of a guide robot that can play the role of a guide dog, aiding people such as the visually impaired or the aged. This paper presents a recognition algorithm that detects the line of a series of Braille blocks and the boundary line between a sidewalk and a roadway where a difference in level exists, from binocular images obtained by a pair of parallel-arrayed CCD cameras. This paper also presents a tracking algorithm with which the guide robot traces along a series of Braille blocks and avoids obstacles and unsafe areas that lie in the way of a person accompanied by the guide robot.

  15. Far and proximity maneuvers of a constellation of service satellites and autonomous pose estimation of customer satellite using machine vision

    NASA Astrophysics Data System (ADS)

    Arantes, Gilberto, Jr.; Marconi Rocco, Evandro; da Fonseca, Ijar M.; Theil, Stephan

    2010-05-01

    Space robotics has a substantial interest in achieving on-orbit satellite servicing operations autonomously, e.g. rendezvous and docking/berthing (RVD) with customer and malfunctioning satellites. An on-orbit servicing vehicle requires the ability to estimate position and attitude in situations where the target is uncooperative, as when the target is damaged. In this context, this work presents a robust autonomous pose system applied to RVD missions. Our approach is based on computer vision, using a single camera and some previous knowledge of the target, i.e. the customer spacecraft. A rendezvous mission analysis tool for autonomous service satellites has been developed, covering both far maneuvers, e.g. distances above 1 km from the target, and close maneuvers. The far operations consist of orbit transfer using the Lambert formulation. The close operations include the inspection phase (during which the pose estimation is computed) and the final approach phase. The Hill equations are used to simulate and analyze the approach and final trajectory between target and chaser during the last phase of the rendezvous operation. A method for optimally estimating the relative orientation and position between the camera system and the target is presented in detail. The target is modelled as an assembly of points. The pose of the target is represented by a dual quaternion in order to obtain a simple quadratic error function, so that the pose estimation task becomes a least-squares minimization problem. The pose problem is solved and several non-linear least-squares optimization methods (Newton, Gauss-Newton, and Levenberg-Marquardt) are compared and discussed in terms of accuracy and computational cost.
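
    The least-squares pose problem can be illustrated with a sketch. The paper parameterizes the pose with a dual quaternion; this sketch instead solves the same quadratic error function with the SVD-based closed form (Kabsch/Umeyama), purely for illustration:

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Least-squares rigid pose (R, t) aligning known target points to
    observations, so that observed ≈ R @ model + t.

    model_pts: (N, 3) points of the known target model; observed_pts:
    (N, 3) matched observations. Solved with the Kabsch/Umeyama SVD
    closed form rather than the paper's dual-quaternion parameterization.
    """
    P = np.asarray(model_pts, float)
    Q = np.asarray(observed_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)        # centroids
    H = (P - cp).T @ (Q - cq)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```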

  16. Designing vision systems for robotic applications

    SciTech Connect

    Trivedi, M.M.

    1988-01-01

    Intelligent robotic systems utilize sensory information to perceive the nature of their work environment. Of the many sensor modalities, vision is recognized as one of the most important and cost-effective sensors utilized in practical systems. In this paper, we address the problem of designing vision systems to perform a variety of robotic inspection and manipulation tasks. We describe the nature and characteristics of the robotic task domain and discuss the computational hierarchy governing the process of scene interpretation. We also present a case study illustrating the design of a specific vision system developed for performing inspection and manipulation tasks associated with a control panel. 27 refs., 6 figs.

  17. Autonomous Robotic Refueling System (ARRS) for rapid aircraft turnaround

    NASA Astrophysics Data System (ADS)

    Williams, O. R.; Jackson, E.; Rueb, K.; Thompson, B.; Powell, K.

    An autonomous robotic refueling system is being developed to achieve rapid aircraft turnaround, notably during combat operations. The proposed system includes a gantry positioner with sufficient reach to position a robotic arm that performs the refueling tasks; a six-degree-of-freedom manipulator equipped with a remote center of compliance, torque sensor, and a gripper that can handle standard tools; a computer vision system to locate and guide the refueling nozzle, inspect the nozzle, and avoid collisions; and an operator interface with video and graphics display. The control system software will include components designed for trajectory planning and generation, collision detection, sensor interfacing, sensory processing, and human interfacing. The robotic system will be designed so that upgrading to perform additional tasks will be relatively straightforward.

  18. Advanced Autonomous Systems for Space Operations

    NASA Astrophysics Data System (ADS)

    Gross, A. R.; Smith, B. D.; Muscettola, N.; Barrett, A.; Mjolssness, E.; Clancy, D. J.

    2002-01-01

    New missions of exploration and space operations will require unprecedented levels of autonomy to successfully accomplish their objectives. Inherently high levels of complexity, cost, and communication distances will preclude the degree of human involvement common to current and previous space flight missions. With exponentially increasing capabilities of computer hardware and software, including networks and communication systems, a new balance of work is being developed between humans and machines. This new balance holds the promise of not only meeting the greatly increased space exploration requirements, but simultaneously dramatically reducing the design, development, test, and operating costs. New information technologies, which take advantage of knowledge-based software, model-based reasoning, and high performance computer systems, will enable the development of a new generation of design and development tools, schedulers, and vehicle and system health management capabilities. Such tools will provide a degree of machine intelligence and associated autonomy that has previously been unavailable. These capabilities are critical to the future of advanced space operations, since the science and operational requirements specified by such missions, as well as the budgetary constraints, will preclude the current practice of monitoring and controlling missions by a standing army of ground-based controllers. System autonomy capabilities have made great strides in recent years, for both ground and space flight applications. Autonomous systems have flown on advanced spacecraft, providing new levels of spacecraft capability and mission safety. Such on-board systems operate by utilizing model-based reasoning that provides the capability to work from high-level mission goals, while deriving the detailed system commands internally, rather than having such commands transmitted from Earth. This enables missions of such complexity and communication distances as are not otherwise feasible.

  19. Autonomous proximity operations using machine vision for trajectory control and pose estimation

    NASA Technical Reports Server (NTRS)

    Cleghorn, Timothy F.; Sternberg, Stanley R.

    1991-01-01

    A machine vision algorithm was developed which permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation, running on an 80386-based personal computer, using a ModelMATE CAD package to render the target vehicle. However, the algorithm is sufficiently simple that, following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single-camera images of the target vehicle, upon which radial transforms are performed. Selected points of the resulting radial signatures are fed through a decision tree to determine whether the signature matches the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicle can be calculated, and adjustments made in the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.
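
    The radial transform can be sketched as a centroid-to-boundary distance sampled over angular bins; the bin count and sampling scheme here are illustrative, not the paper's:

```python
import numpy as np

def radial_signature(mask, n_angles=32):
    """Radial transform of a binary silhouette: distance from the centroid
    to the farthest 'on' pixel in each of n_angles angular bins. Matching
    such signatures against stored references per view is the recognition
    idea the abstract describes.
    """
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                # silhouette centroid
    ang = np.arctan2(ys - cy, xs - cx)           # angle of each pixel
    rad = np.hypot(ys - cy, xs - cx)             # radius of each pixel
    bins = ((ang + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    sig = np.zeros(n_angles)
    np.maximum.at(sig, bins, rad)                # farthest pixel per direction
    return sig
```

    A circular silhouette, for instance, yields a nearly constant signature, while an elongated vehicle produces characteristic peaks.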

  20. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  1. Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    NASA Technical Reports Server (NTRS)

    Zornetzer, Steve; Gage, Douglas

    2005-01-01

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

  2. Advances in autonomous systems for space exploration missions

    NASA Technical Reports Server (NTRS)

    Smith, B. D.; Gross, A. R.; Clancy, D. J.; Cannon, H. N.; Barrett, A.; Mjolssness, E.; Muscettola, N.; Chien, S.; Johnson, A.

    2001-01-01

    This paper focuses on new and innovative software for remote, autonomous, space systems flight operation, including distributed autonomous systems, flight test results, and implications and directions for future systems.

  3. Improving Car Navigation with a Vision-Based System

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on GPS and map-matching techniques, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS with in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera and thus those of the car. These image georeferencing results are combined with other sensory data under a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet accurate and reliable navigation systems required for intelligent or autonomous vehicles.
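
    The predict/update structure of such a fusion can be sketched with a linear, one-dimensional constant-velocity Kalman filter standing in for the paper's extended Kalman filter; the noise values and measurement schedule are illustrative assumptions:

```python
import numpy as np

def kalman_fuse(z_seq, dt=1.0, q=0.5, r_gps=25.0, r_cam=4.0):
    """Constant-velocity Kalman filter fusing position fixes from two
    sources (GPS and image georeferencing).

    z_seq: list of (position, source) with source in {'gps', 'cam', None};
    None means no fix this epoch (e.g. GPS blockage). r_gps/r_cam are
    measurement variances; all values are illustrative.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition (pos, vel)
    Qm = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0]])                  # we only measure position
    x = np.zeros(2)
    P = np.eye(2) * 100.0                       # large initial uncertainty
    out = []
    for z, src in z_seq:
        x = F @ x                               # predict
        P = F @ P @ F.T + Qm
        if src is not None:                     # update with whichever fix arrived
            R = np.array([[r_gps if src == 'gps' else r_cam]])
            y = z - H @ x                       # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            x = x + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return out
```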

  4. COHERENT LASER VISION SYSTEM (CLVS) OPTION PHASE

    SciTech Connect

    Robert Clark

    1999-11-18

    The purpose of this research project was to develop a prototype fiber-optic based Coherent Laser Vision System (CLVS) suitable for DOE's EM Robotic program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update the dimensional spatial data on the order of once per second. The system has total immunity to ambient lighting conditions.

  5. Comparative anatomy of the autonomic nervous system.

    PubMed

    Nilsson, Stefan

    2011-11-16

    This short review aims to point out the general anatomical features of the autonomic nervous systems of non-mammalian vertebrates. In addition it attempts to outline the similarities and also the increased complexity of the autonomic nervous patterns from fish to tetrapods. With the possible exception of the cyclostomes, perhaps the most striking feature of the vertebrate autonomic nervous system is the similarity between the vertebrate classes. An evolution of the complexity of the system can be seen, with the segmental ganglia of elasmobranchs incompletely connected longitudinally, while well developed paired sympathetic chains are present in teleosts and the tetrapods. In some groups the sympathetic chains may be reduced (dipnoans and caecilians), and have yet to be properly described in snakes. Cranial autonomic pathways are present in the oculomotor (III) and vagus (X) nerves of gnathostome fish and the tetrapods, and with the evolution of salivary and lachrymal glands in the tetrapods, also in the facial (VII) and glossopharyngeal (IX) nerves. PMID:20444653

  6. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real-time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators. PMID:25286349
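
    The core pixelation step can be sketched as a block average followed by a contrast stretch; the divisibility assumption and the reduction of the user-ordered module chain to a single step are simplifications of the system described:

```python
import numpy as np

def pixelate(frame, rows, cols):
    """Reduce a camera frame to an implant-sized grid by block averaging,
    then stretch contrast so transitions survive the crude resolution.

    frame: 2D grayscale array whose dimensions are assumed divisible by
    (rows, cols) for simplicity. Returns a (rows, cols) array in [0, 1],
    one value per electrode.
    """
    h, w = frame.shape
    blocks = frame.reshape(rows, h // rows, cols, w // cols)
    coarse = blocks.mean(axis=(1, 3))           # one value per electrode
    lo, hi = coarse.min(), coarse.max()
    if hi > lo:                                 # linear contrast stretch
        coarse = (coarse - lo) / (hi - lo)
    return coarse
```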

  7. Control problems in Autonomous Life Support Systems

    NASA Technical Reports Server (NTRS)

    Colombano, S. P.; Schwartzkopf, S. H.; Macelroy, R. D.

    1981-01-01

    Autonomous Life Support Systems (ALSS) are envisioned for long range permanence in space. ALSS would require little or no input of matter for extended periods of time. The design of such a system involves an understanding of both ecological principles and control theory of nonlinear, ill-defined systems. A distinction is drawn between ecosystem survival strategies and the aims of control theory. Experimental work is under way to help combine the two approaches.

  8. Development Of A Vision Guided Robot System

    NASA Astrophysics Data System (ADS)

    Torfeh-Isfahani, Mohammad; Yeung, Kim F.

    1987-10-01

    This paper presents the development of an intelligent vision-guided system through the integration of a vision system into a robot. Systems like the one described here are able to work alone and can be used in many automated assembly operations. Such systems can perform repetitive tasks more efficiently and accurately than human operators because machines are immune to human factors such as boredom, fatigue, and stress. To illustrate these capabilities, this paper details the development of one such system, which has been built and is functional.

  9. Development of a Commercially Viable, Modular Autonomous Robotic System for Converting any Vehicle to Autonomous Control

    NASA Technical Reports Server (NTRS)

    Parish, David W.; Grabbe, Robert D.; Marzwell, Neville I.

    1994-01-01

    A Modular Autonomous Robotic System (MARS), consisting of a modular autonomous vehicle control system that can be retrofitted onto any vehicle to convert it to autonomous control and support a modular payload for multiple applications, is being developed. The MARS design is scalable, reconfigurable, and cost effective due to the use of modern open system architecture design methodologies, including serial control bus technology to simplify system wiring and enhance scalability. The design is augmented with modular, object-oriented (C++) software implementing a hierarchy of five levels of control including teleoperated, continuous guidepath following, periodic guidepath following, absolute position autonomous navigation, and relative position autonomous navigation. The present effort is focused on producing a system that is commercially viable for routine autonomous patrolling of known, semistructured environments, like environmental monitoring of chemical and petroleum refineries, exterior physical security and surveillance, perimeter patrolling, and intrafacility transport applications.

  10. On the architecture of the micro machine vision system

    NASA Astrophysics Data System (ADS)

    Li, Xudong; Wang, Xiaohao; Zhou, Zhaoying; Zong, Guanghua

    2006-01-01

    The micro machine vision system is an important part of a micromanipulating system, which has been used widely in many fields. As research activities on micromanipulating systems deepen, the micro machine vision system attracts more attention. In this paper, the micro machine vision system is treated as a kind of machine vision system with constraints and characteristics introduced by its specific application environment. Unlike a traditional machine vision system, a micro machine vision system usually does not aim at reconstruction of the scene; it is introduced to obtain the position information needed for the manipulation to be accomplished accurately. The architecture of the micro machine vision system is proposed. Key issues related to a micro machine vision system, such as system layout, optical imaging devices, and vision system calibration, are discussed to further explain the proposed architecture. A task-oriented micro machine vision system for a biological micromanipulating system is shown as an example, which is in compliance with the proposed architecture.

  11. Autonomous Flight Safety System - Phase III

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Autonomous Flight Safety System (AFSS) is a joint KSC and Wallops Flight Facility project that uses tracking and attitude data from onboard Global Positioning System (GPS) and inertial measurement unit (IMU) sensors and configurable rule-based algorithms to make flight termination decisions. AFSS objectives are to increase launch capabilities by permitting launches from locations without range safety infrastructure, reduce costs by eliminating some downrange tracking and communication assets, and reduce the reaction time for flight termination decisions.
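
    The configurable rule-based decision logic can be sketched with two toy rules (a rectangular flight corridor and a speed limit); real AFSS rule sets are far richer and mission-specific, and these names and limits are illustrative only:

```python
def afss_decision(pos, vel, boundary, v_max):
    """Evaluate flight-termination rules against the current GPS/IMU state.

    pos: (x, y) position; vel: (vx, vy) velocity; boundary: ((xmin, xmax),
    (ymin, ymax)) flight corridor; v_max: speed limit. Returns a decision
    string. Both rules are toy stand-ins for AFSS's configurable rule set.
    """
    (xmin, xmax), (ymin, ymax) = boundary
    x, y = pos
    if not (xmin <= x <= xmax and ymin <= y <= ymax):
        return 'TERMINATE: outside flight corridor'
    if (vel[0] ** 2 + vel[1] ** 2) ** 0.5 > v_max:
        return 'TERMINATE: velocity limit exceeded'
    return 'CONTINUE'
```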

  12. Autonomous microexplosives subsurface tracing system final report.

    SciTech Connect

    Engler, Bruce Phillip; Nogan, John; Melof, Brian Matthew; Uhl, James Eugene; Dulleck, George R., Jr.; Ingram, Brian V.; Grubelich, Mark Charles; Rivas, Raul R.; Cooper, Paul W.; Warpinski, Norman Raymond; Kravitz, Stanley H.

    2004-04-01

    The objective of the autonomous micro-explosive subsurface tracing system is to image the location and geometry of hydraulically induced fractures in subsurface petroleum reservoirs. This system is based on the insertion of a swarm of autonomous micro-explosive packages during the fracturing process, with subsequent triggering of the energetic material to create an array of micro-seismic sources that can be detected and analyzed using existing seismic receiver arrays and analysis software. The project included investigations of energetic mixtures, triggering systems, package size and shape, and seismic output. Given the current absence of any technology capable of such high resolution mapping of subsurface structures, this technology has the potential for major impact on petroleum industry, which spends approximately $1 billion dollar per year on hydraulic fracturing operations in the United States alone.

  13. Compact Through-The-Torch Vision System

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Gutow, David A.

    1992-01-01

    Changes in gas/tungsten-arc welding torch equipped with through-the-torch vision system make it smaller and more resistant to welding environment. Vision subsystem produces image of higher quality, flow of gas enhanced, and parts replaced quicker and easier. Coaxial series of lenses and optical components provides overhead view of joint and weld puddle real-time control. Designed around miniature high-resolution video camera. Smaller size enables torch to weld joints formerly inaccessible.

  14. Autonomous system for cross-country navigation

    NASA Astrophysics Data System (ADS)

    Stentz, Anthony; Brumitt, Barry L.; Coulter, R. C.; Kelly, Alonzo

    1993-05-01

    Autonomous cross-country navigation is essential for outdoor robots moving about in unstructured environments. Most existing systems use range sensors to determine the shape of the terrain, plan a trajectory that avoids obstacles, and then drive the trajectory. Performance has been limited by the range and accuracy of sensors, insufficient vehicle-terrain interaction models, and the availability of high-speed computers. As these elements improve, higher-speed navigation on rougher terrain becomes possible. We have developed a software system for autonomous navigation that provides for greater capability. The perception system supports a large braking distance by fusing multiple range images to build a map of the terrain in front of the vehicle. The system identifies range shadows and interpolates undersampled regions to account for rough-terrain effects. The motion planner reduces computational complexity by investigating a minimum number of trajectories. Speeds along the trajectory are set to provide for dynamic stability. The entire system was tested in simulation, and a subset of the capability was demonstrated on a real vehicle. Results to date include a continuous 5.1 kilometer run across moderate terrain with obstacles. This paper begins with the applications, prior work, limitations, and current paradigms for autonomous cross-country navigation, and then describes our contribution to the area.
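
    The multi-image fusion step can be sketched as a running per-cell average of range points in a terrain grid; range-shadow handling and interpolation of undersampled regions, which the system also performs, are omitted from this sketch:

```python
import numpy as np

def update_terrain_map(grid, counts, points, origin, cell):
    """Fuse one range image (as 3-D world points) into a running elevation map.

    grid/counts: running sum of heights and hit counts per cell; points:
    (N, 3) world-frame points from the range sensor; origin and cell define
    the grid placement. Returns the current mean-elevation map, with NaN
    for cells never observed. Averaging repeated hits is a simplified
    stand-in for the paper's multi-image fusion.
    """
    pts = np.asarray(points, float)
    ij = np.floor((pts[:, :2] - origin) / cell).astype(int)
    ok = ((ij >= 0) & (ij < np.array(grid.shape))).all(axis=1)
    for (i, j), z in zip(ij[ok], pts[ok, 2]):
        grid[i, j] += z
        counts[i, j] += 1
    return np.where(counts > 0, grid / np.maximum(counts, 1), np.nan)
```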

  15. Mission planning for autonomous systems

    NASA Technical Reports Server (NTRS)

    Pearson, G.

    1987-01-01

    Planning is a necessary task for intelligent, adaptive systems operating independently of human controllers. A mission planning system that performs task planning by decomposing a high-level mission objective into subtasks and synthesizing a plan for those tasks at varying levels of abstraction is discussed. Researchers use a blackboard architecture to partition the search space and direct the focus of attention of the planner. Using advanced planning techniques, they can control plan synthesis for the complex planning tasks involved in mission planning.

  16. A multilayer perceptron hazard detector for vision-based autonomous planetary landing

    NASA Astrophysics Data System (ADS)

    Lunghi, Paolo; Ciarambino, Marco; Lavagna, Michèle

    2016-07-01

    A hazard detection and target selection algorithm for autonomous spacecraft planetary landing, based on Artificial Neural Networks, is presented. From a single image of the landing area, acquired by a VIS camera during the descent, the system computes a hazard map, which is exploited to select the best target in terms of safety, guidance constraints, and scientific interest. The ANNs' generalization properties allow the system to operate correctly even in conditions not explicitly considered during calibration. The net architecture design, training, verification, and results are critically presented. Performance is assessed in terms of recognition accuracy and selected-target safety. Results for a lunar landing scenario are discussed to highlight the effectiveness of the system.
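
    A forward pass of such a network, and the safe-target selection it feeds, can be sketched as follows; the layer sizes, activations, and 0.5 safety threshold are illustrative, not the paper's architecture or calibration:

```python
import numpy as np

def mlp_hazard(patches, W1, b1, W2, b2):
    """Forward pass of a small multilayer perceptron scoring image patches
    for landing hazard, one score in (0, 1) per patch.

    patches: (N, D) flattened patch features; W1/b1, W2/b2: trained layer
    weights. The tanh hidden layer and sigmoid output are illustrative.
    """
    h = np.tanh(patches @ W1 + b1)           # hidden layer
    z = h @ W2 + b2                          # output logit
    return 1.0 / (1.0 + np.exp(-z))          # hazard probability per patch

def select_target(scores, coords, safe_below=0.5):
    """Pick the patch with the lowest hazard score among those deemed safe."""
    scores = np.asarray(scores).ravel()
    safe = scores < safe_below
    if not safe.any():
        return None                          # no safe site: abort / retarget
    idx = np.flatnonzero(safe)[np.argmin(scores[safe])]
    return coords[idx]
```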

  17. Autonomous omnidirectional spacecraft antenna system

    NASA Technical Reports Server (NTRS)

    Taylor, T. H.

    1983-01-01

    The development of a low-gain Electronically Switchable Spherical Array Antenna is discussed. This antenna provides roughly 7 dBic gain for receive/transmit operation between user satellites and the Tracking and Data Relay Satellite System. When used as a pair, the antennas provide spherical coverage. The antenna was tested in its primary operating modes: directed beam, retrodirective, and omnidirectional.

  18. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.

    PubMed

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540
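
    The IPM idea, projecting a pixel onto an assumed flat floor using the camera's height and pitch, can be sketched as follows; the pinhole model and parameter names are illustrative assumptions, not the paper's calibration:

```python
import math

def ipm_ground_point(u, v, f, cx, cy, cam_h, tilt):
    """Inverse perspective mapping: project image pixel (u, v) onto the
    floor plane, for a pinhole camera at height cam_h (m) pitched down by
    tilt (rad). f: focal length in pixels; (cx, cy): principal point.
    Returns (forward, lateral) floor coordinates in meters, or None for
    pixels at or above the horizon (the ray never hits the floor).
    """
    # Ray direction in the camera frame (z forward, y down).
    x = (u - cx) / f
    y = (v - cy) / f
    # Rotate the ray by the pitch angle into the world frame.
    ct, st = math.cos(tilt), math.sin(tilt)
    dy = y * ct + st             # downward component of the ray
    dz = -y * st + ct            # forward component of the ray
    if dy <= 0:
        return None              # at/above horizon: no floor intersection
    s = cam_h / dy               # scale to the floor intersection
    return s * dz, s * x
```

    The robot-to-obstacle distance the paper reports then follows from the forward coordinate of the obstacle's floor-contact pixel.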

  20. Volumetric imaging system for the ionosphere (VISION)

    NASA Astrophysics Data System (ADS)

    Dymond, Kenneth F.; Budzien, Scott A.; Nicholas, Andrew C.; Thonnard, Stefan E.; Fortna, Clyde B.

    2002-01-01

    The Volumetric Imaging System for the Ionosphere (VISION) is designed to use limb and nadir images to reconstruct the three-dimensional distribution of electrons over a 1000 km wide by 500 km high slab beneath the satellite with 10 km x 10 km x 10 km voxels. The primary goal of the VISION is to map and monitor global and mesoscale (> 10 km) electron density structures, such as the Appleton anomalies and field-aligned irregularity structures. The VISION consists of three UV limb imagers, two UV nadir imagers, a dual frequency Global Positioning System (GPS) receiver, and a coherently emitting three frequency radio beacon. The limb imagers will observe the O II 83.4 nm line (daytime electron density), O I 135.6 nm line (nighttime electron density and daytime O density), and the N2 Lyman-Birge-Hopfield (LBH) bands near 143.0 nm (daytime N2 density). The nadir imagers will observe the O I 135.6 nm line (nighttime electron density and daytime O density) and the N2 LBH bands near 143.0 nm (daytime N2 density). The GPS receiver will monitor the total electron content between the satellite containing the VISION and the GPS constellation. The three frequency radio beacon will be used with ground-based receiver chains to perform computerized radio tomography below the satellite containing the VISION. The measurements made using the two radio frequency instruments will be used to validate the VISION UV measurements.

  1. System for autonomous monitoring of bioagents

    SciTech Connect

    Langlois, Richard G.; Milanovich, Fred P.; Colston, Jr, Billy W.; Brown, Steve B.; Masquelier, Don A.; Mariella, Jr., Raymond P.; Venkateswaran, Kodomudi

    2015-06-09

    An autonomous monitoring system for monitoring for bioagents. A collector gathers the air, water, soil, or substance being monitored. A sample preparation means for preparing a sample is operatively connected to the collector. A detector for detecting the bioagents in the sample is operatively connected to the sample preparation means. One embodiment of the present invention includes confirmation means for confirming the bioagents in the sample.

  2. Autonomous grain combine control system

    SciTech Connect

    Hoskinson, Reed L.; Kenney, Kevin L.; Lucas, James R.; Prickel, Marvin A.

    2013-06-25

    A system for controlling a grain combine having a rotor/cylinder, a sieve, a fan, a concave, a feeder, a header, an engine, and a control system. The feeder of the grain combine is engaged and the header is lowered. A separator loss target, engine load target, and a sieve loss target are selected. Grain is harvested with the lowered header passing the grain through the engaged feeder. Separator loss, sieve loss, engine load and ground speed of the grain combine are continuously monitored during the harvesting. If the monitored separator loss exceeds the selected separator loss target, the speed of the rotor/cylinder, the concave setting, the engine load target, or a combination thereof is adjusted. If the monitored sieve loss exceeds the selected sieve loss target, the speed of the fan, the size of the sieve openings, or the engine load target is adjusted.
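The monitoring loop the patent describes reduces to threshold comparisons that drive setting adjustments. A toy sketch follows; the dictionary keys, units, and step sizes are illustrative assumptions, not the patented tuning law:

```python
def adjust_combine(state, targets):
    """One pass of the harvest monitoring loop described above.

    `state` holds current readings and settings, `targets` the selected
    loss targets; returns a dict of setting changes. The patent allows
    several alternative adjustments per condition; one is shown for each.
    """
    changes = {}
    if state['separator_loss'] > targets['separator_loss']:
        # Patent: adjust rotor/cylinder speed, concave setting, engine load
        # target, or a combination; here we bump rotor speed as one example.
        changes['rotor_speed'] = state['rotor_speed'] + 25   # rpm (assumed step)
    if state['sieve_loss'] > targets['sieve_loss']:
        # Patent: adjust fan speed, sieve opening size, or engine load target.
        changes['fan_speed'] = state['fan_speed'] - 20       # rpm (assumed step)
    return changes
```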

  3. Why Computer-Based Systems Should be Autonomic

    NASA Technical Reports Server (NTRS)

    Sterritt, Roy; Hinchey, Mike

    2005-01-01

    The objective of this paper is to discuss why computer-based systems should be autonomic, where autonomicity implies self-managing, often conceptualized in terms of being self-configuring, self-healing, self-optimizing, self-protecting and self-aware. We look at motivations for autonomicity, examine how more and more systems are exhibiting autonomic behavior, and finally look at future directions.

  4. Flight testing an integrated synthetic vision system

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III

    2005-05-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream G-V aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.

  5. Flight Testing an Integrated Synthetic Vision System

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream GV aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.

  6. The autonomic nervous system and renal physiology

    PubMed Central

    D’Elia, John A; Weinrauch, Larry A

    2013-01-01

Research in resistant hypertension has again focused on autonomic nervous system denervation – 50 years after the approach was abandoned due to postural hypotension and the availability of newer drugs. Those drugs (ganglionic blockers) have likewise been abandoned, again due to postural hypotension and the arrival of yet newer antihypertensive agents. The recent demonstration of the feasibility of limited regional transcatheter sympathetic denervation has excited clinicians due to its potential therapeutic implications. Standard use of ambulatory blood pressure recording equipment may alter our understanding of the diagnosis, potential treatment strategies, and health care outcomes when faced with patients whose office blood pressure remains in the hypertensive range while under treatment with three antihypertensive drugs at the highest tolerable doses, plus a diuretic. We review herein the clinical relationships between autonomic function, resistant hypertension, and current treatment strategies, and reflect upon the possibility of changes in our approach to resistant hypertension. PMID:24039445

  7. Autonomous Flight Safety System Road Test

    NASA Technical Reports Server (NTRS)

    Simpson, James C.; Zoemer, Roger D.; Forney, Chris S.

    2005-01-01

On February 3, 2005, Kennedy Space Center (KSC) conducted the first Autonomous Flight Safety System (AFSS) test on a moving vehicle -- a van driven around the KSC industrial area. A subset of the Phase III design was used, consisting of a single computer, GPS receiver, and GPS antenna. The description and results of this road test are described in this report. AFSS is a joint KSC and Wallops Flight Facility project that is in its third phase of development. AFSS is an independent subsystem intended for use with Expendable Launch Vehicles that uses tracking data from redundant onboard sensors to autonomously make flight termination decisions using software-based rules implemented on redundant flight processors. The goals of this project are to increase capabilities by allowing launches from locations that do not have or cannot afford extensive ground-based range safety assets, to decrease range costs, and to decrease reaction time for special situations.
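The software-based termination rules over redundant sensor tracks can be illustrated with a toy check; the specific rule (a destruct limit line), the names, and the agreement threshold below are assumptions for illustration, not the actual AFSS rule set:

```python
def should_terminate(tracks, limit_line, max_disagreement):
    """Toy flight-termination rule over redundant position estimates.

    `tracks` is a list of (x, y) estimates from redundant sensors.
    Terminate only if the sensors agree within `max_disagreement` AND
    every estimate has crossed the destruct limit line (here, x > limit).
    """
    xs = [t[0] for t in tracks]
    disagreement = max(xs) - min(xs)
    if disagreement > max_disagreement:
        return False              # sensors disagree: defer, do not terminate
    return min(xs) > limit_line   # all redundant estimates past the limit
```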

  8. Seizures and brain regulatory systems: Consciousness, sleep, and autonomic systems

    PubMed Central

    Sedigh-Sarvestani, Madineh; Blumenfeld, Hal; Loddenkemper, Tobias; Bateman, Lisa M

    2014-01-01

    Research into the physiological underpinnings of epilepsy has revealed reciprocal relationships between seizures and the activity of several regulatory systems in the brain, including those governing sleep, consciousness and autonomic functions. This review highlights recent progress in understanding and utilizing the relationships between seizures and the arousal or consciousness system, the sleep-wake and associated circadian system, and the central autonomic network. PMID:25233249

  9. Three-Dimensional Robotic Vision System

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1989-01-01

Stereoscopy and motion provide clues to outlines of objects. Digital image-processing system acts as "intelligent" automatic machine-vision system by processing views from stereoscopic television cameras into three-dimensional coordinates of moving object in view. Epipolar-line technique used to find corresponding points in stereoscopic views. Robotic vision system analyzes views from two television cameras to detect rigid three-dimensional objects and reconstruct them numerically in terms of coordinates of corner points. Stereoscopy and effects of motion on two images complement each other in providing image-analyzing subsystem with clues to natures and locations of principal features.
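For rectified cameras, the epipolar-line search reduces to matching along the same scanline, after which a matched pair triangulates to 3-D coordinates. A minimal sketch under those assumptions (generic parameter names, not the article's notation):

```python
def triangulate_rectified(uL, uR, v, f, cx, cy, baseline):
    """Recover a 3-D point from matched pixels in a rectified stereo pair.

    In a rectified pair the epipolar lines are horizontal, so the match for
    left pixel (uL, v) lies at the same row v in the right image. Returns
    (X, Y, Z) in the left-camera frame, with focal length f, principal
    point (cx, cy), and camera separation `baseline`.
    """
    d = uL - uR                      # disparity in pixels
    if d <= 0:
        raise ValueError('point at or beyond infinity')
    Z = f * baseline / d             # depth from similar triangles
    X = (uL - cx) * Z / f
    Y = (v - cy) * Z / f
    return X, Y, Z
```

Triangulating each matched corner point this way yields the numerical corner-coordinate reconstruction the abstract describes.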

  10. Agent Technology, Complex Adaptive Systems, and Autonomic Systems: Their Relationships

    NASA Technical Reports Server (NTRS)

Truszkowski, Walt; Rash, James; Rouff, Christopher; Hinchey, Mike

    2004-01-01

To reduce the cost of future spaceflight missions and to perform new science, NASA has been investigating autonomous ground and space flight systems. These cost-reduction goals are further complicated by the nanosatellites planned for future science data-gathering, which will have large communications delays and at times be out of contact with ground control for extended periods of time. This paper describes two prototype agent-based systems, the Lights-out Ground Operations System (LOGOS) and the Agent Concept Testbed (ACT), developed at NASA Goddard Space Flight Center (GSFC) to demonstrate autonomous operations of future space flight missions. The paper discusses the architecture of the two agent-based systems, operational scenarios for both, and the two systems' autonomic properties.

  11. Sustainable and Autonomic Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Sterritt, Roy; Rouff, Christopher; Rash, James L.; Truszkowski, Walter

    2006-01-01

    Visions for future space exploration have long term science missions in sight, resulting in the need for sustainable missions. Survivability is a critical property of sustainable systems and may be addressed through autonomicity, an emerging paradigm for self-management of future computer-based systems based on inspiration from the human autonomic nervous system. This paper examines some of the ongoing research efforts to realize these survivable systems visions, with specific emphasis on developments in Autonomic Policies.

  12. A stereo vision-based obstacle detection system in vehicles

    NASA Astrophysics Data System (ADS)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. The system utilizes feature matching, the epipolar constraint, and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle, and a vehicle cutting into the lane, and the position parameters of the obstacles and leading vehicles can then be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
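The initial correspondence stage (feature matching under the epipolar constraint) can be sketched as a filter over candidate left/right feature pairs. The row tolerance, scalar descriptor, and left-of-right disparity check below are illustrative assumptions, not the paper's actual matcher:

```python
def filter_matches(matches, row_tol, desc_tol):
    """Keep candidate stereo pairs that satisfy the rectified epipolar
    constraint (rows agree within row_tol) and whose feature descriptors
    agree within desc_tol, with positive disparity (uL > uR).

    Each match is ((uL, vL, descL), (uR, vR, descR)) with a scalar
    descriptor standing in for a real feature vector.
    """
    kept = []
    for (uL, vL, dL), (uR, vR, dR) in matches:
        if abs(vL - vR) <= row_tol and abs(dL - dR) <= desc_tol and uL > uR:
            kept.append(((uL, vL, dL), (uR, vR, dR)))
    return kept
```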

  13. Early light vision isomorphic singular (ELVIS) system

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz P.; Ternovskiy, Igor V.; DeBacker, Theodore A.; Caulfield, H. John

    2000-07-01

In shallow water military scenarios, UUVs (Unmanned Underwater Vehicles) are required to protect assets against mines, swimmers, and other underwater military objects. It would be desirable if such UUVs could see autonomously in a similar way to humans, at least at the primary-visual-cortex level. In this paper, an approach to developing such a UUV system is proposed.

  14. Vision-based map building and trajectory planning to enable autonomous flight through urban environments

    NASA Astrophysics Data System (ADS)

    Watkins, Adam S.

The desire to use Unmanned Air Vehicles (UAVs) in a variety of complex missions has motivated the need to increase the autonomous capabilities of these vehicles. This research presents autonomous vision-based mapping and trajectory planning strategies for a UAV navigating in an unknown urban environment. It is assumed that the vehicle's inertial position is unknown because GPS is unavailable due to environmental occlusions or jamming by hostile military assets. Therefore, the environment map is constructed from noisy sensor measurements taken at uncertain vehicle locations. Under these restrictions, map construction becomes a state estimation task known as the Simultaneous Localization and Mapping (SLAM) problem. Solutions to the SLAM problem endeavor to estimate the state of a vehicle relative to concurrently estimated environmental landmark locations. The presented work focuses specifically on SLAM for aircraft, denoted as airborne SLAM, where the vehicle is capable of six degree of freedom motion characterized by highly nonlinear equations of motion. The airborne SLAM problem is solved with a variety of filters based on the Rao-Blackwellized particle filter. Additionally, the environment is represented as a set of geometric primitives that are fit to the three-dimensional points reconstructed from gathered onboard imagery. The second half of this research builds on the mapping solution by addressing the problem of trajectory planning for optimal map construction. Optimality is defined in terms of maximizing environment coverage in minimum time. The planning process is decomposed into two phases of global navigation and local navigation. The global navigation strategy plans a coarse, collision-free path through the environment to a goal location that will take the vehicle to previously unexplored or incompletely viewed territory. The local navigation strategy plans detailed, collision-free paths within the currently sensed environment that maximize local coverage.
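The sampled part of a Rao-Blackwellized particle filter follows the usual predict-weight-resample cycle. A minimal sketch of one cycle over the vehicle pose (the map factorization that gives RBPF its name is omitted, and the function arguments are generic assumptions):

```python
import random

def particle_filter_step(particles, control, measurement, motion, likelihood):
    """One predict-weight-resample cycle over sampled vehicle poses.

    `motion(pose, control)` propagates a pose sample through the (noisy)
    motion model; `likelihood(measurement, pose)` scores a sample against
    the current sensor measurement. Returns a resampled particle set.
    """
    # Predict: propagate each pose sample through the motion model.
    predicted = [motion(p, control) for p in particles]
    # Weight: score each sample against the measurement, then normalize.
    weights = [likelihood(measurement, p) for p in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw a new, equally weighted particle set.
    return random.choices(predicted, weights=weights, k=len(predicted))
```

In airborne SLAM each particle would also carry its own landmark map, updated analytically (e.g., with per-landmark Kalman filters) conditioned on the sampled pose.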

  15. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    PubMed

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled. PMID:26466433

  16. Versatile 360-deg panoramic optical system for autonomous robots

    NASA Astrophysics Data System (ADS)

    Barton, George G.; Feldman, Sidney; Beckstead, Jeffrey A.; Nordhauser, Sidney R.

    1999-01-01

Autonomous mobile robots require wide-angle vision for navigation and for threat detection and analysis, best served with full panoramic vision. The panoramic optical element is a unique, inexpensive, first-surface reflective aspheric convex cone. This cone can be sized and configured for any vertical FOV desired. The cone acts as a negative optical element generating a panoramic virtual image. When this virtual image is viewed through a standard camera lens, it produces at the lens's focal plane a panoramic toroidal image with a translational linearity of > 99 percent. One of three image transducers can be used to convert the toroidal panoramic image to a video signal: raster-scanned CCDs, radially scanned vidicons, and linear CCD arrays on a mechanically rotated stage each have their own particular advantages. Field object distances can be determined in two ways. If the robot is moving, the range can be calculated from the size change of a field object versus the distance traversed in a specific time interval. By vertically displacing the panoramic camera by several inches, a quasibinocular system is created and the range determined by simple math. Ranging thus produces the third dimension.
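The motion-based ranging method mentioned above follows from the pinhole relation that apparent image size scales as 1/range; a short sketch (function and parameter names are our assumptions):

```python
def range_from_motion(size_before, size_after, distance_moved):
    """Range to a field object from its apparent-size change.

    With a pinhole camera, image size s is proportional to 1/Z, so after
    advancing `distance_moved` straight toward the object (Z1 = Z2 + d and
    Z1 * s1 = Z2 * s2), the new range is:

        Z2 = size_before * distance_moved / (size_after - size_before)
    """
    if size_after <= size_before:
        raise ValueError('object must appear larger after approaching it')
    return size_before * distance_moved / (size_after - size_before)
```

The quasibinocular alternative, with a known vertical baseline between the two camera positions, reduces to the same similar-triangles arithmetic as conventional stereo ranging.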

  17. MARVEL: A system that recognizes world locations with stereo vision

    SciTech Connect

Braunegg, D.J. (Artificial Intelligence Lab.)

    1993-06-01

    MARVEL is a system that supports autonomous navigation by building and maintaining its own models of world locations and using these models and stereo vision input to recognize its location in the world and its position and orientation within that location. The system emphasizes the use of simple, easily derivable features for recognition, whose aggregate identifies a location, instead of complex features that also require recognition. MARVEL is designed to be robust with respect to input errors and to respond to a gradually changing world by updating its world location models. In over 1,000 recognition tests using real-world data, MARVEL yielded a false negative rate under 10% with zero false positives.

  18. Approach to constructing reconfigurable computer vision system

    NASA Astrophysics Data System (ADS)

    Xue, Jianru; Zheng, Nanning; Wang, Xiaoling; Zhang, Yongping

    2000-10-01

In this paper, we propose an approach to constructing a reconfigurable vision system. Because timely and efficient execution of early vision tasks can significantly enhance the performance of the overall computer vision task, we abstract out a set of basic, computationally intensive stream operations that may be performed in parallel and embody them in a series of dedicated front-end processors. These processors, based on FPGAs (field-programmable gate arrays), can be reprogrammed to support a range of different feature maps, such as edge detection and linking, and image filtering. The front-end processors and a powerful DSP constitute a computing platform that can perform many computer vision tasks. Additionally, we adopt focus-of-attention techniques to reduce the I/O and computational demands by performing early vision processing only within a particular region of interest. We then implement a multi-page, dual-ported image memory interface between the image input and the computing platform (front-end processors and DSP). Early vision features are loaded into banks of dual-ported image memory arrays, which are continually raster-scan updated at high speed from the input image or video stream. Moreover, the computing platform has completely asynchronous, random access to the image data and any other early vision feature maps through the dual-ported memory banks. In this way, computing platform resources can be properly allocated to a region of interest and decoupled from the task of dealing with a high-speed serial raster-scan input. Finally, we choose the PCI bus as the main channel between the PC and the computing platform. Consequently, the front-end processors' control registers and the DSP's program memory are mapped into the PC's memory space, giving the user access to reconfigure the system at any time. We also present test results from a computer vision application based on the system.
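Edge detection is one of the computationally intensive early-vision stream operations the paper assigns to the FPGA front ends. As a software reference model (not the FPGA implementation), a Sobel edge-magnitude pass over a grayscale array looks like this:

```python
import numpy as np

def sobel_magnitude(img):
    """Sobel edge magnitude over a 2-D grayscale array.

    A per-pixel 3x3 stencil: exactly the kind of regular, parallel stream
    operation that maps well onto FPGA fabric. Borders are left at zero.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                   # vertical gradient
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(kx * patch)
            gy = np.sum(ky * patch)
            out[y, x] = np.hypot(gx, gy)
    return out
```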

  19. Verification of Autonomous Systems for Space Applications

    NASA Technical Reports Server (NTRS)

    Brat, G.; Denney, E.; Giannakopoulou, D.; Frank, J.; Jonsson, A.

    2006-01-01

Autonomous software, especially if it is model-based, can play an important role in future space applications. For example, it can help streamline ground operations, assist in autonomous rendezvous and docking operations, or even help recover from problems (e.g., planners can be used to explore the space of recovery actions for a power subsystem and implement a solution without, or with minimal, human intervention). In general, the exploration capabilities of model-based systems give them great flexibility. Unfortunately, this also makes them unpredictable to our human eyes, both in terms of their execution and their verification. Traditional verification techniques are inadequate for these systems since they are mostly based on testing, which implies a very limited exploration of their behavioral space. In our work, we explore how advanced V&V techniques, such as static analysis, model checking, and compositional verification, can be used to gain trust in model-based systems. We also describe how synthesis can be used in the context of system reconfiguration and in the context of verification.

  20. Malicious Hubs: Detecting Abnormally Malicious Autonomous Systems

    SciTech Connect

    Kalafut, Andrew J.; Shue, Craig A; Gupta, Prof. Minaxi

    2010-01-01

    While many attacks are distributed across botnets, investigators and network operators have recently targeted malicious networks through high profile autonomous system (AS) de-peerings and network shut-downs. In this paper, we explore whether some ASes indeed are safe havens for malicious activity. We look for ISPs and ASes that exhibit disproportionately high malicious behavior using 12 popular blacklists. We find that some ASes have over 80% of their routable IP address space blacklisted and others account for large fractions of blacklisted IPs. Overall, we conclude that examining malicious activity at the AS granularity can unearth networks with lax security or those that harbor cybercrime.
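The study's core metric, the per-AS blacklisted fraction of routable address space, can be sketched over a toy data model (real measurements would start from prefix announcements and the 12 blacklist feeds; the counts-per-AS dictionaries here are our simplification):

```python
def blacklist_fraction(routable, blacklisted):
    """Fraction of each AS's routable IPs that appear on blacklists.

    `routable` maps AS number -> count of routable IPs it announces;
    `blacklisted` maps AS number -> count of those IPs found on blacklists.
    ASes absent from `blacklisted` score 0.0.
    """
    return {asn: blacklisted.get(asn, 0) / n
            for asn, n in routable.items() if n > 0}
```

Ranking ASes by this fraction surfaces the "malicious hubs" the paper describes, e.g. networks with over 80% of their address space blacklisted.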

  1. Physiology of the Autonomic Nervous System

    PubMed Central

    2007-01-01

    This manuscript discusses the physiology of the autonomic nervous system (ANS). The following topics are presented: regulation of activity; efferent pathways; sympathetic and parasympathetic divisions; neurotransmitters, their receptors and the termination of their activity; functions of the ANS; and the adrenal medullae. In addition, the application of this material to the practice of pharmacy is of special interest. Two case studies regarding insecticide poisoning and pheochromocytoma are included. The ANS and the accompanying case studies are discussed over 5 lectures and 2 recitation sections during a 2-semester course in Human Physiology. The students are in the first-professional year of the doctor of pharmacy program. PMID:17786266

  2. Image Control In Automatic Welding Vision System

    NASA Technical Reports Server (NTRS)

    Richardson, Richard W.

    1988-01-01

    Orientation and brightness varied to suit welding conditions. Commands from vision-system computer drive servomotors on iris and Dove prism, providing proper light level and image orientation. Optical-fiber bundle carries view of weld area as viewed along axis of welding electrode. Image processing described in companion article, "Processing Welding Images for Robot Control" (MFS-26036).

  3. Nutritional stimulation of the autonomic nervous system.

    PubMed

    Luyer, Misha D P; Habes, Quirine; van Hak, Richard; Buurman, Wim

    2011-09-14

    Disturbance of the inflammatory response in the gut is important in several clinical diseases ranging from inflammatory bowel disease to postoperative ileus. Several feedback mechanisms exist that control the inflammatory cascade and avoid collateral damage. In the gastrointestinal tract, it is of particular importance to control the immune response to maintain the balance that allows dietary uptake and utilization of nutrients on one hand, while preventing invasion of bacteria and toxins on the other hand. The process of digestion and absorption of nutrients requires a relative hyporesponsiveness of the immune cells in the gut to luminal contents which is not yet fully understood. Recently, the autonomic nervous system has been identified as an important pathway to control local and systemic inflammation and gut barrier integrity. Activation of the pathway is possible via electrical or via pharmacological interventions, but is also achieved in a physiological manner by ingestion of dietary lipids. Administration of dietary lipids has been shown to be very effective in reducing the inflammatory cascade and maintaining intestinal barrier integrity in several experimental studies. This beneficial effect of nutrition on the inflammatory response and intestinal barrier integrity opens new therapeutic opportunities for treatment of certain gastrointestinal disorders. Furthermore, this neural feedback mechanism provides more insight in the relative hyporesponsiveness of the immune cells in the gut. Here, we will discuss the regulatory function of the autonomic nervous system on the inflammatory response and gut barrier function and the potential benefit in a clinical setting. PMID:22025873

  4. Nutritional stimulation of the autonomic nervous system

    PubMed Central

    Luyer, Misha DP; Habes, Quirine; van Hak, Richard; Buurman, Wim

    2011-01-01

    Disturbance of the inflammatory response in the gut is important in several clinical diseases ranging from inflammatory bowel disease to postoperative ileus. Several feedback mechanisms exist that control the inflammatory cascade and avoid collateral damage. In the gastrointestinal tract, it is of particular importance to control the immune response to maintain the balance that allows dietary uptake and utilization of nutrients on one hand, while preventing invasion of bacteria and toxins on the other hand. The process of digestion and absorption of nutrients requires a relative hyporesponsiveness of the immune cells in the gut to luminal contents which is not yet fully understood. Recently, the autonomic nervous system has been identified as an important pathway to control local and systemic inflammation and gut barrier integrity. Activation of the pathway is possible via electrical or via pharmacological interventions, but is also achieved in a physiological manner by ingestion of dietary lipids. Administration of dietary lipids has been shown to be very effective in reducing the inflammatory cascade and maintaining intestinal barrier integrity in several experimental studies. This beneficial effect of nutrition on the inflammatory response and intestinal barrier integrity opens new therapeutic opportunities for treatment of certain gastrointestinal disorders. Furthermore, this neural feedback mechanism provides more insight in the relative hyporesponsiveness of the immune cells in the gut. Here, we will discuss the regulatory function of the autonomic nervous system on the inflammatory response and gut barrier function and the potential benefit in a clinical setting. PMID:22025873

  5. Autonomous Underwater Vehicle Magnetic Mapping System

    NASA Astrophysics Data System (ADS)

    Steigerwalt, R.; Johnson, R. M.; Trembanis, A. C.; Schmidt, V. E.; Tait, G.

    2012-12-01

An Autonomous Underwater Vehicle (AUV) Magnetic Mapping (MM) System has been developed and tested for military munitions detection as well as pipeline locating, wreck searches, and geologic surveys in underwater environments. The system is comprised of a high-sensitivity Geometrics G-880AUV cesium vapor magnetometer integrated with a Teledyne-Gavia AUV and associated Doppler-enabled inertial navigation, further utilizing traditional acoustic bathymetric and side scan imaging. All onboard sensors and associated electronics are managed autonomously through the vehicle's primary control module. Total-field magnetic measurements are recorded with asynchronous time-stamped data logs which include position, altitude, heading, pitch, roll, and electrical current usage. Pre-planned mission information can be uploaded by the system operators to define data collection metrics including speed, height above the seafloor, and lane or transect spacing specifically designed to meet the data quality objectives of the survey. As a result of the AUV's modular design, autonomous navigation, and rapid deployment capabilities, the AUV MM System provides cost savings over current surface vessel surveys by reducing the mobilization/demobilization effort, requiring less manpower for operation, and reducing or eliminating the need for a surface support vessel altogether. When the system completes its mission, data can be remotely downloaded via W-LAN and exported for use in advanced signal processing platforms. Magnetic compensation software has been concurrently developed to accept electrical current measurements directly from the AUV to address distortions from permanent and induced magnetization effects on the magnetometer. Maneuver and electrical current compensation terms can be extracted from the magnetic survey missions to perform automated post-process corrections. Considerable suppression of system noise has been observed over traditional
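The current-compensation step can be sketched as a least-squares regression of the total-field readings against the logged electrical current, subtracting the current-correlated part. This is a deliberately simplified one-channel stand-in for the real maneuver/current compensation model:

```python
import numpy as np

def current_compensation(field, currents):
    """Remove current-correlated interference from total-field readings.

    Fits field ~ a + b * current by ordinary least squares, then subtracts
    the current-dependent term b * current, leaving the geophysical signal
    plus the constant background.
    """
    # Design matrix: [1, current] per sample.
    A = np.column_stack([np.ones_like(currents), currents])
    coef, *_ = np.linalg.lstsq(A, field, rcond=None)
    return field - currents * coef[1]
```

A production compensation model would add maneuver terms (heading, pitch, roll and their products) to the design matrix in the same way.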

  6. The MAP Autonomous Mission Control System

    NASA Technical Reports Server (NTRS)

Breed, Julie; Coyle, Steven; Blahut, Kevin; Dent, Carolyn; Shendock, Robert; Rowe, Roger

    2000-01-01

The Microwave Anisotropy Probe (MAP) mission is the second mission in NASA's Office of Space Science low-cost, Medium-class Explorers (MIDEX) program. The Explorers Program is designed to accomplish frequent, low cost, high quality space science investigations utilizing innovative, streamlined, efficient management, design and operations approaches. The MAP spacecraft will produce an accurate full-sky map of the cosmic microwave background temperature fluctuations with high sensitivity and angular resolution. The MAP spacecraft is planned for launch in early 2001, and will be staffed for only single-shift operations. During the rest of the time the spacecraft must be operated autonomously, with personnel available only on an on-call basis. Four (4) innovations will work cooperatively to enable a significant reduction in operations costs for the MAP spacecraft. First, the use of a common ground system for Spacecraft Integration and Test (I&T) as well as Operations. Second, the use of Finite State Modeling for intelligent autonomy. Third, the integration of a graphical planning engine to drive the autonomous systems without an intermediate manual step. And fourth, the ability for distributed operations via Web and pager access.

  7. Mobile robot on-board vision system

    SciTech Connect

    McClure, V.W.; Nai-Yung Chen.

    1993-06-15

    An automatic robot system is described comprising: an AGV for transporting and transferring work pieces; a control computer on board the AGV; a process machine for working on work pieces; and a flexible robot arm with a gripper comprising two gripper fingers at one end of the arm, wherein the robot arm and gripper are controllable by the control computer for engaging a work piece, picking it up, and setting it down and releasing it at a commanded location. Locating beacon means mounted on the process machine mark the places on the machine where work pieces are picked up and set down. Vision means, including a camera fixed in the coordinate system of the gripper, are attached to the robot arm near the gripper such that the space between the gripper fingers lies within the vision field. The vision means detect the locating beacon means and provide the control computer with visual information on their location, from which the computer calculates the pick-up and set-down place on the process machine. That place is a nest means, which further serves the function of holding a work piece in place while it is worked on. The robot system further comprises nest beacon means, located in the nest means and detectable by the vision means, for informing the control computer whether or not a work piece is present in the nest means.

  8. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on the Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area vision can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. Used in the pilots' preflight preparation procedure, this system gives the aircrew more vivid information about the flight destination approach area. It can improve an aviator's confidence before carrying out a flight mission and thereby improve flight safety. The system is also useful for validating visual flight procedure designs, and it aids the flight procedure design process.

  9. Zoom Vision System For Robotic Welding

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Hudyma, Russell M.

    1990-01-01

    Rugged zoom lens subsystem proposed for use in along-the-torch vision system of robotic welder. Enables system to adapt, via simple mechanical adjustments, to gas cups of different lengths, electrodes of different protrusions, and/or different distances between end of electrode and workpiece. Unnecessary to change optical components to accommodate changes in geometry. Easy to calibrate with respect to object in view. Provides variable focus and variable magnification.

  10. Prototype Optical Correlator For Robotic Vision System

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1993-01-01

    Known and unknown images fed in electronically at high speed. Optical correlator and associated electronic circuitry developed for vision system of robotic vehicle. System recognizes features of landscape by optical correlation between input image of scene viewed by video camera on robot and stored reference image. Optical configuration is Vander Lugt correlator, in which Fourier transform of scene formed in coherent light and spatially modulated by hologram of reference image to obtain correlation.
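The Vander Lugt arrangement has a direct digital analogue: multiply the scene's Fourier transform by the conjugate transform of the reference (the role played optically by the hologram), then inverse-transform to obtain the correlation plane. A minimal NumPy sketch with synthetic images and invented sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.random((16, 16))            # stored reference image
scene = 0.1 * rng.random((128, 128))  # weak background clutter
scene[40:56, 70:86] += ref            # reference feature embedded at (40, 70)

# Digital analogue of the Vander Lugt correlator: the conjugate reference
# spectrum acts as the matched filter; the inverse transform is the
# correlation plane.
F_scene = np.fft.fft2(scene)
F_ref = np.fft.fft2(ref, s=scene.shape)   # zero-padded to scene size
corr = np.fft.ifft2(F_scene * np.conj(F_ref)).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # (40, 70): the correlation peak marks the feature location
```

The peak in the correlation plane corresponds to the offset at which the reference best matches the scene, which is how the system recognizes landscape features.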

  11. Vision enhanced navigation for unmanned systems

    NASA Astrophysics Data System (ADS)

    Wampler, Brandon Loy

    A vision-based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a better navigation solution than GPS alone is needed is presented first. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., who dub their algorithm MonoSLAM [1-4]. A new approach using the pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, unlike the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
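The least-squares step that underlies the pyramidal tracker (OpenCV's `calcOpticalFlowPyrLK`) can be sketched at a single pyramid level: stack the image gradients over a tracking window and solve for the displacement that explains the frame difference. A toy NumPy illustration on a synthetic image pair, not the thesis code:

```python
import numpy as np

# Synthetic frames: a Gaussian blob translated by a known (dx, dy) = (0.6, 0.4).
y, x = np.mgrid[0:64, 0:64].astype(float)

def frame(cx, cy):
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)

I = frame(32.0, 32.0)
J = frame(32.6, 32.4)   # the feature has moved by (0.6, 0.4) pixels

# One Lucas-Kanade least-squares step over a window around the feature:
# to first order, I - J = Ix*dx + Iy*dy, so solve [Ix Iy] d = I - J.
Iy, Ix = np.gradient(I)
w = (slice(24, 41), slice(24, 41))              # 17x17 tracking window
A = np.column_stack([Ix[w].ravel(), Iy[w].ravel()])
b = (I - J)[w].ravel()
d, *_ = np.linalg.lstsq(A, b, rcond=None)
print(d)  # approximately [0.6, 0.4]
```

The pyramidal version repeats this step coarse-to-fine so that motions larger than the linearization allows can still be tracked.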

  12. Visual Turing test for computer vision systems.

    PubMed

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-03-24

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a "visual Turing test": an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question ("just-in-time truthing"). The test is then administered to the computer-vision system, one question at a time. After the system's answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers-the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262
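The statistical constraint on question selection can be sketched in a few lines: given conditional answer probabilities estimated from a training set, the engine proposes the question whose answer is least predictable. The questions and probabilities below are invented for illustration:

```python
# Toy question pool with made-up estimates of P(yes | history of answers).
candidates = {
    "is there a person in the red region?": 0.93,   # too predictable
    "is the person wearing a hat?": 0.48,           # nearly unpredictable
    "is there a vehicle left of the person?": 0.22,
}

def propose(pool):
    # Choose the question whose conditional probability of a positive answer
    # is closest to 0.5 -- the constraint that makes the test "only about
    # vision" rather than about exploiting statistical priors.
    return min(pool, key=lambda q: abs(pool[q] - 0.5))

print(propose(candidates))  # -> "is the person wearing a hat?"
```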

  13. Missileborne Artificial Vision System (MAVIS)

    NASA Technical Reports Server (NTRS)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-01-01

    Several years ago when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system level computational density. The Naval Air Warfare Center, China Lake, has developed a real time, hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed point, RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real time VSB interface to an imaging seeker, a NTSC camera, and to other COHO boards. The system is designed to have multiple SIMD machines each performing different corticomorphic functions. The system level software has been developed which allows a high level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real time hardware system is designed to be shrunk into a volume compatible with air launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  14. Intelligent data reduction for autonomous power systems

    NASA Technical Reports Server (NTRS)

    Floyd, Stephen A.

    1988-01-01

    Since 1984, Marshall Space Flight Center has been actively engaged in research and development concerning autonomous power systems. Much of the work in this domain has dealt with the development and application of knowledge-based or expert systems to perform tasks previously accomplished only through intensive human involvement. One such task is health status monitoring of electrical power systems, a manpower-intensive task that is vital to mission success. The Hubble Space Telescope testbed and its associated Nickel Cadmium Battery Expert System (NICBES) were designated as the system on which the initial proof of concept for intelligent power system monitoring will be established. The key function performed by an engineer engaged in system monitoring is to analyze the raw telemetry data and identify from the whole only those elements which can be considered significant. This function requires engineering expertise on the functionality of the system, the mode of operation, and the efficient and effective reading of the telemetry data. Application of this expertise to extract the significant components of the data is referred to as data reduction. Such a function possesses characteristics which make it a prime candidate for the application of knowledge-based system technologies. Such applications are investigated and recommendations are offered for the development of intelligent data reduction systems.
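The core of the data reduction idea — pass along only telemetry that deviates significantly from nominal — can be sketched with a hypothetical voltage channel. The nominal value, tolerance, and samples are invented:

```python
# Hypothetical telemetry stream: (time, battery_voltage) samples.
NOMINAL = 28.0   # assumed nominal bus voltage (V)
TOLERANCE = 0.5  # report only excursions beyond this band

telemetry = [(0, 28.1), (1, 28.0), (2, 27.2), (3, 27.9), (4, 28.9)]

# Data reduction: keep only the samples an engineer would call significant.
significant = [(t, v) for t, v in telemetry if abs(v - NOMINAL) > TOLERANCE]
print(significant)  # -> [(2, 27.2), (4, 28.9)]
```

A knowledge-based version replaces the fixed band with rules conditioned on the system's mode of operation, which is where the expert-system technology described above comes in.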

  15. Visual Turing test for computer vision systems

    PubMed Central

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-01-01

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262

  16. The nature of the autonomic dysfunction in multiple system atrophy

    NASA Technical Reports Server (NTRS)

    Parikh, Samir M.; Diedrich, Andre; Biaggioni, Italo; Robertson, David

    2002-01-01

    The concept that multiple system atrophy (MSA, Shy-Drager syndrome) is a disorder of the autonomic nervous system is several decades old. While there has been renewed interest in the movement disorder associated with MSA, two recent consensus statements confirm the centrality of the autonomic disorder to the diagnosis. Here, we reexamine the autonomic pathophysiology in MSA. Whereas MSA is often thought of as "autonomic failure", new evidence indicates substantial persistence of functioning sympathetic and parasympathetic nerves even in clinically advanced disease. These findings help explain some of the previously poorly understood features of MSA. Recognition that MSA entails persistent, constitutive autonomic tone requires a significant revision of our concepts of its diagnosis and therapy. We will review recent evidence bearing on autonomic tone in MSA and discuss their therapeutic implications, particularly in terms of the possible development of a bionic baroreflex for better control of blood pressure.

  17. Stereoscopic Vision System For Robotic Vehicle

    NASA Technical Reports Server (NTRS)

    Matthies, Larry H.; Anderson, Charles H.

    1993-01-01

    Distances estimated from images by cross-correlation. Two-camera stereoscopic vision system with onboard processing of image data developed for use in guiding robotic vehicle semiautonomously. Combination of semiautonomous guidance and teleoperation useful in remote and/or hazardous operations, including clean-up of toxic wastes, exploration of dangerous terrain on Earth and other planets, and delivery of materials in factories where unexpected hazards or obstacles can arise.
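The distance-from-cross-correlation idea can be sketched on a single scanline: locate a left-image patch in the right image by maximizing correlation, then convert the disparity to depth. The scanline, focal length, and baseline are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
left = rng.random(200)           # one scanline of the left image
d_true = 12                      # true disparity in pixels
right = np.roll(left, -d_true)   # same scene seen from the right camera

# Find the left-image patch in the right image by cross-correlation.
p0, size = 80, 30
patch = left[p0:p0 + size]
scores = [np.corrcoef(patch, right[s:s + size])[0, 1] for s in range(40, 100)]
match = 40 + int(np.argmax(scores))
disparity = p0 - match           # recovered disparity

# Depth from assumed geometry: focal length 600 px, baseline 0.3 m.
depth = 600 * 0.3 / disparity
print(disparity, depth)  # 12 pixels, 15.0 m
```

Onboard, this matching is repeated for many patches per image pair to build a dense range map for semiautonomous guidance.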

  18. Autonomic Middleware for Automotive Embedded Systems

    NASA Astrophysics Data System (ADS)

    Anthony, Richard; Chen, Dejiu; Törngren, Martin; Scholle, Detlef; Sanfridson, Martin; Rettberg, Achim; Naseer, Tahir; Persson, Magnus; Feng, Lei

    This chapter describes DySCAS: an advanced autonomic platform-independent middleware framework for automotive embedded systems. The concepts and architecture are motivated and described in detail, focusing on the need for, and achievement of, high flexibility and automatic run-time reconfiguration. The design of the middleware is positioned with respect to the way it overcomes the specific technical, environmental, and performance challenges of the automotive domain. Self-management is achieved in terms of automatic configuration for context-aware behavior, resource-use efficiency, and self-healing to handle run-time detected faults. The self-management is governed by the use of policies distributed throughout the middleware components. The simulation techniques that have been used for extensive validation are described and some key results presented. A reference implementation is presented, illustrating the way in which the various concepts and mechanisms can be realized and orchestrated.

  19. Autonomous Formations of Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Dhali, Sanjana; Joshi, Suresh M.

    2013-01-01

    Autonomous formation control of multi-agent dynamic systems has a number of applications that include ground-based and aerial robots and satellite formations. For air vehicles, formation flight ("flocking") has the potential to significantly increase airspace utilization as well as fuel efficiency. This presentation addresses two main problems in multi-agent formations: optimal role assignment to minimize the total cost (e.g., combined distance traveled by all agents); and maintaining formation geometry during flock motion. The Kuhn-Munkres ("Hungarian") algorithm is used for optimal assignment, and a consensus-based leader-follower type control architecture is used to maintain formation shape despite the leader's independent movements. The methods are demonstrated by animated simulations.
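The optimal role assignment step can be illustrated directly. For a small team, exhaustive search over permutations reproduces the assignment the Kuhn-Munkres algorithm finds in O(n^3) for large ones; the agent and slot positions below are invented:

```python
from itertools import permutations
import math

# Current agent positions and target formation slots (hypothetical 2-D points).
agents = [(0, 0), (4, 0), (0, 3)]
slots = [(1, 1), (5, 1), (1, 4)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Minimize the combined distance traveled by all agents: agent i is
# assigned to slot best[i].
best = min(permutations(range(len(slots))),
           key=lambda p: sum(dist(agents[i], slots[p[i]]) for i in range(len(p))))
print(best)  # -> (0, 1, 2): each agent takes its nearest slot here
```

For realistic team sizes one would swap the brute-force `min` for a proper Hungarian-algorithm implementation, since the permutation count grows factorially.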

  20. Autonomous Robot System for Sensor Characterization

    SciTech Connect

    David Bruemmer; Douglas Few; Frank Carney; Miles Walton; Heather Hunting; Ron Lujan

    2004-03-01

    This paper discusses an innovative application of new Markov localization techniques that combat the problem of odometry drift, allowing a novel control architecture developed at the Idaho National Engineering and Environmental Laboratory (INEEL) to be utilized within a sensor characterization facility developed at the Remote Sensing Laboratory (RSL) in Nevada. The new robotic capability provided by the INEEL will allow RSL to test and evaluate a wide variety of sensors including radiation detection systems, machine vision systems, and sensors that can detect and track heat sources (e.g. human bodies, machines, chemical plumes). By accurately moving a target at varying speeds along designated paths, the robotic solution allows the detection abilities of a wide variety of sensors to be recorded and analyzed.
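The Markov localization technique mentioned above can be sketched as a one-dimensional histogram filter: odometry uncertainty smears the belief during prediction, and landmark observations sharpen it again, which is what keeps dead-reckoning drift bounded. The map and noise model below are invented:

```python
import numpy as np

n = 10
world = np.isin(np.arange(n), [0, 1, 4])  # invented landmark map (True = marker)
belief = np.full(n, 1.0 / n)              # uniform prior over grid cells

def predict(bel):
    # Odometry model: moved 1 cell (p=0.8), 2 cells (p=0.1), or stayed (p=0.1).
    # The spreading of probability models dead-reckoning drift.
    return 0.8 * np.roll(bel, 1) + 0.1 * np.roll(bel, 2) + 0.1 * bel

def update(bel, saw_marker):
    # Sensor model: the marker detector is right 90% of the time.
    likelihood = np.where(world == saw_marker, 0.9, 0.1)
    posterior = likelihood * bel
    return posterior / posterior.sum()

truth = 0
for _ in range(8):
    belief = update(belief, world[truth])  # sense at the true cell
    truth = (truth + 1) % n                # robot moves one cell right
    belief = predict(belief)               # belief follows, drift included

print(int(np.argmax(belief)))  # most likely cell matches the true position (8)
```

The same predict/update cycle in two dimensions, with a laser or vision sensor model, is the basis of the localization used to keep the target platform on its designated paths.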

  1. Mechanical deployment system on aries an autonomous mobile robot

    SciTech Connect

    Rocheleau, D.N.

    1995-12-01

    ARIES (Autonomous Robotic Inspection Experimental System) is under development for the Department of Energy (DOE) to survey and inspect drums containing low-level radioactive waste stored in warehouses at DOE facilities. This paper focuses on the mechanical deployment system-referred to as the camera positioning system (CPS)-used in the project. The CPS is used for positioning four identical but separate camera packages consisting of vision cameras and other required sensors such as bar-code readers and light stripe projectors. The CPS is attached to the top of a mobile robot and consists of two mechanisms. The first is a lift mechanism composed of 5 interlocking rail-elements which starts from a retracted position and extends upward to simultaneously position 3 separate camera packages to inspect the top three drums of a column of four drums. The second is a parallelogram special case Grashof four-bar mechanism which is used for positioning a camera package on drums on the floor. Both mechanisms are the subject of this paper, where the lift mechanism is discussed in detail.

  2. Contingency Software in Autonomous Systems: Technical Level Briefing

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.; Patterson-Hines, Ann

    2006-01-01

    Contingency management is essential to the robust operation of complex systems such as spacecraft and Unpiloted Aerial Vehicles (UAVs). Automatic contingency handling allows a faster response to unsafe scenarios with reduced human intervention on low-cost and extended missions. Results, applied to the Autonomous Rotorcraft Project and Mars Science Lab, pave the way to more resilient autonomous systems.

  3. Networks for Autonomous Formation Flying Satellite Systems

    NASA Technical Reports Server (NTRS)

    Knoblock, Eric J.; Konangi, Vijay K.; Wallett, Thomas M.; Bhasin, Kul B.

    2001-01-01

    The performance of three communications networks to support autonomous multi-spacecraft formation flying systems is presented. All three systems comprise a ten-satellite formation arranged in a star topology, with one of the satellites designated as the central or "mother ship." All data is routed through the mother ship to the terrestrial network. The first system uses a TCP/IP over ATM protocol architecture within the formation; the second system uses the IEEE 802.11 protocol architecture within the formation; and the last system uses both of the previous architectures with a constellation of geosynchronous satellites serving as an intermediate point of contact between the formation and the terrestrial network. The simulations consist of file transfers using either the File Transfer Protocol (FTP) or the Simple Automatic File Exchange (SAFE) protocol. The results compare the IP queuing delay and IP processing delay at the mother ship, as well as application-level round-trip time, for all three systems. In all cases, using IEEE 802.11 within the formation yields less delay. Also, the throughput exhibited by SAFE is better than FTP.

  4. Autonomous Control of Space Reactor Systems

    SciTech Connect

    Belle R. Upadhyaya; K. Zhao; S.R.P. Perillo; Xiaojia Xu; M.G. Na

    2007-11-30

    Autonomous and semi-autonomous control is a key element of space reactor design in order to meet the mission requirements of safety, reliability, survivability, and life expectancy. In terrestrial nuclear power plants, human operators are available to perform intelligent control functions that are necessary for both normal and abnormal operational conditions.

  5. Digital Autonomous Terminal Access Communication (DATAC) system

    NASA Astrophysics Data System (ADS)

    Novacki, Stanley M., III

    1987-05-01

    In order to accommodate the increasing number of computerized subsystems aboard today's more fuel efficient aircraft, the Boeing Co. has developed the DATAC (Digital Autonomous Terminal Access Control) bus to minimize the need for point-to-point wiring to interconnect these various systems, thereby reducing total aircraft weight and maintaining an economical flight configuration. The DATAC bus is essentially a local area network providing interconnections for any of the flight management and control systems aboard the aircraft. The task of developing a Bus Monitor Unit was broken down into four subtasks: (1) providing a hardware interface between the DATAC bus and the Z8000-based microcomputer system to be used as the bus monitor; (2) establishing a communication link between the Z8000 system and a CP/M-based computer system; (3) generation of data reduction and display software to output data to the console device; and (4) development of a DATAC Terminal Simulator to facilitate testing of the hardware and software which transfer data between the DATAC's bus and the operator's console in a near real time environment. These tasks are briefly discussed.

  6. Digital Autonomous Terminal Access Communication (DATAC) system

    NASA Technical Reports Server (NTRS)

    Novacki, Stanley M., III

    1987-01-01

    In order to accommodate the increasing number of computerized subsystems aboard today's more fuel efficient aircraft, the Boeing Co. has developed the DATAC (Digital Autonomous Terminal Access Control) bus to minimize the need for point-to-point wiring to interconnect these various systems, thereby reducing total aircraft weight and maintaining an economical flight configuration. The DATAC bus is essentially a local area network providing interconnections for any of the flight management and control systems aboard the aircraft. The task of developing a Bus Monitor Unit was broken down into four subtasks: (1) providing a hardware interface between the DATAC bus and the Z8000-based microcomputer system to be used as the bus monitor; (2) establishing a communication link between the Z8000 system and a CP/M-based computer system; (3) generation of data reduction and display software to output data to the console device; and (4) development of a DATAC Terminal Simulator to facilitate testing of the hardware and software which transfer data between the DATAC's bus and the operator's console in a near real time environment. These tasks are briefly discussed.

  7. Evaluation of stereo vision obstacle detection algorithms for off-road autonomous navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry

    2005-01-01

    Reliable detection of non-traversable hazards is a key requirement for off-road autonomous navigation. A detailed description of each obstacle detection algorithm and its performance on the surveyed obstacle course is presented in this paper.

  8. Systems Architecture for Fully Autonomous Space Missions

    NASA Technical Reports Server (NTRS)

    Esper, Jamie; Schnurr, R.; VanSteenberg, M.; Brumfield, Mark (Technical Monitor)

    2002-01-01

    The NASA Goddard Space Flight Center is working to develop a revolutionary new system architecture concept in support of fully autonomous missions. As part of GSFC's contribution to the New Millennium Program (NMP) Space Technology 7 Autonomy and on-Board Processing (ST7-A) Concept Definition Study, the system incorporates the latest commercial Internet and software development ideas and extends them into NASA ground and space segment architectures. The unique challenges facing the exploration of remote and inaccessible locales and the need to incorporate corresponding autonomy technologies within reasonable cost necessitate the re-thinking of traditional mission architectures. A measure of the resiliency of this architecture in its application to a broad range of future autonomy missions will depend on its effectiveness in leveraging commercial tools developed for the personal computer and Internet markets. Specialized test stations and supporting software become things of the past as spacecraft take advantage of the extensive tools and research investments of billion-dollar commercial ventures. The projected improvements of the Internet and supporting infrastructure go hand-in-hand with market pressures that provide continuity in research. By taking advantage of consumer-oriented methods and processes, space-flight missions will continue to leverage investments tailored to provide better services at reduced cost. The application of ground and space segment architectures each based on Local Area Networks (LAN), the use of personal computer-based operating systems, and the execution of activities and operations through a Wide Area Network (Internet) enable a revolution in spacecraft mission formulation, implementation, and flight operations. Hardware and software design, development, integration, test, and flight operations are all tied in closely to a common thread that enables smooth transitioning between program phases. The application of commercial software

  9. Unified formalism for higher order non-autonomous dynamical systems

    NASA Astrophysics Data System (ADS)

    Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso

    2012-03-01

    This work is devoted to giving a geometric framework for describing higher order non-autonomous mechanical systems. The starting point is to extend the Lagrangian-Hamiltonian unified formalism of Skinner and Rusk for these kinds of systems, generalizing previous developments for higher order autonomous mechanical systems and first-order non-autonomous mechanical systems. Then, we use this unified formulation to derive the standard Lagrangian and Hamiltonian formalisms, including the Legendre-Ostrogradsky map and the Euler-Lagrange and the Hamilton equations, both for regular and singular systems. As applications of our model, two examples of regular and singular physical systems are studied.

  10. Closed-loop autonomous docking system

    NASA Technical Reports Server (NTRS)

    Dabney, Richard W. (Inventor); Howard, Richard T. (Inventor)

    1992-01-01

    An autonomous docking system is provided which produces commands for the steering and propulsion system of a chase vehicle used in the docking of that chase vehicle with a target vehicle. The docking system comprises a passive optical target affixed to the target vehicle and comprising three reflective areas, including a central area mounted on a short post, and tracking sensor and process controller apparatus carried by the chase vehicle. The latter apparatus comprises a laser diode array for illuminating the target so as to cause light to be reflected from the reflective areas of the target; a sensor for detecting the light reflected from the target and for producing an electrical output signal in accordance with an image of the reflected light; a signal processor for processing the electrical output signal and for producing, based thereon, output signals relating to the relative range, roll, pitch, yaw, azimuth, and elevation of the chase and target vehicles; and a docking process controller, responsive to the output signals produced by the signal processor, for producing command signals for controlling the steering and propulsion system of the chase vehicle.
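The geometry recovered by the signal processor can be sketched for a simplified case: range from the apparent spacing of the outer reflectors, and azimuth/elevation from the central spot's image offset, under an assumed pinhole camera model. All numbers below are invented and this is not the patented processing chain:

```python
import math

# Hypothetical pinhole-camera geometry for the three reflected spots.
FOCAL_PX = 800          # focal length in pixels (assumed)
SPOT_SPACING_M = 0.5    # true distance between outer reflectors (assumed)

# Detected spot centroids in the sensor image (pixels, origin at image center).
left, center, right = (-40.0, 6.0), (0.0, 8.0), (40.0, 10.0)

# Range from the apparent spacing of the outer spots (similar triangles).
spacing_px = math.hypot(right[0] - left[0], right[1] - left[1])
range_m = FOCAL_PX * SPOT_SPACING_M / spacing_px

# Azimuth/elevation of the target from the central spot's image offset.
azimuth = math.degrees(math.atan2(center[0], FOCAL_PX))
elevation = math.degrees(math.atan2(center[1], FOCAL_PX))
print(range_m, azimuth, elevation)
```

Roll, pitch, and yaw come from the full three-spot pattern (the post under the central reflector makes the pattern's distortion unambiguous), which requires the complete pose solution rather than this two-line geometry.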

  11. Autonomous Segmentation of Outcrop Images Using Computer Vision and Machine Learning

    NASA Astrophysics Data System (ADS)

    Francis, R.; McIsaac, K.; Osinski, G. R.; Thompson, D. R.

    2013-12-01

    As planetary exploration missions become increasingly complex and capable, the motivation grows for improved autonomous science. New capabilities for onboard science data analysis may relieve radio-link data limits and provide greater throughput of scientific information. Adaptive data acquisition, storage, and downlink may ultimately hold implications for mission design and operations. For surface missions, geology remains an essential focus, and the investigation of in-place, exposed geological materials provides the greatest scientific insight and context for the formation and history of planetary materials and processes. The goal of this research program is to develop techniques for autonomous segmentation of images of rock outcrops. Recognition of the relationships between different geological units is the first step in mapping and interpreting a geological setting. Applications of automatic segmentation include instrument placement and targeting and data triage for downlink. Here, we report on the development of a new technique in which a photograph of a rock outcrop is processed by several elementary image processing techniques, generating a feature space which can be interrogated and classified. A distance metric learning technique (Multiclass Discriminant Analysis, or MDA) is tested as a means of finding the best numerical representation of the feature space. MDA produces a linear transformation that maximizes the separation between data points from different geological units. This 'training step' is completed on one or more images from a given locality. Then we apply the same transformation to improve the segmentation of new scenes containing similar materials to those used for training. The technique was tested using imagery from Mars analogue settings at the Cima volcanic flows in the Mojave Desert, California; impact breccias from the Sudbury impact structure in Ontario, Canada; and an outcrop showing embedded mineral veins in Gale Crater on Mars.
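The MDA training step can be sketched in NumPy: build within-class and between-class scatter matrices and take the leading eigenvectors of Sw^-1 Sb as the learned linear transformation. The synthetic features below stand in for the image-derived ones; this is an illustrative sketch, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy feature space: 3 "geological units", 5 raw image features each
# (invented stand-ins for texture/intensity statistics), 50 pixels per unit.
means = np.array([[0, 0, 0, 0, 0], [1, 1, 0, 0, 0], [0, 1, 1, 0, 0]], float)
X = np.vstack([m + 0.3 * rng.normal(size=(50, 5)) for m in means])
y = np.repeat([0, 1, 2], 50)

# Within-class and between-class scatter matrices.
overall = X.mean(axis=0)
Sw = sum(np.cov(X[y == c].T) * (np.sum(y == c) - 1) for c in range(3))
Sb = sum(np.sum(y == c) * np.outer(X[y == c].mean(0) - overall,
                                   X[y == c].mean(0) - overall) for c in range(3))

# Discriminant directions: eigenvectors of Sw^-1 Sb (at most n_classes - 1
# carry signal). W is the linear transformation learned in the training step.
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(evals.real)[::-1]
W = evecs.real[:, order[:2]]
Z = X @ W   # transformed features, maximally separated between units
```

Applying the same `W` to feature vectors from a new image of similar materials is the generalization step the record describes: the segmentation then operates in the separated space `Z` rather than the raw feature space.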

  12. Active State Model for Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Park, Han; Chien, Steve; Zak, Michail; James, Mark; Mackey, Ryan; Fisher, Forest

    2003-01-01

    The concept of the active state model (ASM) is an architecture for the development of advanced integrated fault-detection-and-isolation (FDI) systems for robotic land vehicles, pilotless aircraft, exploratory spacecraft, or other complex engineering systems that will be capable of autonomous operation. An FDI system based on the ASM concept would not only provide traditional diagnostic capabilities, but also integrate the FDI system under a unified framework and provide a mechanism for sharing of information between FDI subsystems to fully assess the overall health of the system. The ASM concept begins with definitions borrowed from psychology, wherein a system is regarded as active when it possesses self-image, self-awareness, and an ability to make decisions by itself, such that it is able to perform purposeful motions and other transitions with some degree of autonomy from the environment. For an engineering system, self-image would manifest itself as the ability to determine nominal values of sensor data by use of a mathematical model of itself, and self-awareness would manifest itself as the ability to relate sensor data to their nominal values. The ASM for such a system may start with the closed-loop control dynamics that describe the evolution of state variables. As soon as this model was supplemented with nominal values of sensor data, it would possess self-image. The ability to process the current sensor data and compare them with the nominal values would represent self-awareness. On the basis of self-image and self-awareness, the ASM provides the capability for self-identification, detection of abnormalities, and self-diagnosis.
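    The self-image/self-awareness distinction can be illustrated with a minimal sketch (hypothetical model and numbers, not the JPL ASM software): a model of the system predicts nominal sensor values, and abnormality detection compares live readings against those predictions.

```python
# Illustrative sketch: "self-image" as a model predicting nominal sensor
# values, "self-awareness" as comparison of live readings against them.
def nominal_temperature(power_watts):
    # Hypothetical closed-loop model: steady-state temperature rises
    # linearly with dissipated power.
    return 20.0 + 0.5 * power_watts

def self_aware_check(power_watts, measured_temp, tolerance=3.0):
    residual = measured_temp - nominal_temperature(power_watts)
    status = "nominal" if abs(residual) <= tolerance else "abnormal"
    return status, residual

print(self_aware_check(40.0, 41.0))   # within tolerance -> ('nominal', 1.0)
print(self_aware_check(40.0, 55.0))   # large residual -> ('abnormal', 15.0)
```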

  13. APDS: The Autonomous Pathogen Detection System

    SciTech Connect

    Hindson, B; Makarewicz, A; Setlur, U; Henderer, B; McBride, M; Dzenitis, J

    2004-10-04

    We have developed and tested a fully autonomous pathogen detection system (APDS) capable of continuously monitoring the environment for airborne biological threat agents. The system was developed to provide early warning to civilians in the event of a bioterrorism incident and can be used at high profile events for short-term, intensive monitoring or in major public buildings or transportation nodes for long-term monitoring. The APDS is completely automated, offering continuous aerosol sampling, in-line sample preparation fluidics, multiplexed detection and identification immunoassays, and nucleic-acid based polymerase chain reaction (PCR) amplification and detection. Highly multiplexed antibody-based and duplex nucleic acid-based assays are combined to reduce false positives to a very low level, lower reagent costs, and significantly expand the detection capabilities of this biosensor. This article provides an overview of the current design and operation of the APDS. Certain sub-components of the APDS are described in detail, including the aerosol collector, the automated sample preparation module that performs multiplexed immunoassays with confirmatory PCR, and the data monitoring and communications system. Data obtained from an APDS that operated continuously for seven days in a major U.S. transportation hub is reported.

  14. APDS: the autonomous pathogen detection system.

    PubMed

    Hindson, Benjamin J; Makarewicz, Anthony J; Setlur, Ujwal S; Henderer, Bruce D; McBride, Mary T; Dzenitis, John M

    2005-04-15

    We have developed and tested a fully autonomous pathogen detection system (APDS) capable of continuously monitoring the environment for airborne biological threat agents. The system was developed to provide early warning to civilians in the event of a bioterrorism incident and can be used at high profile events for short-term, intensive monitoring or in major public buildings or transportation nodes for long-term monitoring. The APDS is completely automated, offering continuous aerosol sampling, in-line sample preparation fluidics, multiplexed detection and identification immunoassays, and nucleic acid-based polymerase chain reaction (PCR) amplification and detection. Highly multiplexed antibody-based and duplex nucleic acid-based assays are combined to reduce false positives to a very low level, lower reagent costs, and significantly expand the detection capabilities of this biosensor. This article provides an overview of the current design and operation of the APDS. Certain sub-components of the APDS are described in detail, including the aerosol collector, the automated sample preparation module that performs multiplexed immunoassays with confirmatory PCR, and the data monitoring and communications system. Data obtained from an APDS that operated continuously for 7 days in a major U.S. transportation hub is reported. PMID:15741059

  15. Lightweight autonomous chemical identification system (LACIS)

    NASA Astrophysics Data System (ADS)

    Lozos, George; Lin, Hai; Burch, Timothy

    2012-06-01

    Smiths Detection and Intelligent Optical Systems have developed prototypes for the Lightweight Autonomous Chemical Identification System (LACIS) for the US Department of Homeland Security. LACIS is to be a handheld detection system for Chemical Warfare Agents (CWAs) and Toxic Industrial Chemicals (TICs). LACIS is designed to have a low limit of detection and rapid response time for use by emergency responders and could allow determination of areas having dangerous concentration levels and whether protective garments will be required. Procedures for protection of responders from hazardous materials incidents require the use of protective equipment until the hazard can be assessed. Rapid, accurate analysis can accelerate operations and increase effectiveness. LACIS is to be an improved point detector employing novel CBRNE detection modalities that include a military-proven ruggedized ion mobility spectrometer (IMS) with an array of electro-resistive sensors to extend the range of chemical threats detected in a single device. It uses a novel sensor data fusion and threat classification architecture to interpret the independent sensor responses and provide robust detection at low levels in complex backgrounds with minimal false alarms. The performance of LACIS prototypes has been characterized in independent third-party laboratory tests at the Battelle Memorial Institute (BMI, Columbus, OH) and in indoor and outdoor field tests at the Nevada National Security Site (NNSS). LACIS prototypes will be entering operational assessment by key government emergency response groups to determine their capabilities versus requirements.
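    A fusion of independent sensor channels of the kind the abstract describes can be sketched as follows. This is a hypothetical illustration, not the LACIS architecture: channel confidences and the threshold are invented, and the point is simply that requiring joint agreement between independent channels drives the combined false-alarm rate down.

```python
# Hypothetical sketch of fusing two independent channels (e.g. an IMS and
# an electro-resistive array) into one alarm decision. If the channels'
# false alarms are independent, their joint false-alarm probability is
# the product of the individual ones, so demanding joint confidence
# suppresses single-channel false alarms.
def fuse(p_ims, p_array, threshold=0.9):
    p_joint = p_ims * p_array   # joint confidence of the two channels
    return "alarm" if p_joint >= threshold else "no alarm"

print(fuse(0.99, 0.97))   # both channels confident -> alarm
print(fuse(0.99, 0.40))   # one channel disagrees -> no alarm
```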

  16. Autonomous and Autonomic Systems: A Paradigm for Future Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walter F.; Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2004-01-01

    NASA increasingly will rely on autonomous systems concepts, not only in the mission control centers on the ground, but also on spacecraft and on rovers and other assets on extraterrestrial bodies. Autonomy enables not only reduced operations costs, but also adaptable goal-driven functionality of mission systems. Space missions lacking autonomy will be unable to achieve the full range of advanced mission objectives, given that human control under dynamic environmental conditions will not be feasible due, in part, to the unavoidably high signal propagation latency and constrained data rates of mission communications links. While autonomy cost-effectively supports accomplishment of mission goals, autonomicity supports survivability of remote mission assets, especially when human tending is not feasible. Autonomic system properties (which ensure self-configuring, self-optimizing, self-healing, and self-protecting behavior) conceptually may enable space missions of a higher order than any previously flown. Analysis of two NASA agent-based systems previously prototyped, and of a proposed future mission involving numerous cooperating spacecraft, illustrates how autonomous and autonomic system concepts may be brought to bear on future space missions.

  17. Vision-aided inertial navigation system for robotic mobile mapping

    NASA Astrophysics Data System (ADS)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology on the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping” where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy that are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for feature-coordinate determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features that are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo-pair. Due to its autonomous nature, the SLAM's performance is further affected by the quality of IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.
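    The LSA/KF coupling can be caricatured in one dimension: inertial dead-reckoning predicts position and grows the uncertainty, and a photogrammetric resection fix enters as the Kalman measurement that corrects the prediction. The dynamics, noise values, and fix below are illustrative assumptions, not the authors' filter.

```python
# One-dimensional sketch of vision-aided inertial navigation: predict
# from inertial data, correct with an external (resection) position fix.
def kf_predict(x, p, accel, dt, q):
    # Dead-reckon position and velocity; uncertainty grows by q.
    pos, vel = x
    return (pos + vel * dt + 0.5 * accel * dt * dt, vel + accel * dt), p + q

def kf_update(x, p, z, r):
    # Scalar Kalman update on position with measurement noise r.
    pos, vel = x
    k = p / (p + r)                       # Kalman gain
    return (pos + k * (z - pos), vel), (1.0 - k) * p

x, p = (0.0, 1.0), 0.5                    # start: 1 m/s, 0.5 m^2 uncertainty
x, p = kf_predict(x, p, accel=0.0, dt=1.0, q=0.2)
x, p = kf_update(x, p, z=1.1, r=0.1)      # resection reports 1.1 m
print(round(x[0], 3), round(p, 3))        # position pulled toward the fix
```

    The corrected state would then feed the photogrammetric intersection for the next epoch, exactly as the parallel-filter description above outlines.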

  18. Vision-based obstacle recognition system for automated lawn mower robot development

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system involve some challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images are very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. Focus was given to the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
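    The filtering-segmentation-classification chain described above can be sketched on a synthetic grayscale "field" image. The thresholds and the size-based rule are hypothetical, and a real system would use a vision library such as OpenCV rather than pure Python loops:

```python
# Toy pipeline: mean-filter smoothing, threshold segmentation, and a
# size-based obstacle classification on a synthetic 8x8 image.
def smooth(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i] for j in range(max(0, y - 1), min(h, y + 2))
                              for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

def segment(img, thresh=100):
    # Obstacles are brighter than the (dark) grass background.
    return [[1 if v > thresh else 0 for v in row] for row in img]

def classify(mask):
    area = sum(map(sum, mask))
    return "large obstacle" if area > 6 else "small obstacle" if area else "clear"

field = [[10] * 8 for _ in range(8)]
for y in range(2, 6):
    for x in range(2, 6):
        field[y][x] = 200          # a bright 4x4 obstacle
mask = segment(smooth(field))
print(classify(mask))              # -> large obstacle
```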

  19. 360 degree vision system: opportunities in transportation

    NASA Astrophysics Data System (ADS)

    Thibault, Simon

    2007-09-01

    Panoramic technologies are experiencing new and exciting opportunities in the transportation industries. The advantages of panoramic imagers are numerous: increased area coverage with fewer cameras, imaging of multiple targets simultaneously, instantaneous full-horizon detection, easier integration of various applications on the same imager, and others. This paper reports our work on panomorph optics and potential usage in transportation applications. The novel panomorph lens is a new type of high-resolution panoramic imager perfectly suitable for the transportation industries. The panomorph lens uses optimization techniques to improve the performance of a customized optical system for specific applications. By adding a custom angle-to-pixel relation at the optical design stage, the optical system provides an ideal image coverage which is designed to reduce and optimize the processing. The optics can be customized for the visible, near infra-red (NIR) or infra-red (IR) wavebands. The panomorph lens is designed to optimize the cost per pixel, which is particularly important in the IR. We discuss the use of the 360-degree vision system, which can enhance onboard collision avoidance systems, intelligent cruise controls and parking assistance. 360-degree panoramic vision systems might enable safer highways and a significant reduction in casualties.

  20. Ball stud inspection system using machine vision.

    PubMed

    Shin, Dongik; Han, Changsoo; Moon, Young Shik

    2002-01-01

    In this paper, a vision-based inspection system that measures the dimensions of a ball stud is designed and implemented. The system acquires silhouetted images by backlighting and extracts the outlines of the nearly dichotomized images with subpixel accuracy. The sets of boundary data are modeled with reasonable geometric primitives and the parameters of the models are estimated in a manner that minimizes error. Jig-fixtures and servo systems for the inspection are also devised. The system rotates the inspected object so that objects can be recognized in space, not just on a plane. The system moves the object vertically so that it may take several pictures of different parts of the object, resulting in improved measuring resolution. The performance of the system is evaluated by measurement of the dimensions of a standard ball, a standard cylinder, and a ball stud. PMID:12014800
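    Fitting geometric primitives to boundary data in an error-minimizing manner, as described above, can be sketched with the classical Kasa least-squares circle fit (e.g. for the silhouette of the standard ball). The sample points and solver below are illustrative, not the paper's code.

```python
# Kasa circle fit: the circle x^2 + y^2 + Dx + Ey + F = 0 is linear in
# (D, E, F), so least squares reduces to 3x3 normal equations.
def fit_circle(pts):
    a = [[0.0] * 3 for _ in range(3)]
    b = [0.0, 0.0, 0.0]
    for x, y in pts:
        row, rhs = (x, y, 1.0), -(x * x + y * y)
        for i in range(3):
            b[i] += row[i] * rhs
            for j in range(3):
                a[i][j] += row[i] * row[j]
    # Gaussian elimination on the 3x3 system (no pivoting needed here).
    for i in range(3):
        for j in range(i + 1, 3):
            f = a[j][i] / a[i][i]
            b[j] -= f * b[i]
            for k in range(3):
                a[j][k] -= f * a[i][k]
    sol = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        sol[i] = (b[i] - sum(a[i][k] * sol[k] for k in range(i + 1, 3))) / a[i][i]
    d, e, f = sol
    cx, cy = -d / 2, -e / 2
    return cx, cy, (cx * cx + cy * cy - f) ** 0.5

# Boundary samples of a circle centred at (2, 3) with radius 5.
pts = [(7.0, 3.0), (2.0, 8.0), (-3.0, 3.0), (2.0, -2.0)]
print(fit_circle(pts))   # recovers centre (2, 3) and radius 5
```

    With noisy subpixel edge data, the same normal equations average out the noise; the fitted radius is then compared against the part's tolerance.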

  1. A bio-inspired apposition compound eye machine vision sensor system.

    PubMed

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-12-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision systems of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor-made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm. PMID:19901450

  2. 75 FR 60478 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-30

    ... Corporation of Mountain View, California (collectively ``complainants''). 74 FR 34589-90 (July 16, 2009). The... COMMISSION In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing... importation of certain machine vision software, machine vision systems, or products containing same by...

  3. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  4. Synthetic vision systems: operational considerations simulation experiment

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-04-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  5. Real-time Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-01-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.
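    The Retinex enhancement idea can be illustrated in one dimension: subtracting the log of a local average from the log of the signal compresses the overall dynamic range while preserving local contrast. The toy signal and window size are hypothetical; the LaRC implementation operates on 2-D imagery with Gaussian surround functions.

```python
# Single-scale Retinex sketch in 1-D: output = log(signal) - log(surround).
import math

def retinex_1d(signal, window=1):
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        surround = sum(signal[lo:hi]) / (hi - lo)   # local average
        out.append(math.log(signal[i]) - math.log(surround))
    return out

# A dim region (around 10) and a bright region (around 1000), each
# containing a small edge of the same relative size.
scene = [10, 10, 12, 10, 1000, 1000, 1200, 1000]
enhanced = retinex_1d(scene)
print([round(v, 2) for v in enhanced])
```

    Note that the small edge in the dim region (index 2) and the one in the bright region (index 6) come out with the same enhanced value, which is the dynamic-range compression property that makes Retinex effective on low-contrast imagery.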

  6. Real-time enhanced vision system

    NASA Astrophysics Data System (ADS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-05-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.

  7. Enhanced vision system for laparoscopic surgery.

    PubMed

    Tamadazte, Brahim; Fiard, Gaelle; Long, Jean-Alexandre; Cinquin, Philippe; Voros, Sandrine

    2013-01-01

    Laparoscopic surgery offers benefits to the patients but poses new challenges to the surgeons, including a limited field of view. In this paper, we present an innovative vision system that can be combined with a traditional laparoscope, and provides the surgeon with a global view of the abdominal cavity, bringing him or her closer to open surgery conditions. We present our first experiments performed on a testbench mimicking a laparoscopic setup: they demonstrate an important time gain in performing a complex task consisting of bringing a thread into the field of view of the laparoscope. PMID:24111032

  8. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  9. The Autonomous Pathogen Detection System (APDS)

    SciTech Connect

    Morris, J; Dzenitis, J

    2004-09-22

    Shaped like a mailbox on wheels, it's been called a bioterrorism ''smoke detector.'' It can be found in transportation hubs such as airports and subways, and it may be coming to a location near you. Formally known as the Autonomous Pathogen Detection System, or APDS, this latest tool in the war on bioterrorism was developed at Lawrence Livermore National Laboratory to continuously sniff the air for airborne pathogens and toxins such as anthrax or plague. The APDS is the modern day equivalent of the canaries miners took underground with them to test for deadly carbon monoxide gas. But this canary can test for numerous bacteria, viruses, and toxins simultaneously, report results every hour, and confirm positive samples and guard against false positive results by using two different tests. The fully automated system collects and prepares air samples around the clock, does the analysis, and interprets the results. It requires no servicing or human intervention for an entire week. Unlike its feathered counterpart, when an APDS unit encounters something deadly in the air, that's when it begins singing, quietly. The APDS unit transmits a silent alert and sends detailed data to public health authorities, who can order evacuation and begin treatment of anyone exposed to toxic or biological agents. It is the latest in a series of biodefense detectors developed at DOE/NNSA national laboratories. The manual predecessor to APDS, called BASIS (for Biological Aerosol Sentry and Information System), was developed jointly by Los Alamos and Lawrence Livermore national laboratories. That system was modified to become BioWatch, the Department of Homeland Security's biological urban monitoring program. A related laboratory instrument, the Handheld Advanced Nucleic Acid Analyzer (HANAA), was first tested successfully at LLNL in September 1997. Successful partnering with private industry has been a key factor in the rapid advancement and deployment of biodefense instruments such as these.

  10. Online updating of synthetic vision system databases

    NASA Astrophysics Data System (ADS)

    Simard, Philippe

    In aviation, synthetic vision systems render artificial views of the world (using a database of the world and pose information) to support navigation and situational awareness in low visibility conditions. The database needs to be periodically updated to ensure its consistency with reality, since it reflects at best a nominal state of the environment. This thesis presents an approach for automatically updating the geometry of synthetic vision system databases and 3D models in general. The approach is novel in that it profits from all of the available prior information: intrinsic/extrinsic camera parameters and geometry of the world. Geometric inconsistencies (or anomalies) between the model and reality are quickly localized; this localization serves to significantly reduce the complexity of the updating problem. Given a geometric model of the world, a sample image and known camera motion, a predicted image can be generated based on a differential approach. Model locations where predictions do not match observations are assumed to be incorrect. The updating is then cast as an optimization problem where differences between observations and predictions are minimized. To cope with system uncertainties, a mechanism that automatically infers their impact on prediction validity is derived. This method not only renders the anomaly detection process robust but also prevents the overfitting of the data. The updating framework is examined at first using synthetic data and further tested in both a laboratory environment and using a helicopter in flight. Experimental results show that the algorithm is effective and robust across different operating conditions.
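    The updating loop described above can be caricatured with a single parameter (an entirely hypothetical example, not the thesis's formulation): the database stores a height that predicts where a feature projects in the image, and the update minimizes the squared difference between prediction and observation.

```python
# One-parameter sketch of database updating by minimising prediction
# error. The pinhole numbers (focal length 500 px, depth 100 m, horizon
# row 240) are invented for illustration.
FOCAL, DEPTH, HORIZON = 500.0, 100.0, 240.0

def predict_row(h):
    # Hypothetical projection of a roof edge at height h metres.
    return HORIZON - FOCAL * h / DEPTH

def update_height(h, observed_row, steps=50, lr=0.01):
    # Gradient descent on (predict_row(h) - observed_row)^2.
    for _ in range(steps):
        err = predict_row(h) - observed_row
        grad = 2 * err * (-FOCAL / DEPTH)   # d(pred)/dh = -FOCAL/DEPTH
        h -= lr * grad
    return h

# Database says 10 m; the image implies 12 m; the update recovers it.
h = update_height(10.0, observed_row=predict_row(12.0))
print(round(h, 2))   # -> 12.0
```

    Localizing anomalies first, as the thesis proposes, keeps such optimizations small: only model regions where predictions and observations disagree need be re-estimated.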

  11. DLP™-based dichoptic vision test system

    NASA Astrophysics Data System (ADS)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3%; remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  12. DLP™-based dichoptic vision test system

    PubMed Central

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3%; remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer’s sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events. PMID:20210457

  13. Forward Obstacle Detection System by Stereo Vision

    NASA Astrophysics Data System (ADS)

    Iwata, Hiroaki; Saneyoshi, Keiji

    Forward obstacle detection is needed to prevent car accidents. We have developed a forward obstacle detection system that achieves both good detectability and accurate distance estimates using stereo vision alone. The system runs in real time on a stereo processing system based on a Field-Programmable Gate Array (FPGA). Road surfaces are detected so that the search space can be limited to the drivable region, and a smoothing filter is applied; both measures improve the accuracy of the distance estimates. In experiments, the system detected forward obstacles 100 m away, its distance error up to 80 m was less than 1.5 m, and it immediately detected cutting-in objects.
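    The distance accuracy reported here is consistent with the standard stereo geometry, in which depth follows from focal length, baseline, and disparity, and depth error grows quadratically with range. The sketch below is generic; the focal length, baseline, and sub-pixel matching error are assumed values, not the authors' rig parameters.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px, baseline_m, depth_m, disparity_err_px):
    """Depth uncertainty grows with Z^2: dZ = Z^2 / (f * B) * dd."""
    return depth_m ** 2 / (focal_px * baseline_m) * disparity_err_px

# Hypothetical rig: 1300 px focal length, 0.35 m baseline
z = stereo_depth(1300, 0.35, disparity_px=5.69)          # roughly 80 m
err = depth_error(1300, 0.35, z, disparity_err_px=0.1)   # sub-pixel matching
```

    With these assumed numbers, a 0.1 px disparity error at 80 m yields an error of about 1.4 m, in line with the "less than 1.5 m up to 80 m" figure.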

  14. Flexible vision-based navigation system for unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Blasch, Erik P.

    1995-01-01

    A critical component of unmanned aerial vehicles is the navigation system, which provides position and velocity feedback for autonomous control. The Georgia Tech Aerial Robotics navigation system (NavSys) consists of four DVTStinger70C Integrated Vision Units (IVUs) with CCD-based panning platforms, software, and a fiducial onboard the vehicle. The IVUs independently scan for the retro-reflective bar-code fiducial while the NavSys image-processing software performs a gradient threshold followed by an image-search localization of three vertical bar-code lines. Using the (x,y) image coordinates and the CCD angles, the NavSys triangulates the fiducial's (x,y) position, differentiates for velocity, and relays the information to the helicopter controller, which independently determines the z direction with an onboard altimeter. System flexibility is demonstrated by recognition of different fiducial shapes and by night- and daytime operation, and the system is being extended to on-board and off-board navigation of aerial and ground vehicles. The navigation design provides a real-time, inexpensive, and effective system for determining the (x,y) position of the aerial vehicle, with updates generated every 51 ms (19.6 Hz) at an accuracy of approximately +/- 2.8 in.
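    The triangulation step described here can be sketched as intersecting two bearing rays from camera stations at known positions. This is a generic two-ray intersection (solved by Cramer's rule), not the NavSys implementation; the station positions and target below are hypothetical.

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect bearing rays from two camera stations (angles in radians,
    measured from the +x axis) to recover the fiducial's (x, y)."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 (2x2 linear system, Cramer's rule)
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("rays are parallel; no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Hypothetical stations at (0,0) and (10,0) sighting a target at (4,6)
x, y = triangulate((0, 0), math.atan2(6, 4), (10, 0), math.atan2(6, -6))
```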

  15. A Generic Agent Organisation Framework for Autonomic Systems

    NASA Astrophysics Data System (ADS)

    Kota, Ramachandra; Gibbins, Nicholas; Jennings, Nicholas R.

    Autonomic computing is being advocated as a tool for managing large, complex computing systems. Specifically, self-organisation provides a suitable approach for developing such autonomic systems by incorporating self-management and adaptation properties into large-scale distributed systems. To aid in this development, this paper details a generic problem-solving agent organisation framework that can act as a modelling and simulation platform for autonomic systems. Our framework describes a set of service-providing agents accomplishing tasks through social interactions in dynamically changing organisations. We particularly focus on the organisational structure as it can be used as the basis for the design, development and evaluation of generic algorithms for self-organisation and other approaches towards autonomic systems.

  16. Function-based design process for an intelligent ground vehicle vision system

    NASA Astrophysics Data System (ADS)

    Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.

    2010-10-01

    An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
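    A ray-casting pass of the kind described can be sketched as sweeping rays outward over an occupancy grid and keeping the headings that remain obstacle-free. This is a generic illustration, not the competition robot's algorithm; the grid, ray count, and range are invented for the example.

```python
import math

def clear_headings(grid, start, max_range, n_rays=36):
    """Cast rays over an occupancy grid (1 = obstacle) and return the
    headings (radians) that stay obstacle-free out to max_range cells."""
    rows, cols = len(grid), len(grid[0])
    open_dirs = []
    for k in range(n_rays):
        theta = 2 * math.pi * k / n_rays
        blocked = False
        for r in range(1, max_range + 1):
            i = int(round(start[0] + r * math.sin(theta)))
            j = int(round(start[1] + r * math.cos(theta)))
            if not (0 <= i < rows and 0 <= j < cols) or grid[i][j]:
                blocked = True
                break
        if not blocked:
            open_dirs.append(theta)
    return open_dirs

# Toy 7x7 grid with a wall along the top row; robot at the centre
grid = [[0] * 7 for _ in range(7)]
for j in range(7):
    grid[0][j] = 1
paths = clear_headings(grid, start=(3, 3), max_range=3)
```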

  17. Robot vision system programmed in Prolog

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Hack, Ralf

    1995-10-01

    This is the latest in a series of publications which develop the theme of programming a machine vision system using the artificial intelligence language Prolog. The article states the long-term objective of the research program of which this work forms part. Many but not yet all of the goals laid out in this plan have already been achieved in an integrated system, which uses a multi-layer control hierarchy. The purpose of the present paper is to demonstrate that a system based upon a Prolog controller is capable of making complex decisions and operating a standard robot. The authors chose, as a vehicle for this exercise, the task of playing dominoes against a human opponent. This game was selected for this demonstration since it models a range of industrial assembly tasks, where parts are to be mated together. (For example, a 'daisy chain' of electronic equipment and the interconnecting cables/adapters may be likened to a chain of dominoes.)

  18. Physics Simulation Software for Autonomous Propellant Loading and Gas House Autonomous System Monitoring

    NASA Technical Reports Server (NTRS)

    Regalado Reyes, Bjorn Constant

    2015-01-01

    1. Kennedy Space Center (KSC) is developing a mobile launching system with autonomous propellant loading capabilities for liquid-fueled rockets. An autonomous system will be responsible for monitoring and controlling the storage, loading and transferring of cryogenic propellants. The Physics Simulation Software will reproduce the sensor data seen during the delivery of cryogenic fluids including valve positions, pressures, temperatures and flow rates. The simulator will provide insight into the functionality of the propellant systems and demonstrate the effects of potential faults. This will provide verification of the communications protocols and the autonomous system control. 2. The High Pressure Gas Facility (HPGF) stores and distributes hydrogen, nitrogen, helium and high pressure air. The hydrogen and nitrogen are stored in cryogenic liquid state. The cryogenic fluids pose several hazards to operators and the storage and transfer equipment. Constant monitoring of pressures, temperatures and flow rates is required in order to maintain the safety of personnel and equipment during the handling and storage of these commodities. The Gas House Autonomous System Monitoring software will be responsible for constantly observing and recording sensor data, identifying and predicting faults and relaying hazard and operational information to the operators.
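    A sensor simulator of the sort described can be sketched as a generator of noisy readings with an injectable fault. The pressure level, noise magnitude, and leak rate below are invented for illustration and are not KSC values or the actual simulator's interface.

```python
import random

def simulate_pressure(n_steps, nominal_kpa=300.0, noise_kpa=2.0,
                      fault_at=None, seed=0):
    """Generate synthetic pressure readings; from step `fault_at` onward,
    a leak fault bleeds pressure down 5 kPa per step (values hypothetical)."""
    rng = random.Random(seed)
    readings, p = [], nominal_kpa
    for t in range(n_steps):
        if fault_at is not None and t >= fault_at:
            p -= 5.0                      # leak: steady pressure decay
        readings.append(p + rng.gauss(0.0, noise_kpa))
    return readings

healthy = simulate_pressure(20)
faulty = simulate_pressure(20, fault_at=10)   # leak injected at step 10
```

    Feeding both traces to a monitoring component is one way to verify that fault detection triggers on the decaying trace but not on the healthy one.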

  19. Conducting IPN actuators for biomimetic vision system

    NASA Astrophysics Data System (ADS)

    Festin, Nicolas; Plesse, Cedric; Chevrot, Claude; Teyssié, Dominique; Pirim, Patrick; Vidal, Frederic

    2011-04-01

    In recent years, many studies on electroactive polymer (EAP) actuators have been reported. One promising technology is the elaboration of electronic conducting polymer based actuators with an Interpenetrating Polymer Network (IPN) architecture. Their many advantageous properties, such as low working voltage, light weight and long lifetime (several million cycles), make them very attractive for various applications, including robotics. Our laboratory recently synthesized new conducting IPN actuators based on high-molecular-weight nitrile butadiene rubber, a poly(ethylene oxide) derivative and poly(3,4-ethylenedioxythiophene). The presence of the elastomer greatly improves actuator performance, such as mechanical resistance and output force. In this article we present the IPN and actuator synthesis, characterization and design allowing their integration in a biomimetic vision system.

  20. Vision-based registration for augmented reality system using monocular and binocular vision

    NASA Astrophysics Data System (ADS)

    Vallerand, Steve; Kanbara, Masayuki; Yokoya, Naokazu

    2003-05-01

    In vision-based augmented reality systems, the relation between the real and virtual worlds needs to be estimated to perform the registration of virtual objects. This paper proposes a vision-based registration method for video see-through augmented reality systems using binocular cameras, which increases the quality of the registration performed using three points of a known marker. The originality of this work is the use of both monocular and stereoscopic vision-based techniques to complete the registration. Also, a method that corrects the 2D image positions of the marker points is proposed; the correction improves the stability and accuracy of the system's registration. The stability of the registration obtained with the proposed method, with and without the correction, is compared to that of standard stereoscopic registration.

  1. Formal Methods for Autonomic and Swarm-based Systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Vanderbilt, Amy; Hinchey, Mike; Truszkowski, Walt; Rash, James

    2004-01-01

    Swarms of intelligent rovers and spacecraft are being considered for a number of future NASA missions. These missions will provide NASA scientists and explorers greater flexibility and the chance to gather more science than traditional single-spacecraft missions. These swarms of spacecraft are intended to operate for long periods of time without contact with the Earth. To do this, they must be highly autonomous, have autonomic properties and utilize sophisticated artificial intelligence. The Autonomous Nano Technology Swarm (ANTS) mission is an example of the swarm type of mission NASA is considering. This mission will explore the asteroid belt using an insect colony analogy, cataloging the mass, density, morphology, and chemical composition of the asteroids, including any anomalous concentrations of specific minerals. Verifying such a system would be a huge task. This paper discusses ongoing work to develop a formal method for verifying swarm and autonomic systems.

  2. Hi-Vision telecine system using pickup tube

    NASA Astrophysics Data System (ADS)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  3. Flight Test Comparison Between Enhanced Vision (FLIR) and Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-01-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  4. Technological process supervising using vision systems cooperating with the LabVIEW vision builder

    NASA Astrophysics Data System (ADS)

    Hryniewicz, P.; Banaś, W.; Gwiazda, A.; Foit, K.; Sękala, A.; Kost, G.

    2015-11-01

    One of the most important tasks in the production process is to supervise its proper functioning. Lack of the required supervision over the production process can lead to incorrect manufacturing of the final element, to production line downtime, and hence to financial losses; the worst outcome is damage to the equipment involved in the manufacturing process. Engineers supervising the correctness of the production flow use a wide range of sensors to support the supervision of a manufactured element. Vision systems are one such family of sensors. In recent years, thanks to the accelerated development of electronics as well as easier access to electronic products and attractive prices, they have become a cheap and universal type of sensor. These sensors detect practically all objects, regardless of their shape or even state of matter; the only difficulties arise with transparent or mirrored objects viewed from an unfavorable angle. By integrating the vision system with LabVIEW Vision and the LabVIEW Vision Builder, it is possible to determine not only the position of a given element but also its orientation relative to any point in the analyzed space. The paper presents an example of automated inspection of the manufacturing process in a production workcell using the vision supervising system. The aim of the work is to elaborate a vision system that could integrate different applications and devices used in different production systems to control the manufacturing process.

  5. Flight test comparison between enhanced vision (FLIR) and synthetic vision systems

    NASA Astrophysics Data System (ADS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-05-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  6. Nuclear bimodal new vision solar system missions

    SciTech Connect

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    This paper presents an analysis of the potential mission capability using space reactor bimodal systems for planetary missions. Missions of interest include the Main belt asteroids, Jupiter, Saturn, Neptune, and Pluto. The space reactor bimodal system, defined by an Air Force study for Earth orbital missions, provides 10 kWe power, 1000 N thrust, 850 s Isp, with a 1500 kg system mass. Trajectories to the planetary destinations were examined and optimal direct and gravity-assisted trajectories were selected. A conceptual design for a spacecraft using the space reactor bimodal system for propulsion and power, capable of performing the missions of interest, is defined. End-to-end mission conceptual designs for bimodal orbiter missions to Jupiter and Saturn are described. All missions considered use the Delta 3 class or Atlas 2AS launch vehicles. The space reactor bimodal power and propulsion system offers both new-vision "constellation"-type missions, in which the space reactor bimodal spacecraft acts as a carrier and communication spacecraft for a fleet of microspacecraft deployed at different scientific targets, and conventional missions with only a space reactor bimodal spacecraft and its science payload. © 1996 American Institute of Physics.

  7. Achieving safe autonomous landings on Mars using vision-based approaches

    NASA Technical Reports Server (NTRS)

    Pien, Homer

    1992-01-01

    Autonomous landing capabilities will be critical to the success of planetary exploration missions, and in particular to the exploration of Mars. Past studies have indicated that the probability of failure associated with open-loop landings is unacceptably high. Two approaches to achieving autonomous landings with higher probabilities of success are currently under analysis. If a landing site has been certified as hazard free, then navigational aids can be used to facilitate a precision landing. When only limited surface knowledge is available and landing areas cannot be certified as hazard free, then a hazard detection and avoidance approach can be used, in which the vehicle selects hazard free landing sites in real-time during its descent. Issues pertinent to both approaches, including sensors and algorithms, are presented. Preliminary results indicate that one promising approach to achieving high accuracy precision landing is to correlate optical images of the terrain acquired during the terminal descent phase with a reference image. For hazard detection scenarios, a sensor suite comprised of a passive intensity sensor and a laser ranging sensor appears promising as a means of achieving robust landings.
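    The image-correlation approach to precision landing described above can be illustrated by locating a small descent-image patch inside a reference map. The sketch below uses a sum-of-absolute-differences score as a crude stand-in for the correlation step; the terrain values and patch are arbitrary toy data, not mission imagery.

```python
def best_match(reference, patch):
    """Locate `patch` in `reference` by minimising the sum of absolute
    differences over all placements (a simple template-matching score)."""
    ph, pw = len(patch), len(patch[0])
    best, best_score = None, float("inf")
    for i in range(len(reference) - ph + 1):
        for j in range(len(reference[0]) - pw + 1):
            score = sum(abs(reference[i + di][j + dj] - patch[di][dj])
                        for di in range(ph) for dj in range(pw))
            if score < best_score:
                best, best_score = (i, j), score
    return best

# Toy 5x5 terrain map; the descent image is the 2x2 block at (2, 1)
terrain = [[1, 2, 3, 4, 5],
           [2, 9, 1, 0, 3],
           [7, 4, 6, 2, 8],
           [0, 5, 3, 9, 1],
           [6, 2, 8, 4, 7]]
patch = [[4, 6], [5, 3]]
loc = best_match(terrain, patch)   # -> (2, 1)
```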

  8. Intelligent Computer Vision System for Automated Classification

    SciTech Connect

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-21

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPtauS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.

  9. Development of an Automatic Identification System Autonomous Positioning System

    PubMed Central

    Hu, Qing; Jiang, Yi; Zhang, Jingbo; Sun, Xiaowen; Zhang, Shufang

    2015-01-01

    In order to overcome the vulnerability of the global navigation satellite system (GNSS) and provide robust position, navigation and time (PNT) information in marine navigation, the autonomous positioning system based on ranging-mode Automatic Identification System (AIS) is presented in the paper. The principle of the AIS autonomous positioning system (AAPS) is investigated, including the position algorithm, the signal measurement technique, the geometric dilution of precision, the time synchronization technique and the additional secondary factor correction technique. In order to validate the proposed AAPS, a verification system has been established in the Xinghai sea region of Dalian (China). Static and dynamic positioning experiments are performed. The original function of the AIS in the AAPS is not influenced. The experimental results show that the positioning precision of the AAPS is better than 10 m in the area with good geometric dilution of precision (GDOP) by the additional secondary factor correction technology. This is the most economical solution for a land-based positioning system to complement the GNSS for the navigation safety of vessels sailing along coasts. PMID:26569258
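    The role of geometric dilution of precision mentioned above can be illustrated directly: for 2-D ranging, the horizontal DOP (the planar analogue of GDOP) follows from the unit line-of-sight vectors to the ranging stations. The sketch below uses made-up station geometries, not the Xinghai test configuration.

```python
import math

def horizontal_dop(unit_vectors):
    """Horizontal dilution of precision from 2-D unit line-of-sight
    vectors to the ranging stations: sqrt(trace((H^T H)^-1))."""
    a = sum(u[0] * u[0] for u in unit_vectors)
    b = sum(u[0] * u[1] for u in unit_vectors)
    d = sum(u[1] * u[1] for u in unit_vectors)
    det = a * d - b * b
    if abs(det) < 1e-12:
        raise ValueError("degenerate geometry")
    return math.sqrt((a + d) / det)   # trace of the 2x2 inverse of H^T H

# Well-spread stations (good geometry) vs nearly collinear ones (poor)
good = horizontal_dop([(1, 0), (0, 1), (-1, 0), (0, -1)])
poor = horizontal_dop([(1.0, 0.0), (0.99, 0.14), (0.99, -0.14)])
```

    The nearly collinear geometry inflates the DOP severalfold, which is why the same range-measurement precision yields much worse position fixes in areas with poor station geometry.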

  10. Development of an Automatic Identification System Autonomous Positioning System.

    PubMed

    Hu, Qing; Jiang, Yi; Zhang, Jingbo; Sun, Xiaowen; Zhang, Shufang

    2015-01-01

    In order to overcome the vulnerability of the global navigation satellite system (GNSS) and provide robust position, navigation and time (PNT) information in marine navigation, the autonomous positioning system based on ranging-mode Automatic Identification System (AIS) is presented in the paper. The principle of the AIS autonomous positioning system (AAPS) is investigated, including the position algorithm, the signal measurement technique, the geometric dilution of precision, the time synchronization technique and the additional secondary factor correction technique. In order to validate the proposed AAPS, a verification system has been established in the Xinghai sea region of Dalian (China). Static and dynamic positioning experiments are performed. The original function of the AIS in the AAPS is not influenced. The experimental results show that the positioning precision of the AAPS is better than 10 m in the area with good geometric dilution of precision (GDOP) by the additional secondary factor correction technology. This is the most economical solution for a land-based positioning system to complement the GNSS for the navigation safety of vessels sailing along coasts. PMID:26569258

  11. Defence R&D Canada's autonomous intelligent systems program

    NASA Astrophysics Data System (ADS)

    Digney, Bruce L.; Hubbard, Paul; Gagnon, Eric; Lauzon, Marc; Rabbath, Camille; Beckman, Blake; Collier, Jack A.; Penzes, Steven G.; Broten, Gregory S.; Monckton, Simon P.; Trentini, Michael; Kim, Bumsoo; Farell, Philip; Hopkin, Dave

    2004-09-01

    Defence Research and Development Canada (DRDC) has been given strategic direction to pursue research to increase the independence and effectiveness of military vehicles and systems. This has led to the creation of the Autonomous Intelligent Systems (AIS) program, which is notionally divided into air, land and marine vehicle systems as well as command, control and decision support systems. This paper presents an overarching description of AIS research issues, challenges and directions, as well as a nominal path that vehicle intelligence will take. The AIS program requires very close coordination between research and implementation on real vehicles, and this paper briefly discusses the symbiotic relationship between intelligence algorithms and implementation mechanisms. Also presented is representative work from two vehicle-specific research programs: the Autonomous Air Systems program discusses the development of effective cooperative control of multiple air vehicles, and the Autonomous Land Systems program discusses its developments in platform and ground-vehicle intelligence.

  12. Enhanced Flight Vision Systems and Synthetic Vision Systems for NextGen Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.

    2013-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with equivalent efficiency as visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory standards and design guidance to support introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision from 100 feet above the touchdown zone elevation to touchdown and rollout in visibilities as low as 1000 feet RVR appears to be viable as touchdown performance was acceptable without any apparent workload penalties. A lower DH of 150 feet and/or possibly reduced visibility minima using SVS appears to be viable when implemented on a Head-Up Display, but the landing data suggests further study for head-down implementations.

  13. Differential responses of components of the autonomic nervous system.

    PubMed

    Goldstein, David S

    2013-01-01

    This chapter conveys several concepts and points of view about the scientific and medical significance of differential alterations in activities of components of the autonomic nervous system in stress and disease. The use of terms such as "the autonomic nervous system," "autonomic failure," "dysautonomia," and "autonomic dysfunction" imply the existence of a single entity; however, the autonomic nervous system has functionally and neurochemically distinctive components, which are reflected in differential responses to stressors and differential involvement in pathophysiologic states. One can conceptualize the autonomic nervous system as having at least five components: the sympathetic noradrenergic system, the sympathetic cholinergic system, the parasympathetic cholinergic system, the sympathetic adrenergic system, and the enteric nervous system. Evidence has accumulated for differential noradrenergic vs. adrenergic responses in various situations. The largest sympathetic adrenergic system responses are seen when the organism encounters stressors that pose a global or metabolic threat. Sympathetic noradrenergic system activation dominates the responses to orthostasis, moderate exercise, and exposure to cold, whereas sympathetic adrenergic system activation dominates those to glucoprivation and emotional distress. There seems to be at least as good a justification for the concept of coordinated adrenocortical-adrenomedullary responses as for coordinated adrenomedullary-sympathoneural responses in stress. Fainting reactions involve differential adrenomedullary hormonal vs. sympathetic noradrenergic activation. Parkinson disease entails relatively selective dysfunction of the sympathetic noradrenergic system, with prominent loss of noradrenergic nerves in the heart, yet normal adrenomedullary function. Allostatic load links stress with degenerative diseases, and Parkinson disease may be a disease of the elderly because of allostatic load. PMID:24095112

  14. The Function of the Autonomic Nervous System during Spaceflight

    PubMed Central

    Mandsager, Kyle Timothy; Robertson, David; Diedrich, André

    2015-01-01

    Introduction Despite decades of study, a clear understanding of autonomic nervous system activity in space remains elusive. Differential interpretation of fundamental data have driven divergent theories of sympathetic activation and vasorelaxation. Methods This paper will review the available in-flight autonomic and hemodynamic data in an effort to resolve these discrepancies. The NASA NEUROLAB mission, the most comprehensive assessment of autonomic function in microgravity to date, will be highlighted. The mechanisms responsible for altered autonomic activity during spaceflight, which include the effects of hypovolemia, cardiovascular deconditioning, and altered central processing, will be presented. Results The NEUROLAB experiments demonstrated increased sympathetic activity and impairment of vagal baroreflex function during short-duration spaceflight. Subsequent non-invasive studies of autonomic function during spaceflight have largely reinforced these findings, and provide strong evidence that sympathetic activity is increased in space relative to the supine position on Earth. Others have suggested that microgravity induces a state of relative vasorelaxation and increased vagal activity when compared to upright posture on Earth. These ostensibly disparate theories are not mutually exclusive, but rather directly reflect different pre-flight postural controls. Conclusion When these results are taken together, they demonstrate that the effectual autonomic challenge of spaceflight is small, and represents an orthostatic stress less than that of upright posture on Earth. In-flight countermeasures, including aerobic and resistance exercise, as well as short-arm centrifugation have been successfully deployed to counteract these mechanisms. Despite subtle changes in autonomic activity during spaceflight, underlying neurohumoral mechanisms of the autonomic nervous system remain intact and cardiovascular function remains stable during long-duration flight. PMID:25820827

  15. Improving Human/Autonomous System Teaming Through Linguistic Analysis

    NASA Technical Reports Server (NTRS)

    Meszaros, Erica L.

    2016-01-01

    An area of increasing interest for the next generation of aircraft is autonomy and the integration of increasingly autonomous systems into the national airspace. Such integration requires humans to work closely with autonomous systems, forming human and autonomous agent teams. The intention behind such teaming is that a team composed of both humans and autonomous agents will operate better than homogeneous teams. Procedures exist for licensing pilots to operate in the national airspace system, and current work is being done to define methods for validating the function of autonomous systems; however, there is no method in place for assessing the interaction of these two disparate systems. Moreover, these systems are currently operated primarily by subject matter experts, limiting their use and the benefits of such teams. Providing additional information about the ongoing mission to the operator can lead to increased usability and allow for operation by non-experts. Linguistic analysis of the context of verbal communication provides insight into the intended meaning of commonly heard phrases such as "What's it doing now?" Analyzing the semantic sphere surrounding these common phrases enables the prediction of the operator's intent and allows the interface to supply the operator's desired information.
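
    The phrase-to-intent idea described above can be sketched with a toy matcher: an operator utterance is compared against a small table of known phrases by word overlap, and the closest phrase's associated display information is surfaced. The phrase/intent table and the Jaccard-overlap choice are illustrative assumptions, not the paper's method.

```python
# Toy intent matcher: map an operator utterance to the closest known
# phrase by word overlap (Jaccard similarity). The INTENTS table and
# the similarity measure are invented for illustration.
def jaccard(a, b):
    """Word-set overlap between two phrases, in [0, 1]."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

INTENTS = {
    "what's it doing now": "show current autonomous-system action",
    "where is it going": "show planned route",
    "why did it stop": "show last fault or hold condition",
}

def predict_intent(utterance):
    """Return the known phrase most similar to the utterance."""
    return max(INTENTS, key=lambda phrase: jaccard(utterance, phrase))

intent = predict_intent("what is it doing right now")
info = INTENTS[intent]  # information the interface would display
```

A production system would replace the word-overlap measure with a richer semantic model, but the lookup structure is the same.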

  16. Intensity measurement of automotive headlamps using a photometric vision system

    NASA Astrophysics Data System (ADS)

    Patel, Balvant; Cruz, Jose; Perry, David L.; Himebaugh, Frederic G.

    1996-01-01

    Requirements for automotive headlamp luminous intensity tests are introduced. The rationale for developing a non-goniometric photometric test system is discussed. The design of the Ford photometric vision system (FPVS) is presented, including hardware, software, calibration, and system use. Directional intensity plots and regulatory test results obtained from the system are compared to corresponding results obtained from a Ford goniometric test system. Sources of error for the vision system and goniometer are discussed. Directions for new work are identified.

  17. Vision-based real-time obstacle detection and tracking for autonomous vehicle guidance

    NASA Astrophysics Data System (ADS)

    Yang, Ming; Yu, Qian; Wang, Hong; Zhang, Bo

    2002-03-01

    The ability to detect and track obstacles is essential for safe visual guidance of autonomous vehicles, especially in urban environments. In this paper, we first overview different plane projective transformation (PPT) based obstacle detection approaches under the planar ground assumption. Then, we give a simple proof of this approach with relative affine, a unified framework that includes the Euclidean, projective, and affine frameworks by generalization and specialization. Next, we present a real-time hybrid obstacle detection method, which combines the PPT-based method with the region segmentation based method to provide more accurate locations of obstacles. Finally, with the vehicle's position information, a Kalman filter is applied to track obstacles from frame to frame. This method has been tested on THMR-V (Tsinghua Mobile Robot V). Through various experiments we successfully demonstrate its real-time performance, high accuracy, and high robustness.
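
    The frame-to-frame tracking step the abstract mentions can be sketched with a constant-velocity Kalman filter over obstacle position. The state model, noise parameters, and simulated measurements below are illustrative assumptions; the paper's actual filter design is not given here.

```python
# Sketch of obstacle tracking with a constant-velocity Kalman filter.
# State is [x, y, vx, vy]; each frame supplies a noisy (x, y) fix.
import numpy as np

def make_cv_kalman(dt=0.1, q=1e-2, r=0.5):
    """Build constant-velocity model matrices (illustrative noise levels)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # observe position only
    return F, H, q * np.eye(4), r * np.eye(2)

def kalman_step(x, P, z, F, H, Q, R):
    # Predict forward one frame.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the obstacle position measured in this frame.
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a simulated obstacle moving at constant velocity.
rng = np.random.default_rng(0)
F, H, Q, R = make_cv_kalman()
x_est, P_est = np.zeros(4), np.eye(4)
true_pos, true_vel = np.array([0.0, 0.0]), np.array([1.0, 0.5])
for _ in range(50):
    true_pos = true_pos + true_vel * 0.1
    z = true_pos + rng.normal(0, 0.3, size=2)  # noisy detection
    x_est, P_est = kalman_step(x_est, P_est, z, F, H, Q, R)

err = float(np.linalg.norm(x_est[:2] - true_pos))
```

After a few dozen frames the filtered position error falls well below the per-frame measurement noise, which is the point of tracking rather than trusting each detection alone.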

  18. A design strategy for autonomous systems

    NASA Technical Reports Server (NTRS)

    Forster, Pete

    1989-01-01

    Some solutions to crucial issues regarding the competent performance of an autonomously operating robot are identified; namely, that of handling multiple and variable data sources containing overlapping information and maintaining coherent operation while responding adequately to changes in the environment. Support for the ideas developed for the construction of such behavior are extracted from speculations in the study of cognitive psychology, an understanding of the behavior of controlled mechanisms, and the development of behavior-based robots in a few robot research laboratories. The validity of these ideas is supported by some simple simulation experiments in the field of mobile robot navigation and guidance.

  19. Advances in Autonomous Systems for Missions of Space Exploration

    NASA Astrophysics Data System (ADS)

    Gross, A. R.; Smith, B. D.; Briggs, G. A.; Hieronymus, J.; Clancy, D. J.

    New missions of space exploration will require unprecedented levels of autonomy to successfully accomplish their objectives. Both inherent complexity and communication distances will preclude levels of human involvement common to current and previous space flight missions. With exponentially increasing capabilities of computer hardware and software, including networks and communication systems, a new balance of work is being developed between humans and machines. This new balance holds the promise of meeting the greatly increased space exploration requirements, along with dramatically reduced design, development, test, and operating costs. New information technologies, which take advantage of knowledge-based software, model-based reasoning, and high performance computer systems, will enable the development of a new generation of design and development tools, schedulers, and vehicle and system health monitoring and maintenance capabilities. Such tools will provide a degree of machine intelligence and associated autonomy that has previously been unavailable. These capabilities are critical to the future of space exploration, since such missions impose demanding science and operational requirements while budgetary constraints limit the ability to monitor and control these missions with a standing army of ground-based controllers. System autonomy capabilities have made great strides in recent years, for both ground and space flight applications. Autonomous systems have flown on advanced spacecraft, providing new levels of spacecraft capability and mission safety. Such systems operate by utilizing model-based reasoning that provides the capability to work from high-level mission goals, while deriving the detailed system commands internally, rather than having to have such commands transmitted from Earth. This enables missions of such complexity and communications distance as are not otherwise possible, as well as many more efficient and low cost

  20. Autonomic Cluster Management System (ACMS): A Demonstration of Autonomic Principles at Work

    NASA Technical Reports Server (NTRS)

    Baldassari, James D.; Kopec, Christopher L.; Leshay, Eric S.; Truszkowski, Walt; Finkel, David

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of achieving significant computational capabilities for high-performance computing applications, while simultaneously affording the ability to increase that capability simply by adding more (inexpensive) processors. However, the task of manually managing and configuring a cluster quickly becomes impossible as the cluster grows in size. Autonomic computing is a relatively new approach to managing complex systems that can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management.

  1. ROVER: A prototype active vision system

    NASA Astrophysics Data System (ADS)

    Coombs, David J.; Marsh, Brian D.

    1987-08-01

    The Roving Eyes project is an experiment in active vision. We present the design and implementation of a prototype that tracks colored balls in images from an on-line charge coupled device (CCD) camera. Rover is designed to keep up with its rapidly changing environment by handling best and average case conditions and ignoring the worst case. This allows Rover's techniques to be less sophisticated and consequently faster. Each of Rover's major functional units is relatively isolated from the others, and an executive which knows all the functional units directs the computation by deciding which jobs would be most effective to run. This organization is realized with a priority queue of jobs and their arguments. Rover's structure not only allows it to adapt its strategy to the environment, but also makes the system extensible. A capability can be added to the system by adding a functional module with a well defined interface and by modifying the executive to make use of the new module. The current implementation is discussed in the appendices.
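
    Rover's executive organization, a priority queue of jobs from which the most effective job is run next, can be sketched in a few lines. The module names and priorities below are invented for illustration; only the queue-driven structure follows the abstract.

```python
# Sketch of a ROVER-style executive: functional units submit jobs with
# priorities, and the executive always runs the most urgent job next.
import heapq
import itertools

class Executive:
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # tie-breaker for equal priorities

    def submit(self, priority, job, *args):
        # heapq is a min-heap, so negate priority: larger = more urgent.
        heapq.heappush(self._queue, (-priority, next(self._counter), job, args))

    def run(self):
        """Drain the queue in priority order, collecting job results."""
        results = []
        while self._queue:
            _, _, job, args = heapq.heappop(self._queue)
            results.append(job(*args))
        return results

executive = Executive()
executive.submit(1, lambda: "log telemetry")   # background work
executive.submit(10, lambda: "track ball")     # time-critical work
executive.submit(5, lambda: "grab frame")
order = executive.run()  # → ['track ball', 'grab frame', 'log telemetry']
```

Adding a capability is then just registering another job-producing module; the executive itself is unchanged, which mirrors the extensibility claim in the abstract.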

  2. Systems, methods and apparatus for quiesence of autonomic systems with self action

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Sterritt, Roy (Inventor)

    2011-01-01

    Systems, methods and apparatus are provided in which an autonomic unit or element is quiesced. A quiesce component of an autonomic unit can cause the autonomic unit to self-destruct if a stay-alive reprieve signal is not received after a predetermined time.
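
    The stay-alive mechanism can be sketched as a deadline check: the unit quiesces itself unless a reprieve signal arrives within the predetermined window. The class and timing values below are illustrative assumptions, not the patent's implementation; time is passed in explicitly so the behavior is deterministic.

```python
# Sketch of self-quiescence via a stay-alive reprieve signal: if no
# reprieve arrives within the window, the unit quiesces itself.
class AutonomicUnit:
    def __init__(self, reprieve_window):
        self.reprieve_window = reprieve_window  # predetermined time (s)
        self.last_reprieve = 0.0
        self.quiesced = False

    def receive_stay_alive(self, now):
        """Record a reprieve signal, resetting the deadline."""
        self.last_reprieve = now

    def tick(self, now):
        """Self-quiesce if the reprieve window has elapsed."""
        if now - self.last_reprieve > self.reprieve_window:
            self.quiesced = True
        return self.quiesced

unit = AutonomicUnit(reprieve_window=5.0)
unit.receive_stay_alive(now=1.0)
alive_at_4 = not unit.tick(now=4.0)   # 3 s since reprieve: still alive
quiesced_at_8 = unit.tick(now=8.0)    # 7 s since reprieve: quiesces
```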

  3. Autonomous rendezvous and feature detection system using TV imagery

    NASA Technical Reports Server (NTRS)

    Rice, R. B., Jr.

    1977-01-01

    Algorithms and equations are used for conversion of standard television imaging system information into directly usable spatial and dimensional information. System allows utilization of spacecraft imagery system as sensor in application to operations such as deriving spacecraft steering signal, tracking, autonomous rendezvous and docking and ranging.

  4. Autonomous navigation system based on GPS and magnetometer data

    NASA Technical Reports Server (NTRS)

    Julie, Thienel K. (Inventor); Richard, Harman R. (Inventor); Bar-Itzhack, Itzhack Y. (Inventor)

    2004-01-01

    This invention is drawn to an autonomous navigation system using the Global Positioning System (GPS) and magnetometers for low Earth orbit satellites. As a magnetometer is reliable and always provides information on spacecraft attitude, rate, and orbit, the magnetometer-GPS configuration solves the GPS initialization problem, decreasing the convergence time for the navigation estimate and improving the overall accuracy. Eventually the magnetometer-GPS configuration enables the system to avoid a costly and inherently less reliable gyro for rate estimation. Being autonomous, this invention would provide for black-box spacecraft navigation, producing attitude, orbit, and rate estimates without any ground input, with high accuracy and reliability.

  5. Demonstration of a Semi-Autonomous Hybrid Brain-Machine Interface using Human Intracranial EEG, Eye Tracking, and Computer Vision to Control a Robotic Upper Limb Prosthetic

    PubMed Central

    McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.

    2014-01-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914
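
    The "balanced accuracy" the study reports is the mean of sensitivity and specificity, which is informative when movement and non-movement epochs are imbalanced. A minimal sketch of the metric follows, using made-up labels rather than the study's data.

```python
# Balanced accuracy = (sensitivity + specificity) / 2, computed from
# binary labels. The example labels below are invented for illustration.
def balanced_accuracy(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)  # fraction of movements detected
    specificity = tn / (tn + fp)  # fraction of rest epochs kept quiet
    return (sensitivity + specificity) / 2

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
score = balanced_accuracy(y_true, y_pred)  # (0.75 + 2/3) / 2 ≈ 0.708
```

Unlike raw accuracy, this score stays at 0.5 for a classifier that always predicts the majority class, which is why it is the appropriate chance baseline here.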

  6. Demonstration of a semi-autonomous hybrid brain-machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic.

    PubMed

    McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E

    2014-07-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914

  7. High Speed Research - External Vision System (EVS)

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Imagine flying a supersonic passenger jet (like the Concorde) at 1500 mph with no front windows in the cockpit - it may one day be a reality, as seen in this animation still. NASA engineers are working to develop technology that would replace the forward cockpit windows in future supersonic passenger jets with large sensor displays. These displays would use video images, enhanced by computer-generated graphics, to take the place of the view out the front windows. The envisioned eXternal Visibility System (XVS) would guide pilots to an airport, warn them of other aircraft near their path, and provide additional visual aids for airport approaches, landings and takeoffs. Currently, supersonic transports like the Anglo-French Concorde droop the front of the jet (the 'nose') downward to allow the pilots to see forward during takeoffs and landings. By enhancing the pilots' vision with high-resolution video displays, future supersonic transport designers could eliminate the heavy and expensive, mechanically-drooped nose. A future U.S. supersonic passenger jet, as envisioned by NASA's High-Speed Research (HSR) program, would carry 300 passengers more than 5000 nautical miles at more than 1500 miles per hour (more than twice the speed of sound). Traveling from Los Angeles to Tokyo would take only four hours, with an anticipated fare increase of only 20 percent over current ticket prices for substantially slower subsonic flights. Animation by Joey Ponthieux, Computer Sciences Corporation, Inc.

  8. Lighting And Optics Expert System For Machine Vision

    NASA Astrophysics Data System (ADS)

    Novini, Amir

    1989-03-01

    Machine Vision and the field of Artificial Intelligence are both new technologies which have evolved mainly within the past decade with the growth of computers and microchips. And, although research continues, both have emerged from the experimental state to industrial reality. Today's machine vision systems are solving thousands of manufacturing problems in various industries, and the impact of Artificial Intelligence, and more specifically, the use of "Expert Systems" in industry is also being realized. This paper will examine how the two technologies can cross paths, and how an Expert System can become an important part of an overall machine vision solution. An actual example of a development of an Expert System that helps solve machine vision lighting and optics problems will be discussed. The lighting and optics Expert System was developed to assist the end user to configure the "Front End" of a vision system to help solve the overall machine vision problem more effectively, since lack of attention to lighting and optics has caused many failures of this technology. Other areas of machine vision technology where Expert Systems could apply will also be discussed.

  9. Lighting And Optics Expert System For Machine Vision

    NASA Astrophysics Data System (ADS)

    Novini, Amir

    1988-12-01

    Machine Vision and the field of Artificial Intelligence are both new technologies which have evolved mainly within the past decade with the growth of computers and microchips. And, although research continues, both have emerged from the experimental state to industrial reality. Today's machine vision systems are solving thousands of manufacturing problems in various industries, and the impact of Artificial Intelligence, and more specifically, the use of "Expert Systems" in industry is also being realized. This paper will examine how the two technologies can cross paths, and how an Expert System can become an important part of an overall machine vision solution. An actual example of a development of an Expert System that helps solve machine vision lighting and optics problems will be discussed. The lighting and optics Expert System was developed to assist the end user to configure the "Front End" of a vision system to help solve the overall machine vision problem more effectively, since lack of attention to lighting and optics has caused many failures of this technology. Other areas of machine vision technology where Expert Systems could apply will also be discussed.

  11. Space station automation study: Autonomous systems and assembly, volume 2

    NASA Technical Reports Server (NTRS)

    Bradford, K. Z.

    1984-01-01

    This final report, prepared by Martin Marietta Denver Aerospace, provides the technical results of their input to the Space Station Automation Study, the purpose of which is to develop informed technical guidance in the use of autonomous systems to implement space station functions, many of which can be programmed in advance and are well suited for automated systems.

  12. Lighting and optics expert system for machine vision

    NASA Astrophysics Data System (ADS)

    Novini, Amir R.

    1991-03-01

    Machine Vision and the field of Artificial Intelligence are both new technologies which have evolved mainly within the past decade with the growth of computers and microchips. And, although research continues, both have emerged from the experimental state to industrial reality. Today's machine vision systems are solving thousands of manufacturing problems in various industries, and the impact of Artificial Intelligence, and more specifically, the use of "Expert Systems" in industry is also being realized. This paper will examine how the two technologies can cross paths, and how an Expert System can become an important part of an overall machine vision solution. An actual example of a development of an Expert System that helps solve machine vision lighting and optics problems will be discussed. The lighting and optics Expert System was developed to assist the end user to configure the "Front End" of a vision system to help solve the overall machine vision problem more effectively, since lack of attention to lighting and optics has caused many failures of this technology. Other areas of machine vision technology where Expert Systems could apply will also be discussed.

  13. Autonomous satellite navigation with the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Fuchs, A. J.; Wooden, W. H., II; Long, A. C.

    1977-01-01

    This paper discusses the potential of using the Global Positioning System (GPS) to provide autonomous navigation capability to NASA satellites in the 1980 era. Some of the driving forces motivating autonomous navigation are presented. These include such factors as advances in attitude control systems, onboard science annotation, and onboard gridding of imaging data. Simulation results which demonstrate baseline orbit determination accuracies using GPS data on Seasat, Landsat-D, and the Solar Maximum Mission are presented. Emphasis is placed on identifying error sources such as GPS time, GPS ephemeris, user timing biases, and user orbit dynamics, and in a parametric sense on evaluating their contribution to the orbit determination accuracies.

  14. An autonomous control system for boiler-turbine units

    SciTech Connect

    Ben-Abdennour, A.; Lee, K.Y.

    1996-06-01

    Achieving a more autonomous power plant operation is an important part of power plant control. To be autonomous, a control system needs to provide adequate control actions in the presence of significant uncertainties and/or disturbances, such as actuator or component failures, with minimum or no human assistance. However, a reasonable degree of autonomy is difficult to obtain without incorporating intelligence in the control system. This paper presents a coordinated intelligent control scheme with a high degree of autonomy. In this scheme, a fuzzy-logic based supervisor monitors the overall plant operation and carries out the tasks of coordination, fault diagnosis, fault isolation, and fault accommodation.

  15. Turning a remotely controllable observatory into a fully autonomous system

    NASA Astrophysics Data System (ADS)

    Swindell, Scott; Johnson, Chris; Gabor, Paul; Zareba, Grzegorz; Kubánek, Petr; Prouza, Michael

    2014-08-01

    We describe the complex process needed to turn an existing, old, operational observatory - the Steward Observatory's 61" Kuiper Telescope - into a fully autonomous system that observes without an observer. For this purpose, we employed RTS2, an open-source, Linux-based observatory control system, together with other open-source programs and tools (GNU compilers, the Python language for scripting, JQuery UI for the Web user interface). This presentation provides a guide, with time estimates, for newcomers to the field undertaking such challenging tasks as fully autonomous observatory operations.

  16. Vision aided inertial navigation system augmented with a coded aperture

    NASA Astrophysics Data System (ADS)

    Morrison, Jamie R.

    Navigation through a three-dimensional indoor environment is a formidable challenge for an autonomous micro air vehicle. A main obstacle to indoor navigation is maintaining a robust navigation solution (i.e. air vehicle position and attitude estimates) given the inadequate access to satellite positioning information. A MEMS (micro-electro-mechanical system) based inertial navigation system provides a small, power efficient means of maintaining a vehicle navigation solution; however, unmitigated error propagation from relatively noisy MEMS sensors results in the loss of a usable navigation solution over a short period of time. Several navigation systems use camera imagery to diminish error propagation by measuring the direction to features in the environment. Changes in feature direction provide information regarding direction for vehicle movement, but not the scale of movement. Movement scale information is contained in the depth to the features. Depth-from-defocus is a classic technique proposed to derive depth from a single image that involves analysis of the blur inherent in a scene with a narrow depth of field. A challenge to this method is distinguishing blurriness caused by the focal blur from blurriness inherent to the observed scene. In 2007, MIT's Computer Science and Artificial Intelligence Laboratory demonstrated replacing the traditional rounded aperture with a coded aperture to produce a complex blur pattern that is more easily distinguished from the scene. A key to measuring depth using a coded aperture then is to correctly match the blur pattern in a region of the scene with a previously determined set of blur patterns for known depths. As the depth increases from the focal plane of the camera, the observable change in the blur pattern for small changes in depth is generally reduced. Consequently, as the depth of a feature to be measured using a depth-from-defocus technique increases, the measurement performance decreases. However, a Fresnel zone
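
    The matching step the abstract describes, comparing an observed blur pattern against a bank of blur patterns calibrated at known depths, can be sketched as a nearest-pattern search. The toy 1-D Gaussian "kernels" below stand in for real coded-aperture blur patterns; the functions and parameters are illustrative assumptions.

```python
# Toy depth-from-defocus matching: pick the calibrated depth whose
# blur pattern is closest (sum of squared differences) to the one
# observed in the image patch. Real systems use 2-D coded-aperture
# kernels; these 1-D Gaussians are stand-ins.
import math

def blur_pattern(depth, size=9):
    """Toy blur kernel whose width grows with distance from focus."""
    sigma = 0.5 + 0.4 * depth
    center = size // 2
    k = [math.exp(-((i - center) ** 2) / (2 * sigma ** 2)) for i in range(size)]
    total = sum(k)
    return [v / total for v in k]  # normalize to unit energy

def estimate_depth(observed, calibrated):
    """Return the calibrated depth whose pattern best matches (min SSD)."""
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(calibrated, key=lambda d: ssd(observed, calibrated[d]))

calibrated = {d: blur_pattern(d) for d in range(1, 6)}  # depths 1..5
observed = blur_pattern(3)  # pretend this patch came from depth 3
depth = estimate_depth(observed, calibrated)  # → 3
```

The abstract's performance caveat shows up directly in this sketch: as depth grows, neighboring calibrated kernels become nearly identical, so the SSD minimum gets shallower and the depth estimate less reliable.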

  17. Advanced helmet vision system (AHVS) integrated night vision helmet mounted display (HMD)

    NASA Astrophysics Data System (ADS)

    Ashcraft, Todd W.; Atac, Robert

    2012-06-01

    Gentex Corporation, under contract to Naval Air Systems Command (AIR 4.0T), designed the Advanced Helmet Vision System to provide aircrew with 24-hour, visor-projected binocular night vision and HMD capability. AHVS integrates numerous key technologies, including high brightness Light Emitting Diode (LED)-based digital light engines, advanced lightweight optical materials and manufacturing processes, and innovations in graphics processing software. This paper reviews the current status of miniaturization and integration with the latest two-part Gentex modular helmet, highlights the lessons learned from previous AHVS phases, and discusses plans for qualification and flight testing.

  18. Implicit numerical integration for periodic solutions of autonomous nonlinear systems

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.

    1982-01-01

    A change of variables that stabilizes numerical computations for periodic solutions of autonomous systems is derived. Computation of the period is decoupled from the rest of the problem for conservative systems of any order and for any second-order system. Numerical results are included for a second-order conservative system under a suddenly applied constant load. Near the critical load for the system, a small increment in load amplitude results in a large increase in amplitude of the response.
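
    The period-decoupling idea can be sketched with a standard time rescaling (an assumption about the general technique; the paper's exact change of variables is not given here):

```latex
% Rescale time by the unknown period T:  \tau = t / T.
% An autonomous system \dot{x} = f(x) with a T-periodic solution becomes
%   \frac{dx}{d\tau} = T\, f(x(\tau)), \qquad x(0) = x(1),
% a boundary-value problem on the fixed interval [0, 1] in which the
% period T appears as an explicit unknown parameter, decoupled from
% the integration grid.
```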

  19. Choosing the right video interface for military vision systems

    NASA Astrophysics Data System (ADS)

    Phillips, John

    2015-05-01

    This paper discusses how GigE Vision® video interfaces - the technology used to transfer data from a camera or image sensor to a mission computer or display - help designers reduce the cost and complexity of military imaging systems, while also improving usability and increasing intelligence for end-users. The paper begins with a detailed review of video connectivity approaches commonly used in military imaging systems, followed by an overview on the GigE Vision standard. With this background, the design, cost, and performance benefits that can be achieved when employing GigE Vision-compliant video interfaces in a vetronics retrofit upgrade project are outlined.

  20. Using Multimodal Input for Autonomous Decision Making for Unmanned Systems

    NASA Technical Reports Server (NTRS)

    Neilan, James H.; Cross, Charles; Rothhaar, Paul; Tran, Loc; Motter, Mark; Qualls, Garry; Trujillo, Anna; Allen, B. Danette

    2016-01-01

    Autonomous decision making in the presence of uncertainty is a deeply studied problem space, particularly in the area of autonomous systems operations for land, air, sea, and space vehicles. Various techniques ranging from single-algorithm solutions to complex ensemble classifier systems have been utilized in a research context to solve mission-critical flight decisions. Realizing such systems on actual autonomous hardware, however, is a difficult systems integration problem, constituting a majority of applied robotics development timelines. The ability to reliably and repeatedly classify objects during a vehicle's mission execution is vital if the vehicle is to mitigate both static and dynamic environmental concerns so that the mission can be completed successfully and the vehicle can operate and return safely. In this paper, the Autonomy Incubator proposes and discusses an ensemble learning and recognition system, planned for our autonomous framework, AEON, in selected domains, which fuses decision criteria using prior experience at both the individual classifier layer and the ensemble layer to mitigate environmental uncertainty during operation.
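
    The ensemble-layer fusion described above can be sketched as a weighted vote: each classifier's label is weighted by its prior reliability, and the ensemble picks the label with the most support. The labels and weights below are invented for illustration; the AEON system's actual fusion rule is not given in the abstract.

```python
# Weighted-vote decision fusion: each classifier votes a label, weighted
# by its prior reliability; the ensemble returns the best-supported label.
def fuse(votes, weights):
    """votes: class labels, weights: per-classifier reliability scores."""
    support = {}
    for label, w in zip(votes, weights):
        support[label] = support.get(label, 0.0) + w
    return max(support, key=support.get)

# Three classifiers disagree; the two more reliable ones outvote the third.
votes = ["obstacle", "obstacle", "clear"]
weights = [0.9, 0.8, 0.6]
decision = fuse(votes, weights)  # → "obstacle"
```

Updating the weights from experience (raising them after correct calls, lowering them after misses) is one simple way prior experience can enter at the ensemble layer.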

  1. Expert system issues in automated, autonomous space vehicle rendezvous

    NASA Technical Reports Server (NTRS)

    Goodwin, Mary Ann; Bochsler, Daniel C.

    1987-01-01

    The problems involved in automated autonomous rendezvous are briefly reviewed, and the Rendezvous Expert (RENEX) expert system is discussed with reference to its goals, approach used, and knowledge structure and contents. RENEX has been developed to support streamlining operations for the Space Shuttle and Space Station program and to aid definition of mission requirements for the autonomous portions of rendezvous for the Mars Surface Sample Return and Comet Nucleus Sample Return unmanned missions. The experience with RENEX to date and recommendations for further development are presented.

  2. Mamdani Fuzzy System for Indoor Autonomous Mobile Robot

    NASA Astrophysics Data System (ADS)

    Khan, M. K. A. Ahamed; Rashid, Razif; Elamvazuthi, I.

    2011-06-01

    Several control algorithms for autonomous mobile robot navigation have been proposed in the literature. Recently, the employment of non-analytical methods of computing such as fuzzy logic, evolutionary computation, and neural networks has demonstrated the utility and potential of these paradigms for intelligent control of mobile robot navigation. In this paper, a Mamdani fuzzy system for an autonomous mobile robot is developed. The paper begins with a discussion of the conventional controller, followed by a detailed description of the fuzzy logic controller.
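A fuzzy controller of the general kind discussed here can be sketched compactly. The membership functions and rules below are invented for illustration (not taken from the paper), and for brevity the consequents are singletons, a common Sugeno-style simplification of the full Mamdani centroid step:

```python
# Minimal fuzzy obstacle-avoidance controller sketch; all memberships,
# rules, and numbers are illustrative, not from the cited paper.

def near(d):
    """Membership of 'obstacle near' for distance d in metres:
    full below 0.5 m, fading out linearly by 1.5 m."""
    if d <= 0.5:
        return 1.0
    if d >= 1.5:
        return 0.0
    return (1.5 - d) / 1.0

def far(d):
    """Membership of 'obstacle far' (complement of near)."""
    return 1.0 - near(d)

def steer(d):
    """Rule 1: IF near THEN turn hard (output 1.0).
    Rule 2: IF far THEN go straight (output 0.0).
    Defuzzify by the weighted average of the rule outputs."""
    w_near, w_far = near(d), far(d)
    return (w_near * 1.0 + w_far * 0.0) / (w_near + w_far)
```

The appeal over a conventional controller is visible even in this toy: the steering command blends smoothly between the two rules as the distance crosses the fuzzy boundary, instead of switching at a hard threshold.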

  3. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  4. The Tactile Vision Substitution System: Applications in Education and Employment

    ERIC Educational Resources Information Center

    Scadden, Lawrence A.

    1974-01-01

    The Tactile Vision Substitution System converts the visual image from a narrow-angle television camera to a tactual image on a 5-inch square, 100-point display of vibrators placed against the abdomen of the blind person. (Author)

  5. Standard machine vision systems used in different industrial applications

    NASA Astrophysics Data System (ADS)

    Bruehl, Wolfgang

    1993-12-01

    Fully standardized machine vision systems do not require task-specific hardware or software development, allowing short project realization times at minimized cost. This paper describes two very different applications realized solely by menu-guided configuration of the QueCheck standard machine vision system. The first is an in-line inspection of oil-pump castings, necessary to protect the downstream machining station from damage by castings that do not meet the specified geometrical measures. The second application replaces time-consuming manual particle size analysis of fertilizer pellets with continuous analysis by a vision system; at the same time, the vision system's data can be used to optimize particle size during production.

  6. Autonomous Frequency-Domain System-Identification Program

    NASA Technical Reports Server (NTRS)

    Yam, Yeung; Mettler, Edward; Bayard, David S.; Hadaegh, Fred Y.; Milman, Mark H.; Scheid, Robert E.

    1993-01-01

    The Autonomous Frequency Domain Identification (AU-FREDI) computer program implements a system of methods, algorithms, and software developed for identifying the parameters of mathematical models of the dynamics of flexible structures and for characterizing, by use of system transfer functions, such models, dynamics, and structures regarded as systems. The software is a collection of routines modified and reassembled to suit system identification and control experiments on large flexible structures.
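The core of frequency-domain identification can be illustrated independently of AU-FREDI: excite a system, take the ratio of the output spectrum to the input spectrum, and compare the result against the model transfer function. The one-pole plant below is invented purely for the sketch; the real program handles multi-mode flexible-structure models.

```python
# Illustrative frequency-domain identification sketch (not the AU-FREDI
# algorithms): recover a transfer function as the ratio of output to
# input DFTs for a known one-pole system excited by an impulse.
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(N^2), fine for a demo)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Simulate the plant y[n] = a*y[n-1] + b*x[n], driven by a unit impulse.
a, b, N = 0.5, 1.0, 64
x = [1.0] + [0.0] * (N - 1)
y, prev = [], 0.0
for xn in x:
    prev = a * prev + b * xn
    y.append(prev)

X, Y = dft(x), dft(y)
H_est = [Y[k] / X[k] for k in range(N)]            # empirical transfer function
H_true = [b / (1 - a * cmath.exp(-2j * cmath.pi * k / N)) for k in range(N)]
```

With a richer excitation (e.g. band-limited noise) the same ratio, averaged over records, gives the empirical transfer function from which modal parameters are then fitted.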

  7. Airborne Use of Night Vision Systems

    NASA Astrophysics Data System (ADS)

    Mepham, S.

    1990-04-01

    The Mission Management Department of the Royal Aerospace Establishment has won a Queen's Award for Technology, jointly with GEC Sensors, in recognition of innovation and success in the development and application of night vision technology for fixed-wing aircraft. This work has been carried out to satisfy the operational needs of the Royal Air Force. These are seen to be: operations in the NATO Central Region; a night as well as a day capability; low-level, high-speed penetration; attack of battlefield targets, especially groups of tanks; and meeting these objectives at minimum cost. The most effective way to penetrate enemy defences is at low level, and survivability would be greatly enhanced by a first-pass attack. It is therefore most important not only that the pilot be able to fly at low level to the target, but also that he be able to detect it in sufficient time to complete a successful attack. An analysis of the average operating conditions in Central Europe during winter clearly shows that high-speed, low-level attacks can only be made for about 20 per cent of the 24 hours. Extending this into good night conditions raises the figure to 60 per cent. Whilst this is for winter conditions and the situation is better in summer, the overall advantage to be gained is clear. If our aircraft do not have this capability, the potential for the enemy to advance his troops and armour without hindrance for considerable periods is all too obvious. There are several solutions to providing such a capability. The one chosen for Tornado GR1 is Terrain Following Radar (TFR). This system provides a complete 24-hour capability, but it has two main disadvantages. First, it is an active system, which means it can be jammed or homed onto, and it is chiefly useful in attacking pre-planned targets. Second, it is an expensive system, which precludes fitting it to more than a small number of aircraft.

  8. Building Artificial Vision Systems with Machine Learning

    SciTech Connect

    LeCun, Yann

    2011-02-23

    Three questions pose the next challenge for Artificial Intelligence (AI), robotics, and neuroscience. How do we learn perception (e.g. vision)? How do we learn representations of the perceptual world? How do we learn visual categories from just a few examples?

  9. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    SciTech Connect

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under circumstances of improper operation, abnormal workloads, and malicious attacks. The scope of this research includes: (1) system-wide, unified introspection techniques for autonomic systems, (2) secure information-flow microarchitecture, (3) memory-centric security architecture, (4) authentication control and its implications for security, (5) digital rights management, and (6) microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software components toward our goal of a robust, secure, and reliable computing system.

  10. Is There Anything "Autonomous" in the Nervous System?

    ERIC Educational Resources Information Center

    Rasia-Filho, Alberto A.

    2006-01-01

    The terms "autonomous" or "vegetative" are currently used to identify one part of the nervous system composed of sympathetic, parasympathetic, and gastrointestinal divisions. However, the concepts that are under the literal meaning of these words can lead to misconceptions about the actual nervous organization. Some clear-cut examples indicate…

  11. Random attractor of non-autonomous stochastic Boussinesq lattice system

    SciTech Connect

    Zhao, Min; Zhou, Shengfan

    2015-09-15

    In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations affected by time-dependent coupled coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of random attractors as the intensity of the noise approaches zero.

  12. Panoramic stereo sphere vision

    NASA Astrophysics Data System (ADS)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panorama vision is able to "see" in all directions of the observation space, scene depth information is lost in the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information for the whole global observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, mathematical model, and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments, and attitude estimation are some of the applications which will benefit from PSSV.

  13. Single-computer HWIL simulation facility for real-time vision systems

    NASA Astrophysics Data System (ADS)

    Fuerst, Simon; Werner, Stefan; Dickmanns, Ernst D.

    1998-07-01

    UBM has been working on autonomous vision systems for aircraft for more than a decade and a half. The systems developed use standard on-board sensors and two additional monochrome cameras for state estimation of the aircraft. A common task is to detect and track a runway for an autonomous landing approach. The cameras have different focal lengths and are mounted on a special pan-and-tilt camera platform. As the platform is equipped with two resolvers and two gyros, it can be stabilized inertially, and the system has the ability to actively focus on the objects of highest interest. For verification and testing, UBM has a special HWIL simulation facility for real-time vision systems. The central part of this simulation facility is a three-axis motion simulator (DBS), used to realize the computed orientation in the rotational degrees of freedom of the aircraft. The two-axis camera platform with its two CCD cameras is mounted on the inner frame of the DBS and points at a cylindrical projection screen on which a synthetic view is displayed. As the performance of visual perception systems has increased significantly in recent years, a new, more powerful synthetic vision system was required. A single Onyx2 machine replaced all the former simulation computers. This computer is powerful enough to simulate the aircraft, generate a high-resolution synthetic view, control the DBS, and communicate with the image processing computers. Further improvements are the significantly reduced delay times for closed-loop simulations and the elimination of communication overhead.

  14. Human Factors And Safety Considerations Of Night Vision Systems Flight

    NASA Astrophysics Data System (ADS)

    Verona, Robert W.; Rash, Clarence E.

    1989-03-01

    Military aviation night vision systems greatly enhance the capability to operate during periods of low illumination. After flying with night vision devices, most aviators are apprehensive about returning to unaided night flight. Current night vision imaging devices allow aviators to fly during ambient light conditions which would be extremely dangerous, if not impossible, with unaided vision. However, the visual input afforded by these devices does not approach that experienced using the unencumbered, unaided eye during periods of daylight illumination. Many visual parameters, e.g., acuity, field-of-view, depth perception, etc., are compromised when night vision devices are used. The inherent characteristics of image-intensification-based sensors introduce new problems associated with the interpretation of visual information whose spatial and spectral content differs from that of unaided vision. In addition, the mounting of these devices on the helmet is accompanied by concerns about fatigue resulting from increased head-supported weight and a shift in center-of-gravity. All of these concerns have produced numerous human factors and safety issues relating to the use of night vision systems. These issues are identified and discussed in terms of their possible effects on user performance and safety.

  15. Central- and autonomic nervous system coupling in schizophrenia.

    PubMed

    Schulz, Steffen; Bolz, Mathias; Bär, Karl-Jürgen; Voss, Andreas

    2016-05-13

    Autonomic nervous system (ANS) dysfunction has been well described in schizophrenia (SZ), a severe mental disorder. Nevertheless, the coupling between the ANS and central brain activity has not been addressed until now in SZ. The interactions between the central nervous system (CNS) and ANS need to be considered as a feedback-feed-forward system that supports flexible and adaptive responses to specific demands. For the first time, to the best of our knowledge, this study investigates central-autonomic couplings (CAC), studying heart rate, blood pressure, and electroencephalogram in paranoid schizophrenic patients and comparing them with age- and gender-matched healthy subjects (CO). The emphasis is to determine how these couplings are composed of the different regulatory aspects of the CNS-ANS. We found that CAC were bidirectional, and that the causal influence of central activity towards systolic blood pressure was more strongly pronounced than that towards heart rate in paranoid schizophrenic patients when compared with CO. In paranoid schizophrenic patients, the central activity was a much stronger variable, being more random and having fewer rhythmic oscillatory components. This study provides a more in-depth understanding of the interplay of neuronal and autonomic regulatory processes in SZ and most likely greater insights into the complex relationship between psychotic stages and autonomic activity. PMID:27044986

  16. Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle

    PubMed Central

    Chen, Long; Li, Qingquan; Li, Ming; Zhang, Liang; Mao, Qingzhou

    2012-01-01

    This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. This system utilizes the cooperation of multiple lasers and cameras to realize several functions necessary for autonomous navigation: road curb detection, lane detection, and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on a Z-variance method. Vision-based lane detection is realized by a two-scan method combined with an image model. A Haar-like-feature-based method is applied for traffic sign detection, and a SURF matching method is used for sign classification. The results of experiments validate the effectiveness of the proposed algorithms and the whole system.
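The Z-variance idea for curb detection admits a compact sketch. The exact formulation used on SmartV-II is not given in the abstract, so the threshold and sample returns below are invented; the point is simply that flat road returns in a laser cell have near-zero height variance, while a curb step crossing the cell does not.

```python
# Sketch of a Z-variance style curb test on laser height returns
# (illustrative threshold and data, not the SmartV-II formulation).

def z_variance(zs):
    """Population variance of the Z (height) returns in one grid cell."""
    m = sum(zs) / len(zs)
    return sum((z - m) ** 2 for z in zs) / len(zs)

def is_curb(zs, threshold=0.001):
    """Flat asphalt keeps the variance tiny; a step up onto a curb
    spreads the returns and pushes the variance over the threshold."""
    return z_variance(zs) > threshold

road = [0.00, 0.01, 0.00, 0.01]     # flat asphalt returns (metres)
curb = [0.00, 0.02, 0.12, 0.15]     # cell straddling a curb edge
```

In practice the threshold would be tuned per laser and per cell size, and the flagged cells fused with the camera-based lane model before planning.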

  17. 75 FR 44306 - Eleventh Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-28

    ...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation Administration... WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be...

  18. Thinking Ahead: Autonomic Buildings

    SciTech Connect

    Brambley, Michael R.

    2002-08-31

    The time has come for the commercial buildings industries to reconsider the very nature of the systems installed in facilities today and to establish a vision for future buildings that differs from anything in the history of human shelter. Drivers for this examination include reductions in building operation staffs; uncertain costs and reliability of electric power; growing interest in energy-efficient and resource-conserving "green" and "high-performance" commercial buildings; and a dramatic increase in security concerns since the tragic events of September 11. This paper introduces a new paradigm, autonomic buildings, which parallels the concept of autonomic computing, introduced by IBM as a fundamental change in the way computer networks work. Modeled after the human nervous system, "autonomic systems" themselves take responsibility for a large portion of their own operation and even maintenance. For commercial buildings, autonomic systems could provide environments that afford occupants greater opportunity to focus on the things we do in buildings rather than on operation of the building itself, while achieving higher performance levels, increased security, and better use of energy and other natural resources. The author uses the human body and computer networking to introduce and illustrate this new paradigm for high-performance commercial buildings. He provides a vision for the future of commercial buildings based on autonomicity, identifies current research that could contribute to this future, and highlights research and technological gaps. The paper concludes with a set of issues and needs that are key to converting this idealized future into reality.

  19. An autonomous rendezvous and docking system using cruise missile technologies

    NASA Technical Reports Server (NTRS)

    Jones, Ruel Edwin

    1991-01-01

    In November 1990 the Autonomous Rendezvous & Docking (AR&D) system was first demonstrated for members of NASA's Strategic Avionics Technology Working Group. This simulation utilized prototype hardware from the Cruise Missile and Advanced Centaur Avionics systems. The object was to show that all the accuracy, reliability, and operational requirements established for a spacecraft to dock with Space Station Freedom could be met by the proposed system. The rapid prototyping capabilities of the Advanced Avionics Systems Development Laboratory were used to evaluate the proposed system in a real-time, hardware-in-the-loop simulation of the rendezvous and docking reference mission. The simulation permits manual, supervised automatic, and fully autonomous operations to be evaluated. It is also being upgraded to test an Autonomous Approach and Landing (AA&L) system. The AA&L and AR&D systems are very similar. Both use inertial guidance and control systems supplemented by GPS. Both use an Image Processing System (IPS) for target recognition and tracking. The IPS includes a general-purpose multiprocessor computer and a selected suite of sensors that provide the required relative position and orientation data. Graphic displays can also be generated by the computer, providing the astronaut/operator with real-time guidance and navigation data with enhanced video or sensor imagery.

  20. Blackboard architectures and their relationship to autonomous space systems

    NASA Technical Reports Server (NTRS)

    Thornbrugh, Allison

    1988-01-01

    The blackboard architecture provides a powerful paradigm for the autonomy expected in future spaceborne systems, especially SDI and Space Station. Autonomous systems will require skill in both the classic task of information analysis and the newer tasks of decision making, planning and system control. Successful blackboard systems have been built to deal with each of these tasks separately. The blackboard paradigm achieves success in difficult domains through its ability to integrate several uncertain sources of knowledge. In addition to flexible behavior during autonomous operation, the system must also be capable of incrementally growing from semiautonomy to full autonomy. The blackboard structure allows this development. The blackboard's ability to handle error, its flexible execution, and variants of this paradigm are discussed as they apply to specific problems of the space environment.
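The control cycle that the blackboard paradigm describes can be sketched very briefly: independent knowledge sources watch a shared blackboard and post contributions when their preconditions hold, until no source has anything left to add. All names and values below are illustrative, not from any particular flight system.

```python
# Minimal blackboard loop sketch (illustrative knowledge sources and data).

class Blackboard(dict):
    """The shared data structure all knowledge sources read and write."""
    pass

def sensor_ks(bb):
    """Knowledge source: convert raw telemetry to engineering units."""
    if "raw" in bb and "temp_c" not in bb:
        bb["temp_c"] = bb["raw"] * 0.1
        return True
    return False

def alarm_ks(bb):
    """Knowledge source: flag an over-temperature condition."""
    if "temp_c" in bb and "alarm" not in bb:
        bb["alarm"] = bb["temp_c"] > 80.0
        return True
    return False

def run(bb, sources):
    """Control loop: fire any applicable source until the board is quiescent."""
    while any(ks(bb) for ks in sources):
        pass
    return bb

bb = run(Blackboard(raw=900), [alarm_ks, sensor_ks])
```

Note that the sources are listed in the "wrong" order yet the right result still emerges; this order-independence is part of what lets a blackboard system grow incrementally from semiautonomy toward full autonomy.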

  1. Immune systems are not just for making you feel better: they are for controlling autonomous robots

    NASA Astrophysics Data System (ADS)

    Rosenblum, Mark

    2005-05-01

    The typical algorithm for robot autonomous navigation in off-road complex environments involves building a 3D map of the robot's surrounding environment using a 3D sensing modality such as stereo vision or active laser scanning, and generating an instantaneous plan to navigate around hazards. Although there has been steady progress using these methods, such systems suffer from several limitations that cannot be overcome with 3D sensing and planning alone. Geometric sensing alone has no ability to distinguish between compressible and non-compressible materials. As a result, these systems have difficulty in heavily vegetated environments and require sensitivity adjustments across different terrain types. On the planning side, these systems have no ability to learn from their mistakes and avoid problematic environmental situations on subsequent encounters. We have implemented an adaptive terrain classification system based on the Artificial Immune System (AIS) computational model, which is loosely based on the biological immune system, and which combines various forms of imaging sensor inputs to produce a "feature-labeled" image of the scene, categorizing areas as benign or detrimental for autonomous robot navigation. Because of the qualities of the AIS computational model, the resulting system is able to learn and adapt on its own through interaction with the environment by modifying its interpretation of the sensor data. The feature-labeled results from the AIS analysis are inserted into a map and can then be used by a planner to generate a safe route to a goal point. The coupling of diverse visual cues with the malleable AIS computational model will lead to autonomous robotic ground vehicles that require less human intervention for deployment in novel environments and that operate more robustly because of the system's ability to improve its performance through interaction with the environment.
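One classic AIS mechanism, negative selection, gives a flavor of how such a terrain classifier can be built. This is a generic textbook sketch, not the system described above: detectors are generated at random and kept only if they fail to match "self" (benign terrain) samples; surviving detectors then flag anomalous feature vectors. All feature values are invented.

```python
# Negative-selection sketch (generic AIS mechanism, illustrative data).
import random

def match(detector, sample, radius=0.15):
    """Box-shaped affinity: every feature within `radius` of the detector."""
    return all(abs(d - s) <= radius for d, s in zip(detector, sample))

def train(self_set, n_detectors=200, dim=2, seed=1):
    """Generate random detectors, discarding any that match benign samples."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = [rng.random() for _ in range(dim)]
        if not any(match(cand, s) for s in self_set):   # negative selection
            detectors.append(cand)
    return detectors

def is_anomalous(detectors, sample):
    """A sample matched by any surviving detector is flagged as detrimental."""
    return any(match(d, sample) for d in detectors)

# "Self" = feature vectors of safe, driveable terrain (invented numbers).
self_set = [[0.10, 0.10], [0.15, 0.12], [0.12, 0.08]]
detectors = train(self_set)
```

Adaptation, as in the paper's system, would then come from adding misclassified samples to the self set (or cloning useful detectors) as the robot interacts with its environment.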

  2. Latency in Visionic Systems: Test Methods and Requirements

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint, and to provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies, including those of the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated depending upon the piloting task, the role the visionics device plays in that task, and the characteristics of the visionics cockpit display device, including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.
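One common way to measure such end-to-end latency is to inject a known modulation at the sensor input, record the displayed output, and take the lag that maximizes the cross-correlation between the two. A toy version with synthetic signals (the sample rate and simulated pipeline delay are invented for the sketch):

```python
# Cross-correlation latency estimate on synthetic signals (illustrative;
# a real rig would use photodiode recordings of the sensor stimulus and
# the display output).

def best_lag(stimulus, response, max_lag):
    """Return the lag (in samples) that maximizes the cross-correlation."""
    def corr(lag):
        n = len(stimulus) - lag
        return sum(stimulus[i] * response[i + lag] for i in range(n))
    return max(range(max_lag + 1), key=corr)

dt_ms = 5                                   # 200 Hz sampling (invented)
stimulus = [1.0 if (i // 10) % 2 == 0 else 0.0 for i in range(200)]
delay_samples = 4                           # simulate a 20 ms pipeline delay
response = [0.0] * delay_samples + stimulus[:-delay_samples]

latency_ms = best_lag(stimulus, response, max_lag=40) * dt_ms
```

The measured figure is then compared against the task-dependent requirement; the 20 msec bound cited above for Virtual-VMC head-worn displays shows how little margin such a test must resolve.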

  3. Scheduling lessons learned from the Autonomous Power System

    NASA Technical Reports Server (NTRS)

    Ringer, Mark J.

    1992-01-01

    The Autonomous Power System (APS) project at NASA LeRC is designed to demonstrate the application of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution systems. The project consists of three elements: the Autonomous Power Expert System (APEX) for Fault Diagnosis, Isolation, and Recovery (FDIR); the Autonomous Intelligent Power Scheduler (AIPS) to efficiently assign activity start times and resources; and power hardware (Brassboard) to emulate a space-based power system. The AIPS scheduler was tested within the APS system. This scheduler is able to efficiently assign available power to the requesting activities and share this information with other software agents within the APS system in order to implement the generated schedule. The AIPS scheduler is also able to cooperatively recover from fault situations by rescheduling the affected loads on the Brassboard in conjunction with the APEX FDIR system. AIPS served as a learning tool and an initial scheduling testbed for the integration of FDIR and automated scheduling systems. Many lessons were learned from the AIPS scheduler and are now being integrated into a new scheduler called SCRAP (Scheduler for Continuous Resource Allocation and Planning). This paper serves three purposes: it gives an overview of the AIPS implementation, presents lessons learned from the AIPS scheduler, and briefly describes how these lessons are being applied to the new SCRAP scheduler.
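The shape of the scheduling problem AIPS solves can be sketched with a toy greedy allocator: activities request power over time slots, and a pass in priority order grants whatever fits under the bus capacity. The data and policy are invented for illustration; the actual AIPS algorithm is not described in this abstract.

```python
# Toy priority-greedy power scheduler (illustrative policy and numbers).

def schedule(requests, capacity_w, n_slots):
    """requests: list of (name, priority, watts, start, end), end exclusive.
    Grants requests in descending priority if the bus limit holds in every
    requested slot. Returns (granted names, per-slot load in watts)."""
    load = [0.0] * n_slots
    granted = []
    for name, _prio, watts, start, end in sorted(requests, key=lambda r: -r[1]):
        if all(load[t] + watts <= capacity_w for t in range(start, end)):
            for t in range(start, end):
                load[t] += watts
            granted.append(name)
    return granted, load

requests = [
    ("heater",   3, 60.0, 0, 4),
    ("camera",   2, 40.0, 2, 5),
    ("downlink", 1, 80.0, 0, 3),   # denied: would exceed the 100 W bus
]
granted, load = schedule(requests, capacity_w=100.0, n_slots=5)
```

Rescheduling after a fault, as AIPS does with APEX, amounts to re-running such an allocation with the failed loads removed or the capacity reduced.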

  4. Neural associative memories for the integration of language, vision and action in an autonomous agent.

    PubMed

    Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G

    2009-03-01

    Language understanding is a long-standing problem in computer science. However, the human brain is capable of processing complex languages with seemingly no difficulty. This paper presents a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities at the single-word and grammatical levels. The language system is embedded in a robot in order to demonstrate correct semantic understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms. PMID:19203859
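The associative memories such models are built from can be illustrated with a tiny Hebbian auto-associative (Hopfield-style) memory that completes a corrupted bipolar pattern. This is a generic textbook sketch, not the paper's architecture; the stored pattern is invented.

```python
# Hopfield-style auto-associative memory sketch (generic, illustrative).

def train(patterns):
    """Hebbian outer-product weights for bipolar (+1/-1) patterns,
    with a zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, probe, steps=5):
    """Synchronous threshold updates until the state settles."""
    state = list(probe)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(len(state))) >= 0
                 else -1
                 for i in range(len(state))]
    return state

stored = [1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = [1, -1, -1, -1, 1, -1]      # one bit flipped
```

Pattern completion of this kind is what lets the paper's word-level memories settle on a consistent interpretation when the input sentence is ambiguous or noisy.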

  5. Global Positioning System Synchronized Active Light Autonomous Docking System

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor); Bell, Joseph L. (Inventor)

    1996-01-01

    A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprising at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.
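The payoff of GPS-synchronizing the sensor to the flashing target can be sketched as frame differencing: with the exposure locked to the flash period, subtracting a lights-off frame from a lights-on frame cancels the static background and leaves only the target lights. The 1-D "frames" below are invented for illustration.

```python
# Synchronized-flash target extraction via frame differencing
# (toy 1-D frames; a real tracker differences full images).

def detect_targets(frame_on, frame_off, threshold=0.5):
    """Indices of pixels that brighten between the off- and on-phase frames."""
    return [i for i, (a, b) in enumerate(zip(frame_on, frame_off))
            if a - b > threshold]

background = [0.2, 0.8, 0.3, 0.9, 0.1]          # sunlit clutter, both frames
frame_off = background
frame_on = [v + (1.0 if i in (1, 3) else 0.0)   # target lights at pixels 1, 3
            for i, v in enumerate(background)]

pixels = detect_targets(frame_on, frame_off)
```

Without the shared GPS timebase the camera could not guarantee that consecutive frames straddle the on and off phases, which is exactly what the patent's synchronized internal clocks provide.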

  6. Global Positioning System Synchronized Active Light Autonomous Docking System

    NASA Technical Reports Server (NTRS)

    Howard, Richard (Inventor)

    1994-01-01

    A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprises at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.

  7. Design principle of the peripheral vision display system

    NASA Astrophysics Data System (ADS)

    Guo, Xiaowei; Wang, Yuefeng; Niu, Yanxiong; Yu, Lishen; Liu, Shen H.

    1996-09-01

    The peripheral vision display system (PVDS) presents the pilot with a gyro-stabilized artificial horizon projected onto the instrument panel by means of a red laser light source. The pilot can detect changes in aircraft attitude without continuously referring back to his flight instruments. The PVDS effectively exploits the pilot's peripheral vision to overcome disorientation. This paper gives the principles of the PVDS, according to which we have designed the system and applied it in aviation medicine.

  8. Multisensor robotic system for autonomous space maintenance and repair

    NASA Technical Reports Server (NTRS)

    Abidi, M. A.; Green, W. L.; Chandra, T.; Spears, J.

    1988-01-01

    The feasibility of realistic autonomous space manipulation tasks using multisensory information is demonstrated. The system is capable of acquiring, integrating, and interpreting multisensory data to locate, mate, and demate a Fluid Interchange System (FIS) and a Module Interchange System (MIS). In both cases, autonomous location of a guiding light target, mating, and demating of the system are performed. The implemented vision-driven techniques determine the arbitrary two-dimensional position and orientation of the mating elements as well as the arbitrary three-dimensional position and orientation of the light targets. A force/torque sensor continuously monitors the six components of force and torque exerted on the end-effector. Both FIS and MIS experiments were successfully accomplished on mock-ups built for this purpose. The method is immune to variations in ambient light, which is particularly important given the 90-minute day-night cycle in space.

  9. Optical 3D laser measurement system for navigation of autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Básaca-Preciado, Luis C.; Sergiyenko, Oleg Yu.; Rodríguez-Quinonez, Julio C.; García, Xochitl; Tyrsa, Vera V.; Rivas-Lopez, Moises; Hernandez-Balbuena, Daniel; Mercorelli, Paolo; Podrygalo, Mikhail; Gurko, Alexander; Tabakova, Irina; Starostenko, Oleg

    2014-03-01

    In our current research, we are developing a practical autonomous mobile robot navigation system capable of avoiding obstacles in an unknown environment. In this paper, we propose a robot navigation system that achieves high-accuracy localization by dynamic triangulation. Our two main contributions are (1) the integration of two principal systems, a 3D laser-scanning technical vision system (TVS) and a mobile robot (MR) navigation system, and (2) a novel MR navigation scheme that benefits from the precise triangulation-based localization of obstacles, offering advantages over the more common camera-oriented vision systems. For practical use, mobile robots must continue their tasks safely and accurately under temporary occlusion conditions. Prototype II of the TVS presented in this work significantly improves on prototype I of our previous publications in laser-ray alignment, parasitic-torque decrease, and friction reduction of moving parts. The kinematic model of the MR used in this work is designed for optimal data acquisition from the TVS, with the main goal of obtaining, in real time, the values needed by the kinematic model immediately as obstacle positions are calculated from the TVS data.
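
    The dynamic-triangulation localization at the heart of the TVS can be illustrated with a minimal sketch (our own simplification with an assumed angle convention, not the authors' implementation): two angles measured at the ends of a known baseline fix a point's position.

```python
import math

def triangulate(beta1, beta2, baseline):
    """Locate a point from two angles measured at the ends of a known baseline.

    beta1, beta2: angles (radians) between the baseline and the rays to the
    point, measured at stations placed at (0, 0) and (baseline, 0).
    Assumed convention for illustration only.
    Returns the (x, y) coordinates of the point.
    """
    t1, t2 = math.tan(beta1), math.tan(beta2)
    x = baseline * t2 / (t1 + t2)
    y = baseline * t1 * t2 / (t1 + t2)
    return x, y
```

    With a 2 m baseline and both angles at 45 degrees, the point lies at (1, 1); the scanner sweeps such angle pairs continuously to map obstacles.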

  10. Verification of autonomous systems using embedded behavior auditors

    NASA Astrophysics Data System (ADS)

    Dvorak, Daniel; Tailor, Eric

    1999-01-01

    The prospect of highly autonomous spacecraft and rovers is exciting for what they can do with onboard decision making, but also troubling for what they might do [improperly] without human-in-the-loop oversight. The single biggest obstacle to acceptance of highly autonomous software control systems is doubt about their trustworthiness as a replacement for human analysis and decision-making. Such doubts can be addressed with a comprehensive system verification effort, but techniques suitable for conventional sequencer-based systems are inadequate for reactive systems. This paper highlights some of the key features that distinguish autonomous systems from their predecessors and then focuses on one approach to aid in their verification using a "lightweight" formal method. Specifically, we present a little language that enables system engineers and designers to specify expected behavior in the form of invariants, state machines, episodes, and resource constraints, and a way of compiling such specifications and linking them into the operational code as embedded behavior auditors. Such auditors become part of the overall fault-detection design, checking system behavior in real-time, not only in the test-bed but also in flight.
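
    The idea of an embedded behavior auditor, a checker derived from a specification and run against live state, can be sketched as follows (a toy illustration; the invariant names and state fields are hypothetical, and the real system compiles a specification language rather than taking raw predicates):

```python
class BehaviorAuditor:
    """Checks runtime state against engineer-specified invariants.

    Toy sketch of an embedded behavior auditor; invariants are plain
    predicates here, standing in for compiled specifications.
    """

    def __init__(self):
        self.invariants = []  # list of (name, predicate) pairs

    def add_invariant(self, name, predicate):
        self.invariants.append((name, predicate))

    def audit(self, state):
        """Return the names of all invariants violated by `state`."""
        return [name for name, pred in self.invariants if not pred(state)]


auditor = BehaviorAuditor()
# Hypothetical resource constraint: battery charge must stay above 30%.
auditor.add_invariant("battery_margin", lambda s: s["battery_pct"] >= 30)
# Hypothetical mode invariant: thrusters may fire only in MANEUVER mode.
auditor.add_invariant("thruster_mode",
                      lambda s: not s["thruster_on"] or s["mode"] == "MANEUVER")
```

    In flight, `audit` would run on each telemetry cycle and feed any violations to the fault-detection layer.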

  11. The organization of an autonomous learning system

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1988-01-01

    The organization of systems that learn from experience is examined, human beings and animals being prime examples of such systems. How is their information processing organized? Such systems build an internal model of the world and base their actions on the model. The model is dynamic and predictive, and it includes the systems' own actions and their effects. In modeling such systems, a large pattern of features represents a moment of the system's experience. Some of the features are provided by the system's senses, some control the system's motors, and the rest have no immediate external significance. A sequence of such patterns then represents the system's experience over time. By storing such sequences appropriately in memory, the system builds a world model based on experience. In addition to the essential function of memory, fundamental roles are played by a sensory system that makes raw information about the world suitable for memory storage and by a motor system that affects the world. The relation of the sensory and motor systems to the memory is discussed, together with how favorable actions can be learned and unfavorable actions can be avoided. Results in classical learning theory are explained in terms of the model, more advanced forms of learning are discussed, and the relevance of the model to the frame problem of robotics is examined.
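
    The core idea of storing experience as pattern sequences and using them for prediction can be reduced to a minimal sketch (a drastic simplification; Kanerva's actual model uses sparse distributed memory over very large feature patterns):

```python
class SequenceMemory:
    """Stores experience as pattern sequences and predicts what follows.

    Each pattern stands for one moment of experience (sensor features
    plus motor features); storing a sequence records which pattern
    followed which, forming a predictive world model in miniature.
    """

    def __init__(self):
        self.successor = {}

    def store(self, sequence):
        for current, nxt in zip(sequence, sequence[1:]):
            self.successor[current] = nxt

    def predict(self, pattern):
        """Return the remembered successor of `pattern`, or None."""
        return self.successor.get(pattern)


memory = SequenceMemory()
# Each tuple is a hypothetical (sensor reading, motor action) pattern.
memory.store([("hot", "reach"), ("pain", "withdraw"), ("relief", "rest")])
```

    Having predicted that reaching toward "hot" leads to "pain", a system of this kind can avoid the unfavorable action before repeating it.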

  12. Large autonomous spacecraft electrical power system (LASEPS)

    NASA Technical Reports Server (NTRS)

    Dugal-Whitehead, Norma R.; Johnson, Yvette B.

    1992-01-01

    NASA Marshall Space Flight Center is creating a large high-voltage electrical power system testbed called LASEPS. This testbed is being developed to simulate an end-to-end power system, from power generation and source to loads. When completed, the system will support several power configurations, including the following battery configurations: two 120 V batteries, one or two 150 V batteries, and one 250 to 270 V battery. The breadboard encompasses varying levels of autonomy, from remote power converters to conventional software control to expert system control of the power system elements. In this paper, the construction and provisions of this breadboard are discussed.

  13. Machine vision systems using machine learning for industrial product inspection

    NASA Astrophysics Data System (ADS)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspection products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products, Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
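
    The two-stage LIF/OLI split can be illustrated with a minimal sketch (a hypothetical single-feature example, not the SMV implementation): feature statistics are learned from good samples offline, then used as an acceptance band online.

```python
import statistics

class InspectionModel:
    """Two-stage inspection in the spirit of SMV: learn features offline
    (LIF stage), then inspect online (OLI stage).

    Hypothetical sketch using a single scalar feature (e.g. a region's
    mean brightness) with a 3-sigma acceptance band.
    """

    def learn(self, good_samples):
        # LIF: learn the feature distribution from known-good products.
        self.mean = statistics.mean(good_samples)
        self.std = statistics.pstdev(good_samples)

    def inspect(self, value):
        # OLI: accept if the measured feature sits within 3 sigma.
        return abs(value - self.mean) <= 3 * self.std


model = InspectionModel()
model.learn([10.0, 10.2, 9.8, 10.1, 9.9])  # learned from good boards
```

    Interactive learning, as described in the abstract, would amount to refitting the model whenever an operator relabels a borderline product.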

  14. A Vision for Systems Engineering Applied to Wind Energy (Presentation)

    SciTech Connect

    Felker, F.; Dykes, K.

    2015-01-01

    This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.

  15. Science requirements for PRoViScout, a robotics vision system for planetary exploration

    NASA Astrophysics Data System (ADS)

    Hauber, E.; Pullan, D.; Griffiths, A.; Paar, G.

    2011-10-01

    The robotic exploration of planetary surfaces, including missions of interest for geobiology (e.g., ExoMars), will be the precursor of human missions within the next few decades. Such exploration will require platforms which are much more self-reliant and capable of exploring long distances with limited ground support in order to advance planetary science objectives in a timely manner. The key to this objective is the development of planetary robotic onboard vision processing systems, which will enable the autonomous on-site selection of scientific and mission-strategic targets, and the access thereto. The EU-funded research project PRoViScout (Planetary Robotics Vision Scout) is designed to develop a unified and generic approach for robotic vision onboard processing, namely the combination of navigation and scientific target selection. Any such system needs to be "trained", i.e. it needs (a) scientific requirements to address, and (b) a database of scientifically representative target scenarios that can be analysed. We present our preliminary list of science requirements, based on previous experience from landed Mars missions.

  16. Assessment of autonomic nervous system function in nursing students using an autonomic reflex orthostatic test by heart rate spectral analysis

    PubMed Central

    HASEGAWA, MAO; HAYANO, AZUSA; KAWAGUCHI, ATSUSHI; YAMANAKA, RYUYA

    2015-01-01

    Nursing students experience academic demands, such as tests, theoretical and practical coursework, research activities, various aspects of professional practice, and contact with health professionals and patients. Consequently, nursing students face numerous types of stress, and increased stress levels contribute to physical and psychological distress in nursing students. The aim of the present study was to investigate the autonomic nervous system function of nursing students by assessing active standing load using the autonomic reflex orthostatic tolerance test, which enables quantitative analysis of dynamic autonomic nervous system function. The autonomic nervous system activity in the resting state was low in fourth-year students, who exhibited parasympathetic hypotension, and there was a tendency towards higher sympathetic nervous system activity in fourth-year students compared with first-, second- and third-year students. In the standing state, there was a trend towards a higher autonomic nervous system activity response in fourth-year students compared with first-, second- and third-year students. These results suggest that stress may influence autonomic nervous activity in fourth-year nursing students. By managing stress in fourth-year nursing students, it may be possible to prevent the development of health problems. PMID:26623025

  17. Multi-agent autonomous system and method

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor); Dohm, James (Inventor); Tarbell, Mark A. (Inventor)

    2010-01-01

    A method of controlling a plurality of crafts in an operational area includes providing a command system, a first craft in the operational area coupled to the command system, and a second craft in the operational area coupled to the command system. The method further includes determining a first desired destination and a first trajectory to the first desired destination, sending a first command from the command system to the first craft to move a first distance along the first trajectory, and moving the first craft according to the first command. A second desired destination and a second trajectory to the second desired destination are determined and a second command is sent from the command system to the second craft to move a second distance along the second trajectory.
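
    The claimed method, a command system directing each craft a given distance along a trajectory toward its destination, can be sketched as follows (an illustrative 2D simplification; the names are ours):

```python
import math

def move_along(position, destination, distance):
    """Move `distance` along the straight-line trajectory toward
    `destination`, stopping at the destination if it is closer."""
    dx, dy = destination[0] - position[0], destination[1] - position[1]
    span = math.hypot(dx, dy)
    if span <= distance:
        return destination
    return (position[0] + distance * dx / span,
            position[1] + distance * dy / span)


class CommandSystem:
    """Issues per-craft move commands over assumed straight trajectories."""

    def __init__(self):
        self.crafts = {}

    def add_craft(self, name, position):
        self.crafts[name] = position

    def command(self, name, destination, distance):
        # Determine the trajectory to the desired destination and
        # command the craft to move a bounded distance along it.
        self.crafts[name] = move_along(self.crafts[name], destination, distance)
```

    Repeated `command` calls with fresh destinations reproduce the incremental first-craft/second-craft sequencing described in the claim.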

  18. Intelligent systems for the autonomous exploration of Titan and Enceladus

    NASA Astrophysics Data System (ADS)

    Furfaro, Roberto; Lunine, Jonathan I.; Kargel, Jeffrey S.; Fink, Wolfgang

    2008-04-01

    Future planetary exploration of the outer satellites of the Solar System will require higher levels of onboard automation, including autonomous determination of sites where the probability of significant scientific findings is highest. Generally, the level of needed automation is heavily influenced by the distance between Earth and the robotic explorer(s) (e.g. spacecraft(s), rover(s), and balloon(s)). Therefore, planning missions to the outer satellites mandates the analysis, design and integration within the mission architecture of semi- and/or completely autonomous intelligence systems. Such systems should (1) include software packages that enable fully automated and comprehensive identification, characterization, and quantification of feature information within an operational region with subsequent target prioritization and selection for close-up reexamination; and (2) integrate existing information with acquired, "in transit" spatial and temporal sensor data to automatically perform intelligent planetary reconnaissance, which includes identification of sites with the highest potential to yield significant geological and astrobiological information. In this paper we review and compare some of the available Artificial Intelligence (AI) schemes and their adaptation to the problem of designing expert systems for onboard-based, autonomous science to be performed in the course of outer satellites exploration. More specifically, the fuzzy-logic framework proposed is analyzed in some detail to show the effectiveness of such a scheme when applied to the problem of designing expert systems capable of identifying and further exploring regions on Titan and/or Enceladus that have the highest potential to yield evidence for past or present life. Based on available information (e.g., Cassini data), the current knowledge and understanding of Titan and Enceladus environments is evaluated to define a path for the design of a fuzzy-based system capable of reasoning over
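
    A fuzzy-logic site scorer of the kind discussed can be sketched in a few lines (the indicator features and thresholds are invented for illustration and are not drawn from the paper):

```python
def ramp(x, lo, hi):
    """Linear fuzzy membership: 0 at or below lo, 1 at or above hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def site_interest(methane_ppb, thermal_anomaly_k):
    """Fuzzy AND (min) of two hypothetical habitability indicators.

    A site is 'interesting' to the degree that methane abundance is
    high AND a thermal anomaly is present; both features and their
    thresholds are illustrative assumptions.
    """
    mu_methane = ramp(methane_ppb, 1.0, 50.0)
    mu_thermal = ramp(thermal_anomaly_k, 2.0, 20.0)
    return min(mu_methane, mu_thermal)
```

    An onboard system would rank candidate sites by such a score and schedule close-up reexamination of the highest-scoring ones.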

  19. A Test-Bed Configuration: Toward an Autonomous System

    NASA Astrophysics Data System (ADS)

    Ocaña, F.; Castillo, M.; Uranga, E.; Ponz, J. D.; TBT Consortium

    2015-09-01

    In the context of the Space Situational Awareness (SSA) program of ESA, it is foreseen to deploy several large robotic telescopes in remote locations to provide surveillance and tracking services for man-made as well as natural near-Earth objects (NEOs). The present project, termed Telescope Test Bed (TBT), is being developed under ESA's General Studies and Technology Programme, and shall implement a test-bed for the validation of an autonomous optical observing system in a realistic scenario, consisting of two telescopes located in Spain and Australia, to collect representative test data for precursor NEO services. In order to fulfill all the security requirements for the TBT project, the use of an autonomous emergency system (AES) is foreseen to monitor the control system. The AES will monitor remotely the health of the observing system and the internal and external environment. It will incorporate both autonomous and interactive actuators to force the protection of the system (i.e., emergency dome closeout).

  20. 77 FR 56254 - Twentieth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-12

    ... Federal Aviation Administration Twentieth Meeting: RTCA Special Committee 213, Enhanced Flight Vision... of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 213, Enhanced Flight Vision... of the twentieth meeting of the RTCA Special Committee 213, Enhanced Flight Vision...

  1. 77 FR 36331 - Nineteenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-18

    ... Federal Aviation Administration Nineteenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision... of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 213, Enhanced Flight Vision... of the nineteenth meeting of RTCA Special Committee 213, Enhanced Flight Vision...

  2. Multiple-channel Streaming Delivery for Omnidirectional Vision System

    NASA Astrophysics Data System (ADS)

    Iwai, Yoshio; Nagahara, Hajime; Yachida, Masahiko

    An omnidirectional vision system is an imaging system that can capture the entire surroundings using a hyperbolic mirror and a conventional CCD camera. This paper proposes a streaming server that can efficiently transfer movies captured by an omnidirectional vision system over the Internet. The proposed system uses multiple channels to deliver multiple movies synchronously. This allows clients to view different directions of the omnidirectional movies and also supports changing the view area during playback. Our evaluation experiments show that the proposed streaming server can effectively deliver multiple movies via multiple channels.

  3. Mobile Autonomous Humanoid Assistant

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.

    2004-01-01

    A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway (trademark) Robotic Mobility Platform, yielding a dexterous, maneuverable humanoid well suited to aiding human co-workers in a range of environments. This system uses stereo vision to locate human teammates and tools, and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.

  4. Mathematical biomarkers for the autonomic regulation of cardiovascular system

    PubMed Central

    Campos, Luciana A.; Pereira, Valter L.; Muralikrishna, Amita; Albarwani, Sulayma; Brás, Susana; Gouveia, Sónia

    2013-01-01

    Heart rate and blood pressure are the most important vital signs in diagnosing disease. Both are characterized by a high degree of variability: short-term from moment to moment, medium-term over the normal day and night, and very long-term over months to years. The study of new mathematical algorithms to evaluate the variability of these cardiovascular parameters has high potential for the development of new methods for early detection of cardiovascular disease and for establishing differential diagnoses with possible therapeutic consequences. The autonomic nervous system is a major player in the general adaptive reaction to stress and disease. The quantitative prediction of the autonomic interactions in the multiple control-loop pathways of the cardiovascular system is directly applicable to clinical situations. Exploration of new multimodal analytical techniques for the variability of the cardiovascular system may yield new approaches for deterministic parameter identification. A multimodal analysis of cardiovascular signals can be carried out by evaluating their amplitudes, phases, time-domain patterns, and sensitivity to imposed stimuli, i.e., drugs blocking the autonomic system. The causal effects, gains, and dynamic relationships may be studied through dynamical fuzzy logic models, such as the discrete-time model and discrete-event model. We expect an increase in modeling accuracy and a better estimation of the heart rate and blood pressure time series, which could benefit intelligent patient monitoring. We foresee that identifying quantitative mathematical biomarkers for the autonomic nervous system will allow individual therapy adjustments aiming at the most favorable sympathetic-parasympathetic balance. PMID:24109456

  5. Machine vision system for online inspection of freshly slaughtered chickens

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A machine vision system was developed and evaluated for the automation of online inspection to differentiate freshly slaughtered wholesome chickens from systemically diseased chickens. The system consisted of an electron-multiplying charge-coupled-device camera used with an imaging spectrograph and ...

  6. Machine vision system for online wholesomeness inspection of poultry carcasses

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A line-scan machine vision system and multispectral inspection algorithm were developed and evaluated for differentiation of wholesome and systemically diseased chickens on a high-speed processing line. The inspection system acquires line-scan images of chicken carcasses on a 140 bird-per-minute pro...

  7. Machine vision system for inspecting characteristics of hybrid rice seed

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-03-01

    Obtaining clear images, which helps improve classification accuracy, involves many factors; light source, lens extender, and background are discussed in this paper. Analysis of rice seed reflectance curves showed that the light-source wavelength for discriminating diseased seeds from normal rice seeds in the monochromatic image recognition mode was about 815 nm for jinyou402 and shanyou10. To determine optimal conditions for acquiring digital images of rice seed with a computer vision system, an adjustable color machine vision system was developed. With a 20 mm to 25 mm lens extender, the machine vision system produces close-up images that ease the recognition of characteristics of hybrid rice seeds. A white background proved better than a black background for inspecting rice seeds infected by disease and for shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system. The same algorithm yielded better results under the optimized conditions for rice seed quality inspection; in particular, the image processing can resolve details such as fine fissures.

  8. A modular real-time vision system for humanoid robots

    NASA Astrophysics Data System (ADS)

    Trifan, Alina L.; Neves, António J. R.; Lau, Nuno; Cunha, Bernardo

    2012-01-01

    Robotic vision is nowadays one of the most challenging branches of robotics. In the case of a humanoid robot, a robust vision system has to provide an accurate representation of the surrounding world and to cope with all the constraints imposed by the hardware architecture and the locomotion of the robot. Usually humanoid robots have low computational capabilities that limit the complexity of the developed algorithms. Moreover, their vision system should perform in real time, therefore a compromise between complexity and processing times has to be found. This paper presents a reliable implementation of a modular vision system for a humanoid robot to be used in color-coded environments. From image acquisition, to camera calibration and object detection, the system that we propose integrates all the functionalities needed for a humanoid robot to accurately perform given tasks in color-coded environments. The main contributions of this paper are the implementation details that allow the use of the vision system in real time, even with low processing capabilities, the innovative self-calibration algorithm for the most important parameters of the camera, and its modularity, which allows its use with different robotic platforms. Experimental results have been obtained with a NAO robot produced by Aldebaran, which is currently the robotic platform used in the RoboCup Standard Platform League, as well as with a humanoid built using the Bioloid Expert Kit from Robotis. As practical examples, our vision system can be efficiently used in real time for the detection of the objects of interest for a soccer playing robot (ball, field lines and goals) as well as for navigating through a maze with the help of color-coded clues. In the worst-case scenario, all the objects of interest in a soccer game, using a NAO robot with a single-core 500 MHz processor, are detected in less than 30 ms. Our vision system also includes an algorithm for self-calibration of the camera parameters as well

  9. Cloud Absorption Radiometer Autonomous Navigation System - CANS

    NASA Technical Reports Server (NTRS)

    Kahle, Duncan; Gatebe, Charles; McCune, Bill; Hellwig, Dustan

    2013-01-01

    CAR (cloud absorption radiometer) acquires spatial reference data from host aircraft navigation systems. This poses various problems during CAR data reduction, including navigation data format, accuracy of position data, accuracy of airframe inertial data, and navigation data rate. Incorporating its own navigation system, which includes GPS (Global Positioning System), roll-axis inertia and rates, and three-axis acceleration, CANS expedites data reduction and increases the accuracy of the CAR end data product. CANS provides a self-contained navigation system for the CAR, using inertial reference and GPS positional information. The intent of the software application was to correct the sensor with respect to aircraft roll in real time based upon inputs from a precision navigation sensor. In addition, the navigation information (including GPS position), attitude data, and sensor position details are all streamed to a remote system for recording and later analysis. CANS comprises a commercially available inertial navigation system with integral GPS capability (Attitude Heading Reference System, AHRS) integrated into the CAR support structure and data system. The unit is attached to the bottom of the tripod support structure. The related GPS antenna is located on the P-3 radome immediately above the CAR. The AHRS unit provides a RS-232 data stream containing global position and inertial attitude and velocity data to the CAR, which is recorded concurrently with the CAR data. This independence from aircraft navigation input provides for position and inertial state data that accounts for very small changes in aircraft attitude and position, sensed at the CAR location as opposed to aircraft state sensors typically installed close to the aircraft center of gravity. More accurate positional data enables quicker CAR data reduction with better resolution. The CANS software operates in two modes: initialization/calibration and operational. In the initialization/calibration mode

  10. Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Young, Steven D.

    2005-01-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.
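
    The shadow-domain idea behind SHADE can be illustrated in one dimension (our own simplification, not the flight algorithm): given an elevation profile and a sun (or radar) elevation angle, mark the cells that a grazing ray leaves in shadow, producing a mask that can be compared against measured shadows.

```python
import math

def shadow_mask(elevations, sun_elev_deg, dx=1.0):
    """Mark cells of a terrain profile that lie in shadow, for
    illumination arriving from the left at the given elevation angle.

    One-dimensional illustration only; a real system works on a 2D
    digital elevation model.
    """
    drop = dx * math.tan(math.radians(sun_elev_deg))
    ray = float("-inf")   # height of the grazing ray over the current cell
    mask = []
    for z in elevations:
        mask.append(z < ray)
        ray = max(ray, z) - drop   # ray descends, or is reset by a peak
    return mask
```

    A 5-unit peak with illumination at 45 degrees shadows roughly the next five cells; comparing such masks against radar shadow measurements is the kind of consistency check the paper investigates.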

  11. [State of the autonomic nervous system after induced abortion in the 1st trimester].

    PubMed

    Bakuleva, L P; Gatina, G A; Kuz'mina, T I; Solov'eva, A D

    1990-04-01

    The autonomic nervous system has been examined in 271 patients with a history of first-trimester induced abortion. It was ascertained that induced abortion affected the autonomic nervous system, thus impairing adaptive potentials and entailing the onset or aggravation of preexisting autonomic vascular dystonia. PMID:2378404

  12. Batteries for autonomous renewable energy systems

    NASA Astrophysics Data System (ADS)

    Sheridan, Norman R.

    Now that the Coconut Island plant has been running successfully for three years, it is appropriate to review the design decisions that were made with regard to the battery and to consider how these might be changed for future systems. The following aspects are discussed: type, package, energy storage, voltage, parallel operation, installation, charging, watering, life and quality assurance.

  13. Control Problems in Autonomous Life Support Systems

    NASA Technical Reports Server (NTRS)

    Colombano, S.

    1982-01-01

    The problem of constructing life support systems which require little or no input of matter (food and gases) for long, or even indefinite, periods of time is addressed. Natural control in ecosystems, a control theory for ecosystems, and an approach to the design of an ALSS are addressed.

  14. Large minimal period orbits of periodic autonomous systems

    NASA Astrophysics Data System (ADS)

    Campos, Juan; Tarallo, Massimo

    2004-01-01

    We prove the existence of periodic orbits with minimal period greater than any prescribed number for a natural Lagrangian autonomous system in several variables that is analytic and periodic in each variable and whose potential is nonconstant. Work supported by Acción Integrada Italia-España HI2000-0112, Azione Integrata Italia-Spagna IT-117, MCYT BFM2002-01308, Spain.

  15. Robotic reactions: delay-induced patterns in autonomous vehicle systems.

    PubMed

    Orosz, Gábor; Moehlis, Jeff; Bullo, Francesco

    2010-02-01

    Fundamental design principles are presented for vehicle systems governed by autonomous cruise control devices. By analyzing the corresponding delay differential equations, it is shown that for any car-following model short-wavelength oscillations can appear due to robotic reaction times, and that there are tradeoffs between the time delay and the control gains. The analytical findings are demonstrated on an optimal velocity model using numerical continuation and numerical simulation. PMID:20365620
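
    The delay differential equation setting can be reproduced with a small simulation (an illustrative optimal velocity function and Euler integration, not the paper's exact model or analysis):

```python
import math

def follower_sim(alpha=0.5, tau=0.2, dt=0.01, t_end=100.0,
                 v_lead=1.0, h0=3.0):
    """Euler simulation of one car following a constant-speed leader
    under an optimal velocity model with reaction delay tau:

        dv/dt = alpha * (V(h(t - tau)) - v(t)),   dh/dt = v_lead - v.

    The optimal velocity function V is an illustrative choice.
    Returns the final (headway, velocity)."""
    V = lambda h: 2.0 * math.tanh(max(h - 1.0, 0.0))
    n = max(1, round(tau / dt))
    history = [h0] * n          # headway samples over the last tau seconds
    h, v = h0, 0.0
    for _ in range(int(t_end / dt)):
        dv = alpha * (V(history[0]) - v)
        v += dv * dt
        h += (v_lead - v) * dt
        history.pop(0)
        history.append(h)
    return h, v
```

    At equilibrium the headway solves V(h*) = v_lead, here h* = 1 + atanh(0.5) ≈ 1.55; increasing alpha or tau in this sketch is one way to observe the delay-induced oscillations the paper analyzes.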

  16. Robotic reactions: Delay-induced patterns in autonomous vehicle systems

    NASA Astrophysics Data System (ADS)

    Orosz, Gábor; Moehlis, Jeff; Bullo, Francesco

    2010-02-01

    Fundamental design principles are presented for vehicle systems governed by autonomous cruise control devices. By analyzing the corresponding delay differential equations, it is shown that for any car-following model short-wavelength oscillations can appear due to robotic reaction times, and that there are tradeoffs between the time delay and the control gains. The analytical findings are demonstrated on an optimal velocity model using numerical continuation and numerical simulation.

  17. Autonomous Systems and Robotics: 2000-2004

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This custom bibliography from the NASA Scientific and Technical Information Program lists a sampling of records found in the NASA Aeronautics and Space Database. The scope of this topic includes technologies to monitor, maintain, and where possible, repair complex space systems. This area of focus is one of the enabling technologies as defined by NASA's Report of the President's Commission on Implementation of United States Space Exploration Policy, published in June 2004.

  18. Binocular stereo vision system design for lunar rover

    NASA Astrophysics Data System (ADS)

    Chu, Jun; Jiao, Chunlin; Guo, Hang; Zhang, Xiaoyu

    2007-11-01

    In this paper, we integrate a pair of CCD cameras and a digital pan/tilt unit with two degrees of freedom into a binocular stereo vision system that simulates the panoramic camera system of the lunar rover. Constraints on the placement and parameter choice of the stereo camera pair are proposed based on the science objectives of the Chang'e-II mission. These constraints are then applied to our binocular stereo vision system, and its localization precision is analyzed. Simulation and experimental results confirm the proposed constraints and the localization precision analysis.
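
    The basic depth recovery of a rectified binocular pair follows Z = fB/d, with f the focal length in pixels, B the baseline, and d the disparity. A minimal sketch (idealized pinhole geometry, not the paper's calibration or error analysis):

```python
def stereo_point(f_px, baseline_m, x_left, x_right, y_px):
    """Triangulate a 3D point from a rectified stereo pair.

    x_left, x_right: horizontal pixel coordinates of the same point in
    the left and right images (relative to each principal point);
    y_px: vertical pixel coordinate (same in both images after
    rectification). Returns (X, Y, Z) in meters.
    """
    d = x_left - x_right            # disparity in pixels
    z = f_px * baseline_m / d       # depth: Z = f * B / d
    return (x_left * z / f_px, y_px * z / f_px, z)
```

    The depth error grows with Z squared for a fixed disparity error, which is why baseline and focal-length constraints of the kind proposed in the paper matter for localization precision.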

  19. Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems

    NASA Technical Reports Server (NTRS)

    Bujorianu, Marius C.; Bujorianu, Manuela L.

    2009-01-01

    In this paper, we sketch a framework for the interdisciplinary modeling of space systems by proposing a holistic view. We consider different system dimensions and their interaction. Specifically, we study the interactions between computation, physics, communication, uncertainty, and autonomy. The most comprehensive computational paradigm that supports a holistic perspective on autonomous space systems is given by cyber-physical systems. For these, the state of the art consists of collaborative multi-engineering efforts that call for an adequate formal foundation. To achieve this, we propose leveraging the traditional content of formal modeling through a co-engineering process.

  20. On non-autonomous dynamical systems

    SciTech Connect

    Anzaldo-Meneses, A.

    2015-04-15

    In usual realistic classical dynamical systems, the Hamiltonian depends explicitly on time. In this work, a class of classical systems with time-dependent nonlinear Hamiltonians is analyzed. This type of problem allows one to find invariants via a family of Veronese maps. The motivation to develop this method results from the observation that the Poisson-Lie algebra of monomials in the coordinates and momenta is clearly defined in terms of its brackets and, under certain circumstances, leads naturally to an infinite linear set of differential equations. To perform explicit analytic and numerical calculations, two examples are presented to estimate the trajectories, the first given by a nonlinear problem and the second by a quadratic Hamiltonian with three time-dependent parameters. In the nonlinear problem, the Veronese approach using jets is shown to be equivalent to a direct procedure using elliptic function identities, and linear invariants are constructed. For the second example, linear and quadratic invariants as well as stability conditions are given. Explicit solutions are also obtained for stepwise constant forces. For the quadratic Hamiltonian, an appropriate set of coordinates relates the geometric setting to that of the three-dimensional manifold of central conic sections. It is shown further that the quantum mechanical problem of scattering in a superlattice leads to mathematically equivalent equations for the wave function, if the classical time is replaced by the space coordinate along the superlattice. The mathematical method used to compute the trajectories for stepwise constant parameters can be applied to both problems. It is the standard method in quantum scattering calculations, as is known for locally periodic systems including a space-dependent effective mass.

  1. On non-autonomous dynamical systems

    NASA Astrophysics Data System (ADS)

    Anzaldo-Meneses, A.

    2015-04-01

    In usual realistic classical dynamical systems, the Hamiltonian depends explicitly on time. In this work, a class of classical systems with time-dependent nonlinear Hamiltonians is analyzed. This type of problem allows one to find invariants via a family of Veronese maps. The motivation to develop this method results from the observation that the Poisson-Lie algebra of monomials in the coordinates and momenta is clearly defined in terms of its brackets and, under certain circumstances, leads naturally to an infinite linear set of differential equations. To perform explicit analytic and numerical calculations, two examples are presented to estimate the trajectories, the first given by a nonlinear problem and the second by a quadratic Hamiltonian with three time-dependent parameters. In the nonlinear problem, the Veronese approach using jets is shown to be equivalent to a direct procedure using elliptic function identities, and linear invariants are constructed. For the second example, linear and quadratic invariants as well as stability conditions are given. Explicit solutions are also obtained for stepwise constant forces. For the quadratic Hamiltonian, an appropriate set of coordinates relates the geometric setting to that of the three-dimensional manifold of central conic sections. It is shown further that the quantum mechanical problem of scattering in a superlattice leads to mathematically equivalent equations for the wave function, if the classical time is replaced by the space coordinate along the superlattice. The mathematical method used to compute the trajectories for stepwise constant parameters can be applied to both problems. It is the standard method in quantum scattering calculations, as is known for locally periodic systems including a space-dependent effective mass.

  2. Disorders of the Autonomic Nervous System after Hemispheric Cerebrovascular Disorders: An Update

    PubMed Central

    Al-Qudah, Zaid A.; Yacoub, Hussam A.; Souayah, Nizar

    2015-01-01

    Autonomic and cardiac dysfunction may occur after vascular brain injury without any evidence of primary heart disease. During acute stroke, autonomic dysfunction, for example, elevated arterial blood pressure, arrhythmia, and ischemic cardiac damage, has been reported, which may worsen the prognosis. Autonomic dysfunction after a stroke may involve the cardiovascular, respiratory, sudomotor, and sexual systems, but the exact mechanism is not fully understood. In this review paper, we will discuss the anatomy and physiology of the autonomic nervous system and discuss the mechanism(s) suggested to cause autonomic dysfunction after stroke. We will further elaborate on the different cerebral regions involved in autonomic dysfunction complications of stroke. Autonomic nervous system modulation is emerging as a new therapeutic target for stroke management. Understanding the pathogenesis and molecular mechanism(s) of parasympathetic and sympathetic dysfunction after stroke will facilitate the implementation of preventive and therapeutic strategies to antagonize the clinical manifestation of autonomic dysfunction and improve the outcome of stroke. PMID:26576215

  3. Disorders of the Autonomic Nervous System after Hemispheric Cerebrovascular Disorders: An Update.

    PubMed

    Al-Qudah, Zaid A; Yacoub, Hussam A; Souayah, Nizar

    2015-10-01

    Autonomic and cardiac dysfunction may occur after vascular brain injury without any evidence of primary heart disease. During acute stroke, autonomic dysfunction, for example, elevated arterial blood pressure, arrhythmia, and ischemic cardiac damage, has been reported, which may worsen the prognosis. Autonomic dysfunction after a stroke may involve the cardiovascular, respiratory, sudomotor, and sexual systems, but the exact mechanism is not fully understood. In this review paper, we will discuss the anatomy and physiology of the autonomic nervous system and discuss the mechanism(s) suggested to cause autonomic dysfunction after stroke. We will further elaborate on the different cerebral regions involved in autonomic dysfunction complications of stroke. Autonomic nervous system modulation is emerging as a new therapeutic target for stroke management. Understanding the pathogenesis and molecular mechanism(s) of parasympathetic and sympathetic dysfunction after stroke will facilitate the implementation of preventive and therapeutic strategies to antagonize the clinical manifestation of autonomic dysfunction and improve the outcome of stroke. PMID:26576215

  4. Autonomic nervous system correlates in movement observation and motor imagery

    PubMed Central

    Collet, C.; Di Rienzo, F.; El Hoyek, N.; Guillot, A.

    2013-01-01

    The purpose of the current article is to provide a comprehensive overview of the literature, offering a better understanding of the autonomic nervous system (ANS) correlates of motor imagery (MI) and movement observation. These are two high-level brain functions involving sensorimotor coupling, mediated by memory systems. How observing or mentally rehearsing a movement affects ANS activity has not been extensively investigated. The links between cognitive functions and ANS responses are not obvious. We will first describe the organization of the ANS, whose main purposes are controlling vital functions by maintaining the homeostasis of the organism and providing adaptive responses when changes occur in either the external or internal milieu. We will then review how scientific knowledge has evolved, integrating recent findings related to ANS functioning, and show how these are linked to mental functions. In turn, we will describe how movement observation or MI may elicit physiological responses at the peripheral level of the autonomic effectors, producing autonomic correlates of cognitive activity. A key feature of this paper is to draw a step-by-step progression from the understanding of ANS physiology to its relationships with high-level mental processes such as movement observation or MI. We will further provide evidence that mental processes are co-programmed both at the somatic and autonomic levels of the central nervous system (CNS). We will thus detail how peripheral physiological responses may be analyzed to provide objective evidence that MI is actually performed. The main perspective is thus to consider that, during movement observation and MI, ANS activity is an objective witness of mental processes. PMID:23908623

  5. Flight Control System Development for the BURRO Autonomous UAV

    NASA Technical Reports Server (NTRS)

    Colbourne, Jason D.; Frost, Chad R.; Tischler, Mark B.; Ciolani, Luigi; Sahai, Ranjana; Tomoshofski, Chris; LaMontagne, Troy; Rutkowski, Michael (Technical Monitor)

    2000-01-01

    Developing autonomous flying vehicles has been a growing field in aeronautical research within the last decade and will continue into the next century. With concerns about the safety, size, and cost of manned aircraft, several autonomous vehicle projects are currently being developed; uninhabited rotorcraft offer solutions to requirements for hover, vertical take-off and landing, as well as slung-load transportation capabilities. The newness of the technology requires flight control engineers to question what design approaches, control law architectures, and performance criteria apply to control law development and handling quality evaluation. To help answer these questions, this paper documents the control law design process for Kaman Aerospace's BURRO project. This paper describes the approach taken to design control laws and develop math models that will be used to convert the manned K-MAX into the BURRO autonomous rotorcraft. Because the K-MAX can lift its own weight (6,000 lb), the load significantly affects the dynamics of the system; the paper addresses the additional design requirements for slung-load autonomous flight. The approach taken in this design was to: 1) generate accurate math models of the K-MAX helicopter with and without slung loads, 2) select design specifications that would deliver good performance as well as satisfy mission criteria, and 3) develop and tune the control system architecture to meet the design specifications and mission criteria. An accurate math model was desired for control system development. The Comprehensive Identification from Frequency Responses (CIFER(R)) software package was used to identify a linear math model for unloaded and loaded flight at hover, 50 kts, and 100 kts. The results of an eight degree-of-freedom CIFER(R)-identified linear model for the unloaded hover flight condition are presented herein, and the identification of the two-body slung-load configuration is in progress.
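
    The frequency-response identification step described above can be illustrated on a toy first-order model. This is a generic sketch of extracting model parameters from a frequency-response sample, not the CIFER(R) algorithm itself:

```python
def first_order_response(K, a, w):
    """Frequency response of G(s) = K / (s + a) evaluated at s = j*w."""
    return K / complex(a, w)

def identify_first_order(w, H):
    """Recover (K, a) of G(s) = K / (s + a) from a single noise-free
    frequency-response sample H = G(j*w), using 1/H = (a + j*w) / K."""
    inv = 1.0 / H
    K = w / inv.imag
    a = inv.real * K
    return K, a
```

    Real rotorcraft identification fits high-order multi-input models to many frequency points in a least-squares sense; the idea of matching a parametric model to measured frequency responses is the same.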

  6. Analysis of the development and the prospects about vehicular infrared night vision system

    NASA Astrophysics Data System (ADS)

    Li, Jing; Fan, Hua-ping; Xie, Zu-yun; Zhou, Xiao-hong; Yu, Hong-qiang; Huang, Hui

    2013-08-01

    Through the classification of vehicular infrared night vision systems and a comparison of mainstream vehicular infrared night vision products, we summarize the functions of vehicular infrared night vision systems: night vision, defogging, strong-light resistance, and biological recognition. The markets for vehicular infrared night vision systems in luxury cars and the fire protection industry are also analyzed. Finally, we conclude that vehicular infrared night vision systems will become essential active safety equipment, promoting both the night vision photoelectric industry and the automobile industry.

  7. Autonomous Control Capabilities for Space Reactor Power Systems

    SciTech Connect

    Wood, Richard T.; Neal, John S.; Brittain, C. Ray; Mullens, James A.

    2004-02-04

    The National Aeronautics and Space Administration's (NASA's) Project Prometheus, the Nuclear Systems Program, is investigating a possible Jupiter Icy Moons Orbiter (JIMO) mission, which would conduct in-depth studies of three of the moons of Jupiter by using a space reactor power system (SRPS) to provide energy for propulsion and spacecraft power for more than a decade. Terrestrial nuclear power plants rely upon varying degrees of direct human control and interaction for operations and maintenance over a forty to sixty year lifetime. In contrast, an SRPS is intended to provide continuous, remote, unattended operation for up to fifteen years with no maintenance. Uncertainties, rare events, degradation, and communications delays with Earth are challenges that SRPS control must accommodate. Autonomous control is needed to address these challenges and optimize the reactor control design. In this paper, we describe an autonomous control concept for generic SRPS designs. The formulation of an autonomous control concept, which includes identification of high-level functional requirements and generation of a research and development plan for enabling technologies, is among the technical activities that are being conducted under the U.S. Department of Energy's Space Reactor Technology Program in support of the NASA's Project Prometheus. The findings from this program are intended to contribute to the successful realization of the JIMO mission.

  8. Autonomous Control Capabilities for Space Reactor Power Systems

    NASA Astrophysics Data System (ADS)

    Wood, Richard T.; Neal, John S.; Brittain, C. Ray; Mullens, James A.

    2004-02-01

    The National Aeronautics and Space Administration's (NASA's) Project Prometheus, the Nuclear Systems Program, is investigating a possible Jupiter Icy Moons Orbiter (JIMO) mission, which would conduct in-depth studies of three of the moons of Jupiter by using a space reactor power system (SRPS) to provide energy for propulsion and spacecraft power for more than a decade. Terrestrial nuclear power plants rely upon varying degrees of direct human control and interaction for operations and maintenance over a forty to sixty year lifetime. In contrast, an SRPS is intended to provide continuous, remote, unattended operation for up to fifteen years with no maintenance. Uncertainties, rare events, degradation, and communications delays with Earth are challenges that SRPS control must accommodate. Autonomous control is needed to address these challenges and optimize the reactor control design. In this paper, we describe an autonomous control concept for generic SRPS designs. The formulation of an autonomous control concept, which includes identification of high-level functional requirements and generation of a research and development plan for enabling technologies, is among the technical activities that are being conducted under the U.S. Department of Energy's Space Reactor Technology Program in support of the NASA's Project Prometheus. The findings from this program are intended to contribute to the successful realization of the JIMO mission.

  9. Autonomous system for pathogen detection and identification

    NASA Astrophysics Data System (ADS)

    Belgrader, Philip; Benett, William J.; Bergman, Werner; Langlois, Richard G.; Mariella, Raymond P., Jr.; Milanovich, Fred P.; Miles, Robin R.; Venkateswaran, Kodumudi; Long, Gary; Nelson, William

    1999-01-01

    The purpose of this project is to build a prototype instrument that will, running unattended, detect, identify, and quantify BW agents. In order to accomplish this, we have chosen to start with the world's leading, proven assays for pathogens: surface-molecular recognition assays, such as antibody-based assays, implemented on a high-performance, identification (ID)-capable flow cytometer, and the polymerase chain reaction for nucleic-acid-based assays. With these assays, we must integrate the capability to: (1) collect samples from aerosols, water, or surfaces; (2) perform sample preparation prior to the assays; (3) incubate the prepared samples, if necessary, for a period of time; (4) transport the prepared, incubated samples to the assays; (5) perform the assays; (6) interpret and report the results of the assays. Issues such as reliability, sensitivity and accuracy, quantity of consumables, maintenance schedule, etc. must be addressed satisfactorily for the end user. The highest possible sensitivity and specificity of the assay must be combined with no false alarms. Today, we have assays that can, in under 30 minutes, detect and identify simulants for BW agents at concentrations of a few hundred colony-forming units per ml of solution. If the bio-aerosol sampler of this system collects 1000 l/min and concentrates the respirable particles into 1 ml of solution with 70 percent processing efficiency over a period of 5 minutes, then this translates to a detection/ID capability of under 0.1 agent-containing particle/liter of air.

  10. Autonomous system for pathogen detection and identification

    SciTech Connect

    Belgrader, P.; Benett, W.; Bergman, W.; Langlois, R.; Mariella, R.; Milanovich, F.; Miles, R.; Venkateswaran, K.; Long, G.; Nelson, W.

    1998-09-24

    The purpose of this project is to build a prototype instrument that will, running unattended, detect, identify, and quantify BW agents. In order to accomplish this, we have chosen to start with the world's leading, proven assays for pathogens: surface-molecular recognition assays, such as antibody-based assays, implemented on a high-performance, identification (ID)-capable flow cytometer, and the polymerase chain reaction (PCR) for nucleic-acid-based assays. With these assays, we must integrate the capability to: (1) collect samples from aerosols, water, or surfaces; (2) perform sample preparation prior to the assays; (3) incubate the prepared samples, if necessary, for a period of time; (4) transport the prepared, incubated samples to the assays; (5) perform the assays; (6) interpret and report the results of the assays. Issues such as reliability, sensitivity and accuracy, quantity of consumables, maintenance schedule, etc. must be addressed satisfactorily for the end user. The highest possible sensitivity and specificity of the assay must be combined with no false alarms. Today, we have assays that can, in under 30 minutes, detect and identify simulants for BW agents at concentrations of a few hundred colony-forming units per ml of solution. If the bio-aerosol sampler of this system collects 1000 l/min and concentrates the respirable particles into 1 ml of solution with 70% processing efficiency over a period of 5 minutes, then this translates to a detection/ID capability of under 0.1 agent-containing particle/liter of air.
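
    The closing detection/ID estimate can be checked directly from the stated numbers: an assay limit of a few hundred CFU/ml, a 1000 l/min sampler run for 5 minutes at 70% efficiency into 1 ml of solution. A short arithmetic check (200 CFU/ml stands in for "a few hundred"):

```python
def air_detection_limit(assay_limit_cfu_per_ml=200.0, flow_l_per_min=1000.0,
                        minutes=5.0, efficiency=0.7, eluate_ml=1.0):
    """Smallest airborne concentration (particles per liter of air) that the
    integrated sampler + assay can call, given the assay's liquid-phase limit."""
    particles_needed = assay_limit_cfu_per_ml * eluate_ml
    effective_liters_sampled = flow_l_per_min * minutes * efficiency
    return particles_needed / effective_liters_sampled
```

    With these numbers the limit is 200 / (1000 × 5 × 0.7) ≈ 0.057 agent-containing particles per liter of air, consistent with the stated "under 0.1".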

  11. Low light level CMOS sensor for night vision systems

    NASA Astrophysics Data System (ADS)

    Gross, Elad; Ginat, Ran; Nesher, Ofer

    2015-05-01

    For many years, image intensifier tubes were used for night vision systems. In 2014, Elbit Systems developed a digital low-light-level CMOS sensor with sensitivity similar to that of a Gen II image intensifier, down to starlight conditions. In this work we describe the basic principle behind this sensor, a physical model for low-light performance estimation, and results of field testing.

  12. System safety analysis of an autonomous mobile robot

    SciTech Connect

    Bartos, R.J.

    1994-08-01

    Analysis of the safety of operating and maintaining the Stored Waste Autonomous Mobile Inspector (SWAMI) II in a hazardous environment at the Fernald Environmental Management Project (FEMP) was completed. The SWAMI II is a version of a commercial robot, the HelpMate(TM) robot produced by the Transitions Research Corporation, which is being updated to incorporate the systems required for inspecting mixed toxic chemical and radioactive waste drums at the FEMP. It also has modified obstacle detection and collision avoidance subsystems. The robot will autonomously travel down the aisles in storage warehouses to record images of containers and collect other data, which are transmitted to an inspector at a remote computer terminal. A previous study showed that the SWAMI II is economically feasible. The SWAMI II will more accurately locate radioactive contamination than human inspectors. This thesis includes a System Safety Hazard Analysis and a quantitative Fault Tree Analysis (FTA). The objectives of the analyses are to prevent potentially serious events and to derive a comprehensive set of safety requirements against which the safety of the SWAMI II and other autonomous mobile robots can be evaluated. The Computer-Aided Fault Tree Analysis (CAFTA(C)) software is utilized for the FTA. The FTA shows that more than 99% of the safety risk occurs during maintenance, and that when the derived safety requirements are implemented the rate of serious events is reduced to below one event per million operating hours. Training and procedures in SWAMI II operation and maintenance provide an added safety margin. This study will promote the safe use of the SWAMI II and other autonomous mobile robots in the emerging technology of mobile robotic inspection.
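
    A quantitative fault tree analysis combines basic-event probabilities through AND/OR gates, typically assuming independent events. A minimal sketch; the gate structure and per-hour rates below are hypothetical illustrations, not SWAMI II's actual fault tree:

```python
def p_or(probs):
    """P(at least one event occurs) for independent events: 1 - prod(1 - p)."""
    out = 1.0
    for p in probs:
        out *= 1.0 - p
    return 1.0 - out

def p_and(probs):
    """P(all events occur) for independent events: prod(p)."""
    out = 1.0
    for p in probs:
        out *= p
    return out

# Hypothetical per-operating-hour basic-event probabilities for a mobile robot.
P_SENSOR_MISS = 1e-4    # obstacle sensor fails to detect
P_SOFTWARE_MISS = 1e-3  # avoidance software fails to react
P_ESTOP_FAIL = 1e-3     # emergency stop fails on demand

# Top event (collision): obstacle missed (sensor OR software) AND e-stop fails.
P_COLLISION = p_and([p_or([P_SENSOR_MISS, P_SOFTWARE_MISS]), P_ESTOP_FAIL])
```

    Propagating such probabilities up the tree is how a target like "below one serious event per million operating hours" is demonstrated once the derived safety requirements are in place.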

  13. Autonomous and Autonomic Swarms

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Truszkowski, Walter F.; Rouff, Christopher A.; Sterritt, Roy

    2005-01-01

    A watershed in systems engineering is represented by the advent of swarm-based systems that accomplish missions through cooperative action by a (large) group of autonomous individuals, each having simple capabilities and no global knowledge of the group's objective. Such systems, with individuals capable of surviving in hostile environments, pose unprecedented challenges to system developers. Design, testing, and verification at much higher levels will be required, together with the corresponding tools, to bring such systems to fruition. Concepts for possible future NASA space exploration missions include autonomous, autonomic swarms. Engineering swarm-based missions begins with understanding autonomy and autonomicity and how to design, test, and verify systems that have those properties and, simultaneously, the capability to accomplish prescribed mission goals. Formal methods-based technologies, both projected and in development, are described in terms of their potential utility to swarm-based system developers.

  14. A Laser-Based Vision System for Weld Quality Inspection

    PubMed Central

    Huang, Wei; Kovacevic, Radovan

    2011-01-01

    Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems have been studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence as well as the positions and sizes of weld defects can be accurately identified, and non-destructive weld quality inspection can therefore be achieved. PMID:22344308
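
    Laser triangulation recovers surface geometry from the lateral displacement of a projected laser stripe. A minimal sketch of the underlying geometry, assuming an idealized setup (camera viewing straight down, laser sheet inclined from vertical); this is not the paper's calibrated sensor model:

```python
import math

def height_from_shift(shift_mm, laser_angle_deg):
    """Surface height from the lateral shift of a laser stripe.

    Idealized geometry: the camera views the surface from directly above and
    the laser sheet is inclined laser_angle_deg from vertical, so a height
    change h displaces the stripe laterally by s = h * tan(angle)."""
    return shift_mm / math.tan(math.radians(laser_angle_deg))
```

    At a 45-degree projection angle, a 1 mm stripe shift corresponds to a 1 mm height change, e.g. the reinforcement or undercut of a weld bead; scanning the stripe along the seam yields the 3D profile the paper analyzes.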

  15. Multiple-Agent Air/Ground Autonomous Exploration Systems

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Chao, Tien-Hsin; Tarbell, Mark; Dohm, James M.

    2007-01-01

    Autonomous systems of multiple-agent air/ground robotic units for exploration of the surfaces of remote planets are undergoing development. Modified versions of these systems could be used on Earth to perform tasks in environments dangerous or inaccessible to humans: examples of tasks could include scientific exploration of remote regions of Antarctica, removal of land mines, cleanup of hazardous chemicals, and military reconnaissance. A basic system according to this concept (see figure) would include a unit, suspended by a balloon or a blimp, that would be in radio communication with multiple robotic ground vehicles (rovers) equipped with video cameras and possibly other sensors for scientific exploration. The airborne unit would be free-floating, controlled by thrusters, or tethered either to one of the rovers or to a stationary object in or on the ground. Each rover would contain a semi-autonomous control system for maneuvering and would function under the supervision of a control system in the airborne unit. The rover maneuvering control system would utilize imagery from the onboard camera to navigate around obstacles. Avoidance of obstacles would also be aided by readout from an onboard (e.g., ultrasonic) sensor. Together, the rover and airborne control systems would constitute an overarching closed-loop control system to coordinate scientific exploration by the rovers.

  16. Brain, mind, body and society: autonomous system in robotics.

    PubMed

    Shimoda, Motomu

    2013-12-01

    In this paper I examine issues related to robots with minds. Creating a robot with a mind aims to recreate neural function through engineering. A robot with a mind is expected not only to process external information with a built-in program and behave accordingly, but also to attain conscious activity that responds to multiple conditions, along with flexible, interactive communication skills for coping with unknown situations. That prospect is based on the development of artificial intelligence, in which self-organizing and self-emergent functions have become available in recent years. To date, controllable aspects in robotics have been restricted to data making and the programming of cognitive abilities, while consciousness activities and communication skills have been regarded as uncontrollable due to their contingency and uncertainty. However, some researchers in robotics claim that every activity of the mind can be recreated by engineering and is therefore controllable. Based on the development of the cognitive abilities of children and the findings of neuroscience, researchers have attempted to produce the latest artificial intelligence with autonomous learning systems. I conclude that controllability is inconsistent with autonomy in the genuine sense, and that autonomous robots recreated by engineering cannot be autonomous partners of humans. PMID:24558734

  17. The role of the autonomic nervous system in Tourette Syndrome

    PubMed Central

    Hawksley, Jack; Cavanna, Andrea E.; Nagai, Yoko

    2015-01-01

    Tourette Syndrome (TS) is a neurodevelopmental disorder consisting of multiple involuntary movements (motor tics) and one or more vocal (phonic) tics. It affects up to one percent of children worldwide, of whom about one third continue to experience symptoms into adulthood. The central neural mechanisms of tic generation are not clearly understood; however, recent neuroimaging investigations suggest impaired cortico-striato-thalamo-cortical activity during motor control. In the current manuscript, we tackle the relatively under-investigated role of the peripheral autonomic nervous system, and its central influences, on tic activity. There is emerging evidence that both sympathetic and parasympathetic nervous activity influence tic expression. Pharmacological treatments which act on sympathetic tone are often helpful: for example, Clonidine (an alpha-2 adrenoreceptor agonist) is often used as a first-choice medication for treating TS in children due to its good tolerability profile and potential usefulness for co-morbid attention-deficit and hyperactivity disorder. Clonidine suppresses sympathetic activity, reducing the triggering of motor tics. A general elevation of sympathetic tone is reported in patients with TS compared to healthy people; however, this observation may reflect transient responses coupled to tic activity. Thus, the presence of autonomic impairments in patients with TS remains unclear. The effect of autonomic afferent input on the cortico-striato-thalamo-cortical circuit will be discussed schematically. We additionally review how TS is affected by modulation of central autonomic control through biofeedback and Vagus Nerve Stimulation (VNS). Biofeedback training can enable a patient to gain voluntary control over covert physiological responses by making these responses explicit. Electrodermal biofeedback training to elicit a reduction in sympathetic tone has a demonstrated association with reduced tic frequency. VNS, achieved through an implanted device

  18. The role of the autonomic nervous system in Tourette Syndrome.

    PubMed

    Hawksley, Jack; Cavanna, Andrea E; Nagai, Yoko

    2015-01-01

    Tourette Syndrome (TS) is a neurodevelopmental disorder consisting of multiple involuntary movements (motor tics) and one or more vocal (phonic) tics. It affects up to one percent of children worldwide, of whom about one third continue to experience symptoms into adulthood. The central neural mechanisms of tic generation are not clearly understood; however, recent neuroimaging investigations suggest impaired cortico-striato-thalamo-cortical activity during motor control. In the current manuscript, we tackle the relatively under-investigated role of the peripheral autonomic nervous system, and its central influences, on tic activity. There is emerging evidence that both sympathetic and parasympathetic nervous activity influence tic expression. Pharmacological treatments which act on sympathetic tone are often helpful: for example, Clonidine (an alpha-2 adrenoreceptor agonist) is often used as a first-choice medication for treating TS in children due to its good tolerability profile and potential usefulness for co-morbid attention-deficit and hyperactivity disorder. Clonidine suppresses sympathetic activity, reducing the triggering of motor tics. A general elevation of sympathetic tone is reported in patients with TS compared to healthy people; however, this observation may reflect transient responses coupled to tic activity. Thus, the presence of autonomic impairments in patients with TS remains unclear. The effect of autonomic afferent input on the cortico-striato-thalamo-cortical circuit will be discussed schematically. We additionally review how TS is affected by modulation of central autonomic control through biofeedback and Vagus Nerve Stimulation (VNS). Biofeedback training can enable a patient to gain voluntary control over covert physiological responses by making these responses explicit. Electrodermal biofeedback training to elicit a reduction in sympathetic tone has a demonstrated association with reduced tic frequency. VNS, achieved through an implanted device

  19. The impact of changing night vision goggle spectral response on night vision imaging system lighting compatibility

    NASA Astrophysics Data System (ADS)

    Task, Harry L.; Marasco, Peter L.

    2004-09-01

    The defining document outlining night-vision imaging system (NVIS) compatible lighting, MIL-L-85762A, was written in the mid-1980s, based on what was then the state of the art in night vision and image intensification. Since that time there have been changes in photocathode sensitivity and in the minus-blue coatings applied to the objective lenses. Specifically, many aviation night-vision goggles (NVGs) in the Air Force are equipped with so-called "leaky green" or Class C objective lens coatings that provide a small amount of transmission around 545 nanometers so that displays that use a P-43 phosphor can be seen through the NVGs. However, current NVIS compatibility requirements documents have not been updated to include these changes. Documents that followed and replaced MIL-L-85762A (ASC/ENFC-96-01 and MIL-STD-3009) addressed aspects of then-current NVIS technology, but did little to change the actual content or NVIS radiance requirements set forth in the original MIL-L-85762A. This paper examines the impact of spectral response changes, introduced by changes in image tube parameters and objective lens minus-blue filters, on NVIS compatibility and NVIS radiance calculations. The possible impact on NVIS lighting requirements is also discussed. In addition, arguments are presented for revisiting NVIS radiometric unit conventions.
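
    NVIS radiance, the quantity at stake in these requirements documents, weights a source's spectral radiance by the NVG's relative spectral response and integrates over wavelength. A generic numerical sketch (the trapezoidal integration and example spectra are illustrative; the governing standards define the exact response curves and scaling):

```python
def nvis_radiance(wavelengths_nm, spectral_radiance, nvg_response):
    """Trapezoidal approximation of NR = integral of G(lambda) * N(lambda) dlambda:
    a source spectrum N weighted by an NVG relative spectral response G."""
    total = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dl = wavelengths_nm[i + 1] - wavelengths_nm[i]
        f0 = nvg_response[i] * spectral_radiance[i]
        f1 = nvg_response[i + 1] * spectral_radiance[i + 1]
        total += 0.5 * (f0 + f1) * dl
    return total
```

    Changing the minus-blue cutoff of a Class C objective reshapes G(lambda), so the same cockpit source yields a different NR value; this is the core of the compatibility question the paper raises.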

  20. 75 FR 71146 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ...''). 74 FR 34589-90 (July 16, 2009). The complaint alleged violations of section 337 of the Tariff Act of...) under review. 75 FR 60478-80 (September 30, 2010). On October 8 and 15, 2010, respectively, complainants... COMMISSION In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products...

  1. System control of an autonomous planetary mobile spacecraft

    NASA Technical Reports Server (NTRS)

    Dias, William C.; Zimmerman, Barbara A.

    1990-01-01

    The goal is to suggest the scheduling and control functions necessary for accomplishing mission objectives of a fairly autonomous interplanetary mobile spacecraft, while maximizing reliability. Goals are to provide an extensible, reliable system conservative in its use of on-board resources, while getting full value from subsystem autonomy and avoiding the lure of ground micromanagement. A functional layout consisting of four basic elements is proposed: GROUND and SYSTEM EXECUTIVE system functions and RESOURCE CONTROL and ACTIVITY MANAGER subsystem functions. The system executive includes six subfunctions: SYSTEM MANAGER, SYSTEM FAULT PROTECTION, PLANNER, SCHEDULE ADAPTER, EVENT MONITOR and RESOURCE MONITOR. The full configuration is needed for autonomous operation on the Moon or Mars, whereas a reduced version without the planning, schedule adaptation and event monitoring functions could be appropriate for lower-autonomy use on the Moon. An implementation concept is suggested which is conservative in its use of system resources and consists of modules combined with a network communications fabric. A language concept termed a scheduling calculus, for rapidly performing essential on-board schedule adaptation functions, is introduced.
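
    The two executive configurations described above can be sketched as follows (a hypothetical Python illustration; the class layout and interfaces are assumptions for exposition, not the authors' implementation):

```python
# Hypothetical sketch of the system-executive configurations described above.
# The subfunction names mirror the paper; the classes are illustrative.

class Subfunction:
    def __init__(self, name):
        self.name = name

class SystemExecutive:
    """On-board executive holding the six subfunctions listed in the text."""
    FULL = ["SYSTEM MANAGER", "SYSTEM FAULT PROTECTION", "PLANNER",
            "SCHEDULE ADAPTER", "EVENT MONITOR", "RESOURCE MONITOR"]
    # The reduced lunar configuration drops planning, schedule adaptation
    # and event monitoring.
    REDUCED = ["SYSTEM MANAGER", "SYSTEM FAULT PROTECTION", "RESOURCE MONITOR"]

    def __init__(self, full_autonomy=True):
        names = self.FULL if full_autonomy else self.REDUCED
        self.subfunctions = [Subfunction(n) for n in names]

# Full configuration for Mars; reduced configuration for lower-autonomy
# lunar use.
mars = SystemExecutive(full_autonomy=True)
moon = SystemExecutive(full_autonomy=False)
print(len(mars.subfunctions), len(moon.subfunctions))  # 6 3
```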

  2. Role of the autonomic nervous system in tumorigenesis and metastasis

    PubMed Central

    Magnon, Claire

    2015-01-01

    Convergence of multiple stromal cell types is required to develop a tumorigenic niche that nurtures the initial development of cancer and its dissemination. Although the immune and vascular systems have been shown to have strong influences on cancer, a growing body of evidence points to a role of the nervous system in promoting cancer development. This review discusses past and current research that shows the intriguing role of autonomic nerves, aided by neurotrophic growth factors and axon cues, in creating a favorable environment for the promotion of tumor formation and metastasis.

  3. Component-Oriented Behavior Extraction for Autonomic System Design

    NASA Technical Reports Server (NTRS)

    Bakera, Marco; Wagner, Christian; Margaria,Tiziana; Hinchey, Mike; Vassev, Emil; Steffen, Bernhard

    2009-01-01

    Rich and multifaceted domain-specific specification languages like the Autonomic System Specification Language (ASSL) help to design reliable systems with self-healing capabilities. The GEAR game-based model checker has been used successfully to investigate properties of the ESA ExoMars Rover in depth. We show here how to enable GEAR's game-based verification techniques for ASSL via systematic model extraction from a behavioral subset of the language, and illustrate it on a description of the Voyager II space mission.

  4. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three dimensional laser radar imagery for use with a robotic type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide to it needed information so it can fetch and grasp targets in a space-type scenario.

  5. Systems, methods and apparatus for quiescence of autonomic safety devices with self action

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Sterritt, Roy (Inventor)

    2011-01-01

    Systems, methods and apparatus are provided through which, in some embodiments, an autonomic environmental safety device may be quiesced. In at least one embodiment, a method for managing an autonomic safety device, such as a smoke detector, based on the functioning state and operating status of the device includes processing received signals from the device to obtain an analysis of its condition, generating one or more stay-awake signals based on that functioning state and operating status, transmitting the stay-awake signal, transmitting self health/urgency data, and transmitting environment health/urgency data. A quiesce component of an autonomic safety device can render the device inactive for a specific amount of time or until a challenging situation has passed.
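
    The stay-awake/quiesce behavior described above can be sketched as a small state machine (an illustrative Python sketch, not the patented implementation; the class, field names, and report format are assumptions):

```python
# Illustrative sketch of the quiesce decision described above: stay-awake
# signals are generated from the device's functioning state and operating
# status, and a quiesce request silences the device for a fixed interval.
import time

class AutonomicSafetyDevice:
    def __init__(self):
        self.functioning = True      # hardware self-test result
        self.operating = True        # currently monitoring its environment
        self.quiesced_until = 0.0    # epoch seconds

    def stay_awake_signal(self, now=None):
        """Emit a health report only when not quiesced; None while silenced."""
        now = time.time() if now is None else now
        if now < self.quiesced_until:
            return None
        if self.functioning and self.operating:
            return {"self_health": "OK", "environment_health": "OK"}
        return {"self_health": "DEGRADED", "environment_health": "UNKNOWN"}

    def quiesce(self, seconds, now=None):
        """Render the device inactive for a specific amount of time."""
        now = time.time() if now is None else now
        self.quiesced_until = now + seconds

detector = AutonomicSafetyDevice()
print(detector.stay_awake_signal(now=0.0))   # healthy report
detector.quiesce(60, now=0.0)
print(detector.stay_awake_signal(now=30.0))  # None: quiesced
print(detector.stay_awake_signal(now=90.0))  # reporting resumes
```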

  6. Draper Laboratory small autonomous aerial vehicle

    NASA Astrophysics Data System (ADS)

    DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.

    1997-06-01

    The Charles Stark Draper Laboratory, Inc. and students from the Massachusetts Institute of Technology and Boston University have cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture and subsystem designs for the entry. The entry represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, an inertial measurement unit, a sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground, where a ground-based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.

  7. Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Mount, Frances; Carreon, Patricia; Torney, Susan E.

    2001-01-01

    The Engineering and Mission Operations Directorates at NASA Johnson Space Center are combining laboratories and expertise to establish the Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations. This is a testbed for human centered design, development and evaluation of intelligent autonomous and assistant systems that will be needed for human exploration and development of space. This project will improve human-centered analysis, design and evaluation methods for developing intelligent software. This software will support human-machine cognitive and collaborative activities in future interplanetary work environments where distributed computer and human agents cooperate. We are developing and evaluating prototype intelligent systems for distributed multi-agent mixed-initiative operations. The primary target domain is control of life support systems in a planetary base. Technical approaches will be evaluated for use during extended manned tests in the target domain, the Bioregenerative Advanced Life Support Systems Test Complex (BIO-Plex). A spinoff target domain is the International Space Station (ISS) Mission Control Center (MCC). Products of this project include human-centered intelligent software technology, innovative human interface designs, and human-centered software development processes, methods and products. The testbed uses adjustable autonomy software and life support systems simulation models from the Adjustable Autonomy Testbed, to represent operations on the remote planet. Ground operations prototypes and concepts will be evaluated in the Exploration Planning and Operations Center (ExPOC) and Jupiter Facility.

  8. Development of a machine vision guidance system for automated assembly of space structures

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Sydow, P. Daniel

    1992-01-01

    The topics are presented in viewgraph form and include: automated structural assembly robot vision; machine vision requirements; vision targets and hardware; reflective efficiency; target identification; pose estimation algorithms; triangle constraints; truss node with joint receptacle targets; end-effector mounted camera and light assembly; vision system results from optical bench tests; and future work.

  9. Distributed autonomous systems: resource management, planning, and control algorithms

    NASA Astrophysics Data System (ADS)

    Smith, James F., III; Nguyen, ThanhVu H.

    2005-05-01

    Distributed autonomous systems, i.e., systems that have separated distributed components, each of which exhibits some degree of autonomy, are increasingly providing solutions to naval and other DoD problems. Recently developed control, planning and resource allocation algorithms for two types of distributed autonomous systems will be discussed. The first distributed autonomous system (DAS) to be discussed consists of a collection of unmanned aerial vehicles (UAVs) that are under fuzzy logic control. The UAVs fly and conduct meteorological sampling in a coordinated fashion determined by their fuzzy logic controllers to determine the atmospheric index of refraction. Once in flight no human intervention is required. A fuzzy planning algorithm determines the optimal trajectory, sampling rate and pattern for the UAVs and an interferometer platform while taking into account risk, reliability, priority for sampling in certain regions, fuel limitations, mission cost, and related uncertainties. The real-time fuzzy control algorithm running on each UAV will give the UAV limited autonomy, allowing it to change course immediately without consulting any commander, request other UAVs to help it, alter its sampling pattern and rate when observing interesting phenomena, or terminate the mission and return to base. The algorithms developed will be compared to a resource manager (RM) developed for another DAS problem related to electronic attack (EA). This RM is based on fuzzy logic and optimized by evolutionary algorithms. It allows a group of dissimilar platforms to use EA resources distributed throughout the group. For both DAS types significant theoretical and simulation results will be presented.
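
    The kind of onboard rule that raises a UAV's sampling rate over interesting phenomena can be sketched with a two-rule, Sugeno-style fuzzy controller (illustrative only; the abstract does not publish the authors' membership functions or rule base, so the shapes and rates below are assumptions):

```python
# Minimal fuzzy-logic sketch: a UAV raises its sampling rate when an
# observed refractivity gradient is "interesting". Membership functions
# and output rates are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sampling_rate(gradient):
    """Defuzzify two rules by weighted average (zeroth-order Sugeno)."""
    low  = tri(gradient, -0.5, 0.0, 0.5)   # ordinary conditions -> 1 Hz
    high = tri(gradient,  0.3, 1.0, 1.7)   # interesting phenomena -> 10 Hz
    if low + high == 0.0:
        return 1.0                          # default rate out of range
    return (low * 1.0 + high * 10.0) / (low + high)

print(sampling_rate(0.0))  # quiet air: 1 Hz
print(sampling_rate(1.0))  # strong gradient: 10 Hz
print(sampling_rate(0.4))  # blended rate in between
```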

  10. Supervised autonomous rendezvous and docking system technology evaluation

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.

    1991-01-01

    Technology for manned space flight is mature and has an extensive history of the use of man-in-the-loop rendezvous and docking, but there is no history of automated rendezvous and docking. Sensors exist that can operate in the space environment. The Shuttle radar can be used for ranges down to 30 meters, Japan and France are developing laser rangers, and considerable work is going on in the U.S. However, there is a need to validate a flight qualified sensor for the range of 30 meters to contact. The number of targets and illumination patterns should be minimized to reduce operation constraints with one or more sensors integrated into a robust system for autonomous operation. To achieve system redundancy, it is worthwhile to follow a parallel development of qualifying and extending the range of the 0-12 meter MSFC sensor and to simultaneously qualify the 0-30(+) meter JPL laser ranging system as an additional sensor with overlapping capabilities. Such an approach offers a redundant sensor suite for autonomous rendezvous and docking. The development should include the optimization of integrated sensory systems, packaging, mission envelopes, and computer image processing to mimic brain perception and real-time response. The benefits of the Global Positioning System in providing real-time positioning data of high accuracy must be incorporated into the design. The use of GPS-derived attitude data should be investigated further and validated.

  11. Recent CESAR (Center for Engineering Systems Advanced Research) research activities in sensor based reasoning for autonomous machines

    SciTech Connect

    Pin, F.G.; de Saussure, G.; Spelt, P.F.; Killough, S.M.; Weisbin, C.R.

    1988-01-01

    This paper describes recent research activities at the Center for Engineering Systems Advanced Research (CESAR) in the area of sensor based reasoning, with emphasis being given to their application and implementation on our HERMIES-IIB autonomous mobile vehicle. These activities, including navigation and exploration in a priori unknown and dynamic environments, goal recognition, vision-guided manipulation and sensor-driven machine learning, are discussed within the framework of a scenario in which an autonomous robot is asked to navigate through an unknown dynamic environment, explore, find and dock at a process control panel, read and understand the status of the panel's meters and dials, learn the functioning of the panel, and successfully manipulate the control devices of the panel to solve a maintenance emergency problem. A demonstration of the successful implementation of the algorithms on our HERMIES-IIB autonomous robot for resolution of this scenario is presented. Conclusions are drawn concerning the applicability of the methodologies to more general classes of problems, and implications for future work on sensor-driven reasoning for autonomous robots are discussed. 8 refs., 3 figs.

  12. Application of edge detection algorithm for vision guided robotics assembly system

    NASA Astrophysics Data System (ADS)

    Balabantaray, Bunil Kumar; Jha, Panchanand; Biswal, Bibhuti Bhusan

    2013-12-01

    Machine vision systems play a major role in making robotic assembly systems autonomous. Detection and identification of the correct part are important tasks which must be carefully performed by the vision system to initiate the process. This process consists of many sub-processes, wherein image capture, digitization, enhancement, etc. account for reconstructing the part for subsequent operations. Edge detection of the grabbed image therefore plays an important role in the entire image-processing activity, and one must choose the correct tool for the process with respect to the given environment. In this paper, a comparative study of edge detection algorithms for grasping objects in a robot assembly system is presented. The work was performed in MATLAB R2010a Simulink. Four algorithms are compared: the Canny, Roberts, Prewitt and Sobel edge detectors. An attempt has been made to find the best algorithm for the problem. It was found that the Canny edge detector gives the best result and minimum error for the intended task.
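
    The gradient operators being compared can be sketched in a few lines (a pure-Python illustration; the paper's experiments were run in MATLAB R2010a Simulink, and the toy image below is an assumption). Canny is omitted here because it adds smoothing, non-maximum suppression, and hysteresis stages beyond a single convolution:

```python
# Apply the Roberts, Prewitt and Sobel operators to the same toy image
# and report each operator's peak gradient magnitude.

def convolve(img, k):
    """Naive 'valid' 2-D correlation for small kernels (lists of lists)."""
    kh, kw = len(k), len(k[0])
    return [[sum(img[i + u][j + v] * k[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def edge_max(img, kx, ky):
    """Peak gradient magnitude over the image for one operator."""
    gx, gy = convolve(img, kx), convolve(img, ky)
    return max((gx[i][j] ** 2 + gy[i][j] ** 2) ** 0.5
               for i in range(len(gx)) for j in range(len(gx[0])))

KERNELS = {
    "roberts": ([[1, 0], [0, -1]], [[0, 1], [-1, 0]]),
    "prewitt": ([[-1, 0, 1]] * 3, [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]),
    "sobel":   ([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],
                [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]),
}

# Toy "part" image: a bright square on a dark background.
img = [[1.0 if 4 <= i < 12 and 4 <= j < 12 else 0.0 for j in range(16)]
       for i in range(16)]
for name, (kx, ky) in KERNELS.items():
    print(name, round(edge_max(img, kx, ky), 3))
```

    On this toy image the Sobel operator gives the strongest response, followed by Prewitt and Roberts; which detector is best in practice depends on noise in the given environment, which is the choice the paper investigates.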

  13. Autonomous Operation of the Nanosatellite URSA MAIOR Micropropulsion System

    NASA Astrophysics Data System (ADS)

    Santoni, F.

    Università degli Studi di Roma "La Sapienza", Scuola di Ingegneria Aerospaziale, Via Eudossiana 16, 00184. At Università di Roma "La Sapienza" a nanosatellite bus is under development, with a one-liter target volume and a one-kilogram target weight. This nanosatellite, called URSA MAIOR (Università di Roma "la SApienza" Micro Autonomous Imager in ORbit), has a micro camera on board to take pictures of the Earth. The nanosatellite is three-axis stabilized, using a micro momentum wheel, with magnetic coils for active nutation damping and pointing control. An experimental micropropulsion system is present on-board, together with the magnetic attitude control system. The design, construction and testing of the satellite are carried out by academic personnel and by students, who are directly involved in the whole process, as is the spirit of the microsatellite program at Università di Roma "La Sapienza". A few technological payloads are present on-board: an Earth imaging experiment, using a few-gram commercial-off-the-shelf microcamera; commercial Li-Ion batteries as the only energy storage device; and a microwheel, developed at our University laboratories, that provides attitude stabilization. In addition, a micropropulsion experiment is planned on-board. The Austrian company Mechatronic and INFM, an Italian research institute at Trieste, are developing a microthruster for nanosatellite applications. In the frame of a cooperation established between these two institutions and Università di Roma "La Sapienza", this newly developed hardware will be tested in orbit. The thruster consists basically of an integrated microvalve, built on a silicon chip, and a micronozzle, etched on the same silicon chip, to obtain supersonic expansion of the gas flow. The nominal thrust of the system is about 100 microN and the throat section is about 100 microns in diameter. The first phase in the construction of the microthruster has been the construction of the micronozzle on a silicon chip.

  14. 3D vision upgrade kit for the TALON robot system

    NASA Astrophysics Data System (ADS)

    Bodenhamer, Andrew; Pettijohn, Bradley; Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott

    2010-02-01

    In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation, evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo-cameras and a flat panel display, using only standard hardware, data and electrical connections existing on the TALON robot. Using both the 3D vision system and a standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven scenarios when using the 3D vision system. Operators were not only able to complete tasks quicker but, for six of seven scenarios, made fewer mistakes in their task execution. Subjective Soldier feedback was overwhelmingly in support of pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.

  15. A Model-Driven Architecture Approach for Modeling, Specifying and Deploying Policies in Autonomous and Autonomic Systems

    NASA Technical Reports Server (NTRS)

    Pena, Joaquin; Hinchey, Michael G.; Sterritt, Roy; Ruiz-Cortes, Antonio; Resinas, Manuel

    2006-01-01

    Autonomic Computing (AC), self-management based on high level guidance from humans, is increasingly gaining momentum as the way forward in designing reliable systems that hide complexity and conquer IT management costs. Effectively, AC may be viewed as Policy-Based Self-Management. The Model Driven Architecture (MDA) approach focuses on building models that can be transformed into code in an automatic manner. In this paper, we look at ways to implement Policy-Based Self-Management by means of models that can be converted to code using transformations that follow the MDA philosophy. We propose a set of UML-based models to specify autonomic and autonomous features along with the necessary procedures, based on modification and composition of models, to deploy a policy as an executing system.

  16. G2 Autonomous Control for Cryogenic Delivery Systems

    NASA Technical Reports Server (NTRS)

    Dito, Scott J.

    2014-01-01

    The Independent System Health Management-Autonomous Control (ISHM-AC) application development for cryogenic delivery systems is intended to create an expert system that will require minimal operator involvement and ultimately allow for complete autonomy when fueling a space vehicle in the time prior to launch. The G2-Autonomous Control project is the development of a model, simulation, and ultimately a working application that will control and monitor the cryogenic fluid delivery to a rocket for testing purposes. To develop this application, the project is using the programming language/environment Gensym G2. The environment is an all-inclusive application that allows development, testing, modeling, and finally operation of the unique application through graphical and programmatic methods. We have learned G2 through training classes and subsequent application development, and are now in the process of building the application that will soon be used in testing on cryogenic loading equipment here at the Kennedy Space Center Cryogenics Test Laboratory (CTL). The G2 ISHM-AC application will bring with it a safer and more efficient propellant loading system for future launches at Kennedy Space Center and eventually mobile launches from all over the world.

  17. Autonomous Pathogen Detection System - FY02 Annual Progress Report

    SciTech Connect

    Colston, B; Brown, S; Burris, K; Elkin, C; Hindson, B; Langlois, R; Masquelier, D; McBride, M; Metz, T; Nasarabadi, S; Makarewicz, T; Milznovich, F; Venkateswaran, K S; Visuri, S

    2002-11-11

    The objective of this project is to design, fabricate and field demonstrate a biological agent detection and identification capability, the Autonomous Pathogen Detection System (APDS). Integrating a flow cytometer and real-time polymerase chain reaction (PCR) detector with sample collection, sample preparation and fluidics will provide a compact, autonomously operating instrument capable of simultaneously detecting multiple pathogens and/or toxins. The APDS will operate in fixed locations, continuously monitoring air samples and automatically reporting the presence of specific biological agents. The APDS will utilize both multiplex immunoassays and nucleic acid assays to provide "quasi-orthogonal" multiple agent detection approaches to minimize false positives and increase the reliability of identification. Technical advances across several fronts must occur, however, to realize the full extent of the APDS. The end goal of a commercially available system for civilian biological weapon defense will be accomplished through three progressive generations of APDS instruments. The APDS is targeted for civilian applications in which the public is at high risk of exposure to covert releases of bioagent, such as major subway systems and other transportation terminals, large office complexes and convention centers. APDS is also designed to be part of a monitoring network of sensors integrated with command and control systems for wide-area monitoring of urban areas and major public gatherings. In this latter application there is potential that a fully developed APDS could add value to DoD monitoring architectures.

  18. The Spacecraft Emergency Response System (SERS) for Autonomous Mission Operations

    NASA Technical Reports Server (NTRS)

    Breed, Julia; Chu, Kai-Dee; Baker, Paul; Starr, Cynthia; Fox, Jeffrey; Baitinger, Mick

    1998-01-01

    Today, most mission operations are geared toward lowering cost through unmanned operations. 7-day/24-hour operations are reduced to either 5-day/8-hour operations or become totally autonomous, especially for deep-space missions. Proper and effective notification during a spacecraft emergency could mean success or failure for an entire mission. The Spacecraft Emergency Response System (SERS) is a tool designed for autonomous mission operations. The SERS automatically contacts on-call personnel as needed when crises occur, either on-board the spacecraft or within the automated ground systems. Plus, the SERS provides a group-ware solution to facilitate the work of the person(s) contacted. The SERS is independent of the spacecraft's automated ground system. It receives and catalogues reports from various ground system components in near real-time. Then, based on easily configurable parameters, the SERS determines whom, if anyone, should be alerted. Alerts may be issued via Sky-Tel 2-way pager, telephony, or e-mail. The alerted personnel can then review and respond to the spacecraft anomalies through the Netscape Internet Web Browser, or directly review and respond from the Sky-Tel 2-way pager.

  19. Verification and Validation of Model-Based Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Pecheur, Charles; Koga, Dennis (Technical Monitor)

    2001-01-01

    This paper presents a three year project (FY99 to FY01) on the verification and validation of model based autonomous systems. The topics include: 1) Project Profile; 2) Model-Based Autonomy; 3) The Livingstone MIR; 4) MPL2SMV; 5) Livingstone to SMV Translation; 6) Symbolic Model Checking; 7) From Livingstone Models to SMV Models; 8) Application In-Situ Propellant Production; 9) Closed-Loop Verification Principle; 10) Livingstone PathFinder (LPF); 11) Publications and Presentations; and 12) Future Directions. This paper is presented in viewgraph form.

  20. Autonomous, teleoperated, and shared control of robot systems

    SciTech Connect

    Anderson, R.J.

    1994-12-31

    This paper illustrates how different modes of operation such as bilateral teleoperation, autonomous control, and shared control can be described and implemented using combinations of modules in the SMART robot control architecture. Telerobotics modes are characterized by different "grids" of SMART icons, where each icon represents a portion of run-time code that implements a passive control law. By placing strict requirements on the module's input-output behavior and using scattering theory to develop a passive sampling technique, a flexible, expandable telerobot architecture is achieved. An automatic code generation tool for generating SMART systems is also described.

  1. 77 FR 16890 - Eighteenth Meeting: RTCA Special Committee 213, Enhanced Flight Visions Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-22

    ... Federal Aviation Administration Eighteenth Meeting: RTCA Special Committee 213, Enhanced Flight Visions... of Transportation (DOT). ACTION: Notice of meeting RTCA Special Committee 213, Enhanced Flight... public of the eighteenth meeting of RTCA Special Committee 213, Enhanced Flight Visions...

  2. Enhanced vision systems: results of simulation and operational tests

    NASA Astrophysics Data System (ADS)

    Hecker, Peter; Doehler, Hans-Ullrich

    1998-07-01

    Today's aircrews have to handle more and more complex situations. The most critical tasks in civil aviation are landing approaches and taxiing, and especially under bad weather conditions the crew has to handle a tremendous workload. Therefore, DLR's Institute of Flight Guidance has developed a concept for an enhanced vision system (EVS), which increases the performance and safety of the aircrew and provides comprehensive situational awareness. Some elements of this concept have been presented in previous contributions, e.g. the 'Simulation of Imaging Radar for Obstacle Detection and Enhanced Vision' by Doehler and Bollmeyer, 1996. The present paper gives an overview of DLR's enhanced vision concept and research approach, which consists of two main components: simulation and experimental evaluation. In a first step, the simulation environment for enhanced vision research with a pilot in the loop is introduced. An existing fixed-base flight simulator is supplemented by real-time simulations of imaging sensors, i.e. imaging radar and infrared. By applying methods of data fusion, an enhanced vision display is generated combining different levels of information, such as terrain model data, processed images acquired by sensors, aircraft state vectors and data transmitted via datalink. The second part of this contribution presents some experimental results. In cooperation with Daimler Benz Aerospace Sensorsystems Ulm, a test van and a test aircraft were equipped with a prototype of an imaging millimeter-wave radar. This sophisticated HiVision radar is up to now one of the most promising sensors for all-weather operations. Images acquired by this sensor are shown, as well as results of data fusion processes based on digital terrain models. The contribution is concluded by a short video presentation.

  3. Configuration assistant for versatile vision-based inspection systems

    NASA Astrophysics Data System (ADS)

    Huesser, Olivier; Huegli, Heinz

    2001-01-01

    Nowadays, vision-based inspection systems are present in many stages of the industrial manufacturing process. Their versatility, which permits them to accommodate a broad range of inspection requirements, is, however, limited by the time-consuming system setup performed at each production change. This work aims at providing a configuration assistant that helps speed up this system setup, considering the peculiarities of industrial vision systems. The pursued principle, which is to maximize the discriminating power of the features involved in the inspection decision, leads to an optimization problem based on a high-dimensional objective function. Several objective functions based on various metrics are proposed, their optimization being performed with the help of search heuristics such as genetic methods and simulated annealing. The experimental results obtained with an industrial inspection system are presented. They show the effectiveness of the presented approach, and validate the configuration assistant as well.
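
    The optimization principle can be illustrated with a toy simulated-annealing run over binary feature-selection masks (a hedged sketch: the sample data, the separability score, and the cooling schedule below are assumptions, not the paper's objective functions):

```python
# Simulated annealing over feature masks, maximizing a simple separability
# score between "good" and "defective" samples. All numbers are toy data.
import math
import random

GOOD = [(5.0, 1.0, 7.2), (5.2, 0.9, 7.0), (4.9, 1.1, 7.3)]
BAD  = [(5.1, 3.0, 7.1), (4.8, 3.2, 7.2), (5.3, 2.9, 7.0)]

def score(mask):
    """Sum of |class mean gaps| over selected features, minus a small
    penalty per feature kept (higher = more discriminating, cheaper)."""
    s = 0.0
    for f, on in enumerate(mask):
        if on:
            mg = sum(x[f] for x in GOOD) / len(GOOD)
            mb = sum(x[f] for x in BAD) / len(BAD)
            s += abs(mg - mb)
    return s - 0.05 * sum(mask)

def anneal(n_features=3, steps=200, temp=1.0, cool=0.98, seed=1):
    rng = random.Random(seed)
    mask = [1] * n_features
    cur_s = score(mask)
    best, best_s = mask[:], cur_s
    for _ in range(steps):
        cand = mask[:]
        cand[rng.randrange(n_features)] ^= 1          # flip one feature
        cs = score(cand)
        # Metropolis acceptance: always take improvements, sometimes worse.
        if cs > cur_s or rng.random() < math.exp((cs - cur_s) / temp):
            mask, cur_s = cand, cs
            if cs > best_s:
                best, best_s = cand[:], cs
        temp *= cool
    return best, best_s

mask, s = anneal()
print(mask, round(s, 3))  # feature index 1 (the large mean gap) is kept
```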

  4. Head-aimed vision system improves tele-operated mobility

    NASA Astrophysics Data System (ADS)

    Massey, Kent

    2004-12-01

    A head-aimed vision system greatly improves the situational awareness and decision speed for tele-operations of mobile robots. With head-aimed vision, the tele-operator wears a head-mounted display and a small three axis head-position measuring device. Wherever the operator looks, the remote sensing system "looks". When the system is properly designed, the operator's occipital lobes are "fooled" into believing that the operator is actually on the remote robot. The result is at least a doubling of: situational awareness, threat identification speed, and target tracking ability. Proper system design must take into account: precisely matching fields of view; optical gain; and latency below 100 milliseconds. When properly designed, a head-aimed system does not cause nausea, even with prolonged use.

  5. Multistrategy machine-learning vision system

    NASA Astrophysics Data System (ADS)

    Roberts, Barry A.

    1993-04-01

    Advances in the field of machine learning technology have yielded learning techniques with solid theoretical foundations that are applicable to the problems being encountered by object recognition systems. At Honeywell an object recognition system that works with high-level, symbolic, object features is under development. This system, named object recognition accomplished through combined learning expertise (ORACLE), employs both an inductive learning technique (i.e., conceptual clustering, CC) and a deductive technique (i.e., explanation-based learning, EBL) that are combined in a synergistic manner. This paper provides an overview of the ORACLE system, describes the machine learning mechanisms (EBL and CC) that it employs, and provides example results of system operation. The paper emphasizes the beneficial effect of integrating machine learning into object recognition systems.

  6. Crew and Display Concepts Evaluation for Synthetic / Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III

    2006-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor in civil aircraft accidents and to replicate the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, with the principal difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly, as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under the newly adopted FAA rules which provide operating credit for EVS. Overall, the experimental data showed that the integration and/or fusion of synthetic and enhanced vision technologies could provide significant improvements in situation awareness (SA), without concomitant increases in workload and display clutter, for both the pilot-flying and the pilot-not-flying.

  7. Non-autonomous lattice systems with switching effects and delayed recovery

    NASA Astrophysics Data System (ADS)

    Han, Xiaoying; Kloeden, Peter E.

    2016-09-01

    The long-term behavior of a class of non-autonomous lattice dynamical systems is investigated, where these have a diffusive nearest-neighbor interaction and discontinuous reaction terms with recoverable delays. This problem is of both biological and mathematical interest, due to its application in systems of excitable cells as well as general biological systems involving delayed recovery. The problem is formulated as an evolution inclusion with delays, and the existence of weak and strong solutions is established. It is then shown that the solutions generate a set-valued non-autonomous dynamical system and that this non-autonomous dynamical system possesses a non-autonomous global pullback attractor.

  8. A machine vision system for the calibration of digital thermometers

    NASA Astrophysics Data System (ADS)

    Vázquez-Fernández, Esteban; Dacal-Nieto, Angel; González-Jorge, Higinio; Martín, Fernando; Formella, Arno; Alvarez-Valado, Victor

    2009-06-01

    Automation is a key point in many industrial tasks such as calibration and metrology. In this context, machine vision has been shown to be a useful tool for automation support, especially when no other option is available. A system for the calibration of portable measurement devices has been developed. The system uses machine vision to obtain the numerical values shown by displays. A new approach based on human perception of digits, which works in parallel with other more classical classifiers, has been created. The results show the benefits of the system in terms of its usability and robustness, obtaining a success rate higher than 99% in display recognition. The system saves time and effort, and offers the possibility of scheduling calibration tasks without excessive attention by the laboratory technicians.
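The record does not detail its perception-based digit classifier, but the display-reading step it automates can be illustrated with a far simpler baseline: mapping thresholded seven-segment activations (as would come from per-segment regions of interest in the camera image) to digits. The segment ordering (a–g) and the two-digit example reading are illustrative assumptions:

```python
# Seven-segment truth table: which of segments (a, b, c, d, e, f, g)
# are lit for each decimal digit.
SEGMENTS = {
    (1, 1, 1, 1, 1, 1, 0): 0, (0, 1, 1, 0, 0, 0, 0): 1,
    (1, 1, 0, 1, 1, 0, 1): 2, (1, 1, 1, 1, 0, 0, 1): 3,
    (0, 1, 1, 0, 0, 1, 1): 4, (1, 0, 1, 1, 0, 1, 1): 5,
    (1, 0, 1, 1, 1, 1, 1): 6, (1, 1, 1, 0, 0, 0, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8, (1, 1, 1, 1, 0, 1, 1): 9,
}

def decode_display(segment_rows):
    """Map per-digit segment activations (e.g. from thresholded image
    regions) to the integer shown on the display."""
    return int("".join(str(SEGMENTS[tuple(row)]) for row in segment_rows))

# Hypothetical two-digit readout: segments for '5' then '0'.
reading = decode_display([(1, 0, 1, 1, 0, 1, 1), (1, 1, 1, 1, 1, 1, 0)])
```

A real calibration system would add the image-processing front end (locating the display, thresholding each segment region) ahead of this lookup.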

  9. Survey of computer vision-based natural disaster warning systems

    NASA Astrophysics Data System (ADS)

    Ko, ByoungChul; Kwak, Sooyeong

    2012-07-01

    With the rapid development of information technology, natural disaster prevention is growing as a new research field dealing with surveillance systems. To forecast and prevent the damage caused by natural disasters, the development of systems to analyze natural disasters using remote sensing, geographic information systems (GIS), and vision sensors has been receiving widespread interest over the last decade. This paper provides an up-to-date review of five different types of natural disasters and their corresponding warning systems using computer vision and pattern recognition techniques, such as wildfire smoke and flame detection, water-level detection for flood prevention, coastal-zone monitoring, and landslide detection. Finally, we conclude with some thoughts about future research directions.

  10. Fiber optic coherent laser radar 3D vision system

    SciTech Connect

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-12-31

    This coherent laser vision system (CLVS) will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame rate of one frame per second, with a range resolution of 1 mm over its 1.5-meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second, such as decontamination and decommissioning operations in which robotic systems are altering the scene, as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  11. A novel container truck locating system based on vision technology

    NASA Astrophysics Data System (ADS)

    He, Junji; Shi, Li; Mi, Weijian

    2008-10-01

    On a container dock, a container truck must be parked directly under the trolley of the container crane before a container is loaded onto (or unloaded from) it. However, parking the truck at the right position often takes nearly a minute because of the difficulty of aligning the truck with the trolley. A monocular machine vision system is designed to locate the moving container truck, give information about how far the truck needs to move forward or backward, and thereby help the driver park the truck quickly and accurately. With this system, time is saved and the efficiency of loading and unloading is increased. The mathematical model of the system is presented in detail, followed by the calibration method. Finally, experimental results verify the validity and precision of the locating system. The prominent characteristics of this system are that it is simple, easy to implement, low cost, and effective. Furthermore, this research verifies that a monocular vision system can recover 3D information provided that the length and width of a container are known, which greatly extends the function and application of monocular vision systems.
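The closing observation, that known container dimensions let a single camera recover 3D information, rests on the pinhole relation Z = f·W/w (range equals focal length times real width over image width). A minimal sketch, where the focal length and pixel measurement are made-up values rather than the paper's calibration results:

```python
def distance_from_known_width(focal_px, real_width_m, pixel_width):
    """Pinhole-camera range estimate: Z = f * W / w, with f in pixels,
    W the known physical extent, and w its measured image extent."""
    return focal_px * real_width_m / pixel_width

# A standard 20-ft container is about 6.06 m long; with an assumed
# 1000 px focal length and a 404 px image footprint, the truck would
# be estimated at 15 m range.
z = distance_from_known_width(1000.0, 6.06, 404.0)
```

With range known, the longitudinal offset between truck and trolley follows from the target's image position, which is the guidance signal the system gives the driver.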

  12. Radar system on a large autonomous vehicle for personnel avoidance

    NASA Astrophysics Data System (ADS)

    Silvious, Jerry; Wellman, Ron; Tahmoush, Dave; Clark, John

    2010-04-01

    The US Army Research Laboratory designed, developed and tested a novel switched beam radar system operating at 76 GHz for use in a large autonomous vehicle to detect and identify roadway obstructions including slowly-moving personnel. This paper discusses the performance requirements for the system to operate in an early collision avoidance mode to a range of 150 meters and at speeds of over 20 m/s. We report the measured capabilities of the system to operate in these modes under various conditions, such as rural and urban environments, and on various terrains, such as asphalt and grass. Finally, we discuss the range-Doppler map processing capabilities that were developed to correct for platform motion and identify roadway vehicles and personnel moving at 1 m/s or more along the path of the system.
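The 1 m/s personnel threshold quoted above can be related to the 76 GHz carrier through the two-way Doppler equation for a monostatic radar, f_d = 2v/λ. A small sketch of that arithmetic (the function name is ours, not the authors'):

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(radial_speed_mps, carrier_hz=76e9):
    """Two-way Doppler shift f_d = 2 * v / lambda for a monostatic radar."""
    wavelength = C / carrier_hz  # ~3.9 mm at 76 GHz
    return 2.0 * radial_speed_mps / wavelength

# A pedestrian moving at 1 m/s along the beam produces a shift of
# roughly half a kilohertz, well within reach of pulse-to-pulse
# (slow-time) FFT processing in a range-Doppler map.
fd = doppler_shift(1.0)
```

Platform motion adds its own Doppler offset across the whole map, which is why the processing described above must first compensate for vehicle speed before thresholding on target velocity.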

  13. Method and system for providing autonomous control of a platform

    NASA Technical Reports Server (NTRS)

    Seelinger, Michael J. (Inventor); Yoder, John-David (Inventor)

    2012-01-01

    The present application provides a system for enabling instrument placement from distances on the order of five meters, for example, and increases accuracy of the instrument placement relative to visually-specified targets. The system provides precision control of a mobile base of a rover and onboard manipulators (e.g., robotic arms) relative to a visually-specified target using one or more sets of cameras. The system automatically compensates for wheel slippage and kinematic inaccuracy ensuring accurate placement (on the order of 2 mm, for example) of the instrument relative to the target. The system provides the ability for autonomous instrument placement by controlling both the base of the rover and the onboard manipulator using a single set of cameras. To extend the distance from which the placement can be completed to nearly five meters, target information may be transferred from navigation cameras (used for long-range) to front hazard cameras (used for positioning the manipulator).

  14. Autonomously acquiring declarative and procedural knowledge for ICAT systems

    NASA Technical Reports Server (NTRS)

    Kovarik, Vincent J., Jr.

    1993-01-01

    The construction of Intelligent Computer Aided Training (ICAT) systems is critically dependent on the ability to define and encode knowledge. This knowledge engineering effort can be broadly divided into two categories: domain knowledge and expert or task knowledge. Domain knowledge refers to the physical environment or system with which the expert interacts. Expert knowledge consists of the set of procedures and heuristics employed by the expert in performing the task. Both areas present a significant bottleneck in the acquisition of knowledge for ICAT systems. This paper presents a research project in the area of autonomous knowledge acquisition using a passive observation concept. The system observes an expert and then generalizes the observations into production rules representing the domain expert's knowledge.

  15. Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

    PubMed Central

    2010-01-01

    Background Dexterous prosthetic hands that were developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and the thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands. Methods The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: 1) the user triggers the system and controls the orientation of the hand; 2) a high-level controller automatically selects the grasp type and size; and 3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used Cyberhand, attached to the forearm, to grasp and transport 18 objects placed at two different distances. Results The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size were different from the optimal ones, but they were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only). Conclusions The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and size). 

  16. Practical vision based degraded text recognition system

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Rapid growth and progress in the medical, industrial, security, and technology fields mean more and more consideration for the use of camera-based optical character recognition (OCR). Applying OCR to scanned documents is quite mature, and there are many commercial and research products available on this topic. These products achieve acceptable recognition accuracy and reasonable processing times, especially with trained software and constrained text characteristics. Even though the application space for OCR is huge, it is quite challenging to design a single system that is capable of performing automatic OCR for text embedded in an image irrespective of the application. Challenges for OCR systems include images taken under natural real-world conditions: surface curvature, text orientation, font, size, lighting conditions, and noise. These and many other conditions make it extremely difficult to achieve reasonable character recognition. Performance of conventional OCR systems drops dramatically as the degradation level of the text image quality increases. In this paper, a new recognition method is proposed to recognize solid or dotted-line degraded characters. The degraded text string is localized and segmented using a new algorithm. The new method was implemented and tested using a development framework system that is capable of performing OCR on camera-captured images. The framework allows parameter tuning of the image-processing algorithms based on a training set of camera-captured text images. Novel methods were used for enhancement, text localization, and segmentation, enabling a custom system that performs automatic OCR for different applications. The developed framework system includes new image enhancement, filtering, and segmentation techniques which enabled higher recognition accuracies, faster processing times, and lower energy consumption compared with the best state-of-the-art published systems.

  17. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance-metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions using the Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and the on-orbit Space Shuttle attitude controller.
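The FCM centroid and membership updates the AFLC architecture borrows follow the standard fuzzy c-means equations: u_ik = 1 / Σ_j (d_ik/d_jk)^(2/(m-1)) and c_i = Σ_k u_ik^m x_k / Σ_k u_ik^m. A one-iteration sketch for scalar data, illustrating those equations rather than reproducing the AFLC implementation:

```python
def fcm_update(points, centroids, m=2.0):
    """One fuzzy c-means iteration on 1-D data: recompute the
    membership matrix, then relocate the centroids."""
    eps = 1e-12  # guard against zero distance to a centroid
    # Membership: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
    u = []
    for x in points:
        d = [abs(x - c) + eps for c in centroids]
        row = [1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                         for j in range(len(centroids)))
               for i in range(len(centroids))]
        u.append(row)
    # Centroid: c_i = sum_k u_ik^m x_k / sum_k u_ik^m
    new_c = []
    for i in range(len(centroids)):
        num = sum((u[k][i] ** m) * points[k] for k in range(len(points)))
        den = sum(u[k][i] ** m for k in range(len(points)))
        new_c.append(num / den)
    return u, new_c

# Two well-separated 1-D clusters; centroids move to the cluster means.
pts = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
u, c = fcm_update(pts, [0.0, 5.0])
```

In AFLC the competitive ART-style stage picks or creates the candidate cluster first; only then are these FCM equations applied to refine it.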

  18. Development of a machine vision system for automated structural assembly

    NASA Technical Reports Server (NTRS)

    Sydow, P. Daniel; Cooper, Eric G.

    1992-01-01

    Research is being conducted at the LaRC to develop a telerobotic assembly system designed to construct large space truss structures. This research program was initiated within the past several years, and a ground-based test-bed was developed to evaluate and expand the state of the art. Test-bed operations currently use predetermined ('taught') points for truss structural assembly. Total dependence on the use of taught points for joint receptacle capture and strut installation is neither robust nor reliable enough for space operations. Therefore, a machine vision sensor guidance system is being developed to locate and guide the robot to a passive target mounted on the truss joint receptacle. The vision system hardware includes a miniature video camera, passive targets mounted on the joint receptacles, target illumination hardware, and an image processing system. Discrimination of the target from background clutter is accomplished through standard digital processing techniques. Once the target is identified, a pose estimation algorithm is invoked to determine the location, in three-dimensional space, of the target relative to the robot's end-effector. Preliminary test results of the vision system in the Automated Structural Assembly Laboratory with a range of lighting and background conditions indicate that it is fully capable of successfully identifying joint receptacle targets throughout the required operational range. Controlled optical bench tests indicate that the system can also provide the pose estimation accuracy needed to define the target position.

  19. Novel Corrosion Sensor for Vision 21 Systems

    SciTech Connect

    Heng Ban; Bharat Soni

    2007-03-31

    Advanced sensor technology is identified as a key component of advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high-temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the leading mechanism of boiler tube failures and has emerged as a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of the corrosion rate. However, short-term, on-line monitoring of fireside corrosion remains a technical challenge to date due to the extremely harsh combustion environment. The overall goal of this project is to develop a technology for on-line fireside corrosion monitoring. This objective is achieved by laboratory development of sensors and instrumentation, testing them in a laboratory muffle furnace, and eventually testing the system in a coal-fired furnace. This project successfully developed two types of sensors and measurement systems and successfully tested them in a muffle furnace in the laboratory. The capacitance sensor had a high fabrication cost and might be more appropriate for other applications. The low-cost resistance sensor was tested in a power plant burning eastern bituminous coals. The results show that the fireside corrosion measurement system can be used to determine the corrosion rate at waterwall and superheater locations. Electron microscope analysis of the corroded sensor surface provided a detailed picture of the corrosion process.

  20. Novel Corrosion Sensor for Vision 21 Systems

    SciTech Connect

    Heng Ban

    2005-12-01

    Advanced sensor technology is identified as a key component of advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high-temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the metal loss caused by chemical reactions on surfaces exposed to the combustion environment. Such corrosion is the leading mechanism of boiler tube failures and has emerged as a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of the corrosion rate. However, short-term, on-line monitoring of fireside corrosion remains a technical challenge to date due to the extremely harsh combustion environment. The overall objective of this project is to develop a technology for on-line corrosion monitoring based on a new concept. This objective is to be achieved by laboratory development of the sensor and instrumentation, testing of the measurement system in a laboratory muffle furnace, and eventually testing the system in a coal-fired furnace. The initial plan for testing at the coal-fired pilot-scale furnace was replaced by testing in a power plant, because operation at the power plant is continuous and more stable. The first two-year effort was completed with the successful development of the sensor and measurement system and successful testing in a muffle furnace. Because of the potentially high cost of sensor fabrication, a different type of sensor was used and tested in a power plant burning eastern bituminous coals. This report summarizes the experiences and results of the first two years of the three-year project, including the laboratory work.

  1. Development of a distributed vision system for industrial conditions

    NASA Astrophysics Data System (ADS)

    Weiss, Michael; Schiller, Arnulf; O'Leary, Paul; Fauster, Ewald; Schalk, Peter

    2003-04-01

    This paper presents a prototype system to monitor quality-relevant aspects of hot glowing wire during the rolling process. To this end, a measurement system based on machine vision and a communication framework integrating distributed measurement nodes is introduced. Machine vision is used to evaluate the wire quality parameters; an image processing algorithm, based on dual Grassmannian coordinates and fitting parallel lines by singular value decomposition, is formulated for this purpose. Furthermore, a communication framework is presented which implements anonymous tuplespace communication, a private network based on TCP/IP, and a consistent Java implementation of all components used. Additionally, industrial requirements such as real-time communication to IEC-61131-conformant digital IOs (Modbus TCP/IP protocol), the implementation of a watchdog pattern, and the integration of multiple operating systems (Linux, QNX, and Windows) are outlined. The deployment of this framework to the real-world problem of the wire rolling mill is presented.
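The SVD-based parallel-line fitting in dual Grassmannian coordinates is not reproduced here, but for a single 2-D line the same total-least-squares idea reduces to taking the minor eigenvector of the point covariance as the line normal, which has a closed form. A sketch under that simplification:

```python
import math

def fit_line_tls(points):
    """Total-least-squares line fit in 2-D (the scalar special case of
    an SVD fit): the line direction is the major eigenvector of the
    centered point covariance matrix [[sxx, sxy], [sxy, syy]]."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    # Closed-form angle of the major eigenvector of a symmetric 2x2 matrix.
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return (mx, my), (math.cos(theta), math.sin(theta))

# Noisy points along y = x; the fitted direction is ~(0.707, 0.707).
center, direction = fit_line_tls([(0, 0), (1, 1.01), (2, 1.99), (3, 3.0)])
```

For wire-edge measurement, two such fits constrained to share a direction give the parallel-line pair whose separation is the wire diameter, which is where the full SVD formulation of the paper comes in.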

  2. A vision fusion treatment system based on ATtiny26L

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqing; Zhang, Chunxi; Wang, Jiqiang

    2006-11-01

    Vision fusion treatment is an important and effective therapy for children with strabismus. A vision fusion treatment system based on the principle of the eyeballs following a moving visual survey pole is put forward. In this system, the visual survey pole starts about 35 centimeters from the patient's face before moving to the middle position between the two eyes. The patient's eyeballs follow the movement of the visual survey pole; when they cannot follow, one or both eyeballs turn to a position other than the visual survey pole, and this displacement is recorded each time. A popular single-chip microcomputer, the ATtiny26L, is used in this system; its PWM output signal controls the visual survey pole to move with continuously variable speed. The movement of the visual survey pole is designed according to the law by which the eyeballs follow it.

  3. Low Cost Night Vision System for Intruder Detection

    NASA Astrophysics Data System (ADS)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionality as well as lower costs. This has made previously more expensive systems, such as night vision, affordable for more businesses and end users. We designed and implemented robust, low-cost night vision systems based on red-green-blue (RGB) colour histograms for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel-compatible notebook computers running the Ubuntu Linux operating system with less than 8 GB of RAM. The systems were tested against human intruders under low-light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
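A histogram-based intruder detector of the kind this record describes can be sketched as a comparison between background and current-frame colour histograms; a large distance flags a scene change. The bin count, pixel values, and L1 distance below are our illustrative assumptions (the authors used OpenCV; this sketch is dependency-free):

```python
def rgb_histogram(pixels, bins=8):
    """Coarse RGB colour histogram over (r, g, b) byte triples,
    normalised to unit sum."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        i = ((r * bins // 256) * bins * bins
             + (g * bins // 256) * bins
             + (b * bins // 256))
        hist[i] += 1
    total = float(len(pixels))
    return [h / total for h in hist]

def intruder_score(background, frame, bins=8):
    """L1 distance between background and current-frame histograms;
    a value above some tuned threshold would raise an alert."""
    hb = rgb_histogram(background, bins)
    hf = rgb_histogram(frame, bins)
    return sum(abs(a - b) for a, b in zip(hb, hf))

# Toy frames: a uniformly dark scene, then the same scene with 20% of
# pixels replaced by a brighter figure.
dark = [(10, 10, 10)] * 100
with_person = [(10, 10, 10)] * 80 + [(120, 100, 90)] * 20
score = intruder_score(dark, with_person)
```

Because the histogram discards pixel positions, this approach is cheap enough for the sub-8 GB notebook hardware mentioned above, at the cost of not localising the intruder within the frame.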

  4. Sensing, Control, and System Integration for Autonomous Vehicles: A Series of Challenges

    NASA Astrophysics Data System (ADS)

    Özgüner, Ümit; Redmill, Keith

    One of the important examples of mechatronic systems can be found in autonomous ground vehicles. Autonomous ground vehicles provide a series of challenges in sensing, control and system integration. In this paper we consider off-road autonomous vehicles, automated highway systems and urban autonomous driving and indicate the unifying aspects. We specifically consider our own experience during the last twelve years in various demonstrations and challenges in attempting to identify unifying themes. Such unifying themes can be observed in basic hierarchies, hybrid system control approaches and sensor fusion techniques.

  5. Forest fire autonomous decision system based on fuzzy logic

    NASA Astrophysics Data System (ADS)

    Lei, Z.; Lu, Jianhua

    2010-11-01

    The proposed system integrates GPS/pseudolite/IMU sensors and a thermal camera in order to autonomously process imagery through identification, extraction, and tracking of forest fires or hot spots. The airborne detection platform, the graph-based algorithms, and the signal processing framework are analyzed in detail; in particular, the rules of the decision function are expressed in terms of fuzzy logic, which is an appropriate method for expressing imprecise knowledge. The membership functions and weights of the rules are fixed through a supervised learning process. The perception system is based on a network of sensorial stations and central stations. The sensorial stations collect data including infrared and visual images and meteorological information; the central stations exchange data to perform distributed analysis. The experimental results show that the working procedure of the detection system is reasonable and that it can accurately output detection alarms and the computation of infrared oscillations.
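Fuzzy rule evaluation of the kind described, with min for conjunction of antecedents and max for aggregating rules, can be sketched as follows. The membership shapes, thresholds, and rule set are hypothetical stand-ins, not the supervised-learned values of the paper:

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fire_alarm_degree(temperature, smoke_density):
    """Toy two-rule fuzzy decision function (all thresholds assumed):
    alarm degree = max over rules of min over each rule's antecedents."""
    hot = triangular(temperature, 40, 80, 120)
    smoky = triangular(smoke_density, 0.2, 0.6, 1.0)
    rule1 = min(hot, smoky)                         # IF hot AND smoky THEN alarm
    rule2 = triangular(temperature, 90, 120, 150)   # IF very hot THEN alarm
    return max(rule1, rule2)

# A warm, moderately smoky reading yields a 0.75 alarm degree.
degree = fire_alarm_degree(70, 0.5)
```

In the described system the supervised learning stage would tune both the triangle parameters and the per-rule weights; the inference skeleton stays the same.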

  6. Forest fire autonomous decision system based on fuzzy logic

    NASA Astrophysics Data System (ADS)

    Lei, Z.; Lu, Jianhua

    2009-09-01

    The proposed system integrates GPS/pseudolite/IMU sensors and a thermal camera in order to autonomously process imagery through identification, extraction, and tracking of forest fires or hot spots. The airborne detection platform, the graph-based algorithms, and the signal processing framework are analyzed in detail; in particular, the rules of the decision function are expressed in terms of fuzzy logic, which is an appropriate method for expressing imprecise knowledge. The membership functions and weights of the rules are fixed through a supervised learning process. The perception system is based on a network of sensorial stations and central stations. The sensorial stations collect data including infrared and visual images and meteorological information; the central stations exchange data to perform distributed analysis. The experimental results show that the working procedure of the detection system is reasonable and that it can accurately output detection alarms and the computation of infrared oscillations.

  7. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
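Differential evolution, one of the two algorithms named above, can be sketched in a few lines: each population member is challenged by a trial vector built from three random peers, and the better of the two survives. The DE/rand/1/bin variant and the toy sphere objective below are conventional illustrations, not the authors' simulators:

```python
import random

def differential_evolution(f, bounds, pop=20, F=0.6, CR=0.9, gens=100, seed=1):
    """Minimal DE/rand/1/bin minimiser over box-constrained parameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    fit = [f(x) for x in X]
    for _ in range(gens):
        for i in range(pop):
            # Mutation: combine three distinct peers a + F * (b - c).
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            jr = rng.randrange(dim)  # force at least one mutated dimension
            trial = [X[a][d] + F * (X[b][d] - X[c][d])
                     if (rng.random() < CR or d == jr) else X[i][d]
                     for d in range(dim)]
            # Clip to bounds, then greedy one-to-one selection.
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft < fit[i]:
                X[i], fit[i] = trial, ft
    k = min(range(pop), key=fit.__getitem__)
    return X[k], fit[k]

# Toy objective standing in for a computationally modeled system.
sphere = lambda x: sum(v * v for v in x)
best, val = differential_evolution(sphere, [(-5, 5)] * 3)
```

In the ECM setting described above, `f` would invoke an engineering simulator, and the per-generation evaluations are what get farmed out to the parallel cluster.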

  8. The Systemic Vision of the Educational Learning

    ERIC Educational Resources Information Center

    Lima, Nilton Cesar; Penedo, Antonio Sergio Torres; de Oliveira, Marcio Mattos Borges; de Oliveira, Sonia Valle Walter Borges; Queiroz, Jamerson Viegas

    2012-01-01

    As the sophistication of technology has increased, so has the demand for quality in education. The expectation of quality has promoted a broad range of products and systems, including in education. These factors include the increased diversity in the student body, which requires greater emphasis that allows a simple and dynamic model in the…

  9. NOVEL CORROSION SENSOR FOR VISION 21 SYSTEMS

    SciTech Connect

    Heng Ban

    2004-12-01

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the metal loss caused by chemical reactions on surfaces exposed to the combustion environment. Such corrosion is the leading mechanism for boiler tube failures and has emerged to be a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall objective of this proposed project is to develop a technology for on-line corrosion monitoring based on a new concept. This report describes the initial results from the first-year effort of the three-year study that include laboratory development and experiment, and pilot combustor testing.

  10. Displacement measurement system for inverters using computer micro-vision

    NASA Astrophysics Data System (ADS)

    Wu, Heng; Zhang, Xianmin; Gan, Jinqiang; Li, Hai; Ge, Peng

    2016-06-01

    We propose a practical system for noncontact displacement measurement of inverters using computer micro-vision at the sub-micron scale. The measuring method is based on a fast template matching algorithm combined with optical microscopy. A laser interferometer measurement (LIM) system was built for comparison. Experimental results demonstrate that the proposed system achieves the same performance as the LIM system while offering greater operability and stability. The measuring accuracy is 0.283 μm.
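The paper's fast template matching algorithm is not specified in this record; the baseline it accelerates is exhaustive normalized cross-correlation (NCC), shown here in 1-D for brevity (the 2-D case slides the template over both axes the same way):

```python
def ncc(a, b):
    """Normalized cross-correlation between two equal-length patches;
    1.0 means a perfect (affine-brightness-invariant) match."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def match_template(row, template):
    """Best template position by exhaustive sliding NCC."""
    best_i, best_s = 0, -2.0
    for i in range(len(row) - len(template) + 1):
        s = ncc(row[i:i + len(template)], template)
        if s > best_s:
            best_i, best_s = i, s
    return best_i, best_s

# The template's intensity profile appears at offset 2 of the signal.
signal = [0, 0, 1, 3, 7, 3, 1, 0, 0, 0]
pos, score = match_template(signal, [1, 3, 7, 3, 1])
```

Sub-micron displacement resolution comes from refining the integer peak position to sub-pixel accuracy (e.g. by fitting a parabola to the correlation peak), then scaling by the microscope's calibrated pixel size.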

  11. Telerobotic rendezvous and docking vision system architecture

    NASA Technical Reports Server (NTRS)

    Gravely, Ben; Myers, Donald; Moody, David

    1992-01-01

    This research program has successfully demonstrated a new target label architecture that allows a microcomputer to determine the position, orientation, and identity of an object. It contains a CAD-like database with specific geometric information about the object for approach, grasping, and docking maneuvers. Successful demonstrations were performed selecting and docking an ORU box with either of two ORU receptacles. Small, but significant differences were seen in the two camera types used in the program, and camera sensitive program elements have been identified. The software has been formatted into a new co-autonomy system which provides various levels of operator interaction and promises to allow effective application of telerobotic systems while code improvements are continuing.

  12. A Vision For A Land Observing System

    NASA Astrophysics Data System (ADS)

    Lewis, P.; Gomez-Dans, J.; Disney, M.

    2013-12-01

    In this paper, we argue that the exploitation of EO land surface data for modelling and monitoring would be greatly facilitated by the routine generation of interoperable low-level surface bidirectional reflectance factor (BRF) products. We consider evidence from a range of ESA, NASA and other products and studies as well as underlying research to outline the features such a processing system might have, and to define initial research priorities.

  13. Autonomous Flight Safety System September 27, 2005, Aircraft Test

    NASA Technical Reports Server (NTRS)

    Simpson, James C.

    2005-01-01

    This report describes the first aircraft test of the Autonomous Flight Safety System (AFSS). The test was conducted on September 27, 2005, near Kennedy Space Center (KSC) using a privately-owned single-engine plane and evaluated the performance of several basic flight safety rules using real-time data onboard a moving aerial vehicle. This test follows the first road test of AFSS, conducted in February 2005 at KSC. AFSS is a joint KSC and Wallops Flight Facility (WFF) project that is in its third phase of development. AFSS is an independent subsystem intended for use with Expendable Launch Vehicles that uses tracking data from redundant onboard sensors to autonomously make flight termination decisions using software-based rules implemented on redundant flight processors. The goals of this project are to increase capabilities by allowing launches from locations that do not have or cannot afford extensive ground-based range safety assets, to decrease range costs, and to decrease reaction time for special situations. The mission rules are configured for each operation by the responsible Range Safety authorities and can be loosely grouped into four major categories: Parameter Threshold Violations; Physical Boundary Violations (present position and instantaneous impact point); Gate Rules (static and dynamic); and a Green-Time Rule. Examples of each of these rules were evaluated during this aircraft test.
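
    Two of these rule categories reduce to simple geometric predicates. A hedged sketch (names and the planar ray-casting simplification are ours, not the AFSS implementation, which operates on geodetic coordinates):

```python
def threshold_violation(value, limit):
    """Parameter Threshold rule: flag when a tracked quantity
    (e.g. altitude or speed) exceeds its configured limit."""
    return value > limit

def inside_boundary(point, polygon):
    """Physical Boundary rule: ray-casting point-in-polygon test of the
    vehicle's present position against a 2-D boundary polygon
    (coordinates treated as planar for short ranges)."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside
```

    In a flight-safety context the same test would also be applied to the instantaneous impact point, not just the present position.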

  14. Extracting depth by binocular stereo in a robot vision system

    SciTech Connect

    Marapane, S.B.; Trivedi, M.M.

    1988-01-01

    A new generation of robotic systems will operate in complex, unstructured environments utilizing sophisticated sensory mechanisms. Vision and range will be two of the most important sensory modalities such a system will utilize to sense its operating environment. Measurement of depth is critical for the success of many robotic tasks such as object recognition and location, obstacle avoidance and navigation, and object inspection. In this paper we consider the development of a binocular stereo technique for extracting depth information in a robot vision system for inspection and manipulation tasks. The ability to produce precise depth measurements over a wide range of distances and the passivity of the approach make binocular stereo techniques attractive and appropriate for range finding in a robotic environment. This paper describes work in progress towards the development of a region-based binocular stereo technique for a robot vision system designed for inspection and manipulation, and presents preliminary experiments designed to evaluate the performance of the approach. Results of these studies show promise for the region-based stereo matching approach. 16 refs., 1 fig.

  15. Low Temperature Shape Memory Alloys for Adaptive, Autonomous Systems Project

    NASA Technical Reports Server (NTRS)

    Falker, John; Zeitlin, Nancy; Williams, Martha; Benafan, Othmane; Fesmire, James

    2015-01-01

    The objective of this joint activity between Kennedy Space Center (KSC) and Glenn Research Center (GRC) is to develop and evaluate the applicability of 2-way SMAs in proof-of-concept, low-temperature adaptive autonomous systems. As part of this low technology readiness level (TRL) activity, we will develop and train novel low-temperature, 2-way shape memory alloys (SMAs) with actuation temperatures ranging from 0 C to 150 C. These experimental alloys will also undergo preliminary testing to evaluate their performance parameters and transformation (actuation) temperatures in low-temperature or cryogenic adaptive proof-of-concept systems. The challenge will be in the development, design, and training of the alloys for 2-way actuation at those temperatures.

  16. High accuracy autonomous navigation using the global positioning system (GPS)

    NASA Technical Reports Server (NTRS)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to improving the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicate that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. The position accuracy is expected to improve to 2 m if corrections are provided by the GPS wide area augmentation system.

  17. A Vision System For A Mars Rover

    NASA Astrophysics Data System (ADS)

    Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.

    1987-01-01

    A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, and these form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system are described.
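
    The slope/roughness traversability idea can be sketched with crude per-cell statistics over an elevation patch (the statistics and thresholds below are illustrative assumptions, not JPL's actual formulation):

```python
def cell_stats(heights, cell_m):
    """Crude slope (total rise over the cell width) and roughness
    (standard deviation of heights) for one square terrain patch,
    given as a 2-D list of elevations in metres."""
    flat = [h for row in heights for h in row]
    mean = sum(flat) / len(flat)
    rough = (sum((h - mean) ** 2 for h in flat) / len(flat)) ** 0.5
    slope = (max(flat) - min(flat)) / cell_m
    return slope, rough

def traversable(heights, cell_m, max_slope=0.3, max_rough=0.05):
    """Mark a cell traversable when both statistics are within limits."""
    slope, rough = cell_stats(heights, cell_m)
    return slope <= max_slope and rough <= max_rough
```

    A grid of such per-cell verdicts is one simple form the traversability map for local path planning could take; a production system would fit planes rather than use min/max spans.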

  18. Global Positioning System (GPS) advances in autonomous user system (Norway demonstration)

    NASA Astrophysics Data System (ADS)

    Ananda, Mohan P.; Bernstein, Harold; Feess, William A.; Kells, Ronald C.; Wortham, J. H.

    A new autonomous user (AU) system algorithm extends the AU system concept by permitting the use of a crystal frequency reference instead of an atomic reference. Results obtained using both crystal and atomic frequency references are presented. To supply interim full-system accuracy in the event of loss of the operational control segment (OCS) of GPS, an AU system requiring only user-segment modification has been implemented. During the summer of 1988 a demonstration of the system was conducted in Tromso, Norway. Results indicate that, over this 180-day test period, the autonomous user with a crystal reference could achieve navigation accuracy of the same order of magnitude as when the OCS was operating. Furthermore, other navigation systems may utilize the concepts of this autonomous user system.

  19. International Border Management Systems (IBMS) Program : visions and strategies.

    SciTech Connect

    McDaniel, Michael; Mohagheghi, Amir Hossein

    2011-02-01

    Sandia National Laboratories' (SNL) International Border Management Systems (IBMS) Program is working to establish a long-term border security strategy with United States Central Command (CENTCOM). Efforts are being made to synthesize border security capabilities and technologies maintained at the Laboratories, and to coordinate with subject matter expertise from both the New Mexico and California offices. The vision for SNL is to provide science and technology support for international projects and engagements on border security.

  20. Establishing an evoked-potential vision-tracking system

    NASA Technical Reports Server (NTRS)

    Skidmore, Trent A.

    1991-01-01

    This paper presents experimental evidence to support the feasibility of an evoked-potential vision-tracking system. The topics discussed are stimulator construction, verification of the photic driving response in the electroencephalogram, a method for performing frequency separation, and a transient-analysis example. The final issue considered is that of object multiplicity (concurrent visual stimuli with different flashing rates). The paper concludes by discussing several applications currently under investigation.

  1. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  2. The advantages of stereo vision in a face recognition system

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2014-06-01

    Humans can recognize a face with binocular vision, while computers typically use a single face image. It is known that the performance of face recognition (by a computer) can be improved using the score fusion of multimodal images and multiple algorithms. A question is: Can we apply stereo vision to a face recognition system? We know that human binocular vision has many advantages such as stereopsis (3D vision), binocular summation, and singleness of vision including fusion of binocular images (cyclopean image). For face recognition, a 3D face or 3D facial features are typically computed from a pair of stereo images. In human visual processes, binocular summation and singleness of vision are similar to image fusion processes. In this paper, we propose an advanced face recognition system with stereo imaging capability, which is comprised of two 2-in-1 multispectral (visible and thermal) cameras and three recognition algorithms (circular Gaussian filter, face pattern byte, and linear discriminant analysis [LDA]). Specifically, we present and compare stereo fusion at three levels (images, features, and scores) by using stereo images (from left camera and right camera). Image fusion is achieved with three methods (Laplacian pyramid, wavelet transform, average); feature fusion is done with three logical operations (AND, OR, XOR); and score fusion is implemented with four classifiers (LDA, k-nearest neighbor, support vector machine, binomial logical regression). The system performance is measured by probability of correct classification (PCC) rate (reported as accuracy rate in this paper) and false accept rate (FAR). The proposed approaches were validated with a multispectral stereo face dataset from 105 subjects. Experimental results show that any type of stereo fusion can improve the PCC while reducing the FAR. It seems that stereo image/feature fusion is superior to stereo score fusion in terms of recognition performance. Further score fusion after image
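
    Score-level fusion across a stereo pair can be as simple as min-max normalization followed by the sum rule. A hedged, generic sketch (a plain sum-rule fuser; the paper's score fusion uses trained classifiers such as LDA and SVM instead):

```python
def minmax_normalize(scores):
    """Rescale a list of match scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def fuse_scores(left_scores, right_scores):
    """Sum-rule fusion of per-subject match scores from the left and
    right cameras; the fused argmax is the recognized identity."""
    l = minmax_normalize(left_scores)
    r = minmax_normalize(right_scores)
    return [(a + b) / 2.0 for a, b in zip(l, r)]
```

    Image- and feature-level fusion, which the paper finds stronger, combine the data earlier in the pipeline, before matching scores are ever produced.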

  3. A VISION of Advanced Nuclear System Cost Uncertainty

    SciTech Connect

    J'Tia Taylor; David E. Shropshire; Jacob J. Jacobson

    2008-08-01

    VISION (VerifIable fuel cycle SImulatiON) is the Advanced Fuel Cycle Initiative's and Global Nuclear Energy Partnership Program's nuclear fuel cycle systems code designed to simulate the US commercial reactor fleet. The code is a dynamic stock and flow model that tracks the mass of materials at the isotopic level through the entire nuclear fuel cycle. As VISION is run, it calculates the decay of 70 isotopes including uranium, plutonium, minor actinides, and fission products. VISION.ECON is a sub-model of VISION that was developed to estimate fuel cycle and reactor costs. The sub-model uses the mass flows generated by VISION for each of the fuel cycle functions (referred to as modules) and calculates the annual cost based on cost distributions provided by the Advanced Fuel Cycle Cost Basis Report. Costs are aggregated for each fuel cycle module, and the modules are aggregated into front end, back end, recycling, reactor, and total fuel cycle costs. The software also has the capability to perform system sensitivity analysis. This capability may be used to analyze the impacts on costs due to system uncertainty effects. This paper will provide a preliminary evaluation of the cost uncertainty effects attributable to 1) key reactor and fuel cycle system parameters and 2) scheduling variations. The evaluation will focus on the uncertainty in the total cost of electricity and fuel cycle costs. First, a single light water reactor (LWR) using mixed oxide fuel is examined to ascertain the effects of simple parameter changes. Three system parameters (burnup, capacity factor, and reactor power) are varied from nominal cost values and the effect on the total cost of electricity is measured. These simple parameter changes are then measured in more complex 2-tier scenarios, including LWRs with mixed fuel and fast recycling reactors using transuranic fuel. Other system parameters are evaluated and results will be presented in the paper. Secondly, the uncertainty due to

  4. Regulation of autonomic nervous system in space and magnetic storms

    NASA Astrophysics Data System (ADS)

    Baevsky, R. M.; Petrov, V. M.; Chernikova, A. G.

    Variations in the earth's magnetic field and magnetic storms are known to be a risk factor for the development of cardiovascular disorders. The main ``targets'' for geomagnetic perturbations are the central nervous system and the neural regulation of vascular tone and heart rate variability. This paper presents data on the effect of geomagnetic fluctuations on the human body in space. Analysis of heart rate variability was used as the research method; it allows evaluation of the state of the sympathetic and parasympathetic parts of the autonomic nervous system and of the activity of the vasomotor center and subcortical neural centers. Heart rate variability data were analyzed for 30 cosmonauts on the 2nd day of space flight on the Soyuz transport spacecraft (32nd orbit). The cosmonauts were divided into three groups: without a magnetic storm (n=9), on a day with a magnetic storm (n=12), and 1-2 days after a magnetic storm (n=9). The present study was the first to demonstrate a specific impact of geomagnetic perturbations on the system of autonomic circulatory control in cosmonauts during space flight. Increased activity of the highest nervous centers was shown for the magnetic-storm groups, and was more significant 1-2 days after the storm. Discriminant analysis allowed the three groups to be classified with 88% precision. Canonical variables are suggested as criteria for evaluating specific and non-specific components of cardiovascular reactions to geomagnetic perturbations. The applied aspect of these findings should be emphasized: they show, in particular, the need to supplement the medical monitoring of cosmonauts with predictions of probable geomagnetic perturbations, so that unfavorable states can be prevented when adverse reactions to geomagnetic perturbations add to the tension experienced by regulatory systems during stressful situations (such as work in open space).
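
    Heart rate variability analysis of this kind typically starts from time-domain statistics over the RR (beat-to-beat) intervals. A minimal sketch of two standard indices (SDNN and RMSSD; the study's spectral and discriminant analyses go considerably further):

```python
def sdnn(rr_ms):
    """Standard deviation of RR intervals (ms): overall variability."""
    m = sum(rr_ms) / len(rr_ms)
    return (sum((x - m) ** 2 for x in rr_ms) / (len(rr_ms) - 1)) ** 0.5

def rmssd(rr_ms):
    """Root mean square of successive RR differences (ms): commonly
    read as an index of parasympathetic (vagal) activity."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5
```

    Sympathetic/parasympathetic balance is usually assessed from the frequency-domain counterparts of these statistics (LF/HF power ratio) over much longer recordings.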

  5. Vector Disparity Sensor with Vergence Control for Active Vision Systems

    PubMed Central

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P.; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. Control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engine are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system. PMID:22438737

  6. Advanced data management design for autonomous telerobotic systems in space using spaceborne symbolic processors

    NASA Technical Reports Server (NTRS)

    Goforth, Andre

    1987-01-01

    The use of computers in autonomous telerobots is reaching the point where advanced distributed processing concepts and techniques are needed to support the functioning of Space Station era telerobotic systems. This paper covers three major issues that have impact on the design of data management functions in a telerobot, and presents a design concept that incorporates an intelligent systems manager (ISM), running on a spaceborne symbolic processor (SSP), to address these issues. The first issue is the support of a system-wide control architecture or control philosophy. Salient features of two candidates are presented that impose constraints on data management design. The second issue is the role of data management in terms of system integration. This refers to providing shared or coordinated data processing and storage resources to a variety of telerobotic components such as vision, mechanical sensing, real-time coordinated multiple limb and end effector control, and planning and reasoning. The third issue is hardware that supports symbolic processing in conjunction with standard data I/O and numeric processing. An SSP that is currently seen to be technologically feasible and is being developed is described and used as a baseline in the design concept.

  7. Mission-based guidance system design for autonomous UAVs

    NASA Astrophysics Data System (ADS)

    Moon, Jongki

    The advantages of UAVs in the aviation arena have led to extensive research on autonomous UAV technology for achieving specific mission objectives. This thesis mainly focuses on the development of a mission-based guidance system. Among various missions expected for future needs, autonomous formation flight (AFF) and obstacle avoidance within safe operation limits are investigated. In the design of an adaptive guidance system for AFF, all leader information except position is assumed to be unknown to the follower. Thus, the only measured information related to the leader is the line-of-sight (LOS) range and angle. Adding an adaptive element with neural networks to the guidance system provides the capability to effectively handle the leader's velocity changes; the method can therefore be applied to AFF control systems that use a passive sensing method. In this thesis, an adaptive velocity command guidance system and an adaptive acceleration command guidance system are developed and presented. Since the relative degrees of the LOS range and angle differ depending on the outputs of the guidance system, the architecture of the guidance system changes accordingly. Simulations and flight tests are performed using the Georgia Tech UAV helicopter, the GTMax, to evaluate the proposed guidance systems. The simulation results show that the neural network (NN) based adaptive element can improve tracking performance by effectively compensating for the effect of unknown dynamics. It has also been shown that the combination of the adaptive velocity command guidance system with the existing GTMax autopilot controller performs better than the combination of the adaptive acceleration command guidance system with the GTMax autopilot controller. The successful flight evaluation using the adaptive velocity command guidance system clearly shows that the adaptive guidance control system is a promising solution for autonomous formation flight of UAVs. In addition, an

  8. [A Meridian Visualization System Based on Impedance and Binocular Vision].

    PubMed

    Su, Qiyan; Chen, Xin

    2015-03-01

    To ensure the meridian can be measured and displayed correctly on the human body surface, a visualization method based on impedance and binocular vision is proposed. First, an alternating constant-current source injects a current signal into the human skin surface; then, exploiting the low impedance characteristic of the meridian, a multi-channel detecting instrument measures the voltage of each electrode pair, thereby locating the channel containing the meridian, and the data are transmitted to the host computer over a serial port. Second, the intrinsic and extrinsic camera parameters are obtained by Zhang's camera calibration method, 3D information of the meridian location is obtained by corner selection and matching of the optical target, and the 3D coordinates are transformed according to the binocular vision principle. Finally, curve fitting and image fusion technology realize the meridian visualization. Test results show that the system achieves real-time detection and accurate display of the meridian. PMID:26524777

  9. Fiber optic coherent laser radar 3d vision system

    SciTech Connect

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-12-31

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
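
    In an FMCW coherent laser radar, range is recovered from the beat frequency between the transmitted chirp and the delayed return. A sketch of the standard relation (constant and function names are ours, not from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def fmcw_range(beat_hz, sweep_bw_hz, sweep_time_s):
    """Range from FMCW beat frequency: R = c * f_b * T / (2 * B),
    where B is the chirp bandwidth and T the sweep time. The factor of
    two accounts for the round trip to the target and back."""
    return C * beat_hz * sweep_time_s / (2.0 * sweep_bw_hz)
```

    For example, with a 1 GHz sweep over 1 ms, a 1 MHz beat frequency corresponds to a range of roughly 150 m; finer range resolution comes from wider sweep bandwidth.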

  10. Processor design optimization methodology for synthetic vision systems

    NASA Astrophysics Data System (ADS)

    Wren, Bill; Tarleton, Norman G.; Symosek, Peter F.

    1997-06-01

    Architecture optimization requires numerous inputs, from hardware to software specifications. The task of varying these input parameters to obtain a system architecture that is optimal with regard to cost, specified performance, and method of upgrade considerably increases development cost, owing to the infinitude of events, most of which cannot even be defined by any simple enumeration or set of inequalities. We address the use of a PC-based tool employing genetic algorithms to optimize the architecture for an avionics synthetic vision system, specifically a passive millimeter wave system implementation.
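
    A genetic algorithm over an encoded architecture can be sketched generically: candidate designs become bit-strings, and a fitness function scores each design against cost and performance targets. A toy, hedged example (one-max stands in for a real architecture fitness function; parameters are illustrative):

```python
import random

def genetic_search(fitness, n_bits=12, pop=30, gens=60, pm=0.02, seed=1):
    """Minimal GA: tournament selection, one-point crossover,
    per-gene bit-flip mutation; returns the fittest final individual."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop)]

    def pick():
        a, b = rng.sample(population, 2)  # binary tournament
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < pm else g for g in child]
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)
```

    In an architecture-optimization setting, each bit field would decode to a design choice (processor type, memory size, bus width), and the fitness would be a simulated cost/performance score.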

  11. Autonomous satellite navigation methods using the Global Positioning Satellite System

    NASA Technical Reports Server (NTRS)

    Murata, M.; Tapley, B. D.; Schutz, B. E.

    1982-01-01

    This investigation considers the problem of autonomous satellite navigation using the NAVSTAR Global Positioning System (GPS). The major topics covered include the design, implementation, and validation of onboard navigation filter algorithms by means of computer simulations. The primary errors that the navigation filter design must minimize are computational effects and modeling inaccuracies due to the limited capability of the onboard computer. The minimization of the effect of these errors is attained by applying the sequential extended Kalman filter using a factored covariance implementation with Q-matrix or dynamical model compensation. Performance evaluation of the navigation filter design is carried out using both the CDC Cyber 170/750 computer and the PDP-11/60 computer. The results are obtained assuming the Phase I GPS constellation, consisting of six satellites, and a Landsat-D type spacecraft as the model for the user satellite orbit.
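
    The sequential extended Kalman filter with Q-matrix compensation reduces, in the scalar case, to a familiar predict/update cycle in which the process-noise term Q inflates the predicted covariance to absorb model error. A hedged sketch (scalar only; the actual flight filter is multivariate with a factored covariance implementation):

```python
def kalman_step(x, P, z, Q, R, f=lambda x: x, F=1.0, h=lambda x: x, H=1.0):
    """One predict/update cycle of a scalar extended Kalman filter.
    f/h are the (possibly nonlinear) dynamics and measurement models;
    F/H are their linearizations; Q compensates dynamical model error."""
    # predict
    x_pred = f(x)
    P_pred = F * P * F + Q
    # update with measurement z of variance R
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

    Fed a sequence of GPS pseudorange-like measurements, the estimate converges toward the true state while the covariance settles to a steady-state value set by the Q/R balance.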

  12. The Montana ALE (Autonomous Lunar Excavator) Systems Engineering Report

    NASA Technical Reports Server (NTRS)

    Hull, Bethanne J.

    2012-01-01

    On May 21-26, 2012, the third annual NASA Lunabotics Mining Competition will be held at the Kennedy Space Center in Florida. This event brings together student teams from universities around the world to compete in an engineering challenge. Each team must design, build and operate a robotic excavator that can collect artificial lunar soil and deposit it at a target location. Montana State University, Bozeman, is one of the institutions selected to field a team this year. This paper summarizes the goals of MSU's lunar excavator project, known as the Autonomous Lunar Excavator (ALE), along with the engineering process that the MSU team is using to fulfill these goals, according to NASA's systems engineering guidelines.

  13. Challenges in verification and validation of autonomous systems for space exploration

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Jonsson, Ari

    2005-01-01

    Space exploration applications offer a unique opportunity for the development and deployment of autonomous systems, due to limited communications, large distances, and the great expense of direct operation. At the same time, the risk and cost of space missions lead to reluctance to take on new, complex, and difficult-to-understand technology. A key issue in addressing these concerns is the validation of autonomous systems. In recent years, higher-level autonomous systems have been applied in space applications. In this presentation, we highlight those autonomous systems and discuss issues in validating them. We then look at future demands on validating autonomous systems for space, and identify promising technologies and open issues.

  14. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    SciTech Connect

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-12-31

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as 'acceptable' or 'suspect'. Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  15. ANTS: Exploring the Solar System with an Autonomous Nanotechnology Swarm

    NASA Technical Reports Server (NTRS)

    Clark, P. E.; Curtis, S.; Rilee, M.; Truszkowski, W.; Marr, G.

    2002-01-01

    ANTS (Autonomous Nano-Technology Swarm), a NASA advanced mission concept, calls for a large (1000 member) swarm of pico-class (1 kg) totally autonomous spacecraft to prospect the asteroid belt. Additional information is contained in the original extended abstract.

  16. Machine vision system for measuring conifer seedling morphology

    NASA Astrophysics Data System (ADS)

    Rigney, Michael P.; Kranzler, Glenn A.

    1995-01-01

    A PC-based machine vision system providing rapid measurement of bare-root tree seedling morphological features has been designed. The system uses backlighting and a 2048-pixel line-scan camera to acquire images with transverse resolutions as high as 0.05 mm for precise measurement of stem diameter. Individual seedlings are manually loaded on a conveyor belt and inspected by the vision system in less than 0.25 seconds. Designed for quality control and morphological data acquisition by nursery personnel, the system provides a user-friendly, menu-driven graphical interface. The system automatically locates the seedling root collar and measures stem diameter, shoot height, sturdiness ratio, root mass length, projected shoot and root area, shoot-root area ratio, and percent fine roots. Sample statistics are computed for each measured feature. Measurements for each seedling may be stored for later analysis. Feature measurements may be compared with multi-class quality criteria to determine sample quality or to perform multi-class sorting. Statistical summary and classification reports may be printed to facilitate the communication of quality concerns with grading personnel. Tests were conducted at a commercial forest nursery to evaluate measurement precision. Four quality control personnel measured root collar diameter, stem height, and root mass length on each of 200 conifer seedlings. The same seedlings were inspected four times by the machine vision system. Machine stem diameter measurement precision was four times greater than that of manual measurements. Machine and manual measurements had comparable precision for shoot height and root mass length.
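
    Measurements such as stem diameter from a backlit line-scan image reduce to the per-row width of the object silhouette. A minimal sketch (binary image as nested lists; the 0.05 mm/pixel factor follows the resolution stated in the abstract, and the function name is ours):

```python
def row_widths(binary, px_mm=0.05):
    """Silhouette width on each scan line of a backlit binary image,
    converted to millimetres. The stem diameter is the width measured
    at the row where the root collar was located."""
    widths = []
    for row in binary:
        cols = [i for i, v in enumerate(row) if v]
        widths.append((cols[-1] - cols[0] + 1) * px_mm if cols else 0.0)
    return widths
```

    Derived features such as shoot height or projected area follow from counting occupied rows and pixels in the same silhouette.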

  17. Users' subjective evaluation of electronic vision enhancement systems.

    PubMed

    Culham, Louise E; Chabra, Anthony; Rubin, Gary S

    2009-03-01

    The aims of this study were (1) to elicit users' responses to four electronic head-mounted devices (Jordy, Flipperport, Maxport and NuVision) and (2) to correlate users' opinions with performance. Ten patients with early onset macular disease (EOMD) and 10 with age-related macular disease (AMD) used these electronic vision enhancement systems (EVESs) for a variety of visual tasks. A questionnaire designed in-house and a modified VF-14 were used to evaluate the responses. Following initial experience of the devices in the laboratory, every patient took home two of the four devices for 1 week each. Responses were re-evaluated after this period of home loan. No single EVES stood out as the strong preference for all aspects evaluated. In the laboratory-based appraisal, Flipperport typically received the best overall ratings and highest score for image quality and ability to magnify, but after home loan there was no significant difference between devices. Comfort of device, although important, was not predictive of rating once magnification had been taken into account. For actual performance, a threshold effect was seen whereby ratings increased as reading speed improved up to 60 words per minute. Newly diagnosed patients responded most positively to EVESs, but otherwise users' opinions could not be predicted by age, gender, diagnosis or previous CCTV experience. User feedback is essential in our quest to understand the benefits and shortcomings of EVESs. Such information should help guide both prescribing and future development of low vision devices. PMID:19236583

  18. Development of a machine vision system for automotive part inspection

    NASA Astrophysics Data System (ADS)

    Andres, Nelson S.; Marimuthu, Ram P.; Eom, Yong-Kyun; Jang, Bong-Choon

    2005-12-01

    As an alternative to human inspection, this study presents the development of a machine vision inspection system (MVIS) purpose-built for car seat frames. The proposed MVIS was designed to meet the demands, features and specifications of car seat frame manufacturing companies striving for increased throughput of better quality. This computer-based MVIS performs quality measurement by detecting holes, nuts and welding spots on every car seat frame in real time, ensuring these portions are intact, precise and in their proper place. In this study, the NI Vision Builder for Automated Inspection software was used to configure the intended quality measurements. The software provides measurement techniques such as edge detection and pattern matching, which identify the boundaries or edges of an object and analyze the pixel values along a profile to detect significant intensity changes. Either of these techniques is capable of gauging sizes, detecting missing portions and checking alignment of parts. The techniques for visual inspection were optimized through qualitative analysis and simulation of human tolerance in inspecting car seat frames. Furthermore, this study exemplified the incorporation of the optimized vision inspection environment into the pre-inspection and post-inspection subsystems. With this optimization, the human role in the proposed MVIS is reduced to feeding and sorting.
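The edge-detection technique described above scans pixel values along a profile and flags significant intensity changes. An illustrative sketch of that idea (this is a conceptual one-dimensional version, not the NI Vision implementation):

```python
import numpy as np

def detect_edges(profile, threshold=50):
    """Locate edges along a 1-D pixel profile by flagging
    intensity jumps whose magnitude exceeds `threshold`."""
    diffs = np.diff(profile.astype(int))
    return np.flatnonzero(np.abs(diffs) >= threshold) + 1

# A profile crossing a dark hole in a bright seat frame:
profile = np.array([200, 200, 40, 40, 40, 210, 205], dtype=np.uint8)
edges = detect_edges(profile)           # indices where intensity jumps
hole_width_px = edges[1] - edges[0]     # gauge the hole size in pixels
```

The paired edge indices are what lets the system gauge sizes (here, a 3-pixel-wide hole) and detect missing features (no edge pair where a hole should be).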

  19. Airborne multisensor system for the autonomous detection of land mines

    NASA Astrophysics Data System (ADS)

    Scheerer, Klaus

    1997-07-01

    A concept of a modular multisensor system for use on an airborne platform is presented. The sensor system comprises two high-resolution IR sensors working in the mid and far IR spectral regions, an RGB video camera with its sensitivity extended to the near IR in connection with a laser illuminator, and a radar with a spatial resolution adapted to the expected mine sizes. The sensor concept emerged from the evaluation of comprehensive static and airborne measurements on numerous buried and unburied mines. The measurements were performed on single mines and on minefields laid out according to military requirements. The system has an on-board real-time image processing capability and is intended to operate autonomously with a data link to a mobile ground station. Data from a navigation unit serve to transform the locations of identified mines into a geodetic coordinate system. The system will be integrated into a cylindrical structure of about 40 cm diameter. This may be a drone or simply a tube which can be mounted on any carrier. The realization of a simplified demonstrator for captive flight tests is planned by 1998.

  20. The autonomy of the visual systems and the modularity of conscious vision.

    PubMed Central

    Zeki, S; Bartels, A

    1998-01-01

    Anatomical and physiological evidence shows that the primate visual brain consists of many distributed processing systems, acting in parallel. Psychophysical studies show that the activity in each of the parallel systems reaches its perceptual end-point at a different time, thus leading to a perceptual asynchrony in vision. This, together with clinical and human imaging evidence, suggests strongly that the processing systems are also perceptual systems and that the different processing-perceptual systems can act more or less autonomously. Moreover, activity in each can have a conscious correlate without necessarily involving activity in other visual systems. This leads us to conclude not only that visual consciousness is itself modular, reflecting the basic modular organization of the visual brain, but that the binding of cellular activity in the processing-perceptual systems is more properly thought of as a binding of the consciousnesses generated by each of them. It is this binding that gives us our integrated image of the visual world. PMID:9854263

  1. Low Cost Vision Based Personal Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Amami, M. M.; Smith, M. J.; Kokkas, N.

    2014-03-01

    Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited, due to the high cost and the dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, using low-cost GNSS and inertial sensors to provide the bundle adjustment solution with initial values. The system has the potential to be used both indoors and outdoors. It has been tested indoors and outdoors with varying GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates with better than 10 cm accuracy.

  2. R-MASTIF: robotic mobile autonomous system for threat interrogation and object fetch

    NASA Astrophysics Data System (ADS)

    Das, Aveek; Thakur, Dinesh; Keller, James; Kuthirummal, Sujit; Kira, Zsolt; Pivtoraiko, Mihail

    2013-01-01

    Autonomous robotic "fetch" operation, where a robot is shown a novel object and then asked to locate it in the field, retrieve it and bring it back to the human operator, is a challenging problem that is of interest to the military. The CANINE competition presented a forum for several research teams to tackle this challenge using the state of the art in robotics technology. The SRI-UPenn team fielded a modified Segway RMP 200 robot with multiple cameras and lidars. We implemented a unique computer vision based approach for textureless colored object training and detection to robustly locate previously unseen objects out to 15 meters on moderately flat terrain. We integrated SRI's state of the art Visual Odometry for GPS-denied localization on our robot platform. We also designed a unique scooping mechanism which allowed retrieval of up to basketball sized objects with a reciprocating four-bar linkage mechanism. Further, all software, including a novel target localization and exploration algorithm, was developed using ROS (Robot Operating System), which is open source and well adopted by the robotics community. We present a description of the system, our key technical contributions and experimental results.

  3. Laser rangefinders for autonomous intelligent cruise control systems

    NASA Astrophysics Data System (ADS)

    Journet, Bernard A.; Bazin, Gaelle

    1998-01-01

    The purpose of this paper is to show for what kinds of application laser rangefinders can be used inside Autonomous Intelligent Cruise Control systems. Even if laser systems present good performance, the safety and technical constraints are very restrictive. As the system is used outdoors, the emitted average output power must remain within the rather low Class 1A limit. Obstacle detection and collision avoidance require a 200-meter range. Moreover, bad weather conditions, like rain or fog, are disastrous. We have conducted measurements on laser rangefinders using different targets at different distances. We can infer that, except for cooperative targets, low-power laser rangefinders are not powerful enough for long-distance measurement. Radars, like 77 GHz systems, are better adapted to such cases. For short-range measurement, however (around 10 meters, with a minimum distance around twenty centimeters), laser rangefinders are really useful, with good resolution and rather low cost. Applications include following white lines on the road (an easily cooperative target) and detecting vehicles in the vicinity for car convoy traffic control or parking assistance, where the target surface matters little at short distances.
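For a pulsed laser rangefinder of the kind discussed, range follows from the round-trip time of the pulse. A minimal sketch of the range equation (the 200 m requirement above corresponds to a round trip of roughly 1.3 microseconds):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s):
    """Pulsed time-of-flight range: the pulse travels to the
    target and back, so distance is c * t / 2."""
    return C * round_trip_s / 2.0

# A target near 200 m returns the pulse after ~1.33 microseconds:
d = tof_distance(1.334e-6)
```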

  4. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate estimation of teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g., development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information, from which the 3D teat positions are computed. This information is then sent to the milking robot for teat cup positioning. The vision system runs in real time and maintains optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system, with the best performance obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  5. Configuration assistant for versatile vision-based inspection systems

    NASA Astrophysics Data System (ADS)

    Huesser, Olivier; Hugli, Heinz

    2000-03-01

    Nowadays, vision-based inspection systems are present at many stages of the industrial manufacturing process. Their versatility, which permits them to accommodate a broad range of inspection requirements, is however limited by the time-consuming system setup performed at each production change. This work aims at providing a configuration assistant that helps speed up this system setup, considering the peculiarities of industrial vision systems. The pursued principle, which is to maximize the discriminating power of the features involved in the inspection decision, leads to an optimization problem based on a high-dimensional objective function. Several objective functions based on various metrics are proposed, their optimization being performed with the help of various search heuristics such as genetic methods and simulated annealing. The experimental results obtained with an industrial inspection system are presented, considering the particular case of the visual inspection of markings found on top of molded integrated circuits. These results show the effectiveness of the presented objective functions and search methods, and validate the configuration assistant as well.
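The simulated-annealing heuristic named above can be sketched for a toy version of the problem: selecting the subset of inspection features that maximizes a discriminating-power objective. The objective, move set and cooling schedule here are illustrative assumptions, not the paper's metrics:

```python
import random

def discrim_power(w, good, bad):
    """Toy objective (a stand-in for the paper's metrics): mean
    class separation over the currently selected features."""
    selected = sum(w)
    if selected == 0:
        return 0.0
    return sum(wi * abs(g - b) for wi, g, b in zip(w, good, bad)) / selected

def anneal_features(good, bad, steps=200, temp=1.0, cooling=0.97, seed=0):
    """Simulated-annealing search over binary feature selections,
    with a simplified acceptance rule (accept worse moves with
    probability `temp`, which decays each step)."""
    rng = random.Random(seed)
    w = [1] * len(good)
    cur = best = discrim_power(w, good, bad)
    best_w = w[:]
    for _ in range(steps):
        i = rng.randrange(len(w))
        w[i] ^= 1                          # flip one feature on/off
        new = discrim_power(w, good, bad)
        if new >= cur or rng.random() < temp:
            cur = new                      # accept the move
        else:
            w[i] ^= 1                      # reject: undo the flip
        if cur > best:
            best, best_w = cur, w[:]
        temp *= cooling
    return best_w, best

# Mean responses of three features on good vs. defective markings
# (numbers are made up); feature 0 discriminates best.
w, score = anneal_features([0.9, 0.2, 0.8], [0.1, 0.25, 0.7])
```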

  6. Research on machine vision system of monitoring injection molding processing

    NASA Astrophysics Data System (ADS)

    Bai, Fan; Zheng, Huifeng; Wang, Yuebing; Wang, Cheng; Liao, Si'an

    2016-01-01

    With the wide adoption of the injection molding process, an embedded monitoring system based on machine vision has been developed to automatically monitor abnormalities in injection molding processing. First, the hardware system and embedded software system were designed. Then camera calibration was carried out to establish an accurate model of the camera and correct distortion. Next, a segmentation algorithm was applied to extract the monitored objects of the injection molding process. The system procedure comprises initialization, process monitoring and product detail detection. Finally, the experimental results were analyzed, including the detection rates for various kinds of abnormality. The system can realize multi-zone monitoring and product detail detection of the injection molding process with high accuracy and good stability.

  7. Autonomous perception and decision making in cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Sarkar, Soumik

    2011-07-01

    The cyber-physical system (CPS) is a relatively new interdisciplinary technology area that includes the general class of embedded and hybrid systems. CPSs require integration of computation and physical processes, involving physical quantities such as time, energy and space during information processing and control. The physical space is the source of information, and the cyber space makes use of the generated information to make decisions. This dissertation proposes an overall architecture for autonomous perception-based decision and control of complex cyber-physical systems. Perception involves the recently developed framework of Symbolic Dynamic Filtering for abstraction of the physical world in the cyber space. For example, under this framework, sensor observations from a physical entity are discretized temporally and spatially to generate blocks of symbols, also called words, that form a language. A grammar of a language is the set of rules that determine the relationships among words to build sentences. Subsequently, a physical system is conjectured to be a linguistic source that is capable of generating a specific language. The proposed technology is validated on various (experimental and simulated) case studies that include health monitoring of aircraft gas turbine engines, detection and estimation of fatigue damage in polycrystalline alloys, and parameter identification. Control of complex cyber-physical systems involves distributed sensing, computation and control, as well as complexity analysis. A novel statistical-mechanics-inspired complexity analysis approach is proposed in this dissertation. In such a scenario of networked physical systems, the distribution of physical entities determines the underlying network topology, and the interaction among the entities forms the abstract cyber space. It is envisioned that the general contributions made in this dissertation will be useful for potential application areas such as smart power grids and
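The symbolization step described above (discretize a sensor signal into symbols, then count words) can be sketched as follows. The partition boundaries and word depth are illustrative choices, not the dissertation's specific parameters:

```python
from collections import Counter

def symbolize(signal, boundaries):
    """Coarse-grain a time series into a symbol string by
    partitioning the signal range, the spatial-discretization
    step of Symbolic Dynamic Filtering (partition chosen here
    for illustration)."""
    symbols = []
    for x in signal:
        s = sum(x > b for b in boundaries)   # which cell x falls in
        symbols.append(str(s))
    return "".join(symbols)

def word_counts(symbol_string, depth=2):
    """Slide a window of length `depth` to count words, an
    empirical estimate of the symbol-sequence statistics that
    characterize the 'language' of the physical source."""
    return Counter(symbol_string[i:i + depth]
                   for i in range(len(symbol_string) - depth + 1))

seq = symbolize([0.1, 0.6, 0.9, 0.2, 0.7], boundaries=[0.5])
counts = word_counts(seq)
```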

  8. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. 
To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem

  9. Lagrangian-Hamiltonian unified formalism for autonomous higher order dynamical systems

    NASA Astrophysics Data System (ADS)

    Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso

    2011-09-01

    The Lagrangian-Hamiltonian unified formalism of Skinner and Rusk was originally stated for autonomous dynamical systems in classical mechanics. It has been generalized for non-autonomous first-order mechanical systems, as well as for first-order and higher order field theories. However, a complete generalization to higher order mechanical systems is yet to be described. In this work, after reviewing the natural geometrical setting and the Lagrangian and Hamiltonian formalisms for higher order autonomous mechanical systems, we develop a complete generalization of the Lagrangian-Hamiltonian unified formalism for these kinds of systems, and we use it to analyze some physical models from this new point of view.
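For orientation, in the Lagrangian picture the dynamics that such a unified formalism must recover for a kth-order autonomous Lagrangian L(q, q̇, ..., q⁽ᵏ⁾) are the higher-order Euler-Lagrange equations (a standard result, stated here as background rather than taken from the paper):

```latex
% Higher-order Euler-Lagrange equations for L(q, \dot{q}, \ldots, q^{(k)}):
\sum_{i=0}^{k} (-1)^{i} \frac{d^{i}}{dt^{i}}
  \left( \frac{\partial L}{\partial q^{(i)}} \right) = 0
```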

  10. 75 FR 71183 - Twelfth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation Administration... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of a meeting of Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight...

  11. Vision-Based People Detection System for Heavy Machine Applications

    PubMed Central

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-01

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838
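The strong radial distortion discussed above can be quantified by comparing a fisheye projection with the perspective model. A sketch assuming the common equidistant fisheye model (the paper does not state which model its camera follows; the focal length is illustrative):

```python
import math

def pinhole_radius(f, theta):
    """Radial image distance of a ray at angle theta (radians)
    under the perspective (pinhole) model: r = f * tan(theta)."""
    return f * math.tan(theta)

def fisheye_radius(f, theta):
    """Same ray under the equidistant fisheye model often used
    for wide-FOV lenses: r = f * theta."""
    return f * theta

# At 60 degrees off-axis the fisheye compresses the image strongly,
# which is what deforms the appearance of people near the edges:
f = 400.0  # focal length in pixels (illustrative)
theta = math.radians(60)
ratio = fisheye_radius(f, theta) / pinhole_radius(f, theta)
```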

  13. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    SciTech Connect

    Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind

    2016-01-01

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing to generate test cases that expose such errors. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
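The style of robustness test described above can be sketched with a stand-in detector. Everything here is an assumption for illustration: the detector is a hypothetical threshold rule (not the OpenCV HOG detector the paper analyzes), and the search is random perturbation rather than their symbolic decision procedure:

```python
import numpy as np

def detector(frame):
    """Stand-in detector (hypothetical): report a 'human' when
    the mean brightness of the centre patch exceeds a cutoff."""
    return frame[8:24, 8:24].mean() > 100.0

def robustness_counterexample(frame, eps=1.0, trials=100, seed=0):
    """Search for a perturbation bounded by eps (invisible to the
    eye) that flips the detector's decision on `frame`."""
    rng = np.random.default_rng(seed)
    base = detector(frame)
    for _ in range(trials):
        noise = rng.uniform(-eps, eps, size=frame.shape)
        if detector(np.clip(frame + noise, 0, 255)) != base:
            return True       # found a frame that flips the output
    return False

# A frame sitting at the decision boundary is easily flipped by
# perturbations far too small to see:
frame = np.full((32, 32), 100.0)
fragile = robustness_counterexample(frame)
```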

  14. An Approach to Autonomous Control for Space Nuclear Power Systems

    SciTech Connect

    Wood, Richard Thomas; Upadhyaya, Belle R.

    2011-01-01

    Under Project Prometheus, the National Aeronautics and Space Administration (NASA) investigated deep space missions that would utilize space nuclear power systems (SNPSs) to provide energy for propulsion and spacecraft power. The initial study involved the Jupiter Icy Moons Orbiter (JIMO), which was proposed to conduct in-depth studies of three Jovian moons. Current radioisotope thermoelectric generator (RTG) and solar power systems cannot meet expected mission power demands, which include propulsion, scientific instrument packages, and communications. Historically, RTGs have provided long-lived, highly reliable, low-power-level systems. Solar power systems can provide much greater levels of power, but power density levels decrease dramatically at approximately 1.5 astronomical units (AU) and beyond. Alternatively, an SNPS can supply high sustained power for space applications that is both reliable and mass efficient. Terrestrial nuclear reactors employ varying degrees of human control and decision-making for operations and benefit from periodic human interaction for maintenance. In contrast, the control system of an SNPS must be able to provide continuous operation for the mission duration with limited immediate human interaction and no opportunity for hardware maintenance or sensor calibration. In effect, the SNPS control system must be able to independently operate the power plant while maintaining power production even when subject to off-normal events and component failure. This capability is critical because it will not be possible to rely upon continuous, immediate human interaction for control due to communications delays and periods of planetary occlusion. In addition, uncertainties, rare events, and component degradation combine with the aforementioned inaccessibility and unattended operation to pose unique challenges that an SNPS control system must accommodate. 
Autonomous control is needed to address these challenges and optimize the reactor control design.

  15. DualTrust: A Distributed Trust Model for Swarm-Based Autonomic Computing Systems

    SciTech Connect

    Maiden, Wendy M.; Dionysiou, Ioanna; Frincke, Deborah A.; Fink, Glenn A.; Bakken, David E.

    2011-02-01

    For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, trust management is important for the acceptance of the mobile agent sensors and to protect the system from malicious behavior by insiders and entities that have penetrated network defenses. This paper examines the trust relationships, evidence, and decisions in a representative system and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. We then propose the DualTrust conceptual trust model. By addressing the autonomic manager’s bi-directional primary relationships in the ACS architecture, DualTrust is able to monitor the trustworthiness of the autonomic managers, protect the sensor swarm in a scalable manner, and provide global trust awareness for the orchestrating autonomic manager.

  16. Triangulation-Based Camera Calibration For Machine Vision Systems

    NASA Astrophysics Data System (ADS)

    Bachnak, Rafic A.; Celenk, Mehmet

    1990-04-01

    This paper describes a camera calibration procedure for stereo-based machine vision systems. The method is based on geometric triangulation using only a single image of three distinctive points. Both the intrinsic and extrinsic parameters of the system are determined. The procedure is performed only once at the initial set-up using a simple camera model. The effective focal length is extended in such a way that a linear transformation exists between the camera image plane and the output digital image. Only three world points are needed to find the extended focal length and the transformation matrix elements that relate the camera position and orientation to a real-world coordinate system. The parameters of the system are computed by solving a set of linear equations. Experimental results show that the method, when used in a stereo system developed in this research, produces reasonably accurate 3-D measurements.
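The "three points, then solve linear equations" idea can be illustrated on a far simpler problem: recovering a 2-D affine transform between world and image coordinates from three correspondences. This is an analogy to the paper's procedure, not the paper's actual camera model:

```python
import numpy as np

def solve_affine(world_pts, image_pts):
    """Recover the 2-D affine map u = a*X + b*Y + tx,
    v = c*X + d*Y + ty from three point correspondences by
    solving a 6x6 linear system (points must be non-collinear)."""
    A, b = [], []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, 1, 0, 0, 0]); b.append(u)
        A.append([0, 0, 0, X, Y, 1]); b.append(v)
    p = np.linalg.solve(np.array(A, float), np.array(b, float))
    return p.reshape(2, 3)   # [[a, b, tx], [c, d, ty]]

# Three world points imaged under a scale of 2 plus a (5, 7) shift:
world = [(0, 0), (1, 0), (0, 1)]
image = [(5, 7), (7, 7), (5, 9)]
M = solve_affine(world, image)
```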

  17. Beam Splitter For Welding-Torch Vision System

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.

    1991-01-01

    Compact welding torch equipped with along-the-torch vision system includes cubic beam splitter to direct preview light on weldment and to reflect light coming from welding scene for imaging. Beam splitter integral with torch; requires no external mounting brackets. Rugged and withstands vibrations and wide range of temperatures. Commercially available, reasonably priced, comes in variety of sizes and optical qualities with antireflection and interference-filter coatings on desired faces. Can provide 50 percent transmission and 50 percent reflection of incident light to exhibit minimal ghosting of image.

  18. Scratch measurement system using machine vision: part II

    NASA Astrophysics Data System (ADS)

    Sarr, Dennis P.

    1992-03-01

    Aircraft skins and windows must not have scratches, which are unacceptable for cosmetic and structural reasons. Manual methods are inadequate for giving accurate readings and do not provide a hardcopy report. A prototype scratch measurement system (SMS) using computer vision and image analysis has been developed. This paper discusses the prototype description, novel ideas, improvements, repeatability, reproducibility, accuracy, and the calibration method. Boeing's Calibration Certification Laboratory has given the prototype a qualified certification. The SMS is portable for use in factories or aircraft hangars anywhere in the world.

  19. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.

  20. Lost among the trees? The autonomic nervous system and paediatrics.

    PubMed

    Rees, Corinne A

    2014-06-01

    The autonomic nervous system (ANS) has been strikingly neglected in Western medicine. Despite its profound importance for regulation, adjustment and coordination of body systems, it lacks priority in training and practice and receives scant attention in numerous major textbooks. The ANS is integral to manifestations of illness, underlying familiar physical and psychological symptoms. When ANS activity is itself dysfunctional, usual indicators of acute illness may prove deceptive. Recognising the relevance of the ANS can involve seeing the familiar through fresh eyes, challenging assumptions in clinical assessment and in approaches to practice. Its importance extends from physical and psychological well-being to parenting and safeguarding, public services and the functioning of society. Exploration of its role in conditions ranging from neurological, gastrointestinal and connective tissue disorders, diabetes and chronic fatigue syndrome, to autism, behavioural and mental health difficulties may open therapeutic avenues. The ANS offers a mechanism for so-called functional illnesses and illustrates the importance of recognising that 'stress' takes many forms, physical, psychological and environmental, desirable and otherwise. Evidence of intrauterine and post-natal programming of ANS reactivity suggests that neonatal care and safeguarding practice may offer preventive opportunity, as may greater understanding of epigenetic change of ANS activity through, for example, accidental or psychological trauma or infection. The aim of this article is to accelerate recognition of the importance of the ANS throughout paediatrics, and of the potential physical and psychological cost of neglecting it. PMID:24573884

  1. Autonomous mine detection system (AMDS) neutralization payload module

    NASA Astrophysics Data System (ADS)

    Majerus, M.; Vanaman, R.; Wright, N.

    2010-04-01

    The Autonomous Mine Detection System (AMDS) program is developing a landmine and explosive hazards standoff detection, marking, and neutralization system for dismounted soldiers. The AMDS Capabilities Development Document (CDD) has identified the requirement to deploy three payload modules for small robotic platforms: mine detection and marking, explosives detection and marking, and neutralization. This paper addresses the neutralization payload module. There are a number of challenges that must be overcome for the neutralization payload module to be successfully integrated into AMDS. The neutralizer must meet stringent size, weight, and power (SWaP) requirements to be compatible with a small robot. The neutralizer must be effective against a broad threat, including metal and plastic-cased Anti-Personnel (AP) and Anti-Tank (AT) landmines, explosive devices, and Unexploded Explosive Ordnance (UXO). It must adapt to a variety of threat concealments, overburdens, and emplacement methods, including soil, gravel, asphalt, and concrete. A unique neutralization technology is being investigated for adaptation to the AMDS Neutralization Module. This paper reviews this technology and how the other two payload modules influence its design to minimize SWaP. Recent modeling and experimental efforts are included.

  2. Systems and methods for autonomously controlling agricultural machinery

    DOEpatents

    Hoskinson, Reed L.; Bingham, Dennis N.; Svoboda, John M.; Hess, J. Richard

    2003-07-08

    Systems and methods for autonomously controlling agricultural machinery such as a grain combine. The operation components of a combine that function to harvest the grain have characteristics that are measured by sensors. For example, the combine speed, the fan speed, and the like can be measured. An important sensor is the grain loss sensor, which may be used to quantify the amount of grain expelled out of the combine. The grain loss sensor utilizes the fluorescence properties of the grain kernels and the plant residue to identify when the expelled plant material contains grain kernels. The sensor data, in combination with historical and current data stored in a database, is used to identify optimum operating conditions that will result in increased crop yield. After the optimum operating conditions are identified, an on-board computer can generate control signals that will adjust the operation of the components identified in the optimum operating conditions. The changes result in less grain loss and improved grain yield. Also, because new data is continually generated by the sensor, the system has the ability to continually learn such that the efficiency of the agricultural machinery is continually improved.
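
    The learn-and-adjust loop described above can be sketched as follows; this is an illustrative toy, not the patented system, and all sensor names and values are hypothetical:

```python
# Illustrative feedback loop (not the patented system): pick the
# historical operating point with the lowest recorded grain loss and
# nudge the current settings toward it. All names and values are
# hypothetical.

def recommend_settings(history):
    """history: records with 'ground_speed', 'fan_speed', 'grain_loss'."""
    best = min(history, key=lambda rec: rec["grain_loss"])
    return {k: best[k] for k in ("ground_speed", "fan_speed")}

def control_step(current, history, gain=0.5):
    """Move current settings a fraction of the way toward the best ones."""
    target = recommend_settings(history)
    return {k: current[k] + gain * (target[k] - current[k]) for k in target}

history = [
    {"ground_speed": 6.0, "fan_speed": 900.0, "grain_loss": 2.1},
    {"ground_speed": 5.0, "fan_speed": 1000.0, "grain_loss": 1.2},
]
adjusted = control_step({"ground_speed": 6.0, "fan_speed": 900.0}, history)
```

    As new sensor records accumulate in the history, the recommended target shifts, which is the "continual learning" behavior the abstract describes.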

  3. Central Command Architecture for High Order Autonomous Unmanned Systems

    NASA Astrophysics Data System (ADS)

    Bieber, Chad Michael

    This dissertation describes a High-Order Central Command (HOCC) architecture and presents a flight demonstration in which a single user coordinates 4 unmanned fixed-wing aircraft. HOCC decouples the user from control of individual vehicles, eliminating human limits on the size of the system, and uses a non-iterative sequence of algorithms that permits easy estimation of how computational complexity scales. The Hungarian algorithm used to solve a min-sum assignment with a one-task planning horizon becomes the limiting complexity, scaling at O(x³), where x is the larger of the number of vehicles and the number of tasks in the assignment. This method is shown to have the unique property of creating non-intersecting routes, which is used to drastically reduce the computational cost of deconflicting planned routes. Results from several demonstration flights are presented in which a single user commands a system of 4 fixed-wing aircraft. The results confirm that autonomous flight of a large number of UAVs is a bona fide engineering sub-discipline, expected to be of interest to engineers in the aviation industry and other emerging markets.
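
    The min-sum assignment step described above can be sketched with an off-the-shelf Hungarian-algorithm solver; the cost matrix below is a hypothetical vehicle-to-task distance table, not data from the dissertation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Min-sum vehicle-to-task assignment via the Hungarian algorithm.
# The cost matrix (say, flight distance from vehicle i to task j) is
# hypothetical; SciPy's solver scales as O(n^3), the complexity cited.
cost = np.array([
    [4.0, 1.0, 3.0],   # vehicle 0's cost to tasks 0, 1, 2
    [2.0, 0.0, 5.0],   # vehicle 1
    [3.0, 2.0, 2.0],   # vehicle 2
])
vehicles, tasks = linear_sum_assignment(cost)
total_cost = cost[vehicles, tasks].sum()
```

    Here the optimal assignment sends vehicle 0 to task 1, vehicle 1 to task 0, and vehicle 2 to task 2, for a total cost of 5.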

  4. Range Safety for an Autonomous Flight Safety System

    NASA Technical Reports Server (NTRS)

    Lanzi, Raymond J.; Simpson, James C.

    2010-01-01

    The Range Safety Algorithm software encapsulates the various constructs and algorithms required to accomplish Time Space Position Information (TSPI) data management from multiple tracking sources, autonomous mission mode detection and management, and flight-termination mission rule evaluation. The software evaluates various user-configurable rule sets that govern the qualification of TSPI data sources, provides a prelaunch autonomous hold-launch function, performs the flight-monitoring-and-termination functions, and performs end-of-mission safing.

  5. Robotic 3D vision solder joint verification system evaluation

    SciTech Connect

    Trent, M.A.

    1992-02-01

    A comparative performance evaluation was conducted between a proprietary inspection system using intelligent 3D vision and manual visual inspection of solder joints. The purpose was to assess the compatibility and correlation of the automated system with current visual inspection criteria. The results indicated that the automated system was more accurate (> 90%) than visual inspection (60-70%) in locating and/or categorizing solder joint defects. In addition, the automated system can offer significant capabilities to characterize and monitor a soldering process by measuring physical attributes, such as solder joint volumes and wetting angles, which are not available through manual visual inspection. A more in-depth evaluation of this technology is recommended.

  6. Vision-Based SLAM System for Unmanned Aerial Vehicles

    PubMed Central

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-01-01

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense, the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rate and highly noisy. PMID:26999131
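
    The predict/update cycle underlying such an estimator can be sketched with a simplified linear Kalman filter; the paper's actual estimator is a nonlinear EKF fusing AHRS, GPS and camera measurements, and all noise parameters below are illustrative:

```python
import numpy as np

# Simplified 1-D constant-velocity Kalman filter showing the kind of
# predict/update cycle the paper's EKF performs. State is [pos, vel];
# all noise levels are illustrative assumptions.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # we measure position only
Q = 1e-3 * np.eye(2)                    # process noise (assumed)
R = np.array([[0.5]])                   # measurement noise (assumed)

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros((2, 1)), np.eye(2)
for t in range(1, 101):
    z = np.array([[dt * t]])            # noiseless track moving at 1 unit/s
    x, P = kf_step(x, P, z)
```

    Fed a consistent constant-velocity track, the filter converges to the true position and velocity; in the paper's EKF the same cycle additionally linearizes the camera measurement model at each step.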

  8. Computer vision system for three-dimensional inspection

    NASA Astrophysics Data System (ADS)

    Penafiel, Francisco; Fernandez, Luis; Campoy, Pascual; Aracil, Rafael

    1994-11-01

    In the manufacturing process, certain workpieces are inspected for dimensional measurement using sophisticated quality control techniques. During the operation phase, these parts are deformed by the high temperatures involved in the process, and the evolution of the workpiece structure is reflected in its dimensional changes, which can be characterized by a set of dimensional parameters. In this paper, a three-dimensional automatic inspection of these parts is proposed. The aim is to measure workpiece features through 3D control methods using directional lighting and a computer artificial vision system. The results of this measurement must be compared with the parameters obtained after the manufacturing process in order to determine the degree of deformation of the workpiece and decide whether it is still usable. Workpieces outside a predetermined specification range must be discarded and replaced by new ones. The advantage of artificial vision methods is that there is no need to touch the object under inspection, which makes them feasible in hazardous environments unsuitable for human beings. A system has been developed and applied to the inspection of fuel assemblies in nuclear power plants. This system has been deployed in a very high-radiation environment and operates underwater. The physical dimensions of a nuclear fuel assembly are modified after its operation in a nuclear power plant relative to its original as-manufactured dimensions. The whole system (camera, mechanical and illumination systems, and the radioactive fuel assembly) is submerged in water to minimize radiation effects and is remotely controlled by human intervention. The developed system has to inspect accurately a set of measures on the fuel assembly surface, such as length, twist, and arching. The present project, called SICOM (nuclear fuel assembly inspection system), is included into the R

  9. Autonomic nervous system pulmonary vasoregulation after hypoperfusion in conscious dogs.

    PubMed

    Clougherty, P W; Nyhan, D P; Chen, B B; Goll, H M; Murray, P A

    1988-05-01

    We investigated the role of the autonomic nervous system (ANS) in the pulmonary vascular response to increasing cardiac index after a period of hypoperfusion (defined as reperfusion) in conscious dogs. Base-line and reperfusion pulmonary vascular pressure-cardiac index (P/Q) plots were generated by stepwise constriction and release, respectively, of an inferior vena caval occluder to vary Q. Surprisingly, after 10-15 min of hypoperfusion (Q decreased from 139 +/- 9 to 46 +/- 3 ml.min-1.kg-1), the pulmonary vascular pressure gradient (pulmonary arterial pressure-pulmonary capillary wedge pressure) was unchanged over a broad range of Q during reperfusion compared with base line when the ANS was intact. In contrast, pulmonary vasoconstriction was observed during reperfusion after combined sympathetic beta-adrenergic and cholinergic receptor block, after beta-block alone, but not after cholinergic block alone. The pulmonary vasoconstriction during reperfusion was entirely abolished by combined sympathetic alpha- and beta-block. Although sympathetic alpha-block alone caused pulmonary vasodilation compared with the intact, base-line P/Q relationship, no further vasodilation was observed during reperfusion. Thus the ANS actively regulates the pulmonary circulation during reperfusion in conscious dogs. With the ANS intact, sympathetic beta-adrenergic vasodilation offsets alpha-adrenergic vasoconstriction and prevents pulmonary vasoconstriction during reperfusion. PMID:2896465

  10. Advancements in design of an autonomous satellite docking system

    NASA Astrophysics Data System (ADS)

    Hays, Anthony B.; Tchoryk, Peter, Jr.; Pavlich, Jane C.; Ritter, Greg A.; Wassick, Gregory J.

    2004-08-01

    The past five years have witnessed a significant increase in the attention given to on-orbit satellite docking and servicing. Recent world events have proven how much we have come to rely on our space assets, especially during times of crisis. It has become abundantly clear that the ability to autonomously rendezvous, dock, inspect and service both military and civilian assets is no longer a nicety, but a necessity. Reconnaissance and communications satellites, even the Space Shuttle and International Space Station, could benefit from this capability. Michigan Aerospace Corporation, with funding from the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL), has been refining a compact, light, compliant soft-docking system. Earlier prototypes have been tested on the Marshall Space Flight Center (MSFC) flat floor as well as on the Johnson Space Center (JSC) KC-135 micro-gravity aircraft. Over the past year, refinements have been made to the mechanism based on the lessons learned from these tests. This paper discusses the optimal design that has resulted.

  11. Computer system evolution requirements for autonomous checkout of exploration vehicles

    NASA Technical Reports Server (NTRS)

    Davis, Tom; Sklar, Mike

    1991-01-01

    This study, now in its third year, has had the overall objective and challenge of determining the needed hooks and scars in the initial Space Station Freedom (SSF) system to assure that on-orbit assembly and refurbishment of lunar and Mars spacecraft can be accomplished with the maximum use of automation. In this study, automation is all-encompassing and includes physical tasks such as parts mating, tool operation, and human visual inspection, as well as non-physical tasks such as monitoring and diagnosis, planning and scheduling, and autonomous visual inspection. Potential tasks for automation include both extravehicular activity (EVA) and intravehicular activity (IVA) events. A number of specific techniques and tools have been developed to determine the ideal tasks to be automated, and the resulting timelines, changes in labor requirements, and resources required. The Mars/Phobos exploratory mission developed in FY89, and the Lunar Assembly/Refurbishment mission developed in FY90 and depicted in the 90 Day Study as Option 5, have been analyzed in detail in recent years. The complete methodology and results are presented in the FY89 and FY90 final reports.

  12. Fuzzy Logic Based Autonomous Parallel Parking System with Kalman Filtering

    NASA Astrophysics Data System (ADS)

    Panomruttanarug, Benjamas; Higuchi, Kohji

    This paper presents an emulation of fuzzy logic control schemes for an autonomous parallel parking system in a backward maneuver. There are four infrared sensors sending the distance data to a microcontroller for generating an obstacle-free parking path. Two of them mounted on the front and rear wheels on the parking side are used as the inputs to the fuzzy rules to calculate a proper steering angle while backing. The other two attached to the front and rear ends serve for avoiding collision with other cars along the parking space. At the end of parking processes, the vehicle will be in line with other parked cars and positioned in the middle of the free space. Fuzzy rules are designed based upon a wall following process. Performance of the infrared sensors is improved using Kalman filtering. The design method needs extra information from ultrasonic sensors. Starting from modeling the ultrasonic sensor in 1-D state space forms, one makes use of the infrared sensor as a measurement to update the predicted values. Experimental results demonstrate the effectiveness of sensor improvement.
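
    The sensor-improvement scheme described, with the ultrasonic-based prediction corrected by the infrared reading, boils down to a scalar Kalman measurement update; the distances and variances below are illustrative, not from the paper:

```python
# Minimal scalar Kalman measurement update, in the spirit described: an
# ultrasonic-based prediction of the wall distance is corrected by an
# infrared measurement. All numbers are illustrative, not from the paper.

def fuse(pred, pred_var, meas, meas_var):
    """Combine a predicted distance with a measured one."""
    gain = pred_var / (pred_var + meas_var)     # Kalman gain
    est = pred + gain * (meas - pred)           # corrected estimate
    est_var = (1.0 - gain) * pred_var           # reduced uncertainty
    return est, est_var

est, est_var = fuse(pred=50.0, pred_var=4.0, meas=54.0, meas_var=4.0)
```

    With equal variances the estimate falls halfway between prediction and measurement, and the fused uncertainty is half of either input's.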

  13. Abnormally Malicious Autonomous Systems and their Internet Connectivity

    SciTech Connect

    Shue, Craig A; Kalafut, Prof. Andrew; Gupta, Prof. Minaxi

    2011-01-01

    While many attacks are distributed across botnets, investigators and network operators have recently targeted malicious networks through high profile autonomous system (AS) de-peerings and network shut-downs. In this paper, we explore whether some ASes indeed are safe havens for malicious activity. We look for ISPs and ASes that exhibit disproportionately high malicious behavior using ten popular blacklists, plus local spam data, and extensive DNS resolutions based on the contents of the blacklists. We find that some ASes have over 80% of their routable IP address space blacklisted. Yet others account for large fractions of blacklisted IP addresses. Several ASes regularly peer with ASes associated with significant malicious activity. We also find that malicious ASes as a whole differ from benign ones in other properties not obviously related to their malicious activities, such as more frequent connectivity changes with their BGP peers. Overall, we conclude that examining malicious activity at AS granularity can unearth networks with lax security or those that harbor cybercrime.
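
    The study's headline metric, the fraction of an AS's routable address space that is blacklisted, can be sketched as follows; the inputs are toy values, not the paper's data:

```python
from collections import defaultdict

# Toy sketch of the study's core metric: the fraction of each AS's
# routable address space that appears on blacklists. The AS sizes and
# blacklist entries below are hypothetical.

def blacklist_fractions(routable_counts, blacklist_entries):
    hits = defaultdict(set)
    for asn, ip in blacklist_entries:
        hits[asn].add(ip)                      # dedupe repeated listings
    return {asn: len(hits[asn]) / total
            for asn, total in routable_counts.items()}

routable = {"AS100": 10, "AS200": 5}
entries = [("AS100", "10.0.0.1"), ("AS100", "10.0.0.2"),
           ("AS200", "10.1.0.1"), ("AS100", "10.0.0.1")]
fractions = blacklist_fractions(routable, entries)
```

    Deduplicating listings per AS matters because the same address often appears on several of the ten blacklists the paper combines.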

  14. Rapid laser prototyping of valves for microfluidic autonomous systems

    NASA Astrophysics Data System (ADS)

    Mohammed, M. I.; Abraham, E.; Desmulliez, M. P. Y.

    2013-03-01

    Capillary forces in microfluidics provide a simple yet elegant means to direct liquids through flow channel networks. The ability to manipulate the flow in a truly automated manner has proven more problematic. The majority of valves require some form of flow control devices, which are manually, mechanically or electrically driven. Most demonstrated capillary systems have been manufactured by photolithography, which, despite its high precision and repeatability, can be labour intensive, requires a clean room environment and the use of fixed photomasks, limiting thereby the agility of the manufacturing process to readily examine alternative designs. In this paper, we describe a robust and rapid CO2 laser manufacturing process and demonstrate a range of capillary-driven microfluidic valve structures embedded within a microfluidic network. The manufacturing process described allows for advanced control and manipulation of fluids such that flow can be halted, triggered and delayed based on simple geometrical alterations to a given microchannel. The rapid prototyping methodology has been employed with PMMA substrates and a complete device has been created, ready for use, within 2-3 h. We believe that this agile manufacturing process can be applied to produce a range of complex autonomous fluidic platforms and allows subsequent designs to be rapidly explored.

  15. Update on laser vision correction using wavefront analysis with the CustomCornea system and LADARVision 193-nm excimer laser

    NASA Astrophysics Data System (ADS)

    Maguen, Ezra I.; Salz, James J.; McDonald, Marguerite B.; Pettit, George H.; Papaioannou, Thanassis; Grundfest, Warren S.

    2002-06-01

    A study was undertaken to assess whether results of laser vision correction with the LADARVision 193-nm excimer laser (Alcon-Autonomous Technologies) can be improved with the use of wavefront analysis generated by a proprietary system including a Hartmann-Shack sensor and expressed using Zernike polynomials. A total of 82 eyes underwent LASIK in several centers with an improved algorithm, using the CustomCornea system. A subgroup of 48 eyes of 24 patients was randomized so that one eye underwent conventional treatment and the other underwent treatment based on wavefront analysis. Treatment parameters were equal for each type of refractive error. 83% of all eyes had uncorrected vision of 20/20 or better and 95% were 20/25 or better. In all groups, uncorrected visual acuities did not improve significantly in eyes treated with wavefront analysis compared to conventional treatments. Higher-order aberrations were consistently better corrected in eyes undergoing treatment based on wavefront analysis for LASIK at 6 months postop. In addition, the number of eyes with reduced RMS was significantly higher in the subset of eyes treated with a wavefront algorithm (38% vs. 5%). Wavefront technology may improve the outcomes of laser vision correction with the LADARVision excimer laser. Further refinements of the technology and clinical trials will contribute to this goal.

  16. Machine vision system for the control of tunnel boring machines

    NASA Astrophysics Data System (ADS)

    Habacher, Michael; O'Leary, Paul; Harker, Matthew; Golser, Johannes

    2013-03-01

    This paper presents a machine vision system for the control of dual-shield Tunnel Boring Machines. The system consists of a camera with ultra bright LED illumination and a target system consisting of multiple retro-reflectors. The camera mounted on the gripper shield measures the relative position and orientation of the target which is mounted on the cutting shield. In this manner the position of the cutting shield relative to the gripper shield is determined. Morphological operators are used to detect the retro-reflectors in the image and a covariance optimized circle fit is used to determine the center point of each reflector. A graph matching algorithm is used to ensure a robust matching of the constellation of the observed target with the ideal target geometry.
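
    The center-point estimation step can be illustrated with a plain least-squares (Kåsa) circle fit; the paper uses a covariance-optimized variant, so this is only a simplified stand-in, applied here to synthetic reflector edge points:

```python
import numpy as np

# Plain least-squares (Kasa) circle fit: solve x^2 + y^2 = A x + B y + C,
# then center = (A/2, B/2), radius = sqrt(C + (A/2)^2 + (B/2)^2).
def fit_circle(xs, ys):
    M = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    (A, B, C), *_ = np.linalg.lstsq(M, b, rcond=None)
    cx, cy = A / 2.0, B / 2.0
    return cx, cy, np.sqrt(C + cx**2 + cy**2)

# Synthetic edge points on a reflector of radius 2 centered at (5, -1)
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
cx, cy, r = fit_circle(5.0 + 2.0 * np.cos(theta), -1.0 + 2.0 * np.sin(theta))
```

    For exact circle points the linear system is solved exactly; with noisy edge pixels, a covariance-weighted fit like the paper's down-weights unreliable points.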

  17. Autonomic Modulation by Electrical Stimulation of the Parasympathetic Nervous System: An Emerging Intervention for Cardiovascular Diseases.

    PubMed

    He, Bo; Lu, Zhibing; He, Wenbo; Huang, Bing; Jiang, Hong

    2016-06-01

    The cardiac autonomic nervous system has been known to play an important role in the development and progression of cardiovascular diseases. Autonomic modulation by electrical stimulation of the parasympathetic nervous system, which increases the parasympathetic activity and suppresses the sympathetic activity, is emerging as a therapeutic strategy for the treatment of cardiovascular diseases. Here, we review the recent literature on autonomic modulation by electrical stimulation of the parasympathetic nervous system, including vagus nerve stimulation, transcutaneous auricular vagal stimulation, spinal cord stimulation, and ganglionated plexi stimulation, in the treatment of heart failure, atrial fibrillation, and ventricular arrhythmias. PMID:26914959

  18. Bio-inspired vision

    NASA Astrophysics Data System (ADS)

    Posch, C.

    2012-01-01

    Nature still outperforms the most powerful computers in routine functions involving perception, sensing and actuation like vision, audition, and motion control, and is, most strikingly, orders of magnitude more energy-efficient than its artificial competitors. The reasons for the superior performance of biological systems are subject to diverse investigations, but it is clear that the form of hardware and the style of computation in nervous systems are fundamentally different from what is used in artificial synchronous information processing systems. Very generally speaking, biological neural systems rely on a large number of relatively simple, slow and unreliable processing elements and obtain performance and robustness from a massively parallel principle of operation and a high level of redundancy where the failure of single elements usually does not induce any observable system performance degradation. In the late 1980s, Carver Mead demonstrated that silicon VLSI technology can be employed in implementing "neuromorphic" circuits that mimic neural functions and fabricating building blocks that work like their biological role models. Neuromorphic systems, as the biological systems they model, are adaptive, fault-tolerant and scalable, and process information using energy-efficient, asynchronous, event-driven methods. In this paper, some basics of neuromorphic electronic engineering and its impact on recent developments in optical sensing and artificial vision are presented. It is demonstrated that bio-inspired vision systems have the potential to outperform conventional, frame-based vision acquisition and processing systems in many application fields and to establish new benchmarks in terms of redundancy suppression/data compression, dynamic range, temporal resolution and power efficiency to realize advanced functionality like 3D vision, object tracking, motor control, visual feedback loops, etc. in real-time.
It is argued that future artificial vision systems
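
    The event-driven, frame-free acquisition principle described above can be illustrated with a toy pixel model that emits an event only when its log-intensity changes beyond a threshold; the threshold and input frames are illustrative:

```python
import numpy as np

# Toy event-driven pixel model: instead of sending frames, each pixel
# emits an event (t, x, y, polarity) when its log-intensity changes by
# more than a threshold since its last event. Inputs are illustrative.

def events_from_frames(frames, threshold=0.2):
    ref = np.log(frames[0] + 1e-6)              # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        logf = np.log(frame + 1e-6)
        diff = logf - ref
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            events.append((t, int(x), int(y), 1 if diff[y, x] > 0 else -1))
            ref[y, x] = logf[y, x]              # reset reference after event
    return events

f0 = np.ones((2, 2))
f1 = f0.copy()
f1[0, 0] = 2.0                                  # a single pixel brightens
events = events_from_frames([f0, f1])
```

    A static scene produces no events at all, which is the redundancy suppression the abstract highlights; only the one changed pixel reports.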

  19. Knowledge-Based Motion Control of an Intelligent Mobile Autonomous System

    NASA Astrophysics Data System (ADS)

    Isik, Can

    An Intelligent Mobile Autonomous System (IMAS), which is equipped with vision and low level sensors to cope with unknown obstacles, is modeled as a hierarchy of path planning and motion control. This dissertation concentrates on the lower level of this hierarchy (Pilot) with a knowledge-based controller. The basis of a theory of knowledge-based controllers is established, using the example of the Pilot level motion control of IMAS. In this context, the knowledge-based controller with a linguistic world concept is shown to be adequate for the minimum time control of an autonomous mobile robot motion. The Pilot level motion control of IMAS is approached in the framework of production systems. The three major components of the knowledge-based control that are included here are the hierarchies of the database, the rule base and the rule evaluator. The database, which is the representation of the state of the world, is organized as a semantic network, using a concept of minimal admissible vocabulary. The hierarchy of rule base is derived from the analytical formulation of minimum-time control of IMAS motion. The procedure introduced for rule derivation, which is called analytical model verbalization, utilizes the concept of causalities to describe the system behavior. A realistic analytical system model is developed and the minimum-time motion control in an obstacle strewn environment is decomposed to a hierarchy of motion planning and control. The conditions for the validity of the hierarchical problem decomposition are established, and the consistency of operation is maintained by detecting the long term conflicting decisions of the levels of the hierarchy. The imprecision in the world description is modeled using the theory of fuzzy sets. The method developed for the choice of the rule that prescribes the minimum-time motion control among the redundant set of applicable rules is explained and the usage of fuzzy set operators is justified. Also included in the

  20. Autonomous reconfigurable GPS/INS navigation and pointing system for rendezvous and docking

    NASA Technical Reports Server (NTRS)

    Upadhyay, T. N.; Cotterill, S.; Deaton, A. W.

    1992-01-01

    This paper describes the development of an autonomous integrated spacecraft navigation system which provides multiple modes of navigation, including relative and absolute navigation. The system provides attitude information from GPS or INS, or by tightly integrating the two systems. Interferometric GPS techniques are used when multiple antennas and integrated Doppler measurements are available. An important aspect of this research is the autonomously reconfigurable Kalman filter, controlled by an embedded knowledge base, designed to respond to component degradation and changes in mission goals.

  1. [Physiopathology of the autonomic nervous system activity during sleep].

    PubMed

    Bersano, C; Revera, M; Vanoli, E

    2001-08-01

    Sleep consists of two phases that periodically alternate: the rapid eye movement (REM) phase and the non-REM phase. The non-REM stage is characterized by wide synchronous waves in the electroencephalogram, by a low heart rate and by a decrease in arterial blood pressure and peripheral resistances. This hemodynamic setting is the consequence of the autonomic balance characterized by high vagal activity and low sympathetic activity. Such an autonomic condition is adequately described by the spectral analysis of heart rate variability documenting a prevalence in the high frequency band (the respiratory vagal band). The REM stage of sleep is characterized by asynchronous waves in the electroencephalogram and it is associated with a further increase in the vagal dominance of the autonomic balance resulting in a lower heart rate and decreased peripheral resistances. The REM phase of sleep is, however, also characterized by hemodynamic instability due to sudden bursts of sympathetic activity, associated with the rapid eye movements. These sympathetic bursts cause sudden changes in heart rate and peripheral resistance and may influence cardiac electrical stability both at the atrial and ventricular levels. Additionally, REM sleep may enhance the risk of anginal attacks in coronary artery disease patients. Analysis of the autonomic balance during the different phases of sleep may also help in the identification of autonomic derangements typically associated with myocardial infarction. PMID:11582715
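
    The spectral analysis of heart rate variability mentioned above can be sketched as follows, using a synthetic RR tachogram carrying a pure 0.25 Hz respiratory rhythm; the LF/HF band limits are the conventional definitions, and everything else is illustrative:

```python
import numpy as np
from scipy.signal import welch

# Illustrative HRV spectral analysis: an evenly resampled RR-interval
# tachogram, with power summed in the conventional LF (0.04-0.15 Hz)
# and HF (0.15-0.4 Hz) bands. The signal is synthetic: a pure 0.25 Hz
# respiratory (vagal) rhythm, so HF power should dominate.
fs = 4.0                                        # resampling rate, Hz
t = np.arange(0, 300, 1 / fs)
rr = 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * t)  # RR intervals, seconds

f, pxx = welch(rr - rr.mean(), fs=fs, nperseg=512)
lf_power = pxx[(f >= 0.04) & (f < 0.15)].sum()
hf_power = pxx[(f >= 0.15) & (f < 0.40)].sum()
```

    A dominant HF band, as here, is the spectral signature of vagal (respiratory) modulation that the abstract associates with non-REM sleep.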

  2. Autonomous Agents: The Origins and Co-Evolution of Reproducing Molecular Systems

    NASA Technical Reports Server (NTRS)

    Kauffman, Stuart

    1999-01-01

    The central aim of this award concerned an investigation into, and adequate formulation of, the concept of an "autonomous agent." If we consider a bacterium swimming upstream in a glucose gradient, we are willing to say of the bacterium that it is going to get food. That is, we are willing, and do, describe the bacterium as acting on its own behalf in an environment. All free living cells are, in this sense, autonomous agents. But the bacterium is "just" a set of molecules. We define an autonomous agent as a physical system able to act on its own behalf in an environment, then ask, "What must a physical system be to be an autonomous agent?" The tentative definition for a molecular autonomous agent is that it must be self-reproducing and carry out at least one thermodynamic work cycle. The work carried out in this grant involved, among other features, the development of a detailed model of a molecular autonomous agent, and study of the kinetics of this system. In particular, a molecular autonomous agent must, by the above tentative definition, not only reproduce, but must carry out at least one work cycle. I took, as a simple example of a self-reproducing molecular system, the single-stranded DNA hexamer 3'CCGCGG5' which can line up and ligate its two complementary trimers, 5'CCG3' and 5'CGG3'. But the two ligated trimers constitute the same molecular sequence in the 3' to 5' direction as the initial hexamer, hence this system is autocatalytic. On the other hand the above system is not yet an autonomous agent. At the minimum, autonomous agents, as I have defined them, are a new class of chemical reaction network. At a maximum, they may constitute a proper definition of life itself.
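
    The hexamer/trimer example can be checked in code: written 5'→3', the hexamer CCGCGG equals its own reverse complement, so template-directed ligation of the complementary trimers CCG and CGG reproduces the template's own sequence (the abstract writes the same sequence in the 3'→5' direction):

```python
# The abstract's example, checked in code. Written 5'->3', the hexamer
# CCGCGG equals its own reverse complement, so the two complementary
# trimers CCG and CGG, once ligated on the template, reproduce the
# template's own sequence: the system is autocatalytic.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq_5to3):
    """Complementary strand, also written 5' -> 3'."""
    return "".join(COMPLEMENT[b] for b in reversed(seq_5to3))

hexamer = "CCGCGG"                # template, 5' -> 3'
trimers = ["CCG", "CGG"]          # substrates, 5' -> 3'

ligated = "".join(trimers)        # product of template-directed ligation
self_complementary = reverse_complement(hexamer) == hexamer
autocatalytic = self_complementary and ligated == hexamer
```

    Self-complementarity is what makes the product a new template, closing the autocatalytic loop; by the abstract's definition, a work cycle would still be needed for the system to qualify as an autonomous agent.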

  3. Active Vision in Marmosets: A Model System for Visual Neuroscience

    PubMed Central

    Reynolds, John H.; Miller, Cory T.

    2014-01-01

    The common marmoset (Callithrix jacchus), a small-bodied New World primate, offers several advantages to complement vision research in larger primates. Studies in the anesthetized marmoset have detailed the anatomy and physiology of their visual system (Rosa et al., 2009) while studies of auditory and vocal processing have established their utility for awake and behaving neurophysiological investigations (Lu et al., 2001a,b; Eliades and Wang, 2008a,b; Osmanski and Wang, 2011; Remington et al., 2012). However, a critical unknown is whether marmosets can perform visual tasks under head restraint. This has been essential for studies in macaques, enabling both accurate eye tracking and head stabilization for neurophysiology. In one set of experiments we compared the free viewing behavior of head-fixed marmosets to that of macaques, and found that their saccadic behavior is comparable across a number of saccade metrics and that saccades target similar regions of interest including faces. In a second set of experiments we applied behavioral conditioning techniques to determine whether the marmoset could control fixation for liquid reward. Two marmosets could fixate a central point and ignore peripheral flashing stimuli, as needed for receptive field mapping. Both marmosets also performed an orientation discrimination task, exhibiting a saturating psychometric function with reliable performance and shorter reaction times for easier discriminations. These data suggest that the marmoset is a viable model for studies of active vision and its underlying neural mechanisms. PMID:24453311

  4. Using Gnu C to develop PC-based vision systems

    NASA Astrophysics Data System (ADS)

    Miller, John W. V.; Shridhar, Malayappan; Shabestari, Behrouz N.

    1995-10-01

    The Gnu project has provided a substantial quantity of free, high-quality software tools for UNIX-based machines, including the Gnu C compiler, which runs on a wide variety of hardware including IBM PC-compatible machines with 80386 or newer (32-bit) processors. While this compiler was developed for UNIX applications, it has been successfully ported to DOS and offers substantial benefits over traditional DOS-based 16-bit compilers for machine vision applications. One of the most significant advantages of Gnu C is the removal of the 640 K limit, since addressing is performed with 32-bit pointers; hence, all physical memory can be used directly to store and retrieve images, lookup tables, databases, etc. Execution speed is also generally higher, since 32-bit code usually executes faster and there are no far pointers. Protected-mode operation provides further benefits: errant pointers cause segmentation errors, and the source of such errors can be readily identified using special tools provided with the compiler. Examples of vision applications using Gnu C include automatic hand-written address block recognition, counting of shattered-glass particles, and dimensional analysis.

  5. Creating photorealistic virtual model with polarization-based vision system

    NASA Astrophysics Data System (ADS)

    Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi

    2005-08-01

    Recently, 3D models have come into use in many fields such as education, medical services, entertainment, art, and digital archiving, and the demand for photorealistic virtual models is increasing. In the computer vision field, a number of techniques have been developed for creating such virtual models by observing real objects. In this paper, we propose a method for creating a photorealistic virtual model using a laser range sensor and a polarization-based image capture system. We capture range and color images of an object rotated on a rotary table. Using the reconstructed object shape and the sequence of color images of the object, the parameters of a reflection model are estimated in a robust manner, allowing us to build a photorealistic 3D model that accounts for surface reflection. The key point of the proposed method is that the diffuse and specular reflection components are first separated from the color image sequence, and the reflectance parameters of each component are then estimated separately. A polarization filter is used to separate the reflection components. This approach enables estimation of the reflectance properties of real objects whose surfaces exhibit specularity as well as diffuse reflection. The recovered object shape and reflectance properties are then used to synthesize object images with realistic shading effects under arbitrary illumination conditions.
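
    The polarizer-based separation step can be sketched as follows. A common simplification (assumed here, not taken from the paper) is that the diffuse component is unpolarized while the specular component is partially linearly polarized, so the intensity observed through a rotating linear polarizer traces a sinusoid in twice the polarizer angle: the minimum carries half the diffuse light, and the modulation carries the specular light.

```python
import numpy as np

def separate_reflection(angles, intensities):
    """Fit I(phi) = c0 + c1*cos(2*phi) + c2*sin(2*phi) to intensities
    observed through a linear polarizer at the given angles, then split
    the light: the unmodulated minimum carries half the (unpolarized)
    diffuse component, the modulation carries the specular component."""
    A = np.column_stack([np.ones_like(angles),
                         np.cos(2 * angles),
                         np.sin(2 * angles)])
    c0, c1, c2 = np.linalg.lstsq(A, intensities, rcond=None)[0]
    amplitude = np.hypot(c1, c2)
    i_min, i_max = c0 - amplitude, c0 + amplitude
    diffuse = 2.0 * i_min      # polarizer passes half of unpolarized light
    specular = i_max - i_min
    return diffuse, specular
```

    Applied per pixel across the image sequence, this yields the separated components whose reflectance parameters the paper then fits individually.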

  6. Role of the Autonomic Nervous System in Atrial Fibrillation: Pathophysiology and Therapy

    PubMed Central

    Chen, Peng-Sheng; Chen, Lan S.; Fishbein, Michael C.; Lin, Shien-Fong; Nattel, Stanley

    2014-01-01

    Autonomic nervous system activation can induce significant and heterogeneous changes of atrial electrophysiology and induce atrial tachyarrhythmias, including atrial tachycardia (AT) and atrial fibrillation (AF). The importance of the autonomic nervous system in atrial arrhythmogenesis is also supported by circadian variation in the incidence of symptomatic AF in humans. Methods that reduce autonomic innervation or outflow have been shown to reduce the incidence of spontaneous or induced atrial arrhythmias, suggesting that neuromodulation may be helpful in controlling AF. In this review we focus on the relationship between the autonomic nervous system and the pathophysiology of AF, and the potential benefit and limitations of neuromodulation in the management of this arrhythmia. We conclude that autonomic nerve activity plays an important role in the initiation and maintenance of AF, and modulating autonomic nerve function may contribute to AF control. Potential therapeutic applications include ganglionated plexus ablation, renal sympathetic denervation, cervical vagal nerve stimulation, baroreflex stimulation, cutaneous stimulation, novel drug approaches and biological therapies. While the role of the autonomic nervous system has long been recognized, new science and new technologies promise exciting prospects for the future. PMID:24763467

  7. A database/knowledge structure for a robotics vision system

    NASA Technical Reports Server (NTRS)

    Dearholt, D. W.; Gonzales, N. N.

    1987-01-01

    Desirable properties of robotics vision database systems are given, and structures which possess properties appropriate for some aspects of such database systems are examined. Included in the structures discussed is a family of networks in which link membership is determined by measures of proximity between pairs of the entities stored in the database. This type of network is shown to have properties which guarantee that the search for a matching feature vector is monotonic. That is, the database can be searched with no backtracking, if there is a feature vector in the database which matches the feature vector of the external entity which is to be identified. The construction of the database is discussed, and the search procedure is presented. A section on the support provided by the database for description of the decision-making processes and the search path is also included.
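
    The no-backtracking search the abstract describes can be illustrated with a greedy walk on a proximity network. The graph layout, feature vectors, and distance metric below are hypothetical; the paper's monotonicity guarantee holds for networks whose links are built from proximity measures as described, whereas this sketch simply stops at a local best match.

```python
import math

def greedy_match(graph, features, query, start):
    """Walk a proximity network, always moving to the neighbor whose
    stored feature vector is closer to the query vector, and stop when
    no neighbor improves on the current node (a local best match)."""
    def dist(node):
        return math.dist(features[node], query)

    current = start
    while True:
        best = min(graph[current], key=dist, default=current)
        if dist(best) < dist(current):
            current = best
        else:
            return current
```

    On a suitably constructed network, each step strictly decreases the distance to the query, so the search terminates at the matching feature vector without revisiting any node.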

  8. Wearable design issues for electronic vision enhancement systems

    NASA Astrophysics Data System (ADS)

    Dvorak, Joe

    2006-09-01

    As the baby boomer generation ages, visual impairment will affect a significant portion of the US population. At the same time, more and more of our world is becoming digital. These two trends, coupled with the continuing advances in digital electronics, argue for a rethinking in the design of aids for the visually impaired. This paper discusses design issues for electronic vision enhancement systems (EVES) [R.C. Peterson, J.S. Wolffsohn, M. Rubinstein, et al., Am. J. Ophthalmol. 136 1129 (2003)] that will facilitate their wearability and continuous use. We briefly discuss the factors affecting a person's acceptance of wearable devices. We define the concept of operational inertia, which plays an important role in our design of wearable devices and systems. We then discuss how design principles based upon operational inertia can be applied to the design of EVES.

  9. Rapid Onboard Trajectory Design for Autonomous Spacecraft in Multibody Systems

    NASA Astrophysics Data System (ADS)

    Trumbauer, Eric Michael

    This research develops automated, on-board trajectory planning algorithms in order to support current and new mission concepts. These include orbiter missions to Phobos or Deimos, Outer Planet Moon orbiters, and robotic and crewed missions to small bodies. The challenges stem from the limited on-board computing resources which restrict full trajectory optimization with guaranteed convergence in complex dynamical environments. The approach taken consists of leveraging pre-mission computations to create a large database of pre-computed orbits and arcs. Such a database is used to generate a discrete representation of the dynamics in the form of a directed graph, which acts to index these arcs. This allows the use of graph search algorithms on-board in order to provide good approximate solutions to the path planning problem. Coupled with robust differential correction and optimization techniques, this enables the determination of an efficient path between any boundary conditions with very little time and computing effort. Furthermore, the optimization methods developed here based on sequential convex programming are shown to have provable convergence properties, as well as generating feasible major iterates in case of a system interrupt -- a key requirement for on-board application. The outcome of this project is thus the development of an algorithmic framework which allows the deployment of this approach in a variety of specific mission contexts. Test cases related to missions of interest to NASA and JPL such as a Phobos orbiter and a Near Earth Asteroid interceptor are demonstrated, including the results of an implementation on the RAD750 flight processor. This method fills a gap in the toolbox being developed to create fully autonomous space exploration systems.
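
    The on-board planning step, a search over a directed graph whose edges index pre-computed arcs, can be sketched with a textbook Dijkstra search. The node names, cost values (delta-v), and `arcs` structure below are illustrative assumptions; the actual system couples such a search with differential correction and sequential convex optimization.

```python
import heapq

def cheapest_path(arcs, start, goal):
    """Dijkstra search over a directed graph whose edges index
    pre-computed trajectory arcs.  'arcs[u]' maps each orbit node u to
    (next_node, delta_v) pairs; returns (total delta-v, node sequence)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, dv in arcs.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + dv, nxt, path + [nxt]))
    return float("inf"), []
```

    Because the heavy dynamics work is baked into the arc database before launch, the on-board search itself is cheap enough for a flight processor.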

  10. Perinatally Influenced Autonomic System Fluctuations Drive Infant Vocal Sequences.

    PubMed

    Zhang, Yisi S; Ghazanfar, Asif A

    2016-05-23

    The variable vocal behavior of human infants is the scaffolding upon which speech and social interactions develop. It is important to know what factors drive this developmentally critical behavioral output. Using marmoset monkeys as a model system, we first addressed whether the initial conditions for vocal output and its sequential structure are perinatally influenced. Using dizygotic twins and Markov analyses of their vocal sequences, we found that in the first postnatal week, twins had more similar vocal sequences to each other than to their non-twin siblings. Moreover, both twins and their siblings had more vocal sequence similarity with each other than with non-sibling infants. Using electromyography, we then investigated the physiological basis of vocal sequence structure by measuring respiration and arousal levels (via changes in heart rate). We tested the hypothesis that early-life influences on vocal output are via fluctuations of the autonomic nervous system (ANS) mediated by vocal biomechanics. We found that arousal levels fluctuate at ∼0.1 Hz (the Mayer wave) and that this slow oscillation modulates the amplitude of the faster, ∼1.0 Hz respiratory rhythm. The systematic changes in respiratory amplitude result in the different vocalizations that comprise infant vocal sequences. Among twins, the temporal structure of arousal level changes was similar and therefore indicates why their vocal sequences were similar. Our study shows that vocal sequences are tightly linked to respiratory patterns that are modulated by ANS fluctuations and that the temporal structure of ANS fluctuations is perinatally influenced. PMID:27068420
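
    The Markov analysis of vocal sequences can be sketched as follows: estimate a first-order transition matrix from each infant's call sequence, then compare infants by a distance between their matrices. The call labels, the L1 distance, and the row-normalization scheme below are illustrative assumptions, not the paper's exact method.

```python
def transition_matrix(sequence, states):
    """Row-normalized first-order Markov transition probabilities
    estimated from an ordered sequence of call labels."""
    counts = {s: {t: 0.0 for t in states} for s in states}
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1.0
    for s in states:
        total = sum(counts[s].values())
        if total:
            for t in states:
                counts[s][t] /= total
    return counts

def sequence_similarity(seq1, seq2, states):
    """L1 distance between two transition matrices (0 = identical)."""
    m1, m2 = transition_matrix(seq1, states), transition_matrix(seq2, states)
    return sum(abs(m1[s][t] - m2[s][t]) for s in states for t in states)
```

    Under this scheme, the twin-similarity finding corresponds to smaller matrix distances between twins than between non-siblings.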

  11. The modeling of portable 3D vision coordinate measuring system

    NASA Astrophysics Data System (ADS)

    Liu, Shugui; Huang, Fengshan; Peng, Kai

    2005-02-01

    The portable three-dimensional vision coordinate measuring system, which consists of a light pen, a CCD camera, and a laptop computer, can be widely applied in most coordinate-measuring settings, especially on industrial sites. The light pen carries at least three point-shaped light sources (LEDs), which serve as the measured control characteristic points, and a touch-trigger probe with a spherical stylus that contacts the point to be measured. The most important characteristic of this system is that the three light sources and the probe stylus are aligned on one line with known positions. In building and studying this measuring system, the key problem is constructing the system's mathematical model, known as the perspective of three collinear points, a particular case of the perspective-three-point (P3P) problem. On the basis of P3P and spatial analytical geometry theory, the system's mathematical model is established in this paper. Moreover, it is verified that the perspective of three collinear points has a unique solution. The analytical equations for the measured point's coordinates are derived using the system's mathematical model and the constraint that the three light sources and the probe stylus lie on one line. Finally, the effectiveness of the mathematical model is confirmed by experiments.
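
    A minimal numerical sketch of the perspective-of-three-collinear-points problem, under two assumptions not stated in the abstract: the three light sources are equally spaced (so the middle point is the midpoint of the outer two), and their unit viewing rays `u1, u2, u3` have already been recovered from calibrated image coordinates. The midpoint constraint yields a homogeneous linear system whose null space gives the depths up to scale; the known end-to-end distance fixes the scale. The paper's analytical derivation and uniqueness proof are more general than this sketch.

```python
import numpy as np

def collinear_depths(u1, u2, u3, length):
    """Depths of three equally spaced collinear points along known unit
    viewing rays u1, u2, u3.  The midpoint constraint
    d2*u2 = (d1*u1 + d3*u3)/2 gives a homogeneous 3x3 system; its null
    space yields the depths up to scale, and the known end-to-end
    distance 'length' fixes the scale."""
    M = np.column_stack([0.5 * u1, -u2, 0.5 * u3])
    _, _, vt = np.linalg.svd(M)
    d = vt[-1]                    # null-space direction
    if d[0] < 0:
        d = -d                    # depths must be positive
    p1, p3 = d[0] * u1, d[2] * u3
    return d * (length / np.linalg.norm(p1 - p3))
```

    With the LED depths known, the stylus-tip position follows by extrapolating along the same line, which is the restriction the paper exploits.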

  12. A feedback-trained autonomous control system for heterogeneous search and rescue applications

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2012-06-01

    Due to the environment in which operation occurs, search and rescue (SAR) applications present a challenge to autonomous systems. A control technique for a heterogeneous multi-robot group is discussed. The proposed methodology is not fully autonomous; rather, human operators are freed from most control tasks and allowed to focus on perception tasks while robots execute a collaborative search and identification plan. Robotic control combines a centralized dispatch and learning system (which continuously refines heuristics used for planning) with local autonomous task ordering (based on existing task priority, proximity, and local conditions). This technique was tested in an environment analogous to SAR from a control perspective.

  13. neu-VISION: an explosives detection system for transportation security

    NASA Astrophysics Data System (ADS)

    Warman, Kieffer; Penn, David

    2008-04-01

    Terrorists were targeting commercial airliners long before the 9/11 attacks on the World Trade Center and the Pentagon. Despite heightened security measures, commercial airliners remain an attractive target for terrorists, as evidenced by the August 2006 terrorist plot to destroy as many as ten aircraft in mid-flight from the United Kingdom to the United States. In response to the security threat, air carriers are now required to screen 100 percent of all checked baggage for explosives. The scale of this task is enormous, and the Transportation Security Administration has deployed thousands of detection systems. Although this has resulted in improved security, the performance of the installed systems is not ideal. Further improvements are needed and can only be made with new technologies that ensure a flexible concept of operations and provide superior detection along with low false-alarm rates and excellent dependability. To address these security needs, Applied Signal Technology, Inc. is developing an innovative and practical solution to meet the performance demands of aviation security. The neu-VISION(TM) system is expected to provide explosives detection performance for checked baggage that both complements and surpasses that of currently deployed systems. The neu-VISION(TM) system leverages a 5-year R&D program developing the Associated Particle Imaging (API) technique, a neutron-based, non-intrusive material identification and imaging technique. The superior performance afforded by this neutron interrogation technique delivers false-alarm rates much lower than deployed technologies and "sees through" dense, heavy materials. Small quantities of explosive material are identified even in cluttered environments.

  14. Space station automation study. Volume I. Executive summary. Autonomous systems and assembly. Final report

    SciTech Connect

    Not Available

    1984-11-01

    The purpose of the Space Station Automation Study (SSAS) was to develop informed technical guidance for NASA personnel in the use of autonomy and autonomous systems to implement Space Station functions.

  15. Space station automation study. Volume 1: Executive summary. Autonomous systems and assembly

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The purpose of the Space Station Automation Study (SSAS) was to develop informed technical guidance for NASA personnel in the use of autonomy and autonomous systems to implement space station functions.

  16. Stereoscopic Machine-Vision System Using Projected Circles

    NASA Technical Reports Server (NTRS)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles ("rovers") on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a
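
    Once a point on a projected laser circle has been matched between the two camera images, its range follows from the standard rectified-stereo relation. The pixel coordinates, focal length, and baseline below are illustrative values, not parameters of the system described.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Rectified-stereo range equation: a matched point at pixel column
    x_left in the left image and x_right in the right image lies at
    depth Z = f * B / (x_left - x_right)."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity
```

    Distortions of the projected circles relative to the stored flat-ground calibration images then indicate obstacles at the computed ranges.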

  17. An autonomous surveillance system for blind sources localization and separation

    NASA Astrophysics Data System (ADS)

    Wu, Sean; Kulkarni, Raghavendra; Duraiswamy, Srikanth

    2013-05-01

    This paper aims at developing a new technology that will enable one to conduct autonomous and silent surveillance to monitor sound sources, stationary or moving in 3D space, and a blind separation of target acoustic signals. The underlying principle of this technology is a hybrid approach that uses: 1) a passive sonic detection and ranging method consisting of iterative triangulation and redundant checking to locate the Cartesian coordinates of arbitrary sound sources in 3D space, 2) advanced signal processing to sanitize the measured data and enhance the signal-to-noise ratio, and 3) short-time source localization and separation to extract the target acoustic signals from the directly measured mixed ones. A prototype based on this technology has been developed; its hardware includes six B&K 1/4-in condenser microphones (Type 4935), two 4-channel data acquisition units (NI-9234) with a maximum sampling rate of 51.2 kS/s per channel, one NI cDAQ-9174 chassis, a thermometer to measure the air temperature, a camera to view the relative positions of located sources, and a laptop to control data acquisition and post-processing. Test results for locating arbitrary sound sources emitting continuous, random, impulsive, and transient signals, and for blind separation of signals in various non-ideal environments, are presented. This system is invisible to any anti-surveillance device since it uses the acoustic signal emitted by a target source. It can be mounted on a robot or an unmanned vehicle to perform various covert operations, including intelligence gathering in an open or a confined field, or to carry out rescue missions searching for people trapped inside ruins or buried under wreckage.
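
    An idealized version of the localization step can be sketched as multilateration from source-to-microphone ranges. This simplification assumes the ranges themselves are known (in practice only time differences of arrival are measured, and the paper uses iterative triangulation with redundant checking); subtracting the first sphere equation from the rest linearizes the problem.

```python
import numpy as np

def multilaterate(mics, ranges):
    """Least-squares source position from source-to-microphone ranges.
    Subtracting the first sphere equation |x - m_i|^2 = r_i^2 from the
    others cancels the quadratic term and leaves a linear system."""
    mics = np.asarray(mics, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (mics[1:] - mics[0])
    b = (r[0] ** 2 - r[1:] ** 2
         + np.sum(mics[1:] ** 2, axis=1) - np.sum(mics[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

    With six microphones, as in the prototype, the system is overdetermined, which is what makes the paper's redundant checking possible.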

  18. A stochastic perturbation theory for non-autonomous systems

    SciTech Connect

    Moon, W.; Wettlaufer, J. S.

    2013-12-15

    We develop a perturbation theory for a class of first order nonlinear non-autonomous stochastic ordinary differential equations that arise in climate physics. The perturbative procedure produces moments in terms of integral delay equations, whose order by order decay is characterized in a Floquet-like sense. Both additive and multiplicative sources of noise are discussed and the question of how the nature of the noise influences the results is addressed theoretically and numerically. By invoking the Martingale property, we rationalize the transformation of the underlying Stratonovich form of the model to an Ito form, independent of whether the noise is additive or multiplicative. The generality of the analysis is demonstrated by developing it both for a Brownian particle moving in a periodically forced quartic potential, which acts as a simple model of stochastic resonance, as well as for our more complex climate physics model. The validity of the approach is shown by comparison with numerical solutions. The particular climate dynamics problem upon which we focus involves a low-order model for the evolution of Arctic sea ice under the influence of increasing greenhouse gas forcing ΔF₀. The deterministic model, developed by Eisenman and Wettlaufer [“Nonlinear threshold behavior during the loss of Arctic sea ice,” Proc. Natl. Acad. Sci. U.S.A. 106(1), 28–32 (2009)] exhibits several transitions as ΔF₀ increases and the stochastic analysis is used to understand the manner in which noise influences these transitions and the stability of the system.
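
    The stochastic-resonance benchmark mentioned in the abstract, a Brownian particle in a periodically forced quartic double-well potential, can be integrated numerically with an Euler-Maruyama scheme. The potential parameters, noise amplitude, and step size below are illustrative assumptions; note that for purely additive noise, as used here, the Ito and Stratonovich forms coincide.

```python
import numpy as np

def euler_maruyama(drift, sigma, x0, t_max, dt, seed=0):
    """Integrate dX = drift(X, t) dt + sigma dW (additive noise)."""
    rng = np.random.default_rng(seed)
    n = round(t_max / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = (x[i] + drift(x[i], i * dt) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

# Double-well potential V(x, t) = x**4/4 - x**2/2 - A*x*cos(omega*t),
# so the drift is -dV/dx = -x**3 + x + A*cos(omega*t).
A, omega = 0.3, 2.0 * np.pi / 10.0
path = euler_maruyama(lambda x, t: -x ** 3 + x + A * np.cos(omega * t),
                      sigma=0.2, x0=1.0, t_max=100.0, dt=0.01)
```

    Such direct simulations provide the numerical solutions against which the perturbative moment expansions are validated.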

  19. Hair-based sensors for micro-autonomous systems

    NASA Astrophysics Data System (ADS)

    Sadeghi, Mahdi M.; Peterson, Rebecca L.; Najafi, Khalil

    2012-06-01

    We seek to harness microelectromechanical systems (MEMS) technologies to build biomimetic devices for low-power, high-performance, robust sensors and actuators on micro-autonomous robot platforms. Hair is used abundantly in nature for a variety of functions including balance and inertial sensing, flow sensing and aerodynamic (air foil) control, tactile and touch sensing, insulation and temperature control, particle filtering, and gas/chemical sensing. Biological hairs, which are typically characterized by large surface/volume ratios and mechanical amplification of movement, can be distributed in large numbers over large areas providing unprecedented sensitivity, redundancy, and stability (robustness). Local neural transduction allows for space- and power-efficient signal processing. Moreover by varying the hair structure and transduction mechanism, the basic hair form can be used for a wide diversity of functions. In this paper, by exploiting a novel wafer-level, bubble-free liquid encapsulation technology, we make arrays of micro-hydraulic cells capable of electrostatic actuation and hydraulic amplification, which enables high force/high deflection actuation and extremely sensitive detection (sensing) at low power. By attachment of cilia (hair) to the micro-hydraulic cell, air flow sensors with excellent sensitivity (< few cm/s) and dynamic range (> 10 m/s) have been built. A second-generation design has significantly reduced the sensor response time while maintaining sensitivity of about 2 cm/s and dynamic range of more than 15 m/s. These sensors can be used for dynamic flight control of flying robots or for situational awareness in surveillance applications. The core biomimetic technologies developed are applicable to a broad range of sensors and actuators.

  20. Remote wave measurements using autonomous mobile robotic systems

    NASA Astrophysics Data System (ADS)

    Kurkin, Andrey; Zeziulin, Denis; Makarov, Vladimir; Belyakov, Vladimir; Tyugin, Dmitry; Pelinovsky, Efim

    2016-04-01

    The project covers the development of a technology for monitoring and forecasting the state of the coastal zone environment using radar equipment transported by autonomous mobile robotic systems (AMRS). Target areas of application are the eastern and northern coasts of Russia, where continuous collection of information on topographic changes of the coastal zone, and hydrodynamic measurements in environments inaccessible to humans, are needed. The intensity of the wave reflections received by the surveillance radar is directly related to wave height. Mathematical models and algorithms have been developed for processing experimental data (signal selection, spectral analysis, wavelet analysis), for recalculating landwash from offshore wave heights, and for determining threshold values of offshore wave heights. A program complex has been developed for operating the experimental AMRS prototype, comprising the following modules: data loading, reporting, georeferencing, data analysis, monitoring, hardware control, and a graphical user interface. Further work will involve testing the manufactured experimental prototype on selected coastline routes of Sakhalin Island. Field tests will reveal shortcomings of the design and identify ways to optimize the structure and operating algorithms of the AMRS, as well as the operation of the measuring equipment. The presented results were obtained at Nizhny Novgorod State Technical University n.a. R. Alekseev within the framework of the Federal Target Program «Research and development on priority directions of scientific-technological complex of Russia for 2014 - 2020 years» (agreement № 14.574.21.0089 (unique identifier of agreement - RFMEFI57414X0089)).
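
    Wave-height estimation from a recorded elevation time series can be illustrated with the standard spectral definition of significant wave height, Hs = 4*sqrt(m0), where m0 is the zeroth moment of the elevation spectrum (equal, by Parseval's theorem, to the record variance). This is a generic oceanographic formula, not the project's radar-intensity recalculation, which the abstract does not specify in detail.

```python
import numpy as np

def significant_wave_height(eta):
    """Spectral significant wave height Hs = 4*sqrt(m0), where the
    zeroth spectral moment m0 equals the variance of the de-meaned
    surface-elevation record."""
    eta = np.asarray(eta, dtype=float)
    m0 = np.var(eta - np.mean(eta))
    return 4.0 * np.sqrt(m0)
```

    For a pure sinusoid of amplitude a, this gives Hs = 2√2·a, consistent with the usual relation between wave amplitude and significant height for a narrow-band sea.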