Science.gov

Sample records for autonomous vision system

  1. Compact Autonomous Hemispheric Vision System

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.

    2012-01-01

    Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content to control pointing and downlink strategy. For color, a filter wheel with selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
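
    A quick worked check of the azimuth coverage described above (a sketch; the camera count and 92° FOV come from the abstract, while the spacing and overlap figures are derived here assuming evenly spaced cameras):

    ```python
    # Azimuth coverage check for the six-camera ring described above.
    # Assumes the cameras are spaced evenly around the carousel.
    num_cameras = 6
    fov_deg = 92.0
    spacing_deg = 360.0 / num_cameras       # 60 deg between adjacent optical axes
    overlap_deg = fov_deg - spacing_deg     # 32 deg shared by adjacent cameras
    raw_coverage = num_cameras * fov_deg    # 552 deg of raw FOV for 360 deg of azimuth
    print(f"spacing {spacing_deg:.0f} deg, adjacent overlap {overlap_deg:.0f} deg, "
          f"raw coverage {raw_coverage:.0f} deg")
    ```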

  2. Intelligent vision system for autonomous vehicle operations

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1991-01-01

    A complex optical system is described, consisting of a 4f optical correlator with programmable filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management.

  3. New vision system and navigation algorithm for an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Tann, Hokchhay; Shakya, Bicky; Merchen, Alex C.; Williams, Benjamin C.; Khanal, Abhishek; Zhao, Jiajia; Ahlgren, David J.

    2013-12-01

    Improvements were made to the intelligence algorithms of an autonomously operating ground vehicle, Q, which competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle to first navigate between two white lines on a grassy obstacle course, then pass through eight GPS waypoints, and finally traverse an obstacle field. Modifications to Q included a new vision system with a more effective image-processing algorithm for white-line extraction. The path-planning algorithm was adapted to the new vision system, producing smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of over 50 teams.
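
    The team's actual image-processing code is not part of this record; as a hedged illustration of the kind of white-line extraction it describes, the sketch below thresholds a course image in HSV space and fits line segments with a probabilistic Hough transform (OpenCV; all threshold and parameter values are assumptions, not the authors' settings):

    ```python
    import cv2
    import numpy as np

    def extract_white_lines(bgr_frame):
        """Rough sketch of white-line extraction on a grassy course (assumed thresholds)."""
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        # White paint: low saturation, high value. These bounds are illustrative only.
        mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        # Candidate line segments to hand to the path planner.
        lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=60,
                                minLineLength=40, maxLineGap=10)
        return mask, ([] if lines is None else lines.reshape(-1, 4))

    # mask, segments = extract_white_lines(cv2.imread("course_frame.png"))
    ```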

  4. Street Viewer: An Autonomous Vision Based Traffic Tracking System.

    PubMed

    Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano

    2016-06-03

    The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows intensive use of multi-threading and, on the other side, improves the overall accuracy and robustness of the system, since each layer refines the information it receives as input before passing it on to the following layers. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode, where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and from running the system for long periods of time.
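
    The paper's layered pipeline is not reproduced in this record; the sketch below only illustrates the self-adaptive mode switching it describes, i.e. learn a flow model until it stabilizes, count against it on-line, and fall back to learning when the observed flow drifts (the scalar flow representation, thresholds, and drift metric are all assumptions):

    ```python
    class FlowMonitor:
        """Minimal sketch of the learning / on-line mode switching described above."""

        def __init__(self, stable_frames=500, drift_threshold=0.3):
            self.mode = "learning"
            self.model = None                    # learned flow level (representation assumed)
            self.stable_frames = stable_frames
            self.drift_threshold = drift_threshold
            self._frames_in_learning = 0

        def update(self, observed_flow):
            if self.mode == "learning":
                self.model = observed_flow if self.model is None else (
                    0.99 * self.model + 0.01 * observed_flow)   # slow running average
                self._frames_in_learning += 1
                if self._frames_in_learning >= self.stable_frames:
                    self.mode = "online"
            else:
                drift = abs(observed_flow - self.model) / max(self.model, 1e-6)
                if drift > self.drift_threshold:                # flow changed: re-learn
                    self.mode, self._frames_in_learning = "learning", 0
                # ...otherwise count vehicles per lane against the stable model here.
            return self.mode
    ```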

  5. Street Viewer: An Autonomous Vision Based Traffic Tracking System

    PubMed Central

    Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano

    2016-01-01

    The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows intensive use of multi-threading and, on the other side, improves the overall accuracy and robustness of the system, since each layer refines the information it receives as input before passing it on to the following layers. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode, where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and from running the system for long periods of time. PMID:27271627

  6. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    PubMed Central

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-01-01

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments. PMID:27649178
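
    The paper's actual pitch-estimation method is only summarized here; as a generic illustration of recovering camera pitch from stereo data, the sketch below fits a ground plane to 3D road points by least squares and reads the pitch off the plane normal (numpy; the coordinate convention and the absence of robust/RANSAC fitting are simplifying assumptions):

    ```python
    import numpy as np

    def pitch_from_ground_points(points_xyz):
        """Estimate camera pitch (rad) from 3D road-surface points in the camera frame.

        Assumes x right, y down, z forward and that points_xyz (N x 3) contains
        mostly ground points. A plane y = a*x + b*z + c is fitted by least squares;
        its normal gives the pitch about the x axis.
        """
        x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
        A = np.column_stack([x, z, np.ones_like(x)])
        (a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
        normal = np.array([-a, 1.0, -b])
        normal /= np.linalg.norm(normal)
        # Pitch: rotation of the ground normal away from the camera's vertical axis.
        return float(np.arctan2(normal[2], normal[1]))

    # pitch_rad = pitch_from_ground_points(road_points)  # road_points from stereo triangulation
    ```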

  7. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    PubMed

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  8. Autonomous navigation vehicle system based on robot vision and multi-sensor fusion

    NASA Astrophysics Data System (ADS)

    Wu, Lihong; Chen, Yingsong; Cui, Zhouping

    2011-12-01

    The architecture of autonomous navigation vehicle based on robot vision and multi-sensor fusion technology is expatiated in this paper. In order to acquire more intelligence and robustness, accurate real-time collection and processing of information are realized by using this technology. The method to achieve robot vision and multi-sensor fusion is discussed in detail. The results simulated in several operating modes show that this intelligent vehicle has better effects in barrier identification and avoidance and path planning. And this can provide higher reliability during vehicle running.

  9. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    NASA Astrophysics Data System (ADS)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms applied to the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems exclusively based on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm is applied to detect the corners of field lines by using an omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing feature extraction. The process is described as follows: First, radial scan-lines were used to process the omni-directional images, reducing the computational load and improving system efficiency. The lines were arranged radially around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is a distorted image, which makes it difficult to recognize the
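
    The authors' unwrapping step is only described in this record; the sketch below shows the standard polar-to-panoramic remapping such a step typically uses, sampling the omni-directional image along radial lines around the mirror center (OpenCV remap; the center, radii, and output size are placeholder calibration values):

    ```python
    import cv2
    import numpy as np

    def unwrap_omni(image, center, r_min, r_max, out_w=720, out_h=180):
        """Unwrap a catadioptric omni-directional image into a panoramic strip.

        Each output column corresponds to one radial scan-line (angle theta) of the
        source image, each row to a radius between r_min and r_max.
        """
        thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
        radii = np.linspace(r_min, r_max, out_h)
        theta_grid, r_grid = np.meshgrid(thetas, radii)            # (out_h, out_w)
        map_x = (center[0] + r_grid * np.cos(theta_grid)).astype(np.float32)
        map_y = (center[1] + r_grid * np.sin(theta_grid)).astype(np.float32)
        return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

    # panorama = unwrap_omni(omni_frame, center=(320, 240), r_min=40, r_max=230)
    ```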

  10. MMW radar enhanced vision systems: the Helicopter Autonomous Landing System (HALS) and Radar-Enhanced Vision System (REVS) are rotary and fixed wing enhanced flight vision systems that enable safe flight operations in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Cross, Jack; Schneider, John; Cariani, Pete

    2013-05-01

    Sierra Nevada Corporation (SNC) has developed rotary- and fixed-wing millimeter wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar-Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.

  11. Research on an autonomous vision-guided helicopter

    NASA Technical Reports Server (NTRS)

    Amidi, Omead; Mesaki, Yuji; Kanade, Takeo

    1994-01-01

    Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.

  12. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers

    PubMed Central

    Olivares-Mendez, Miguel A.; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F.; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual

    2015-01-01

    Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade of global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods to fight against poachers are not enough, hence the new demand for more efficient approaches. In this context, the use of new technologies in sensors and algorithms, as well as aerial platforms, is crucial to face the sharp increase of poaching activities in the last few years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing. PMID:26703597

  13. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers.

    PubMed

    Olivares-Mendez, Miguel A; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual

    2015-12-12

    Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade of global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods to fight against poachers are not enough, hence the new demand for more efficient approaches. In this context, the use of new technologies in sensors and algorithms, as well as aerial platforms, is crucial to face the sharp increase of poaching activities in the last few years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing.

  14. Design of a dynamic test platform for autonomous robot vision systems

    NASA Technical Reports Server (NTRS)

    Rich, G. C.

    1980-01-01

    The concept and design of a dynamic test platform for development and evaluation of a robot vision system are discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi-laser/multi-detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform. It can then be subjected to a wide variety of simulated motions and can thus be examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process, such as the structure, driving linkages, and motors and transmissions, are treated separately.

  15. Enhanced and synthetic vision system for autonomous all weather approach and landing

    NASA Astrophysics Data System (ADS)

    Korn, Bernd R.

    2007-04-01

    Within its research project ADVISE-PRO (Advanced visual system for situation awareness enhancement - prototype, 2003-2006), which is presented in this contribution, DLR has combined elements of Enhanced Vision and Synthetic Vision into one integrated system that allows all low-visibility operations independently of the infrastructure on the ground. The core element of this system is the adequate fusion of all information that is available on board. This fusion process is organized in a hierarchical manner. The most important subsystems are a) sensor-based navigation, which determines the aircraft's position relative to the runway by automatically analyzing sensor data (MMW, IR, radar altimeter) without using either (D)GPS or precise knowledge of the airport geometry, b) integrity monitoring of navigation and terrain data, which verifies on-board navigation data ((D)GPS + INS) against sensor data (MMW radar, IR sensor, radar altimeter) and airport/terrain databases, c) an obstacle detection system, and finally d) a consistent description of the situation and a corresponding HMI for the pilot.

  16. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  17. The study of stereo vision technique for the autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Li, Pei; Wang, Xi; Wang, Jiang-feng

    2015-08-01

    Stereo vision technology, using two or more cameras, can recover 3D information about the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle to judge the pavement conditions within the field of view and to measure the obstacles on the road. In this paper, stereo vision technology for obstacle measurement and avoidance on the autonomous vehicle is studied, and the key techniques are analyzed and discussed. The system hardware is built, the software is debugged, and the measurement performance is finally illustrated with measured data. Experiments show that the 3D scene within the field of view can be reconstructed effectively by stereo vision technology, providing the basis for pavement condition judgment. Compared with the radar used in unmanned vehicle navigation and measurement systems, the stereo vision system has advantages such as low cost and measuring range, and it has good application prospects.
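
    As a hedged illustration of the stereo measurement pipeline the abstract describes (not the authors' implementation), the sketch below computes a block-matching disparity map from a rectified image pair and converts it to metric depth using an assumed focal length and baseline:

    ```python
    import cv2
    import numpy as np

    def stereo_depth(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
        """Disparity and depth from a rectified stereo pair (parameters are assumptions)."""
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0  # px
        depth = np.zeros_like(disp)
        valid = disp > 0
        depth[valid] = focal_px * baseline_m / disp[valid]          # Z = f * B / d
        return disp, depth

    # left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)
    # right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    # disparity, depth_m = stereo_depth(left, right)
    ```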

  18. A survey of autonomous vision-based See and Avoid for Unmanned Aircraft Systems

    NASA Astrophysics Data System (ADS)

    Mcfadyen, Aaron; Mejias, Luis

    2016-01-01

    This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.

  19. Infrared sensors and systems for enhanced vision/autonomous landing applications

    NASA Technical Reports Server (NTRS)

    Kerr, J. Richard

    1993-01-01

    There exists a large body of data spanning more than two decades, regarding the ability of infrared imagers to 'see' through fog, i.e., in Category III weather conditions. Much of this data is anecdotal, highly specialized, and/or proprietary. In order to determine the efficacy and cost effectiveness of these sensors under a variety of climatic/weather conditions, there is a need for systematic data spanning a significant range of slant-path scenarios. These data should include simultaneous video recordings at visible, midwave (3-5 microns), and longwave (8-12 microns) wavelengths, with airborne weather pods that include the capability of determining the fog droplet size distributions. Existing data tend to show that infrared is more effective than would be expected from analysis and modeling. It is particularly more effective for inland (radiation) fog as compared to coastal (advection) fog, although both of these archetypes are oversimplifications. In addition, as would be expected from droplet size vs wavelength considerations, longwave outperforms midwave, in many cases by very substantial margins. Longwave also benefits from the higher level of available thermal energy at ambient temperatures. The principal attraction of midwave sensors is that staring focal plane technology is available at attractive cost-performance levels. However, longwave technology such as that developed at FLIR Systems, Inc. (FSI), has achieved high performance in small, economical, reliable imagers utilizing serial-parallel scanning techniques. In addition, FSI has developed dual-waveband systems particularly suited for enhanced vision flight testing. These systems include a substantial, embedded processing capability which can perform video-rate image enhancement and multisensor fusion. This is achieved with proprietary algorithms and includes such operations as real-time histograms, convolutions, and fast Fourier transforms.

  1. Insect-Based Vision for Autonomous Vehicles: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Srinivasan, Mandyam V.

    1999-01-01

    The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.

  2. Real-time performance of a hands-free semi-autonomous wheelchair system using a combination of stereoscopic and spherical vision.

    PubMed

    Nguyen, Jordan S; Nguyen, Tuan Nghia; Tran, Yvonne; Su, Steven W; Craig, Ashley; Nguyen, Hung T

    2012-01-01

    This paper is concerned with the operational performance of a semi-autonomous wheelchair system named TIM (Thought-controlled Intelligent Machine), which uses cameras in a system configuration modeled on the vision system of a horse. This new camera configuration utilizes stereoscopic vision for 3-Dimensional (3D) depth perception and mapping ahead of the wheelchair, combined with a spherical camera system for 360 degrees of monocular vision. The unique combination allows static components of an unknown environment to be mapped and any surrounding dynamic obstacles to be detected during real-time autonomous navigation, minimizing blind spots and preventing accidental collisions with people or obstacles. Combining this vision system with a shared control strategy provides intelligent assistive guidance during wheelchair navigation, and can accompany any hands-free wheelchair control technology for people with severe physical disability. Testing of this system in crowded dynamic environments has demonstrated the feasibility and real-time performance of the system when assisting hands-free control technologies, in this case a proof-of-concept brain-computer interface (BCI).

  3. Laboratory Experimentation of Autonomous Spacecraft Docking Using Cooperative Vision Navigation

    DTIC Science & Technology

    2005-12-01

    Report documentation page fragment. Thesis by David A. Friedman, December 2005; distribution is unlimited. Abstract excerpt (truncated): on-orbit, autonomous docking and spacecraft servicing are key areas ...

  4. INL Autonomous Navigation System

    SciTech Connect

    2005-03-30

    The INL Autonomous Navigation System provides instructions for autonomously navigating a robot. The system permits high-speed autonomous navigation including obstacle avoidance, waypoint navigation, and path planning in both indoor and outdoor environments.

  5. Autonomic Nervous System Disorders

    MedlinePlus

    Your autonomic nervous system is the part of your nervous system that controls involuntary actions, such as the beating of your heart ... breathing and swallowing ... erectile dysfunction in men ... Autonomic nervous system disorders can occur alone or as the result of ...

  6. Three-dimensional vision sensors for autonomous robots

    NASA Astrophysics Data System (ADS)

    Uchiyama, Takashi; Okabayashi, Keizyu; Wakitani, Jun

    1993-09-01

    A three-dimensional measurement system, which is important for developing autonomous robots, is described. Industrial robots used in today's plants are of the preprogrammed teaching-playback type. It is necessary to develop autonomous robots that can work based on sensor information for intelligent manufacturing systems. Moreover, practical use of robots that work in unstructured environments such as outdoors and in space is expected. To realize this, a function to measure objects and the environment three-dimensionally is a key technology. Additional important requirements for robotic sensors are real-time processing and compactness. We have developed smart 3-D vision sensors for the purpose of realizing autonomous robots. These are two kinds of sensors with different functions corresponding to the application. One is a slitted-light range finder (SLRF) to measure stationary objects. The other is a real-time tracking vision (RTTV) system that can measure moving objects at high speed. The SLRF uses multiple slitted lights that are generated by a semiconductor laser through an interference filter and a cylindrical lens. Furthermore, we developed a liquid crystal shutter with multiple electrodes. We devised a technique to make coded slitted light by putting this shutter in front of the light source. As a result, using the principle of triangulation, objects can be measured in three dimensions. In addition, high-speed image input was enabled by projecting multiple slitted lights at the same time. We have confirmed the effectiveness of the SLRF applied to a hand-eye system using a robot.

  7. Vision-Based Autonomous Sensor-Tasking in Uncertain Adversarial Environments

    DTIC Science & Technology

    2015-01-02

    Report text fragment; recoverable abstract excerpt: ... forecast activities, and analyze complex scenes with multiple interacting entities. Specific applications include autonomous aerial surveillance systems that cover broad areas of military operations, camera security systems that cover ...

  8. Coherent laser vision system

    SciTech Connect

    Sebastion, R.L.

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semi-autonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  9. Autonomous robot vision software design using Matlab toolboxes

    NASA Astrophysics Data System (ADS)

    Tedder, Maurice; Chung, Chan-Jin

    2004-10-01

    The purpose of this paper is to introduce a cost-effective way to design robot vision and control software using Matlab for an autonomous robot designed to compete in the 2004 Intelligent Ground Vehicle Competition (IGVC). The goal of the autonomous challenge event is for the robot to autonomously navigate an outdoor obstacle course bounded by solid and dashed lines on the ground. Visual input data is provided by a DV camcorder at 160 x 120 pixel resolution. The design of this system involved writing an image-processing algorithm using hue, saturation, and brightness (HSB) color filtering and Matlab image processing functions to extract the centroid, area, and orientation of the connected regions from the scene. These feature vectors are then mapped to linguistic variables that describe the objects in the world environment model. The linguistic variables act as inputs to a fuzzy logic controller designed using the Matlab fuzzy logic toolbox, which provides the knowledge and intelligence component necessary to achieve the desired goal. Java provides the central interface to the robot motion control and image acquisition components. Field test results indicate that the Matlab-based solution allows for rapid software design, development, and modification of our robot system.
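
    The team's Matlab code is not included in this record; the sketch below reproduces the same idea, HSB-style colour filtering followed by centroid/area/orientation extraction for connected regions, in Python with OpenCV as a stand-in for the Matlab toolbox calls (thresholds and the minimum-area cut-off are assumptions):

    ```python
    import cv2
    import numpy as np

    def region_features(bgr_frame, lower_hsv=(0, 0, 200), upper_hsv=(180, 50, 255)):
        """Return (centroid, area, orientation_deg) for each bright connected region."""
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower_hsv, upper_hsv)              # colour filter
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        features = []
        for c in contours:
            area = cv2.contourArea(c)
            if area < 50 or len(c) < 5:                            # fitEllipse needs >= 5 points
                continue
            m = cv2.moments(c)
            centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
            (_, _), (_, _), angle = cv2.fitEllipse(c)              # orientation in degrees
            features.append((centroid, area, angle))
        return features   # map these to linguistic variables for the fuzzy controller
    ```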

  10. Computer vision sensor for autonomous helicopter hover stabilization

    NASA Astrophysics Data System (ADS)

    Oertel, Carl-Henrik

    1997-06-01

    Sensors for synthetic vision are needed to extend the mission profiles of helicopters. A special task for various applications is the autonomous position hold of a helicopter above a ground fixed or moving target. A computer-vision based system, which is able to observe the helicopter flight state during hover and low speed, based on the detection and tracking of significant but arbitrary features, has been developed by the Institute of Flight Mechanics of DLR Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. The approach is as follows: A CCD camera looks straight downward to the ground and produces an image of the ground view. The digitized video signal is fed into a high performance on-board computer which looks for distinctive features in the image. Any motion of the helicopter results in movements of these patterns in the camera image. By tracking the distinctive features during the succession of incoming images and by the support of inertial sensor data, it is possible to calculate all necessary helicopter state variables, which are needed for a position hold control algorithm. This information is gained from a state variable observer. That means that no additional information about the appearance of the camera view has to be known in advance to achieve autonomous helicopter hover stabilization. The hardware architecture for this image evaluation system mainly consists of several PowerPC processors which communicate with the aid of transputers and an image distribution bus. Feature tracking is performed by a dedicated 2D-correlator subsystem. The paper presents the characteristics of the computer vision sensor and demonstrates its functionality.
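
    DLR's on-board implementation is not shown in this record; as a generic sketch of the feature-tracking step it describes, the code below detects distinctive ground features and tracks them with pyramidal Lucas-Kanade optical flow, yielding the image-plane drift that a position-hold controller (together with inertial data) would act on (OpenCV; all parameters are assumptions):

    ```python
    import cv2
    import numpy as np

    def track_ground_features(prev_gray, curr_gray, prev_pts=None):
        """Track arbitrary ground features; return (tracked_points, mean_drift_px)."""
        if prev_pts is None or len(prev_pts) < 20:
            prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                               qualityLevel=0.01, minDistance=10)
        curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None,
                                                       winSize=(21, 21), maxLevel=3)
        good = status.ravel() == 1
        drift = (np.mean(curr_pts[good] - prev_pts[good], axis=0).ravel()
                 if good.any() else np.zeros(2))
        return curr_pts[good], drift   # drift feeds the position-hold control loop
    ```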

  11. An autonomous vision-based mobile robot

    NASA Astrophysics Data System (ADS)

    Baumgartner, Eric Thomas

    This dissertation describes estimation and control methods for use in the development of an autonomous mobile robot for structured environments. The navigation of the mobile robot is based on precise estimates of the position and orientation of the robot within its environment. The extended Kalman filter algorithm is used to combine information from the robot's drive wheels with periodic observations of small, wall-mounted, visual cues to produce the precise position and orientation estimates. The visual cues are reliably detected by at least one video camera mounted on the mobile robot. Typical position estimates are accurate to within one inch. A path tracking algorithm is also developed to follow desired reference paths which are taught by a human operator. Because of the time-independence of the tracking algorithm, the speed that the vehicle travels along the reference path is specified independent from the tracking algorithm. The estimation and control methods have been applied successfully to two experimental vehicle systems. Finally, an analysis of the linearized closed-loop control system is performed to study the behavior and the stability of the system as a function of various control parameters.
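
    The dissertation's filter is not reproduced here; the sketch below is a minimal planar EKF of the same flavour, predicting the (x, y, heading) pose from wheel odometry and correcting it with a range/bearing observation of a wall-mounted cue at a known position (numpy; the noise covariances and measurement model are assumptions):

    ```python
    import numpy as np

    def ekf_step(x, P, odom, cue_xy, z,
                 Q=np.diag([1e-4, 1e-4, 1e-5]), R=np.diag([1e-3, 1e-3])):
        """One EKF predict/update for state x = [px, py, theta] (m, m, rad)."""
        d, dtheta = odom                                    # wheel-odometry increments
        px, py, th = x
        # Predict from odometry.
        x = np.array([px + d * np.cos(th), py + d * np.sin(th), th + dtheta])
        F = np.array([[1, 0, -d * np.sin(th)],
                      [0, 1,  d * np.cos(th)],
                      [0, 0,  1]])
        P = F @ P @ F.T + Q
        # Update with a range/bearing measurement z of the cue at cue_xy.
        dx, dy = cue_xy[0] - x[0], cue_xy[1] - x[1]
        r = np.hypot(dx, dy)
        h = np.array([r, np.arctan2(dy, dx) - x[2]])
        H = np.array([[-dx / r,      -dy / r,      0],
                      [ dy / r**2,   -dx / r**2,  -1]])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        innov = z - h
        innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing innovation
        return x + K @ innov, (np.eye(3) - K @ H) @ P
    ```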

  12. A Robot Vision System.

    DTIC Science & Technology

    1985-12-01

    Report fragment; recoverable text: This project includes the design and implementation of a vision-based goal achievement system. ... Stereo vision is useless beyond about 15 feet for the camera separation of 0.75 feet ... Such monocular vision and modelling, duplicated for two cameras, would give a second source of model data for resolving ambiguities ...

  13. Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick

    2012-01-01

    Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.

  14. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating resolution issues found in digital vision systems. In this paper, we will discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We will also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. This paper will conclude with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.

  15. FPGA implementation of vision algorithms for small autonomous robots

    NASA Astrophysics Data System (ADS)

    Anderson, J. D.; Lee, D. J.; Archibald, J. K.

    2005-10-01

    The use of on-board vision with small autonomous robots has been made possible by the advances in the field of Field Programmable Gate Array (FPGA) technology. By connecting a CMOS camera to an FPGA board, on-board vision has been used to reduce the computation time inherent in vision algorithms. The FPGA board allows the user to create custom hardware in a faster, safer, and more easily verifiable manner that decreases the computation time and allows the vision to be done in real-time. Real-time vision tasks for small autonomous robots include object tracking, obstacle detection and avoidance, and path planning. Competitions were created to demonstrate that our algorithms work with our small autonomous vehicles in dealing with these problems. These competitions include Mouse-Trapped-in-a-Box, where the robot has to detect the edges of a box that it is trapped in and move towards them without touching them; Obstacle Avoidance, where an obstacle is placed at any arbitrary point in front of the robot and the robot has to navigate itself around the obstacle; Canyon Following, where the robot has to move to the center of a canyon and follow the canyon walls trying to stay in the center; the Grand Challenge, where the robot had to navigate a hallway and return to its original position in a given amount of time; and Stereo Vision, where a separate robot had to catch tennis balls launched from an air powered cannon. Teams competed on each of these competitions that were designed for a graduate-level robotic vision class, and each team had to develop their own algorithm and hardware components. This paper discusses one team's approach to each of these problems.

  16. Autonomous Flight Safety System

    NASA Technical Reports Server (NTRS)

    Simpson, James

    2010-01-01

    The Autonomous Flight Safety System (AFSS) is an independent, self-contained subsystem mounted onboard a launch vehicle. AFSS has been developed by and is owned by the US Government. It autonomously makes flight termination/destruct decisions using configurable software-based rules implemented on redundant flight processors using data from redundant GPS/IMU navigation sensors. AFSS implements rules determined by the appropriate Range Safety officials.

  17. Towards an Autonomic Cluster Management System (ACMS) with Reflex Autonomicity

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Hinchey, Mike; Sterritt, Roy

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of providing a fault-tolerant environment and achieving significant computational capabilities for high-performance computing applications. However, the task of manually managing and configuring a cluster quickly becomes daunting as the cluster grows in size. Autonomic computing, with its vision to provide self-management, can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management and its evolution to include reflex reactions via pulse monitoring.

  18. Architecture of autonomous systems

    NASA Technical Reports Server (NTRS)

    Dikshit, Piyush; Guimaraes, Katia; Ramamurthy, Maya; Agrawala, Ashok; Larsen, Ronald L.

    1986-01-01

    Automation of Space Station functions and activities, particularly those involving robotic capabilities with interactive or supervisory human control, is a complex, multi-disciplinary systems design problem. A wide variety of applications using autonomous control can be found in the literature, but none of them seem to address the problem in general. All of them are designed with a specific application in mind. In this report, an abstract model is described which unifies the key concepts underlying the design of automated systems such as those studied by the aerospace contractors. The model has been kept as general as possible. The attempt is to capture all the key components of autonomous systems. With a little effort, it should be possible to map the functions of any specific autonomous system application to the model presented here.

  19. Architecture of autonomous systems

    NASA Technical Reports Server (NTRS)

    Dikshit, Piyush; Guimaraes, Katia; Ramamurthy, Maya; Agrawala, Ashok; Larsen, Ronald L.

    1989-01-01

    Automation of Space Station functions and activities, particularly those involving robotic capabilities with interactive or supervisory human control, is a complex, multi-disciplinary systems design problem. A wide variety of applications using autonomous control can be found in the literature, but none of them seem to address the problem in general. All of them are designed with a specific application in mind. In this report, an abstract model is described which unifies the key concepts underlying the design of automated systems such as those studied by the aerospace contractors. The model has been kept as general as possible. The attempt is to capture all the key components of autonomous systems. With a little effort, it should be possible to map the functions of any specific autonomous system application to the model presented here.

  20. Performance evaluation of 3D vision-based semi-autonomous control method for assistive robotic manipulator.

    PubMed

    Ka, Hyun W; Chung, Cheng-Shiu; Ding, Dan; James, Khara; Cooper, Rory

    2017-03-22

    We developed a 3D vision-based semi-autonomous control interface for assistive robotic manipulators. It was implemented based on one of the most popular commercially available assistive robotic manipulators combined with a low-cost depth-sensing camera mounted on the robot base. To perform a manipulation task with the 3D vision-based semi-autonomous control interface, a user starts operating with a manual control method available to him/her. When detecting objects within a set range, the control interface automatically stops the robot and provides the user with possible manipulation options through audible text output, based on the detected object characteristics. Then, the system waits until the user states a voice command. Once the user command is given, the control interface drives the robot autonomously until the given command is completed. In the empirical evaluations conducted with human subjects from two different groups, it was shown that the semi-autonomous control can be used as an alternative control method to enable individuals with impaired motor control to more efficiently operate the robot arms by facilitating their fine motion control. The advantage of semi-autonomous control was not so obvious for the simple tasks, but for the relatively complex real-life tasks the 3D vision-based semi-autonomous control showed significantly faster performance. Implications for Rehabilitation: A 3D vision-based semi-autonomous control interface will improve clinical practice by providing an alternative control method that is less demanding physically as well as cognitively. A 3D vision-based semi-autonomous control provides the user with task-specific intelligent semi-autonomous manipulation assistance. A 3D vision-based semi-autonomous control gives the user the feeling that he or she is still in control at any moment. A 3D vision-based semi-autonomous control is compatible with different types of new and existing manual control methods for ARMs.

  1. Autonomous Vision-Based Tethered-Assisted Rover Docking

    NASA Technical Reports Server (NTRS)

    Tsai, Dorian; Nesnas, Issa A.D.; Zarzhitsky, Dimitri

    2013-01-01

    Many intriguing science discoveries on planetary surfaces, such as the seasonal flows on crater walls and skylight entrances to lava tubes, are at sites that are currently inaccessible to state-of-the-art rovers. The in situ exploration of such sites is likely to require a tethered platform both for mechanical support and for providing power and communication. Mother/daughter architectures have been investigated where a mother deploys a tethered daughter into extreme terrains. Deploying and retracting a tethered daughter requires undocking and re-docking of the daughter to the mother, with the latter being the challenging part. In this paper, we describe a vision-based tether-assisted algorithm for the autonomous re-docking of a daughter to its mother following an extreme terrain excursion. The algorithm uses fiducials mounted on the mother to improve the reliability and accuracy of estimating the pose of the mother relative to the daughter. The tether that is anchored by the mother helps the docking process and increases the system's tolerance to pose uncertainties by mechanically aligning the mating parts in the final docking phase. A preliminary version of the algorithm was developed and field-tested on the Axel rover in the JPL Mars Yard. The algorithm achieved an 80% success rate in 40 experiments in both firm and loose soils and starting from up to 6 m away at up to 40 deg radial angle and 20 deg relative heading. The algorithm does not rely on an initial estimate of the relative pose. The preliminary results are promising and help retire the risk associated with the autonomous docking process enabling consideration in future martian and lunar missions.
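
    The Axel fiducial software is not part of this record; as a hedged sketch of the fiducial-based relative-pose step it describes, the code below recovers the pose of a square fiducial of known size from its four detected image corners with a PnP solve (OpenCV; the marker size, camera intrinsics, and detected corners are placeholders supplied by the caller):

    ```python
    import cv2
    import numpy as np

    def pose_from_fiducial(corners_2d, marker_size_m, camera_matrix, dist_coeffs):
        """Relative pose (rvec, tvec) of a square fiducial from its four image corners.

        corners_2d: 4x2 float array ordered top-left, top-right, bottom-right,
        bottom-left (an ArUco/AprilTag-style detector would normally supply these).
        """
        s = marker_size_m / 2.0
        object_pts = np.array([[-s,  s, 0], [ s,  s, 0],
                               [ s, -s, 0], [-s, -s, 0]], dtype=np.float32)
        ok, rvec, tvec = cv2.solvePnP(object_pts, corners_2d.astype(np.float32),
                                      camera_matrix, dist_coeffs)
        if not ok:
            raise RuntimeError("PnP failed")
        return rvec, tvec    # tvec gives the range/offset to the docking target in metres
    ```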

  2. Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  3. An Improved Vision-based Algorithm for Unmanned Aerial Vehicles Autonomous Landing

    NASA Astrophysics Data System (ADS)

    Zhao, Yunji; Pei, Hailong

    In a vision-based autonomous landing system for a UAV, the efficiency of target detection and tracking directly affects the control system. An improved SURF (Speeded-Up Robust Features) algorithm resolves the inefficiency of the standard SURF algorithm in the autonomous landing system. The improved algorithm is composed of three steps: first, detect the target region using CamShift; second, detect feature points in the acquired region using the SURF algorithm; third, match the template target against the target region in each frame. Experimental results and theoretical analysis confirm the efficiency of the algorithm.
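
    The authors' improved-SURF pipeline is only summarized in this record; the sketch below follows the same three steps with CamShift restricting the search region and ORB features standing in for SURF (SURF itself lives in OpenCV's non-free contrib module), so the detector choice and all parameters are assumptions:

    ```python
    import cv2
    import numpy as np

    orb = cv2.ORB_create(nfeatures=500)                     # stand-in for SURF
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def locate_target(frame_bgr, roi_hist, track_window, tmpl_desc):
        """1) CamShift narrows the region; 2) detect features there; 3) match them
        against the landing-target template descriptors."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        _, track_window = cv2.CamShift(backproj, track_window, criteria)
        x, y, w, h = track_window
        if w == 0 or h == 0:
            return track_window, [], []
        region = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        kps, desc = orb.detectAndCompute(region, None)
        matches = [] if desc is None else matcher.match(tmpl_desc, desc)
        return track_window, kps, matches
    ```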

  4. Micro autonomous robotic system

    NASA Astrophysics Data System (ADS)

    Ishihara, Hidenori; Fukuda, Toshio

    1995-12-01

    This paper proposes a structure for a micro autonomous robotic system and presents the design of a prototype. We aim to develop a micro robot that acts autonomously based on its own sensing, in order to propose a solution for constituting a micro autonomous robotic system. However, as the size is miniaturized, the number of sensors becomes restricted and the information they provide becomes insufficient. This lack of information makes it difficult to realize high-quality intelligence, so the micro robotic system needs simple algorithms. In this paper, we propose simple logical algorithms to control the actuators and show the performance of the micro robots controlled by them: the Micro Line Trace Robot, about 1 cm on a side, which moves along a black line on a white ground, and the programmable micro autonomous robot, about 2 cm on a side, which performs according to an optionally loaded program.

  5. Autonomous Robotic Following Using Vision Based Techniques

    DTIC Science & Technology

    2005-02-03

    Report text fragment; recoverable excerpt: different methods for the soldier's control of the vehicle are being investigated. One such method is the Leader-Follower approach. ... One of the leaders in this field is the RF ATD (Robotic Follower Advanced ...). ... Both of these platforms are representative of the state of the art of current leader-follower technology being tested by ...

  6. Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision

    NASA Technical Reports Server (NTRS)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-01-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
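
    The JPL implementation is not included in this record; as a generic sketch of the planar-homography step the abstract mentions, the code below estimates a homography between tracked scene points in two frames and decomposes it into candidate rotations, translations, and plane normals (OpenCV; the camera matrix and point sets are placeholders, and pruning the returned solutions is left out):

    ```python
    import cv2
    import numpy as np

    def homography_motion(pts_prev, pts_curr, camera_matrix):
        """Candidate camera motions w.r.t. a planar scene from matched image points.

        pts_prev / pts_curr: Nx2 float arrays of corresponding points on the
        (assumed planar) landing surface or wall around the ingress opening.
        """
        H, inliers = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)
        if H is None:
            return None
        n, rotations, translations, normals = cv2.decomposeHomographyMat(H, camera_matrix)
        # Up to four (R, t, n) candidates come back; a real system prunes them using
        # cheirality and the approximately known plane orientation.
        return list(zip(rotations, translations, normals)), inliers
    ```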

  7. Nemesis Autonomous Test System

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin J.; Lee, Cin-Young; Horvath, Gregory A,; Clement, Bradley J.

    2012-01-01

    A generalized framework has been developed for systems validation that can be applied to both traditional and autonomous systems. The framework consists of an automated test case generation and execution system called Nemesis that rapidly and thoroughly identifies flaws or vulnerabilities within a system. By applying genetic optimization and goal-seeking algorithms on the test equipment side, a "war game" is conducted between a system and its complementary nemesis. The end result of the war games is a collection of scenarios that reveals any undesirable behaviors of the system under test. The software provides a reusable framework to evolve test scenarios using genetic algorithms and an operational model of the system under test. It can automatically generate and execute test cases that reveal flaws in behaviorally complex systems. Genetic algorithms focus the exploration of tests on the set of test cases that most effectively reveals the flaws and vulnerabilities of the system under test. The framework leverages advances in state- and model-based engineering, which are essential in defining the behavior of autonomous systems. It also uses goal networks to describe test scenarios.

  8. Merged Vision and GPS Control of a Semi-Autonomous, Small Helicopter

    NASA Technical Reports Server (NTRS)

    Rock, Stephen M.

    1999-01-01

    This final report documents the activities performed during the research period from April 1, 1996 to September 30, 1997. It contains three papers: Carrier Phase GPS and Computer Vision for Control of an Autonomous Helicopter; A Contestant in the 1997 International Aerospace Robotics Laboratory Stanford University; and Combined CDGPS and Vision-Based Control of a Small Autonomous Helicopter.

  9. Autonomous Flight Safety System

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Santuro, Steve; Simpson, James; Zoerner, Roger; Bull, Barton; Lanzi, Jim

    2004-01-01

    Autonomous Flight Safety System (AFSS) is an independent flight safety system designed for small- to medium-sized expendable launch vehicles launching from, or needing range safety protection while overflying, relatively remote locations. AFSS replaces the need for a man-in-the-loop to make decisions for flight termination. AFSS could also serve as the prototype for an autonomous manned flight crew escape advisory system. AFSS utilizes onboard sensors and processors to emulate the human decision-making process using rule-based software logic and can dramatically reduce safety response time during critical launch phases. The Range Safety flight path nominal trajectory, its deviation allowances, limit zones, and other flight safety rules are stored in the onboard computers. Position, velocity, and attitude data obtained from onboard global positioning system (GPS) and inertial navigation system (INS) sensors are compared with these rules to determine the appropriate action to ensure that people and property are not jeopardized. The final system will be fully redundant and independent, with multiple processors, sensors, and dead-man switches to prevent inadvertent flight termination. AFSS is currently in Phase III, which includes updated algorithms, integrated GPS/INS sensors, large-scale simulation testing, and initial aircraft flight testing.
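
    The actual Range Safety rule set is not given in this record; the sketch below only illustrates the kind of rule-based check the abstract describes, comparing a fused GPS/INS state against stored limit zones and deviation allowances and requiring agreement between redundant strings before termination (all rule values and field names are invented placeholders):

    ```python
    from dataclasses import dataclass

    @dataclass
    class SafetyRules:
        """Illustrative stand-in for the stored flight-safety rules (placeholder values)."""
        max_crossrange_m: float = 5_000.0   # allowed deviation from the nominal trajectory
        ceiling_m: float = 60_000.0         # altitude limit zone
        max_speed_mps: float = 2_500.0

    def rules_violated(state, rules):
        """True if one navigation string reports a state that breaks any stored rule."""
        return (abs(state["crossrange_m"]) > rules.max_crossrange_m
                or state["altitude_m"] > rules.ceiling_m
                or state["speed_mps"] > rules.max_speed_mps)

    def terminate(states_from_redundant_strings, rules, votes_required=2):
        """Require agreement between redundant GPS/IMU + processor strings."""
        votes = sum(rules_violated(s, rules) for s in states_from_redundant_strings)
        return votes >= votes_required
    ```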

  10. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps only second to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development both in academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - the external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved and many practical systems are already in place in the food industry.

  11. Visual navigation system for autonomous indoor blimps

    NASA Astrophysics Data System (ADS)

    Campos, Mario F.; de Souza Coelho, Lucio

    1999-07-01

    Autonomous dirigibles - aerial robots consisting of a blimp controlled by a computer based on information gathered by sensors - are a new and promising research field in Robotics, offering several original work opportunities. One of them is the study of visual navigation of UAVs. In the work described in this paper, a Computer Vision and Control system was developed to automatically perform very simple navigation tasks for a small indoor blimp. The vision system is able to track artificial visual beacons - objects with known geometrical properties - and from them a geometrical methodology can extract information about the orientation of the blimp. The tracking of natural landmarks is also a possibility for the vision technique developed. The control system uses that data to keep the dirigible on a programmed orientation. Experimental results demonstrating the correct and efficient functioning of the system are presented, and their implications and future possibilities are discussed.

  12. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    NASA Astrophysics Data System (ADS)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using an integrated photogrammetry and extended Kalman filter (EKF) algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions in the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
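
    A hedged sketch of one incremental step of the kind of vision-servo loop described above: rather than solving the full inverse kinematics, the joints are moved a small, speed-limited amount toward the predicted end-effector target each cycle. The kinematics and Jacobian functions are placeholders for a specific manipulator model.

```python
# Hedged sketch of one incremental inverse-kinematics step; forward_kinematics
# and jacobian are placeholders for a specific manipulator model.
import numpy as np

def incremental_ik_step(q, x_desired, forward_kinematics, jacobian,
                        dt=0.02, qdot_max=0.5):
    """q: current joint angles; x_desired: predicted end-effector target position."""
    dx = x_desired - forward_kinematics(q)        # Cartesian error this cycle
    dq = np.linalg.pinv(jacobian(q)) @ dx         # pseudo-inverse joint increment
    qdot = np.clip(dq / dt, -qdot_max, qdot_max)  # enforce joint speed limits
    return q + qdot * dt                          # next commanded configuration
```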

  13. Bird Vision System

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Bird Vision system is a multicamera photogrammetry software application that runs on a Microsoft Windows XP platform and was developed at Kennedy Space Center by ASRC Aerospace. This software system collects data about the locations of birds within a volume centered on the Space Shuttle and transmits it in real time to the laptop computer of a test director in the Launch Control Center (LCC) Firing Room.

  14. Space environment robot vision system

    NASA Technical Reports Server (NTRS)

    Wood, H. John; Eichhorn, William L.

    1990-01-01

    A prototype twin-camera stereo vision system for autonomous robots has been developed at Goddard Space Flight Center. Standard charge coupled device (CCD) imagers are interfaced with commercial frame buffers and direct memory access to a computer. The overlapping portions of the images are analyzed using photogrammetric techniques to obtain information about the position and orientation of objects in the scene. The camera head consists of two 510 x 492 x 8-bit CCD cameras mounted on individually adjustable mounts. The 16 mm efl lenses are designed for minimum geometric distortion. The cameras can be rotated in the pitch, roll, and yaw (pan angle) directions with respect to their optical axes. Calibration routines have been developed which automatically determine the lens focal lengths and pan angle between the two cameras. The calibration utilizes observations of a calibration structure with known geometry. Test results show the precision attainable is plus or minus 0.8 mm in range at 2 m distance using a camera separation of 171 mm. To demonstrate a task needed on Space Station Freedom, a target structure with a movable I beam was built. The camera head can autonomously direct actuators to dock the I-beam to another one so that they could be bolted together.

  15. Intelligent Mobile Autonomous System

    DTIC Science & Technology

    1987-01-01

    ...jerk application. (c) Negative jerk application. Group (a): application of positive jerk; force is increased from its initial value to the force of resistance... fundamentals of the new emerging area of autonomous robotics. The goal of this research is to develop a theory of design and functioning of Intelligent... scientific research. This report contributes to a new, rapidly developing area of autonomous robotics. Actual experience of dealing with autonomous robots (or...

  16. Near real-time stereo vision system

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
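
    A simplified stand-in (not the patented implementation) for the pipeline the abstract describes: build one Laplacian (bandpass) pyramid level with OpenCV and compute a coarse disparity map by sum-of-squared-differences correlation at that level; the Bayesian confidence estimation step is omitted.

```python
# Simplified stand-in: one Laplacian pyramid level plus SSD disparity search.
import cv2
import numpy as np

def laplacian_level(img, level=3):
    pyr = [img.astype(np.float32)]
    for _ in range(level + 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    up = cv2.pyrUp(pyr[level + 1],
                   dstsize=(pyr[level].shape[1], pyr[level].shape[0]))
    return pyr[level] - up                        # bandpass (Laplacian) level

def ssd_disparity(left_lap, right_lap, max_disp=16, win=5):
    h, w = left_lap.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left_lap[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.sum((patch - right_lap[y - half:y + half + 1,
                                               x - d - half:x - d + half + 1]) ** 2)
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))    # best-matching disparity
    return disp
```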

  17. Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks

    NASA Astrophysics Data System (ADS)

    DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.

    2017-03-01

    By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.
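
    A hedged sketch of the general approach described above: local feature detection and description, a bag-of-visual-words image representation, and a conventional classifier. The specific choices here (ORB features, k-means vocabulary, linear SVM) are illustrative stand-ins, not necessarily the authors' exact pipeline.

```python
# Illustrative bag-of-visual-words pipeline for powder micrograph classification;
# ORB + k-means + linear SVM are stand-ins, not the authors' exact feature set.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def bovw_histograms(images, n_words=64):
    orb = cv2.ORB_create(500)
    descs = [orb.detectAndCompute(img, None)[1] for img in images]
    stacked = np.vstack([d for d in descs if d is not None]).astype(np.float32)
    vocab = KMeans(n_clusters=n_words, n_init=10).fit(stacked)
    hists = []
    for d in descs:
        words = vocab.predict(d.astype(np.float32)) if d is not None else []
        hist, _ = np.histogram(words, bins=n_words, range=(0, n_words))
        hists.append(hist / max(hist.sum(), 1))   # normalized word histogram
    return np.array(hists), vocab

# Usage sketch: X, vocab = bovw_histograms(train_micrographs)
#               clf = LinearSVC().fit(X, material_labels)
```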

  18. Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks

    NASA Astrophysics Data System (ADS)

    DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.

    2016-12-01

    By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.

  19. Cybersecurity for aerospace autonomous systems

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    High profile breaches have occurred across numerous information systems. One area where attacks are particularly problematic is autonomous control systems. This paper considers the aerospace information system, focusing on elements that interact with autonomous control systems (e.g., onboard UAVs). It discusses the trust placed in the autonomous systems and supporting systems (e.g., navigational aids) and how this trust can be validated. Approaches to remotely detecting UAV compromise, without relying on the onboard software of a potentially compromised system as part of the process, are discussed. How different levels of autonomy (task-based, goal-based, mission-based) impact this remote characterization is considered.

  20. Industrial robot's vision systems

    NASA Astrophysics Data System (ADS)

    Iureva, Radda A.; Raskin, Evgeni O.; Komarov, Igor I.; Maltseva, Nadezhda K.; Fedosovsky, Michael E.

    2016-03-01

    Due to the improved economic situation in the high-technology sectors, work on the creation of industrial robots and special mobile robotic systems has resumed. Despite this, robotic control systems have mostly remained unchanged, so their advantages and disadvantages are now apparent. This is largely due to a lack of funding for machine vision, which could greatly facilitate the work of the operator and, in some cases, completely replace it. The paper is concerned with the complex machine vision of a robotic system for monitoring underground pipelines, which collects and analyzes up to 90% of the necessary information. Vision systems are used to identify obstacles to movement along a trajectory and to determine their origin, dimensions and character. The object is illuminated with structured light, and a TV camera records the projected structure. Distortions of the structure uniquely determine the shape of the object in the camera's view. The reference illumination is synchronized with the camera. The main parameters of the system are the baseline distance between the light projector and the camera and the parallax angle (the angle between the optical axes of the projection unit and the camera).

  1. Autonomous power system brassboard

    NASA Technical Reports Server (NTRS)

    Merolla, Anthony

    1992-01-01

    The Autonomous Power System (APS) brassboard is a 20 kHz power distribution system which has been developed at NASA Lewis Research Center, Cleveland, Ohio. The brassboard exists to provide a realistic hardware platform capable of testing artificially intelligent (AI) software. The brassboard's power circuit topology is based upon a Power Distribution Control Unit (PDCU), which is a subset of an advanced development 20 kHz electrical power system (EPS) testbed, originally designed for Space Station Freedom (SSF). The APS program is designed to demonstrate the application of intelligent software as a fault detection, isolation, and recovery methodology for space power systems. This report discusses both the hardware and software elements used to construct the present configuration of the brassboard. The brassboard power components are described. These include the solid-state switches (herein referred to as switchgear), transformers, sources, and loads. Closely linked to this power portion of the brassboard is the first level of embedded control. Hardware used to implement this control and its associated software is discussed. An Ada software program, developed by Lewis Research Center's Space Station Freedom Directorate for their 20 kHz testbed, is used to control the brassboard's switchgear, as well as monitor key brassboard parameters through sensors located within these switches. The Ada code is downloaded from a PC/AT, and is resident within the 8086 microprocessor-based embedded controllers. The PC/AT is also used for smart terminal emulation, capable of controlling the switchgear as well as displaying data from them. Intelligent control is provided through use of a TI Explorer and the Autonomous Power Expert (APEX) LISP software. Real-time load scheduling is implemented through use of a 'C' program-based scheduling engine. The methods of communication between these computers and the brassboard are explored. In order to evaluate the features of both the

  2. Asteroid Exploration with Autonomic Systems

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Rash, James; Rouff, Christopher; Hinchey, Mike

    2004-01-01

    NASA is studying advanced technologies for a future robotic exploration mission to the asteroid belt. The prospective ANTS (Autonomous Nano Technology Swarm) mission comprises autonomous agents including worker agents (small spacecraft) designed to cooperate in asteroid exploration under the overall authority of at least one ruler agent (a larger spacecraft) whose goal is to cause science data to be returned to Earth. The ANTS team (ruler plus workers and messenger agents), but not necessarily any individual on the team, will exhibit behaviors that qualify it as an autonomic system, where an autonomic system is defined as a system that self-reconfigures, self-optimizes, self-heals, and self-protects. Autonomic system concepts lead naturally to realistic, scalable architectures rich in capabilities and behaviors. In-depth consideration of a major mission like ANTS in terms of autonomic systems brings new insights into alternative definitions of autonomic behavior. This paper gives an overview of the ANTS mission and discusses the autonomic properties of the mission.

  3. A simple, inexpensive, and effective implementation of a vision-guided autonomous robot

    NASA Astrophysics Data System (ADS)

    Tippetts, Beau; Lillywhite, Kirt; Fowers, Spencer; Dennis, Aaron; Lee, Dah-Jye; Archibald, James

    2006-10-01

    This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. This implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. A used electric wheelchair, purchased from a local thrift store for $28, served as the robot base. The base was modified to include Kegresse tracks using a friction drum system. This modification allowed the robot to perform better on a variety of terrains, resolving issues with last year's design. To control the wheelchair while retaining its robust existing motor controls, the joystick was removed and replaced with a printed circuit board that emulated joystick operation and was capable of receiving commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each of the algorithms used color segmentation methods to interpret data from a digital camera in order to identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
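
    As a concrete example of the color-segmentation step mentioned above, the sketch below thresholds a camera frame in HSV space to separate white boundary lines and orange obstacles from grass; the threshold values are illustrative and would need tuning for a particular camera and lighting.

```python
# Illustrative HSV color segmentation for white lines and orange obstacles;
# thresholds would need tuning per camera and lighting.
import cv2

def segment_course(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    white_mask = cv2.inRange(hsv, (0, 0, 200), (180, 40, 255))     # bright, unsaturated
    orange_mask = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))  # orange hue band
    return white_mask, orange_mask
```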

  4. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  5. Awareness and Responsibility in Autonomous Weapons Systems

    NASA Astrophysics Data System (ADS)

    Bhuta, Nehal; Rotolo, Antonino; Sartor, Giovanni

    The following sections are included: * Introduction * Why Computational Awareness is Important in Autonomous Weapons * Flying Drones and Other Autonomous Weapons * The Impact of Autonomous Weapons Systems * From Autonomy to Awareness: A Perspective from Science Fiction * Summary and Conclusions

  6. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  7. Gas House Autonomous System Monitoring

    NASA Technical Reports Server (NTRS)

    Miller, Luke; Edsall, Ashley

    2015-01-01

    Gas House Autonomous System Monitoring (GHASM) will employ Integrated System Health Monitoring (ISHM) of cryogenic fluids in the High Pressure Gas Facility at Stennis Space Center. The preliminary focus of development incorporates the passive monitoring and eventual commanding of the Nitrogen System. ISHM offers generic system awareness, adept at using concepts rather than specific error cases. As an enabler for autonomy, ISHM provides capabilities inclusive of anomaly detection, diagnosis, and abnormality prediction. Advancing ISHM and Autonomous Operation functional capabilities enhances quality of data, optimizes safety, improves cost effectiveness, and has direct benefits to a wide spectrum of aerospace applications.

  8. Autonomous vision networking: miniature wireless sensor networks with imaging technology

    NASA Astrophysics Data System (ADS)

    Messinger, Gioia; Goldberg, Giora

    2006-09-01

    The recent emergence of integrated PicoRadio technology, the rise of low power, low cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), created a unique opportunity to achieve the goal of deploying large-scale, low cost, intelligent, ultra-low power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low power vision networking have been proven and its applications are countless; from security and chemical analysis to industrial monitoring, asset tracking and visual recognition, vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous - specifically PicoRadios, CMOS imagers, imaging DSP, networking and overall wireless sensor network (WSN) system concepts. The paradigm shift, from large, centralized and expensive sensor platforms, to small, low cost, distributed sensor networks, is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before these are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes. These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor.

  9. Contingency Software in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn; Patterson-Hine, Ann

    2006-01-01

    This viewgraph presentation reviews the development of contingency software for autonomous systems. Autonomous vehicles currently have a limited capacity to diagnose and mitigate failures. There is a need to be able to handle a broader range of contingencies. The goals of the project are: (1) speed up diagnosis and mitigation of anomalous situations; (2) automatically handle contingencies, not just failures; (3) enable projects to select a degree of autonomy consistent with their needs and to incrementally introduce more autonomy; and (4) augment on-board fault protection with verified contingency scripts.

  10. Measures of Autonomic Nervous System

    DTIC Science & Technology

    2011-04-01

    [Fragmentary listing recovered from the report; only terms survive: autonomic nervous system (ANS) physiological measures including cardiac activity, pupillary response, catecholamines, respiration, cortisol, salivary amylase, galvanic skin response, and gastrointestinal and vascular responses; manipulative and body-based/tension-release practices; and salivary amylase measurement as a measure of autonomic nervous system regulation.]

  11. A Robust Compositional Architecture for Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Deney, Ewen; Farrell, Kimberley; Giannakopoulos, Dimitra; Jonsson, Ari; Frank, Jeremy; Bobby, Mark; Carpenter, Todd; Estlin, Tara

    2006-01-01

    Space exploration applications can benefit greatly from autonomous systems. Great distances, limited communications and high costs make direct operations impossible while mandating operations reliability and efficiency beyond what traditional commanding can provide. Autonomous systems can improve reliability and enhance spacecraft capability significantly. However, there is reluctance to utilizing autonomous systems. In part this is due to general hesitation about new technologies, but a more tangible concern is that of the reliability and predictability of autonomous software. In this paper, we describe ongoing work aimed at increasing robustness and predictability of autonomous software, with the ultimate goal of building trust in such systems. The work combines state-of-the-art technologies and capabilities in autonomous systems with advanced validation and synthesis techniques. The focus of this paper is on the autonomous system architecture that has been defined, and on how it enables the application of validation techniques for resulting autonomous systems.

  12. Monocular-Vision-Based Autonomous Hovering for a Miniature Flying Ball

    PubMed Central

    Lin, Junqin; Han, Baoling; Luo, Qingsheng

    2015-01-01

    This paper presents a method for detecting and controlling the autonomous hovering of a miniature flying ball (MFB) based on monocular vision. A camera is employed to estimate the three-dimensional position of the vehicle relative to the ground without auxiliary sensors, such as inertial measurement units (IMUs). An image of the ground captured by the camera mounted directly under the miniature flying ball is set as a reference. The position variations between the subsequent frames and the reference image are calculated by comparing their correspondence points. The Kalman filter is used to predict the position of the miniature flying ball to handle situations, such as a lost or wrong frame. Finally, a PID controller is designed, and the performance of the entire system is tested experimentally. The results show that the proposed method can keep the aircraft in a stable hover. PMID:26057040
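
    A simplified sketch of the hover loop described above, assuming a grayscale downward-looking camera: the image-plane displacement from the reference ground image is measured by phase correlation and fed to PID controllers. The Kalman filtering of the measurement and the pixel-to-metre scaling are noted but omitted here, and the gains are placeholders; this is not the authors' implementation.

```python
# Simplified hover loop: phase correlation against a reference ground image,
# then PID control on the measured image-plane offset. Gains are placeholders.
import cv2
import numpy as np

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def hover_step(reference_gray, frame_gray, pid_x, pid_y, dt=0.05):
    ref = np.float32(reference_gray)
    cur = np.float32(frame_gray)
    (dx, dy), _ = cv2.phaseCorrelate(ref, cur)   # pixel offset from the reference
    # A Kalman filter would normally smooth (dx, dy) and bridge lost frames;
    # the raw measurement is used directly here for brevity.
    return pid_x.update(-dx, dt), pid_y.update(-dy, dt)
```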

  13. Monocular-Vision-Based Autonomous Hovering for a Miniature Flying Ball.

    PubMed

    Lin, Junqin; Han, Baoling; Luo, Qingsheng

    2015-06-05

    This paper presents a method for detecting and controlling the autonomous hovering of a miniature flying ball (MFB) based on monocular vision. A camera is employed to estimate the three-dimensional position of the vehicle relative to the ground without auxiliary sensors, such as inertial measurement units (IMUs). An image of the ground captured by the camera mounted directly under the miniature flying ball is set as a reference. The position variations between the subsequent frames and the reference image are calculated by comparing their correspondence points. The Kalman filter is used to predict the position of the miniature flying ball to handle situations, such as a lost or wrong frame. Finally, a PID controller is designed, and the performance of the entire system is tested experimentally. The results show that the proposed method can keep the aircraft in a stable hover.

  14. Semi autonomous mine detection system

    NASA Astrophysics Data System (ADS)

    Few, Doug; Versteeg, Roelof; Herman, Herman

    2010-04-01

    CMMAD is a risk reduction effort for the AMDS program. As part of CMMAD, multiple instances of semi autonomous robotic mine detection systems were created. Each instance consists of a robotic vehicle equipped with sensors required for navigation and marking, countermine sensors and a number of integrated software packages which provide for real time processing of the countermine sensor data as well as integrated control of the robotic vehicle, the sensor actuator and the sensor. These systems were used to investigate critical interest functions (CIF) related to countermine robotic systems. To address the autonomy CIF, the INL developed RIK was extended to allow for interaction with a mine sensor processing code (MSPC). In limited field testing this system performed well in detecting, marking and avoiding both AT and AP mines. Based on the results of the CMMAD investigation we conclude that autonomous robotic mine detection is feasible. In addition, CMMAD contributed critical technical advances with regard to sensing, data processing and sensor manipulation, which will advance the performance of future fieldable systems. As a result, no substantial technical barriers exist which preclude - from an autonomous robotic perspective - the rapid development and deployment of fieldable systems.

  15. Semi autonomous mine detection system

    SciTech Connect

    Douglas Few; Roelof Versteeg; Herman Herman

    2010-04-01

    CMMAD is a risk reduction effort for the AMDS program. As part of CMMAD, multiple instances of semi autonomous robotic mine detection systems were created. Each instance consists of a robotic vehicle equipped with sensors required for navigation and marking, countermine sensors and a number of integrated software packages which provide for real time processing of the countermine sensor data as well as integrated control of the robotic vehicle, the sensor actuator and the sensor. These systems were used to investigate critical interest functions (CIF) related to countermine robotic systems. To address the autonomy CIF, the INL developed RIK was extended to allow for interaction with a mine sensor processing code (MSPC). In limited field testing this system performed well in detecting, marking and avoiding both AT and AP mines. Based on the results of the CMMAD investigation we conclude that autonomous robotic mine detection is feasible. In addition, CMMAD contributed critical technical advances with regard to sensing, data processing and sensor manipulation, which will advance the performance of future fieldable systems. As a result, no substantial technical barriers exist which preclude – from an autonomous robotic perspective – the rapid development and deployment of fieldable systems.

  16. Spaceborne autonomous multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Fernquist, Alan

    1990-01-01

    The goal of this task is to provide technology for the specification and integration of advanced processors into the Space Station Freedom data management system environment through computer performance measurement tools, simulators, and an extended testbed facility. The approach focuses on five categories: (1) user requirements--determine the suitability of existing computer technologies and systems for real-time requirements of NASA missions; (2) system performance analysis--characterize the effects of languages, architectures, and commercially available hardware on real-time benchmarks; (3) system architecture--expand NASA's capability to solve problems with integrated numeric and symbolic requirements using advanced multiprocessor architectures; (4) parallel Ada technology--extend Ada software technology to utilize parallel architectures more efficiently; and (5) testbed--extend in-house testbed to support system performance and system analysis studies.

  17. A vision system for an unmanned nonlethal weapon

    NASA Astrophysics Data System (ADS)

    Kogut, Greg; Drymon, Larry

    2004-10-01

    Unmanned weapons remove humans from deadly situations. However some systems, such as unmanned guns, are difficult to control remotely. It is difficult for a soldier to perform the complex tasks of identifying and aiming at specific points on targets from a remote location. This paper describes a computer vision and control system for providing autonomous control of unmanned guns developed at Space and Naval Warfare Systems Center, San Diego (SSC San Diego). The test platform, consisting of a non-lethal gun mounted on a pan-tilt mechanism, can be used as an unattended device or mounted on a robot for mobility. The system operates with a degree of autonomy determined by a remote user that ranges from teleoperated to fully autonomous. The teleoperated mode consists of remote joystick control over all aspects of the weapon, including aiming, arming, and firing. Visual feedback is provided by near-real-time video feeds from boresight and wide-angle cameras. The semi-autonomous mode provides the user with tracking information overlaid on the real-time video. This provides the user with information on all detected targets being tracked by the vision system. The user selects a target with a mouse, and the system automatically aims the gun at the target. Arming and firing are still performed by teleoperation. In fully autonomous mode, all aspects of gun control are performed by the vision system.
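
    One small, hypothetical piece of such a system is converting the pixel location of a user-selected target into pan/tilt commands; the sketch below uses a small-angle approximation over the camera's field of view and is not taken from the SSC San Diego implementation.

```python
# Hypothetical pixel-offset-to-pan/tilt conversion using a small-angle
# approximation over the camera's field of view; values are illustrative.
def aim_offsets(target_px, frame_size, hfov_deg=60.0, vfov_deg=45.0):
    """target_px: (x, y) pixel of the selected target; frame_size: (width, height)."""
    cx, cy = frame_size[0] / 2.0, frame_size[1] / 2.0
    dx, dy = target_px[0] - cx, target_px[1] - cy
    pan_deg = dx / frame_size[0] * hfov_deg     # positive: slew right
    tilt_deg = -dy / frame_size[1] * vfov_deg   # positive: slew up
    return pan_deg, tilt_deg
```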

  18. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  19. Adaptive estimation and control with application to vision-based autonomous formation flight

    NASA Astrophysics Data System (ADS)

    Sattigeri, Ramachandra

    2007-05-01

    Modern Unmanned Aerial Vehicles (UAVs) are equipped with vision sensors because of their light-weight, low-cost characteristics and also their ability to provide a rich variety of information of the environment in which the UAVs are navigating in. The problem of vision based autonomous flight is very difficult and challenging since it requires bringing together concepts from image processing and computer vision, target tracking and state estimation, and flight guidance and control. This thesis focuses on the adaptive state estimation, guidance and control problems involved in vision-based formation flight. Specifically, the thesis presents a composite adaptation approach to the partial state estimation of a class of nonlinear systems with unmodeled dynamics. In this approach, a linear time-varying Kalman filter is the nominal state estimator which is augmented by the output of an adaptive neural network (NN) that is trained with two error signals. The benefit of the proposed approach is in its faster and more accurate adaptation to the modeling errors over a conventional approach. The thesis also presents two approaches to the design of adaptive guidance and control (G&C) laws for line-of-sight formation flight. In the first approach, the guidance and autopilot systems are designed separately and then combined together by assuming time-scale separation. The second approach is based on integrating the guidance and autopilot design process. The developed G&C laws using both approaches are adaptive to unmodeled leader aircraft acceleration and to own aircraft aerodynamic uncertainties. The thesis also presents theoretical justification based on Lyapunov-like stability analysis for integrating the adaptive state estimation and adaptive G&C designs. All the developed designs are validated in nonlinear, 6DOF fixed-wing aircraft simulations. Finally, the thesis presents a decentralized coordination strategy for vision-based multiple-aircraft formation control. In this

  20. Multi-agent autonomous system

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor); Dohm, James (Inventor); Tarbell, Mark A. (Inventor)

    2010-01-01

    A multi-agent autonomous system for exploration of hazardous or inaccessible locations. The multi-agent autonomous system includes simple surface-based agents or craft controlled by an airborne tracking and command system. The airborne tracking and command system includes an instrument suite used to image an operational area and any craft deployed within the operational area. The image data is used to identify the craft, targets for exploration, and obstacles in the operational area. The tracking and command system determines paths for the surface-based craft using the identified targets and obstacles and commands the craft using simple movement commands to move through the operational area to the targets while avoiding the obstacles. Each craft includes its own instrument suite to collect information about the operational area that is transmitted back to the tracking and command system. The tracking and command system may be further coupled to a satellite system to provide additional image information about the operational area and provide operational and location commands to the tracking and command system.

  1. Real-time vision systems

    SciTech Connect

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee

    1994-11-15

    Many industrial and defense applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  2. Integrated System for Autonomous Science

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Sherwood, Robert; Tran, Daniel; Cichy, Benjamin; Davies, Ashley; Castano, Rebecca; Rabideau, Gregg; Frye, Stuart; Trout, Bruce; Shulman, Seth; Doggett, Thomas; Ip, Felipe; Greeley, Ron; Baker, Victor; Dohn, James; Boyer, Darrell

    2006-01-01

    The New Millennium Program Space Technology 6 Project Autonomous Sciencecraft software implements an integrated system for autonomous planning and execution of scientific, engineering, and spacecraft-coordination actions. A prior version of this software was reported in "The TechSat 21 Autonomous Sciencecraft Experiment" (NPO-30784), NASA Tech Briefs, Vol. 28, No. 3 (March 2004), page 33. This software is now in continuous use aboard the Earth Orbiter 1 (EO-1) spacecraft mission and is being adapted for use in the Mars Odyssey and Mars Exploration Rovers missions. This software enables EO-1 to detect and respond to such events of scientific interest as volcanic activity, flooding, and freezing and thawing of water. It uses classification algorithms to analyze imagery onboard to detect changes, including events of scientific interest. Detection of such events triggers acquisition of follow-up imagery. The mission-planning component of the software develops a response plan that accounts for visibility of targets and operational constraints. The plan is then executed under control by a task-execution component of the software that is capable of responding to anomalies.

  3. A Vision-Based Trajectory Controller for Autonomous Cleaning Robots

    NASA Astrophysics Data System (ADS)

    Gerstmayr, Lorenz; Röben, Frank; Krzykawski, Martin; Kreft, Sven; Venjakob, Daniel; Möller, Ralf

    Autonomous cleaning robots should completely cover the accessible area with minimal repeated coverage. We present a mostly vision-based navigation strategy for systematic exploration of an area with meandering lanes. The results of the robot experiments show that our approach can guide the robot along parallel lanes while achieving good coverage with only a small proportion of repeated coverage. The proposed method can be used as a building block for more elaborate navigation strategies which allow the robot to systematically clean rooms with a complex workspace shape.

  4. The Autonomous Pathogen Detection System

    SciTech Connect

    Dzenitis, J M; Makarewicz, A J

    2009-01-13

    We developed, tested, and now operate a civilian biological defense capability that continuously monitors the air for biological threat agents. The Autonomous Pathogen Detection System (APDS) collects, prepares, reads, analyzes, and reports results of multiplexed immunoassays and multiplexed PCR assays using Luminex xMAP technology and a flow cytometer. The mission we conduct is particularly demanding: continuous monitoring, multiple threat agents, high sensitivity, challenging environments, and ultimately extremely low false positive rates. Here, we introduce the mission requirements and metrics, show the system engineering and analysis framework, and describe the progress to date including early development and current status.

  5. A Vision System For Robotic Inspection And Manipulation

    NASA Astrophysics Data System (ADS)

    Trivedi, Mohan M.; Chen, Chu X.; Marapane, Suresh

    1988-03-01

    A new generation of robotic systems will operate in complex, unstructured environments of industrial plants utilizing sophisticated sensory mechanisms. In this paper we consider development of autonomous robotic systems for various inspection and manipulation tasks associated with advanced nuclear power plants. Our approach in the development of the robotic system is to utilize an array of sensors capable of sensing the robot's environment in several sensory modalities. One of the most important sensor modalities utilized is that of vision. We describe the development of a model-based vision system for performing a number of inspection and manipulation tasks. The system is designed and tested using a laboratory-based test panel. A number of analog and digital meters and a variety of switches, valves and controls are mounted on the panel. The paper presents details of system design and development and a series of experiments performed to evaluate capabilities of the vision system.

  6. APDS: Autonomous Pathogen Detection System

    SciTech Connect

    Langlois, R G; Brown, S; Burris, L; Colston, B; Jones, L; Makarewicz, T; Mariella, R; Masquelier, D; McBride, M; Milanovich, F; Masarabadi, S; Venkateswaran, K; Marshall, G; Olson, D; Wolcott, D

    2002-02-14

    An early warning system to counter bioterrorism, the Autonomous Pathogen Detection System (APDS) continuously monitors the environment for the presence of biological pathogens (e.g., anthrax) and once detected, it sounds an alarm much like a smoke detector warns of a fire. Long before September 11, 2001, this system was being developed to protect domestic venues and events including performing arts centers, mass transit systems, major sporting and entertainment events, and other high profile situations in which the public is at risk of becoming a target of bioterrorist attacks. Customizing off-the-shelf components and developing new components, a multidisciplinary team developed APDS, a stand-alone system for rapid, continuous monitoring of multiple airborne biological threat agents in the environment. The completely automated APDS samples the air, prepares fluid samples in-line, and performs two orthogonal tests: immunoassay and nucleic acid detection. When compared to competing technologies, APDS is unprecedented in terms of flexibility and system performance.

  7. Autonomous pathogen detection system 2001

    SciTech Connect

    Langlois, R G; Wang, A; Colston, B; Masquelier, D; Jones, L; Venkateswaran, K S; Nasarabadi, S; Brown, S; Ramponi, A; Milanovich, F P

    2001-01-09

    The objective of this project is to design, fabricate and field-demonstrate a fully Autonomous Pathogen Detector (identifier) System (APDS). This will be accomplished by integrating a proven flow cytometer and real-time polymerase chain reaction (PCR) detector with sample collection, sample preparation and fluidics to provide a compact, autonomously operating instrument capable of simultaneously detecting multiple pathogens and/or toxins. The APDS will be designed to operate in fixed locations, where it continuously monitors air samples and automatically reports the presence of specific biological agents. The APDS will utilize both multiplex immuno and nucleic acid assays to provide "quasi-orthogonal", multiple agent detection approaches to minimize false positives and increase the reliability of identification. Technical advancements across several fronts must first be made in order to realize the full extent of the APDS. Commercialization will be accomplished through three progressive generations of instruments. The APDS is targeted for domestic applications in which (1) the public is at high risk of exposure to covert releases of bioagent such as in major subway systems and other transportation terminals, large office complexes, and convention centers; and (2) as part of a monitoring network of sensors integrated with command and control systems for wide area monitoring of urban areas and major gatherings (e.g., inaugurations, Olympics, etc.). In this latter application there is potential that a fully developed APDS could add value to Defense Department monitoring architectures.

  8. Autonomous Biological System (ABS) experiments.

    PubMed

    MacCallum, T K; Anderson, G A; Poynter, J E; Stodieck, L S; Klaus, D M

    1998-12-01

    Three space flight experiments have been conducted to test and demonstrate the use of a passively controlled, materially closed, bioregenerative life support system in space. The Autonomous Biological System (ABS) provides an experimental environment for long term growth and breeding of aquatic plants and animals. The ABS is completely materially closed, isolated from human life support systems and cabin atmosphere contaminants, and requires little need for astronaut intervention. Testing of the ABS marked several firsts: the first aquatic angiosperms to be grown in space; the first higher organisms (aquatic invertebrate animals) to complete their life cycles in space; the first completely bioregenerative life support system in space; and, among the first gravitational ecology experiments. As an introduction this paper describes the ABS, its flight performance, advantages and disadvantages.

  9. Autonomic Computing for Spacecraft Ground Systems

    NASA Technical Reports Server (NTRS)

    Li, Zhenping; Savkli, Cetin; Jones, Lori

    2007-01-01

    Autonomic computing for spacecraft ground systems increases the system reliability and reduces the cost of spacecraft operations and software maintenance. In this paper, we present an autonomic computing solution for spacecraft ground systems at NASA Goddard Space Flight Center (GSFC), which consists of an open standard for a message oriented architecture referred to as the GMSEC architecture (Goddard Mission Services Evolution Center), and an autonomic computing tool, the Criteria Action Table (CAT). This solution has been used in many upgraded ground systems for NASA's missions, and provides a framework for developing solutions with higher autonomic maturity.

  10. Dynamical Systems and Motion Vision.

    DTIC Science & Technology

    1988-04-01

    [Report documentation page residue; recoverable bibliographic information: Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139; A.I. Memo No. 1037, April 1988, "Dynamical Systems and Motion Vision" by Joachim Heel; the abstract itself and the acknowledgement of support for the Laboratory's Artificial Intelligence Research are truncated.]

  11. Research in Computer Vision for Autonomous Systems

    DTIC Science & Technology

    1988-09-15

    ...atmospheric conditions are less than ideal. Notwithstanding this concern, it remains that FLIR, being passive, is an excellent sensor for monitoring... [fragmentary numeric table omitted] ...a predetermined selection strategy to determine which rules in the conflict set will actually be executed. These rules are then fired in the EXECUTE state, which...

  12. Testbed for an autonomous system

    NASA Technical Reports Server (NTRS)

    Dikshit, Piyush; Guimaraes, Katia; Ramamurthy, Maya; Agrawala, Ashok K.; Larsen, Ronald L.

    1989-01-01

    In previous works we have defined a general architectural model for autonomous systems, which can easily be mapped to describe the functions of any automated system (SDAG-86-01), and we illustrated that model by applying it to the thermal management system of a space station (SDAG-87-01). In this note, we will further develop that application and design the details of the implementation of such a model. First we present the environment of our application by describing the thermal management problem and an abstraction, which was called TESTBED, that includes a specific function for each module in the architecture, and the nature of the interfaces between each pair of blocks.

  13. An Autonomous Flight Safety System

    NASA Technical Reports Server (NTRS)

    Bull, James B.; Lanzi, Raymond J.

    2007-01-01

    The Autonomous Flight Safety System (AFSS) being developed by NASA's Goddard Space Flight Center's Wallops Flight Facility and Kennedy Space Center has completed two successful developmental flights and is preparing for a third. AFSS has been demonstrated to be a viable architecture for implementation of a completely vehicle-based system capable of protecting life and property in the event of an errant vehicle by terminating the flight or initiating other actions. It is capable of replacing current human-in-the-loop systems or acting in parallel with them. AFSS is configured prior to flight in accordance with a specific rule set agreed upon by the range safety authority and the user to protect the public and assure mission success. This paper discusses the motivation for the project, describes the method of development, and presents an overview of the evolving architecture and the current status.

  14. Information for Successful Interaction with Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Johnson, Kathy A.

    2003-01-01

    Interaction in heterogeneous mission operations teams is not well matched to classical models of coordination with autonomous systems. We describe methods of loose coordination and information management in mission operations. We describe an information agent and information management tool suite for managing information from many sources, including autonomous agents. We present an integrated model of levels of complexity of agent and human behavior, which shows types of information processing and points of potential error in agent activities. We discuss the types of information needed for diagnosing problems and planning interactions with an autonomous system. We discuss types of coordination for which designs are needed for autonomous system functions.

  15. Autonomous power system intelligent diagnosis and control

    NASA Technical Reports Server (NTRS)

    Ringer, Mark J.; Quinn, Todd M.; Merolla, Anthony

    1991-01-01

    The Autonomous Power System (APS) project at NASA Lewis Research Center is designed to demonstrate the abilities of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution hardware. Knowledge-based software provides a robust method of control for highly complex space-based power systems that conventional methods do not allow. The project consists of three elements: the Autonomous Power Expert System (APEX) for fault diagnosis and control, the Autonomous Intelligent Power Scheduler (AIPS) to determine system configuration, and power hardware (Brassboard) to simulate a space based power system. The operation of the Autonomous Power System as a whole is described and the responsibilities of the three elements - APEX, AIPS, and Brassboard - are characterized. A discussion of the methodologies used in each element is provided. Future plans are discussed for the growth of the Autonomous Power System.

  16. Autonomic nervous system and immune system interactions.

    PubMed

    Kenney, M J; Ganta, C K

    2014-07-01

    The present review assesses the current state of literature defining integrative autonomic-immune physiological processing, focusing on studies that have employed electrophysiological, pharmacological, molecular biological, and central nervous system experimental approaches. Central autonomic neural networks are informed of peripheral immune status via numerous communicating pathways, including neural and non-neural. Cytokines and other immune factors affect the level of activity and responsivity of discharges in sympathetic and parasympathetic nerves innervating diverse targets. Multiple levels of the neuraxis contribute to cytokine-induced changes in efferent parasympathetic and sympathetic nerve outflows, leading to modulation of peripheral immune responses. The functionality of local sympathoimmune interactions depends on the microenvironment created by diverse signaling mechanisms involving integration between sympathetic nervous system neurotransmitters and neuromodulators; specific adrenergic receptors; and the presence or absence of immune cells, cytokines, and bacteria. Functional mechanisms contributing to the cholinergic anti-inflammatory pathway likely involve novel cholinergic-adrenergic interactions at peripheral sites, including autonomic ganglion and lymphoid targets. Immune cells express adrenergic and nicotinic receptors. Neurotransmitters released by sympathetic and parasympathetic nerve endings bind to their respective receptors located on the surface of immune cells and initiate immune-modulatory responses. Both sympathetic and parasympathetic arms of the autonomic nervous system are instrumental in orchestrating neuroimmune processes, although additional studies are required to understand dynamic and complex adrenergic-cholinergic interactions. Further understanding of regulatory mechanisms linking the sympathetic nervous, parasympathetic nervous, and immune systems is critical for understanding relationships between chronic disease

  17. Autonomous navigation system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2009-09-08

    A robot platform includes perceptors, locomotors, and a system controller, which executes instructions for autonomously navigating a robot. The instructions repeat, on each iteration through an event timing loop, the acts of defining an event horizon based on the robot's current velocity, detecting a range to obstacles around the robot, testing for an event horizon intrusion by determining if any range to the obstacles is within the event horizon, and adjusting rotational and translational velocity of the robot accordingly. If the event horizon intrusion occurs, rotational velocity is modified by a proportion of the current rotational velocity reduced by a proportion of the range to the nearest obstacle and translational velocity is modified by a proportion of the range to the nearest obstacle. If no event horizon intrusion occurs, translational velocity is set as a ratio of a speed factor relative to a maximum speed.
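
    Reading the claim language above literally, one iteration of the event-timing loop might look like the following sketch; the proportionality constants are illustrative placeholders, not values from the patent.

```python
# Sketch of one pass through the event-timing loop as described; the
# proportionality constants are illustrative placeholders.
def navigation_step(trans_vel, rot_vel, ranges, speed_factor=0.8,
                    max_speed=1.0, horizon_per_speed=1.5):
    event_horizon = horizon_per_speed * trans_vel        # horizon grows with speed
    nearest = min(ranges) if ranges else float("inf")
    if nearest < event_horizon:                          # event-horizon intrusion
        rot_vel = 0.5 * rot_vel - 0.1 * nearest          # damp rotation near obstacles
        trans_vel = 0.3 * nearest                        # slow down with proximity
    else:
        trans_vel = speed_factor * max_speed             # no intrusion: cruise speed
    return trans_vel, rot_vel
```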

  18. Basic design principles of colorimetric vision systems

    NASA Astrophysics Data System (ADS)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper, and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles must be applied to vision systems and how the electro-optical design features of colorimeters must be modified in order to implement them in vision systems. The subject far exceeds the limitations of a journal paper, so only the most important aspects are discussed, along with an overview of the major areas of application for colorimetric vision systems. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.
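
    As a concrete illustration of applying colorimetric principles inside a vision system, the sketch below converts 8-bit sRGB camera values to CIE XYZ and xy chromaticity using the standard sRGB (D65) matrix. A production colorimetric vision system would replace this generic matrix with a camera-specific calibration, which is precisely the point the paper makes.

import numpy as np

# Sketch: converting 8-bit sRGB camera pixels to CIE XYZ / xy chromaticity.
# Uses the standard sRGB (D65) primaries; a real colorimetric vision system
# would substitute a camera-specific characterization.

SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

def linearize(srgb):
    """Undo the sRGB transfer curve (input in 0..1)."""
    return np.where(srgb <= 0.04045, srgb / 12.92, ((srgb + 0.055) / 1.055) ** 2.4)

def rgb_to_xy(rgb8):
    """Map an (..., 3) array of 8-bit sRGB values to CIE xy chromaticity."""
    rgb_lin = linearize(np.asarray(rgb8, dtype=float) / 255.0)
    xyz = rgb_lin @ SRGB_TO_XYZ.T
    total = xyz.sum(axis=-1, keepdims=True)
    return xyz[..., :2] / np.clip(total, 1e-12, None)

print(rgb_to_xy([200, 120, 60]))   # chromaticity of an orange-ish sample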

  19. System Engineering of Autonomous Space Vehicles

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Johnson, Stephen B.; Trevino, Luis

    2014-01-01

    Human exploration of the solar system requires fully autonomous systems when travelling more than 5 light minutes from Earth. This autonomy is necessary to manage a large, complex spacecraft with limited crew members and skills available. The communication latency requires the vehicle to deal with events with only limited crew interaction in most cases. The engineering of these systems requires an extensive knowledge of the spacecraft systems, information theory, and autonomous algorithm characteristics. The characteristics of the spacecraft systems must be matched with the autonomous algorithm characteristics to reliably monitor and control the system. This presents a large system engineering problem. Recent work on product-focused, elegant system engineering will be applied to this application, looking at the full autonomy stack, the matching of autonomous systems to spacecraft systems, and the integration of different types of algorithms. Each of these areas will be outlined and a general approach defined for system engineering to provide the optimal solution to the given application context.

  20. Vision inspection system and method

    NASA Technical Reports Server (NTRS)

    Huber, Edward D. (Inventor); Williams, Rick A. (Inventor)

    1997-01-01

    An optical vision inspection system (4) and method for multiplexed illuminating, viewing, analyzing and recording a range of characteristically different kinds of defects, depressions, and ridges in a selected material surface (7) with first and second alternating optical subsystems (20, 21) illuminating and sensing successive frames of the same material surface patch. To detect the different kinds of surface features including abrupt as well as gradual surface variations, correspondingly different kinds of lighting are applied in time-multiplexed fashion to the common surface area patches under observation.

  1. Autonomous Operations System: Development and Application

    NASA Technical Reports Server (NTRS)

    Toro Medina, Jaime A.; Wilkins, Kim N.; Walker, Mark; Stahl, Gerald M.

    2016-01-01

    Autonomous control systems provides the ability of self-governance beyond the conventional control system. As the complexity of mechanical and electrical systems increases, there develops a natural drive for developing robust control systems to manage complicated operations. By closing the bridge between conventional automated systems to knowledge based self-awareness systems, nominal control of operations can evolve into relying on safe critical mitigation processes to support any off-nominal behavior. Current research and development efforts lead by the Autonomous Propellant Loading (APL) group at NASA Kennedy Space Center aims to improve cryogenic propellant transfer operations by developing an automated control and health monitoring system. As an integrated systems, the center aims to produce an Autonomous Operations System (AOS) capable of integrating health management operations with automated control to produce a fully autonomous system.

  2. VISION 21 SYSTEMS ANALYSIS METHODOLOGIES

    SciTech Connect

    G.S. Samuelsen; A. Rao; F. Robson; B. Washom

    2003-08-11

    Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into power plant systems that meet performance and emission goals of the Vision 21 program. The study efforts have narrowed down the myriad of fuel processing, power generation, and emission control technologies to selected scenarios that identify those combinations having the potential to achieve the Vision 21 program goals of high efficiency and minimized environmental impact while using fossil fuels. The technology levels considered are based on projected technical and manufacturing advances being made in industry and on advances identified in current and future government supported research. Included in these advanced systems are solid oxide fuel cells and advanced cycle gas turbines. The results of this investigation will serve as a guide for the U. S. Department of Energy in identifying the research areas and technologies that warrant further support.

  3. Musca domestica inspired machine vision system with hyperacuity

    NASA Astrophysics Data System (ADS)

    Riley, Dylan T.; Harman, William M.; Tomberlin, Eric; Barrett, Steven F.; Wilcox, Michael; Wright, Cameron H. G.

    2005-05-01

    Musca domestica, the common house fly, has a simple yet powerful and accessible vision system. Cajal observed in 1885 that the fly's vision system is organized much like the human retina. The house fly has some intriguing vision system features such as fast, analog, parallel operation. Furthermore, it has the ability to detect movement and objects at far better resolution than predicted by photoreceptor spacing, termed hyperacuity. We are investigating the mechanisms behind these features and incorporating them into next-generation vision systems. We have developed a prototype sensor that employs a fly-inspired arrangement of photodetectors sharing a common lens. The Gaussian-shaped acceptance profile of each sensor, coupled with overlapped sensor fields of view, provides the necessary configuration for obtaining hyperacuity data. The sensor is able to detect object movement with far greater resolution than that predicted by photoreceptor spacing. We have exhaustively tested and characterized the sensor to determine its practical resolution limit. Our tests, coupled with theory from Bucklew and Saleh (1985), indicate that the limit of the hyperacuity response may be related only to target contrast. We have also implemented an array of these prototype sensors which will allow for two-dimensional position location. These high-resolution, low-contrast-capable sensors are being developed for use as a vision system for an autonomous robot and the next generation of smart wheelchairs. However, they are easily adapted for biological endoscopy, downhole monitoring in oil wells, and other applications.
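
    The hyperacuity mechanism described above can be sketched with two detectors whose Gaussian acceptance profiles overlap: comparing their log responses recovers the target angle far more finely than the detector spacing. The profile width, optical-axis angles, and contrast below are illustrative assumptions, not measurements from the prototype sensor.

import numpy as np

# Sketch of the hyperacuity idea: two photodetectors with overlapping Gaussian
# acceptance profiles can localize a point target much more finely than their
# angular spacing, by inverting the ratio of their responses.

SIGMA = 2.0          # acceptance-profile width (degrees), assumed
C1, C2 = -1.5, 1.5   # optical axes of the two detectors (degrees), assumed

def responses(target_deg, contrast=1.0):
    r1 = contrast * np.exp(-(target_deg - C1) ** 2 / (2 * SIGMA ** 2))
    r2 = contrast * np.exp(-(target_deg - C2) ** 2 / (2 * SIGMA ** 2))
    return r1, r2

def estimate_angle(r1, r2):
    """Invert the ratio of the two Gaussian responses to recover the target angle."""
    return SIGMA ** 2 * np.log(r1 / r2) / (C1 - C2) + (C1 + C2) / 2.0

true_angle = 0.37                               # well below the 3-degree detector spacing
print(estimate_angle(*responses(true_angle)))   # recovers ~0.37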

  4. Computer vision for driver assistance systems

    NASA Astrophysics Data System (ADS)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. In the field of driver assistance systems in particular, research progress has reached a high level of performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut fur Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking, and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main benefit of this approach is the integrative coupling of different algorithms that provide partly redundant information.

  5. The Secure, Transportable, Autonomous Reactor System

    SciTech Connect

    Brown, N.W.; Hassberger, J.A.; Smith, C.; Carelli, M.; Greenspan, E.; Peddicord, K.L.; Stroh, K.; Wade, D.C.; Hill, R.N.

    1999-05-27

    The Secure, Transportable, Autonomous Reactor (STAR) system is a development architecture for implementing a small nuclear power system, specifically aimed at meeting the growing energy needs of much of the developing world. It simultaneously provides very high standards for safety, proliferation resistance, ease and economy of installation, operation, and ultimate disposition. The STAR system accomplishes these objectives through a combination of modular design, factory manufacture, long lifetime without refueling, autonomous control, and high reliability.

  6. Inertial Navigation System Aiding Using Vision

    DTIC Science & Technology

    2013-03-01

    Front matter only: thesis "Inertial Navigation System Aiding Using Vision" (report AFIT-ENG-13-M-40), by James O. Quarmyne, B.S.E.E., Second Lieutenant, USAF; approved by Meir Pachter, PhD (Chairman) and John F. Raquet, PhD (Committee Member).

  7. A Vision-Based Relative Navigation Approach for Autonomous Multirotor Aircraft

    NASA Astrophysics Data System (ADS)

    Leishman, Robert C.

    Autonomous flight in unstructured, confined, and unknown GPS-denied environments is a challenging problem. Solutions could be tremendously beneficial for scenarios that require information about areas that are difficult to access and that present a great amount of risk. The goal of this research is to develop a new framework that enables improved solutions to this problem and to validate the approach with experiments using a hardware prototype. In Chapter 2 we examine the consequences and practical aspects of using an improved dynamic model for multirotor state estimation, using only IMU measurements. The improved model correctly explains the measurements available from the accelerometers on a multirotor. We provide hardware results demonstrating the improved attitude, velocity and even position estimates that can be achieved through the use of this model. We propose a new architecture to simplify some of the challenges that constrain GPS-denied aerial flight in Chapter 3. At the core, the approach combines visual graph-SLAM with a multiplicative extended Kalman filter (MEKF). More importantly, we depart from the common practice of estimating global states and instead keep the position and yaw states of the MEKF relative to the current node in the map. This relative navigation approach provides a tremendous benefit compared to maintaining estimates with respect to a single global coordinate frame. We discuss the architecture of this new system and provide important details for each component. We verify the approach with goal-directed autonomous flight-test results. The MEKF is the basis of the new relative navigation approach and is detailed in Chapter 4. We derive the relative filter and show how the states must be augmented and marginalized each time a new node is declared. The relative estimation approach is verified using hardware flight test results accompanied by comparisons to motion capture truth. Additionally, flight results with estimates in the control

  8. Development of an autonomous target tracking system

    NASA Astrophysics Data System (ADS)

    Gidda, Venkata Ramaiah

    In recent years, surveillance and border patrol have become key areas of UAV research. Increases in the computational capability of computers and embedded electronics, coupled with the compatibility of various commercial vision algorithms with commercial off-the-shelf (COTS) embedded electronics, have further fuelled this research. The basic task in these applications is perception of the environment through available visual sensors such as cameras. Visual tracking, as the name implies, is tracking of objects using a camera. The process of autonomous target tracking starts with the selection of the target in a sequence of video frames transmitted from the on-board camera. We use an improved fast dynamic template matching algorithm coupled with a Kalman filter to track the selected target in consecutive video frames. The selected target is saved as a reference template. On the ground station computer, the reference template is overlaid on the live streaming video from the on-board system, starting from the upper left corner of the video frame. The template is slid pixel by pixel over the entire source image, and a comparison of the pixels is performed between the template and the source image. A confidence value R of the match is calculated at each pixel. Depending on the method used to perform the template matching, the best-match pixel location is found according to the highest or lowest confidence value R. The best-match pixel location is communicated to the on-board gimbal controller over the wireless Xbee network. The software on the controller actuates the pan-tilt servos to continuously hold the selected target at the center of the video frame. The complete system is a portable control system assembled from commercial off-the-shelf parts. The tracking system was tested on targets exhibiting several motion patterns.
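
    A minimal sketch of the tracking loop described above, assuming OpenCV is available: normalized template matching yields the confidence map R for each frame, and the best-match location is fed to a constant-velocity Kalman filter before being handed to the gimbal controller. The filter tuning values and the acceptance threshold are illustrative assumptions.

import cv2
import numpy as np

# Sketch of template-matching tracking with a constant-velocity Kalman filter.
# Tuning values are placeholders, not the thesis parameters.

def make_kalman():
    kf = cv2.KalmanFilter(4, 2)                       # state: x, y, vx, vy
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    return kf

def track(frames, template):
    kf = make_kalman()
    for frame in frames:
        # Slide the template over the frame; result holds the confidence R per pixel.
        result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        kf.predict()
        if max_val > 0.6:                             # accept only confident matches
            kf.correct(np.array([[max_loc[0]], [max_loc[1]]], np.float32))
        x, y = kf.statePost[0, 0], kf.statePost[1, 0]
        yield (x, y), max_val                         # target location for the gimbal controller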

  9. Vehicle autonomous localization in local area of coal mine tunnel based on vision sensors and ultrasonic sensors.

    PubMed

    Xu, Zirui; Yang, Wei; You, Kaiming; Li, Wei; Kim, Young-Il

    2017-01-01

    This paper presents a vehicle autonomous localization method in local area of coal mine tunnel based on vision sensors and ultrasonic sensors. Barcode tags are deployed in pairs on both sides of the tunnel walls at certain intervals as artificial landmarks. The barcode coding is designed based on UPC-A code. The global coordinates of the upper left inner corner point of the feature frame of each barcode tag deployed in the tunnel are uniquely represented by the barcode. Two on-board vision sensors are used to recognize each pair of barcode tags on both sides of the tunnel walls. The distance between the upper left inner corner point of the feature frame of each barcode tag and the vehicle center point can be determined by using a visual distance projection model. The on-board ultrasonic sensors are used to measure the distance from the vehicle center point to the left side of the tunnel walls. Once the spatial geometric relationship between the barcode tags and the vehicle center point is established, the 3D coordinates of the vehicle center point in the tunnel's global coordinate system can be calculated. Experiments on a straight corridor and an underground tunnel have shown that the proposed vehicle autonomous localization method is not only able to quickly recognize the barcode tags affixed to the tunnel walls, but also has relatively small average localization errors in the vehicle center point's plane and vertical coordinates to meet autonomous unmanned vehicle positioning requirements in local area of coal mine tunnel.

  10. Vehicle autonomous localization in local area of coal mine tunnel based on vision sensors and ultrasonic sensors

    PubMed Central

    Yang, Wei; You, Kaiming; Li, Wei; Kim, Young-il

    2017-01-01

    This paper presents a vehicle autonomous localization method in local area of coal mine tunnel based on vision sensors and ultrasonic sensors. Barcode tags are deployed in pairs on both sides of the tunnel walls at certain intervals as artificial landmarks. The barcode coding is designed based on UPC-A code. The global coordinates of the upper left inner corner point of the feature frame of each barcode tag deployed in the tunnel are uniquely represented by the barcode. Two on-board vision sensors are used to recognize each pair of barcode tags on both sides of the tunnel walls. The distance between the upper left inner corner point of the feature frame of each barcode tag and the vehicle center point can be determined by using a visual distance projection model. The on-board ultrasonic sensors are used to measure the distance from the vehicle center point to the left side of the tunnel walls. Once the spatial geometric relationship between the barcode tags and the vehicle center point is established, the 3D coordinates of the vehicle center point in the tunnel’s global coordinate system can be calculated. Experiments on a straight corridor and an underground tunnel have shown that the proposed vehicle autonomous localization method is not only able to quickly recognize the barcode tags affixed to the tunnel walls, but also has relatively small average localization errors in the vehicle center point’s plane and vertical coordinates to meet autonomous unmanned vehicle positioning requirements in local area of coal mine tunnel. PMID:28141829

  11. An introduction to autonomous control systems

    NASA Technical Reports Server (NTRS)

    Antsaklis, Panos J.; Passino, Kevin M.; Wang, S. J.

    1991-01-01

    The functions, characteristics, and benefits of autonomous control are outlined. An autonomous control functional architecture for future space vehicles that incorporates the concepts and characteristics described is presented. The controller is hierarchical, with an execution level (the lowest level), a coordination level (middle level), and a management and organization level (highest level). The general characteristics of the overall architecture, including those of the three levels, are explained, and an example is given to illustrate their functions. Mathematical models for autonomous systems, including 'logical' discrete event system models, are discussed. An approach to the quantitative, systematic modeling, analysis, and design of autonomous controllers is also discussed. It is a hybrid approach, since it uses conventional analysis techniques based on difference and differential equations together with new techniques for the analysis of systems described with a symbolic formalism such as finite automata. Some recent results from the areas of planning and expert systems, machine learning, artificial neural networks, and restructurable control are briefly outlined.
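
    The three-level hierarchy can be pictured with the small skeleton below; the class names and the division of responsibilities are an illustrative reading of the architecture, not the authors' implementation.

# Illustrative skeleton of a three-level hierarchical controller
# (execution, coordination, management and organization).

class ExecutionLevel:
    """Lowest level: conventional control laws (difference/differential equations)."""
    def step(self, setpoint, measurement):
        return 0.8 * (setpoint - measurement)         # e.g. a simple proportional law

class CoordinationLevel:
    """Middle level: sequences tasks and dispatches setpoints to the execution level."""
    def __init__(self, execution):
        self.execution = execution
    def run_task(self, task, measurement):
        return self.execution.step(task["setpoint"], measurement)

class ManagementLevel:
    """Highest level: planning, learning, and fault handling expressed symbolically."""
    def __init__(self, coordination):
        self.coordination = coordination
    def decide(self, goal, measurement):
        task = {"setpoint": goal}                     # a planner would normally build this
        return self.coordination.run_task(task, measurement)

controller = ManagementLevel(CoordinationLevel(ExecutionLevel()))
print(controller.decide(goal=10.0, measurement=7.5))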

  12. Stereo-vision-based terrain mapping for off-road autonomous navigation

    NASA Astrophysics Data System (ADS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-05-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single-frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
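
    The per-cell map content listed above (elevation, classification, roughness, cost, confidence, and a no-go flag) suggests a simple record type plus a temporal-filtering merge, sketched below. The field names, blending factor, and the persistence rule for no-go cells are assumptions for illustration, not the JPL implementation.

from dataclasses import dataclass

# Sketch of one terrain-map cell and a temporal-filtering merge into a world map.

@dataclass
class TerrainCell:
    elevation: float = 0.0
    terrain_class: int = -1       # e.g. soil, vegetation, water, ...
    roughness: float = 0.0
    cost: float = 0.0             # traversability cost
    confidence: float = 0.0
    no_go: bool = False           # set when a binary detector fired in this cell

def merge_cell(world: TerrainCell, frame: TerrainCell, alpha: float = 0.3) -> TerrainCell:
    """Blend a single-frame cell into the world map (exponential temporal filter)."""
    if frame.confidence == 0.0:
        return world                                  # nothing observed this frame
    return TerrainCell(
        elevation=(1 - alpha) * world.elevation + alpha * frame.elevation,
        terrain_class=frame.terrain_class,
        roughness=(1 - alpha) * world.roughness + alpha * frame.roughness,
        cost=(1 - alpha) * world.cost + alpha * frame.cost,
        confidence=min(1.0, world.confidence + alpha * frame.confidence),
        no_go=world.no_go or frame.no_go,             # obstacles persist once detected
    )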

  13. Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-01-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single-frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  14. Concurrent algorithms for a mobile robot vision system

    SciTech Connect

    Jones, J.P.; Mann, R.C.

    1988-01-01

    The application of computer vision to mobile robots has generally been hampered by insufficient on-board computing power. The advent of VLSI-based general purpose concurrent multiprocessor systems promises to give mobile robots an increasing amount of on-board computing capability, and to allow computation intensive data analysis to be performed without high-bandwidth communication with a remote system. This paper describes the integration of robot vision algorithms on a 3-dimensional hypercube system on-board a mobile robot developed at Oak Ridge National Laboratory. The vision system is interfaced to navigation and robot control software, enabling the robot to maneuver in a laboratory environment, to find a known object of interest and to recognize the object's status based on visual sensing. We first present the robot system architecture and the principles followed in the vision system implementation. We then provide some benchmark timings for low-level image processing routines, describe a concurrent algorithm with load balancing for the Hough transform, a new algorithm for binary component labeling, and an algorithm for the concurrent extraction of region features from labeled images. This system analyzes a scene in less than 5 seconds and has proven to be a valuable experimental tool for research in mobile autonomous robots. 9 refs., 1 fig., 3 tabs.
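
    A load-balanced concurrent Hough transform of the kind mentioned above can be sketched with a process pool: edge points are split evenly across workers, each worker fills a partial accumulator, and the partials are summed. The grid resolution and worker count are illustrative, and the hypercube-specific message passing of the original system is not modeled.

import numpy as np
from multiprocessing import Pool

# Sketch of a concurrent Hough transform for lines with static load balancing.

N_THETA, N_RHO, RHO_MAX = 180, 400, 400.0
THETAS = np.deg2rad(np.arange(N_THETA))

def partial_accumulator(points):
    acc = np.zeros((N_THETA, N_RHO), dtype=np.int32)
    for x, y in points:
        rho = x * np.cos(THETAS) + y * np.sin(THETAS)            # one rho per theta
        rho_bins = ((rho + RHO_MAX) * (N_RHO - 1) / (2 * RHO_MAX)).astype(int)
        acc[np.arange(N_THETA), np.clip(rho_bins, 0, N_RHO - 1)] += 1
    return acc

def hough_lines(edge_points, n_workers=4):
    chunks = np.array_split(np.asarray(edge_points), n_workers)  # even split of the points
    with Pool(n_workers) as pool:
        partials = pool.map(partial_accumulator, chunks)
    return np.sum(partials, axis=0)

if __name__ == "__main__":
    pts = [(i, 2 * i + 5) for i in range(100)]                   # points on a line
    acc = hough_lines(pts)
    print(np.unravel_index(np.argmax(acc), acc.shape))           # peak = dominant line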

  15. Intelligent control system of autonomous objects

    NASA Astrophysics Data System (ADS)

    Engel, E. A.; Kovalev, I. V.; Engel, N. E.; Brezitskaya, V. V.; Prohorovich, G. A.

    2017-02-01

    This paper presents an intelligent control system for autonomous objects, organized as a framework. The intelligent control framework includes two different layers: a reflexive layer and a reactive layer. The proposed multiagent adaptive fuzzy neuronet combines low-level reaction with high-level reasoning within this framework. Formed as a multiagent adaptive fuzzy neuronet, the intelligent control system generates an effective control signal from the autonomous object's state under random perturbations.

  16. Towards autonomic computing in machine vision applications: techniques and strategies for in-line 3D reconstruction in harsh industrial environments

    NASA Astrophysics Data System (ADS)

    Molleda, Julio; Usamentiaga, Rubén; García, Daniel F.; Bulnes, Francisco G.

    2011-03-01

    Machine vision applications today require skilled users to configure, tune, and maintain them. Because such users are scarce, the robustness and reliability of applications are usually significantly affected. Autonomic computing offers a set of principles, such as self-monitoring, self-regulation, and self-repair, which can be used to partially overcome those problems. Systems which include self-monitoring observe their internal states and extract features about them. Systems with self-regulation are capable of regulating their internal parameters to provide the best quality of service depending on the operational conditions and environment. Finally, self-repairing systems are able to detect anomalous working behavior and to provide strategies to deal with such conditions. Machine vision applications are a perfect field in which to apply autonomic computing techniques. This type of application has strong constraints on reliability and robustness, especially when working in industrial environments, and must provide accurate results even under changing conditions such as luminance or noise. In order to exploit the autonomic approach in a machine vision application, we believe the architecture of the system must be designed using a set of orthogonal modules. In this paper, we describe how autonomic computing techniques can be applied to machine vision systems, using as an example a real application: 3D reconstruction in harsh industrial environments based on laser range finding. The application is based on modules with different responsibilities at three layers: image acquisition and processing (low level), monitoring (middle level), and supervision (high level). High-level modules supervise the execution of low-level modules. Based on the information gathered by mid-level modules, they regulate low-level modules in order to optimize the global quality of service, and tune the module parameters based on operational conditions and on the environment. Regulation actions involve

  17. Autonomous control systems - Architecture and fundamental issues

    NASA Technical Reports Server (NTRS)

    Antsaklis, P. J.; Passino, K. M.; Wang, S. J.

    1988-01-01

    A hierarchical functional autonomous controller architecture is introduced. In particular, the architecture for the control of future space vehicles is described in detail; it is designed to ensure the autonomous operation of the control system, and it allows interaction with the pilot and crew/ground station, and with the systems on board the autonomous vehicle. The fundamental issues in autonomous control system modeling and analysis are discussed. It is proposed to utilize a hybrid approach to modeling and analysis of autonomous systems. This will incorporate conventional control methods based on differential equations and techniques for the analysis of systems described with a symbolic formalism. In this way, the theory of conventional control can be fully utilized. It is stressed that autonomy is the design requirement, and intelligent control methods appear, at present, to offer some of the necessary tools to achieve autonomy. A conventional approach may evolve and replace some or all of the `intelligent' functions. It is shown that in addition to conventional controllers, the autonomous control system incorporates planning, learning, and FDI (fault detection and identification).

  18. Comparative anatomy of the autonomic nervous system.

    PubMed

    Nilsson, Stefan

    2011-11-16

    This short review aims to point out the general anatomical features of the autonomic nervous systems of non-mammalian vertebrates. In addition it attempts to outline the similarities and also the increased complexity of the autonomic nervous patterns from fish to tetrapods. With the possible exception of the cyclostomes, perhaps the most striking feature of the vertebrate autonomic nervous system is the similarity between the vertebrate classes. An evolution of the complexity of the system can be seen, with the segmental ganglia of elasmobranchs incompletely connected longitudinally, while well developed paired sympathetic chains are present in teleosts and the tetrapods. In some groups the sympathetic chains may be reduced (dipnoans and caecilians), and have yet to be properly described in snakes. Cranial autonomic pathways are present in the oculomotor (III) and vagus (X) nerves of gnathostome fish and the tetrapods, and with the evolution of salivary and lachrymal glands in the tetrapods, also in the facial (VII) and glossopharyngeal (IX) nerves.

  19. Autonomous underwater pipeline monitoring navigation system

    NASA Astrophysics Data System (ADS)

    Mitchell, Byrel; Mahmoudian, Nina; Meadows, Guy

    2014-06-01

    This paper details the development of an autonomous motion-control and navigation algorithm for an autonomous underwater vehicle, the Ocean Server IVER3, to track long linear features such as underwater pipelines. As part of this work, the Nonlinear and Autonomous Systems Laboratory (NAS Lab) developed an algorithm that utilizes inputs from the vehicle's state-of-the-art sensor package, which includes digital imaging, digital 3-D sidescan sonar, and acoustic Doppler current profilers. The resulting algorithms should tolerate real-world waterways with episodic strong currents, low visibility, high sediment content, and a variety of small and large vessel traffic.

  20. Open multiagent architecture extended to distributed autonomous robotic systems

    NASA Astrophysics Data System (ADS)

    Sellem, Philippe; Amram, Eric; Luzeaux, Dominique

    2000-07-01

    Our research deals with the design and experimental evaluation of a control architecture for an autonomous outdoor mobile robot which uses mainly vision for perception. In this case of a single robot, we have designed a hybrid architecture with an attention mechanism that allows dynamic selection of perception processes. Building on this work, we have developed an open multi-agent architecture for standard multi-task operating systems, using the C++ programming language and Posix threads. Our implementation features efficient and fully generic messages between agents, automatic acknowledgement receipts, and built-in synchronization capabilities. Knowledge is distributed among robots according to a collaborative scheme: every robot builds its own representation of the world and shares it with others. Pieces of information are exchanged when decisions have to be made. Experiments are to be conducted with two outdoor ActiveMedia Pioneer AT mobile robots. Distributed perception, using mainly vision but also ultrasound, will serve as proof of concept.

  1. Autonomous Attitude Determination System (AADS). Volume 1: System description

    NASA Technical Reports Server (NTRS)

    Saralkar, K.; Frenkel, Y.; Klitsch, G.; Liu, K. S.; Lefferts, E.; Tasaki, K.; Snow, F.; Garrahan, J.

    1982-01-01

    Information necessary to understand the Autonomous Attitude Determination System (AADS) is presented. Topics include AADS requirements, program structure, algorithms, and system generation and execution.

  2. Sensorpedia: Information Sharing Across Autonomous Sensor Systems

    SciTech Connect

    Gorman, Bryan L; Resseguie, David R; Tomkins-Tinch, Christopher H

    2009-01-01

    The concept of adapting social media technologies is introduced as a means of achieving information sharing across autonomous sensor systems. Historical examples of interoperability as an underlying principle in loosely-coupled systems are compared and contrasted with corresponding tightly-coupled, integrated systems. Examples of ad hoc information sharing solutions based on Web 2.0 social networks, mashups, blogs, wikis, and data tags are presented and discussed. The underlying technologies of these solutions are isolated and defined, and Sensorpedia is presented as a formalized application for implementing sensor information sharing across large-scale enterprises with incompatible autonomous sensor systems.

  3. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  4. COHERENT LASER VISION SYSTEM (CLVS) OPTION PHASE

    SciTech Connect

    Robert Clark

    1999-11-18

    The purpose of this research project was to develop a prototype fiber-optic based Coherent Laser Vision System (CLVS) suitable for DOE's EM Robotic program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update the dimensional spatial data on the order of once per second. The system has total immunity to ambient lighting conditions.

  5. Shared vision and autonomous motivation vs. financial incentives driving success in corporate acquisitions

    PubMed Central

    Clayton, Byron C.

    2015-01-01

    Successful corporate acquisitions require their managers to achieve substantial performance improvements in order to sufficiently cover acquisition premiums, the expected return of debt and equity investors, and the additional resources needed to capture synergies and accelerate growth. Acquirers understand that achieving the performance improvements necessary to cover these costs and create value for investors will most likely require a significant effort from mergers and acquisitions (M&A) management teams. This understanding drives the common and longstanding practice of offering hefty performance incentive packages to key managers, assuming that financial incentives will induce in-role and extra-role behaviors that drive organizational change and growth. The present study debunks the assumptions of this common M&A practice, providing quantitative evidence that shared vision and autonomous motivation are far more effective drivers of managerial performance than financial incentives. PMID:25610406

  6. Environmental Recognition and Guidance Control for Autonomous Vehicles using Dual Vision Sensor and Applications

    NASA Astrophysics Data System (ADS)

    Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki

    We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor, together with navigation control based on binocular images. As an application of these techniques, we aim to develop a guide robot that can play the role of a guide dog, aiding people such as the visually impaired or the aged. This paper presents a recognition algorithm that finds the line formed by a series of Braille blocks and the boundary line between a sidewalk and a roadway where a difference in level exists, using binocular images obtained from a pair of parallel-arrayed CCD cameras. It also presents a tracking algorithm with which the guide robot traces along the series of Braille blocks and avoids obstacles and unsafe areas in the path of the person it guides.

  7. Robot soccer anywhere: achieving persistent autonomous navigation, mapping, and object vision tracking in dynamic environments

    NASA Astrophysics Data System (ADS)

    Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques

    2005-06-01

    The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.

  8. Shared vision and autonomous motivation vs. financial incentives driving success in corporate acquisitions.

    PubMed

    Clayton, Byron C

    2014-01-01

    Successful corporate acquisitions require their managers to achieve substantial performance improvements in order to sufficiently cover acquisition premiums, the expected return of debt and equity investors, and the additional resources needed to capture synergies and accelerate growth. Acquirers understand that achieving the performance improvements necessary to cover these costs and create value for investors will most likely require a significant effort from mergers and acquisitions (M&A) management teams. This understanding drives the common and longstanding practice of offering hefty performance incentive packages to key managers, assuming that financial incentives will induce in-role and extra-role behaviors that drive organizational change and growth. The present study debunks the assumptions of this common M&A practice, providing quantitative evidence that shared vision and autonomous motivation are far more effective drivers of managerial performance than financial incentives.

  9. Far and proximity maneuvers of a constellation of service satellites and autonomous pose estimation of customer satellite using machine vision

    NASA Astrophysics Data System (ADS)

    Arantes, Gilberto, Jr.; Marconi Rocco, Evandro; da Fonseca, Ijar M.; Theil, Stephan

    2010-05-01

    Space robotics has a substantial interest in achieving on-orbit satellite servicing operations autonomously, e.g. rendezvous and docking/berthing (RVD) with customer and malfunctioning satellites. An on-orbit servicing vehicle requires the ability to estimate position and attitude in situations where the target is uncooperative. Such a situation arises when the target is damaged. In this context, this work presents a robust autonomous pose estimation system applied to RVD missions. Our approach is based on computer vision, using a single camera and some prior knowledge of the target, i.e. the customer spacecraft. A rendezvous mission analysis tool for an autonomous service satellite has been developed and is presented, covering far maneuvers (e.g., distances above 1 km from the target) and close maneuvers. The far operations consist of orbit transfer using the Lambert formulation. The close operations include the inspection phase (during which the pose estimation is computed) and the final approach phase. Our approach is based on the Lambert problem for far maneuvers, and the Hill equations are used to simulate and analyze the approach and final trajectory between the target and the chaser during the last phase of the rendezvous operation. A method for optimally estimating the relative orientation and position between the camera system and the target is presented in detail. The target is modelled as an assembly of points. The pose of the target is represented by a dual quaternion in order to develop a simple quadratic error function, so that the pose estimation task becomes a least-squares minimization problem. The pose problem is solved, and several methods of non-linear least-squares optimization (Newton, Newton-Gauss, and Levenberg-Marquardt) are compared and discussed in terms of accuracy and computational cost.
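
    For the close maneuvers, the Hill (Clohessy-Wiltshire) equations admit a closed-form solution that propagates the chaser state relative to the target; a sketch is given below. The orbital rate corresponds to an assumed 90-minute low Earth orbit, and the example initial state is illustrative.

import numpy as np

# Sketch of Clohessy-Wiltshire (Hill) relative-motion propagation.
# x is radial, y along-track, z cross-track; n is the target's mean motion.

def cw_propagate(state0, t, n=2 * np.pi / 5400.0):
    """Propagate [x, y, z, vx, vy, vz] (m, m/s) forward by t seconds."""
    x0, y0, z0, vx0, vy0, vz0 = state0
    s, c = np.sin(n * t), np.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = 6 * (s - n * t) * x0 + y0 - (2 / n) * (1 - c) * vx0 + (4 * s - 3 * n * t) / n * vy0
    z = c * z0 + (s / n) * vz0
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = 6 * n * (c - 1) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
    vz = -n * s * z0 + c * vz0
    return np.array([x, y, z, vx, vy, vz])

# Example: a chaser 100 m behind the target, drifting freely for 10 minutes.
print(cw_propagate([0.0, -100.0, 0.0, 0.0, 0.0, 0.0], t=600.0))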

  10. Stereo vision and laser odometry for autonomous helicopters in GPS-denied indoor environments

    NASA Astrophysics Data System (ADS)

    Achtelik, Markus; Bachrach, Abraham; He, Ruijie; Prentice, Samuel; Roy, Nicholas

    2009-05-01

    This paper presents our solution for enabling a quadrotor helicopter to autonomously navigate unstructured and unknown indoor environments. We compare two sensor suites, specifically a laser rangefinder and a stereo camera. Laser and camera sensors are both well-suited for recovering the helicopter's relative motion and velocity. Because they use different cues from the environment, each sensor has its own set of advantages and limitations that are complementary to the other sensor. Our eventual goal is to integrate both sensors on-board a single helicopter platform, leading to the development of an autonomous helicopter system that is robust to generic indoor environmental conditions. In this paper, we present results in this direction, describing the key components for autonomous navigation using either of the two sensors separately.

  11. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
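
    The core pixelation step, reducing a megapixel camera frame to the tens-to-hundreds of electrodes of the implant, can be sketched as block averaging preceded by a simple contrast boost. The electrode grid size and the enhancement kernel below are illustrative assumptions, not parameters of AVS(2).

import numpy as np

# Sketch: edge/contrast enhancement followed by block averaging down to an
# electrode-array-sized stimulation pattern. Grid size is an assumption.

def pixelate(frame, grid=(10, 6)):
    """Downsample a grayscale frame (H x W) to a grid matching the electrode array."""
    h, w = frame.shape
    gh, gw = grid
    # Crop so the frame divides evenly into grid blocks, then average each block.
    frame = frame[: h - h % gh, : w - w % gw]
    blocks = frame.reshape(gh, frame.shape[0] // gh, gw, frame.shape[1] // gw)
    return blocks.mean(axis=(1, 3))

def enhance_edges(frame):
    """Crude contrast/edge boost: amplify deviation from a local mean (unsharp-mask style)."""
    blurred = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0) +
               np.roll(frame, 1, 1) + np.roll(frame, -1, 1)) / 4.0
    return np.clip(frame + 2.0 * (frame - blurred), 0, 255)

camera_frame = np.random.randint(0, 256, (480, 640)).astype(float)
stimulation_pattern = pixelate(enhance_edges(camera_frame), grid=(10, 6))
print(stimulation_pattern.shape)    # (10, 6) values, one per electrode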

  12. Autonomous Robotic Refueling System (ARRS) for rapid aircraft turnaround

    NASA Astrophysics Data System (ADS)

    Williams, O. R.; Jackson, E.; Rueb, K.; Thompson, B.; Powell, K.

    An autonomous robotic refuelling system is being developed to achieve rapid aircraft turnaround, notably during combat operations. The proposed system includes a gantry positioner with sufficient reach to position a robotic arm that performs the refuelling tasks; a six degree of freedom manipulator equipped with a remote center of compliance, torque sensor, and a gripper that can handle standard tools; a computer vision system to locate and guide the refuelling nozzle, inspect the nozzle, and avoid collisions; and an operator interface with video and graphics display. The control system software will include components designed for trajectory planning and generation, collision detection, sensor interfacing, sensory processing, and human interfacing. The robotic system will be designed so that upgrading to perform additional tasks will be relatively straightforward.

  13. Autonomous proximity operations using machine vision for trajectory control and pose estimation

    NASA Technical Reports Server (NTRS)

    Cleghorn, Timothy F.; Sternberg, Stanley R.

    1991-01-01

    A machine vision algorithm was developed which permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation running on an 80386-based personal computer, using a ModelMATE CAD package to render the target vehicle. However, the algorithm is simple enough that, following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single-camera images of the target vehicle, upon which radial transforms are performed. Selected points of the resulting radial signatures are fed through a decision tree to determine whether the signature matches the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicle can be calculated, and adjustments made in the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.
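
    The radial-transform idea can be sketched as follows: from a silhouette of the target, the distance from the centroid to the boundary is sampled over evenly spaced ray angles, and the resulting one-dimensional signature is what gets compared against stored references (or fed to a decision tree). The ray count and the use of a binary silhouette are illustrative assumptions.

import numpy as np

# Sketch of a radial signature: maximum centroid-to-boundary distance per ray angle.

def radial_signature(mask, n_rays=64):
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                      # silhouette centroid
    angles = np.arctan2(ys - cy, xs - cx)
    radii = np.hypot(ys - cy, xs - cx)
    signature = np.zeros(n_rays)
    bins = ((angles + np.pi) / (2 * np.pi) * n_rays).astype(int) % n_rays
    for b, r in zip(bins, radii):
        signature[b] = max(signature[b], r)            # farthest boundary point per ray
    return signature / max(signature.max(), 1e-9)      # scale-normalized

# Example: signature of a rectangular silhouette.
mask = np.zeros((200, 200), bool)
mask[60:140, 50:150] = True
print(radial_signature(mask)[:8])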

  14. Autonomous Organization-Based Adaptive Information Systems

    DTIC Science & Technology

    2005-01-01

    intentional Multi-agent System (MAS) approach [10]. While these approaches are functional AIS systems, they lack the ability to reorganize and adapt... extended a multi-agent system with a self-reorganizing architecture to create an autonomous, adaptive information system. Design: Our organization-based... goals. An advantage of a multi-agent system using the organization-theoretic model is its extensibility. The practical, numerical limits to the

  15. Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    NASA Technical Reports Server (NTRS)

    Zornetzer, Steve; Gage, Douglas

    2005-01-01

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

  16. Measures of Autonomic Nervous System Regulation

    DTIC Science & Technology

    2011-04-01

    Fragments only: the record lists autonomic measure categories (cortisol, galvanic skin response (GSR), gastrointestinal, pupillary response, respiratory, salivary amylase, vascular, manipulative body-based) and notes that the ANS Measures Table in Appendix A provides a summary of over fifty measurement tools for autonomic nervous system regulation, including salivary amylase measurement.

  17. Autonomous microfluidic system for phosphate detection.

    PubMed

    McGraw, Christina M; Stitzel, Shannon E; Cleary, John; Slater, Conor; Diamond, Dermot

    2007-02-28

    Miniaturization of analytical devices through the advent of microfluidics and micro total analysis systems is an important step forward for applications such as medical diagnostics and environmental monitoring. The development of field-deployable instruments requires that the entire system, including all necessary peripheral components, be miniaturized and packaged in a portable device. A sensor for long-term monitoring of phosphate levels has been developed that incorporates sampling, reagent and waste storage, detection, and wireless communication into a complete, miniaturized system. The device employs a low-power detection and communication system, so the entire instrument can operate autonomously for 7 days on a single rechargeable, 12 V battery. In addition, integration of a wireless communication device allows the instrument to be controlled and results to be downloaded remotely. This autonomous system has a limit of detection of 0.3 mg/L and a linear dynamic range between 0 and 20 mg/L.

  18. Flight Testing an Integrated Synthetic Vision System

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream GV aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.

  19. Exercise and the autonomic nervous system.

    PubMed

    Fu, Qi; Levine, Benjamin D

    2013-01-01

    The autonomic nervous system plays a crucial role in the cardiovascular response to acute (dynamic) exercise in animals and humans. During exercise, oxygen uptake is a function of the triple-product of heart rate and stroke volume (i.e., cardiac output) and arterial-mixed venous oxygen difference (the Fick principle). The degree to which each of the variables can increase determines maximal oxygen uptake (VO2max). Both "central command" and "the exercise pressor reflex" are important in determining the cardiovascular response and the resetting of the arterial baroreflex during exercise to precisely match systemic oxygen delivery with metabolic demand. In general, patients with autonomic disorders have low levels of VO2max, indicating reduced physical fitness and exercise capacity. Moreover, the vast majority of these patients have blunted or abnormal cardiovascular responses to exercise, especially during maximal exercise. There is now convincing evidence that some of the protective and therapeutic effects of chronic exercise training are related to its impact on the autonomic nervous system. Additionally, training-induced improvements in vascular function, blood volume expansion, cardiac remodeling, insulin resistance, and renal-adrenal function may also contribute to the protection and treatment of cardiovascular, metabolic, and autonomic disorders. Exercise training also improves mental health, helps to prevent depression, and promotes or maintains positive self-esteem. Moderate-intensity exercise at least 30 minutes per day and at least 5 days per week is recommended for the vast majority of people. Supervised exercise training is preferable to maximize functional capacity, and may be particularly important for patients with autonomic disorders.

  20. Development of a Commercially Viable, Modular Autonomous Robotic Systems for Converting any Vehicle to Autonomous Control

    NASA Technical Reports Server (NTRS)

    Parish, David W.; Grabbe, Robert D.; Marzwell, Neville I.

    1994-01-01

    A Modular Autonomous Robotic System (MARS), consisting of a modular autonomous vehicle control system that can be retrofitted onto any vehicle to convert it to autonomous control and that supports a modular payload for multiple applications, is being developed. The MARS design is scalable, reconfigurable, and cost effective due to the use of modern open system architecture design methodologies, including serial control bus technology to simplify system wiring and enhance scalability. The design is augmented with modular, object-oriented (C++) software implementing a hierarchy of five levels of control, including teleoperated, continuous guidepath following, periodic guidepath following, absolute position autonomous navigation, and relative position autonomous navigation. The present effort is focused on producing a system that is commercially viable for routine autonomous patrolling of known, semistructured environments, such as environmental monitoring of chemical and petroleum refineries, exterior physical security and surveillance, perimeter patrolling, and intrafacility transport applications.

  1. Autonomous Flight Safety System - Phase III

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Autonomous Flight Safety System (AFSS) is a joint KSC and Wallops Flight Facility project that uses tracking and attitude data from onboard Global Positioning System (GPS) and inertial measurement unit (IMU) sensors and configurable rule-based algorithms to make flight termination decisions. AFSS objectives are to increase launch capabilities by permitting launches from locations without range safety infrastructure, reduce costs by eliminating some downrange tracking and communication assets, and reduce the reaction time for flight termination decisions.

  2. Autonomous microexplosives subsurface tracing system final report.

    SciTech Connect

    Engler, Bruce Phillip; Nogan, John; Melof, Brian Matthew; Uhl, James Eugene; Dulleck, George R., Jr.; Ingram, Brian V.; Grubelich, Mark Charles; Rivas, Raul R.; Cooper, Paul W.; Warpinski, Norman Raymond; Kravitz, Stanley H.

    2004-04-01

    The objective of the autonomous micro-explosive subsurface tracing system is to image the location and geometry of hydraulically induced fractures in subsurface petroleum reservoirs. This system is based on the insertion of a swarm of autonomous micro-explosive packages during the fracturing process, with subsequent triggering of the energetic material to create an array of micro-seismic sources that can be detected and analyzed using existing seismic receiver arrays and analysis software. The project included investigations of energetic mixtures, triggering systems, package size and shape, and seismic output. Given the current absence of any technology capable of such high-resolution mapping of subsurface structures, this technology has the potential for major impact on the petroleum industry, which spends approximately $1 billion per year on hydraulic fracturing operations in the United States alone.

  3. A multilayer perceptron hazard detector for vision-based autonomous planetary landing

    NASA Astrophysics Data System (ADS)

    Lunghi, Paolo; Ciarambino, Marco; Lavagna, Michèle

    2016-07-01

    A hazard detection and target selection algorithm for autonomous spacecraft planetary landing, based on Artificial Neural Networks (ANNs), is presented. From a single image of the landing area, acquired by a VIS camera during the descent, the system computes a hazard map, which is exploited to select the best target in terms of safety, guidance constraints, and scientific interest. The generalization properties of ANNs allow the system to operate correctly even in conditions not explicitly considered during calibration. The network architecture design, training, verification, and results are critically presented. Performance is assessed in terms of recognition accuracy and selected-target safety. Results for a lunar landing scenario are discussed to highlight the effectiveness of the system.
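    A minimal sketch of how such a per-pixel hazard classifier and safe-target selector could look is given below. The feature choices, network weights, and science-value map are hypothetical placeholders; the actual network architecture and training procedure are those described in the paper and are not reproduced here.

```python
import numpy as np

def mlp_hazard_map(features, W1, b1, W2, b2):
    """Score per-pixel feature vectors with a small two-layer perceptron.

    features: (H, W, F) array of per-pixel descriptors (e.g. local slope,
    texture, shadow cues -- hypothetical choices for this sketch).
    Returns an (H, W) map of hazard probabilities in [0, 1].
    """
    x = features.reshape(-1, features.shape[-1])   # (H*W, F)
    h = np.tanh(x @ W1 + b1)                       # hidden layer
    z = h @ W2 + b2                                # (H*W, 1) logits
    return (1.0 / (1.0 + np.exp(-z))).reshape(features.shape[:2])

def select_target(hazard, science_value, max_hazard=0.2):
    """Pick the safest pixel among those under a hazard threshold,
    favouring higher (hypothetical) science value."""
    score = np.where(hazard <= max_hazard, science_value - hazard, -np.inf)
    return np.unravel_index(np.argmax(score), score.shape)
```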

  4. Mission planning for autonomous systems

    NASA Technical Reports Server (NTRS)

    Pearson, G.

    1987-01-01

    Planning is a necessary task for intelligent, adaptive systems operating independently of human controllers. A mission planning system that performs task planning by decomposing a high-level mission objective into subtasks and synthesizing a plan for those tasks at varying levels of abstraction is discussed. Researchers use a blackboard architecture to partition the search space and direct the focus of attention of the planner. Using advanced planning techniques, they can control plan synthesis for the complex planning tasks involved in mission planning.

  5. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    PubMed Central

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540
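    A rough sketch of the inverse perspective mapping (IPM) step is shown below using OpenCV's Python bindings: the camera view is warped onto the floor plane, and pixels that deviate from a simple floor-colour model are flagged. The calibration points, grid size, and colour test are assumptions for illustration; the paper's Markov random field segmentation and learned floor appearance model are not reproduced here.

```python
import cv2
import numpy as np

# Four image points on the floor plane and the bird's-eye grid pixels they map
# to (all values are placeholders; in practice they come from camera
# calibration or measured floor markers).
img_pts = np.float32([[220, 470], [420, 470], [395, 300], [245, 300]])
top_pts = np.float32([[175, 370], [225, 370], [225, 250], [175, 250]])
H, _ = cv2.findHomography(img_pts, top_pts)

def ipm(frame):
    """Inverse perspective mapping: warp the camera view onto the floor plane."""
    return cv2.warpPerspective(frame, H, (400, 400))

def obstacle_mask(frame, floor_mean_hsv, tol=(10, 60, 60)):
    """Flag bird's-eye pixels whose colour deviates from a simple floor model.

    Stands in for the paper's Markov-random-field segmentation; a fuller
    system would also use the geometric distortion obstacles show under IPM.
    """
    hsv = cv2.cvtColor(ipm(frame), cv2.COLOR_BGR2HSV).astype(np.int16)
    diff = np.abs(hsv - np.asarray(floor_mean_hsv, dtype=np.int16))
    return np.any(diff > np.asarray(tol), axis=2).astype(np.uint8) * 255
```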

  6. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.

    PubMed

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan

    2016-03-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.

  7. Inertial Navigation Theory (Autonomous Systems)

    DTIC Science & Technology

    1974-12-17

    [Fragmentary OCR excerpt.] ... component meters, structurally connected into a single block so that their axes of sensitivity form an orthogonal trihedron, are employed ... the newtonometers are located ... 1.4.3. General principles of constructing inertial navigational systems. A typical block diagram ...

  8. Why Computer-Based Systems Should be Autonomic

    NASA Technical Reports Server (NTRS)

    Sterritt, Roy; Hinchey, Mike

    2005-01-01

    The objective of this paper is to discuss why computer-based systems should be autonomic, where autonomicity implies self-managing, often conceptualized in terms of being self-configuring, self-healing, self-optimizing, self-protecting and self-aware. We look at motivations for autonomicity, examine how more and more systems are exhibiting autonomic behavior, and finally look at future directions.

  9. System for autonomous monitoring of bioagents

    SciTech Connect

    Langlois, Richard G.; Milanovich, Fred P.; Colston, Jr, Billy W.; Brown, Steve B.; Masquelier, Don A.; Mariella, Jr., Raymond P.; Venkateswaran, Kodomudi

    2015-06-09

    An autonomous monitoring system for monitoring for bioagents. A collector gathers the air, water, soil, or substance being monitored. A sample preparation means for preparing a sample is operatively connected to the collector. A detector for detecting the bioagents in the sample is operatively connected to the sample preparation means. One embodiment of the present invention includes confirmation means for confirming the bioagents in the sample.

  10. Lethality and Autonomous Systems: The Roboticist Demographic

    DTIC Science & Technology

    2008-01-01

    [Fragmentary excerpt from the survey report.] ... humanoid (22%), and other (23%); 9) Media Influence: only 18% said that media had a strong or very strong influence on their attitude to robots ... and whether certain emotions would be appropriate in a military robot. The Wars question was worded as follows: To what extent do you think ...

  11. Autonomous omnidirectional spacecraft antenna system

    NASA Technical Reports Server (NTRS)

    Taylor, T. H.

    1983-01-01

    The development of a low-gain Electronically Switchable Spherical Array Antenna is discussed. This antenna provides roughly 7 dBic gain for receive/transmit operation between user satellites and the Tracking and Data Relay Satellite System. When used as a pair, the antennas provide spherical coverage. The antenna was tested in its primary operating modes: directed beam, retrodirective, and omnidirectional.

  12. Sustainable and Autonomic Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Sterritt, Roy; Rouff, Christopher; Rash, James L.; Truszkowski, Walter

    2006-01-01

    Visions for future space exploration have long term science missions in sight, resulting in the need for sustainable missions. Survivability is a critical property of sustainable systems and may be addressed through autonomicity, an emerging paradigm for self-management of future computer-based systems based on inspiration from the human autonomic nervous system. This paper examines some of the ongoing research efforts to realize these survivable systems visions, with specific emphasis on developments in Autonomic Policies.

  13. Autonomous grain combine control system

    DOEpatents

    Hoskinson, Reed L.; Kenney, Kevin L.; Lucas, James R.; Prickel, Marvin A.

    2013-06-25

    A system for controlling a grain combine having a rotor/cylinder, a sieve, a fan, a concave, a feeder, a header, an engine, and a control system. The feeder of the grain combine is engaged and the header is lowered. A separator loss target, engine load target, and a sieve loss target are selected. Grain is harvested with the lowered header passing the grain through the engaged feeder. Separator loss, sieve loss, engine load and ground speed of the grain combine are continuously monitored during the harvesting. If the monitored separator loss exceeds the selected separator loss target, the speed of the rotor/cylinder, the concave setting, the engine load target, or a combination thereof is adjusted. If the monitored sieve loss exceeds the selected sieve loss target, the speed of the fan, the size of the sieve openings, or the engine load target is adjusted.

  14. Seizures and brain regulatory systems: Consciousness, sleep, and autonomic systems

    PubMed Central

    Sedigh-Sarvestani, Madineh; Blumenfeld, Hal; Loddenkemper, Tobias; Bateman, Lisa M

    2014-01-01

    Research into the physiological underpinnings of epilepsy has revealed reciprocal relationships between seizures and the activity of several regulatory systems in the brain, including those governing sleep, consciousness and autonomic functions. This review highlights recent progress in understanding and utilizing the relationships between seizures and the arousal or consciousness system, the sleep-wake and associated circadian system, and the central autonomic network. PMID:25233249

  15. Agent Technology, Complex Adaptive Systems, and Autonomic Systems: Their Relationships

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Rash, James; Rouff, Christopher; Hinchey, Mike

    2004-01-01

    To reduce the cost of future spaceflight missions and to perform new science, NASA has been investigating autonomous ground and space flight systems. These cost-reduction goals are further complicated by nanosatellites planned for future science data-gathering, which will have large communications delays and will at times be out of contact with ground control for extended periods. This paper describes two prototype agent-based systems, the Lights-out Ground Operations System (LOGOS) and the Agent Concept Testbed (ACT), developed at NASA Goddard Space Flight Center (GSFC) to demonstrate autonomous operations of future space flight missions. The paper discusses the architecture of the two agent-based systems, operational scenarios of both, and the two systems' autonomic properties.

  16. MARVEL: A system that recognizes world locations with stereo vision

    SciTech Connect

    Braunegg, D.J. (Artificial Intelligence Lab.)

    1993-06-01

    MARVEL is a system that supports autonomous navigation by building and maintaining its own models of world locations and using these models and stereo vision input to recognize its location in the world and its position and orientation within that location. The system emphasizes the use of simple, easily derivable features for recognition, whose aggregate identifies a location, instead of complex features that also require recognition. MARVEL is designed to be robust with respect to input errors and to respond to a gradually changing world by updating its world location models. In over 1,000 recognition tests using real-world data, MARVEL yielded a false negative rate under 10% with zero false positives.

  17. The autonomic nervous system and perinatal metabolism.

    PubMed

    Milner, R D; De Gasparo, M

    1981-01-01

    The development of the autonomic nervous system in relation to perinatal metabolism is reviewed with particular attention given to the adipocyte, hepatocyte and the A and B cells of the islets of Langerhans. Adrenergic receptors develop in the B cell independently of normal innervation and by the time of birth, in most species studied, the pancreas, liver and adipose tissue respond appropriately to autonomic signals. Birth is associated with a huge surge in circulating catecholamines which is probably responsible for the early postnatal rise in free fatty acids and glucagon concentrations in plasma. beta-Blocking drugs such as propranolol have an adverse effect on fetal growth and neonatal metabolism, being responsible for hypoglycemia and for impairing the thermogenic response to cold exposure. beta-Mimetic drugs are commonly used to prevent premature labour and may help the fetus in other ways, for example, by improving the placental blood supply and the delivery of nutrients by increasing maternal fat and carbohydrate mobilization.

  18. Autonomic complications following central nervous system injury.

    PubMed

    Baguley, Ian J

    2008-11-01

    Severe sympathetic overactivity occurs in several conditions that are recognized as medical emergencies. Following central nervous system injury, a small proportion of individuals develop severe paroxysmal sympathetic and motor overactivity. These individuals have a high attendant risk of unnecessary secondary morbidity. Following acquired brain injury, the syndrome is known by a number of names including dysautonomia and sympathetic storm. Dysautonomia is currently a diagnosis of exclusion and often goes unrecognized. The evidence base for management is almost entirely anecdotal in nature; there has been little structured or prospective research. In contrast, the evidence base for autonomic dysreflexia following spinal cord injury is much stronger, with level 1 evidence for many treatment interventions. This review presents a current understanding of each condition and suggests simple management protocols. With the marked disparity in the literature for the two conditions, the main focus is on the literature for dysautonomia. The similarity between these two conditions and the other autonomic emergency conditions is discussed.

  19. Autonomous Flight Safety System Road Test

    NASA Technical Reports Server (NTRS)

    Simpson, James C.; Zoemer, Roger D.; Forney, Chris S.

    2005-01-01

    On February 3, 2005, Kennedy Space Center (KSC) conducted the first Autonomous Flight Safety System (AFSS) test on a moving vehicle -- a van driven around the KSC industrial area. A subset of the Phase III design was used, consisting of a single computer, GPS receiver, and GPS antenna. This report describes the road test and its results. AFSS is a joint KSC and Wallops Flight Facility project that is in its third phase of development. AFSS is an independent subsystem intended for use with Expendable Launch Vehicles that uses tracking data from redundant onboard sensors to autonomously make flight termination decisions using software-based rules implemented on redundant flight processors. The goals of this project are to increase capabilities by allowing launches from locations that do not have or cannot afford extensive ground-based range safety assets, to decrease range costs, and to decrease reaction time for special situations.

  20. Development of an Autonomous Pathogen Detection System

    SciTech Connect

    Langlois, S.; Brown, S.; Colston, B.; Jones, L.; Masquelier, D.; Meyer, P.; McBride, M.; Nasarabad, S.; Ramponi, A.J.; Venkateswaran, K.; Milanovich, F.

    2000-10-12

    An Autonomous Pathogen Detection System (APDS) is being designed and evaluated for use in domestic counter-terrorism. The goal is a fully automated system that utilizes both flow cytometry and polymerase chain reaction (PCR) to continuously monitor the air for BW pathogens in major buildings or high profile events. A version 1 APDS system consisting of an aerosol collector, a sample preparation subsystem, and a flow cytometer for detecting the antibody-labeled target organisms has been completed and evaluated. Improved modules are under development for a version 2 APDS including a Lawrence Livermore National Laboratory-designed aerosol preconcentrator, a multiplex flow cytometer, and a flow-through PCR detector.

  1. A Design Methodology For Industrial Vision Systems

    NASA Astrophysics Data System (ADS)

    Batchelor, B. G.; Waltz, F. M.; Snyder, M. A.

    1988-11-01

    The cost of design, rather than that of target system hardware, represents the principal factor inhibiting the adoption of machine vision systems by manufacturing industry. To reduce design costs to a minimum, a number of software and hardware aids have been developed or are currently being built by the authors. These design aids are as follows: a. An expert system for giving advice about which image acquisition techniques (i.e. lighting/viewing techniques) might be appropriate in a given situation. b. A program to assist in the selection and setup of camera lenses. c. A rich repertoire of image processing procedures, integrated with the AI language Prolog. This combination (called ProVision) provides a facility for experimenting with intelligent image processing techniques and is intended to allow rapid prototyping of algorithms and/or heuristics. d. Fast image processing hardware, capable of implementing commands in the ProVision language. The speed of operation of this equipment is sufficiently high for it to be used, without modification, in many industrial applications. Where this is not possible, even higher execution speed may be achieved by adding extra modules to the processing hardware. In this way, it is possible to trade speed against the cost of the target system hardware. New and faster implementations of a given algorithm/heuristic can usually be achieved with the expenditure of only a small effort. Throughout this article, the emphasis is on designing an industrial vision system in a smooth and effortless manner. In order to illustrate our main thesis that the design of industrial vision systems can be made very much easier through the use of suitable utilities, the article concludes with a discussion of a case study: the dissection of tiny plants using a visually controlled robot.

  2. Vision-based map building and trajectory planning to enable autonomous flight through urban environments

    NASA Astrophysics Data System (ADS)

    Watkins, Adam S.

    The desire to use Unmanned Air Vehicles (UAVs) in a variety of complex missions has motivated the need to increase the autonomous capabilities of these vehicles. This research presents autonomous vision-based mapping and trajectory planning strategies for a UAV navigating in an unknown urban environment. It is assumed that the vehicle's inertial position is unknown because GPS is unavailable due to environmental occlusions or jamming by hostile military assets. Therefore, the environment map is constructed from noisy sensor measurements taken at uncertain vehicle locations. Under these restrictions, map construction becomes a state estimation task known as the Simultaneous Localization and Mapping (SLAM) problem. Solutions to the SLAM problem endeavor to estimate the state of a vehicle relative to concurrently estimated environmental landmark locations. The presented work focuses specifically on SLAM for aircraft, denoted as airborne SLAM, where the vehicle is capable of six degree of freedom motion characterized by highly nonlinear equations of motion. The airborne SLAM problem is solved with a variety of filters based on the Rao-Blackwellized particle filter. Additionally, the environment is represented as a set of geometric primitives that are fit to the three-dimensional points reconstructed from gathered onboard imagery. The second half of this research builds on the mapping solution by addressing the problem of trajectory planning for optimal map construction. Optimality is defined in terms of maximizing environment coverage in minimum time. The planning process is decomposed into two phases of global navigation and local navigation. The global navigation strategy plans a coarse, collision-free path through the environment to a goal location that will take the vehicle to previously unexplored or incompletely viewed territory. The local navigation strategy plans detailed, collision-free paths within the currently sensed environment that maximize local coverage

  3. Mobile robot on-board vision system

    SciTech Connect

    McClure, V.W.; Nai-Yung Chen.

    1993-06-15

    An automatic robot system is described comprising: an AGV for transporting and transferring work pieces; a control computer on board the AGV; a process machine for working on work pieces; and a flexible robot arm with a gripper comprising two gripper fingers at one end of the arm. The robot arm and gripper are controllable by the control computer for engaging a work piece, picking it up, and setting it down and releasing it at a commanded location. Locating beacon means mounted on the process machine identify the place on the process machine where work pieces are picked up and set down. Vision means, including a camera fixed in the coordinate system of the gripper and attached to the robot arm near the gripper such that the space between the gripper fingers lies within the vision field, detect the locating beacon means and provide the control computer with visual information about their location, from which the computer calculates the pick-up and set-down place on the process machine. This place is a nest means, which also serves to hold a work piece in place while it is worked on. The robot system further comprises nest beacon means, located in the nest means and detectable by the vision means, that inform the control computer whether or not a work piece is present in the nest means.

  4. Zoom Vision System For Robotic Welding

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Hudyma, Russell M.

    1990-01-01

    Rugged zoom lens subsystem proposed for use in along-the-torch vision system of robotic welder. Enables system to adapt, via simple mechanical adjustments, to gas cups of different lengths, electrodes of different protrusions, and/or different distances between end of electrode and workpiece. Unnecessary to change optical components to accommodate changes in geometry. Easy to calibrate with respect to object in view. Provides variable focus and variable magnification.

  5. Vision enhanced navigation for unmanned systems

    NASA Astrophysics Data System (ADS)

    Wampler, Brandon Loy

    A vision-based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a better navigation solution than GPS alone is needed is presented first. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., who dub their algorithm MonoSLAM [1--4]. A new approach using the pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, as opposed to the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
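    The pyramidal Lucas-Kanade tracking step described above can be sketched with OpenCV's Python bindings as follows (the thesis implementation is in C++; detector and tracker parameters here are illustrative, a webcam is assumed to be available, and the EKF-SLAM update that would consume the tracks, as well as re-detection when tracks run out, is omitted).

```python
import cv2

# Detector/tracker parameters are illustrative only.
feature_params = dict(maxCorners=200, qualityLevel=0.01, minDistance=8)
lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_landmarks(prev_gray, next_gray, prev_pts):
    """Propagate landmark image locations between frames with pyramidal LK."""
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None, **lk_params)
    good = status.ravel() == 1
    return prev_pts[good], next_pts[good]

cap = cv2.VideoCapture(0)                              # live webcam feed
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev, mask=None, **feature_params)
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    old, new = track_landmarks(prev, gray, pts)
    # 'old' -> 'new' correspondences would feed the EKF measurement update here.
    prev, pts = gray, new
```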

  6. Missileborne artificial vision system (MAVIS)

    NASA Astrophysics Data System (ADS)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-03-01

    The Naval Air Warfare Center, China Lake has developed a real time, hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed point, RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a Companion board built to support a real time VSB interface to an imaging seeker, a NTSC camera and to other COHO boards. The system is designed to have multiple SIMD machines each performing different Corticomorphic functions. The system level software has been developed which allows a high level description of Corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus or the visual cortex. This real time hardware system is designed to be shrunk into a volume compatible with air launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  7. Verification of Autonomous Systems for Space Applications

    NASA Technical Reports Server (NTRS)

    Brat, G.; Denney, E.; Giannakopoulou, D.; Frank, J.; Jonsson, A.

    2006-01-01

    Autonomous software, especially if it is model-based, can play an important role in future space applications. For example, it can help streamline ground operations, assist in autonomous rendezvous and docking operations, or even help recover from problems (e.g., planners can be used to explore the space of recovery actions for a power subsystem and implement a solution without, or with minimal, human intervention). In general, the exploration capabilities of model-based systems give them great flexibility. Unfortunately, this also makes them unpredictable to our human eyes, both in terms of their execution and their verification. Traditional verification techniques are inadequate for these systems since they are mostly based on testing, which implies a very limited exploration of their behavioral space. In our work, we explore how advanced V&V techniques, such as static analysis, model checking, and compositional verification, can be used to gain trust in model-based systems. We also describe how synthesis can be used in the context of system reconfiguration and in the context of verification.

  8. Missileborne Artificial Vision System (MAVIS)

    NASA Technical Reports Server (NTRS)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-01-01

    Several years ago when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system level computational density. The Naval Air Warfare Center, China Lake, has developed a real time, hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed point, RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real time VSB interface to an imaging seeker, a NTSC camera, and to other COHO boards. The system is designed to have multiple SIMD machines each performing different corticomorphic functions. The system level software has been developed which allows a high level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real time hardware system is designed to be shrunk into a volume compatible with air launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  9. Physiology of the Autonomic Nervous System

    PubMed Central

    2007-01-01

    This manuscript discusses the physiology of the autonomic nervous system (ANS). The following topics are presented: regulation of activity; efferent pathways; sympathetic and parasympathetic divisions; neurotransmitters, their receptors and the termination of their activity; functions of the ANS; and the adrenal medullae. In addition, the application of this material to the practice of pharmacy is of special interest. Two case studies regarding insecticide poisoning and pheochromocytoma are included. The ANS and the accompanying case studies are discussed over 5 lectures and 2 recitation sections during a 2-semester course in Human Physiology. The students are in the first-professional year of the doctor of pharmacy program. PMID:17786266

  10. Malicious Hubs: Detecting Abnormally Malicious Autonomous Systems

    SciTech Connect

    Kalafut, Andrew J.; Shue, Craig A; Gupta, Prof. Minaxi

    2010-01-01

    While many attacks are distributed across botnets, investigators and network operators have recently targeted malicious networks through high profile autonomous system (AS) de-peerings and network shut-downs. In this paper, we explore whether some ASes indeed are safe havens for malicious activity. We look for ISPs and ASes that exhibit disproportionately high malicious behavior using 12 popular blacklists. We find that some ASes have over 80% of their routable IP address space blacklisted and others account for large fractions of blacklisted IPs. Overall, we conclude that examining malicious activity at the AS granularity can unearth networks with lax security or those that harbor cybercrime.

  11. Visual Turing test for computer vision systems

    PubMed Central

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-01-01

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262

  12. Design of optimal correlation filters for hybrid vision systems

    NASA Technical Reports Server (NTRS)

    Rajan, Periasamy K.

    1990-01-01

    Research is underway at the NASA Johnson Space Center on the development of vision systems that recognize objects and estimate their position by processing their images. This is a crucial task in many space applications such as autonomous landing on Mars sites, satellite inspection and repair, and docking of the space shuttle and space station. Currently available algorithms and hardware are too slow to be suitable for these tasks. Electronic digital hardware exhibits superior performance in computing and control; however, it takes too much time to carry out important signal processing operations such as Fourier transformation of image data and calculation of the correlation between two images. Fortunately, because of their inherent parallelism, optical devices can carry out these operations very fast, although they are not quite suitable for computation and control type operations. Hence, investigations are currently being conducted on the development of hybrid vision systems that utilize both optical techniques and digital processing jointly to carry out object recognition tasks in real time. Algorithms for the design of optimal filters for use in hybrid vision systems were developed. Specifically, an algorithm was developed for the design of real-valued frequency-plane correlation filters. Furthermore, research was also conducted on designing correlation filters optimal in the sense of providing maximum signal-to-noise ratio when noise is present in the detectors in the correlation plane. Algorithms were developed for the design of different types of optimal filters: complex filters, real-valued filters, phase-only filters, ternary-valued filters, and coupled filters. This report presents some of these algorithms in detail along with their derivations.
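    For orientation, the sketch below implements the simplest frequency-plane correlation filter (a classical matched filter) with NumPy FFTs and locates a template in a noisy scene by the correlation peak. It is not one of the optimal real-valued, phase-only, ternary, or coupled designs developed in the report, and the toy images are fabricated for illustration.

```python
import numpy as np

def matched_filter(reference):
    """Frequency-plane matched filter: the conjugate of the reference spectrum."""
    return np.conj(np.fft.fft2(reference))

def correlate(scene, filt):
    """Multiply spectra, inverse-transform, and return the centred correlation plane."""
    return np.fft.fftshift(np.abs(np.fft.ifft2(np.fft.fft2(scene) * filt)))

# Toy example: locate a small bright square embedded in a noisy scene.
rng = np.random.default_rng(0)
ref = np.zeros((64, 64))
ref[28:36, 28:36] = 1.0                                  # centred template
scene = 0.1 * rng.standard_normal((64, 64))
scene[10:18, 40:48] += 1.0                               # target placed off-centre
plane = correlate(scene, matched_filter(ref))
peak = np.unravel_index(np.argmax(plane), plane.shape)
print("correlation peak at", peak)   # ~ (14, 44), the target's location,
                                     # because the template is centred
```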

  13. Applications of Augmented Vision Head-Mounted Systems in Vision Rehabilitation

    PubMed Central

    Peli, Eli; Luo, Gang; Bowers, Alex; Rensing, Noa

    2007-01-01

    Vision loss typically affects either the wide peripheral vision (important for mobility), or central vision (important for seeing details). Traditional optical visual aids usually recover the lost visual function, but at a high cost for the remaining visual function. We have developed a novel concept of vision-multiplexing using augmented vision head-mounted display systems to address vision loss. Two applications are discussed in this paper. In the first, minified edge images from a head-mounted video camera are presented on a see-through display providing visual field expansion for people with peripheral vision loss, while still enabling the full resolution of the residual central vision to be maintained. The concept has been applied in daytime and nighttime devices. A series of studies suggested that the system could help with visual search, obstacle avoidance, and nighttime mobility. Subjects were positive in their ratings of device cosmetics and ergonomics. The second application is for people with central vision loss. Using an on-axis aligned camera and display system, central visibility is enhanced with 1:1 scale edge images, while still enabling the wide field of the unimpaired peripheral vision to be maintained. The registration error of the system was found to be low in laboratory testing. PMID:18172511

  14. Progress in building a cognitive vision system

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Yue, Hong

    2016-05-01

    We are building a cognitive vision system for mobile robots that works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion to create a local dynamic spatial model. These local 3D models are composed to create an overall 3D model of the robot and its environment. This approach turns the computer vision problem into a search problem whose goal is the acquisition of sufficient spatial understanding for the robot to succeed at its tasks. The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals. For example, if the goal is to navigate through a room, the model needs to contain any obstacles that would be encountered, giving their approximate positions and sizes. Other information does not need to be rendered into the virtual world, so this approach trades model accuracy for speed.

  15. The MAP Autonomous Mission Control System

    NASA Technical Reports Server (NTRS)

    Breed, Julie; Coyle, Steven; Blahut, Kevin; Dent, Carolyn; Shendock, Robert; Rowe, Roger

    2000-01-01

    The Microwave Anisotropy Probe (MAP) mission is the second mission in NASA's Office of Space Science low-cost, Medium-class Explorers (MIDEX) program. The Explorers Program is designed to accomplish frequent, low cost, high quality space science investigations utilizing innovative, streamlined, efficient management, design and operations approaches. The MAP spacecraft will produce an accurate full-sky map of the cosmic microwave background temperature fluctuations with high sensitivity and angular resolution. The MAP spacecraft is planned for launch in early 2001, and will be staffed by only single-shift operations. During the rest of the time the spacecraft must be operated autonomously, with personnel available only on an on-call basis. Four (4) innovations will work cooperatively to enable a significant reduction in operations costs for the MAP spacecraft. First, the use of a common ground system for Spacecraft Integration and Test (I&T) as well as Operations. Second, the use of Finite State Modeling for intelligent autonomy. Third, the integration of a graphical planning engine to drive the autonomous systems without an intermediate manual step. And fourth, the ability for distributed operations via Web and pager access.

  16. Autonomic nervous system dysregulation in pediatric hypertension.

    PubMed

    Feber, Janusz; Ruzicka, Marcel; Geier, Pavel; Litwin, Mieczyslaw

    2014-05-01

    Historically, primary hypertension (HTN) has typically been prevalent in adults. Recent data, however, suggest an increasing number of children diagnosed with primary HTN, mainly in the setting of obesity. One of the factors considered in the etiology of HTN is the autonomic nervous system, namely its dysregulation. In the past, the sympathetic nervous system (SNS) was regarded as a system engaged mostly in buffering major acute changes in blood pressure (BP) in response to physical and emotional stressors. Recent evidence suggests that the SNS plays a much broader role in the regulation of BP, including the development and maintenance of sustained HTN through chronically elevated central sympathetic tone in adults and children with central/visceral obesity. Consequently, attempts have been made to reduce SNS hyperactivity in order to intervene early in the course of the disease and prevent HTN-related complications later in life.

  17. Seizures and brain regulatory systems: consciousness, sleep, and autonomic systems.

    PubMed

    Sedigh-Sarvestani, Madineh; Blumenfeld, Hal; Loddenkemper, Tobias; Bateman, Lisa M

    2015-06-01

    Research into the physiologic underpinnings of epilepsy has revealed reciprocal relationships between seizures and the activity of several regulatory systems in the brain. This review highlights recent progress in understanding and using the relationships between seizures and the arousal or consciousness system, the sleep-wake and associated circadian system, and the central autonomic network.

  18. A Proposal of Autonomous Robotic Systems Educative Environment

    NASA Astrophysics Data System (ADS)

    Ierache, Jorge; Garcia-Martinez, Ramón; de Giusti, Armando

    This work presents our experiences in implementing a laboratory of autonomous robotic systems applied to the training of beginner and advanced students on a degree course in Computer Engineering, taking into account the specific technologies, robots, autonomous toys, and programming languages involved. These laboratories provide a strategic opportunity for human resources development by covering aspects that range from specification elaboration and modeling to the development, implementation, and testing of an autonomous robotic system.

  19. Intelligent data reduction for autonomous power systems

    NASA Technical Reports Server (NTRS)

    Floyd, Stephen A.

    1988-01-01

    Since 1984, Marshall Space Flight Center has been actively engaged in research and development concerning autonomous power systems. Much of the work in this domain has dealt with the development and application of knowledge-based or expert systems to perform tasks previously accomplished only through intensive human involvement. One such task is the health status monitoring of electrical power systems. Such monitoring is a manpower-intensive task that is vital to mission success. The Hubble Space Telescope testbed and its associated Nickel Cadmium Battery Expert System (NICBES) were designated as the system on which the initial proof of concept for intelligent power system monitoring will be established. The key function performed by an engineer engaged in system monitoring is to analyze the raw telemetry data and identify from the whole only those elements which can be considered significant. This function requires engineering expertise on the functionality of the system and the mode of operation, and the efficient and effective reading of the telemetry data. Application of this expertise to extract the significant components of the data is referred to as data reduction. Such a function possesses characteristics which make it a prime candidate for the application of knowledge-based systems technologies. Such applications are investigated and recommendations are offered for the development of intelligent data reduction systems.

  20. A demonstration of autonomous navigation and machine vision using the HERMIES-IIB robot

    SciTech Connect

    Burks, B.L.; Barnett, D.L.; Jones, J.P.; Killough, S.M.

    1987-01-01

    In this paper, advances to our mobile robot series (currently HERMIES-IIB) are described, including 8 on-board NCUBE processors (computationally equivalent to 8 VAX 11/780s) operating in parallel and augmentation of the sensor suite with cameras to facilitate on-board vision analysis and goal finding. The essential capabilities of the expert system described in earlier papers have been ported to the on-board HERMIES-IIB computers, thereby eliminating off-board computation. A successful experiment is described in which a robot is placed in an arbitrary initial location without prior specification of the room contents, successfully discovers and navigates around stationary and moving obstacles, picks up and moves small obstacles, searches for a control panel, and reads the meters found on the panel. 19 refs., 5 figs.

  1. Computation and design of autonomous intelligent systems

    NASA Astrophysics Data System (ADS)

    Fry, Robert L.

    2008-04-01

    This paper describes a theory of intelligent systems and its reduction to engineering practice. The theory is based on a broader theory of computation wherein information and control are defined within the subjective frame of a system. At its most primitive level, the theory describes what it computationally means to both ask and answer questions which, like traditional logic, are also Boolean. The logic of questions describes the subjective rules of computation that are objective in the sense that all the described systems operate according to its principles. Therefore, all systems are autonomous by construct. These systems include thermodynamic, communication, and intelligent systems. Although interesting, the important practical consequence is that the engineering framework for intelligent systems can borrow efficient constructs and methodologies from both thermodynamics and information theory. Thermodynamics provides the Carnot cycle which describes intelligence dynamics when operating in the refrigeration mode. It also provides the principle of maximum entropy. Information theory has recently provided the important concept of dual-matching useful for the design of efficient intelligent systems. The reverse engineered model of computation by pyramidal neurons agrees well with biology and offers a simple and powerful exemplar of basic engineering concepts.

  2. Multi-channel automotive night vision system

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right, and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated; the light source contains a thermoelectric cooler (TEC), can be synchronized with the camera focusing, and has automatic light-intensity adjustment, which helps ensure image quality. The principle and composition of the system are described in detail; on this basis, beam collimation, LD driving and LD temperature control of the near-infrared laser light source, and the four-channel image processing display are discussed. The system can be used for driver assistance, blind-spot information (BLIS), parking assistance, and alarm functions, by day and night.

  3. Design and evaluation of an autonomous, obstacle avoiding, flight control system using visual sensors

    NASA Astrophysics Data System (ADS)

    Crawford, Bobby Grant

    In an effort to field smaller and cheaper Uninhabited Aerial Vehicles (UAVs), the Army has expressed an interest in an ability of the vehicle to autonomously detect and avoid obstacles. Current systems are not suitable for small aircraft. NASA Langley Research Center has developed a vision sensing system that uses small semiconductor cameras. The feasibility of using this sensor for the purpose of autonomous obstacle avoidance by a UAV is the focus of the research presented in this document. The vision sensor characteristics are modeled and incorporated into guidance and control algorithms designed to generate flight commands based on obstacle information received from the sensor. The system is evaluated by simulating the response to these flight commands using a six degree-of-freedom, non-linear simulation of a small, fixed wing UAV. The simulation is written using the MATLAB application and runs on a PC. Simulations were conducted to test the longitudinal and lateral capabilities of the flight control for a range of airspeeds, camera characteristics, and wind speeds. Results indicate that the control system is suitable for obstacle avoiding flight control using the simulated vision system. In addition, a method for designing and evaluating the performance of such a system has been developed that allows the user to easily change component characteristics and evaluate new systems through simulation.

  4. The nature of the autonomic dysfunction in multiple system atrophy

    NASA Technical Reports Server (NTRS)

    Parikh, Samir M.; Diedrich, Andre; Biaggioni, Italo; Robertson, David

    2002-01-01

    The concept that multiple system atrophy (MSA, Shy-Drager syndrome) is a disorder of the autonomic nervous system is several decades old. While there has been renewed interest in the movement disorder associated with MSA, two recent consensus statements confirm the centrality of the autonomic disorder to the diagnosis. Here, we reexamine the autonomic pathophysiology in MSA. Whereas MSA is often thought of as "autonomic failure", new evidence indicates substantial persistence of functioning sympathetic and parasympathetic nerves even in clinically advanced disease. These findings help explain some of the previously poorly understood features of MSA. Recognition that MSA entails persistent, constitutive autonomic tone requires a significant revision of our concepts of its diagnosis and therapy. We will review recent evidence bearing on autonomic tone in MSA and discuss their therapeutic implications, particularly in terms of the possible development of a bionic baroreflex for better control of blood pressure.

  5. Mechanical deployment system on aries an autonomous mobile robot

    SciTech Connect

    Rocheleau, D.N.

    1995-12-01

    ARIES (Autonomous Robotic Inspection Experimental System) is under development for the Department of Energy (DOE) to survey and inspect drums containing low-level radioactive waste stored in warehouses at DOE facilities. This paper focuses on the mechanical deployment system, referred to as the camera positioning system (CPS), used in the project. The CPS is used for positioning four identical but separate camera packages consisting of vision cameras and other required sensors such as bar-code readers and light stripe projectors. The CPS is attached to the top of a mobile robot and consists of two mechanisms. The first is a lift mechanism composed of 5 interlocking rail elements, which starts from a retracted position and extends upward to simultaneously position 3 separate camera packages to inspect the top three drums of a column of four drums. The second is a parallelogram (special-case Grashof) four-bar mechanism, which is used for positioning a camera package on drums on the floor. Both mechanisms are the subject of this paper, where the lift mechanism is discussed in detail.

  6. Autonomous Robot System for Sensor Characterization

    SciTech Connect

    David Bruemmer; Douglas Few; Frank Carney; Miles Walton; Heather Hunting; Ron Lujan

    2004-03-01

    This paper discusses an innovative application of new Markov localization techniques that combat the problem of odometry drift, allowing a novel control architecture developed at the Idaho National Engineering and Environmental Laboratory (INEEL) to be utilized within a sensor characterization facility developed at the Remote Sensing Laboratory (RSL) in Nevada. The new robotic capability provided by the INEEL will allow RSL to test and evaluate a wide variety of sensors including radiation detection systems, machine vision systems, and sensors that can detect and track heat sources (e.g. human bodies, machines, chemical plumes). By accurately moving a target at varying speeds along designated paths, the robotic solution allows the detection abilities of a wide variety of sensors to be recorded and analyzed.
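    A generic illustration of Markov (grid-based) localization, of the kind that corrects odometry drift with map-based measurement updates, is sketched below. The one-dimensional map, motion model, and sensor model are hypothetical and unrelated to the INEEL/RSL implementation.

```python
import numpy as np

def predict(belief, motion, p_correct=0.8):
    """Motion update: shift the belief by the commanded motion, blurring for odometry noise."""
    shifted = np.roll(belief, motion)
    under = np.roll(belief, motion - 1)
    over = np.roll(belief, motion + 1)
    return p_correct * shifted + 0.5 * (1 - p_correct) * (under + over)

def update(belief, world, measurement, p_hit=0.9, p_miss=0.1):
    """Measurement update: reweight cells whose map label matches the sensor reading."""
    likelihood = np.where(world == measurement, p_hit, p_miss)
    posterior = belief * likelihood
    return posterior / posterior.sum()

world = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0])    # hypothetical 1-D landmark map
belief = np.full(len(world), 1.0 / len(world))      # start fully uncertain
for motion, z in [(1, 1), (1, 0), (1, 0), (1, 1)]:  # move one cell right, then sense
    belief = update(predict(belief, motion), world, z)
print(np.argmax(belief), belief.max())              # most likely cell and its probability
```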

  7. Autonomous Formations of Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Dhali, Sanjana; Joshi, Suresh M.

    2013-01-01

    Autonomous formation control of multi-agent dynamic systems has a number of applications that include ground-based and aerial robots and satellite formations. For air vehicles, formation flight ("flocking") has the potential to significantly increase airspace utilization as well as fuel efficiency. This presentation addresses two main problems in multi-agent formations: optimal role assignment to minimize the total cost (e.g., combined distance traveled by all agents); and maintaining formation geometry during flock motion. The Kuhn-Munkres ("Hungarian") algorithm is used for optimal assignment, and a consensus-based leader-follower control architecture is used to maintain formation shape despite the leader's independent movements. The methods are demonstrated by animated simulations.
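    The optimal role-assignment step can be illustrated with SciPy's implementation of the Kuhn-Munkres algorithm, which here minimizes the combined distance traveled by all agents; the agent and formation-slot positions below are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Hypothetical current agent positions and desired formation slots (x, y).
agents = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 4.0], [6.0, 6.0]])
slots = np.array([[1.0, 1.0], [4.0, 1.0], [1.0, 5.0], [4.0, 5.0]])

cost = cdist(agents, slots)                    # pairwise travel distances
rows, cols = linear_sum_assignment(cost)       # Kuhn-Munkres optimal assignment
for a, s in zip(rows, cols):
    print(f"agent {a} -> slot {s} (distance {cost[a, s]:.2f})")
print("total distance travelled:", round(cost[rows, cols].sum(), 2))
```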

  8. Ball stud inspection system using machine vision.

    PubMed

    Shin, Dongik; Han, Changsoo; Moon, Young Shik

    2002-01-01

    In this paper, a vision-based inspection system that measures the dimensions of a ball stud is designed and implemented. The system acquires silhouetted images by backlighting and extracts the outlines of the nearly dichotomized images with subpixel accuracy. The sets of boundary data are modeled with appropriate geometric primitives, and the model parameters are estimated in a manner that minimizes error. Jig fixtures and servo systems for the inspection are also devised. The system rotates the inspected object so that it can measure features in space rather than only in a plane, and it moves the object vertically so that it can take several pictures of different parts of the object, improving measurement resolution. The performance of the system is evaluated by measuring the dimensions of a standard ball, a standard cylinder, and a ball stud.
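    A minimal example of fitting a geometric primitive to extracted boundary points, here a least-squares (Kasa) circle fit in NumPy, is sketched below. The synthetic boundary data and dimensions are fabricated for illustration; the paper's actual estimators and subpixel edge extraction are not reproduced.

```python
import numpy as np

def fit_circle(xy):
    """Least-squares (Kasa) circle fit to boundary points from a silhouette.

    Solves x^2 + y^2 + D x + E y + F = 0 linearly, then converts to centre/radius.
    """
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), r

# Synthetic subpixel boundary samples of a ball of radius 7.938 mm centred at (12, 30).
t = np.linspace(0, 2 * np.pi, 200)
pts = np.column_stack([12 + 7.938 * np.cos(t), 30 + 7.938 * np.sin(t)])
pts += np.random.default_rng(1).normal(scale=0.01, size=pts.shape)   # edge noise
print(fit_circle(pts))   # recovers approximately ((12, 30), 7.938)
```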

  9. 360 degree vision system: opportunities in transportation

    NASA Astrophysics Data System (ADS)

    Thibault, Simon

    2007-09-01

    Panoramic technologies are finding new and exciting opportunities in the transportation industries. The advantages of panoramic imagers are numerous: increased area coverage with fewer cameras, imaging of multiple targets simultaneously, instantaneous full-horizon detection, easier integration of various applications on the same imager, and others. This paper reports our work on panomorph optics and their potential use in transportation applications. The novel panomorph lens is a new type of high-resolution panoramic imager well suited to the transportation industries. The panomorph lens uses optimization techniques to improve the performance of a customized optical system for specific applications. By adding a custom angle-to-pixel relation at the optical design stage, the optical system provides ideal image coverage designed to reduce and optimize processing. The optics can be customized for the visible, near-infrared (NIR), or infrared (IR) wavebands. The panomorph lens is designed to optimize the cost per pixel, which is particularly important in the IR. We discuss the use of a 360-degree vision system, which can enhance on-board collision avoidance systems, intelligent cruise control, and parking assistance. 360-degree panoramic vision systems might enable safer highways and a significant reduction in casualties.

  10. Vision-based obstacle recognition system for automated lawn mower robot development

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system involve challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images are very important for any intelligent system such as an autonomous robot. This paper addresses the development of a vision system that could contribute to an automated, vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that commonly exist on a football field. Focus was given to the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement, and edge detection have been applied in the system. The results show that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
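    A simplified front-end for this kind of obstacle detection, combining colour segmentation with edge-based contour extraction in OpenCV, might look like the sketch below. The grass colour range, thresholds, and minimum blob area are assumptions; classifying the detected blobs into the paper's three obstacle types is not shown.

```python
import cv2
import numpy as np

def detect_obstacles(frame_bgr, min_area=400):
    """Return bounding boxes of candidate obstacles on a grass background."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    grass = cv2.inRange(hsv, (30, 40, 40), (90, 255, 255))   # broad green range (assumed)
    not_grass = cv2.bitwise_not(grass)
    not_grass = cv2.morphologyEx(not_grass, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    mask = cv2.bitwise_or(not_grass, edges)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]                # (x, y, w, h) per candidate
```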

  11. Modular control systems for teleoperated and autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Kadonoff, Mark B.; Parish, David W.

    1995-01-01

    This paper will discuss components of a modular hardware and software architecture for mobile robots that supports both teleoperation and autonomous control. The Modular Autonomous Robot System architecture enables rapid development of control systems for unmanned vehicles for a wide variety of commercial and military applications.

  12. From Automation to Autonomy-Trends Towards Autonomous Combat Systems

    DTIC Science & Technology

    2000-04-01

    UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP010300. TITLE: From Automation to Autonomy - Trends Towards Autonomous Combat Systems, from the compilation report ... Systems Concepts and Integration [Advances in systems concepts for vehicles and in integration]. To order the complete compilation report, use ... The following part numbers comprise the compilation report: ADP010300 through ADP010339. UNCLASSIFIED.

  13. A bio-inspired apposition compound eye machine vision sensor system.

    PubMed

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-12-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision system of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm.

  14. Networks for Autonomous Formation Flying Satellite Systems

    NASA Technical Reports Server (NTRS)

    Knoblock, Eric J.; Konangi, Vijay K.; Wallett, Thomas M.; Bhasin, Kul B.

    2001-01-01

    The performance of three communications networks to support autonomous multi-spacecraft formation flying systems is presented. All systems are comprised of a ten-satellite formation arranged in a star topology, with one of the satellites designated as the central or "mother ship." All data is routed through the mother ship to the terrestrial network. The first system uses a TCP/IP over ATM protocol architecture within the formation; the second system uses the IEEE 802.11 protocol architecture within the formation; and the last system uses both of the previous architectures with a constellation of geosynchronous satellites serving as an intermediate point-of-contact between the formation and the terrestrial network. The simulations consist of file transfers using either the File Transfer Protocol (FTP) or the Simple Automatic File Exchange (SAFE) Protocol. The results compare the IP queuing delay and IP processing delay at the mother ship, as well as the application-level round-trip time, for these systems. In all cases, using IEEE 802.11 within the formation yields less delay. Also, the throughput exhibited by SAFE is better than that of FTP.

  15. 75 FR 60478 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-30

    ... COMMISSION In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing... importation of certain machine vision software, machine vision systems, or products containing same by reason... Soft'') of Japan; Fuji Machine Manufacturing Co., Ltd. of Japan and Fuji America Corporation of...

  16. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  17. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  18. Real-time Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-01-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.
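
    The enhancement core mentioned above is the Retinex. As a point of reference only, a single-scale Retinex can be written in a few lines of Python; the flight system uses NASA's multiscale, DSP-optimized implementation, and the sigma, output scaling, and names below are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def single_scale_retinex(image, sigma=80.0, eps=1e-6):
          """Single-scale Retinex: log(image) minus log of its Gaussian-blurred surround.

          `image` is a 2-D array of non-negative intensities; a large sigma favors
          dynamic-range compression, a small sigma favors local contrast.
          """
          img = image.astype(np.float64) + eps
          surround = gaussian_filter(img, sigma=sigma)
          retinex = np.log(img) - np.log(surround + eps)
          # stretch the result back to a displayable 8-bit range
          out = (retinex - retinex.min()) / (retinex.max() - retinex.min() + eps)
          return (255.0 * out).astype(np.uint8)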

  19. Evaluation of stereo vision obstacle detection algorithms for off-road autonomous navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry

    2005-01-01

    Reliable detection of non-traversable hazards is a key requirement for off-road autonomous navigation. A detailed description of each obstacle detection algorithm and its performance on the surveyed obstacle course is presented in this paper.

  20. Digital Autonomous Terminal Access Communication (DATAC) system

    NASA Technical Reports Server (NTRS)

    Novacki, Stanley M., III

    1987-01-01

    In order to accommodate the increasing number of computerized subsystems aboard today's more fuel-efficient aircraft, the Boeing Co. has developed the DATAC (Digital Autonomous Terminal Access Communication) bus to minimize the need for point-to-point wiring to interconnect these various systems, thereby reducing total aircraft weight and maintaining an economical flight configuration. The DATAC bus is essentially a local area network providing interconnections for any of the flight management and control systems aboard the aircraft. The task of developing a Bus Monitor Unit was broken down into four subtasks: (1) providing a hardware interface between the DATAC bus and the Z8000-based microcomputer system to be used as the bus monitor; (2) establishing a communication link between the Z8000 system and a CP/M-based computer system; (3) generation of data reduction and display software to output data to the console device; and (4) development of a DATAC Terminal Simulator to facilitate testing of the hardware and software which transfer data between the DATAC bus and the operator's console in a near real-time environment. These tasks are briefly discussed.

  1. Autonomous Control of Space Reactor Systems

    SciTech Connect

    Belle R. Upadhyaya; K. Zhao; S.R.P. Perillo; Xiaojia Xu; M.G. Na

    2007-11-30

    Autonomous and semi-autonomous control is a key element of space reactor design in order to meet the mission requirements of safety, reliability, survivability, and life expectancy. In terrestrial nuclear power plants, human operators are available to perform the intelligent control functions that are necessary for both normal and abnormal operational conditions.

  2. Systems Architecture for Fully Autonomous Space Missions

    NASA Technical Reports Server (NTRS)

    Esper, Jamie; Schnurr, R.; VanSteenberg, M.; Brumfield, Mark (Technical Monitor)

    2002-01-01

    The NASA Goddard Space Flight Center is working to develop a revolutionary new system architecture concept in support of fully autonomous missions. As part of GSFC's contribution to the New Millennium Program (NMP) Space Technology 7 Autonomy and On-Board Processing (ST7-A) Concept Definition Study, the system incorporates the latest commercial Internet and software development ideas and extends them into NASA ground and space segment architectures. The unique challenges facing the exploration of remote and inaccessible locales and the need to incorporate corresponding autonomy technologies within reasonable cost necessitate the re-thinking of traditional mission architectures. A measure of the resiliency of this architecture in its application to a broad range of future autonomy missions will depend on its effectiveness in leveraging commercial tools developed for the personal computer and Internet markets. Specialized test stations and supporting software become a thing of the past as spacecraft take advantage of the extensive tools and research investments of billion-dollar commercial ventures. The projected improvements of the Internet and supporting infrastructure go hand-in-hand with market pressures that provide continuity in research. By taking advantage of consumer-oriented methods and processes, space-flight missions will continue to leverage investments tailored to provide better services at reduced cost. The application of ground and space segment architectures each based on Local Area Networks (LAN), the use of personal computer-based operating systems, and the execution of activities and operations through a Wide Area Network (Internet) enable a revolution in spacecraft mission formulation, implementation, and flight operations. Hardware and software design, development, integration, test, and flight operations are all tied in closely to a common thread that enables smooth transitioning between program phases. The application of commercial software

  3. DLP™-based dichoptic vision test system

    PubMed Central

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3%; remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer’s sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events. PMID:20210457

  4. DLP™-based dichoptic vision test system

    NASA Astrophysics Data System (ADS)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3% remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  5. Autonomous Segmentation of Outcrop Images Using Computer Vision and Machine Learning

    NASA Astrophysics Data System (ADS)

    Francis, R.; McIsaac, K.; Osinski, G. R.; Thompson, D. R.

    2013-12-01

    As planetary exploration missions become increasingly complex and capable, the motivation grows for improved autonomous science. New capabilities for onboard science data analysis may relieve radio-link data limits and provide greater throughput of scientific information. Adaptive data acquisition, storage and downlink may ultimately hold implications for mission design and operations. For surface missions, geology remains an essential focus, and the investigation of in place, exposed geological materials provides the greatest scientific insight and context for the formation and history of planetary materials and processes. The goal of this research program is to develop techniques for autonomous segmentation of images of rock outcrops. Recognition of the relationships between different geological units is the first step in mapping and interpreting a geological setting. Applications of automatic segmentation include instrument placement and targeting and data triage for downlink. Here, we report on the development of a new technique in which a photograph of a rock outcrop is processed by several elementary image processing techniques, generating a feature space which can be interrogated and classified. A distance metric learning technique (Multiclass Discriminant Analysis, or MDA) is tested as a means of finding the best numerical representation of the feature space. MDA produces a linear transformation that maximizes the separation between data points from different geological units. This 'training step' is completed on one or more images from a given locality. Then we apply the same transformation to improve the segmentation of new scenes containing similar materials to those used for training. The technique was tested using imagery from Mars analogue settings at the Cima volcanic flows in the Mojave Desert, California; impact breccias from the Sudbury impact structure in Ontario, Canada; and an outcrop showing embedded mineral veins in Gale Crater on Mars.
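
    The learn-then-reuse idea above (fit a discriminative linear transform on a labeled training image, then apply it to new scenes) can be sketched with off-the-shelf tools. The snippet below uses scikit-learn's LinearDiscriminantAnalysis as a stand-in for MDA and k-means for the final grouping; the feature choice, clustering step, and all names are assumptions for illustration, not the authors' pipeline.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.cluster import KMeans

      def learn_transform(train_features, train_labels):
          """Fit a discriminative linear projection from labeled training patches.

          train_features: (n_patches, n_raw_features) outputs of elementary filters
          train_labels:   (n_patches,) geological-unit labels from the training image
          """
          lda = LinearDiscriminantAnalysis()
          lda.fit(train_features, train_labels)
          return lda

      def segment_new_scene(lda, features, n_units=3):
          """Project features of a new image and cluster them into candidate units."""
          projected = lda.transform(features)
          return KMeans(n_clusters=n_units, n_init=10).fit_predict(projected)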

  6. Forward Obstacle Detection System by Stereo Vision

    NASA Astrophysics Data System (ADS)

    Iwata, Hiroaki; Saneyoshi, Keiji

    Forward obstacle detection is needed to prevent car accidents. We have developed a forward obstacle detection system that achieves good detectability and distance accuracy using only stereo vision. The system runs in real time by using a stereo processing system based on a Field-Programmable Gate Array (FPGA). Road surfaces are detected so that the search space can be limited. A smoothing filter is also used. Owing to these, the accuracy of distance is improved. In the experiments, this system could detect forward obstacles 100 m away. Its distance error up to 80 m was less than 1.5 m. It could immediately detect cutting-in objects.
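
    The distance figures above follow from the usual pinhole-stereo relation Z = f*B/d. The short sketch below shows that relation and the range error implied by a 0.1-pixel disparity error; the focal length and baseline are made-up values for illustration, not the paper's hardware.

      def depth_from_disparity(disparity_px, focal_px, baseline_m):
          """Pinhole stereo range: Z = f * B / d (f in pixels, B in metres, d in pixels)."""
          return focal_px * baseline_m / disparity_px

      # Hypothetical rig: 1300-pixel focal length, 0.30 m baseline
      f_px, base_m = 1300.0, 0.30
      for target_m in (20.0, 50.0, 80.0, 100.0):
          d = f_px * base_m / target_m                    # disparity seen at that range
          # range error caused by a 0.1-pixel disparity error (subpixel matching)
          err = abs(depth_from_disparity(d - 0.1, f_px, base_m) - target_m)
          print(f"{target_m:5.0f} m -> disparity {d:5.2f} px, ~{err:4.2f} m error per 0.1 px")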

  7. [Neuropeptide Y and autonomic nervous system].

    PubMed

    Nozdrachev, A D; Masliukov, P M

    2011-01-01

    Neuropeptide Y (NPY), a peptide of 36 amino acid residues, belongs to the peptides widely distributed in the central and peripheral nervous system. NPY and its receptors play an extremely diverse role in the nervous system, including the regulation of satiety, emotional state, vascular tone, and gastrointestinal secretion. In mammals, NPY has been revealed in the majority of sympathetic ganglion neurons, in a large number of neurons of the parasympathetic cranial ganglia, as well as in the intramural ganglia of the metasympathetic nervous system. At present, six types of NPY receptors (Y1-Y6) have been identified. All NPY receptors belong to the family of G-protein-coupled receptors. The action of NPY on peripheral target organs is predominantly realized through the postsynaptic receptors Y1, Y3-Y5, and presynaptic receptors of the Y2 type. NPY is present in large electron-dense vesicles and is released during high-frequency stimulation. NPY affects not only vascular tone, the frequency and strength of heart contractions, and the motility and secretion of the gastrointestinal tract, but also has a trophic effect, promoting proliferation of cells in target organs, specifically vessels, myocardium, and adipose tissue. In early postnatal ontogenesis, the percentage of NPY-containing neurons in ganglia of the autonomic nervous system increases; in adult organisms, this parameter decreases. This seems to be connected with the trophic effect of NPY on target cells as well as with the regulation of their functional state.

  8. Closed-loop autonomous docking system

    NASA Technical Reports Server (NTRS)

    Dabney, Richard W. (Inventor); Howard, Richard T. (Inventor)

    1992-01-01

    An autonomous docking system is provided which produces commands for the steering and propulsion system of a chase vehicle used in the docking of that chase vehicle with a target vehicle. The docking system comprises a passive optical target affixed to the target vehicle and comprising three reflective areas, including a central area mounted on a short post, and tracking sensor and process controller apparatus carried by the chase vehicle. The latter apparatus comprises a laser diode array for illuminating the target so as to cause light to be reflected from the reflective areas of the target; a sensor for detecting the light reflected from the target and for producing an electrical output signal in accordance with an image of the reflected light; a signal processor for processing the electrical output signal and for producing, based thereon, output signals relating to the relative range, roll, pitch, yaw, azimuth, and elevation of the chase and target vehicles; and a docking process controller, responsive to the output signals produced by the signal processor, for producing command signals for controlling the steering and propulsion system of the chase vehicle.
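
    In a sensor of this kind, range and relative attitude fall out of the geometry of the imaged three-reflector target; the post-mounted central reflector shifts laterally in the image when the target is viewed off-axis. The toy sketch below shows the pinhole relations involved, with hypothetical target dimensions and names; it is not the patented processing.

      import math

      def estimate_range(outer_spacing_px, outer_spacing_m, focal_px):
          """Pinhole range estimate: Z = f * (true reflector spacing) / (apparent spacing)."""
          return focal_px * outer_spacing_m / outer_spacing_px

      def estimate_yaw_deg(center_offset_px, post_length_m, range_m, focal_px):
          """Crude yaw cue: the post displaces the central spot laterally by roughly
          f * L * sin(yaw) / Z pixels when the target is viewed off-axis."""
          s = center_offset_px * range_m / (focal_px * post_length_m)
          return math.degrees(math.asin(max(-1.0, min(1.0, s))))

      # Hypothetical target: 0.5 m between outer reflectors, 0.1 m post, 800 px focal length
      rng_m = estimate_range(outer_spacing_px=40.0, outer_spacing_m=0.5, focal_px=800.0)
      yaw_deg = estimate_yaw_deg(center_offset_px=2.0, post_length_m=0.1,
                                 range_m=rng_m, focal_px=800.0)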

  9. Active State Model for Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Park, Han; Chien, Steve; Zak, Michail; James, Mark; Mackey, Ryan; Fisher, Forest

    2003-01-01

    The concept of the active state model (ASM) is an architecture for the development of advanced integrated fault-detection-and-isolation (FDI) systems for robotic land vehicles, pilotless aircraft, exploratory spacecraft, or other complex engineering systems that will be capable of autonomous operation. An FDI system based on the ASM concept would not only provide traditional diagnostic capabilities, but also integrate the FDI system under a unified framework and provide a mechanism for sharing information between FDI subsystems to fully assess the overall health of the system. The ASM concept begins with definitions borrowed from psychology, wherein a system is regarded as active when it possesses self-image, self-awareness, and an ability to make decisions by itself, such that it is able to perform purposeful motions and other transitions with some degree of autonomy from the environment. For an engineering system, self-image would manifest itself as the ability to determine nominal values of sensor data by use of a mathematical model of itself, and self-awareness would manifest itself as the ability to relate sensor data to their nominal values. The ASM for such a system may start with the closed-loop control dynamics that describe the evolution of the state variables. As soon as this model was supplemented with nominal values of sensor data, it would possess self-image. The ability to process the current sensor data and compare them with the nominal values would represent self-awareness. On the basis of self-image and self-awareness, the ASM provides the capability for self-identification, detection of abnormalities, and self-diagnosis.
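
    In code, the self-image/self-awareness pairing reduces to comparing live sensor data against model-predicted nominal values and flagging large residuals. The toy monitor below illustrates that reduction; the "model", thresholds, and class names are invented for the sketch and are not the JPL architecture.

      import numpy as np

      class ActiveStateMonitor:
          """Toy ASM-style monitor: a model supplies nominal sensor values (self-image),
          live readings are compared against them (self-awareness), and channels with
          large residuals are flagged (detection of abnormalities)."""

          def __init__(self, model, noise_sigma, n_sigma=3.0):
              self.model = model                                # callable: state -> expected sensors
              self.noise_sigma = np.asarray(noise_sigma, dtype=float)
              self.n_sigma = n_sigma

          def update(self, state, measurements):
              expected = self.model(state)                      # self-image
              residual = np.asarray(measurements, dtype=float) - expected
              flags = np.abs(residual) > self.n_sigma * self.noise_sigma   # self-awareness check
              return residual, flags

      # Hypothetical use: two temperature sensors expected to track a commanded setpoint
      monitor = ActiveStateMonitor(model=lambda sp: np.array([sp, 0.98 * sp]),
                                   noise_sigma=[0.2, 0.2])
      residual, fault_flags = monitor.update(20.0, [25.1, 19.6])   # channel 0 is anomalous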

  10. Lightweight autonomous chemical identification system (LACIS)

    NASA Astrophysics Data System (ADS)

    Lozos, George; Lin, Hai; Burch, Timothy

    2012-06-01

    Smiths Detection and Intelligent Optical Systems have developed prototypes for the Lightweight Autonomous Chemical Identification System (LACIS) for the US Department of Homeland Security. LACIS is to be a handheld detection system for Chemical Warfare Agents (CWAs) and Toxic Industrial Chemicals (TICs). LACIS is designed to have a low limit of detection and rapid response time for use by emergency responders and could allow determination of areas having dangerous concentration levels and if protective garments will be required. Procedures for protection of responders from hazardous materials incidents require the use of protective equipment until such time as the hazard can be assessed. Such accurate analysis can accelerate operations and increase effectiveness. LACIS is to be an improved point detector employing novel CBRNE detection modalities that includes a militaryproven ruggedized ion mobility spectrometer (IMS) with an array of electro-resistive sensors to extend the range of chemical threats detected in a single device. It uses a novel sensor data fusion and threat classification architecture to interpret the independent sensor responses and provide robust detection at low levels in complex backgrounds with minimal false alarms. The performance of LACIS prototypes have been characterized in independent third party laboratory tests at the Battelle Memorial Institute (BMI, Columbus, OH) and indoor and outdoor field tests at the Nevada National Security Site (NNSS). LACIS prototypes will be entering operational assessment by key government emergency response groups to determine its capabilities versus requirements.

  11. APDS: The Autonomous Pathogen Detection System

    SciTech Connect

    Hindson, B; Makarewicz, A; Setlur, U; Henderer, B; McBride, M; Dzenitis, J

    2004-10-04

    We have developed and tested a fully autonomous pathogen detection system (APDS) capable of continuously monitoring the environment for airborne biological threat agents. The system was developed to provide early warning to civilians in the event of a bioterrorism incident and can be used at high-profile events for short-term, intensive monitoring or in major public buildings or transportation nodes for long-term monitoring. The APDS is completely automated, offering continuous aerosol sampling, in-line sample preparation fluidics, multiplexed detection and identification immunoassays, and nucleic acid-based polymerase chain reaction (PCR) amplification and detection. Highly multiplexed antibody-based and duplex nucleic acid-based assays are combined to reduce false positives to a very low level, lower reagent costs, and significantly expand the detection capabilities of this biosensor. This article provides an overview of the current design and operation of the APDS. Certain sub-components of the APDS are described in detail, including the aerosol collector, the automated sample preparation module that performs multiplexed immunoassays with confirmatory PCR, and the data monitoring and communications system. Data obtained from an APDS that operated continuously for seven days in a major U.S. transportation hub are reported.

  12. Robot vision system programmed in Prolog

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Hack, Ralf

    1995-10-01

    This is the latest in a series of publications which develop the theme of programming a machine vision system using the artificial intelligence language Prolog. The article states the long-term objective of the research program of which this work forms part. Many but not yet all of the goals laid out in this plan have already been achieved in an integrated system, which uses a multi-layer control hierarchy. The purpose of the present paper is to demonstrate that a system based upon a Prolog controller is capable of making complex decisions and operating a standard robot. The authors chose, as a vehicle for this exercise, the task of playing dominoes against a human opponent. This game was selected for this demonstration since it models a range of industrial assembly tasks, where parts are to be mated together. (For example, a 'daisy chain' of electronic equipment and the interconnecting cables/adapters may be likened to a chain of dominoes.)

  13. Autonomous and Autonomic Systems: A Paradigm for Future Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walter F.; Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2004-01-01

    NASA increasingly will rely on autonomous systems concepts, not only in the mission control centers on the ground, but also on spacecraft and on rovers and other assets on extraterrestrial bodies. Autonomy enables not only reduced operations costs, but also adaptable goal-driven functionality of mission systems. Space missions lacking autonomy will be unable to achieve the full range of advanced mission objectives, given that human control under dynamic environmental conditions will not be feasible due, in part, to the unavoidably high signal propagation latency and constrained data rates of mission communications links. While autonomy cost-effectively supports accomplishment of mission goals, autonomicity supports survivability of remote mission assets, especially when human tending is not feasible. Autonomic system properties (which ensure self-configuring, self-optimizing, self-healing, and self-protecting behavior) may conceptually enable space missions of a higher order than any previously flown. Analysis of two NASA agent-based systems previously prototyped, and of a proposed future mission involving numerous cooperating spacecraft, illustrates how autonomous and autonomic system concepts may be brought to bear on future space missions.

  14. Automatic Welding System Using Speed Controllable Autonomous Mobile Robot

    NASA Astrophysics Data System (ADS)

    Kim, Taewon; Suto, Takeshi; Kobayashi, Junya; Kim, Jongcheol; Suga, Yasuo

    A prototype of an autonomous mobile robot with two vision sensors for the automatic welding of steel plates was constructed. The robot can move straight, steer, and turn around the robot center by controlling the driving speed of the two wheels respectively. At the tip of the movable arm, two CCD cameras are fixed. A local camera observes the welding line near the welding torch, and another wide camera observes a relatively wide area in front of the welding part. The robot controls the traveling speed in accordance with the shape of the welding line. In the case of a straight welding line, the speed of the robot is accelerated and the welding efficiency is improved. However, if the robot finds a corner in the welding line, the speed is decelerated in order to realize precise seam tracking and stable welding. Therefore, the robot can realize precise and high-speed seam tracking by controlling the travel speed. The effectiveness of the control system is confirmed by welding experiments.
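
    The speed-control rule described above (fast on straight seams, slow near corners) can be caricatured as a bend-angle test on the welding line seen by the wide camera. The sketch below is such a caricature; the angle threshold, speeds, and names are arbitrary choices for illustration, not the authors' controller.

      import numpy as np

      def seam_travel_speed(seam_points, v_straight=8.0, v_corner=2.0, angle_thresh_deg=15.0):
          """Pick a travel speed (mm/s) from the bend angle of the seam ahead of the torch.

          seam_points: (N, 2) ordered coordinates of the welding line ahead of the torch.
          """
          p0 = seam_points[0]
          p1 = seam_points[len(seam_points) // 2]
          p2 = seam_points[-1]
          v1, v2 = p1 - p0, p2 - p1
          cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
          bend_deg = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
          # straight seam ahead -> weld fast; corner ahead -> slow down for precise tracking
          return v_straight if bend_deg < angle_thresh_deg else v_corner

      straight = np.array([[0.0, 0.0], [50.0, 0.0], [100.0, 0.0]])
      corner = np.array([[0.0, 0.0], [50.0, 0.0], [50.0, 50.0]])
      print(seam_travel_speed(straight), seam_travel_speed(corner))   # 8.0 then 2.0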

  15. Utilizing Robot Operating System (ROS) in Robot Vision and Control

    DTIC Science & Technology

    2015-09-01

    Master's thesis: Utilizing Robot Operating System (ROS) in Robot Vision and Control, by Joshua S. Lum, Captain, United States ..., September 2015. Thesis Advisor: Xiaoping Yun; Co-Advisor: Zac Staples. Approved for public release; distribution is unlimited.

  16. Cardiac autonomic nervous system activity in obesity.

    PubMed

    Liatis, Stavros; Tentolouris, Nikolaos; Katsilambros, Nikolaos

    2004-08-01

    The development of obesity is caused by a disturbance of energy balance, with energy intake exceeding energy expenditure. As the autonomic nervous system (ANS) has a role in the regulation of both these variables, it has become a major focus of investigation in the fields of obesity pathogenesis. The enhanced cardiac sympathetic drive shown in most of the studies in obese persons might be due to an increase in their levels of circulating insulin. The role of leptin needs further investigation with studies in humans. There is a blunted response of the cardiac sympathetic nervous system (SNS) activity in obese subjects after consumption of a carbohydrate-rich meal as well as after insulin administration. This might be due to insulin resistance. It is speculated that increased SNS activity in obesity may contribute to the development of hypertension in genetically susceptible individuals. It is also speculated that the increase in cardiac SNS activity under fasting conditions in obesity may be associated with high cardiovascular morbidity and mortality.

  17. Implementation of Deconfliction in Multivehicle Autonomous Systems

    DTIC Science & Technology

    2010-01-01

    Excerpt: ... two fin-actuated vehicles was replaced with a remote-control toy shark controlled by a human operator. The human operator drove the toy shark directly ... Fig. 4: Vehicle Swarm Technology Laboratory (VSTL) developed by the Boeing Research and Technology group. 3.2 University of Washington Fin-Actuated Autonomous Underwater Vehicles: The UW testbed is composed of a set of three fin-actuated autonomous underwater vehicles (Fig. 6) operating in a ...

  18. Differential responses of components of the autonomic nervous system.

    PubMed

    Goldstein, David S

    2013-01-01

    This chapter conveys several concepts and points of view about the scientific and medical significance of differential alterations in activities of components of the autonomic nervous system in stress and disease. The use of terms such as "the autonomic nervous system," "autonomic failure," "dysautonomia," and "autonomic dysfunction" imply the existence of a single entity; however, the autonomic nervous system has functionally and neurochemically distinctive components, which are reflected in differential responses to stressors and differential involvement in pathophysiologic states. One can conceptualize the autonomic nervous system as having at least five components: the sympathetic noradrenergic system, the sympathetic cholinergic system, the parasympathetic cholinergic system, the sympathetic adrenergic system, and the enteric nervous system. Evidence has accumulated for differential noradrenergic vs. adrenergic responses in various situations. The largest sympathetic adrenergic system responses are seen when the organism encounters stressors that pose a global or metabolic threat. Sympathetic noradrenergic system activation dominates the responses to orthostasis, moderate exercise, and exposure to cold, whereas sympathetic adrenergic system activation dominates those to glucoprivation and emotional distress. There seems to be at least as good a justification for the concept of coordinated adrenocortical-adrenomedullary responses as for coordinated adrenomedullary-sympathoneural responses in stress. Fainting reactions involve differential adrenomedullary hormonal vs. sympathetic noradrenergic activation. Parkinson disease entails relatively selective dysfunction of the sympathetic noradrenergic system, with prominent loss of noradrenergic nerves in the heart, yet normal adrenomedullary function. Allostatic load links stress with degenerative diseases, and Parkinson disease may be a disease of the elderly because of allostatic load.

  19. Cardiac autonomic profile in rheumatoid arthritis and systemic lupus erythematosus.

    PubMed

    Aydemir, M; Yazisiz, V; Basarici, I; Avci, A B; Erbasan, F; Belgi, A; Terzioglu, E

    2010-03-01

    Neurological involvement is a well-documented issue in patients with systemic lupus erythematosus (SLE) and rheumatoid arthritis (RA). However, little is known about the involvement of the autonomic nervous system. This study was conducted to investigate autonomic nervous system dysfunction in patients with RA and SLE. Twenty-six RA patients, 38 SLE patients and 40 healthy controls were recruited from our in- and out-patient departments. Heart rate variability (HRV) parameters (the power of the high- [HF] and low-frequency [LF] band of haemodynamic time series, the ratio between low- and high-frequency components [LF/HF ratio], the power spectral density), baroreflex sensitivity (BRS) and beat-to-beat blood pressures were assessed by a novel non-invasive haemodynamic monitoring tool (Task Force Monitor [TFM], CNSystems Medizintechnik GmbH, Graz, Austria). Autonomic nervous system dysfunction was determined according to classical Ewing autonomic test battery. Furthermore, we implemented a secondary autonomic test score by modifying the Ewing test battery with additional criteria. Both the classical and modified Ewing test batteries have revealed that the frequencies of autonomic neuropathy were significantly higher in patient groups compared with controls (p < 0.001). Evaluation by TFM revealed that deterioration of sophisticated autonomic parameters (such as HRV and BRS) were more pronounced in the patient groups compared with controls. There was a significant association between BRS and Ewing test scores and abnormal BRS results were more frequent in patients with autonomic dysfunction according to Ewing test batteries. No relation was found between autonomic neuropathy and disease duration, disease activity and autoantibody positivity. Consequently, we believe that further large-scale studies investigating cardiovascular autonomic neuropathy in rheumatic diseases should be carried out to verify our findings and manifest clinical consequences beyond these results.
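
    The spectral HRV indices used above (LF and HF band powers and their ratio) are commonly computed by resampling the RR-interval series and estimating its power spectral density. The snippet below is one generic way to do this in Python, using the conventional 0.04-0.15 Hz (LF) and 0.15-0.40 Hz (HF) bands; it is not the Task Force Monitor's algorithm, and the resampling rate and names are assumptions.

      import numpy as np
      from scipy.interpolate import interp1d
      from scipy.signal import welch

      def lf_hf_ratio(rr_intervals_s, fs=4.0):
          """Return (LF power, HF power, LF/HF) from a series of RR intervals in seconds."""
          beat_times = np.cumsum(rr_intervals_s)
          # resample the irregularly spaced RR series onto a uniform time grid
          grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
          rr_uniform = interp1d(beat_times, rr_intervals_s, kind="cubic")(grid)
          freqs, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs,
                             nperseg=min(256, len(grid)))
          df = freqs[1] - freqs[0]
          lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum() * df
          hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum() * df
          return lf, hf, lf / hf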

  20. The Autonomous Pathogen Detection System (APDS)

    SciTech Connect

    Morris, J; Dzenitis, J

    2004-09-22

    Shaped like a mailbox on wheels, it's been called a bioterrorism "smoke detector." It can be found in transportation hubs such as airports and subways, and it may be coming to a location near you. Formally known as the Autonomous Pathogen Detection System, or APDS, this latest tool in the war on bioterrorism was developed at Lawrence Livermore National Laboratory to continuously sniff the air for airborne pathogens and toxins such as anthrax or plague. The APDS is the modern-day equivalent of the canaries miners took underground with them to test for deadly carbon monoxide gas. But this canary can test for numerous bacteria, viruses, and toxins simultaneously, report results every hour, and confirm positive samples and guard against false positive results by using two different tests. The fully automated system collects and prepares air samples around the clock, does the analysis, and interprets the results. It requires no servicing or human intervention for an entire week. Unlike its feathered counterpart, when an APDS unit encounters something deadly in the air, that's when it begins singing, quietly. The APDS unit transmits a silent alert and sends detailed data to public health authorities, who can order evacuation and begin treatment of anyone exposed to toxic or biological agents. It is the latest in a series of biodefense detectors developed at DOE/NNSA national laboratories. The manual predecessor to APDS, called BASIS (for Biological Aerosol Sentry and Information System), was developed jointly by Los Alamos and Lawrence Livermore national laboratories. That system was modified to become BioWatch, the Department of Homeland Security's biological urban monitoring program. A related laboratory instrument, the Handheld Advanced Nucleic Acid Analyzer (HANAA), was first tested successfully at LLNL in September 1997. Successful partnering with private industry has been a key factor in the rapid advancement and deployment of biodefense instruments such as these.

  1. Vision-based augmented reality system

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Wang, Yongtian; Shi, Qi; Yan, Dayuan

    2003-04-01

    The most promising aspect of augmented reality lies in its ability to integrate the virtual world of the computer with the real world of the user. Namely, users can interact with real-world subjects and objects directly. This paper presents an experimental augmented reality system with a video see-through head-mounted device to display virtual objects as if they were lying on the table together with real objects. In order to overlay virtual objects on the real world at the right position and orientation, accurate calibration and registration are most important. A vision-based method is used to estimate the CCD camera's external parameters by tracking four known points with different colors. It achieves sufficient accuracy for non-critical applications such as gaming, annotation, and so on.
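
    Estimating a camera's external parameters from four known reference points is a standard perspective-n-point (PnP) problem. The sketch below uses OpenCV's solvePnP with invented marker coordinates, detections, and intrinsics to show the shape of that computation; it is not necessarily the method used by the authors.

      import numpy as np
      import cv2

      # Hypothetical 3-D positions (metres) of four colored markers on the table
      object_points = np.array([[0.0, 0.0, 0.0],
                                [0.2, 0.0, 0.0],
                                [0.2, 0.2, 0.0],
                                [0.0, 0.2, 0.0]], dtype=np.float32)

      # Their detected 2-D image positions (pixels), e.g. from per-color thresholding
      image_points = np.array([[320.0, 240.0],
                               [420.0, 238.0],
                               [424.0, 330.0],
                               [318.0, 334.0]], dtype=np.float32)

      # Assumed pinhole intrinsics of the head-mounted camera
      K = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])

      ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
      R, _ = cv2.Rodrigues(rvec)   # rotation: world (table) frame -> camera frame
      # [R | tvec] are the extrinsics used to render virtual objects onto the table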

  2. Hi-Vision telecine system using pickup tube

    NASA Astrophysics Data System (ADS)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  3. Flight test comparison between enhanced vision (FLIR) and synthetic vision systems

    NASA Astrophysics Data System (ADS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-05-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  4. Technological process supervising using vision systems cooperating with the LabVIEW vision builder

    NASA Astrophysics Data System (ADS)

    Hryniewicz, P.; Banaś, W.; Gwiazda, A.; Foit, K.; Sękala, A.; Kost, G.

    2015-11-01

    One of the most important tasks in the production process is to supervise its proper functioning. A lack of the required supervision over the production process can lead to incorrect manufacturing of the final element, to production line downtime, and hence to financial losses. The worst result is damage to the equipment involved in the manufacturing process. Engineers supervising the correctness of the production flow use a wide range of sensors to monitor the manufactured element. Vision systems are one such family of sensors. In recent years, thanks to the accelerated development of electronics as well as easier access to electronic products and attractive prices, they have become a cheap and universal type of sensor. These sensors detect practically all objects, regardless of their shape or even their state of matter. The only problems arise with transparent or mirrored objects detected from the wrong angle. By integrating the vision system with LabVIEW Vision and the LabVIEW Vision Builder, it is possible to determine not only the position of a given element but also its orientation relative to any point in the analyzed space. The paper presents an example of automated inspection of the manufacturing process in a production workcell using the vision supervising system. The aim of the work is to elaborate a vision system that could integrate the different applications and devices used in various production systems to control the manufacturing process.

  5. Flight Test Comparison Between Enhanced Vision (FLIR) and Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-01-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA s Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA s Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision Systems (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  6. Nuclear bimodal new vision solar system missions

    SciTech Connect

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    This paper presents an analysis of the potential mission capability using space reactor bimodal systems for planetary missions. Missions of interest include the Main belt asteroids, Jupiter, Saturn, Neptune, and Pluto. The space reactor bimodal system, defined by an Air Force study for Earth orbital missions, provides 10 kWe power, 1000 N thrust, 850 s Isp, with a 1500 kg system mass. Trajectories to the planetary destinations were examined and optimal direct and gravity-assisted trajectories were selected. A conceptual design for a spacecraft using the space reactor bimodal system for propulsion and power, that is capable of performing the missions of interest, is defined. End-to-end mission conceptual designs for bimodal orbiter missions to Jupiter and Saturn are described. All missions considered use the Delta 3 class or Atlas 2AS launch vehicles. The space reactor bimodal power and propulsion system offers both new vision "constellation"-type missions, in which the space reactor bimodal spacecraft acts as a carrier and communication spacecraft for a fleet of microspacecraft deployed at different scientific targets, and conventional missions with only a space reactor bimodal spacecraft and its science payload. (c) 1996 American Institute of Physics.

  7. Vision Systems with the Human in the Loop

    NASA Astrophysics Data System (ADS)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval, the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After we will have discussed adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems will be raised. Experiences from psychologically evaluated human-machine interactions will be reported and the promising potential of psychologically-based usability experiments will be stressed.

  8. An Expert Vision System for Autonomous Land Vehicle Road Following.

    DTIC Science & Technology

    1988-01-01

    TR-138, Center for Automation Research, University of Maryland, July 1985. [Minsky] Minsky, Marvin, "A Framework for Representing Knowledge", in ... relationships, frames have been chosen to model objects [Minsky]. A frame is a data structure containing a set of slots (or attributes) which encapsulate ...

  9. Physics Simulation Software for Autonomous Propellant Loading and Gas House Autonomous System Monitoring

    NASA Technical Reports Server (NTRS)

    Regalado Reyes, Bjorn Constant

    2015-01-01

    1. Kennedy Space Center (KSC) is developing a mobile launching system with autonomous propellant loading capabilities for liquid-fueled rockets. An autonomous system will be responsible for monitoring and controlling the storage, loading and transferring of cryogenic propellants. The Physics Simulation Software will reproduce the sensor data seen during the delivery of cryogenic fluids including valve positions, pressures, temperatures and flow rates. The simulator will provide insight into the functionality of the propellant systems and demonstrate the effects of potential faults. This will provide verification of the communications protocols and the autonomous system control. 2. The High Pressure Gas Facility (HPGF) stores and distributes hydrogen, nitrogen, helium and high pressure air. The hydrogen and nitrogen are stored in cryogenic liquid state. The cryogenic fluids pose several hazards to operators and the storage and transfer equipment. Constant monitoring of pressures, temperatures and flow rates are required in order to maintain the safety of personnel and equipment during the handling and storage of these commodities. The Gas House Autonomous System Monitoring software will be responsible for constantly observing and recording sensor data, identifying and predicting faults and relaying hazard and operational information to the operators.
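
    A simulator of this kind boils down to generating plausible sensor streams from a simple plant model and optionally injecting faults for the monitoring logic to catch. The toy transfer-line model below illustrates the idea with a stuck-valve fault; every equation, number, and name is invented for the sketch and has no connection to the KSC software.

      from dataclasses import dataclass

      @dataclass
      class LineState:
          valve_open: float        # actual valve position, 0..1
          pressure_kpa: float
          temperature_k: float
          flow_kg_s: float

      def step(state, valve_cmd, dt=1.0, stuck_valve=False):
          """Advance a toy cryogenic transfer-line model by one time step.

          With `stuck_valve` set, the valve ignores new commands - the kind of fault
          an autonomous monitoring system should flag from the sensor mismatch.
          """
          valve = state.valve_open if stuck_valve else valve_cmd
          flow = 5.0 * valve                                   # kg/s, proportional to opening
          pressure = 800.0 - 50.0 * valve                      # line pressure sags as flow rises
          temperature = max(90.0, state.temperature_k - 0.5 * flow * dt)   # chilldown toward ~90 K
          return LineState(valve, pressure, temperature, flow)

      state = LineState(valve_open=0.0, pressure_kpa=800.0, temperature_k=300.0, flow_kg_s=0.0)
      for t in range(6):
          cmd = 0.8 if t < 3 else 0.0                       # the valve is commanded closed at t = 3 ...
          state = step(state, cmd, stuck_valve=(t >= 3))    # ... but it sticks open instead
          print(t, cmd, state.flow_kg_s, state.pressure_kpa, round(state.temperature_k, 1))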

  10. Range gated active night vision system for automobiles.

    PubMed

    David, Ofer; Kopeika, Norman S; Weizer, Boaz

    2006-10-01

    Night vision for automobiles is an emerging feature that is being introduced for automotive safety. We develop what we believe is an innovative new night vision system using gated imaging principles. The concept of gated imaging is described along with its basic advantages, including the backscatter reduction mechanism for improved vision through fog, rain, and snow. An evaluation of performance is presented by analyzing bar pattern modulation and comparing it with Johnson chart predictions.
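
    Gating suppresses backscatter because the sensor only opens after the laser pulse has had time to reach the range slice of interest, so light scattered by nearby fog or rain is never integrated. The arithmetic is just the round-trip time of light, as in the sketch below (the 50-200 m slice is an arbitrary example).

      C = 299_792_458.0   # speed of light, m/s

      def gate_timing(range_start_m, range_end_m):
          """Return (delay_s, width_s): open the gate delay_s after the laser pulse fires
          and hold it open for width_s to image only the chosen range slice."""
          delay = 2.0 * range_start_m / C                 # round trip to the near edge
          width = 2.0 * (range_end_m - range_start_m) / C
          return delay, width

      # Example: image only objects between 50 m and 200 m ahead of the vehicle;
      # backscatter from fog within the first 50 m arrives before the gate opens.
      delay_s, width_s = gate_timing(50.0, 200.0)
      print(f"gate delay {delay_s * 1e9:.0f} ns, gate width {width_s * 1e9:.0f} ns")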

  11. Real-Time Trajectory Generation for Autonomous Nonlinear Flight Systems

    DTIC Science & Technology

    2006-04-01

    Real-Time Trajectory Generation for Autonomous Nonlinear Flight Systems, AF02T002, Phase II Final Report, Contract No. FA9550-04-C-0032. Report type and dates covered: Final Report for 14 April 2004 - 14 April 2006. Abstract (excerpt): Unmanned aerial vehicle and smart munition systems need robust, real-time path generation and ...

  12. Intelligent Computer Vision System for Automated Classification

    NASA Astrophysics Data System (ADS)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.
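
    The flow described above (feature generation, ANOVA and PCA preprocessing, neural-network classification) maps naturally onto a standard pipeline. The sketch below uses scikit-learn with gradient-based MLP training and random placeholder data purely to show the structure; the authors' GLPτS metaheuristic training and their actual features are not reproduced.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.feature_selection import SelectKBest, f_classif   # ANOVA F-test screening
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      # Placeholder data standing in for features extracted from cork-tile images
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 60))           # 200 tiles, 60 raw features
      y = rng.integers(0, 4, size=200)         # 4 hypothetical quality classes

      pipeline = make_pipeline(
          StandardScaler(),
          SelectKBest(f_classif, k=30),        # ANOVA-based feature selection
          PCA(n_components=10),                # dimensionality reduction
          MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
      )

      print("cross-validated accuracy:", cross_val_score(pipeline, X, y, cv=5).mean())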

  13. Intelligent Computer Vision System for Automated Classification

    SciTech Connect

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-21

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.

  14. Enhanced Flight Vision Systems and Synthetic Vision Systems for NextGen Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.

    2013-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with efficiency equivalent to visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory standards and design guidance to support introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision from 100 feet above the touchdown zone elevation to touchdown and rollout in visibilities as low as 1000 feet RVR appears to be viable, as touchdown performance was acceptable without any apparent workload penalties. A lower DH of 150 feet and/or possibly reduced visibility minima using SVS appears to be viable when implemented on a Head-Up Display, but the landing data suggest that head-down implementations need further study.

  15. Networked vision system using a Prolog controller

    NASA Astrophysics Data System (ADS)

    Batchelor, B. G.; Caton, S. J.; Chatburn, L. T.; Crowther, R. A.; Miller, J. W. V.

    2005-11-01

    Prolog offers a very different style of programming compared to conventional languages; it can define object properties and abstract relationships in a way that Java, C, C++, etc. find awkward. In an accompanying paper, the authors describe how distributed web-based vision systems can be built using elements that may even be located on different continents. One particular system of this general type is described here. The top-level controller is a Prolog program, which operates one or more image processing engines. This type of function is natural to Prolog, since it is able to reason logically using symbolic (non-numeric) data. Although Prolog is not suitable for programming image processing functions directly, it is ideal for analysing the results derived by an image processor. This article describes the implementation of two systems, in which a Prolog program controls several image processing engines, a simple robot (a pneumatic pick-and-place arm), LED illumination modules and various mains-powered devices.

  16. Feeling good: autonomic nervous system responding in five positive emotions.

    PubMed

    Shiota, Michelle N; Neufeld, Samantha L; Yeung, Wan H; Moser, Stephanie E; Perea, Elaine F

    2011-12-01

    Although dozens of studies have examined the autonomic nervous system (ANS) aspects of negative emotions, less is known about ANS responding in positive emotion. An evolutionary framework was used to define five positive emotions in terms of fitness-enhancing function, and to guide hypotheses regarding autonomic responding. In a repeated measures design, participants viewed sets of visual images eliciting these positive emotions (anticipatory enthusiasm, attachment love, nurturant love, amusement, and awe) plus an emotionally neutral state. Peripheral measures of sympathetic and vagal parasympathetic activation were assessed. Results indicated that the emotion conditions were characterized by qualitatively distinct profiles of autonomic activation, suggesting the existence of multiple, physiologically distinct positive emotions.

  17. Formal Methods for Autonomic and Swarm-based Systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Vanderbilt, Amy; Hinchey, Mike; Truszkowski, Walt; Rash, James

    2004-01-01

    Swarms of intelligent rovers and spacecraft are being considered for a number of future NASA missions. These missions will provide NASA scientists and explorers greater flexibility and the chance to gather more science than traditional single-spacecraft missions. These swarms of spacecraft are intended to operate for long periods of time without contact with the Earth. To do this, they must be highly autonomous, have autonomic properties and utilize sophisticated artificial intelligence. The Autonomous Nano Technology Swarm (ANTS) mission is an example of one of the swarm-type missions NASA is considering. This mission will explore the asteroid belt using an insect colony analogy, cataloging the mass, density, morphology, and chemical composition of the asteroids, including any anomalous concentrations of specific minerals. Verifying such a system would be a huge task. This paper discusses ongoing work to develop a formal method for verifying swarm and autonomic systems.

  18. Experimental design for assessing the effectiveness of autonomous countermine systems

    NASA Astrophysics Data System (ADS)

    Chappell, Isaac; May, Michael; Moses, Franklin L.

    2010-04-01

    The countermine mission (CM) is a compelling example of what autonomous systems must address to reduce risks that Soldiers take routinely. The list of requirements is formidable and includes autonomous navigation, autonomous sensor scanning, platform mobility and stability, mobile manipulation, automatic target recognition (ATR), and systematic integration and control of components. This paper compares and contrasts how the CM is done today against the challenges of achieving comparable performance using autonomous systems. The Soldier sets a high standard with, for example, over 90% probability of detection (Pd) of metallic and low-metal mines and a false alarm rate (FAR) as low as 0.05/m2. In this paper, we suggest a simplification of the semi-autonomous CM by breaking it into three components: sensor head maneuver, robot navigation, and kill-chain prosecution. We also discuss the measurements required to map the system's physical and state attributes to performance specifications and note that current Army countermine metrics are insufficient to guide the design of a semi-autonomous countermine system.

  19. Equipment Proposal for the Autonomous Vehicle Systems Laboratory at UIW

    DTIC Science & Technology

    2015-04-29

    Conference, 17 May 2015: Michael T. Frye and Robert S. Provence, "Direct Inverse Control using an Artificial Neural Network for the Autonomous Hover of..." As a first step to demonstrating this objective, the PI has been investigating a Machine Learning technique using Direct Inverse Control for the... control of a formation of multi-agent autonomous systems in uncertain dynamic environments. The educational mission of this laboratory is to introduce new...

  20. Vision system for dial gage torque wrench calibration

    NASA Astrophysics Data System (ADS)

    Aggarwal, Neelam; Doiron, Theodore D.; Sanghera, Paramjeet S.

    1993-11-01

    In this paper, we present the development of a fast and robust vision system which, in conjunction with the Dial Gage Calibration system developed by AKO Inc., will be used by the U.S. Army in calibrating dial gage torque wrenches. The vision system detects the change in the angular position of the dial pointer in a dial gage. The angular change is proportional to the applied torque. The input to the system is a sequence of images of the torque wrench dial gage taken at different dial pointer positions. The system then reports the angular difference between the different positions. The primary components of this vision system include modules for image acquisition, linear feature extraction and angle measurement. For each of these modules, several techniques were evaluated and the most applicable one was selected. This system has numerous other applications, such as vision systems for reading and calibrating other analog instruments.
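
    As a rough illustration of the linear-feature-extraction and angle-measurement steps, the sketch below uses OpenCV's Hough transform to estimate a dial pointer's orientation. The image file name, the Canny thresholds and the longest-line heuristic are illustrative assumptions, not the calibration system's actual processing chain.

```python
# Sketch: estimate a dial pointer's angle from an image using Hough line detection.
# 'dial.png', the Canny thresholds, and the longest-line heuristic are illustrative
# assumptions, not the calibration system's actual processing chain.
import cv2
import numpy as np

img = cv2.imread("dial.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=5)

def line_length(l):
    x1, y1, x2, y2 = l
    return np.hypot(x2 - x1, y2 - y1)

# Take the longest detected segment as the pointer and report its orientation.
pointer = max((l[0] for l in lines), key=line_length)
x1, y1, x2, y2 = pointer
angle_deg = np.degrees(np.arctan2(y2 - y1, x2 - x1))
print(f"pointer orientation: {angle_deg:.1f} degrees")

# The torque reading would then be proportional to the angular difference
# between two such measurements taken at different load levels.
```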

  1. Development of an Automatic Identification System Autonomous Positioning System.

    PubMed

    Hu, Qing; Jiang, Yi; Zhang, Jingbo; Sun, Xiaowen; Zhang, Shufang

    2015-11-11

    In order to overcome the vulnerability of the global navigation satellite system (GNSS) and provide robust position, navigation and time (PNT) information in marine navigation, the autonomous positioning system based on ranging-mode Automatic Identification System (AIS) is presented in the paper. The principle of the AIS autonomous positioning system (AAPS) is investigated, including the position algorithm, the signal measurement technique, the geometric dilution of precision, the time synchronization technique and the additional secondary factor correction technique. In order to validate the proposed AAPS, a verification system has been established in the Xinghai sea region of Dalian (China). Static and dynamic positioning experiments are performed. The original function of the AIS in the AAPS is not influenced. The experimental results show that the positioning precision of the AAPS is better than 10 m in the area with good geometric dilution of precision (GDOP) by the additional secondary factor correction technology. This is the most economical solution for a land-based positioning system to complement the GNSS for the navigation safety of vessels sailing along coasts.
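
    The role of geometric dilution of precision in the reported accuracy can be illustrated with a small numeric sketch: GDOP follows from the geometry matrix of unit vectors from the receiver to the ranging stations. The station and receiver coordinates below are invented example values.

```python
# Sketch: geometric dilution of precision (GDOP) for a 2D ranging scenario with
# a clock-bias term, as when ranging to shore-based AIS stations.
# Station positions and the receiver position are invented example values.
import numpy as np

receiver = np.array([0.0, 0.0])
stations = np.array([
    [ 8000.0,  1000.0],
    [-3000.0,  9000.0],
    [ 2000.0, -7000.0],
    [-9000.0, -2000.0],
])

rows = []
for s in stations:
    d = s - receiver
    u = d / np.linalg.norm(d)        # unit line-of-sight vector to the station
    rows.append([-u[0], -u[1], 1.0]) # partials w.r.t. x, y and receiver clock bias

G = np.array(rows)
Q = np.linalg.inv(G.T @ G)           # covariance shape matrix
gdop = np.sqrt(np.trace(Q))
print(f"GDOP = {gdop:.2f}")          # lower values mean better positioning geometry
```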

  2. Development of an Automatic Identification System Autonomous Positioning System

    PubMed Central

    Hu, Qing; Jiang, Yi; Zhang, Jingbo; Sun, Xiaowen; Zhang, Shufang

    2015-01-01

    In order to overcome the vulnerability of the global navigation satellite system (GNSS) and provide robust position, navigation and time (PNT) information in marine navigation, the autonomous positioning system based on ranging-mode Automatic Identification System (AIS) is presented in the paper. The principle of the AIS autonomous positioning system (AAPS) is investigated, including the position algorithm, the signal measurement technique, the geometric dilution of precision, the time synchronization technique and the additional secondary factor correction technique. In order to validate the proposed AAPS, a verification system has been established in the Xinghai sea region of Dalian (China). Static and dynamic positioning experiments are performed. The original function of the AIS in the AAPS is not influenced. The experimental results show that the positioning precision of the AAPS is better than 10 m in the area with good geometric dilution of precision (GDOP) by the additional secondary factor correction technology. This is the most economical solution for a land-based positioning system to complement the GNSS for the navigation safety of vessels sailing along coasts. PMID:26569258

  3. Night vision imaging system lighting evaluation methodology

    NASA Astrophysics Data System (ADS)

    Task, H. Lee; Pinkus, Alan R.; Barbato, Maryann H.; Hausmann, Martha A.

    2005-05-01

    In order for night vision goggles (NVGs) to be effective in aircraft operations, it is necessary for the cockpit lighting and displays to be NVG compatible. It has been assumed that the cockpit lighting is compatible with NVGs if the radiance values are compliant with the limits listed in Mil-L-85762A and Mil-Std-3009. However, these documents also describe a NVG-lighting compatibility field test procedure that is based on visual acuity. The objective of the study described in this paper was to determine how reliable and precise the visual acuity-based (VAB) field evaluation method is and compare it to a VAB method that employs less expensive equipment. In addition, an alternative, objective method of evaluating compatibility of the cockpit lighting was investigated. An inexpensive cockpit lighting simulator was devised to investigate two different interference conditions and six different radiance levels per condition. This paper describes the results, which indicate the objective method, based on light output of the NVGs, is more precise and reliable than the visual acuity-based method. Precision and reliability were assessed based on a probability of rejection (of the lighting system) function approach that was developed specifically for this study.

  4. Autonomous Car Parking System through a Cooperative Vehicular Positioning Network.

    PubMed

    Correa, Alejandro; Boquet, Guillem; Morell, Antoni; Lopez Vicario, Jose

    2017-04-13

    The increasing development of the automotive industry towards a fully autonomous car has motivated the design of new value-added services in Vehicular Sensor Networks (VSNs). Within the context of VSNs, the autonomous car, with an increasing number of on-board sensors, is a mobile node that exchanges sensed and state information within the VSN. Among all the value-added services for VSNs, the design of new intelligent parking management architectures where the autonomous car will coexist with traditional cars is mandatory in order to profit from all the opportunities associated with the increasing intelligence of the new generation of cars. In this work, we design a new smart parking system on top of a VSN that takes into account the heterogeneity of cars and guides the autonomous car to the best parking place, based on a collaborative approach that searches for the common good of all cars, measured by the accessibility rate: the ratio of free parking places accessible to an autonomous car. We then simulate a real parking lot, and the results show that the performance of our system is close to the optimum for different communication ranges and penetration rates of the autonomous car.

  5. Role of the autonomic nervous system in modulating cardiac arrhythmias.

    PubMed

    Shen, Mark J; Zipes, Douglas P

    2014-03-14

    The autonomic nervous system plays an important role in the modulation of cardiac electrophysiology and arrhythmogenesis. Decades of research have contributed to a better understanding of the anatomy and physiology of the cardiac autonomic nervous system and provided evidence supporting the relationship of autonomic tone to clinically significant arrhythmias. The mechanisms by which autonomic activation is arrhythmogenic or antiarrhythmic are complex and differ for specific arrhythmias. In atrial fibrillation, simultaneous sympathetic and parasympathetic activations are the most common trigger. In contrast, in ventricular fibrillation in the setting of cardiac ischemia, sympathetic activation is proarrhythmic, whereas parasympathetic activation is antiarrhythmic. In inherited arrhythmia syndromes, sympathetic stimulation precipitates ventricular tachyarrhythmias and sudden cardiac death except in Brugada and J-wave syndromes, where it can prevent them. The identification of specific autonomic triggers in different arrhythmias has motivated the idea of modulating autonomic activities for both preventing and treating these arrhythmias. This has been achieved by either neural ablation or stimulation. Neural modulation as a treatment for arrhythmias has been well established in certain diseases, such as long QT syndrome. However, in most other arrhythmia diseases, it is still an emerging modality and under investigation. Recent preliminary trials have yielded encouraging results. Further larger-scale clinical studies are necessary before widespread application can be recommended.

  6. Improving Human/Autonomous System Teaming Through Linguistic Analysis

    NASA Technical Reports Server (NTRS)

    Meszaros, Erica L.

    2016-01-01

    An area of increasing interest for the next generation of aircraft is autonomy and the integration of increasingly autonomous systems into the national airspace. Such integration requires humans to work closely with autonomous systems, forming human and autonomous agent teams. The intention behind such teaming is that a team composed of both humans and autonomous agents will operate better than homogeneous teams. Procedures exist for licensing pilots to operate in the national airspace system, and current work is being done to define methods for validating the function of autonomous systems; however, there is no method in place for assessing the interaction of these two disparate systems. Moreover, these systems are currently operated primarily by subject matter experts, limiting their use and the benefits of such teams. Providing additional information about the ongoing mission to the operator can lead to increased usability and allow for operation by non-experts. Linguistic analysis of the context of verbal communication provides insight into the intended meaning of commonly heard phrases such as "What's it doing now?" Analyzing the semantic sphere surrounding these common phrases enables the prediction of the operator's intent and allows the interface to supply the operator's desired information.

  7. Integrated vision-based GNC for autonomous rendezvous and capture around Mars

    NASA Astrophysics Data System (ADS)

    Strippoli, L.; Novelli, G.; Gil Fernandez, J.; Colmenarejo, P.; Le Peuvedic, C.; Lanza, P.; Ankersen, F.

    2015-06-01

    Integrated GNC (iGNC) is an activity aimed at designing, developing and validating the GNC for autonomously performing the rendezvous and capture phase of the Mars sample return mission as defined during the Mars Sample Return Orbiter (MSRO) ESA study. The validation cycle includes testing in an end-to-end simulator, in a real-time avionics-representative test bench and, finally, in a dynamic hardware-in-the-loop test bench for assessing the feasibility, performance and figures of merit of the baseline approach defined during the MSRO study, for both nominal and contingency scenarios. The on-board software (OBSW) is tailored to work with the sensors, actuators and orbits baseline proposed in MSRO. The whole rendezvous is based on optical navigation, aided by RF Doppler during the search and first orbit determination of the orbiting sample. The simulated rendezvous phase also includes the non-linear orbit synchronization, based on a dedicated non-linear guidance algorithm robust to Mars ascent vehicle (MAV) injection accuracy or MAV failures resulting in elliptic target orbits. The search phase is very demanding for the image processing (IP) due to the very high visual magnitude of the target with respect to the stellar background, and the attitude GNC requires very high pointing stability to fulfil IP constraints. A trade-off of innovative, autonomous navigation filters indicates the unscented Kalman filter (UKF) as the approach that provides the best results in terms of robustness, response to non-linearities and performance, compatibly with the computational load. At short range, an optimized IP based on a convex hull algorithm has been developed in order to guarantee LoS and range measurements from hundreds of metres to capture.
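
    As a sketch of why an unscented Kalman filter copes well with the non-linear line-of-sight measurements, the snippet below shows the core unscented transform: sigma points are pushed through a non-linear bearing measurement and recombined into a mean and covariance without linearisation. The state, measurement model and tuning parameters are illustrative, not those of the iGNC filter.

```python
# Sketch: the unscented transform at the heart of a UKF, applied to a non-linear
# line-of-sight (bearing) measurement of a target's relative position.
# State, covariance, and tuning parameters are illustrative values only.
import numpy as np

def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)      # matrix square root of the scaled covariance
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return np.array(pts), wm, wc

def bearing(x):
    # non-linear measurement: azimuth and elevation of the target
    return np.array([np.arctan2(x[1], x[0]),
                     np.arctan2(x[2], np.hypot(x[0], x[1]))])

x = np.array([500.0, 40.0, -20.0])          # relative position estimate [m]
P = np.diag([25.0, 25.0, 25.0])             # its covariance

pts, wm, wc = sigma_points(x, P)
Z = np.array([bearing(p) for p in pts])      # propagate sigma points through h(x)
z_mean = wm @ Z
z_cov = sum(w * np.outer(z - z_mean, z - z_mean) for w, z in zip(wc, Z))
print("predicted bearing:", z_mean)
print("bearing covariance:\n", z_cov)
```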

  8. 75 FR 71146 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ... COMMISSION In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing..., and the sale within the United States after importation of certain machine vision software, machine..., California; Techno Soft Systemnics, Inc. (``Techno Soft'') of Japan; Fuji Machine Manufacturing Co., Ltd....

  9. Systems, methods and apparatus for quiescence of autonomic systems with self action

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Sterritt, Roy (Inventor)

    2011-01-01

    Systems, methods and apparatus are provided in which an autonomic unit or element is quiesced. A quiesce component of an autonomic unit can cause the autonomic unit to self-destruct if a stay-alive reprieve signal is not received after a predetermined time.
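
    A minimal sketch of the stay-alive pattern the abstract describes: a component quiesces itself unless a reprieve signal arrives within a deadline. The class name, timeout and quiesce action are invented for illustration.

```python
# Sketch of a "quiesce unless a stay-alive reprieve arrives" pattern.
# Class name, timeout, and the quiesce action are illustrative only.
import threading

class QuiesceComponent:
    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self._timer = None
        self.active = True

    def start(self):
        self._arm()

    def _arm(self):
        self._timer = threading.Timer(self.timeout_s, self._quiesce)
        self._timer.daemon = True
        self._timer.start()

    def stay_alive(self):
        """Reprieve signal: cancel the pending quiesce and re-arm the timer."""
        if self._timer is not None:
            self._timer.cancel()
        self._arm()

    def _quiesce(self):
        # In the patented concept the autonomic unit would self-destruct;
        # here we simply mark the component inactive.
        self.active = False
        print("no reprieve received: quiescing")

unit = QuiesceComponent(timeout_s=2.0)
unit.start()
unit.stay_alive()   # each reprieve postpones quiescence by another timeout period
```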

  10. Advances in Autonomous Systems for Missions of Space Exploration

    NASA Astrophysics Data System (ADS)

    Gross, A. R.; Smith, B. D.; Briggs, G. A.; Hieronymus, J.; Clancy, D. J.

    New missions of space exploration will require unprecedented levels of autonomy to successfully accomplish their objectives. Both inherent complexity and communication distances will preclude levels of human involvement common to current and previous space flight missions. With exponentially increasing capabilities of computer hardware and software, including networks and communication systems, a new balance of work is being developed between humans and machines. This new balance holds the promise of meeting the greatly increased space exploration requirements, along with dramatically reduced design, development, test, and operating costs. New information technologies, which take advantage of knowledge-based software, model-based reasoning, and high performance computer systems, will enable the development of a new generation of design and development tools, schedulers, and vehicle and system health monitoring and maintenance capabilities. Such tools will provide a degree of machine intelligence and associated autonomy that has previously been unavailable. These capabilities are critical to the future of space exploration, since the science and operational requirements specified by such missions, as well as the budgetary constraints, limit the ability to monitor and control these missions with a standing army of ground-based controllers. System autonomy capabilities have made great strides in recent years, for both ground and space flight applications. Autonomous systems have flown on advanced spacecraft, providing new levels of spacecraft capability and mission safety. Such systems operate by utilizing model-based reasoning that provides the capability to work from high-level mission goals, while deriving the detailed system commands internally, rather than having such commands transmitted from Earth. This enables missions of such complexity and communications distance as are not otherwise possible, as well as many more efficient and lower-cost operations.

  11. Demonstration of a semi-autonomous hybrid brain-machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic.

    PubMed

    McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E

    2014-07-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs.

  12. Demonstration of a Semi-Autonomous Hybrid Brain-Machine Interface using Human Intracranial EEG, Eye Tracking, and Computer Vision to Control a Robotic Upper Limb Prosthetic

    PubMed Central

    McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.

    2014-01-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914

  13. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    NASA Astrophysics Data System (ADS)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for the quadruped robot's autonomous navigation system while walking over rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the matched edge pixel pairs, 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs the 3D scene quickly and efficiently.
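
    To make the binocular imaging model concrete, the sketch below triangulates a 3D point from a matched pixel pair in a rectified stereo rig, where depth follows from disparity, focal length and baseline. The calibration numbers are example values, not the robot's.

```python
# Sketch: recover 3D coordinates from a matched pixel pair in a rectified
# binocular stereo rig (depth = f * B / disparity). Calibration values are examples.
import numpy as np

f_px = 700.0          # focal length in pixels
baseline_m = 0.12     # distance between the two cameras [m]
cx, cy = 320.0, 240.0 # principal point of the left camera

def triangulate(u_left, v_left, u_right):
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    z = f_px * baseline_m / disparity            # depth along the optical axis
    x = (u_left - cx) * z / f_px                 # lateral offset
    y = (v_left - cy) * z / f_px                 # vertical offset
    return np.array([x, y, z])

# matched edge pixel: (u, v) in the left image and u in the right image
print(triangulate(352.0, 260.0, 337.0))          # -> approx [0.26, 0.16, 5.6] metres
```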

  14. Guidance, navigation and control system for autonomous proximity operations and docking of spacecraft

    NASA Astrophysics Data System (ADS)

    Lee, Daero

    This study develops an integrated guidance, navigation and control system for use in autonomous proximity operations and docking of spacecraft. A new approach strategy is proposed based on a modified system developed for use with the International Space Station. It is composed of three "V-bar hops" in the closing transfer phase, two periods of stationkeeping and a "straight line V-bar" approach to the docking port. Guidance, navigation and control functions are independently designed and are then integrated in the form of linear Gaussian-type control. The translational maneuvers are determined through the integration of the state-dependent Riccati equation control formulated using the nonlinear relative motion dynamics with the weight matrices adjusted at the steady state condition. The reference state is provided by a guidance function, and the relative navigation is performed using a rendezvous laser vision system and a vision sensor system, where a sensor mode change is made along the approach in order to provide effective navigation. The rotational maneuvers are determined through a linear quadratic Gaussian-type control using star trackers and gyros, and a vision sensor. The attitude estimation mode change is made from absolute estimation to relative attitude estimation during the stationkeeping phase inside the approach corridor. The rotational controller provides the precise attitude control using weight matrices adjusted at the steady state condition, including the uncertainty of the moment of inertia and external disturbance torques. A six degree-of-freedom simulation demonstrates that the newly developed GNC system successfully autonomously performs proximity operations and meets the conditions for entering the final docking phase.
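
    The state-dependent Riccati equation control mentioned above can be sketched in a few lines: at each step the nonlinear dynamics are factored into state-dependent matrices, an algebraic Riccati equation is solved at the current state, and an LQR-style gain is applied. The toy dynamics and weights below are placeholders, not the paper's relative-motion model.

```python
# Sketch of one state-dependent Riccati equation (SDRE) control step:
# factor the nonlinear dynamics as xdot = A(x) x + B u, solve the algebraic
# Riccati equation at the current state, and apply u = -R^-1 B^T P x.
# The toy dynamics and weights are placeholders only.
import numpy as np
from scipy.linalg import solve_continuous_are

def A_of_x(x):
    # state-dependent factorization of a toy nonlinear oscillator
    return np.array([[0.0, 1.0],
                     [-1.0 - 0.1 * x[0]**2, -0.2]])

B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weights
R = np.array([[0.5]])      # control weight

def sdre_control(x):
    A = A_of_x(x)
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)    # K = R^-1 B^T P
    return -K @ x

x = np.array([1.0, 0.0])
print("control command:", sdre_control(x))
```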

  15. A design strategy for autonomous systems

    NASA Technical Reports Server (NTRS)

    Forster, Pete

    1989-01-01

    Some solutions to crucial issues regarding the competent performance of an autonomously operating robot are identified; namely, handling multiple and variable data sources containing overlapping information, and maintaining coherent operation while responding adequately to changes in the environment. Support for the ideas developed for the construction of such behavior is drawn from speculations in the study of cognitive psychology, an understanding of the behavior of controlled mechanisms, and the development of behavior-based robots in a few robot research laboratories. The validity of these ideas is supported by some simple simulation experiments in the field of mobile robot navigation and guidance.

  16. Autonomic Cluster Management System (ACMS): A Demonstration of Autonomic Principles at Work

    NASA Technical Reports Server (NTRS)

    Baldassari, James D.; Kopec, Christopher L.; Leshay, Eric S.; Truszkowski, Walt; Finkel, David

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of achieving significant computational capabilities for high-performance computing applications, while simultaneously affording the ability to increase that capability simply by adding more (inexpensive) processors. However, the task of manually managing and configuring a cluster quickly becomes impossible as the cluster grows in size. Autonomic computing is a relatively new approach to managing complex systems that can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management.

  17. Autonomous navigation system based on GPS and magnetometer data

    NASA Technical Reports Server (NTRS)

    Julie, Thienel K. (Inventor); Richard, Harman R. (Inventor); Bar-Itzhack, Itzhack Y. (Inventor)

    2004-01-01

    This invention is drawn to an autonomous navigation system using the Global Positioning System (GPS) and magnetometers for low Earth orbit satellites. As a magnetometer is reliable and always provides information on spacecraft attitude, rate, and orbit, the magnetometer-GPS configuration solves the GPS initialization problem, decreasing the convergence time for the navigation estimate and improving the overall accuracy. Eventually the magnetometer-GPS configuration enables the system to avoid a costly and inherently less reliable gyro for rate estimation. Being autonomous, this invention provides for black-box spacecraft navigation, producing attitude, orbit, and rate estimates with high accuracy and reliability and without any ground input.

  18. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  19. The Tactile Vision Substitution System: Applications in Education and Employment

    ERIC Educational Resources Information Center

    Scadden, Lawrence A.

    1974-01-01

    The Tactile Vision Substitution System converts the visual image from a narrow-angle television camera to a tactual image on a 5-inch square, 100-point display of vibrators placed against the abdomen of the blind person. (Author)

  20. Autonomous Dispersed Control System for Independent Micro Grid

    NASA Astrophysics Data System (ADS)

    Kawasaki, Kensuke; Matsumura, Shigenori; Iwabu, Koichi; Fujimura, Naoto; Iima, Takahito

    In this paper, we describe an autonomous dispersed control system for an independent micro grid whose performance has been demonstrated in China by Shikoku Electric Power Co. and its subsidiary companies under the trust of NEDO (New Energy and Industrial Technology Development Organization). For the control of grid-interconnected generators, an exclusive information line is very important to save fuel cost and maintain high frequency quality of the electric power supply, but it is relatively expensive in such a small micro grid. We contrived an autonomous dispersed control system without any exclusive information line for dispatching control and supply-adjustment control. Through the demonstration project in China, we have confirmed that this autonomous dispersed control system for an independent micro grid performs well in terms of low fuel consumption and high power quality.

  1. Building Artificial Vision Systems with Machine Learning

    SciTech Connect

    LeCun, Yann

    2011-02-23

    Three questions pose the next challenge for Artificial Intelligence (AI), robotics, and neuroscience. How do we learn perception (e.g. vision)? How do we learn representations of the perceptual world? How do we learn visual categories from just a few examples?

  2. Autonomous-Control Concept For Instrument Pointing System

    NASA Technical Reports Server (NTRS)

    Mettler, Edward; Milman, Mark H.; Bayard, David S.

    1990-01-01

    The integrated payload articulation and identification system (IPAIDS) is a conceptual system to control the aiming of instruments aboard spacecraft of the proposed Earth Observation System (EOS). Principal features of the concept include advanced control strategies intended to assure robustness of performance over a wide range of uncertainties in the characteristics of the spacecraft and instrument system. Although intended originally for application to spacecraft systems, it has potential utility on Earth for automatic control of autonomous (robotic) vehicles or of remote sensing systems.

  3. Vision/INS Integrated Navigation System for Poor Vision Navigation Environments.

    PubMed

    Kim, Youngsun; Hwang, Dong-Hwan

    2016-10-12

    In order to improve the performance of an inertial navigation system, many aiding sensors can be used. Among these aiding sensors, a vision sensor is of particular note due to its benefits in terms of weight, cost, and power consumption. This paper proposes an inertial and vision integrated navigation method for poor vision navigation environments. The proposed method uses focal plane measurements of landmarks in order to provide position, velocity and attitude outputs even when the number of landmarks on the focal plane is not enough for navigation. In order to verify the proposed method, computer simulations and van tests are carried out. The results show that the proposed method gives accurate and reliable position, velocity and attitude outputs when the number of landmarks is insufficient.
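
    To illustrate the focal-plane landmark measurements used to aid the INS, the sketch below projects known landmark positions into a pinhole camera at the current pose estimate and forms the residuals an integration filter would consume. The geometry, intrinsics and observed pixel values are invented.

```python
# Sketch: predict focal-plane measurements of known landmarks from a pose estimate
# (pinhole model) and form the residuals an INS-aiding filter would use.
# Landmark positions, pose, and camera intrinsics are invented example values.
import numpy as np

f_px, cx, cy = 800.0, 320.0, 240.0   # camera intrinsics

def project(landmark_w, p_wc, R_cw):
    """Project a world-frame landmark into the image of a camera at position p_wc
    with world-to-camera rotation R_cw (camera: x right, y down, z forward)."""
    p_c = R_cw @ (landmark_w - p_wc)          # landmark in camera frame
    u = f_px * p_c[0] / p_c[2] + cx
    v = f_px * p_c[1] / p_c[2] + cy
    return np.array([u, v])

landmarks = np.array([[10.0,  2.0, 0.0],
                      [12.0, -1.0, 0.5]])
p_wc = np.array([0.0, 0.0, 1.5])              # estimated camera position
R_cw = np.array([[0.0, -1.0, 0.0],            # camera looks along the world x axis
                 [0.0,  0.0, -1.0],
                 [1.0,  0.0, 0.0]])

observed = np.array([[165.0, 355.0],          # measured image coordinates
                     [390.0, 310.0]])
predicted = np.array([project(lm, p_wc, R_cw) for lm in landmarks])
residuals = observed - predicted              # innovations driving the INS correction
print(residuals)
```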

  4. Vision/INS Integrated Navigation System for Poor Vision Navigation Environments

    PubMed Central

    Kim, Youngsun; Hwang, Dong-Hwan

    2016-01-01

    In order to improve the performance of an inertial navigation system, many aiding sensors can be used. Among these aiding sensors, a vision sensor is of particular note due to its benefits in terms of weight, cost, and power consumption. This paper proposes an inertial and vision integrated navigation method for poor vision navigation environments. The proposed method uses focal plane measurements of landmarks in order to provide position, velocity and attitude outputs even when the number of landmarks on the focal plane is not enough for navigation. In order to verify the proposed method, computer simulations and van tests are carried out. The results show that the proposed method gives accurate and reliable position, velocity and attitude outputs when the number of landmarks is insufficient. PMID:27754350

  5. Space station automation study: Autonomous systems and assembly, volume 2

    NASA Technical Reports Server (NTRS)

    Bradford, K. Z.

    1984-01-01

    This final report, prepared by Martin Marietta Denver Aerospace, provides the technical results of their input to the Space Station Automation Study, the purpose of which is to develop informed technical guidance in the use of autonomous systems to implement space station functions, many of which can be programmed in advance and are well suited for automated systems.

  6. An autonomous control system for boiler-turbine units

    SciTech Connect

    Ben-Abdennour, A.; Lee, K.Y.

    1996-06-01

    Achieving a more autonomous power plant operation is an important part of power plant control. To be autonomous, a control system needs to provide adequate control actions in the presence of significant uncertainties and/or disturbances, such as actuator or component failures, with minimum or no human assistance. However, a reasonable degree of autonomy is difficult to obtain without incorporating intelligence in the control system. This paper presents a coordinated intelligent control scheme with a high degree of autonomy. In this scheme, a fuzzy-logic-based supervisor monitors the overall plant operation and carries out the tasks of coordination, fault diagnosis, fault isolation, and fault accommodation.

  7. REACT - A Third Generation Language For Autonomous Robot Systems

    NASA Astrophysics Data System (ADS)

    Longley, Maxwell J.; Owens, John; Allen, Charles R.; Ratcliff, Karl

    1990-03-01

    REACT is a language under development at Newcastle for the programming of autonomous robot systems, which uses AI constructs and sensor information to respond to failures in assumptions about the real world by replanning a task. This paper describes the important features of a REACT-programmed robotic system, and the results of some initial studies made on defining an executive language using a concept called visibility sets. Several language constructs are then applied to specific examples, e.g. a white line follower and a railway network controller. The applicability of visibility sets to autonomous robots is evaluated.

  8. Turning a remotely controllable observatory into a fully autonomous system

    NASA Astrophysics Data System (ADS)

    Swindell, Scott; Johnson, Chris; Gabor, Paul; Zareba, Grzegorz; Kubánek, Petr; Prouza, Michael

    2014-08-01

    We describe a complex process needed to turn an existing, old, operational observatory - the Steward Observatory's 61" Kuiper Telescope - into a fully autonomous system, which observes without an observer. For this purpose, we employed RTS2, an open-source, Linux-based observatory control system, together with other open-source programs and tools (GNU compilers, the Python language for scripting, JQuery UI for the Web user interface). This presentation provides a guide, with time estimates, for newcomers to the field to handle such challenging tasks as fully autonomous observatory operations.

  9. Autonomous Control and Diagnostics of Space Reactor Systems

    SciTech Connect

    Upadhyaya, B.R.; Xu, X.; Perillo, S.R.P.; Na, M.G.

    2006-07-01

    This paper describes three key features of the development of an autonomous control strategy for space reactor systems. These include the development of a reactor simulation model for transient analysis, development of model-predictive control as part of the autonomous control strategy, and a fault detection and isolation module. The latter is interfaced with the control supervisor as part of a hierarchical control system. The approach has been applied to the nodal model of the SP-100 reactor with a thermo-electric generator. The results of application demonstrate the effectiveness of the control approach and its ability to reconfigure the control mode under fault conditions. (authors)

  10. Autonomous satellite navigation with the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Fuchs, A. J.; Wooden, W. H., II; Long, A. C.

    1977-01-01

    This paper discusses the potential of using the Global Positioning System (GPS) to provide autonomous navigation capability to NASA satellites in the 1980 era. Some of the driving forces motivating autonomous navigation are presented. These include such factors as advances in attitude control systems, onboard science annotation, and onboard gridding of imaging data. Simulation results which demonstrate baseline orbit determination accuracies using GPS data on Seasat, Landsat-D, and the Solar Maximum Mission are presented. Emphasis is placed on identifying error sources such as GPS time, GPS ephemeris, user timing biases, and user orbit dynamics, and in a parametric sense on evaluating their contribution to the orbit determination accuracies.

  11. Panoramic stereo sphere vision

    NASA Astrophysics Data System (ADS)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panorama vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, which is capable of producing 3D coordinate information for the whole observation space and simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, mathematical model and parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications which will benefit from PSSV.
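
    A sketch of the kind of mapping PSSV relies on: under an equidistant fish-eye model each pixel corresponds to a direction on the viewing sphere, so two such lenses back to back can cover the full sphere. The intrinsics below are placeholders; the actual PSSV combined-lens calibration is more elaborate.

```python
# Sketch: map a fish-eye pixel to a unit direction on the viewing sphere under an
# equidistant projection model (r = f * theta). Intrinsics are placeholder values.
import numpy as np

f_px = 300.0            # equidistant focal length in pixels
cx, cy = 640.0, 640.0   # principal point (image centre)

def pixel_to_ray(u, v):
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    theta = r / f_px                       # angle from the optical axis
    phi = np.arctan2(dy, dx)               # azimuth around the optical axis
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])       # unit vector, z along the optical axis

print(pixel_to_ray(640.0, 640.0))          # centre pixel -> [0, 0, 1]
print(pixel_to_ray(940.0, 640.0))          # 300 px off-centre -> about 57 deg off-axis
```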

  12. Trust Management in Swarm-Based Autonomic Computing Systems

    SciTech Connect

    Maiden, Wendy M.; Haack, Jereme N.; Fink, Glenn A.; McKinnon, Archibald D.; Fulp, Errin W.

    2009-07-07

    Reputation-based trust management techniques can address issues such as insider threat as well as quality of service issues that may be malicious in nature. However, trust management techniques must be adapted to the unique needs of the architectures and problem domains to which they are applied. Certain characteristics of swarms such as their lightweight ephemeral nature and indirect communication make this adaptation especially challenging. In this paper we look at the trust issues and opportunities in mobile agent swarm-based autonomic systems and find that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarms. We also analyze the applicability of trust management research as it has been applied to architectures with similar characteristics. Finally, we specify required characteristics for trust management mechanisms to be used to monitor the trustworthiness of the entities in a swarm-based autonomic computing system.

  13. Music and Autonomic Nervous System (Dys)function

    PubMed Central

    Ellis, Robert J.; Thayer, Julian F.

    2010-01-01

    Despite a wealth of evidence for the involvement of the autonomic nervous system (ANS) in health and disease and the ability of music to affect ANS activity, few studies have systematically explored the therapeutic effects of music on ANS dysfunction. Furthermore, when ANS activity is quantified and analyzed, it is usually from a point of convenience rather than from an understanding of its physiological basis. After a review of the experimental and therapeutic literatures exploring music and the ANS, a “Neurovisceral Integration” perspective on the interplay between the central and autonomic nervous systems is introduced, and the associated implications for physiological, emotional, and cognitive health are explored. The construct of heart rate variability is discussed both as an example of this complex interplay and as a useful metric for exploring the sometimes subtle effect of music on autonomic response. Suggestions for future investigations using musical interventions are offered based on this integrative account. PMID:21197136
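
    Since heart rate variability is put forward as a useful metric of autonomic response, a small sketch of two common time-domain HRV measures computed from successive R-R intervals may help; the interval values are invented for illustration.

```python
# Sketch: two common time-domain heart rate variability (HRV) measures computed
# from successive R-R intervals (in milliseconds). Interval values are invented.
import numpy as np

rr_ms = np.array([812, 830, 795, 805, 840, 822, 798, 810, 825, 815], dtype=float)

sdnn = np.std(rr_ms, ddof=1)                 # overall variability of R-R intervals
diffs = np.diff(rr_ms)
rmssd = np.sqrt(np.mean(diffs**2))           # beat-to-beat (vagally mediated) variability

print(f"SDNN  = {sdnn:.1f} ms")
print(f"RMSSD = {rmssd:.1f} ms")
```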

  14. Area scanning vision inspection system by using mirror control

    NASA Astrophysics Data System (ADS)

    Jeong, Sang Y.; Min, Sungwook; Yang, Wonyoung

    2001-02-01

    As the pressure increases to deliver vision products with faster speed while inspecting at higher resolution and lower cost, an area scanning vision inspection system can be one of the good solutions. To inspect a large area with high resolution, a conventional vision system requires moving either the camera or the target; therefore, the system suffers from low speed and high cost due to the requirement for a mechanical motion system or a higher-resolution camera. Because only tiny mirror angle movements are required to change the field of view, the XY-mirror-controlled area scanning vision system can capture images of arbitrary areas at high speed. Elimination of an external precision motion mechanism is another benefit of mirror control. The image distortion due to the lens and the mirror system is automatically compensated right after each image is captured, so that absolute coordinates can be calculated in real time. A motorized focusing system is used for large-area inspection, so that proper focus is achieved for the variable working distance between lens and target by synchronization with the mirror scanning system. Using the XY-mirror-controlled area scanning vision inspection system, a fast and economical system can be integrated while inducing no vibration and requiring less space. This paper describes the principle of the area scanning method, the optical effects of the scanning, the position calibration method, the inspection flow and some implementation results.
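
    To illustrate how small mirror rotations steer the field of view, the sketch below computes the X and Y mirror angles needed to aim at a target point on the inspection plane, together with the working distance used to drive the motorized focus. The factor of two reflects that rotating a mirror deflects the reflected beam by twice the rotation angle; all geometry values are illustrative.

```python
# Sketch: choose XY mirror angles to aim at a point (x, y) on the inspection plane
# and compute the working distance for focus synchronization. A mirror rotated by
# angle a deflects the reflected beam by 2a; all geometry values are illustrative.
import numpy as np

standoff_mm = 400.0   # distance from the mirror pair to the inspection plane

def aim(x_mm, y_mm):
    beam_x = np.arctan2(x_mm, standoff_mm)       # required beam deflection angles
    beam_y = np.arctan2(y_mm, standoff_mm)
    mirror_x = beam_x / 2.0                      # mirror rotates half the beam angle
    mirror_y = beam_y / 2.0
    working_distance = np.sqrt(standoff_mm**2 + x_mm**2 + y_mm**2)
    return np.degrees(mirror_x), np.degrees(mirror_y), working_distance

mx, my, wd = aim(80.0, -50.0)
print(f"mirror angles: {mx:.2f} deg, {my:.2f} deg; working distance: {wd:.1f} mm")
```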

  15. Planning In A Hierarchical Nested Autonomous Control System

    NASA Astrophysics Data System (ADS)

    Meystel, A.

    1987-02-01

    In this paper, theoretical foundations of planning processes are outlined in a form applicable for design and control of autonomous mobile robots. Planning/control is shown to be a unified recursive operation of decision making applied to a nested hierarchy of knowledge representation. The core of the theory is based upon methods developed in the areas of Post-production systems, theory of coding, and the team theory of decentralized stochastic control. A class of autonomous control systems for robots is defined, and a problem of information representation is addressed for this class. A phenomenon of nesting is analyzed and the minimum ε-entropy rule is determined for arranging efficient design and control procedures for systems of intelligent control. A concept of nested hierarchical knowledge-based controller is employed in this paper which enables minimum-time control using nested dynamic programming. An application of this concept is unfolded for a system of knowledge-based control of an autonomous mobile robot. Key words: Autonomous Control Systems, Decision Making, Production Systems, Decentralized Stochastic Control, Dynamic Programming, Hierarchical Control, Knowledge Based Controllers, ε-entropy, Planning, Navigation, Guidance, Prediction, Contingencies, Mobile Robots.

  16. Latency in Visionic Systems: Test Methods and Requirements

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies, including those of the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.

  17. The research on projective visual system of night vision goggles

    NASA Astrophysics Data System (ADS)

    Zhao, Shun-long

    2009-07-01

    Driven by the need for lightweight night vision goggles with good performance, we apply a projection lens in night vision goggles to act as the visual system. A 40-deg FOV projection lens is provided. The useful diameter of the image intensifier is 16 mm, and the resolutions at the center and edge are both 60 lp/mm. The projection lens has a 28 mm diameter and 20 g weight. The maximum distortion of the system is less than 0.15%. The MTF remains above 0.6 at 60 lp/mm across the FOV, so the lens meets the requirements of the visual system. In addition, two types of projective visual system for night vision goggles are presented: the direct-view projective visual system and the see-through projective visual system. The see-through projective visual system enables the user to observe the object directly with the eyes, without any other action, when the environment suddenly becomes bright. Finally, we conclude that the projective system has advantages over the traditional eyepiece in night vision goggles: it reduces volume, lightens the load on the neck supports, and improves imaging quality. It provides a new idea and concept for visual system design in night vision goggles.

  18. A functional system architecture for fully autonomous robot

    NASA Astrophysics Data System (ADS)

    Kalaycioglu, S.

    The Mobile Servicing System (MSS) Autonomous Robotics Program intends to define and plan the development of technologies required to provide a supervised autonomous operation capability for the Special Purpose Dexterous Manipulator (SPDM) on the MSS. The operational functions for the SPDM to perform the required tasks, both in fully autonomous or supervised modes, are identified. Functional decomposition is performed using a graphics oriented methodology called Structural Analysis Design Technique. This process defines the functional architecture of the system, the types of data required to support its functionality, and the control processes that need to be emplaced. On the basis of the functional decomposition, a technology breakdown structure is also developed. A preliminary estimate of the status and maturity of each relevant technology is made, based on this technology breakdown. The developed functional hierarchy is found to be very effective for a robotic system with any level of autonomy. Moreover, this hierarchy can easily be applied to an existing very low level autonomous system and can provide a smooth transition towards a higher degree of autonomy. The effectiveness of the developed functional hierarchy will also play a very significant role both in the system design as well as in the development of the control hierarchy.

  19. Using Multimodal Input for Autonomous Decision Making for Unmanned Systems

    NASA Technical Reports Server (NTRS)

    Neilan, James H.; Cross, Charles; Rothhaar, Paul; Tran, Loc; Motter, Mark; Qualls, Garry; Trujillo, Anna; Allen, B. Danette

    2016-01-01

    Autonomous decision making in the presence of uncertainty is a deeply studied problem space, particularly in the area of autonomous systems operations for land, air, sea, and space vehicles. Various techniques ranging from single-algorithm solutions to complex ensemble classifier systems have been utilized in a research context in solving mission-critical flight decisions. Realizing such systems on actual autonomous hardware, however, is a difficult systems integration problem, constituting a majority of applied robotics development timelines. The ability to reliably and repeatedly classify objects during a vehicle's mission execution is vital for the vehicle to mitigate both static and dynamic environmental concerns, so that the mission may be completed successfully and the vehicle can operate and return safely. In this paper, the Autonomy Incubator proposes and discusses an ensemble learning and recognition system planned for our autonomous framework, AEON, in selected domains, which fuses decision criteria using prior experience at both the individual classifier layer and the ensemble layer to mitigate environmental uncertainty during operation.

  20. Mamdani Fuzzy System for Indoor Autonomous Mobile Robot

    NASA Astrophysics Data System (ADS)

    Khan, M. K. A. Ahamed; Rashid, Razif; Elamvazuthi, I.

    2011-06-01

    Several control algorithms for autonomous mobile robot navigation have been proposed in the literature. Recently, the employment of non-analytical methods of computing, such as fuzzy logic, evolutionary computation, and neural networks, has demonstrated the utility and potential of these paradigms for intelligent control of mobile robot navigation. In this paper, a Mamdani fuzzy system for an autonomous mobile robot is developed. The paper begins with a discussion of the conventional controller, followed by a detailed description of the fuzzy logic controller.
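
    A minimal sketch of Mamdani-style inference for obstacle avoidance: a crisp obstacle distance is fuzzified, rules clip the output membership functions, the clipped sets are aggregated with max, and the result is defuzzified by centroid to give a steering command. The membership shapes and rules are invented for illustration.

```python
# Sketch of Mamdani fuzzy inference for a mobile robot: fuzzify obstacle distance,
# apply rules that clip the steering output sets, aggregate with max, and
# defuzzify by centroid. Membership functions and rules are invented examples.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def shoulder_left(x, a, b):
    """Membership 1 below a, falling to 0 at b."""
    return np.clip((b - x) / (b - a), 0.0, 1.0)

def shoulder_right(x, a, b):
    """Membership 0 below a, rising to 1 at b."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

steer = np.linspace(-30.0, 30.0, 601)          # steering universe [deg]
turn_hard   = tri(steer, 10.0, 20.0, 30.0)     # output fuzzy sets
turn_slight = tri(steer, 0.0, 10.0, 20.0)
straight    = tri(steer, -10.0, 0.0, 10.0)

def mamdani_steering(distance_m):
    near   = shoulder_left(distance_m, 0.5, 1.5)   # fuzzified obstacle distance
    medium = tri(distance_m, 0.5, 1.5, 2.5)
    far    = shoulder_right(distance_m, 2.0, 3.5)
    # rules: IF near THEN turn hard; IF medium THEN turn slightly; IF far THEN go straight
    aggregated = np.maximum.reduce([
        np.minimum(near, turn_hard),
        np.minimum(medium, turn_slight),
        np.minimum(far, straight),
    ])
    return np.sum(steer * aggregated) / np.sum(aggregated)   # centroid defuzzification

print(f"steering at 0.8 m from obstacle: {mamdani_steering(0.8):+.1f} deg")
print(f"steering at 3.5 m from obstacle: {mamdani_steering(3.5):+.1f} deg")
```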

  1. Autonomous Frequency-Domain System-Identification Program

    NASA Technical Reports Server (NTRS)

    Yam, Yeung; Mettler, Edward; Bayard, David S.; Hadaegh, Fred Y.; Milman, Mark H.; Scheid, Robert E.

    1993-01-01

    The Autonomous Frequency Domain Identification (AU-FREDI) computer program implements a system of methods, algorithms, and software developed for identifying the parameters of mathematical models of the dynamics of flexible structures and for characterizing, by use of system transfer functions, such models, dynamics, and structures regarded as systems. The software is a collection of routines modified and reassembled to suit system-identification and control experiments on large flexible structures.

  2. A Vision for Systems Engineering Applied to Wind Energy (Presentation)

    SciTech Connect

    Felker, F.; Dykes, K.

    2015-01-01

    This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.

  3. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    SciTech Connect

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implication to security, (5) Digital rights management, and (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  4. Modeling and Control Strategies for Autonomous Robotic Systems

    DTIC Science & Technology

    1991-12-23

    [Report documentation page residue; recoverable details: personal author Roger W. Brockett; final report; Research Triangle Park, NC 27709-2211; title: Modeling and Control Strategies for Autonomous Robotic Systems.]

  5. Is There Anything "Autonomous" in the Nervous System?

    ERIC Educational Resources Information Center

    Rasia-Filho, Alberto A.

    2006-01-01

    The terms "autonomous" or "vegetative" are currently used to identify one part of the nervous system composed of sympathetic, parasympathetic, and gastrointestinal divisions. However, the concepts that are under the literal meaning of these words can lead to misconceptions about the actual nervous organization. Some clear-cut examples indicate…

  6. Cancellation of the Army’s Autonomous Navigation System

    DTIC Science & Technology

    2012-08-02

    [Excerpt from a flattened table of systems with ANS-like capabilities identified by the Army Red Team:] Cargo-UGV (Marine Warfighting Lab) -- Remote Operation, Vehicle Leader/Follower, Road Following; Ground Unmanned Support Surrogate (Marine Warfighting Lab) -- Remote Operation, Soldier Leader/Follower; Convoy Active Safety Technologies (Army Tank and Automotive Research, Development, and Engineering Command) -- Remote Operation, Vehicle Leader/Follower, Road Following; Mobile Autonomous ... [truncated]

  7. Random attractor of non-autonomous stochastic Boussinesq lattice system

    SciTech Connect

    Zhao, Min Zhou, Shengfan

    2015-09-15

    In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations affected by time-dependent coupled coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of random attractors as the intensity of the noise approaches zero.

  8. Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle

    PubMed Central

    Chen, Long; Li, Qingquan; Li, Ming; Zhang, Liang; Mao, Qingzhou

    2012-01-01

    This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. This system utilizes the cooperation of multiple lasers and cameras to realize several functions necessary for autonomous navigation: road curb detection, lane detection, and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on a Z-variance method. Vision-based lane detection is realized by a two-scan method combined with an image model. A Haar-like-feature-based method is applied for traffic sign detection, and a SURF matching method is used for sign classification. The results of experiments validate the effectiveness of the proposed algorithms and the whole system.
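
    The detect-then-classify pattern described above can be sketched with OpenCV as below. The cascade file name is a placeholder, and ORB is substituted for SURF (which sits in opencv-contrib and is patent-encumbered), so this illustrates the approach rather than the authors' implementation.

    ```python
    import cv2

    detector = cv2.CascadeClassifier("sign_cascade.xml")   # hypothetical trained Haar cascade
    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def classify_signs(frame_bgr, template_descriptors):
        """template_descriptors: {sign_name: precomputed ORB descriptors of a reference image}."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        results = []
        for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
            _, desc = orb.detectAndCompute(gray[y:y + h, x:x + w], None)
            if desc is None:
                continue
            # Classify the detection as the template with the most descriptor matches.
            best = max(template_descriptors,
                       key=lambda name: len(matcher.match(desc, template_descriptors[name])))
            results.append((best, (x, y, w, h)))
        return results
    ```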

  9. Multiple-channel Streaming Delivery for Omnidirectional Vision System

    NASA Astrophysics Data System (ADS)

    Iwai, Yoshio; Nagahara, Hajime; Yachida, Masahiko

    An omnidirectional vision system is an imaging system that can capture the surrounding scene in all directions by using a hyperbolic mirror and a conventional CCD camera. This paper proposes a streaming server that can efficiently transfer movies captured by an omnidirectional vision system over the Internet. The proposed system uses multiple channels to deliver multiple movies synchronously. Through this method, the system enables clients to view different directions of the omnidirectional movies and also supports changing the viewing area during playback. Our evaluation experiments show that the proposed streaming server can effectively deliver multiple movies via multiple channels.
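
    A client-selectable viewing direction is commonly obtained by unwarping the mirror-centered frame into a panoramic strip. The sketch below does this with a plain polar remap; the mirror center and radii are placeholder values, and a real system would use the calibrated hyperbolic-mirror geometry.

    ```python
    import cv2
    import numpy as np

    def unwrap_panorama(frame, center, r_inner, r_outer, out_w=1440, out_h=240):
        angles = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
        radii = np.linspace(r_inner, r_outer, out_h)
        map_x = (center[0] + np.outer(radii, np.cos(angles))).astype(np.float32)
        map_y = (center[1] + np.outer(radii, np.sin(angles))).astype(np.float32)
        return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)

    # Example: crop a 90-degree slice of the panorama at a client-selected azimuth.
    # pano = unwrap_panorama(frame, center=(640, 480), r_inner=80, r_outer=400)
    # start = int(azimuth_deg / 360 * pano.shape[1])
    # view = np.roll(pano, -start, axis=1)[:, :pano.shape[1] // 4]
    ```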

  10. Machine vision system for online inspection of freshly slaughtered chickens

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A machine vision system was developed and evaluated for the automation of online inspection to differentiate freshly slaughtered wholesome chickens from systemically diseased chickens. The system consisted of an electron-multiplying charge-coupled-device camera used with an imaging spectrograph and ...

  11. Blackboard architectures and their relationship to autonomous space systems

    NASA Technical Reports Server (NTRS)

    Thornbrugh, Allison

    1988-01-01

    The blackboard architecture provides a powerful paradigm for the autonomy expected in future spaceborne systems, especially SDI and Space Station. Autonomous systems will require skill in both the classic task of information analysis and the newer tasks of decision making, planning and system control. Successful blackboard systems have been built to deal with each of these tasks separately. The blackboard paradigm achieves success in difficult domains through its ability to integrate several uncertain sources of knowledge. In addition to flexible behavior during autonomous operation, the system must also be capable of incrementally growing from semiautonomy to full autonomy. The blackboard structure allows this development. The blackboard's ability to handle error, its flexible execution, and variants of this paradigm are discussed as they apply to specific problems of the space environment.
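
    The control flow of the blackboard paradigm (independent knowledge sources posting and refining hypotheses on a shared data store, selected opportunistically by a scheduler) can be sketched as below. The class names and the fault-management example are illustrative assumptions, not drawn from the cited work.

    ```python
    class Blackboard:
        def __init__(self):
            self.data = {}

    class SensorAnalysis:
        def applicable(self, bb):
            return "raw_telemetry" in bb.data and "fault_hypothesis" not in bb.data
        def execute(self, bb):                      # posts an uncertain hypothesis
            bb.data["fault_hypothesis"] = ("battery_undervoltage", 0.7)

    class Planner:
        def applicable(self, bb):
            return "fault_hypothesis" in bb.data and "plan" not in bb.data
        def execute(self, bb):                      # refines the blackboard with a plan
            bb.data["plan"] = ["shed_noncritical_loads", "notify_ground"]

    def run(blackboard, knowledge_sources):
        progress = True
        while progress:                             # simple opportunistic scheduler
            progress = False
            for ks in knowledge_sources:
                if ks.applicable(blackboard):
                    ks.execute(blackboard)
                    progress = True

    bb = Blackboard()
    bb.data["raw_telemetry"] = {"bus_voltage": 24.1}
    run(bb, [SensorAnalysis(), Planner()])
    print(bb.data["plan"])
    ```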

  12. Machine vision system for inspecting characteristics of hybrid rice seed

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-03-01

    Obtaining clear images, which helps improve classification accuracy, involves many factors; the light source, lens extender, and background are discussed in this paper. Analysis of rice seed reflectance curves showed that the light-source wavelength for discriminating diseased seeds from normal rice seeds in the monochromatic image recognition mode is about 815 nm for jinyou402 and shanyou10. To determine optimal conditions for acquiring digital images of rice seed with a computer vision system, an adjustable color machine vision system was developed. With a 20 mm to 25 mm lens extender, the machine vision system produces close-up images that make it easier to recognize the characteristics of hybrid rice seeds. A white background proved better than a black background for inspecting disease-infected rice seeds and for shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system, and the same algorithms yielded better results under the optimized conditions for rice seed quality inspection. Specifically, the image processing can resolve details such as fine fissures with the machine vision system.
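
    A minimal version of the shape-based step described above is sketched below: segment a seed against the white background and compute a few contour features. The thresholds and feature choices are illustrative, not the paper's algorithms.

    ```python
    import cv2
    import numpy as np

    def seed_shape_features(gray_img):
        # Seeds are darker than the white background, so invert the Otsu threshold.
        _, mask = cv2.threshold(gray_img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        seed = max(contours, key=cv2.contourArea)          # largest blob is the seed
        area = cv2.contourArea(seed)
        x, y, w, h = cv2.boundingRect(seed)
        hull_area = cv2.contourArea(cv2.convexHull(seed))
        return {"area": area,
                "aspect_ratio": max(w, h) / max(min(w, h), 1),
                "solidity": area / max(hull_area, 1e-6)}   # low solidity may hint at defects
    ```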

  13. A modular real-time vision system for humanoid robots

    NASA Astrophysics Data System (ADS)

    Trifan, Alina L.; Neves, António J. R.; Lau, Nuno; Cunha, Bernardo

    2012-01-01

    Robotic vision is nowadays one of the most challenging branches of robotics. In the case of a humanoid robot, a robust vision system has to provide an accurate representation of the surrounding world and to cope with all the constraints imposed by the hardware architecture and the locomotion of the robot. Usually humanoid robots have low computational capabilities that limit the complexity of the developed algorithms. Moreover, their vision system should perform in real time; therefore, a compromise between complexity and processing time has to be found. This paper presents a reliable implementation of a modular vision system for a humanoid robot to be used in color-coded environments. From image acquisition, to camera calibration and object detection, the system that we propose integrates all the functionalities needed for a humanoid robot to accurately perform given tasks in color-coded environments. The main contributions of this paper are the implementation details that allow the use of the vision system in real time, even with low processing capabilities, the innovative self-calibration algorithm for the most important camera parameters, and its modularity, which allows its use with different robotic platforms. Experimental results have been obtained with a NAO robot produced by Aldebaran, which is currently the robotic platform used in the RoboCup Standard Platform League, as well as with a humanoid built using the Bioloid Expert Kit from Robotis. As practical examples, our vision system can be efficiently used in real time for the detection of the objects of interest for a soccer-playing robot (ball, field lines, and goals) as well as for navigating through a maze with the help of color-coded clues. In the worst-case scenario, all the objects of interest in a soccer game are detected in less than 30 ms using a NAO robot with a single-core 500 MHz processor. Our vision system also includes an algorithm for self-calibration of the camera parameters as well
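
    For color-coded environments of this kind, a lightweight detection step is typically HSV thresholding followed by blob extraction, as in the sketch below (shown here for an orange ball). The HSV bounds are illustrative assumptions that would normally come from the calibration step; this is not the authors' implementation.

    ```python
    import cv2
    import numpy as np

    def find_ball(frame_bgr, lower=(5, 120, 120), upper=(20, 255, 255)):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        (x, y), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
        return (int(x), int(y), int(radius)) if radius > 3 else None
    ```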

  14. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems.

    PubMed

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-12-17

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
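
    The fusion step referred to above is an extended Kalman filter; a generic predict/update skeleton is sketched below. The state vector, models, and noise matrices are placeholders for whatever the DGPS/Vision measurement model would supply, not the paper's filter design.

    ```python
    import numpy as np

    def ekf_step(x, P, u, z, f, F, h, H, Q, R):
        """One EKF cycle. f/h are process/measurement functions; F/H their Jacobians."""
        # Predict
        x_pred = f(x, u)
        F_k = F(x, u)
        P_pred = F_k @ P @ F_k.T + Q
        # Update with measurement z (e.g., a DGPS/vision-derived relative observation)
        H_k = H(x_pred)
        y = z - h(x_pred)                                   # innovation
        S = H_k @ P_pred @ H_k.T + R
        K = P_pred @ H_k.T @ np.linalg.inv(S)               # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
        return x_new, P_new
    ```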

  15. Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Young, Steven D.

    2005-01-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.

  16. Skin biopsies in the assessment of the autonomic nervous system.

    PubMed

    Wang, Ningshan; Gibbons, Christopher H

    2013-01-01

    Cutaneous punch biopsies are widely used to evaluate nociceptive C fibers in patients with suspected small-fiber neuropathy. Recent advances in immunohistochemical techniques and interest in cutaneous autonomic innervation has expanded the role of skin biopsy in the evaluation of the peripheral nervous system. The dermal layers of the skin provide a unique window into the structural evaluation of the autonomic nervous system. Peripheral adrenergic and cholinergic fibers innervate a number of cutaneous structures, such as sweat glands and arrector pili muscles, and can easily be seen with punch skin biopsies. Skin biopsies allow for both regional sampling, in diseases with patchy distribution, and the opportunity for repeated sampling in progressive disorders. The structural evaluation of cutaneous autonomic innervation is still in its scientific infancy, with a number of different methodologies and techniques that will require standardization and widespread acceptance before becoming a standard of care. Future studies of autonomic innervation in acquired, hereditary, neurodegenerative, or autoimmune disorders will be necessary to determine the clinical utility of skin biopsy in these disease states.

  17. Neural associative memories for the integration of language, vision and action in an autonomous agent.

    PubMed

    Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G

    2009-03-01

    Language understanding is a long-standing problem in computer science. However, the human brain is capable of processing complex languages with seemingly no difficulties. This paper shows a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities on the single word and grammatical level. The language system is embedded into a robot in order to demonstrate the correct semantical understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.

  18. Scheduling lessons learned from the Autonomous Power System

    NASA Technical Reports Server (NTRS)

    Ringer, Mark J.

    1992-01-01

    The Autonomous Power System (APS) project at NASA LeRC is designed to demonstrate the applications of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution systems. The project consists of three elements: the Autonomous Power Expert System (APEX) for Fault Diagnosis, Isolation, and Recovery (FDIR); the Autonomous Intelligent Power Scheduler (AIPS) to efficiently assign activity start times and resources; and power hardware (Brassboard) to emulate a space-based power system. The AIPS scheduler was tested within the APS system. This scheduler is able to efficiently assign available power to the requesting activities and share this information with other software agents within the APS system in order to implement the generated schedule. The AIPS scheduler is also able to cooperatively recover from fault situations by rescheduling the affected loads on the Brassboard in conjunction with the APEX FDIR system. AIPS served as a learning tool and an initial scheduling testbed for the integration of FDIR and automated scheduling systems. Many lessons were learned from the AIPS scheduler and are now being integrated into a new scheduler called SCRAP (Scheduler for Continuous Resource Allocation and Planning). This paper serves three purposes: an overview of the AIPS implementation, lessons learned from the AIPS scheduler, and a brief section on how these lessons are being applied to the new SCRAP scheduler.

  19. Global Positioning System Synchronized Active Light Autonomous Docking System

    NASA Technical Reports Server (NTRS)

    Howard, Richard (Inventor)

    1994-01-01

    A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprises at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.

  20. Global Positioning System Synchronized Active Light Autonomous Docking System

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor); Bell, Joseph L. (Inventor)

    1996-01-01

    A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprising at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.
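
    The synchronization described in both patents exists so that frames can be captured in and out of phase with the flashing target; differencing such frames isolates the target lights from the static background. The sketch below illustrates that idea; the blob-extraction details and threshold are assumptions, not the patented method.

    ```python
    import cv2

    def flashing_light_centroids(frame_lights_on, frame_lights_off, min_brightness=40):
        diff = cv2.absdiff(frame_lights_on, frame_lights_off)       # static scene cancels out
        gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, min_brightness, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 0:
                centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centroids            # image positions of the flashing target lights
    ```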

  1. Miniature Autonomous Rocket Recovery System (MARRS)

    DTIC Science & Technology

    2011-05-01

    [Nomenclature excerpt: ChIMU = Cheap Inertial Measurement Unit; GNC = Guidance, Navigation and Control; GPS = Global Positioning System; INS = Inertial Navigation System.] ... factors during deployment and the necessity to employ an integrated GPS/INS navigation system rather than just a GPS-based GNC system. This paper focuses on ... delivering payloads even closer to the target. Specifically, this paper explores a capability to utilize an advanced GNC system developed for the

  2. Multisensor robotic system for autonomous space maintenance and repair

    NASA Technical Reports Server (NTRS)

    Abidi, M. A.; Green, W. L.; Chandra, T.; Spears, J.

    1988-01-01

    The feasibility of realistic autonomous space manipulation tasks using multisensory information is demonstrated. The system is capable of acquiring, integrating, and interpreting multisensory data to locate, mate, and demate a Fluid Interchange System (FIS) and a Module Interchange System (MIS). In both cases, autonomous location of a guiding light target, mating, and demating of the system are performed. The implemented vision-driven techniques are used to determine the arbitrary two-dimensional position and orientation of the mating elements as well as the arbitrary three-dimensional position and orientation of the light targets. A force/torque sensor continuously monitors the six components of force and torque exerted on the end-effector. Both FIS and MIS experiments were successfully accomplished on mock-ups built for this purpose. The method is immune to variations in ambient light, in particular because of the 90-minute day-night shift in space.

  3. Toward autonomous driving: The CMU Navlab. II - Architecture and systems

    NASA Technical Reports Server (NTRS)

    Thorpe, Charles; Hebert, Martial; Kanade, Takeo; Shafer, Steven

    1991-01-01

    A description is given of EDDIE, the architecture for the Navlab mobile robot which provides a toolkit for building specific systems quickly and easily. Included in the discussion are the annotated maps used by EDDIE and the Navlab's road-following system, called the Autonomous Mail Vehicle, which was built using EDDIE and its annotated maps as a basis. The contributions of the Navlab project and the lessons learned from it are examined.

  4. Autonomous Systems in Human Behavior and Development

    ERIC Educational Resources Information Center

    Wolff, P.

    1974-01-01

    Reviews research which demonstrates that responses from different behavior systems to a given stimulus situation may be far from perfectly correlated with each other. Discusses the phylogenetic and ontogenetic development of these systems and the roles of both the species and the individual in bringing the systems into mutual correspondence.…

  5. The organization of an autonomous learning system

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1988-01-01

    The organization of systems that learn from experience is examined, human beings and animals being prime examples of such systems. How is their information processing organized? They build an internal model of the world and base their actions on the model. The model is dynamic and predictive, and it includes the systems' own actions and their effects. In modeling such systems, a large pattern of features represents a moment of the system's experience. Some of the features are provided by the system's senses, some control the system's motors, and the rest have no immediate external significance. A sequence of such patterns then represents the system's experience over time. By storing such sequences appropriately in memory, the system builds a world model based on experience. In addition to the essential function of memory, fundamental roles are played by a sensory system that makes raw information about the world suitable for memory storage and by a motor system that affects the world. The relation of sensory and motor systems to the memory is discussed, together with how favorable actions can be learned and unfavorable actions can be avoided. Results in classical learning theory are explained in terms of the model, more advanced forms of learning are discussed, and the relevance of the model to the frame problem of robotics is examined.
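
    A toy rendering of the idea of storing experience as sequences of patterns and recalling what tends to follow the current pattern is sketched below. It is a plain dictionary stand-in for illustration only, not Kanerva's sparse distributed memory.

    ```python
    from collections import defaultdict, Counter

    class SequenceMemory:
        def __init__(self):
            self.transitions = defaultdict(Counter)

        def store(self, sequence):
            for current, nxt in zip(sequence, sequence[1:]):
                self.transitions[current][nxt] += 1        # remember that nxt followed current

        def predict(self, pattern):
            followers = self.transitions.get(pattern)
            return followers.most_common(1)[0][0] if followers else None

    mem = SequenceMemory()
    mem.store(("see_wall", "turn_left", "see_door"))       # sensory + motor patterns over time
    print(mem.predict("see_wall"))                         # -> "turn_left"
    ```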

  6. Analysis of the development and the prospects about vehicular infrared night vision system

    NASA Astrophysics Data System (ADS)

    Li, Jing; Fan, Hua-ping; Xie, Zu-yun; Zhou, Xiao-hong; Yu, Hong-qiang; Huang, Hui

    2013-08-01

    Through the classification of vehicular infrared night vision systems and a comparison of the mainstream vehicular infrared night vision products, we summarize the functions of vehicular infrared night vision systems, which include night vision, defogging, strong-light resistance, and biological recognition. The markets for vehicular infrared night vision systems in high-end cars and in the fire protection industry are also analyzed. Finally, we conclude that the vehicular infrared night vision system will be adopted as an essential piece of active safety equipment, promoting both the night vision optoelectronics industry and the automobile industry.

  7. Large autonomous spacecraft electrical power system (LASEPS)

    NASA Technical Reports Server (NTRS)

    Dugal-Whitehead, Norma R.; Johnson, Yvette B.

    1992-01-01

    NASA - Marshall Space Flight Center is creating a large high voltage electrical power system testbed called LASEPS. This testbed is being developed to simulate an end-to-end power system from power generation and source to loads. When the system is completed it will have several power configurations, which will include several battery configurations. These configurations are: two 120 V batteries, one or two 150 V batteries, and one 250 to 270 V battery. This breadboard encompasses varying levels of autonomy from remote power converters to conventional software control to expert system control of the power system elements. In this paper, the construction and provisions of this breadboard are discussed.

  8. Autonomic nervous system activities during motor imagery in elite athletes.

    PubMed

    Oishi, Kazuo; Maeshima, Takashi

    2004-01-01

    Motor imagery (MI), a mental simulation of voluntary motor actions, has been used as a training method for athletes for many years. It is possible that MI techniques might similarly be useful as part of rehabilitative strategies to help people regain skills lost as a consequence of diseases or stroke. Mental activity and stress induce several different autonomic responses as part of the behavioral response to movement (e.g., motor anticipation) and as part of the central planning and preprogramming of movement. However, the interrelationships between MI, the autonomic responses, and the motor system have not yet been worked out. The authors compare a number of autonomic responses (respiration, heart rate, electro skin resistance) and motoneuron excitability (soleus H-reflex) in elite and nonelite speed skaters during MI. In contrast to the nonelite athletes, MI of elite speed skaters is characterized by larger changes in heart rate and respiration, a greater reliance on an internal perspective for MI, a more vivid MI, a more accurate correspondence between the MI and actual race times, and decreased motoneuron excitability. Two observations suggest that the changes in the autonomic responses and motoneuron excitability for the elite speed skaters are related to the effects of central motor programming: (1) there was no correlation between the autonomic responses for MI and those recorded during mental arithmetic; and (2) mental arithmetic did not significantly alter motoneuron activity. It is suggested that in elite speed skaters, the descending neural mechanisms that reduce motoneuron excitability are activated even when full, vivid MI is performed internally. These inhibitory responses of the motor system may enhance actual motor performance under conditions of remarkably high mental stress, such as that which occurs in the Olympic games.

  9. Workshop on Assurance for Autonomous Systems for Aviation

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Davies, Misty; Giannakopoulou, Dimitra; Neogi, Natasha

    2016-01-01

    This report describes the workshop on Assurance for Autonomous Systems for Aviation that was held in January 2016 in conjunction with the SciTech 2016 conference held in San Diego, CA. The workshop explored issues related to assurance for autonomous systems and also the idea of trust in these systems. Specifically, we focused on discussing current practices for assurance of autonomy, identifying barriers specific to autonomy as related to assurance as well as operational scenarios demonstrating the need to address the barriers. Furthermore, attention was given to identifying verification techniques that may be applicable to autonomy, as well as discussing new research directions needed to address barriers, thereby involving potential shifts in current practices.

  10. Intelligent systems for the autonomous exploration of Titan and Enceladus

    NASA Astrophysics Data System (ADS)

    Furfaro, Roberto; Lunine, Jonathan I.; Kargel, Jeffrey S.; Fink, Wolfgang

    2008-04-01

    Future planetary exploration of the outer satellites of the Solar System will require higher levels of onboard automation, including autonomous determination of sites where the probability of significant scientific findings is highest. Generally, the level of needed automation is heavily influenced by the distance between Earth and the robotic explorer(s) (e.g. spacecraft(s), rover(s), and balloon(s)). Therefore, planning missions to the outer satellites mandates the analysis, design and integration within the mission architecture of semi- and/or completely autonomous intelligence systems. Such systems should (1) include software packages that enable fully automated and comprehensive identification, characterization, and quantification of feature information within an operational region with subsequent target prioritization and selection for close-up reexamination; and (2) integrate existing information with acquired, "in transit" spatial and temporal sensor data to automatically perform intelligent planetary reconnaissance, which includes identification of sites with the highest potential to yield significant geological and astrobiological information. In this paper we review and compare some of the available Artificial Intelligence (AI) schemes and their adaptation to the problem of designing expert systems for onboard, autonomous science to be performed in the course of exploring the outer satellites. More specifically, the proposed fuzzy-logic framework is analyzed in some detail to show the effectiveness of such a scheme when applied to the problem of designing expert systems capable of identifying and further exploring regions on Titan and/or Enceladus that have the highest potential to yield evidence for past or present life. Based on available information (e.g., Cassini data), the current knowledge and understanding of the Titan and Enceladus environments is evaluated to define a path for the design of a fuzzy-based system capable of reasoning over

  11. A Test-Bed Configuration: Toward an Autonomous System

    NASA Astrophysics Data System (ADS)

    Ocaña, F.; Castillo, M.; Uranga, E.; Ponz, J. D.; TBT Consortium

    2015-09-01

    In the context of the Space Situational Awareness (SSA) program of ESA, it is foreseen to deploy several large robotic telescopes in remote locations to provide surveillance and tracking services for man-made as well as natural near-Earth objects (NEOs). The present project, termed Telescope Test Bed (TBT), is being developed under ESA's General Studies and Technology Programme and shall implement a test-bed for the validation of an autonomous optical observing system in a realistic scenario, consisting of two telescopes located in Spain and Australia, to collect representative test data for precursor NEO services. In order to fulfill all the security requirements of the TBT project, the use of an autonomous emergency system (AES) is foreseen to monitor the control system. The AES will monitor remotely the health of the observing system and the internal and external environment. It will incorporate both autonomous and interactive actuators to force the protection of the system (i.e., emergency dome close-out).

  12. Ceramic substrate's detection system based on machine vision

    NASA Astrophysics Data System (ADS)

    Yang, Li-na; Zhou, Zhen-feng; Zhu, Li-jun

    2009-05-01

    Machine vision detection technology is an integrated modern inspection technology that combines optoelectronics, computer imaging, information processing, and computer vision. It regards the image as the means and carrier of transmitting information, extracting useful information and acquiring the necessary parameters by processing images. This work is part of a key project of the Zhejiang Province Office of Education on high-accuracy, large-size machine vision automatic detection and separation technology. The paper describes the primary factors influencing system precision and develops an automatic detection system for ceramic substrates. The system captures images of the ceramic substrate with a CMOS (complementary metal-oxide semiconductor) sensor. Image quality is improved by the optical imaging and lighting system, and the precision of edge detection is improved by image preprocessing and sub-pixel methods. In the image enhancement stage, image filtering and geometric distortion correction are used. Edges are obtained through a sub-pixel edge detection method: the probable position of the image edge is determined by an improved Sobel operator, and a third-order spline interpolation function is then used to interpolate the gray-level edge image. A mathematical model of the dimensional and geometric error of the visual inspection system is developed, and the length and width of the ceramic substrate are measured. The experimental results show that the presented method increases the precision of the vision detection system and that the measurement results are satisfactory.
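
    The sub-pixel refinement step can be illustrated as below: locate the coarse edge from the intensity gradient, then interpolate around the gradient peak. A parabolic fit is used here as a simple stand-in for the third-order spline interpolation described in the abstract, and np.gradient stands in for a full Sobel operator.

    ```python
    import numpy as np

    def subpixel_edge_along_row(gray_row):
        grad = np.abs(np.gradient(gray_row.astype(float)))
        i = int(np.argmax(grad))                           # coarse (pixel-level) edge position
        if i == 0 or i == len(grad) - 1:
            return float(i)
        g_left, g_mid, g_right = grad[i - 1], grad[i], grad[i + 1]
        denom = g_left - 2 * g_mid + g_right
        offset = 0.5 * (g_left - g_right) / denom if denom != 0 else 0.0
        return i + offset                                  # sub-pixel edge location

    row = np.array([10, 10, 11, 12, 60, 200, 210, 211], dtype=float)
    print(subpixel_edge_along_row(row))                    # roughly 4.3, between pixels 4 and 5
    ```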

  13. Multi-agent autonomous system and method

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor); Dohm, James (Inventor); Tarbell, Mark A. (Inventor)

    2010-01-01

    A method of controlling a plurality of crafts in an operational area includes providing a command system, a first craft in the operational area coupled to the command system, and a second craft in the operational area coupled to the command system. The method further includes determining a first desired destination and a first trajectory to the first desired destination, sending a first command from the command system to the first craft to move a first distance along the first trajectory, and moving the first craft according to the first command. A second desired destination and a second trajectory to the second desired destination are determined and a second command is sent from the command system to the second craft to move a second distance along the second trajectory.

  14. Mobile Autonomous Humanoid Assistant

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.

    2004-01-01

    A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway(TradeMark) Robotic Mobility Platform yielding a dexterous, maneuverable humanoid perfect for aiding human co-workers in a range of environments. This system uses stereo vision to locate human team mates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.

  15. A laser-based vision system for weld quality inspection.

    PubMed

    Huang, Wei; Kovacevic, Radovan

    2011-01-01

    Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems are studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through the visual analysis of the acquired 3D profiles of the weld, the presences as well as the positions and sizes of the weld defects can be accurately identified and therefore, the non-destructive weld quality inspection can be achieved.
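
    The triangulation geometry behind such a sensor can be sketched as below: with the camera looking roughly along the surface normal and the laser plane inclined at a known angle, the lateral image shift of the laser line maps to a height change. All parameter values are illustrative assumptions, not the sensor described in the paper.

    ```python
    import numpy as np

    def height_from_laser_shift(pixel_shift, pixel_size_mm, focal_length_mm,
                                standoff_mm, laser_angle_deg):
        """Convert the laser-line image shift (pixels) into a surface height change (mm)."""
        # Lateral displacement of the laser spot on the object, via the pinhole model.
        lateral_mm = pixel_shift * pixel_size_mm * standoff_mm / focal_length_mm
        # A height change h moves the spot laterally by h * tan(laser angle).
        return lateral_mm / np.tan(np.radians(laser_angle_deg))

    profile_px = np.array([0.0, 0.4, 1.2, 0.9, -2.5, 0.3])       # line shift per image column
    heights = height_from_laser_shift(profile_px, 0.0055, 16.0, 150.0, 30.0)
    print(heights)                                               # dips may indicate weld defects
    ```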

  16. Mathematical biomarkers for the autonomic regulation of cardiovascular system.

    PubMed

    Campos, Luciana A; Pereira, Valter L; Muralikrishna, Amita; Albarwani, Sulayma; Brás, Susana; Gouveia, Sónia

    2013-10-07

    Heart rate and blood pressure are the most important vital signs in diagnosing disease. Both heart rate and blood pressure are characterized by a high degree of short term variability from moment to moment, medium term over the normal day and night as well as in the very long term over months to years. The study of new mathematical algorithms to evaluate the variability of these cardiovascular parameters has a high potential in the development of new methods for early detection of cardiovascular disease, to establish differential diagnosis with possible therapeutic consequences. The autonomic nervous system is a major player in the general adaptive reaction to stress and disease. The quantitative prediction of the autonomic interactions in multiple control loops pathways of cardiovascular system is directly applicable to clinical situations. Exploration of new multimodal analytical techniques for the variability of cardiovascular system may detect new approaches for deterministic parameter identification. A multimodal analysis of cardiovascular signals can be studied by evaluating their amplitudes, phases, time domain patterns, and sensitivity to imposed stimuli, i.e., drugs blocking the autonomic system. The causal effects, gains, and dynamic relationships may be studied through dynamical fuzzy logic models, such as the discrete-time model and discrete-event model. We expect an increase in accuracy of modeling and a better estimation of the heart rate and blood pressure time series, which could be of benefit for intelligent patient monitoring. We foresee that identifying quantitative mathematical biomarkers for autonomic nervous system will allow individual therapy adjustments to aim at the most favorable sympathetic-parasympathetic balance.
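
    Two of the standard short-term variability indices that such analyses start from (SDNN and RMSSD over an RR-interval series) are computed in the sketch below; the RR values are illustrative.

    ```python
    import numpy as np

    def hrv_indices(rr_ms):
        rr = np.asarray(rr_ms, dtype=float)
        sdnn = rr.std(ddof=1)                       # overall RR-interval variability
        rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # beat-to-beat (largely vagally mediated) variability
        return {"mean_hr_bpm": 60000.0 / rr.mean(), "SDNN_ms": sdnn, "RMSSD_ms": rmssd}

    print(hrv_indices([812, 790, 835, 801, 823, 845, 799]))
    ```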

  17. Mathematical biomarkers for the autonomic regulation of cardiovascular system

    PubMed Central

    Campos, Luciana A.; Pereira, Valter L.; Muralikrishna, Amita; Albarwani, Sulayma; Brás, Susana; Gouveia, Sónia

    2013-01-01

    Heart rate and blood pressure are the most important vital signs in diagnosing disease. Both heart rate and blood pressure are characterized by a high degree of short term variability from moment to moment, medium term over the normal day and night as well as in the very long term over months to years. The study of new mathematical algorithms to evaluate the variability of these cardiovascular parameters has a high potential in the development of new methods for early detection of cardiovascular disease, to establish differential diagnosis with possible therapeutic consequences. The autonomic nervous system is a major player in the general adaptive reaction to stress and disease. The quantitative prediction of the autonomic interactions in multiple control loops pathways of cardiovascular system is directly applicable to clinical situations. Exploration of new multimodal analytical techniques for the variability of cardiovascular system may detect new approaches for deterministic parameter identification. A multimodal analysis of cardiovascular signals can be studied by evaluating their amplitudes, phases, time domain patterns, and sensitivity to imposed stimuli, i.e., drugs blocking the autonomic system. The causal effects, gains, and dynamic relationships may be studied through dynamical fuzzy logic models, such as the discrete-time model and discrete-event model. We expect an increase in accuracy of modeling and a better estimation of the heart rate and blood pressure time series, which could be of benefit for intelligent patient monitoring. We foresee that identifying quantitative mathematical biomarkers for autonomic nervous system will allow individual therapy adjustments to aim at the most favorable sympathetic-parasympathetic balance. PMID:24109456

  18. Autonomous Aerial Payload Delivery System Blizzard

    DTIC Science & Technology

    2011-05-01

    [Abstract and nomenclature fragments:] ... known systems. Another technique to achieve high touchdown accuracy is networking, enabling communication between multiple descending ADSs, UAV... [Nomenclature excerpt: Global System for Mobile (Communications); MCCC = mission C2 center; PATCAD = Precision Airdrop Technology Conference and Demonstration; SA = situational...] ... a high-performance gimbal (shown in Figs. 1 and 2 of the report) featuring a full 360° unobstructed field of view, direct drive

  19. The impact of changing night vision goggle spectral response on night vision imaging system lighting compatibility

    NASA Astrophysics Data System (ADS)

    Task, Harry L.; Marasco, Peter L.

    2004-09-01

    The defining document outlining night-vision imaging system (NVIS) compatible lighting, MIL-L-85762A, was written in the mid 1980's, based on what was then the state of the art in night vision and image intensification. Since that time there have been changes in the photocathode sensitivity and the minus-blue coatings applied to the objective lenses. Specifically, many aviation night-vision goggles (NVGs) in the Air Force are equipped with so-called "leaky green" or Class C type objective lens coatings that provide a small amount of transmission around 545 nanometers so that the displays that use a P-43 phosphor can be seen through the NVGs. However, current NVIS compatibility requirements documents have not been updated to include these changes. Documents that followed and replaced MIL-L-85762A (ASC/ENFC-96-01 and MIL-STD-3009) addressed aspects of then current NVIS technology, but did little to change the actual content or NVIS radiance requirements set forth in the original MIL-L-85762A. This paper examines the impact of spectral response changes, introduced by changes in image tube parameters and objective lens minus-blue filters, on NVIS compatibility and NVIS radiance calculations. Possible impact on NVIS lighting requirements is also discussed. In addition, arguments are presented for revisiting NVIS radiometric unit conventions.
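
    The quantity at issue, NVIS radiance, is essentially the display's spectral radiance weighted by the goggle's relative spectral response and integrated over wavelength, so a change in the minus-blue cutoff or photocathode response changes the result. The sketch below illustrates that calculation with made-up placeholder curves, not MIL-STD-3009 data.

    ```python
    import numpy as np

    wavelength_nm = np.arange(450, 931, 1)

    def nvis_radiance(spectral_radiance, goggle_response):
        """Trapezoidal integral of radiance(lambda) * response(lambda) over wavelength."""
        return np.trapz(spectral_radiance * goggle_response, wavelength_nm)

    # Placeholder "leaky green" response: sensitive above ~625 nm plus a small notch near 545 nm.
    response = (np.where(wavelength_nm >= 625, 1.0, 0.0)
                + 0.02 * np.exp(-((wavelength_nm - 545) / 10.0) ** 2))
    display_spectrum = np.exp(-((wavelength_nm - 545) / 15.0) ** 2)   # P-43-like green emission
    print(nvis_radiance(display_spectrum, response))
    ```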

  20. Cloud Absorption Radiometer Autonomous Navigation System - CANS

    NASA Technical Reports Server (NTRS)

    Kahle, Duncan; Gatebe, Charles; McCune, Bill; Hellwig, Dustan

    2013-01-01

    CAR (cloud absorption radiometer) acquires spatial reference data from host aircraft navigation systems. This poses various problems during CAR data reduction, including navigation data format, accuracy of position data, accuracy of airframe inertial data, and navigation data rate. Incorporating its own navigation system, which included GPS (Global Positioning System), roll axis inertia and rates, and three axis acceleration, CANS expedites data reduction and increases the accuracy of the CAR end data product. CANS provides a self-contained navigation system for the CAR, using inertial reference and GPS positional information. The intent of the software application was to correct the sensor with respect to aircraft roll in real time based upon inputs from a precision navigation sensor. In addition, the navigation information (including GPS position), attitude data, and sensor position details are all streamed to a remote system for recording and later analysis. CANS comprises a commercially available inertial navigation system with integral GPS capability (Attitude Heading Reference System AHRS) integrated into the CAR support structure and data system. The unit is attached to the bottom of the tripod support structure. The related GPS antenna is located on the P-3 radome immediately above the CAR. The AHRS unit provides a RS-232 data stream containing global position and inertial attitude and velocity data to the CAR, which is recorded concurrently with the CAR data. This independence from aircraft navigation input provides for position and inertial state data that accounts for very small changes in aircraft attitude and position, sensed at the CAR location as opposed to aircraft state sensors typically installed close to the aircraft center of gravity. More accurate positional data enables quicker CAR data reduction with better resolution. The CANS software operates in two modes: initialization/calibration and operational. In the initialization/calibration mode

  1. Representing Autonomous Systems Self-Confidence through Competency Boundaries

    DTIC Science & Technology

    2015-01-01

    Thomas Hughes, Infoscitex Corporation, Dayton, OH, USA. A method for determining the self-confidence of autonomous systems is proposed to assist...teaming. The work presented was supported by the Air Force Research Lab (AFRL) and Infoscitex Corporation.

  2. Robotic reactions: delay-induced patterns in autonomous vehicle systems.

    PubMed

    Orosz, Gábor; Moehlis, Jeff; Bullo, Francesco

    2010-02-01

    Fundamental design principles are presented for vehicle systems governed by autonomous cruise control devices. By analyzing the corresponding delay differential equations, it is shown that for any car-following model short-wavelength oscillations can appear due to robotic reaction times, and that there are tradeoffs between the time delay and the control gains. The analytical findings are demonstrated on an optimal velocity model using numerical continuation and numerical simulation.
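
    The mechanism can be illustrated numerically: vehicles on a ring road follow an optimal-velocity law but react to the headway measured one reaction time ago, which is what can destabilize uniform flow. The model form below is the standard optimal-velocity model; all parameter values are illustrative, not those analyzed in the paper.

    ```python
    import numpy as np

    N, L = 20, 200.0                  # vehicles on a ring road of length L (meters)
    dt, tau, alpha = 0.01, 0.6, 1.0   # time step, reaction delay (s), sensitivity gain
    steps, delay_steps = 20000, int(tau / dt)

    def v_opt(headway):
        """Optimal velocity: desired speed as a function of headway."""
        return 15.0 * (np.tanh(0.1 * (headway - 25.0)) + np.tanh(2.5)) / 2.0

    x = np.linspace(0.0, L, N, endpoint=False) + np.random.uniform(-0.5, 0.5, N)
    v = np.full(N, v_opt(L / N))
    x_hist = [x.copy()] * (delay_steps + 1)        # history buffer for the delayed headway

    for _ in range(steps):
        headway = (np.roll(x_hist[0], -1) - x_hist[0]) % L
        v += dt * alpha * (v_opt(headway) - v)     # react to the *delayed* headway
        x = (x + dt * v) % L
        x_hist.append(x.copy()); x_hist.pop(0)

    print("speed spread after transient:", v.std())  # grows if the delay destabilizes uniform flow
    ```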

  3. Software Testbed for Developing and Evaluating Integrated Autonomous Systems

    DTIC Science & Technology

    2015-03-01

    Software Testbed for Developing and Evaluating Integrated Autonomous Systems. James Ong, Emilio Remolina, Axel Prompt; Stottler Henke Associates, Inc., 1670 S. Amphlett Blvd., Suite 310, San Mateo, CA 94402.

  4. Adapting the Law of Armed Conflict to Autonomous Weapon Systems

    DTIC Science & Technology

    2014-01-01

    ... decisions more controlled, especially compared to human-soldier failings that are so often exacerbated by panic, vengeance or other emotions, as well... significant claims in Losing Humanity: that a fundamental objection to autonomous weapon systems is that they take these emotions out of battlefield... of human soldiers' battlefield emotions, starting with fear, anger and vengeance, exacerbated under conditions of hunger, exposure, uncertainty and

  5. Control Problems in Autonomous Life Support Systems

    NASA Technical Reports Server (NTRS)

    Colombano, S.

    1982-01-01

    The problem of constructing life support systems which require little or no input of matter (food and gases) for long, or even indefinite, periods of time is addressed. Natural control in ecosystems, a control theory for ecosystems, and an approach to the design of an ALSS are addressed.

  6. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three dimensional laser radar imagery for use with a robotic type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide to it needed information so it can fetch and grasp targets in a space-type scenario.

  7. Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems

    NASA Technical Reports Server (NTRS)

    Bujorianu, Marius C.; Bujorianu, Manuela L.

    2009-01-01

    In this paper, we sketch a framework for interdisciplinary modeling of space systems by proposing a holistic view. We consider different system dimensions and their interaction. Specifically, we study the interactions between computation, physics, communication, uncertainty and autonomy. The most comprehensive computational paradigm that supports a holistic perspective on autonomous space systems is given by cyber-physical systems. For these, the state of the art consists of collaborative multi-engineering efforts that call for an adequate formal foundation. To achieve this, we propose leveraging the traditional content of formal modeling through a co-engineering process.

  8. Experimental study on a smart wheelchair system using a combination of stereoscopic and spherical vision.

    PubMed

    Nguyen, Jordan S; Su, Steven W; Nguyen, Hung T

    2013-01-01

    This paper is concerned with an experimental study of the performance of a smart wheelchair system named TIM (Thought-controlled Intelligent Machine), which uses a unique camera configuration for vision. Included in this configuration are stereoscopic cameras for 3-Dimensional (3D) depth perception and mapping ahead of the wheelchair, and a spherical camera system for 360 degrees of monocular vision. The camera combination provides obstacle detection and mapping in unknown environments during real-time autonomous navigation of the wheelchair. With the integration of hands-free wheelchair control technology, designed as control methods for people with severe physical disability, the smart wheelchair system can assist the user with automated guidance during navigation. An experimental study on this system was conducted with a total of 10 participants, consisting of 8 able-bodied subjects and 2 tetraplegic (C-6 to C-7) subjects. The hands-free control technologies utilized for this testing were a head-movement controller (HMC) and a brain-computer interface (BCI). The results showed that the assistance of TIM's automated guidance system had a statistically significant reduction effect (p-value = 0.000533) on the completion times of the obstacle course presented in the experimental study, as compared to the test runs conducted without the assistance of TIM.
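
    The stereoscopic half of the camera configuration can be sketched with OpenCV's semi-global block matcher, turning a rectified stereo pair into a metric depth map for obstacle detection. The focal length and baseline below are placeholder calibration values, and this is not TIM's implementation.

    ```python
    import cv2
    import numpy as np

    def depth_map(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
        disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
        disparity[disparity <= 0] = np.nan                  # invalid or unmatched pixels
        return focal_px * baseline_m / disparity            # depth in meters per pixel

    # e.g., flag cells closer than 1 m ahead of the wheelchair as occupied:
    # occupied = depth_map(left, right) < 1.0
    ```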

  9. Autonomous Systems and Robotics: 2000-2004

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This custom bibliography from the NASA Scientific and Technical Information Program lists a sampling of records found in the NASA Aeronautics and Space Database. The scope of this topic includes technologies to monitor, maintain, and where possible, repair complex space systems. This area of focus is one of the enabling technologies as defined by NASA's Report of the President's Commission on Implementation of United States Space Exploration Policy, published in June 2004.

  10. Flight Control System Development for the BURRO Autonomous UAV

    NASA Technical Reports Server (NTRS)

    Colbourne, Jason D.; Frost, Chad R.; Tischler, Mark B.; Ciolani, Luigi; Sahai, Ranjana; Tomoshofski, Chris; LaMontagne, Troy; Rutkowski, Michael (Technical Monitor)

    2000-01-01

    Developing autonomous flying vehicles has been a growing field in aeronautical research within the last decade and will continue into the next century. With concerns about safety, size, and cost of manned aircraft, several autonomous vehicle projects are currently being developed; uninhabited rotorcraft offer solutions to requirements for hover, vertical take-off and landing, as well as slung load transportation capabilities. The newness of the technology requires flight control engineers to question what design approaches, control law architectures, and performance criteria apply to control law development and handling quality evaluation. To help answer these questions, this paper documents the control law design process for the Kaman Aerospace BURRO project. This paper will describe the approach taken to design control laws and develop math models which will be used to convert the manned K-MAX into the BURRO autonomous rotorcraft. Because the K-MAX can lift its own weight (6000 lb), the load significantly affects the dynamics of the system; the paper addresses the additional design requirements for slung load autonomous flight. The approach taken in this design was to: 1) generate accurate math models of the K-MAX helicopter with and without slung loads, 2) select design specifications that would deliver good performance as well as satisfy mission criteria, and 3) develop and tune the control system architecture to meet the design specs and mission criteria. An accurate math model was desired for control system development. The Comprehensive Identification from Frequency Responses (CIFER(R)) software package was used to identify a linear math model for unloaded and loaded flight at hover, 50 kts, and 100 kts. The results of an eight degree-of-freedom CIFER(R)-identified linear model for the unloaded hover flight condition are presented herein, and the identification of the two-body slung-load configuration is in progress.

  11. Autonomic nervous system correlates in movement observation and motor imagery

    PubMed Central

    Collet, C.; Di Rienzo, F.; El Hoyek, N.; Guillot, A.

    2013-01-01

    The purpose of the current article is to provide a comprehensive overview of the literature offering a better understanding of the autonomic nervous system (ANS) correlates in motor imagery (MI) and movement observation. These are two high brain functions involving sensori-motor coupling, mediated by memory systems. How observing or mentally rehearsing a movement affects ANS activity has not been extensively investigated. The links between cognitive functions and ANS responses are not so obvious. We will first describe the organization of the ANS, whose main purposes are controlling vital functions by maintaining the homeostasis of the organism and providing adaptive responses when changes occur either in the external or internal milieu. We will then review how scientific knowledge evolved, thus integrating recent findings related to ANS functioning, and show how these are linked to mental functions. In turn, we will describe how movement observation or MI may elicit physiological responses at the peripheral level of the autonomic effectors, thus eliciting autonomic correlates to cognitive activity. Key features of this paper are to draw a step-by-step progression from the understanding of ANS physiology to its relationships with high mental processes such as movement observation or MI. We will further provide evidence that mental processes are co-programmed both at the somatic and autonomic levels of the central nervous system (CNS). We will thus detail how peripheral physiological responses may be analyzed to provide objective evidence that MI is actually performed. The main perspective is thus to consider that, during movement observation and MI, ANS activity is an objective witness of mental processes. PMID:23908623

  12. On non-autonomous dynamical systems

    NASA Astrophysics Data System (ADS)

    Anzaldo-Meneses, A.

    2015-04-01

    In usual realistic classical dynamical systems, the Hamiltonian depends explicitly on time. In this work, a class of classical systems with time-dependent nonlinear Hamiltonians is analyzed. This type of problem allows one to find invariants by a family of Veronese maps. The motivation to develop this method results from the observation that the Poisson-Lie algebra of monomials in the coordinates and momenta is clearly defined in terms of its brackets and leads naturally to an infinite linear set of differential equations, under certain circumstances. To perform explicit analytic and numerical calculations, two examples are presented to estimate the trajectories, the first given by a nonlinear problem and the second by a quadratic Hamiltonian with three time-dependent parameters. In the nonlinear problem, the Veronese approach using jets is shown to be equivalent to a direct procedure using elliptic function identities, and linear invariants are constructed. For the second example, linear and quadratic invariants as well as stability conditions are given. Explicit solutions are also obtained for stepwise constant forces. For the quadratic Hamiltonian, an appropriate set of coordinates relates the geometric setting to that of the three-dimensional manifold of central conic sections. It is shown further that the quantum mechanical problem of scattering in a superlattice leads to mathematically equivalent equations for the wave function, if the classical time is replaced by the space coordinate along a superlattice. The mathematical method used to compute the trajectories for stepwise constant parameters can be applied to both problems. It is the standard method in quantum scattering calculations, as known for locally periodic systems including a space-dependent effective mass.

  13. On non-autonomous dynamical systems

    SciTech Connect

    Anzaldo-Meneses, A.

    2015-04-15

    In usual realistic classical dynamical systems, the Hamiltonian depends explicitly on time. In this work, a class of classical systems with time dependent nonlinear Hamiltonians is analyzed. This type of problem allows one to find invariants by a family of Veronese maps. The motivation to develop this method results from the observation that the Poisson-Lie algebra of monomials in the coordinates and momenta is clearly defined in terms of its brackets and leads naturally to an infinite linear set of differential equations, under certain circumstances. To perform explicit analytic and numerical calculations, two examples are presented to estimate the trajectories, the first given by a nonlinear problem and the second by a quadratic Hamiltonian with three time dependent parameters. In the nonlinear problem, the Veronese approach using jets is shown to be equivalent to a direct procedure using elliptic function identities, and linear invariants are constructed. For the second example, linear and quadratic invariants as well as stability conditions are given. Explicit solutions are also obtained for stepwise constant forces. For the quadratic Hamiltonian, an appropriate set of coordinates relates the geometric setting to that of the three dimensional manifold of central conic sections. It is shown further that the quantum mechanical problem of scattering in a superlattice leads to mathematically equivalent equations for the wave function, if the classical time is replaced by the space coordinate along the superlattice. The mathematical method used to compute the trajectories for stepwise constant parameters can be applied to both problems. It is the standard method in quantum scattering calculations, as is known for locally periodic systems including a space-dependent effective mass.

  14. Global vision systems regulatory and standard setting activities

    NASA Astrophysics Data System (ADS)

    Tiana, Carlo; Münsterer, Thomas

    2016-05-01

    A number of committees globally, and the Regulatory Agencies they support, are actively delivering and updating performance standards for vision systems: Enhanced, Synthetic and Combined, as they apply to both Fixed Wing and, more recently, Rotorcraft operations in low visibility. We provide an overview of each committee's present and past work, as well as an update on recent activities and future goals.

  15. Digital vision system for three-dimensional model acquisition

    NASA Astrophysics Data System (ADS)

    Yuan, Ta; Lin, Huei-Yung; Qin, Xiangdong; Subbarao, Murali

    2000-10-01

    A digital vision system and the computational algorithms used by the system for three-dimensional (3D) model acquisition are described. The system is named Stonybrook VIsion System (SVIS). The system can acquire the 3D model (which includes the 3D shape and the corresponding image texture) of a simple object within a 300 mm X 300 mm X 300 mm volume placed about 600 mm from the system. SVIS integrates Image Focus Analysis (IFA) and Stereo Image Analysis (SIA) techniques for 3D shape and image texture recovery. First, 4 to 8 partial 3D models of the object are obtained from 4 to 8 views of the object. The partial models are then integrated to obtain a complete model of the object. The complete model is displayed using 3D graphics rendering software (Apple's QuickDraw). Experimental results on several objects are presented.

  16. Autonomous Control Capabilities for Space Reactor Power Systems

    NASA Astrophysics Data System (ADS)

    Wood, Richard T.; Neal, John S.; Brittain, C. Ray; Mullens, James A.

    2004-02-01

    The National Aeronautics and Space Administration's (NASA's) Project Prometheus, the Nuclear Systems Program, is investigating a possible Jupiter Icy Moons Orbiter (JIMO) mission, which would conduct in-depth studies of three of the moons of Jupiter by using a space reactor power system (SRPS) to provide energy for propulsion and spacecraft power for more than a decade. Terrestrial nuclear power plants rely upon varying degrees of direct human control and interaction for operations and maintenance over a forty to sixty year lifetime. In contrast, an SRPS is intended to provide continuous, remote, unattended operation for up to fifteen years with no maintenance. Uncertainties, rare events, degradation, and communications delays with Earth are challenges that SRPS control must accommodate. Autonomous control is needed to address these challenges and optimize the reactor control design. In this paper, we describe an autonomous control concept for generic SRPS designs. The formulation of an autonomous control concept, which includes identification of high-level functional requirements and generation of a research and development plan for enabling technologies, is among the technical activities that are being conducted under the U.S. Department of Energy's Space Reactor Technology Program in support of the NASA's Project Prometheus. The findings from this program are intended to contribute to the successful realization of the JIMO mission.

  17. Autonomous Control Capabilities for Space Reactor Power Systems

    SciTech Connect

    Wood, Richard T.; Neal, John S.; Brittain, C. Ray; Mullens, James A.

    2004-02-04

    The National Aeronautics and Space Administration's (NASA's) Project Prometheus, the Nuclear Systems Program, is investigating a possible Jupiter Icy Moons Orbiter (JIMO) mission, which would conduct in-depth studies of three of the moons of Jupiter by using a space reactor power system (SRPS) to provide energy for propulsion and spacecraft power for more than a decade. Terrestrial nuclear power plants rely upon varying degrees of direct human control and interaction for operations and maintenance over a forty to sixty year lifetime. In contrast, an SRPS is intended to provide continuous, remote, unattended operation for up to fifteen years with no maintenance. Uncertainties, rare events, degradation, and communications delays with Earth are challenges that SRPS control must accommodate. Autonomous control is needed to address these challenges and optimize the reactor control design. In this paper, we describe an autonomous control concept for generic SRPS designs. The formulation of an autonomous control concept, which includes identification of high-level functional requirements and generation of a research and development plan for enabling technologies, is among the technical activities that are being conducted under the U.S. Department of Energy's Space Reactor Technology Program in support of the NASA's Project Prometheus. The findings from this program are intended to contribute to the successful realization of the JIMO mission.

  18. Enabling autonomous control for space reactor power systems

    SciTech Connect

    Wood, R. T.

    2006-07-01

    The application of nuclear reactors for space power and/or propulsion presents some unique challenges regarding the operations and control of the power system. Terrestrial nuclear reactors employ varying degrees of human control and decision-making for operations and benefit from periodic human interaction for maintenance. In contrast, the control system of a space reactor power system (SRPS) employed for deep space missions must be able to accommodate unattended operations due to communications delays and periods of planetary occlusion while adapting to evolving or degraded conditions with no opportunity for repair or refurbishment. Thus, a SRPS control system must provide for operational autonomy. Oak Ridge National Laboratory (ORNL) has conducted an investigation of the state of the technology for autonomous control to determine the experience base in the nuclear power application domain, both for space and terrestrial use. It was found that control systems with varying levels of autonomy have been employed in robotic, transportation, spacecraft, and manufacturing applications. However, autonomous control has not been implemented for an operating terrestrial nuclear power plant nor has there been any experience beyond automating simple control loops for space reactors. Current automated control technologies for nuclear power plants are reasonably mature, and basic control for a SRPS is clearly feasible under optimum circumstances. However, autonomous control is primarily intended to account for the non-optimum circumstances when degradation, failure, and other off-normal events challenge the performance of the reactor and near-term human intervention is not possible. Thus, the development and demonstration of autonomous control capabilities for the specific domain of space nuclear power operations is needed. This paper will discuss the findings of the ORNL study and provide a description of the concept of autonomy, its key characteristics, and a prospective

  19. Building a 3D scanner system based on monocular vision.

    PubMed

    Zhang, Zhiyi; Yuan, Lin

    2012-04-10

    This paper proposes a three-dimensional scanner system, which is built by using an ingenious geometric construction method based on monocular vision. The system is simple, low cost, and easy to use, and the measurement results are very precise. To build it, one web camera, one handheld linear laser, and one background calibration board are required. The experimental results show that the system is robust and effective, and the scanning precision can be satisfied for normal users.

  20. Application of edge detection algorithm for vision guided robotics assembly system

    NASA Astrophysics Data System (ADS)

    Balabantaray, Bunil Kumar; Jha, Panchanand; Biswal, Bibhuti Bhusan

    2013-12-01

    Machine vision systems have a major role in making robotic assembly systems autonomous. Part detection and identification of the correct part are important tasks which need to be carefully done by a vision system to initiate the process. This process consists of many sub-processes wherein the image capturing, digitizing, and enhancing, etc. account for reconstructing the part for subsequent operations. Edge detection of the grabbed image, therefore, plays an important role in the entire image processing activity. Thus one needs to choose the correct tool for the process with respect to the given environment. In this paper, a comparative study of edge detection algorithms for grasping objects in a robotic assembly system is presented. The proposed work is performed in Matlab R2010a Simulink. Four algorithms are compared, i.e., the Canny, Roberts, Prewitt, and Sobel edge detectors. An attempt has been made to find the best algorithm for the problem. It is found that the Canny edge detection algorithm gives better results and minimum error for the intended task.
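
    The comparison above can be reproduced with freely available tools. The sketch below, written in Python with OpenCV rather than the Matlab/Simulink setup used by the authors, applies the four edge detectors named in the abstract to a grabbed part image; the file name and threshold values are illustrative assumptions, not values taken from the paper.

    # Hedged sketch: comparing Canny, Sobel, Prewitt and Roberts edge maps
    # on a grabbed part image. File name and thresholds are assumptions.
    import cv2
    import numpy as np

    img = cv2.imread("part_image.png", cv2.IMREAD_GRAYSCALE)  # assumed input image

    def normalize(mag):
        """Scale a gradient-magnitude image to 8-bit for display/thresholding."""
        return cv2.convertScaleAbs(mag, alpha=255.0 / max(float(mag.max()), 1e-6))

    # Canny (includes built-in hysteresis thresholding)
    canny = cv2.Canny(img, 50, 150)

    # Sobel: 3x3 derivative kernels in x and y
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    sobel = normalize(np.hypot(gx, gy))

    # Prewitt and Roberts are not built into OpenCV; apply their kernels manually.
    prewitt_x = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], np.float32)
    prewitt_y = prewitt_x.T
    px = cv2.filter2D(img, cv2.CV_32F, prewitt_x)
    py = cv2.filter2D(img, cv2.CV_32F, prewitt_y)
    prewitt = normalize(np.hypot(px, py))

    roberts_x = np.array([[1, 0], [0, -1]], np.float32)
    roberts_y = np.array([[0, 1], [-1, 0]], np.float32)
    rx = cv2.filter2D(img, cv2.CV_32F, roberts_x)
    ry = cv2.filter2D(img, cv2.CV_32F, roberts_y)
    roberts = normalize(np.hypot(rx, ry))

    for name, edges in [("canny", canny), ("sobel", sobel),
                        ("prewitt", prewitt), ("roberts", roberts)]:
        cv2.imwrite(f"edges_{name}.png", edges)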

  1. System safety analysis of an autonomous mobile robot

    SciTech Connect

    Bartos, R.J.

    1994-08-01

    Analysis of the safety of operating and maintaining the Stored Waste Autonomous Mobile Inspector (SWAMI) II in a hazardous environment at the Fernald Environmental Management Project (FEMP) was completed. The SWAMI II is a version of a commercial robot, the HelpMate™ robot produced by the Transitions Research Corporation, which is being updated to incorporate the systems required for inspecting mixed toxic chemical and radioactive waste drums at the FEMP. It also has modified obstacle detection and collision avoidance subsystems. The robot will autonomously travel down the aisles in storage warehouses to record images of containers and collect other data which are transmitted to an inspector at a remote computer terminal. A previous study showed the SWAMI II has economic feasibility. The SWAMI II will more accurately locate radioactive contamination than human inspectors. This thesis includes a System Safety Hazard Analysis and a quantitative Fault Tree Analysis (FTA). The objectives of the analyses are to prevent potentially serious events and to derive a comprehensive set of safety requirements from which the safety of the SWAMI II and other autonomous mobile robots can be evaluated. The Computer-Aided Fault Tree Analysis (CAFTA©) software is utilized for the FTA. The FTA shows that more than 99% of the safety risk occurs during maintenance, and that when the derived safety requirements are implemented the rate of serious events is reduced to below one event per million operating hours. Training and procedures in SWAMI II operation and maintenance provide an added safety margin. This study will promote the safe use of the SWAMI II and other autonomous mobile robots in the emerging technology of mobile robotic inspection.

  2. Autonomous system for pathogen detection and identification

    SciTech Connect

    Belgrader, P.; Benett, W.; Bergman, W.; Langlois, R.; Mariella, R.; Milanovich, F.; Miles, R.; Venkateswaran, K.; Long, G.; Nelson, W.

    1998-09-24

    The purpose of this project is to build a prototype instrument that will, running unattended, detect, identify, and quantify BW agents. In order to accomplish this, we have chosen to start with the world's leading, proven assays for pathogens: surface-molecular recognition assays, such as antibody-based assays, implemented on a high-performance, identification (ID)-capable flow cytometer, and the polymerase chain reaction (PCR) for nucleic-acid based assays. With these assays, we must integrate the capability to: collect samples from aerosols, water, or surfaces; perform sample preparation prior to the assays; incubate the prepared samples, if necessary, for a period of time; transport the prepared, incubated samples to the assays; perform the assays; and interpret and report the results of the assays. Issues such as reliability, sensitivity and accuracy, quantity of consumables, maintenance schedule, etc. must be addressed satisfactorily for the end user. The highest possible sensitivity and specificity of the assay must be combined with no false alarms. Today, we have assays that can, in under 30 minutes, detect and identify simulants for BW agents at concentrations of a few hundred colony-forming units per ml of solution. If the bio-aerosol sampler of this system collects 1,000 L/min and concentrates the respirable particles into 1 ml of solution with 70% processing efficiency over a period of 5 minutes, then this translates to a detection/ID capability of under 0.1 agent-containing particle/liter of air.

  3. Autonomous and Autonomic Swarms

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Truszkowski, Walter F.; Rouff, Christopher A.; Sterritt, Roy

    2005-01-01

    A watershed in systems engineering is represented by the advent of swarm-based systems that accomplish missions through cooperative action by a (large) group of autonomous individuals each having simple capabilities and no global knowledge of the group's objective. Such systems, with individuals capable of surviving in hostile environments, pose unprecedented challenges to system developers. Design and testing and verification at much higher levels will be required, together with the corresponding tools, to bring such systems to fruition. Concepts for possible future NASA space exploration missions include autonomous, autonomic swarms. Engineering swarm-based missions begins with understanding autonomy and autonomicity and how to design, test, and verify systems that have those properties and, simultaneously, the capability to accomplish prescribed mission goals. Formal methods-based technologies, both projected and in development, are described in terms of their potential utility to swarm-based system developers.

  4. Brain, mind, body and society: autonomous system in robotics.

    PubMed

    Shimoda, Motomu

    2013-12-01

    In this paper I examine the issues related to the robot with mind. Creating a robot with mind aims to recreate neural function by engineering. The robot with mind is expected not only to process external information by its built-in program and behave accordingly, but also to gain conscious activity responding to multiple conditions, together with flexible, interactive communication skills for coping with unknown situations. That prospect is based on the development of artificial intelligence in which self-organizing and self-emergent functions have become available in recent years. To date, controllable aspects in robotics have been restricted to data making and programming of cognitive abilities, while consciousness activities and communication skills have been regarded as uncontrollable aspects due to their contingency and uncertainty. However, some researchers of robotics claim that every activity of the mind can be recreated by engineering and is therefore controllable. Based on the development of the cognitive abilities of children and the findings of neuroscience, researchers have attempted to produce the latest artificial intelligence with autonomous learning systems. I conclude that controllability is inconsistent with autonomy in the genuine sense and autonomous robots recreated by engineering cannot be autonomous partners of humans.

  5. Crew and Display Concepts Evaluation for Synthetic / Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III

    2006-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor to civil aircraft accidents and replicate the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, with the principle difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under the newly adopted FAA rules which provide operating credit for EVS. Overall, the experimental data showed that significant improvements in situation awareness (SA) without concomitant increases in workload and display clutter could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying.

  6. FLILO (flying infrared for low-level operations): an enhanced vision system

    NASA Astrophysics Data System (ADS)

    Guell, Jeff J.

    2000-06-01

    FLILO is an Enhanced Vision System (EVS), which enhances situational awareness for safe low level/night time and moderate weather flight operations (including take-off/landing, taxiing, approaches, drop zone identification, Short Austere Air Field operations, etc.) by providing electronic/real time vision to the pilots. It consists of a series of imaging sensors, an Image Processor and a wide field-of-view (FOV) see-through Helmet Mounted Display (HMD) integrated with a Head Tracker. The current solution for safe night time/low level military flight operations is the use of the Turret-FLIR (Forward-Looking InfraRed). This system requires an additional operator/crew member (navigator) who controls the Turret's movement and relays the information to the pilots. The image is presented on a Head-Down-Display. FLILO presents the information directly to the pilots on an HMD, therefore each pilot has an independent view controlled by their head position, while utilizing the same sensors that are static and fixed to the aircraft structure. Since there are no moving parts, the system provides high reliability, while remaining more affordable than the Turret-FLIR solution. FLILO does not require a ball-turret, therefore there is no extra drag or range impact on the aircraft's performance. Furthermore, with future use of real-time multi-band/multi-sensor image fusion, FLILO is the right step towards obtaining safe autonomous landing guidance/0-0 flight operations capability.

  7. System control of an autonomous planetary mobile spacecraft

    NASA Technical Reports Server (NTRS)

    Dias, William C.; Zimmerman, Barbara A.

    1990-01-01

    The goal is to suggest the scheduling and control functions necessary for accomplishing mission objectives of a fairly autonomous interplanetary mobile spacecraft, while maximizing reliability. Goals are to provide an extensible, reliable system conservative in its use of on-board resources, while getting full value from subsystem autonomy, and avoiding the lure of ground micromanagement. A functional layout consisting of four basic elements is proposed: GROUND and SYSTEM EXECUTIVE system functions and RESOURCE CONTROL and ACTIVITY MANAGER subsystem functions. The system executive includes six subfunctions: SYSTEM MANAGER, SYSTEM FAULT PROTECTION, PLANNER, SCHEDULE ADAPTER, EVENT MONITOR and RESOURCE MONITOR. The full configuration is needed for autonomous operation on Moon or Mars, whereas a reduced version without the planning, schedule adaption and event monitoring functions could be appropriate for lower-autonomy use on the Moon. An implementation concept is suggested which is conservative in use of system resources and consists of modules combined with a network communications fabric. A language concept termed a scheduling calculus for rapidly performing essential on-board schedule adaption functions is introduced.

  8. A machine vision system for the calibration of digital thermometers

    NASA Astrophysics Data System (ADS)

    Vázquez-Fernández, Esteban; Dacal-Nieto, Angel; González-Jorge, Higinio; Martín, Fernando; Formella, Arno; Alvarez-Valado, Victor

    2009-06-01

    Automation is a key point in many industrial tasks such as calibration and metrology. In this context, machine vision has been shown to be a useful tool for automation support, especially when there is no other option available. A system for the calibration of portable measurement devices has been developed. The system uses machine vision to obtain the numerical values shown by displays. A new approach based on human perception of digits, which works in parallel with other more classical classifiers, has been created. The results show the benefits of the system in terms of its usability and robustness, obtaining a success rate higher than 99% in display recognition. The system saves time and effort, and offers the possibility of scheduling calibration tasks without excessive attention by the laboratory technicians.

  9. Fiber optic coherent laser radar 3D vision system

    SciTech Connect

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-12-31

    This CLVS will provide a substantial advance in high speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic based scanner and operates at a 128 x 128 pixel frame size at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. This can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  10. The organizing vision of integrated health information systems.

    PubMed

    Ellingsen, Gunnar; Monteiro, Eric

    2008-09-01

    The notion of 'integration' in the context of health information systems is ill-defined yet in widespread use. We identify a variety of meanings ranging from the purely technical integration of information systems to the integration of services. This ambiguity (or interpretive flexibility), we argue, is inherent rather than accidental: it is a necessary prerequisite for mobilizing political and ideological support among stakeholders for integrated health information systems. Building on this, our aim is to trace out the career dynamics of the vision of 'integration/integrated'. The career dynamics is the transformation of both the imaginary and the material (technological) realizations of the unfolding implementation of the vision of integrated care. Empirically we draw on a large, ongoing project at the University Hospital of North Norway (UNN) to establish an integrated health information system.

  11. Hepatic Control of Energy Metabolism via the Autonomic Nervous System

    PubMed Central

    2017-01-01

    Although the human liver comprises approximately 2.8% of the body weight, it plays a central role in the control of energy metabolism. While the biochemistry of energy substrates such as glucose, fatty acids, and ketone bodies in the liver is well understood, many aspects of the overall control system for hepatic metabolism remain largely unknown. These include mechanisms underlying the ascertainment of its energy metabolism status by the liver, and the way in which this information is used to communicate and function together with adipose tissues and other organs involved in energy metabolism. This review article summarizes hepatic control of energy metabolism via the autonomic nervous system. PMID:27592630

  12. Component-Oriented Behavior Extraction for Autonomic System Design

    NASA Technical Reports Server (NTRS)

    Bakera, Marco; Wagner, Christian; Margaria, Tiziana; Hinchey, Mike; Vassev, Emil; Steffen, Bernhard

    2009-01-01

    Rich and multifaceted domain specific specification languages like the Autonomic System Specification Language (ASSL) help to design reliable systems with self-healing capabilities. The GEAR game-based Model Checker has been used successfully to investigate properties of the ESA ExoMars Rover in depth. We show here how to enable GEAR's game-based verification techniques for ASSL via systematic model extraction from a behavioral subset of the language, and illustrate it on a description of the Voyager II space mission.

  13. Using Robotic Operating System (ROS) to control autonomous observatories

    NASA Astrophysics Data System (ADS)

    Vilardell, Francesc; Artigues, Gabriel; Sanz, Josep; García-Piquer, Álvaro; Colomé, Josep; Ribas, Ignasi

    2016-07-01

    Astronomical observatories are complex systems requiring the integration of numerous devices into a common platform. We present here the first steps to integrate the popular Robotic Operating System (ROS) into the control of a fully autonomous observatory. The observatory is also equipped with a decision-making procedure that can automatically react to a changing environment (like weather events). The results obtained so far have shown that the automation of a small observatory can be greatly simplified when using ROS, and made robust with the implementation of our decision-making algorithms.

  14. TOPEX/Poseidon electrical power system -- Autonomous operation

    SciTech Connect

    Chetty, P.R.K.; Richardson, R.; Sherwood, R.; Deligiannis, F.

    1996-12-31

    The main objective of the TOPEX/Poseidon Satellite is to monitor the world's oceans for scientific study of weather and climate prediction, coastal storm warning and maritime safety. The operational conditions of this satellite imposed challenging requirements for the on-board Electrical Power System (EPS). The power system is designed to maintain a certain level of autonomy. This paper presents the autonomous operations planned, their on-orbit performance and how some of the operations were modified as certain unpredictable circumstances were discovered.

  15. Stereo vision based hand-held laser scanning system design

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Wang, Jinming

    2011-11-01

    Although 3D scanning systems are used more and more broadly in many fields, such as computer animation, computer aided design, digital museums, and so on, a convenient scanning device is too expensive for most people to afford. On the other hand, imaging devices are becoming cheaper, and a stereo vision system with two video cameras costs little. In this paper, a hand-held laser scanning system is designed based on the stereo vision principle. The two video cameras are fixed together and are both calibrated in advance. The scanned object, attached with some coded markers, is placed in front of the stereo system, and its position and orientation can be changed freely as needed during scanning. When scanning, the operator sweeps a line laser source, projecting it onto the object. At the same time, the stereo vision system captures the projected lines and reconstructs their 3D shapes. The coded markers are used to transform the coordinate systems of points scanned under different views. Two methods are used to obtain more accurate results. One is to use NURBS curves to interpolate the sections of the laser lines to obtain accurate central points; a thin plate spline is then used to approximate the central points, so that an accurate laser center line is obtained, which guarantees an accurate correspondence between the two cameras. The other is to incorporate the constraint of the laser-swept plane on the reconstructed 3D curves by a PCA (Principal Component Analysis) algorithm, yielding more accurate results. Some examples are given to verify the system.
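
    The PCA-based plane constraint mentioned above can be illustrated with a short sketch. The Python/NumPy code below is a generic example rather than the authors' exact formulation: it fits a plane to the 3D points reconstructed from one sweep of the line laser (which should be nearly coplanar) and projects the points onto that plane.

    # Hedged sketch of a PCA plane constraint for one laser sweep.
    import numpy as np

    def enforce_laser_plane(points):
        """points: (N, 3) array of triangulated 3D points from one laser sweep.
        Returns the points projected onto their best-fit (PCA) plane."""
        centroid = points.mean(axis=0)
        centered = points - centroid
        # The right singular vector with the smallest singular value is the
        # normal of the best-fit plane through the point cloud.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normal = vt[-1]
        # Remove each point's offset along the normal direction.
        distances = centered @ normal
        return points - np.outer(distances, normal)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        xy = rng.uniform(-1, 1, (200, 2))
        noisy = np.c_[xy, 0.01 * rng.normal(size=200)]  # nearly planar test cloud
        flat = enforce_laser_plane(noisy)
        print("mean |correction| applied:", np.linalg.norm(flat - noisy, axis=1).mean())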

  16. A novel container truck locating system based on vision technology

    NASA Astrophysics Data System (ADS)

    He, Junji; Shi, Li; Mi, Weijian

    2008-10-01

    On a container dock, the container truck must be parked right under the trolley of the container crane before loading (unloading) a container to (from) it. However, it often takes nearly one minute to park the truck at the right position because of the difficulty of aiming the truck at the trolley. A monocular machine vision system is designed to locate the moving container truck, give information about how far the truck needs to go forward or back, and thereby help the driver park the truck quickly and correctly. With this system time is saved and the efficiency of loading and unloading is increased. The mathematical model of this system is presented in detail. Then the calibration method is described. Finally, the experimental results verify the validity and precision of this locating system. The prominent characteristics of this system are that it is simple, easy to implement, low cost, and effective. Furthermore, this research work verifies that a monocular vision system can recover 3D dimensions on condition that the length and width of a container are known, which greatly extends the function and application of a monocular vision system.

  17. Systems, methods and apparatus for quiesence of autonomic safety devices with self action

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Sterritt, Roy (Inventor)

    2011-01-01

    Systems, methods and apparatus are provided through which in some embodiments an autonomic environmental safety device may be quiesced. In at least one embodiment, a method for managing an autonomic safety device, such as a smoke detector, based on functioning state and operating status of the autonomic safety device includes processing received signals from the autonomic safety device to obtain an analysis of the condition of the autonomic safety device, generating one or more stay-awake signals based on the functioning status and the operating state of the autonomic safety device, transmitting the stay-awake signal, transmitting self health/urgency data, and transmitting environment health/urgency data. A quiesce component of an autonomic safety device can render the autonomic safety device inactive for a specific amount of time or until a challenging situation has passed.

  18. Development Of An Aviator's Night Vision Imaging System (ANVIS)

    NASA Astrophysics Data System (ADS)

    Efkernan, Albert; Jenkins, Donald

    1981-04-01

    Historical background is presented of the U. S. Army's requirement for a high performance, lightweight, night vision goggle for use by helicopter pilots. System requirements are outlined and a current program for development of a third generation image intensification device is described. Primary emphasis is on the use of lightweight, precision molded, aspheric plastic optical elements and molded plastic mechanical components. System concept, design, and manufacturing considerations are presented.

  19. Development Of An Aviator's Night Vision Imaging System (ANVIS)

    NASA Astrophysics Data System (ADS)

    Jenkins, Donald; Efkeman, Albert

    1980-10-01

    Historical background is presented of the U.S. Army's requirement for a high performance, lightweight, night vision goggle for use by helicopter pilots. System requirements are outlined and a current program for development of a third generation image intensification device is described. Primary emphasis is on the use of lightweight, precision molded, aspheric plastic optical elements and molded plastic mechanical components. System concept, design, and manufacturing considerations are presented.

  20. Knowledge-based and integrated monitoring and diagnosis in autonomous power systems

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A new technique of knowledge-based and integrated monitoring and diagnosis (KBIMD) to deal with abnormalities and incipient or potential failures in autonomous power systems is presented. The KBIMD concept is discussed as a new function of autonomous power system automation. Available diagnostic modelling, system structure, principles and strategies are suggested. In order to verify the feasibility of the KBIMD, a preliminary prototype expert system is designed to simulate the KBIMD function in a main electric network of the autonomous power system.

  1. Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Mount, Frances; Carreon, Patricia; Torney, Susan E.

    2001-01-01

    The Engineering and Mission Operations Directorates at NASA Johnson Space Center are combining laboratories and expertise to establish the Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations. This is a testbed for human centered design, development and evaluation of intelligent autonomous and assistant systems that will be needed for human exploration and development of space. This project will improve human-centered analysis, design and evaluation methods for developing intelligent software. This software will support human-machine cognitive and collaborative activities in future interplanetary work environments where distributed computer and human agents cooperate. We are developing and evaluating prototype intelligent systems for distributed multi-agent mixed-initiative operations. The primary target domain is control of life support systems in a planetary base. Technical approaches will be evaluated for use during extended manned tests in the target domain, the Bioregenerative Advanced Life Support Systems Test Complex (BIO-Plex). A spinoff target domain is the International Space Station (ISS) Mission Control Center (MCC). Products of this project include human-centered intelligent software technology, innovative human interface designs, and human-centered software development processes, methods and products. The testbed uses adjustable autonomy software and life support systems simulation models from the Adjustable Autonomy Testbed, to represent operations on the remote planet. Ground operations prototypes and concepts will be evaluated in the Exploration Planning and Operations Center (ExPOC) and Jupiter Facility.

  2. Vision development test bed: The cradle of the MSS artificial vision system

    NASA Astrophysics Data System (ADS)

    Zucherman, Leon; Stovman, John

    This paper presents the concept of the Vision Development Test-Bed (VDTB) developed at Spar Aerospace Ltd. to assist development work on the Artificial Vision System (AVS) for the Mobile Servicing System (MSS) of Space Station Freedom, in providing reliable and robust target auto-acquisition and robotic auto-tracking capabilities when operating in the extremely contrasty illumination of the space environment. The paper illustrates how the VDTB will be used to understand the problems and to evaluate the methods of solving them. The VDTB is based on the use of conventional but high speed image processing hardware and software. Auxiliary equipment, such as TV cameras, illumination sources, and monitors, will be added to provide completeness and flexibility. A special feature will be the use of solar simulation so that the impact of the harsh illumination conditions in space on image quality can be evaluated. The VDTB will be used to assess the required techniques, algorithms, hardware and software characteristics, and to utilize this information in overcoming the target-recognition and false-target rejection problems. The problems associated with NTSC video processing and the use of color will also be investigated. The paper concludes with a review of applications for the VDTB work, such as AVS real-time simulations, application software development, evaluations, and trade-off studies.

  3. Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

    PubMed Central

    2010-01-01

    Background Dexterous prosthetic hands that were developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and the thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands. Methods The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: 1) the user triggers the system and controls the orientation of the hand; 2) a high-level controller automatically selects the grasp type and size; and 3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used Cyberhand, attached to the forearm, to grasp and transport 18 objects placed at two different distances. Results The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size were different from the optimal ones, but they were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only). Conclusions The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and size). The automatic

  4. Practical vision based degraded text recognition system

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Rapid growth and progress in the medical, industrial, security and technology fields means more and more consideration for the use of camera-based optical character recognition (OCR). Applying OCR to scanned documents is quite mature, and there are many commercial and research products available on this topic. These products achieve acceptable recognition accuracy and reasonable processing times, especially with trained software and constrained text characteristics. Even though the application space for OCR is huge, it is quite challenging to design a single system that is capable of performing automatic OCR for text embedded in an image irrespective of the application. Challenges for OCR systems include images taken under natural real-world conditions, surface curvature, text orientation, font, size, lighting conditions, and noise. These and many other conditions make it extremely difficult to achieve reasonable character recognition. Performance of conventional OCR systems drops dramatically as the degradation level of the text image quality increases. In this paper, a new recognition method is proposed to recognize solid or dotted line degraded characters. The degraded text string is localized and segmented using a new algorithm. The new method was implemented and tested using a development framework system that is capable of performing OCR on camera-captured images. The framework allows parameter tuning of the image-processing algorithm based on a training set of camera-captured text images. Novel methods were used for enhancement, text localization and the segmentation algorithm, which enables building a custom system capable of performing automatic OCR for different applications. The developed framework system includes new image enhancement, filtering, and segmentation techniques which enabled higher recognition accuracies, faster processing time, and lower energy consumption, compared with the best state of the art published

  5. Development of a machine vision system for automated structural assembly

    NASA Technical Reports Server (NTRS)

    Sydow, P. Daniel; Cooper, Eric G.

    1992-01-01

    Research is being conducted at the LaRC to develop a telerobotic assembly system designed to construct large space truss structures. This research program was initiated within the past several years, and a ground-based test-bed was developed to evaluate and expand the state of the art. Test-bed operations currently use predetermined ('taught') points for truss structural assembly. Total dependence on the use of taught points for joint receptacle capture and strut installation is neither robust nor reliable enough for space operations. Therefore, a machine vision sensor guidance system is being developed to locate and guide the robot to a passive target mounted on the truss joint receptacle. The vision system hardware includes a miniature video camera, passive targets mounted on the joint receptacles, target illumination hardware, and an image processing system. Discrimination of the target from background clutter is accomplished through standard digital processing techniques. Once the target is identified, a pose estimation algorithm is invoked to determine the location, in three-dimensional space, of the target relative to the robot's end-effector. Preliminary test results of the vision system in the Automated Structural Assembly Laboratory with a range of lighting and background conditions indicate that it is fully capable of successfully identifying joint receptacle targets throughout the required operational range. Controlled optical bench test results indicate that the system can also provide the pose estimation accuracy to define the target position.

  6. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two stage process; a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data, and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and on-orbit space shuttle attitude controller.
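
    For readers unfamiliar with the FCM system equations that AFLC reuses for relocating centroids and memberships, the following Python sketch shows one textbook fuzzy c-means iteration (membership update followed by centroid relocation). It is a generic illustration of FCM only, not the ART-style AFLC architecture itself, and the fuzzifier value and toy data are assumptions.

    # Hedged sketch: one standard fuzzy c-means (FCM) update step.
    import numpy as np

    def fcm_step(X, centroids, m=2.0, eps=1e-9):
        """One FCM iteration.
        X: (N, d) data, centroids: (c, d).
        Returns (memberships of shape (N, c), updated centroids)."""
        # Distances from every sample to every centroid (eps avoids divide-by-zero).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + eps
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        power = 2.0 / (m - 1.0)
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** power, axis=2)
        # Centroid update: weighted mean of the data with weights u^m.
        w = u ** m
        new_centroids = (w.T @ X) / w.sum(axis=0)[:, None]
        return u, new_centroids

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
        centroids = X[rng.choice(len(X), 2, replace=False)]
        for _ in range(50):
            u, centroids = fcm_step(X, centroids)
        print("centroids:\n", centroids)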

  7. Recent CESAR (Center for Engineering Systems Advanced Research) research activities in sensor based reasoning for autonomous machines

    SciTech Connect

    Pin, F.G.; de Saussure, G.; Spelt, P.F.; Killough, S.M.; Weisbin, C.R.

    1988-01-01

    This paper describes recent research activities at the Center for Engineering Systems Advanced Research (CESAR) in the area of sensor based reasoning, with emphasis being given to their application and implementation on our HERMIES-IIB autonomous mobile vehicle. These activities, including navigation and exploration in a-priori unknown and dynamic environments, goal recognition, vision-guided manipulation and sensor-driven machine learning, are discussed within the framework of a scenario in which an autonomous robot is asked to navigate through an unknown dynamic environment, explore, find and dock at the panel, read and understand the status of the panel's meters and dials, learn the functioning of a process control panel, and successfully manipulate the control devices of the panel to solve maintenance emergency problems. A demonstration of the successful implementation of the algorithms on our HERMIES-IIB autonomous robot for resolution of this scenario is presented. Conclusions are drawn concerning the applicability of the methodologies to more general classes of problems and implications for future work on sensor-driven reasoning for autonomous robots are discussed. 8 refs., 3 figs.

  8. Novel Corrosion Sensor for Vision 21 Systems

    SciTech Connect

    Heng Ban; Bharat Soni

    2007-03-31

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the leading mechanism for boiler tube failures and has emerged to be a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall goal of this project is to develop a technology for on-line fireside corrosion monitoring. This objective is achieved by the laboratory development of sensors and instrumentation, testing them in a laboratory muffle furnace, and eventually testing the system in a coal-fired furnace. This project successfully developed two types of sensors and measurement systems, and successfully tested them in a muffle furnace in the laboratory. The capacitance sensor had a high fabrication cost and might be more appropriate in other applications. The low-cost resistance sensor was tested in a power plant burning eastern bituminous coals. The results show that the fireside corrosion measurement system can be used to determine the corrosion rate at waterwall and superheater locations. Electron microscope analysis of the corroded sensor surface provided a detailed picture of the corrosion process.

  9. Measuring cardiac autonomic nervous system (ANS) activity in children.

    PubMed

    van Dijk, Aimée E; van Lien, René; van Eijsden, Manon; Gemke, Reinoud J B J; Vrijkotte, Tanja G M; de Geus, Eco J

    2013-04-29

    The autonomic nervous system (ANS) controls mainly automatic bodily functions that are engaged in homeostasis, like heart rate, digestion, respiratory rate, salivation, perspiration and renal function. The ANS has two main branches: the sympathetic nervous system, preparing the human body for action in times of danger and stress, and the parasympathetic nervous system, which regulates the resting state of the body. ANS activity can be measured invasively, for instance by radiotracer techniques or microelectrode recording from superficial nerves, or it can be measured non-invasively by using changes in an organ's response as a proxy for changes in ANS activity, for instance of the sweat glands or the heart. Invasive measurements have the highest validity but are poorly feasible in large-scale samples, where non-invasive measures are the preferred approach. Autonomic effects on the heart can be reliably quantified by the recording of the electrocardiogram (ECG) in combination with the impedance cardiogram (ICG), which reflects the changes in thorax impedance in response to respiration and the ejection of blood from the ventricle into the aorta. From the respiration and ECG signals, respiratory sinus arrhythmia can be extracted as a measure of cardiac parasympathetic control. From the ECG and the left ventricular ejection signals, the preejection period can be extracted as a measure of cardiac sympathetic control. ECG and ICG recording is mostly done in laboratory settings. However, having the subjects report to a laboratory greatly reduces ecological validity, is not always doable in large-scale epidemiological studies, and can be intimidating for young children. An ambulatory device for ECG and ICG simultaneously resolves these three problems. Here, we present a study design for a minimally invasive and rapid assessment of cardiac autonomic control in children, using a validated ambulatory device (1-5), the VU University Ambulatory Monitoring System (VU
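
    As an illustration of how such recordings are commonly scored, the Python sketch below implements the widely used peak-valley estimate of respiratory sinus arrhythmia: per breath, the longest inter-beat interval during expiration minus the shortest during inspiration. This is a generic example under stated assumptions (pre-detected R-peaks and breath boundaries) and does not reproduce the exact algorithm of the VU University Ambulatory Monitoring System.

    # Hedged sketch: peak-valley respiratory sinus arrhythmia (RSA) scoring.
    import numpy as np

    def peak_valley_rsa(r_peak_times, breath_windows):
        """r_peak_times: sorted R-peak times in seconds.
        breath_windows: list of (inspiration_start, expiration_start, breath_end)
        tuples in seconds. Returns mean RSA in milliseconds over scorable breaths."""
        ibi_times = np.asarray(r_peak_times[1:])          # time at each interval's end
        ibis = np.diff(r_peak_times) * 1000.0             # inter-beat intervals in ms
        rsa_values = []
        for insp_start, exp_start, breath_end in breath_windows:
            insp = ibis[(ibi_times >= insp_start) & (ibi_times < exp_start)]
            exp_ = ibis[(ibi_times >= exp_start) & (ibi_times < breath_end)]
            if len(insp) and len(exp_):
                # Heart slows during expiration, speeds up during inspiration.
                rsa_values.append(exp_.max() - insp.min())
        return float(np.mean(rsa_values)) if rsa_values else float("nan")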

  10. Distributed autonomous systems: resource management, planning, and control algorithms

    NASA Astrophysics Data System (ADS)

    Smith, James F., III; Nguyen, ThanhVu H.

    2005-05-01

    Distributed autonomous systems, i.e., systems that have separated distributed components, each of which, exhibit some degree of autonomy are increasingly providing solutions to naval and other DoD problems. Recently developed control, planning and resource allocation algorithms for two types of distributed autonomous systems will be discussed. The first distributed autonomous system (DAS) to be discussed consists of a collection of unmanned aerial vehicles (UAVs) that are under fuzzy logic control. The UAVs fly and conduct meteorological sampling in a coordinated fashion determined by their fuzzy logic controllers to determine the atmospheric index of refraction. Once in flight no human intervention is required. A fuzzy planning algorithm determines the optimal trajectory, sampling rate and pattern for the UAVs and an interferometer platform while taking into account risk, reliability, priority for sampling in certain regions, fuel limitations, mission cost, and related uncertainties. The real-time fuzzy control algorithm running on each UAV will give the UAV limited autonomy allowing it to change course immediately without consulting with any commander, request other UAVs to help it, alter its sampling pattern and rate when observing interesting phenomena, or to terminate the mission and return to base. The algorithms developed will be compared to a resource manager (RM) developed for another DAS problem related to electronic attack (EA). This RM is based on fuzzy logic and optimized by evolutionary algorithms. It allows a group of dissimilar platforms to use EA resources distributed throughout the group. For both DAS types significant theoretical and simulation results will be presented.

  11. Image processing in an enhanced and synthetic vision system

    NASA Astrophysics Data System (ADS)

    Mueller, Rupert M.; Palubinskas, Gintautas; Gemperlein, Hans

    2002-07-01

    'Synthetic Vision' and 'Sensor Vision' complement each other to form an ideal system for the pilot's situation awareness. To fuse these two data sets, the sensor images are first segmented by a k-means algorithm and then features are extracted by blob analysis. These image features are compared with the features of the projected airport data using fuzzy logic in order to identify the runway in the sensor image and to improve the aircraft navigation data. This process is necessary due to inaccurate input data, i.e. position and attitude of the aircraft. After identifying the runway, obstacles can be detected using the sensor image. The extracted information is presented to the pilot's display system and combined with the appropriate information from the MMW radar sensor in a subsequent fusion processor. A real-time image processing procedure is discussed and demonstrated with IR measurements of a FLIR system during landing approaches.
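
    The segmentation and feature-extraction front end described above can be sketched in a few lines. The Python/OpenCV example below performs k-means intensity clustering of a sensor frame followed by blob (connected-component) feature extraction; the number of clusters, the input file, and the choice of the brightest cluster are assumptions made for illustration, not the authors' parameters.

    # Hedged sketch: k-means segmentation of a sensor image plus blob analysis.
    import cv2
    import numpy as np

    img = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)  # assumed IR sensor frame
    samples = img.reshape(-1, 1).astype(np.float32)

    # k-means clustering of pixel intensities into k classes.
    k = 3
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    segmented = labels.reshape(img.shape)

    # Blob analysis on the brightest cluster (candidate runway/obstacle regions).
    brightest = int(np.argmax(centers))
    mask = np.uint8(segmented == brightest) * 255
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):                       # label 0 is the background
        x, y, w, h, area = stats[i]
        cx, cy = centroids[i]
        print(f"blob {i}: area={area}, bbox=({x},{y},{w},{h}), centroid=({cx:.1f},{cy:.1f})")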

  12. Low Cost Night Vision System for Intruder Detection

    NASA Astrophysics Data System (ADS)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionalities as well as lower costs. This has made previously more expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust and low cost night vision systems based on red-green-blue (RGB) colour histograms for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel compatible notebook computers, running the Ubuntu Linux operating system, with less than 8GB of RAM. They were tested against human intruders under low light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
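
    A minimal sketch of an RGB-histogram intruder detector in this spirit is given below, using Python and OpenCV: each frame's colour histogram is compared against a background reference and large deviations are flagged. The camera index, histogram distance measure, and alarm threshold are illustrative assumptions rather than the authors' tuned values.

    # Hedged sketch: RGB-histogram change detection for intruder alerting.
    import cv2
    import numpy as np

    def rgb_histogram(frame, bins=32):
        """Concatenated, normalised per-channel histogram of a BGR frame."""
        hists = [cv2.calcHist([frame], [c], None, [bins], [0, 256]) for c in range(3)]
        h = np.concatenate(hists)
        return (h / (h.sum() + 1e-9)).astype(np.float32)

    cap = cv2.VideoCapture(0)                     # assumed camera index
    ok, frame = cap.read()
    background = rgb_histogram(frame) if ok else None

    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        current = rgb_histogram(frame)
        # Bhattacharyya distance between histograms: 0 = identical, 1 = disjoint.
        dist = cv2.compareHist(background, current, cv2.HISTCMP_BHATTACHARYYA)
        if dist > 0.2:                            # assumed alarm threshold
            print("possible intruder, histogram distance =", round(float(dist), 3))
    cap.release()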

  13. Development of a distributed vision system for industrial conditions

    NASA Astrophysics Data System (ADS)

    Weiss, Michael; Schiller, Arnulf; O'Leary, Paul; Fauster, Ewald; Schalk, Peter

    2003-04-01

    This paper presents a prototype system to monitor a hot glowing wire during the rolling process with respect to quality-relevant aspects. For this purpose, a measurement system based on machine vision and a communication framework integrating distributed measurement nodes is introduced. As a technological approach, machine vision is used to evaluate the wire quality parameters, and an image processing algorithm, based on dual Grassmannian coordinates fitting parallel lines by singular value decomposition, is formulated. Furthermore, a communication framework which implements anonymous tuplespace communication, a private network based on TCP/IP, and a consistent Java implementation of all used components is presented. Additionally, industrial requirements such as real-time communication to IEC-61131-conformant digital I/Os (Modbus TCP/IP protocol), the implementation of a watchdog pattern, and the integration of multiple operating systems (LINUX, QNX and WINDOWS) are outlined. The deployment of such a framework to the real-world problem statement of the wire rolling mill is presented.
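
    As a plain total-least-squares analogue of the parallel-line fit mentioned above (the dual-Grassmannian formulation itself is not reproduced here), the Python sketch below estimates a shared direction for the wire's two edges from the SVD of the stacked, per-edge-centred points and derives the apparent wire width from the offset between the two lines. The edge-point inputs and test data are assumptions.

    # Hedged sketch: fitting two parallel lines (wire edges) with a shared
    # direction obtained by SVD (total least squares).
    import numpy as np

    def fit_parallel_lines(edge_a, edge_b):
        """edge_a, edge_b: (N, 2) and (M, 2) arrays of edge points in pixels.
        Returns (direction, centroid_a, centroid_b, separation)."""
        ca, cb = edge_a.mean(axis=0), edge_b.mean(axis=0)
        # Centre each edge about its own centroid, then stack: the dominant
        # right singular vector of the stack is the common line direction.
        stacked = np.vstack([edge_a - ca, edge_b - cb])
        _, _, vt = np.linalg.svd(stacked, full_matrices=False)
        direction = vt[0]
        normal = np.array([-direction[1], direction[0]])
        separation = abs((cb - ca) @ normal)      # perpendicular distance = wire width
        return direction, ca, cb, separation

    if __name__ == "__main__":
        t = np.linspace(0, 100, 50)
        edge_a = np.c_[t, 0.2 * t + 5 + np.random.normal(0, 0.1, 50)]
        edge_b = np.c_[t, 0.2 * t + 12 + np.random.normal(0, 0.1, 50)]
        d, ca, cb, width = fit_parallel_lines(edge_a, edge_b)
        print("direction:", d, "apparent wire width (px):", round(float(width), 2))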

  14. Autonomic nervous system testing may not distinguish multiple system atrophy from Parkinson's disease

    PubMed Central

    Riley, D; Chelimsky, T

    2003-01-01

    Background: Formal laboratory testing of autonomic function is reported to distinguish between patients with Parkinson's disease and those with multiple system atrophy (MSA), but such studies segregate patients according to clinical criteria that select those with autonomic dysfunction for the MSA category. Objective: To characterise the profiles of autonomic disturbances in patients in whom the diagnosis of Parkinson's disease or MSA used criteria other than autonomic dysfunction. Methods: 47 patients with parkinsonism and autonomic symptoms who had undergone autonomic laboratory testing were identified and their case records reviewed for non-autonomic features. They were classified clinically into three diagnostic groups: Parkinson's disease (19), MSA (14), and uncertain (14). The performance of the patients with Parkinson's disease was compared with that of the MSA patients on five autonomic tests: RR variation on deep breathing, heart rate changes with the Valsalva manoeuvre, tilt table testing, the sudomotor axon reflex test, and thermoregulatory sweat testing. Results: None of the tests distinguished one group from the other with any statistical significance, alone or in combination. Parkinson's disease and MSA patients showed similar patterns of autonomic dysfunction on formal testing of cardiac sympathetic and parasympathetic, vasomotor, and central and peripheral sudomotor functions. Conclusions: This study supports the clinical observation that Parkinson's disease is often indistinguishable from MSA when it involves the autonomic nervous system. The clinical combination of parkinsonism and dysautonomia is as likely to be caused by Parkinson's disease as by MSA. Current clinical criteria for Parkinson's disease and MSA that direct patients with dysautonomia into the MSA group may be inappropriate. PMID:12486267

  15. HMD digital night vision system for fixed wing fighters

    NASA Astrophysics Data System (ADS)

    Foote, Bobby D.

    2013-05-01

    Digital night sensor technology offers both advantages and disadvantages over standard analog systems. As digital night sensor technology matures and its disadvantages are overcome, the transition away from analog-type sensors will accelerate with new programs. In response to this growing need, RCEVS is actively investing in digital night vision systems that will provide the performance needed for the future. Rockwell Collins and Elbit Systems of America continue to invest in digital night technology and have completed laboratory, ground, and preliminary flight testing to evaluate the key factors important for night vision. These evaluations have led to a summary of the maturity of digital night capability and of the status of the key performance gap between analog and digital systems. Introduction of digital night vision systems can be found in the roadmap of future fixed-wing and rotorcraft programs beginning in 2015. This will bring a new set of capabilities to the pilot that will enhance the ability to perform night operations with no loss of performance.

  16. NOVEL CORROSION SENSOR FOR VISION 21 SYSTEMS

    SciTech Connect

    Heng Ban

    2004-12-01

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high-temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the metal loss caused by chemical reactions on surfaces exposed to the combustion environment. Such corrosion is the leading mechanism for boiler tube failures and has emerged as a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of the corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall objective of this project is to develop a technology for on-line corrosion monitoring based on a new concept. This report describes initial results from the first-year effort of the three-year study, including laboratory development and experiments, and pilot combustor testing.

  17. The Systemic Vision of the Educational Learning

    ERIC Educational Resources Information Center

    Lima, Nilton Cesar; Penedo, Antonio Sergio Torres; de Oliveira, Marcio Mattos Borges; de Oliveira, Sonia Valle Walter Borges; Queiroz, Jamerson Viegas

    2012-01-01

    As the sophistication of technology increases, so does the demand for quality in education. The expectation of quality has promoted a broad range of products and systems, including in education. These factors include the increased diversity in the student body, which requires greater emphasis that allows a simple and dynamic model in the…

  18. Displacement measurement system for inverters using computer micro-vision

    NASA Astrophysics Data System (ADS)

    Wu, Heng; Zhang, Xianmin; Gan, Jinqiang; Li, Hai; Ge, Peng

    2016-06-01

    We propose a practical system for noncontact displacement measurement of inverters using computer micro-vision at the sub-micron scale. The measuring method of the proposed system is based on a fast template matching algorithm with an optical microscope. A laser interferometer measurement (LIM) system is built for comparison. Experimental results demonstrate that the proposed system achieves the same performance as the LIM system but shows higher operability and stability. The measuring accuracy is 0.283 μm.
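
    A minimal sketch, with assumed file names and calibration, of displacement measurement by template matching between two microscope frames; it is meant only to illustrate the idea and is not the authors' implementation.

        import cv2

        ref = cv2.imread("frame_ref.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
        cur = cv2.imread("frame_cur.png", cv2.IMREAD_GRAYSCALE)

        # Take a patch around the tracked feature in the reference frame.
        y0, x0, s = 200, 300, 64
        template = ref[y0:y0 + s, x0:x0 + s]

        res = cv2.matchTemplate(cur, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (x1, y1) = cv2.minMaxLoc(res)       # best match location

        pixel_size_um = 0.05                         # assumed calibration
        dx = (x1 - x0) * pixel_size_um
        dy = (y1 - y0) * pixel_size_um
        print("displacement (um):", dx, dy)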

  19. Autonomous Operation of the Nanosatellite URSA MAIOR Micropropulsion System

    NASA Astrophysics Data System (ADS)

    Santoni, F.

    Università degli Studi di Roma "La Sapienza", Scuola di Ingegneria Aerospaziale, Via Eudossiana 16, 00184. At Università di Roma "La Sapienza" a nanosatellite bus is under development, with a one-liter target volume and a one-kilogram target weight. This nanosatellite, called URSA MAIOR (Università di Roma "la SApienza" Micro Autonomous Imager in ORbit), has a micro camera on board to take pictures of the Earth. The nanosatellite is three-axis stabilized, using a micro momentum wheel, with magnetic coils for active nutation damping and pointing control. An experimental micropropulsion system is present on board, together with the magnetic attitude control system. The design, construction and testing of the satellite are carried out by academic personnel and by students, who are directly involved in the whole process, as is in the spirit of the microsatellite program at Università di Roma "La Sapienza". A few technological payloads are present on board: an Earth imaging experiment, using a few-gram commercial-off-the-shelf microcamera; commercial Li-Ion batteries as the only energy storage device; and a microwheel developed at our University laboratories for attitude stabilization. In addition, a micropropulsion experiment is planned on board. The Austrian company Mechatronic and INFM, an Italian research institute in Trieste, are developing a microthruster for nanosatellite applications. In the frame of a cooperation established between these two institutions and Università di Roma "La Sapienza", this newly developed hardware will be tested in orbit. The thruster is made basically of an integrated microvalve, built on a silicon chip, and a micronozzle, etched on the same silicon chip, to obtain supersonic expansion of the gas flow. The nominal thrust of the system is about 100 microN. The throat section is about 100 micron in diameter. The first phase in the construction of the microthruster has been the construction of the micronozzle on a silicon chip.

  20. Telerobotic rendezvous and docking vision system architecture

    NASA Technical Reports Server (NTRS)

    Gravely, Ben; Myers, Donald; Moody, David

    1992-01-01

    This research program has successfully demonstrated a new target label architecture that allows a microcomputer to determine the position, orientation, and identity of an object. It contains a CAD-like database with specific geometric information about the object for approach, grasping, and docking maneuvers. Successful demonstrations were performed selecting and docking an ORU box with either of two ORU receptacles. Small but significant differences were seen between the two camera types used in the program, and camera-sensitive program elements have been identified. The software has been formatted into a new co-autonomy system which provides various levels of operator interaction and promises to allow effective application of telerobotic systems while code improvements continue.

  1. Transport Device Driver's Assistance Vision Systems

    NASA Astrophysics Data System (ADS)

    Szpytko, Janusz; Gbyl, Michał

    2011-03-01

    The purpose of this paper is to review solutions whose task is to actively correct the decision-making processes of a vehicle's driver on the basis of information obtained from the surroundings, and to present a tool that makes it possible to react to changes in the psychophysical condition of the driver. The system is implemented in the Matlab application environment on the basis of images acquired by a webcam.

  2. G2 Autonomous Control for Cryogenic Delivery Systems

    NASA Technical Reports Server (NTRS)

    Dito, Scott J.

    2014-01-01

    The Independent System Health Management-Autonomous Control (ISHM-AC) application development for cryogenic delivery systems is intended to create an expert system that will require minimal operator involvement and ultimately allow for complete autonomy when fueling a space vehicle in the time prior to launch. The G2-Autonomous Control project is the development of a model, simulation, and ultimately a working application that will control and monitor the cryogenic fluid delivery to a rocket for testing purposes. To develop this application, the project is using the programming language/environment Gensym G2. The environment is an all-inclusive application that allows development, testing, modeling, and finally operation of the unique application through graphical and programmatic methods. We have learned G2 through training classes and subsequent application development, and are now in the process of building the application that will soon be used to test on cryogenic loading equipment here at the Kennedy Space Center Cryogenics Test Laboratory (CTL). The G2 ISHM-AC application will bring with it a safer and more efficient propellant loading system for the future launches at Kennedy Space Center and eventually mobile launches from all over the world.

  3. The Spacecraft Emergency Response System (SERS) for Autonomous Mission Operations

    NASA Technical Reports Server (NTRS)

    Breed, Julia; Chu, Kai-Dee; Baker, Paul; Starr, Cynthia; Fox, Jeffrey; Baitinger, Mick

    1998-01-01

    Today, most mission operations are geared toward lowering cost through unmanned operations. 7-day/24-hour operations are reduced to either 5-day/8-hour operations or become totally autonomous, especially for deep-space missions. Proper and effective notification during a spacecraft emergency could mean success or failure for an entire mission. The Spacecraft Emergency Response System (SERS) is a tool designed for autonomous mission operations. The SERS automatically contacts on-call personnel as needed when crises occur, either on board the spacecraft or within the automated ground systems. In addition, the SERS provides a group-ware solution to facilitate the work of the person(s) contacted. The SERS is independent of the spacecraft's automated ground system. It receives and catalogues reports from various ground system components in near real time. Then, based on easily configurable parameters, the SERS determines whom, if anyone, should be alerted. Alerts may be issued via Sky-Tel 2-way pager, telephony, or e-mail. The alerted personnel can then review and respond to the spacecraft anomalies through the Netscape Internet Web Browser, or directly review and respond from the Sky-Tel 2-way pager.

  4. Autonomous Pathogen Detection System - FY02 Annual Progress Report

    SciTech Connect

    Colston, B; Brown, S; Burris, K; Elkin, C; Hindson, B; Langlois, R; Masquelier, D; McBride, M; Metz, T; Nasarabadi, S; Makarewicz, T; Milznovich, F; Venkateswaran, K S; Visuri, S

    2002-11-11

    The objective of this project is to design, fabricate and field demonstrate a biological agent detection and identification capability, the Autonomous Pathogen Detector System (APDS). Integrating a flow cytometer and real-time polymerase chain reaction (PCR) detector with sample collection, sample preparation and fluidics will provide a compact, autonomously operating instrument capable of simultaneously detecting multiple pathogens and/or toxins. The APDS will operate in fixed locations, continuously monitoring air samples and automatically reporting the presence of specific biological agents. The APDS will utilize both multiplex immunoassays and nucleic acid assays to provide "quasi-orthogonal" multiple agent detection approaches to minimize false positives and increase the reliability of identification. Technical advances across several fronts must occur, however, to realize the full extent of the APDS. The end goal of a commercially available system for civilian biological weapon defense will be accomplished through three progressive generations of APDS instruments. The APDS is targeted for civilian applications in which the public is at high risk of exposure to covert releases of bioagent, such as major subway systems and other transportation terminals, large office complexes and convention centers. APDS is also designed to be part of a monitoring network of sensors integrated with command and control systems for wide-area monitoring of urban areas and major public gatherings. In this latter application there is potential that a fully developed APDS could add value to DoD monitoring architectures.

  5. A Model-Driven Architecture Approach for Modeling, Specifying and Deploying Policies in Autonomous and Autonomic Systems

    NASA Technical Reports Server (NTRS)

    Pena, Joaquin; Hinchey, Michael G.; Sterritt, Roy; Ruiz-Cortes, Antonio; Resinas, Manuel

    2006-01-01

    Autonomic Computing (AC), self-management based on high level guidance from humans, is increasingly gaining momentum as the way forward in designing reliable systems that hide complexity and conquer IT management costs. Effectively, AC may be viewed as Policy-Based Self-Management. The Model Driven Architecture (MDA) approach focuses on building models that can be transformed into code in an automatic manner. In this paper, we look at ways to implement Policy-Based Self-Management by means of models that can be converted to code using transformations that follow the MDA philosophy. We propose a set of UML-based models to specify autonomic and autonomous features along with the necessary procedures, based on modification and composition of models, to deploy a policy as an executing system.

  6. Comparison between thermodynamic work and heat in autonomous quantum systems.

    PubMed

    Xu, Y Y

    2016-12-01

    One of the most important problems in quantum thermodynamics is how to distinguish work and heat in autonomous quantum systems. In this paper, work and heat are defined through the following criterion, i.e., work is the energy that cannot change the entropy of the energy resource, and satisfies the Jarzynski equality, while heat does not. Two kinds of definitions satisfying the two corresponding requirements are proposed and demonstrated, and the consistency condition of the two kinds is given. Through the first definition, the problem of entropy production is investigated. A model study is also presented to verify the proposal.
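
    For readers unfamiliar with the criterion invoked above, the Jarzynski equality referenced in the abstract is, in its standard textbook form (the notation below is generic and not taken from the paper):

        \[
          \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},
          \qquad \beta = \frac{1}{k_{\mathrm{B}} T}
        \]
        % W        : work performed on the system in a single realization
        % \Delta F : equilibrium free-energy difference between end states
        % \langle \cdot \rangle : average over many realizations of the process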

  7. Analysis of water in Autonomous Biological Systems (ABS) samples.

    PubMed

    Ishikawa, Y; Kobayashi, K; Seki, K; Mizutani, H; Kawasaki, Y; Koike, J; Ijiri, K; Yamashita, M; Sugiura, K; Poynter, J; MacCallum, T; Anderson, G

    1998-12-01

    Several soluble components, peptidase and amino acids, and the carbon isotopic ratio in the water retrieved from flight experiments of Autonomous Biological Systems (ABS), as well as ground control samples, were analyzed to interpret the condition, dynamics, and material balance of the ABS ecosystems. Organic carbon in the flight samples was found to be more abundant than in the control samples, suggesting that the uniform ecosystems in low gravity might more easily dissolve soluble components. The Mir-1997 flight sample showed a higher C/N ratio, probably because of the dissolution of carbon-rich plant materials.

  8. Autonomous, teleoperated, and shared control of robot systems

    SciTech Connect

    Anderson, R.J.

    1994-12-31

    This paper illustrates how different modes of operation such as bilateral teleoperation, autonomous control, and shared control can be described and implemented using combinations of modules in the SMART robot control architecture. Telerobotics modes are characterized by different "grids" of SMART icons, where each icon represents a portion of run-time code that implements a passive control law. By placing strict requirements on the module's input-output behavior and using scattering theory to develop a passive sampling technique, a flexible, expandable telerobot architecture is achieved. An automatic code generation tool for generating SMART systems is also described.

  9. Comparison between thermodynamic work and heat in autonomous quantum systems

    NASA Astrophysics Data System (ADS)

    Xu, Y. Y.

    2016-12-01

    One of the most important problems in quantum thermodynamics is how to distinguish work and heat in autonomous quantum systems. In this paper, work and heat are defined through the following criterion, i.e., work is the energy that cannot change the entropy of the energy resource, and satisfies the Jarzynski equality, while heat does not. Two kinds of definitions satisfying the two corresponding requirements are proposed and demonstrated, and the consistency condition of the two kinds is given. Through the first definition, the problem of entropy production is investigated. A model study is also presented to verify the proposal.

  10. Extracting depth by binocular stereo in a robot vision system

    SciTech Connect

    Marapane, S.B.; Trivedi, M.M.

    1988-01-01

    A new generation of robotic systems will operate in complex, unstructured environments utilizing sophisticated sensory mechanisms. Vision and range will be two of the most important sensory modalities such a system will utilize to sense its operating environment. Measurement of depth is critical for the success of many robotic tasks such as object recognition and location, obstacle avoidance and navigation, and object inspection. In this paper we consider the development of a binocular stereo technique for extracting depth information in a robot vision system for inspection and manipulation tasks. The ability to produce precise depth measurements over a wide range of distances and the passivity of the approach make binocular stereo techniques attractive and appropriate for range finding in a robotic environment. This paper describes work in progress toward the development of a region-based binocular stereo technique for a robot vision system designed for inspection and manipulation, and presents preliminary experiments designed to evaluate the performance of the approach. Results of these studies show promise for the region-based stereo matching approach. 16 refs., 1 fig.
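
    A hedged, generic sketch of recovering depth from binocular disparity; the OpenCV block matcher below stands in for the region-based matcher described above, and the calibration numbers are assumptions.

        import cv2
        import numpy as np

        left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disp = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

        focal_px   = 700.0     # assumed focal length in pixels
        baseline_m = 0.12      # assumed camera separation

        valid = disp > 0
        depth = np.zeros_like(disp)
        depth[valid] = focal_px * baseline_m / disp[valid]   # Z = f * B / d
        print("median scene depth (m):", np.median(depth[valid]))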

  11. Concluding remarks of Autonomous Biological Systems (ABS) experiments.

    PubMed

    Ishikawa, Y; Kobayashi, K; Mizutani, H; Kawasaki, Y; Koike, J; Ijiri, K; Yamashita, M; Sugiura, K; Poynter, J; MacCallum, T; Anderson, G

    1998-12-01

    Team efforts in analyzing the Autonomous Biological Systems (ABS) space experiments are summarized here to draw scientific conclusions and to scope extended studies in the future. From the three experiments on the Space Shuttle and Space Station Mir, a closed ecological system modeled by the ABS was verified to be capable of sustaining its animal and plant members under the space environment for a period of several months. The animals successfully completed their life cycle in space during the course of these experiments; this was the first time that the life cycle of higher organisms had been completed in space within a closed ecological system. The importance of gravity for ecology was shown at the same time. Gravity is a dominant factor in ecology, shaping the spatial patterns and distribution of the members of an ecological system. Under microgravity, the fate of the ecological system was found to be highly sensitive to variations in environmental factors, such as the light illumination cycle.

  12. Design and realization of an autonomous solar system

    NASA Astrophysics Data System (ADS)

    Gaga, A.; Diouri, O.; Es-sbai, N.; Errahimi, F.

    2017-03-01

    The aim of this work is the design and realization of an autonomous solar system with MPPT control, a battery charge/discharge regulator, and an H-bridge multi-level inverter, with acquisition and supervision based on a microcontroller. The proposed approach is based on developing a software platform in the LabVIEW environment which gives the system a flexible structure for controlling, monitoring, and supervising the whole system in real time, while providing power maximization and the best quality of energy conversion from DC to AC power. The reliability of the proposed solar system is validated by simulation results in PowerSim and by experimental results obtained with a solar panel, a lead-acid battery, a solar regulator, and a cascaded H-bridge single-phase inverter.
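
    As a generic illustration of the MPPT control mentioned above, the following perturb-and-observe sketch tracks the maximum power point of a toy panel curve; the curve, step size, and starting point are invented for the example and are not the authors' model.

        def po_mppt_step(v, p, state, step=0.2):
            """One perturb-and-observe update; state = (v_prev, p_prev, v_ref)."""
            v_prev, p_prev, v_ref = state
            if p > p_prev:                       # last perturbation helped: keep going
                v_ref += step if v > v_prev else -step
            else:                                # last perturbation hurt: reverse
                v_ref += -step if v > v_prev else step
            return v_ref, (v, p, v_ref)

        def toy_panel_power(v):                  # assumed P-V curve, peak near 17 V
            return max(0.0, 100.0 - 0.9 * (v - 17.0) ** 2)

        v = 16.0
        state = (v, toy_panel_power(v), v)
        for _ in range(50):
            p = toy_panel_power(v)
            v, state = po_mppt_step(v, p, state)
        print("operating voltage after tracking:", round(v, 2))   # oscillates near 17 V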

  13. A vision system for a Mars rover

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.

    1988-01-01

    A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.

  14. A VISION of Advanced Nuclear System Cost Uncertainty

    SciTech Connect

    J'Tia Taylor; David E. Shropshire; Jacob J. Jacobson

    2008-08-01

    VISION (VerifIable fuel cycle SImulatiON) is the Advanced Fuel Cycle Initiative's and Global Nuclear Energy Partnership Program's nuclear fuel cycle systems code designed to simulate the US commercial reactor fleet. The code is a dynamic stock and flow model that tracks the mass of materials at the isotopic level through the entire nuclear fuel cycle. As VISION runs, it calculates the decay of 70 isotopes including uranium, plutonium, minor actinides, and fission products. VISION.ECON is a sub-model of VISION that was developed to estimate fuel cycle and reactor costs. The sub-model uses the mass flows generated by VISION for each of the fuel cycle functions (referred to as modules) and calculates the annual cost based on cost distributions provided by the Advanced Fuel Cycle Cost Basis Report. Costs are aggregated for each fuel cycle module, and the modules are aggregated into front end, back end, recycling, reactor, and total fuel cycle costs. The software also has the capability to perform system sensitivity analysis, which may be used to analyze the impacts on costs due to system uncertainty. This paper provides a preliminary evaluation of the cost uncertainty effects attributable to 1) key reactor and fuel cycle system parameters and 2) scheduling variations. The evaluation focuses on the uncertainty in the total cost of electricity and in fuel cycle costs. First, a single light water reactor (LWR) using mixed oxide fuel is examined to ascertain the effects of simple parameter changes. Three system parameters (burnup, capacity factor, and reactor power) are varied from nominal values, and the effect on the total cost of electricity is measured. These simple parameter changes are then measured in more complex 2-tier scenarios including LWRs with mixed fuel and fast recycling reactors using transuranic fuel. Other system parameters are evaluated and results will be presented in the paper. Secondly, the uncertainty due to

  15. Establishing an evoked-potential vision-tracking system

    NASA Technical Reports Server (NTRS)

    Skidmore, Trent A.

    1991-01-01

    This paper presents experimental evidence to support the feasibility of an evoked-potential vision-tracking system. The topics discussed are stimulator construction, verification of the photic driving response in the electroencephalogram, a method for performing frequency separation, and a transient-analysis example. The final issue considered is that of object multiplicity (concurrent visual stimuli with different flashing rates). The paper concludes by discussing several applications currently under investigation.

  16. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  17. International Border Management Systems (IBMS) Program : visions and strategies.

    SciTech Connect

    McDaniel, Michael; Mohagheghi, Amir Hossein

    2011-02-01

    Sandia National Laboratories (SNL), International Border Management Systems (IBMS) Program is working to establish a long-term border security strategy with United States Central Command (CENTCOM). Efforts are being made to synthesize border security capabilities and technologies maintained at the Laboratories, and coordinate with subject matter expertise from both the New Mexico and California offices. The vision for SNL is to provide science and technology support for international projects and engagements on border security.

  18. Machine vision system for automated detection of stained pistachio nuts

    NASA Astrophysics Data System (ADS)

    Pearson, Tom C.

    1995-01-01

    A machine vision system was developed to separate stained pistachio nuts, which comprise about 5% of the California crop, from unstained nuts. The system may be used to reduce the labor involved with manual grading or to remove aflatoxin-contaminated product from low-grade process streams. The system was tested on two different pistachio process streams: the bi-chromatic color sorter reject stream and the small nut shelling stock stream. The system had a minimum overall error rate of 14% for the bi-chromatic sorter reject stream and 15% for the small shelling stock stream.

  19. Non-autonomous lattice systems with switching effects and delayed recovery

    NASA Astrophysics Data System (ADS)

    Han, Xiaoying; Kloeden, Peter E.

    2016-09-01

    The long term behavior of a type of non-autonomous lattice dynamical system is investigated, where these systems have a diffusive nearest neighborhood interaction and discontinuous reaction terms with recoverable delays. This problem is of both biological and mathematical interest, due to its application in systems of excitable cells as well as general biological systems involving delayed recovery. The problem is formulated as an evolution inclusion with delays and the existence of weak and strong solutions is established. It is then shown that the solutions generate a set-valued non-autonomous dynamical system and that this non-autonomous dynamical system possesses a non-autonomous global pullback attractor.

  20. Vector disparity sensor with vergence control for active vision systems.

    PubMed

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach is the best trade-off choice for integration with the active vision system.

  1. Autonomously acquiring declarative and procedural knowledge for ICAT systems

    NASA Technical Reports Server (NTRS)

    Kovarik, Vincent J., Jr.

    1993-01-01

    The construction of Intelligent Computer Aided Training (ICAT) systems is critically dependent on the ability to define and encode knowledge. This knowledge engineering effort can be broadly divided into two categories: domain knowledge and expert or task knowledge. Domain knowledge refers to the physical environment or system with which the expert interacts. Expert knowledge consists of the set of procedures and heuristics employed by the expert in performing their task. Both these areas are a significant bottleneck in the acquisition of knowledge for ICAT systems. This paper presents a research project in the area of autonomous knowledge acquisition using a passive observation concept. The system observes an expert and then generalizes the observations into production rules representing the domain expert's knowledge.

  2. Reliability assessment of autonomous power systems incorporating HVDC interconnection links

    SciTech Connect

    Dialynas, E.N.; Koskolos, N.C.; Agoris, D.

    1996-01-01

    The objective of this paper is to present an improved computational method for the overall reliability assessment of autonomous power systems that may or may not contain HVdc interconnection links. This is a hybrid method based on a Monte-Carlo simulation sequential approach which incorporates an analytical approach for the reliability modeling of the HVdc transmission links. The developed models and techniques have been implemented into a computer program that can be used to simulate the operational practices and characteristics of the overall system under study efficiently and realistically. A set of reliability indices are calculated for each load-point of interest and the entire system while a set of additional indices is calculated for quantifying the reliability performance of the interconnection links under the specified operating requirements. The analysis of a practical system is also included for a number of studies representing its various operating and design characteristics.
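
    As a rough illustration of the Monte Carlo style of reliability assessment referred to above (a highly simplified, non-sequential sketch, not the paper's hybrid method; unit sizes, outage rates, and the load value are invented for the example):

        import random

        units = [(50.0, 0.05), (50.0, 0.05), (30.0, 0.08), (20.0, 0.10)]  # (MW, forced outage rate)
        load = 110.0
        trials, shortfalls = 100_000, 0

        for _ in range(trials):
            # Sample each unit as available or on forced outage.
            capacity = sum(mw for mw, outage_rate in units
                           if random.random() > outage_rate)
            if capacity < load:
                shortfalls += 1

        print("estimated loss-of-load probability:", shortfalls / trials)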

  3. Method and system for providing autonomous control of a platform

    NASA Technical Reports Server (NTRS)

    Seelinger, Michael J. (Inventor); Yoder, John-David (Inventor)

    2012-01-01

    The present application provides a system for enabling instrument placement from distances on the order of five meters, for example, and increases accuracy of the instrument placement relative to visually-specified targets. The system provides precision control of a mobile base of a rover and onboard manipulators (e.g., robotic arms) relative to a visually-specified target using one or more sets of cameras. The system automatically compensates for wheel slippage and kinematic inaccuracy ensuring accurate placement (on the order of 2 mm, for example) of the instrument relative to the target. The system provides the ability for autonomous instrument placement by controlling both the base of the rover and the onboard manipulator using a single set of cameras. To extend the distance from which the placement can be completed to nearly five meters, target information may be transferred from navigation cameras (used for long-range) to front hazard cameras (used for positioning the manipulator).

  4. Context-Based Intent Understanding for Autonomous Systems in Naval and Collaborative Robot Applications

    DTIC Science & Technology

    2013-10-29

    Final Report, 8/1/2009 to 7/31/2013. While humans are very good at recognizing intentions, endowing an autonomous system (robot or simulated agent) with similar skills is a more complex problem. The work addresses intent understanding, with specific focus on autonomous systems for naval and collaborative robotics applications.

  5. Sensing, Control, and System Integration for Autonomous Vehicles: A Series of Challenges

    NASA Astrophysics Data System (ADS)

    Özgüner, Ümit; Redmill, Keith

    One of the important examples of mechatronic systems can be found in autonomous ground vehicles. Autonomous ground vehicles provide a series of challenges in sensing, control and system integration. In this paper we consider off-road autonomous vehicles, automated highway systems and urban autonomous driving and indicate the unifying aspects. We specifically consider our own experience during the last twelve years in various demonstrations and challenges in attempting to identify unifying themes. Such unifying themes can be observed in basic hierarchies, hybrid system control approaches and sensor fusion techniques.

  6. Fiber optic coherent laser radar 3d vision system

    SciTech Connect

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-12-31

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  7. Computer vision in roadway transportation systems: a survey

    NASA Astrophysics Data System (ADS)

    Loce, Robert P.; Bernal, Edgar A.; Wu, Wencheng; Bala, Raja

    2013-10-01

    There is a worldwide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This paper presents a survey of computer vision techniques related to three key problems in the transportation domain: safety, efficiency, and security and law enforcement. A broad review of the literature is complemented by detailed treatment of a few selected algorithms and systems that the authors believe represent the state-of-the-art.

  8. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
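
    The following compact differential evolution loop is a generic illustration of the evolutionary machinery described above; the rand/1/bin scheme, parameter values, and sphere objective are assumptions standing in for a real simulator-based cost function.

        import random

        def differential_evolution(cost, dim=5, pop_size=20, F=0.7, CR=0.9, gens=200):
            # Random initial population in [-5, 5]^dim.
            pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(gens):
                for i in range(pop_size):
                    # rand/1 mutation from three distinct other members.
                    a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                    # Binomial crossover against the current target vector.
                    trial = [a[k] + F * (b[k] - c[k]) if random.random() < CR else pop[i][k]
                             for k in range(dim)]
                    if cost(trial) < cost(pop[i]):     # greedy selection
                        pop[i] = trial
            return min(pop, key=cost)

        best = differential_evolution(lambda x: sum(v * v for v in x))
        print(best)   # components close to zero for the sphere objective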

  9. Distributed multisensor blackboard system for an autonomous robot

    NASA Astrophysics Data System (ADS)

    Kappey, Dietmar; Pokrandt, Peter; Schloen, Jan

    1994-10-01

    Sensor data enable a robotic system to react to events occurring in its environment. Much work has been done on the development of various sensors and algorithms to extract information from an environment. On the other hand, little work has been done in the field of multisensor communication. This paper presents a shared-memory-based communication protocol that has been developed for the autonomous robot system KAMRO. This system consists of two PUMA 260 manipulators and an omnidirectionally driven mobile platform. The proposed approach is based on logical sensors, which can be used to dynamically build hierarchical sensor units. The protocol uses a distributed blackboard structure for the transmission of sensor data and commands. To support asynchronous coupling of robots and sensors, it not only transfers single sensor values, but also offers functions to estimate future values.

  10. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    PubMed Central

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-01-01

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information. PMID:27999318
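
    To make the sensor-fusion step concrete, here is a generic extended Kalman filter predict/update pair on a toy constant-velocity example; the state layout, models, and noise levels are assumptions and not the filter described in the paper.

        import numpy as np

        def ekf_predict(x, P, f, F, Q):
            x_pred = f(x)
            P_pred = F @ P @ F.T + Q
            return x_pred, P_pred

        def ekf_update(x, P, z, h, H, R):
            y = z - h(x)                              # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
            x_new = x + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P
            return x_new, P_new

        # Toy 1-D constant-velocity model: state [position, velocity], dt = 1 s.
        F = np.array([[1.0, 1.0], [0.0, 1.0]])
        H = np.array([[1.0, 0.0]])                    # position-only measurement
        x, P = np.zeros(2), np.eye(2)
        Q, R = 0.01 * np.eye(2), np.array([[0.5]])

        for z in [1.0, 2.1, 2.9, 4.2]:                # simulated position fixes
            x, P = ekf_predict(x, P, lambda s: F @ s, F, Q)
            x, P = ekf_update(x, P, np.array([z]), lambda s: H @ s, H, R)
        print("estimated position, velocity:", x)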

  11. An Adaptive Localization System for Outdoor/Indoor Navigation for Autonomous Robots

    DTIC Science & Technology

    2006-04-01

    An adaptive localization system for outdoor/indoor navigation for autonomous robots, by E.B. Pacis, B. Sights, G. Ahuja, G. Kogut, and H.R. Everett. The work demonstrated a series of collaborative behaviors of multiple autonomous robots in a force-protection scenario, with stand-alone sensors detecting intruders.

  12. Gyro and Accelerometer Based Navigation System for a Mobile Autonomous Robot.

    DTIC Science & Technology

    1985-12-02

    Gyro and Accelerometer Based Navigation System for a Mobile Autonomous Robot. Thesis by Roland J. Bloom and William J. Ramey, Jr., Captains, USAF, presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University.

  13. Autonomous Flight Safety System September 27, 2005, Aircraft Test

    NASA Technical Reports Server (NTRS)

    Simpson, James C.

    2005-01-01

    This report describes the first aircraft test of the Autonomous Flight Safety System (AFSS). The test was conducted on September 27, 2005, near Kennedy Space Center (KSC) using a privately-owned single-engine plane and evaluated the performance of several basic flight safety rules using real-time data onboard a moving aerial vehicle. This test follows the first road test of AFSS conducted in February 2005 at KSC. AFSS is a joint KSC and Wallops Flight Facility (WFF) project that is in its third phase of development. AFSS is an independent subsystem intended for use with Expendable Launch Vehicles that uses tracking data from redundant onboard sensors to autonomously make flight termination decisions using software-based rules implemented on redundant flight processors. The goals of this project are to increase capabilities by allowing launches from locations that do not have or cannot afford extensive ground-based range safety assets, to decrease range costs, and to decrease reaction time for special situations. The mission rules are configured for each operation by the responsible Range Safety authorities and can be loosely grouped into four major categories: Parameter Threshold Violations, Physical Boundary Violations (present position and instantaneous impact point), Gate Rules (static and dynamic), and a Green-Time Rule. Examples of each of these rules were evaluated during this aircraft test.

  14. JOMAR: Joint Operations with Mobile Autonomous Robots

    DTIC Science & Technology

    2015-12-21

    Reported contributions include: a characterization of Global Positioning System (GPS) noise models in the MaxMixture framework, allowing significant improvements in GPS-aided navigation; and a data-association algorithm with applications to target tracking and computer vision.

  15. Computer-vision-based inspecting system for needle roller bearing

    NASA Astrophysics Data System (ADS)

    Li, Wei; He, Tao; Zhong, Fei; Wu, Qinhua; Zhong, Yuning; Shi, Teiling

    2006-11-01

    A Computer Vision based Inspecting System for Needle Roller Bearings (CVISNRB) is proposed in this paper. Its technical characteristics, main functions, and operating principle are also introduced. The CVISNRB is composed of a mechanical transmission and automatic feeding system, an imaging system, software algorithms, an automatic selection system for inspected bearings, a human-computer interface, a pneumatic control system, an electric control system, and so on. Computer vision is introduced into the inspection system for needle roller bearings, which solves the problem of inspecting small needle roller bearings in bearing production enterprises, increases inspection speed, and realizes automatic, non-contact, on-line examination. The CVISNRB can effectively detect missing needles and give an accurate count. The accuracy can reach 99.5%, and the examination speed can reach 15 needle roller bearings per minute. The CVISNRB has operated without malfunction over the past half year of actual use and can meet practical needs.

  16. Users' subjective evaluation of electronic vision enhancement systems.

    PubMed

    Culham, Louise E; Chabra, Anthony; Rubin, Gary S

    2009-03-01

    The aims of this study were (1) to elicit users' responses to four electronic head-mounted devices (Jordy, Flipperport, Maxport and NuVision) and (2) to correlate users' opinions with performance. Ten patients with early onset macular disease (EOMD) and 10 with age-related macular disease (AMD) used these electronic vision enhancement systems (EVESs) for a variety of visual tasks. A questionnaire designed in-house and a modified VF-14 were used to evaluate the responses. Following initial experience of the devices in the laboratory, every patient took home two of the four devices for 1 week each. Responses were re-evaluated after this period of home loan. No single EVES stood out as the strong preference for all aspects evaluated. In the laboratory-based appraisal, Flipperport typically received the best overall ratings and the highest score for image quality and ability to magnify, but after home loan there was no significant difference between devices. Comfort of the device, although important, was not predictive of rating once magnification had been taken into account. For actual performance, a threshold effect was seen whereby ratings increased as reading speed improved up to 60 words per minute. Newly diagnosed patients responded most positively to EVESs, but otherwise users' opinions could not be predicted by age, gender, diagnosis or previous CCTV experience. User feedback is essential in our quest to understand the benefits and shortcomings of EVESs. Such information should help guide both prescribing and future development of low vision devices.

  17. Low Temperature Shape Memory Alloys for Adaptive, Autonomous Systems Project

    NASA Technical Reports Server (NTRS)

    Falker, John; Zeitlin, Nancy; Williams, Martha; Benafan, Othmane; Fesmire, James

    2015-01-01

    The objective of this joint activity between Kennedy Space Center (KSC) and Glenn Research Center (GRC) is to develop and evaluate the applicability of 2-way SMAs in proof-of-concept, low-temperature adaptive autonomous systems. As part of this low technology readiness level (TRL) activity, we will develop and train novel low-temperature, 2-way shape memory alloys (SMAs) with actuation temperatures ranging from 0 C to 150 C. These experimental alloys will also be preliminarily tested to evaluate their performance parameters and transformation (actuation) temperatures in low-temperature or cryogenic adaptive proof-of-concept systems. The challenge will be in the development, design, and training of the alloys for 2-way actuation at those temperatures.

  18. High accuracy autonomous navigation using the global positioning system (GPS)

    NASA Technical Reports Server (NTRS)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to improving the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be increased to 2 m if corrections are provided by the GPS wide area augmentation system.

  19. Looking inside the Ocean: Toward an Autonomous Imaging System for Monitoring Gelatinous Zooplankton

    PubMed Central

    Corgnati, Lorenzo; Marini, Simone; Mazzei, Luca; Ottaviani, Ennio; Aliani, Stefano; Conversi, Alessandra; Griffa, Annalisa

    2016-01-01

    Marine plankton abundance and dynamics in the open and interior ocean are still largely unknown. Knowledge of gelatinous zooplankton distribution is especially challenging, because this type of plankton has a very fragile structure and cannot be directly sampled using traditional net-based techniques. To overcome this shortcoming, Computer Vision techniques can be successfully used for the automatic monitoring of this group. This paper presents the GUARD1 imaging system, a low-cost stand-alone instrument for underwater image acquisition and recognition of gelatinous zooplankton, and discusses the performance of three different methodologies, Tikhonov Regularization, Support Vector Machines and Genetic Programming, that have been compared in order to select the one to be run onboard the system for the automatic recognition of gelatinous zooplankton. The performance comparison results highlight the high accuracy of the three methods in gelatinous zooplankton identification, showing their good capability in robustly selecting relevant features. In particular, the Genetic Programming technique achieves the same performance as the other two methods by using a smaller set of features, thus being the most efficient in avoiding computationally consuming preprocessing stages, which is a crucial requirement for running on an autonomous imaging system designed for long-lasting deployments, like the GUARD1. The Genetic Programming algorithm has been installed onboard the system, which has been operationally tested in a two-month survey in the Ligurian Sea, providing satisfactory results in terms of monitoring and recognition performance. PMID:27983638

  20. Advanced data management design for autonomous telerobotic systems in space using spaceborne symbolic processors

    NASA Technical Reports Server (NTRS)

    Goforth, Andre

    1987-01-01

    The use of computers in autonomous telerobots is reaching the point where advanced distributed processing concepts and techniques are needed to support the functioning of Space Station era telerobotic systems. Three major issues that have an impact on the design of data management functions in a telerobot are covered. The paper also presents a design concept that incorporates an intelligent systems manager (ISM) running on a spaceborne symbolic processor (SSP) to address these issues. The first issue is the support of a system-wide control architecture or control philosophy. Salient features of two candidates are presented that impose constraints on data management design. The second issue is the role of data management in terms of system integration. This refers to providing shared or coordinated data processing and storage resources to a variety of telerobotic components such as vision, mechanical sensing, real-time coordinated multiple limb and end effector control, and planning and reasoning. The third issue is hardware that supports symbolic processing in conjunction with standard data I/O and numeric processing. An SSP that is currently seen to be technologically feasible and is being developed is described and used as a baseline in the design concept.

  1. Looking inside the Ocean: Toward an Autonomous Imaging System for Monitoring Gelatinous Zooplankton.

    PubMed

    Corgnati, Lorenzo; Marini, Simone; Mazzei, Luca; Ottaviani, Ennio; Aliani, Stefano; Conversi, Alessandra; Griffa, Annalisa

    2016-12-14

    Marine plankton abundance and dynamics in the open and interior ocean are still largely unknown. Knowledge of gelatinous zooplankton distribution is especially challenging, because this type of plankton has a very fragile structure and cannot be directly sampled using traditional net-based techniques. To overcome this shortcoming, Computer Vision techniques can be successfully used for the automatic monitoring of this group. This paper presents the GUARD1 imaging system, a low-cost stand-alone instrument for underwater image acquisition and recognition of gelatinous zooplankton, and discusses the performance of three different methodologies, Tikhonov Regularization, Support Vector Machines and Genetic Programming, that have been compared in order to select the one to be run onboard the system for the automatic recognition of gelatinous zooplankton. The performance comparison results highlight the high accuracy of the three methods in gelatinous zooplankton identification, showing their good capability in robustly selecting relevant features. In particular, the Genetic Programming technique achieves the same performance as the other two methods by using a smaller set of features, thus being the most efficient in avoiding computationally consuming preprocessing stages, which is a crucial requirement for running on an autonomous imaging system designed for long-lasting deployments, like the GUARD1. The Genetic Programming algorithm has been installed onboard the system, which has been operationally tested in a two-month survey in the Ligurian Sea, providing satisfactory results in terms of monitoring and recognition performance.

  2. Low Cost Vision Based Personal Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Amami, M. M.; Smith, M. J.; Kokkas, N.

    2014-03-01

    Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited due to the high cost and dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, with low-cost GNSS and inertial sensors used to provide a bundle adjustment solution with initial values. The system has the potential to be used indoors and outdoors. It has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates with better than 10 cm accuracy.

  3. Bionic Vision-Based Intelligent Power Line Inspection System

    PubMed Central

    Ma, Yunpeng; He, Feijia; Xu, Jinxin

    2017-01-01

    Detecting threats posed by external obstacles to power lines helps ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The human visual attention mechanism is used in this inspection system to detect and track power lines in image sequences according to their shape information, and the binocular visual model is used to calculate the 3D coordinate information of obstacles and power lines. In order to improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system automatically and accurately locates obstacles around power lines, that the inspection system is effective against complex backgrounds, and that no detections are missed under the different conditions tested. PMID:28203269

  4. Bionic Vision-Based Intelligent Power Line Inspection System.

    PubMed

    Li, Qingwu; Ma, Yunpeng; He, Feijia; Xi, Shuya; Xu, Jinxin

    2017-01-01

    Detecting threats posed by external obstacles to power lines helps ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The human visual attention mechanism is used in this inspection system to detect and track power lines in image sequences according to their shape information, and the binocular visual model is used to calculate the 3D coordinate information of obstacles and power lines. In order to improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system automatically and accurately locates obstacles around power lines, that the inspection system is effective against complex backgrounds, and that no detections are missed under the different conditions tested.
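
    The paper's improved matching strategy builds on conventional SURF matching; the Python sketch below shows only that conventional baseline (SURF keypoints, brute-force matching with a ratio test, and stereo depth from disparity), not the authors' algorithm. SURF requires an opencv-contrib build, and the image file names and calibration values are placeholders.

    # Minimal sketch of the conventional SURF matching baseline (not the paper's
    # refined strategy). Assumes a rectified stereo pair and known calibration.
    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(left, None)
    kp2, des2 = surf.detectAndCompute(right, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.7 * n.distance]        # Lowe ratio test

    # For a rectified pair, depth follows from disparity: Z = f * B / d,
    # with focal length f (pixels) and baseline B (metres) from calibration.
    f, B = 1200.0, 0.3                               # hypothetical calibration values
    for m in good[:5]:
        d = kp1[m.queryIdx].pt[0] - kp2[m.trainIdx].pt[0]
        if d > 0:
            print("approx. depth [m]:", f * B / d)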

  5. Regulation of autonomic nervous system in space and magnetic storms

    NASA Astrophysics Data System (ADS)

    Baevsky, R. M.; Petrov, V. M.; Chernikova, A. G.

    Variations in the earth's magnetic field and magnetic storms are known to be a risk factor for the development of cardiovascular disorders. The main "targets" of geomagnetic perturbations are the central nervous system and the neural regulation of vascular tone and heart rate variability. This paper presents data on the effect of geomagnetic fluctuations on the human body in space. The analysis of heart rate variability was used as the research method; it allows evaluation of the state of the sympathetic and parasympathetic parts of the autonomic nervous system and of the activity of the vasomotor center and subcortical neural centers. Heart rate variability data were analyzed for 30 cosmonauts on the 2nd day of space flight on the Soyuz transport spacecraft (32nd orbit). Three groups of cosmonauts were formed: without a magnetic storm (n=9), on a day with a magnetic storm (n=12), and 1-2 days after a magnetic storm (n=9). The present study was the first to demonstrate a specific impact of geomagnetic perturbations on the system of autonomic circulatory control in cosmonauts during space flight. Increased activity of the highest nervous centers was shown for the magnetic-storm groups, and was more pronounced 1-2 days after the storm. Discriminant analysis allowed these three groups to be classified with 88% precision. Canonical variables are suggested as criteria for evaluating the specific and non-specific components of cardiovascular reactions to geomagnetic perturbations. The applied aspect of the findings should be emphasized: they show, in particular, the need to supplement the medical monitoring of cosmonauts with predictions of probable geomagnetic perturbations, so that unfavorable states can be prevented when adverse reactions to geomagnetic perturbations add to the strain placed on regulatory systems during stressful situations (such as work in open space).
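
    The kind of group classification reported above can be illustrated with a short Python sketch using linear discriminant analysis; it is not the authors' analysis, and the heart-rate-variability feature values below are synthetic placeholders.

    # Sketch only: classify HRV records into three magnetic-storm groups with LDA.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    # hypothetical HRV features per cosmonaut: [SDNN, LF power, HF power, LF/HF]
    X = np.vstack([rng.normal(loc=m, scale=1.0, size=(10, 4))
                   for m in ([0, 0, 0, 0], [1, 1, 0, 1], [2, 1, 1, 2])])
    y = np.repeat([0, 1, 2], 10)    # no storm / storm day / 1-2 days after

    lda = LinearDiscriminantAnalysis(n_components=2)
    print("CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())
    canonical = lda.fit(X, y).transform(X)   # canonical variables, as in the study
    print("canonical variables shape:", canonical.shape)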

  6. Calibration of a catadioptric omnidirectional vision system with conic mirror

    NASA Astrophysics Data System (ADS)

    Marcato Junior, J.; Tommaselli, A. M. G.; Moraes, M. V. A.

    2016-03-01

    Omnidirectional vision systems that enable 360° imaging have been widely used in several research areas, including close-range photogrammetry, which allows the accurate 3D measurement of objects. To achieve accurate results in photogrammetric applications, it is necessary to model and calibrate these systems. The major contribution of this paper relates to the rigorous geometric modeling and calibration of a catadioptric, omnidirectional vision system that is composed of a wide-angle lens camera and a conic mirror. The indirect orientation of the omnidirectional images can also be estimated using this rigorous mathematical model. When calibrating the system, misalignment of the conical mirror axis with respect to the camera's optical axis is a critical problem that must be considered in the mathematical model. The interior calibration technique developed in this paper encompasses the following steps: wide-angle camera calibration; conic mirror modeling; and estimation of the transformation parameters between the camera and conic mirror reference systems. The main advantage of the developed technique is that it does not require accurate physical alignment between the camera and conic mirror axes. The exterior orientation is based on the properties of the conic mirror reflection. Experiments were conducted with images collected from a calibration field, and the results verified that the catadioptric omnidirectional system allows for the generation of ground coordinates with high geometric quality, provided that rigorous photogrammetric processes are applied.

  7. [A biotechnical system for diagnosis and treatment of binocular vision impairments].

    PubMed

    Korzhuk, N L; Shcheglova, M V

    2008-01-01

    Automation of the binocular vision biorhythm diagnosis and improvement of the efficacy of treatment of vision impairments are important medical problems. In the authors' opinion, solving these problems requires taking into account the correlation between binocular vision and the electrical activity of the brain. A biotechnical system for the diagnosis and treatment of binocular vision impairments was developed to implement diagnostic and treatment procedures based on the detection of this correlation.

  8. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g., development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit computation of the 3D teat positions. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The obtained results with both TOF and RGBD cameras show the good performance of the proposed system. The best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.
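
    The combination of 2D and 3D information mentioned above can be illustrated with a toy Python sketch: a registered intensity mask is intersected with a depth-range mask from a TOF/RGBD camera. This is not the authors' segmentation, and the arrays, thresholds and working range are made-up placeholders.

    # Toy sketch of combining 2D and 3D cues for candidate teat segmentation.
    import numpy as np

    rng = np.random.default_rng(2)
    depth = rng.uniform(0.2, 1.5, size=(120, 160))   # metres, e.g. from a TOF camera
    gray = rng.integers(0, 256, size=(120, 160))     # registered 2D intensity image

    mask_2d = gray > 180                              # bright foreground candidate
    mask_3d = (depth > 0.4) & (depth < 0.8)           # hypothetical working range
    teat_candidates = mask_2d & mask_3d

    ys, xs = np.nonzero(teat_candidates)
    if xs.size:
        print("candidate centroid (px):", xs.mean(), ys.mean())
        print("candidate mean depth (m):", depth[teat_candidates].mean())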

  9. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    SciTech Connect

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-12-31

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as 'acceptable' or 'suspect'. Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  10. Mission-based guidance system design for autonomous UAVs

    NASA Astrophysics Data System (ADS)

    Moon, Jongki

    The advantages of UAVs in the aviation arena have led to extensive research activities on autonomous technology of UAVs to achieve specific mission objectives. This thesis mainly focuses on the development of a mission-based guidance system. Among various missions expected for future needs, autonomous formation flight (AFF) and obstacle avoidance within safe operation limits are investigated. In the design of an adaptive guidance system for AFF, the leader's information, except position, is assumed to be unknown to the follower. Thus, the only measured information related to the leader is the line-of-sight (LOS) range and angle. Adding an adaptive element with neural networks into the guidance system provides a capability to effectively handle the leader's velocity changes. Therefore, this method can be applied to AFF control systems that use a passive sensing method. In this thesis, an adaptive velocity command guidance system and an adaptive acceleration command guidance system are developed and presented. Since the relative degrees of the LOS range and angle differ depending on the outputs from the guidance system, the architecture of the guidance system changes accordingly. Simulations and flight tests are performed using the Georgia Tech UAV helicopter, the GTMax, to evaluate the proposed guidance systems. The simulation results show that the neural network (NN) based adaptive element can improve the tracking performance by effectively compensating for the effect of unknown dynamics. It has also been shown that the combination of an adaptive velocity command guidance system and the existing GTMax autopilot controller performs better than the combination of an adaptive acceleration command guidance system and the GTMax autopilot controller. The successful flight evaluation using an adaptive velocity command guidance system clearly shows that the adaptive guidance control system is a promising solution for autonomous formation flight of UAVs. In addition, an

  11. Autonomous Mobile Robot System for Monitoring and Control of Penetration during Fixed Pipes Welding

    NASA Astrophysics Data System (ADS)

    Muramatsu, Masahiro; Suga, Yasuo; Mori, Kazuhiro

    In order to obtain sound welded joints in the welding of horizontal fixed pipes, it is important to control the back bead width in the first pass. However, it is difficult to obtain the optimum back bead width, because the proper welding conditions change with welding position. In this paper, in order to fully automate the welding of fixed pipes, a new method is developed to control the back bead width by monitoring the shape and dimensions of the molten pool from the reverse side with an autonomous mobile robot system. The robot has a spherical shape so that it can move along a complex route including curved pipes, elbow joints and so on. It also has a camera to observe the inner surface of the pipe and recognize the route along which the robot moves. The robot moves to the welding point in the pipe and monitors the reverse-side shape of the molten pool during welding. The host computer processes the images of the molten pool acquired by the robot vision system, and calculates the optimum welding conditions to realize adaptive control of welding. As a result of the welding control experiments, the effectiveness of this system for the penetration control of fixed pipes is demonstrated.

  12. Autonomous control systems: applications to remote sensing and image processing

    NASA Astrophysics Data System (ADS)

    Jamshidi, Mohammad

    2001-11-01

    One of the main challenges of any control (or image processing) paradigm is being able to handle complex systems under unforeseen uncertainties. A system may be called complex here if its dimension (order) is too high, its model (if available) is nonlinear and interconnected, and information on the system is so uncertain that classical techniques cannot easily handle the problem. Examples of complex systems are power networks, space robotic colonies, the national air traffic control system, an integrated manufacturing plant, the Hubble Telescope, the International Space Station, etc. Soft computing, a consortium of methodologies such as fuzzy logic, neuro-computing, genetic algorithms and genetic programming, has proven to be a powerful tool for adding autonomy and semi-autonomy to many complex systems. For such systems the size of the soft computing control architecture will be nearly infinite. In this paper new paradigms using soft computing approaches are utilized to design autonomous controllers and image enhancers for a number of application areas. These applications are satellite array formations for synthetic aperture radar interferometry (InSAR) and enhancement of analog and digital images.

  13. The Montana ALE (Autonomous Lunar Excavator) Systems Engineering Report

    NASA Technical Reports Server (NTRS)

    Hull, Bethanne J.

    2012-01-01

    On May 21-26, 2012, the third annual NASA Lunabotics Mining Competition will be held at the Kennedy Space Center in Florida. This event brings together student teams from universities around the world to compete in an engineering challenge. Each team must design, build and operate a robotic excavator that can collect artificial lunar soil and deposit it at a target location. Montana State University, Bozeman, is one of the institutions selected to field a team this year. This paper will summarize the goals of MSU's lunar excavator project, known as the Autonomous Lunar Excavator (ALE), along with the engineering process that the MSU team is using to fulfill these goals, according to NASA's systems engineering guidelines.

  14. Challenges in verification and validation of autonomous systems for space exploration

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Jonsson, Ari

    2005-01-01

    Space exploration applications offer a unique opportunity for the development and deployment of autonomous systems, due to limited communications, large distances, and great expense of direct operation. At the same time, the risk and cost of space missions leads to reluctance to taking on new, complex and difficult-to-understand technology. A key issue in addressing these concerns is the validation of autonomous systems. In recent years, higher-level autonomous systems have been applied in space applications. In this presentation, we will highlight those autonomous systems, and discuss issues in validating these systems. We will then look to future demands on validating autonomous systems for space, identify promising technologies and open issues.

  15. Vision-Based People Detection System for Heavy Machine Applications

    PubMed Central

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-01

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838
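
    The detectors adapted in this work build on standard people-detection baselines; the following Python sketch shows only the stock OpenCV HOG pedestrian detector on a single frame, not the paper's fisheye-specific adaptation or the LiDAR fusion stage. The input file name is a placeholder and the frame is assumed to be already undistorted.

    # Baseline sketch only: OpenCV's default HOG people detector.
    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frame = cv2.imread("frame.png")
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.png", frame)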

  16. Scene interpretation module for an active vision system

    NASA Astrophysics Data System (ADS)

    Remagnino, P.; Matas, J.; Illingworth, John; Kittler, Josef

    1993-08-01

    In this paper an implementation of a high level symbolic scene interpreter for an active vision system is considered. The scene interpretation module uses low level image processing and feature extraction results to achieve object recognition and to build up a 3D environment map. The module is structured to exploit spatio-temporal context provided by existing partial world interpretations and has spatial reasoning to direct gaze control and thereby achieve efficient and robust processing using spatial focus of attention. The system builds and maintains an awareness of an environment which is far larger than a single camera view. Experiments on image sequences have shown that the system can: establish its position and orientation in a partially known environment, track simple moving objects such as cups and boxes, temporally integrate recognition results to establish or forget object presence, and utilize spatial focus of attention to achieve efficient and robust object recognition. The system has been extensively tested using images from a single steerable camera viewing a simple table top scene containing box and cylinder-like objects. Work is currently progressing to further develop its competences and interface it with the Surrey active stereo vision head, GETAFIX.

  17. Codesign Environment for Computer Vision Hw/Sw Systems

    NASA Astrophysics Data System (ADS)

    Toledo, Ana; Cuenca, Sergio; Suardíaz, Juan

    2006-10-01

    In this paper we present a novel codesign environment conceived especially for computer vision hybrid systems. This setting is based on the Mathworks Simulink and Xilinx System Generator tools and comprises the following: an incremental codesign flow, diverse libraries of virtual components with three levels of description (high level, hardware and software), semi-automatic tools to help in the partitioning of the system, and a methodology for building new library components. The use of high-level libraries allows for the development of systems without the need for exhaustive knowledge of the actual architecture or special skills in hardware description languages. This enables a smooth incorporation of reconfigurable technologies into image processing systems, which are generally developed by engineers who are not closely familiar with hardware design disciplines.

  18. MOBLAB: a mobile laboratory for testing real-time vision-based systems in path monitoring

    NASA Astrophysics Data System (ADS)

    Cumani, Aldo; Denasi, Sandra; Grattoni, Paolo; Guiducci, Antonio; Pettiti, Giuseppe; Quaglia, Giorgio

    1995-01-01

    In the framework of the EUREKA PROMETHEUS European Project, a Mobile Laboratory (MOBLAB) has been equipped for studying, implementing and testing real-time algorithms which monitor the path of a vehicle moving on roads. Its goal is the evaluation of systems suitable to map the position of the vehicle within the environment where it moves, to detect obstacles, to estimate motion, to plan the path and to warn the driver about unsafe conditions. MOBLAB has been built with the financial support of the National Research Council and will be shared with teams working in the PROMETHEUS Project. It consists of a van equipped with an autonomous power supply, a real-time image processing system, workstations and PCs, B/W and color TV cameras, and TV equipment. This paper describes the laboratory outline and presents the computer vision system and the strategies that have been studied and are being developed at I.E.N. `Galileo Ferraris'. The system is based on several tasks that cooperate to integrate information gathered from different processes and sources of knowledge. Some preliminary results are presented showing the performances of the system.

  19. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    SciTech Connect

    Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind

    2016-01-01

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.

  20. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed and it is noted that tracking speed is in the 50-75 pixels/s range.

  1. Autonomous self-powered structural health monitoring system

    NASA Astrophysics Data System (ADS)

    Qing, Xinlin P.; Anton, Steven R.; Zhang, David; Kumar, Amrita; Inman, Daniel J.; Ooi, Teng K.

    2010-03-01

    Structural health monitoring technology is perceived as a revolutionary method of determining the integrity of structures involving the use of multidisciplinary fields including sensors, materials, system integration, signal processing and interpretation. The core of the technology is the development of self-sufficient systems for the continuous monitoring, inspection and damage detection of structures with minimal labor involvement. A major drawback of the existing technology for real-time structural health monitoring is the requirement for external electrical power input. For some applications, such as missiles or combat vehicles in the field, this factor can drastically limit the use of the technology. Having an on-board electrical power source that is independent of the vehicle power system can greatly enhance the SHM system and make it a completely self-contained system. In this paper, using the SMART layer technology as a basis, an Autonomous Self-powered (ASP) Structural Health Monitoring (SHM) system has been developed to solve the major challenge facing the transition of SHM systems into field applications. The architecture of the self-powered SHM system was first designed. There are four major components included in the SHM system: SMART Layer with sensor network, low power consumption diagnostic hardware, rechargeable battery with energy harvesting device, and host computer with supporting software. A prototype of the integrated self-powered active SHM system was built for performance and functionality testing. Results from the evaluation tests demonstrated that a fully charged battery system is capable of powering the SHM system for active scanning up to 10 hours.

  2. Autonomic Nervous System in Viral Myocarditis: Pathophysiology and Therapy.

    PubMed

    Cheng, Zheng; Li-Sha, Ge; Yue-Chun, Li

    2016-01-01

    Myocarditis, which is caused by viral infection, can lead to heart failure, malignant arrhythmias, and even sudden cardiac death in young patients. It is also one of the most important causes of dilated cardiomyopathy worldwide. Although remarkable advances in diagnosis and understanding of pathophysiological mechanisms of viral myocarditis have been gained during recent years, no standard treatment strategies have been defined as yet. Fortunately, recent studies present some evidence that immunomodulating therapy is effective for myocarditis. The immunomodulatory effect of the autonomic nervous system has raised considerable interest over recent decades. Studying the influence on the inflammation and immune system of the sympathetic and parasympathetic nervous systems will not only increase our understanding of the mechanism of disease but could also lead to the identification of potential new therapies for viral myocarditis. Studies have shown that the immunomodulating effect of the sympathetic and parasympathetic nervous system is realized by the release of neurotransmitters to their corresponding receptors (catecholamine for α or β adrenergic receptor, acetylcholine for α7 nicotinic acetylcholinergic receptor). This review will discuss the current knowledge of the roles of both the sympathetic and parasympathetic nervous system in inflammation, with a special focus on their roles in viral myocarditis.

  3. ANTS: Exploring the Solar System with an Autonomous Nanotechnology Swarm

    NASA Technical Reports Server (NTRS)

    Clark, P. E.; Curtis, S.; Rilee, M.; Truszkowski, W.; Marr, G.

    2002-01-01

    ANTS (Autonomous Nano-Technology Swarm), a NASA advanced mission concept, calls for a large (1000 member) swarm of pico-class (1 kg) totally autonomous spacecraft to prospect the asteroid belt. Additional information is contained in the original extended abstract.

  4. R-MASTIF: robotic mobile autonomous system for threat interrogation and object fetch

    NASA Astrophysics Data System (ADS)

    Das, Aveek; Thakur, Dinesh; Keller, James; Kuthirummal, Sujit; Kira, Zsolt; Pivtoraiko, Mihail

    2013-01-01

    Autonomous robotic "fetch" operation, where a robot is shown a novel object and then asked to locate it in the field, retrieve it and bring it back to the human operator, is a challenging problem that is of interest to the military. The CANINE competition presented a forum for several research teams to tackle this challenge using state of the art in robotics technology. The SRI-UPenn team fielded a modified Segway RMP 200 robot with multiple cameras and lidars. We implemented a unique computer vision based approach for textureless colored object training and detection to robustly locate previously unseen objects out to 15 meters on moderately flat terrain. We integrated SRI's state of the art Visual Odometry for GPS-denied localization on our robot platform. We also designed a unique scooping mechanism which allowed retrieval of up to basketball sized objects with a reciprocating four-bar linkage mechanism. Further, all software, including a novel target localization and exploration algorithm was developed using ROS (Robot Operating System) which is open source and well adopted by the robotics community. We present a description of the system, our key technical contributions and experimental results.
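
    A common way to detect a textureless colored object is color thresholding followed by contour extraction; the Python sketch below illustrates that generic idea only, not the SRI-UPenn team's actual detector, and the color range and file name are placeholders.

    # Illustrative sketch: HSV color thresholding plus largest-contour selection.
    import cv2
    import numpy as np

    frame = cv2.imread("field.png")
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    lower, upper = np.array([100, 80, 50]), np.array([130, 255, 255])  # e.g. a blue object
    mask = cv2.inRange(hsv, lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(c)
        print("object bounding box:", x, y, w, h)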

  5. Research into the Architecture of CAD Based Robot Vision Systems

    DTIC Science & Technology

    1988-02-09

    Vision and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the...Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge

  6. Laser rangefinders for autonomous intelligent cruise control systems

    NASA Astrophysics Data System (ADS)

    Journet, Bernard A.; Bazin, Gaelle

    1998-01-01

    The purpose of this paper is to show for what kinds of applications laser rangefinders can be used within Autonomous Intelligent Cruise Control systems. Even though laser systems offer good performance, the safety and technical constraints are very restrictive. Since the system is used outdoors, the emitted average output power must respect the rather low level of the 1A class. Obstacle detection and collision avoidance require a 200-meter range. Moreover, bad weather conditions, like rain or fog, are disastrous. We have conducted measurements on a laser rangefinder using different targets and at different distances. We can infer that, except for cooperative targets, low-power laser rangefinders are not powerful enough for long-distance measurement. Radars, like 77 GHz systems, are better adapted to such cases. But for short-distance measurement, with a range around 10 meters and a minimum distance around twenty centimeters, laser rangefinders are really useful, with good resolution and rather low cost. Applications include following white lines on the road (the target being easily cooperative) and detecting vehicles in the vicinity, which means car convoy traffic control or parking assistance (the target surface being indifferent at short distances).

  7. Computer vision system for three-dimensional inspection

    NASA Astrophysics Data System (ADS)

    Penafiel, Francisco; Fernandez, Luis; Campoy, Pascual; Aracil, Rafael

    1994-11-01

    In the manufacturing process certain workpieces are inspected for dimensional measurement using sophisticated quality control techniques. During the operation phase, these parts are deformed due to the high temperatures involved in the process. The evolution of the workpieces' structure is reflected in their dimensional modification. This evolution can be measured with a set of dimensional parameters. In this paper, a three-dimensional automatic inspection of these parts is proposed. The aim is to measure certain workpiece features through 3D control methods using directional lighting and a computer artificial vision system. The results of this measuring must be compared with the parameters obtained after the manufacturing process in order to determine the degree of deformation of the workpiece and decide whether it is still usable or not. Workpieces outside a predetermined specification range must be discarded and replaced by new ones. The advantage of artificial vision methods is based on the fact that there is no need to touch the object under inspection. This makes their use feasible in hazardous environments not suitable for human beings. A system has been developed and applied to the inspection of fuel assemblies in nuclear power plants. Such a system has been implemented in a very high-radiation environment and operates in underwater conditions. The physical dimensions of a nuclear fuel assembly are modified after its operation in a nuclear power plant in relation to the original dimensions after its manufacturing. The whole system (camera, mechanical and illumination systems and the radioactive fuel assembly) is submerged in water to minimize radiation effects and is remotely controlled by human intervention. The developed system has to inspect accurately a set of measures on the fuel assembly surface such as length, twists, arching, etc. The present project called SICOM (nuclear fuel assembly inspection system) is included into the R

  8. Vision-Based SLAM System for Unmanned Aerial Vehicles

    PubMed Central

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-01-01

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy. PMID:26999131

  9. Vision-Based SLAM System for Unmanned Aerial Vehicles.

    PubMed

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-03-15

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.
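
    The Extended Kalman Filter fusion described above can be illustrated with a much-simplified Python sketch: a constant-velocity state with a GPS position update. This is not the paper's estimator (it omits orientation, the monocular camera and the landmark map), and all noise values are assumptions.

    # Highly simplified EKF sketch: state [x, y, vx, vy], GPS position update.
    import numpy as np

    dt = 0.1
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)      # GPS measures position only
    Q = np.eye(4) * 0.01                            # process noise (assumed)
    R = np.eye(2) * 4.0                             # GPS noise, metres^2 (assumed)

    x = np.zeros(4)
    P = np.eye(4)

    def ekf_step(x, P, z):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with GPS fix z = [x_meas, y_meas]
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P

    x, P = ekf_step(x, P, np.array([1.0, 2.0]))
    print("state estimate:", x)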

  10. Machine vision system for the control of tunnel boring machines

    NASA Astrophysics Data System (ADS)

    Habacher, Michael; O'Leary, Paul; Harker, Matthew; Golser, Johannes

    2013-03-01

    This paper presents a machine vision system for the control of dual-shield Tunnel Boring Machines. The system consists of a camera with ultra bright LED illumination and a target system consisting of multiple retro-reflectors. The camera mounted on the gripper shield measures the relative position and orientation of the target which is mounted on the cutting shield. In this manner the position of the cutting shield relative to the gripper shield is determined. Morphological operators are used to detect the retro-reflectors in the image and a covariance optimized circle fit is used to determine the center point of each reflector. A graph matching algorithm is used to ensure a robust matching of the constellation of the observed target with the ideal target geometry.
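
    The detection stage described above (morphological cleanup of the bright retro-reflectors followed by a circle fit per blob) can be sketched in a few lines of Python; this is not the authors' code, a simple minimum enclosing circle stands in for their covariance-optimized circle fit, and the threshold and file name are placeholders.

    # Sketch: threshold bright reflectors, clean up morphologically, fit circles.
    import cv2
    import numpy as np

    img = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)
    _, bright = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)
    bright = cv2.morphologyEx(bright, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        (cx, cy), r = cv2.minEnclosingCircle(c)
        if r > 2:                       # discard tiny speckles
            centers.append((cx, cy, r))
    print("reflector centers (x, y, r):", centers)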

  11. A Portable Stereo Vision System for Whole Body Surface Imaging.

    PubMed

    Yu, Wurong; Xu, Bugao

    2010-04-01

    This paper presents a whole body surface imaging system based on stereo vision technology. We have adopted a compact and economical configuration which involves only four stereo units to image the frontal and rear sides of the body. The success of the system depends on a stereo matching process that can effectively segment the body from the background in addition to recovering sufficient geometric details. For this purpose, we have developed a novel sub-pixel, dense stereo matching algorithm which includes two major phases. In the first phase, the foreground is accurately segmented with the help of a predefined virtual interface in the disparity space image, and a coarse disparity map is generated with block matching. In the second phase, local least squares matching is performed in combination with global optimization within a regularization framework, so as to ensure both accuracy and reliability. Our experimental results show that the system can realistically capture smooth and natural whole body shapes with high accuracy.
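
    The first, coarse phase of the matching process described above is block matching; the Python sketch below uses OpenCV's StereoBM as a generic stand-in and omits the paper's virtual-interface segmentation and least-squares refinement. Image names and the disparity threshold are placeholders.

    # Sketch of coarse block matching on a rectified stereo pair.
    import cv2

    left = cv2.imread("front_left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("front_right.png", cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right)          # fixed-point, scaled by 16
    disparity = disparity.astype("float32") / 16.0

    # Crude foreground segmentation: keep pixels closer than a disparity threshold,
    # loosely mimicking the "virtual interface" idea.
    foreground = disparity > 8.0
    print("foreground pixel count:", int(foreground.sum()))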

  12. Categorisation through evidence accumulation in an active vision system

    NASA Astrophysics Data System (ADS)

    Mirolli, Marco; Ferrauto, Tomassino; Nolfi, Stefano

    2010-12-01

    In this paper, we present an artificial vision system that is trained with a genetic algorithm for categorising five different kinds of images (letters) of different sizes. The system, which has a limited field of view, can move its eye so as to explore the images visually. The analysis of the system at the end of the training process indicates that correct categorisation is achieved by (1) exploiting sensory-motor coordination so as to experience stimuli that facilitate discrimination, and (2) integrating perceptual and/or motor information over time through a process of accumulation of partially conflicting evidence. We discuss our results with respect to the possible different strategies for categorisation and to the possible roles that action can play in perception.

  13. Lagrangian-Hamiltonian unified formalism for autonomous higher order dynamical systems

    NASA Astrophysics Data System (ADS)

    Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso

    2011-09-01

    The Lagrangian-Hamiltonian unified formalism of Skinner and Rusk was originally stated for autonomous dynamical systems in classical mechanics. It has been generalized for non-autonomous first-order mechanical systems, as well as for first-order and higher order field theories. However, a complete generalization to higher order mechanical systems is yet to be described. In this work, after reviewing the natural geometrical setting and the Lagrangian and Hamiltonian formalisms for higher order autonomous mechanical systems, we develop a complete generalization of the Lagrangian-Hamiltonian unified formalism for these kinds of systems, and we use it to analyze some physical models from this new point of view.

  14. Autonomous perception and decision making in cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Sarkar, Soumik

    2011-07-01

    The cyber-physical system (CPS) is a relatively new interdisciplinary technology area that includes the general class of embedded and hybrid systems. CPSs require integration of computation and physical processes that involves the aspects of physical quantities such as time, energy and space during information processing and control. The physical space is the source of information and the cyber space makes use of the generated information to make decisions. This dissertation proposes an overall architecture of autonomous perception-based decision & control of complex cyber-physical systems. Perception involves the recently developed framework of Symbolic Dynamic Filtering for abstraction of physical world in the cyber space. For example, under this framework, sensor observations from a physical entity are discretized temporally and spatially to generate blocks of symbols, also called words that form a language. A grammar of a language is the set of rules that determine the relationships among words to build sentences. Subsequently, a physical system is conjectured to be a linguistic source that is capable of generating a specific language. The proposed technology is validated on various (experimental and simulated) case studies that include health monitoring of aircraft gas turbine engines, detection and estimation of fatigue damage in polycrystalline alloys, and parameter identification. Control of complex cyber-physical systems involve distributed sensing, computation, control as well as complexity analysis. A novel statistical mechanics-inspired complexity analysis approach is proposed in this dissertation. In such a scenario of networked physical systems, the distribution of physical entities determines the underlying network topology and the interaction among the entities forms the abstract cyber space. It is envisioned that the general contributions, made in this dissertation, will be useful for potential application areas such as smart power grids and
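
    The symbolization step summarized above (discretizing sensor observations into a symbol alphabet whose sequences form words) can be illustrated with a toy Python sketch in the spirit of Symbolic Dynamic Filtering; it is not the dissertation's implementation, and the signal and alphabet size are arbitrary.

    # Toy sketch: partition a 1-D signal into symbols and estimate the
    # symbol transition matrix.
    import numpy as np

    rng = np.random.default_rng(3)
    signal = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.normal(size=500)

    n_symbols = 4
    edges = np.quantile(signal, np.linspace(0, 1, n_symbols + 1)[1:-1])
    symbols = np.digitize(signal, edges)             # values in 0..n_symbols-1

    T = np.zeros((n_symbols, n_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):
        T[a, b] += 1
    T = T / T.sum(axis=1, keepdims=True)             # row-stochastic transition matrix
    print(np.round(T, 2))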

  15. Bio-inspired vision

    NASA Astrophysics Data System (ADS)

    Posch, C.

    2012-01-01

    Nature still outperforms the most powerful computers in routine functions involving perception, sensing and actuation like vision, audition, and motion control, and is, most strikingly, orders of magnitude more energy-efficient than its artificial competitors. The reasons for the superior performance of biological systems are subject to diverse investigations, but it is clear that the form of hardware and the style of computation in nervous systems are fundamentally different from what is used in artificial synchronous information processing systems. Very generally speaking, biological neural systems rely on a large number of relatively simple, slow and unreliable processing elements and obtain performance and robustness from a massively parallel principle of operation and a high level of redundancy where the failure of single elements usually does not induce any observable system performance degradation. In the late 1980s, Carver Mead demonstrated that silicon VLSI technology can be employed in implementing "neuromorphic" circuits that mimic neural functions and fabricating building blocks that work like their biological role models. Neuromorphic systems, as the biological systems they model, are adaptive, fault-tolerant and scalable, and process information using energy-efficient, asynchronous, event-driven methods. In this paper, some basics of neuromorphic electronic engineering and its impact on recent developments in optical sensing and artificial vision are presented. It is demonstrated that bio-inspired vision systems have the potential to outperform conventional, frame-based vision acquisition and processing systems in many application fields and to establish new benchmarks in terms of redundancy suppression/data compression, dynamic range, temporal resolution and power efficiency to realize advanced functionality like 3D vision, object tracking, motor control, visual feedback loops, etc. in real-time. It is argued that future artificial vision systems

  16. DualTrust: A Distributed Trust Model for Swarm-Based Autonomic Computing Systems

    SciTech Connect

    Maiden, Wendy M.; Dionysiou, Ioanna; Frincke, Deborah A.; Fink, Glenn A.; Bakken, David E.

    2011-02-01

    For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, trust management is important for the acceptance of the mobile agent sensors and to protect the system from malicious behavior by insiders and entities that have penetrated network defenses. This paper examines the trust relationships, evidence, and decisions in a representative system and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. We then propose the DualTrust conceptual trust model. By addressing the autonomic manager’s bi-directional primary relationships in the ACS architecture, DualTrust is able to monitor the trustworthiness of the autonomic managers, protect the sensor swarm in a scalable manner, and provide global trust awareness for the orchestrating autonomic manager.

  17. An Approach to Autonomous Control for Space Nuclear Power Systems

    SciTech Connect

    Wood, Richard Thomas; Upadhyaya, Belle R.

    2011-01-01

    Under Project Prometheus, the National Aeronautics and Space Administration (NASA) investigated deep space missions that would utilize space nuclear power systems (SNPSs) to provide energy for propulsion and spacecraft power. The initial study involved the Jupiter Icy Moons Orbiter (JIMO), which was proposed to conduct in-depth studies of three Jovian moons. Current radioisotope thermoelectric generator (RTG) and solar power systems cannot meet expected mission power demands, which include propulsion, scientific instrument packages, and communications. Historically, RTGs have provided long-lived, highly reliable, low-power-level systems. Solar power systems can provide much greater levels of power, but power density levels decrease dramatically at approximately 1.5 astronomical units (AU) and beyond. Alternatively, an SNPS can supply high-sustained power for space applications that is both reliable and mass efficient. Terrestrial nuclear reactors employ varying degrees of human control and decision-making for operations and benefit from periodic human interaction for maintenance. In contrast, the control system of an SNPS must be able to provide continuous operation for the mission duration with limited immediate human interaction and no opportunity for hardware maintenance or sensor calibration. In effect, the SNPS control system must be able to independently operate the power plant while maintaining power production even when subject to off-normal events and component failure. This capability is critical because it will not be possible to rely upon continuous, immediate human interaction for control due to communications delays and periods of planetary occlusion. In addition, uncertainties, rare events, and component degradation combine with the aforementioned inaccessibility and unattended operation to pose unique challenges that an SNPS control system must accommodate. Autonomous control is needed to address these challenges and optimize the reactor control design.

  18. The 3-D vision system integrated dexterous hand

    NASA Technical Reports Server (NTRS)

    Luo, Ren C.; Han, Youn-Sik

    1989-01-01

    Most multifingered hands use a tendon mechanism to minimize the size and weight of the hand. Such tendon mechanisms suffer from the problems of stiction and friction of the tendons, resulting in a reduction of control accuracy. A design for a 3-D vision system integrated dexterous hand with motor control is described which overcomes these problems. The proposed hand is composed of three three-jointed grasping fingers with tactile sensors on their tips, a two-jointed eye finger with a cross-shaped laser beam emitting diode in its distal part. The two non-grasping fingers allow 3-D vision capability and can rotate around the hand to see and measure the sides of grasped objects and the task environment. An algorithm that determines the range and local orientation of the contact surface using a cross-shaped laser beam is introduced along with some potential applications. An efficient method for finger force calculation is presented which uses the measured contact surface normals of an object.

  19. Active vision in marmosets: a model system for visual neuroscience.

    PubMed

    Mitchell, Jude F; Reynolds, John H; Miller, Cory T

    2014-01-22

    The common marmoset (Callithrix jacchus), a small-bodied New World primate, offers several advantages to complement vision research in larger primates. Studies in the anesthetized marmoset have detailed the anatomy and physiology of their visual system (Rosa et al., 2009) while studies of auditory and vocal processing have established their utility for awake and behaving neurophysiological investigations (Lu et al., 2001a,b; Eliades and Wang, 2008a,b; Osmanski and Wang, 2011; Remington et al., 2012). However, a critical unknown is whether marmosets can perform visual tasks under head restraint. This has been essential for studies in macaques, enabling both accurate eye tracking and head stabilization for neurophysiology. In one set of experiments we compared the free viewing behavior of head-fixed marmosets to that of macaques, and found that their saccadic behavior is comparable across a number of saccade metrics and that saccades target similar regions of interest including faces. In a second set of experiments we applied behavioral conditioning techniques to determine whether the marmoset could control fixation for liquid reward. Two marmosets could fixate a central point and ignore peripheral flashing stimuli, as needed for receptive field mapping. Both marmosets also performed an orientation discrimination task, exhibiting a saturating psychometric function with reliable performance and shorter reaction times for easier discriminations. These data suggest that the marmoset is a viable model for studies of active vision and its underlying neural mechanisms.

  20. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.

  1. A database/knowledge structure for a robotics vision system

    NASA Technical Reports Server (NTRS)

    Dearholt, D. W.; Gonzales, N. N.

    1987-01-01

    Desirable properties of robotics vision database systems are given, and structures which possess properties appropriate for some aspects of such database systems are examined. Included in the structures discussed is a family of networks in which link membership is determined by measures of proximity between pairs of the entities stored in the database. This type of network is shown to have properties which guarantee that the search for a matching feature vector is monotonic. That is, the database can be searched with no backtracking, if there is a feature vector in the database which matches the feature vector of the external entity which is to be identified. The construction of the database is discussed, and the search procedure is presented. A section on the support provided by the database for description of the decision-making processes and the search path is also included.
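
    The backtrack-free search property described above can be illustrated with a toy Python sketch: a greedy walk over a k-nearest-neighbor proximity graph that always moves to the neighbor closest to the query feature vector. This is only an illustration of the idea, not the paper's database structure, and the stored feature vectors are synthetic.

    # Toy illustration of monotonic, backtrack-free search on a proximity graph.
    import numpy as np

    rng = np.random.default_rng(4)
    features = rng.normal(size=(100, 8))              # stored feature vectors
    k = 5
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    neighbors = np.argsort(d2, axis=1)[:, 1:k + 1]    # k nearest neighbors per node

    def search(query, start=0):
        node = start
        while True:
            cand = neighbors[node]
            best = cand[np.argmin(((features[cand] - query) ** 2).sum(-1))]
            if np.linalg.norm(features[best] - query) >= np.linalg.norm(features[node] - query):
                return node                            # no closer neighbor: stop
            node = best                                # monotonic move toward the query

    query = features[42] + 0.01 * rng.normal(size=8)
    print("matched entity index:", search(query))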

  2. Wearable design issues for electronic vision enhancement systems

    NASA Astrophysics Data System (ADS)

    Dvorak, Joe

    2006-09-01

    As the baby boomer generation ages, visual impairment will overtake a significant portion of the US population. At the same time, more and more of our world is becoming digital. These two trends, coupled with the continuing advances in digital electronics, argue for a rethinking in the design of aids for the visually impaired. This paper discusses design issues for electronic vision enhancement systems (EVES) [R.C. Peterson, J.S. Wolffsohn, M. Rubinstein, et al., Am. J. Ophthalmol. 136 1129 (2003)] that will facilitate their wearability and continuous use. We briefly discuss the factors affecting a person's acceptance of wearable devices. We define the concept of operational inertia which plays an important role in our design of wearable devices and systems. We then discuss how design principles based upon operational inertia can be applied to the design of EVES.

  3. [Formal care systems consequences of a vision on informal caretakers].

    PubMed

    Escuredo Rodríguez, Bibiana

    2006-10-01

    Care for dependent persons falls, fundamentally, on their family members who usually perceive this situation as a problem due to its repercussions on the family group in general and on the health and quality of life for the informal caretaker in particular. The burden which an informal caretaker assumes depends on diverse variables among which the most important are considered to be social assistance and the forms of help which the caretaker has to rely on. At the same time, the resources and help available are determined by the vision which the formal system has for informal caretakers; therefore, it is important that nurses, as caretakers in the formal system, have a clear idea about the situations that are created and that nurses reflect on the alternatives which allow a dependent person to be cared for without forgetting the needs and rights of the caretakers.

  4. Exploration Medical Capability System Engineering Introduction and Vision

    NASA Technical Reports Server (NTRS)

    Mindock, J.; Reilly, J.

    2017-01-01

    Human exploration missions to beyond low Earth orbit destinations such as Mars will require more autonomous capability compared to current low Earth orbit operations. For the medical system, lack of consumable resupply, evacuation opportunities, and real-time ground support are key drivers toward greater autonomy. Recognition of the limited mission and vehicle resources available to carry out exploration missions motivates the Exploration Medical Capability (ExMC) Element's approach to enabling the necessary autonomy. The Element's work must integrate with the overall exploration mission and vehicle design efforts to successfully provide exploration medical capabilities. ExMC is applying systems engineering principles and practices to accomplish its integrative goals. This talk will briefly introduce the discipline of systems engineering and key points in its application to exploration medical capability development. It will elucidate technical medical system needs to be met by the systems engineering work, and the structured and integrative science and engineering approach to satisfying those needs, including the development of shared mental and qualitative models within and external to the human health and performance community. These efforts are underway to ensure relevancy to exploration system maturation and to establish medical system development that is collaborative with vehicle and mission design and engineering efforts.

  5. Multifunctional astronomical self-organizing system of autonomous navigation and orientation for artificial Earth satellites

    NASA Astrophysics Data System (ADS)

    Kuznetsov, V. I.; Danilova, T. V.

    2017-03-01

    We describe the methods and algorithms of a multifunctional astronomical system for the autonomous navigation and orientation of artificial Earth satellites, based on automating a systems approach to the design and programming problems of this subject area.

  6. Autonomous exploration system: Techniques for interpretation of multispectral data

    NASA Technical Reports Server (NTRS)

    Yates, Gigi; Eberlein, Susan

    1989-01-01

    An on-board autonomous exploration system that fuses data from multiple sensors, and makes decisions based on scientific goals is being developed using a series of artificial neural networks. Emphasis is placed on classifying minerals into broad geological categories by analyzing multispectral data from an imaging spectrometer. Artificial neural network architectures are being investigated for pattern matching and feature detection, information extraction, and decision making. As a first step, a stereogrammetry net extracts distance data from two gray scale stereo images. For each distance plane, the output is the probable mineral composition of the region, and a list of spectral features such as peaks, valleys, or plateaus, showing the characteristics of energy absorption and reflection. The classifier net is constructed using a grandmother cell architecture: an input layer of spectral data, an intermediate processor, and an output value. The feature detector is a three-layer feed-forward network that was developed to map input spectra to four geological classes, and will later be expanded to encompass more classes. Results from the classifier and feature detector nets will help to determine the relative importance of the region being examined with regard to current scientific goals of the system. This information is fed into a decision making neural net along with data from other sensors to decide on a plan of activity. A plan may be to examine the region at higher resolution, move closer, employ other sensors, or record an image and transmit it back to Earth.
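
    As a rough illustration of the classifier architecture described above (an input layer of spectral data feeding a small feed-forward network that outputs one of four geological classes), the following Python sketch shows a forward pass with made-up weights and an assumed number of spectral bands; it is not the authors' network.

    ```python
    # Minimal sketch (not the authors' code): a feed-forward net mapping an input
    # spectrum to scores for four hypothetical geological classes.
    import numpy as np

    rng = np.random.default_rng(0)

    N_BANDS = 64      # assumed number of spectral bands
    N_HIDDEN = 16     # assumed hidden-layer size
    N_CLASSES = 4     # four broad geological categories, as in the abstract

    # Randomly initialized weights stand in for trained parameters.
    W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_BANDS))
    b1 = np.zeros(N_HIDDEN)
    W2 = rng.normal(scale=0.1, size=(N_CLASSES, N_HIDDEN))
    b2 = np.zeros(N_CLASSES)

    def classify(spectrum: np.ndarray) -> np.ndarray:
        """Forward pass: spectrum -> hidden layer -> class probabilities."""
        h = np.tanh(W1 @ spectrum + b1)          # hidden representation
        scores = W2 @ h + b2
        exp = np.exp(scores - scores.max())      # softmax over the four classes
        return exp / exp.sum()

    probs = classify(rng.random(N_BANDS))        # a synthetic test spectrum
    print("class probabilities:", np.round(probs, 3))
    ```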

  7. Central Command Architecture for High Order Autonomous Unmanned Systems

    NASA Astrophysics Data System (ADS)

    Bieber, Chad Michael

    This dissertation describes a High-Order Central Command (HOCC) architecture and presents a flight demonstration in which a single user coordinates 4 unmanned fixed-wing aircraft. HOCC decouples the user from control of individual vehicles, eliminating human limits on the size of the system, and uses a non-iterative sequence of algorithms that permits easy estimation of how computational complexity scales. The Hungarian algorithm used to solve a min-sum assignment with a one-task planning horizon becomes the limiting complexity, scaling at O(x³) where x is the larger of the number of vehicles or tasks in the assignment. This method is shown to have the unique property of creating non-intersecting routes, which is used to drastically reduce the computational cost of deconflicting planned routes. Results from several demonstration flights are presented in which a single user commands a system of 4 fixed-wing aircraft. The results confirm that autonomous flight of large numbers of UAVs is a bona fide engineering sub-discipline, one expected to be of interest to engineers in the aviation industry and in other emerging markets.
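
    The min-sum assignment step can be illustrated with SciPy's linear_sum_assignment, which solves this class of problem; the cost matrix below is invented and does not reflect the flight demonstration.

    ```python
    # Illustrative only: a min-sum vehicle-to-task assignment of the kind the
    # abstract attributes to the Hungarian algorithm.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # cost[i, j] = estimated cost (e.g., flight time) for vehicle i to do task j
    cost = np.array([
        [4.0, 9.0, 3.5, 7.2],
        [2.1, 6.3, 8.0, 4.4],
        [5.5, 1.9, 7.7, 6.0],
        [3.3, 4.8, 2.2, 9.1],
    ])

    rows, cols = linear_sum_assignment(cost)     # complexity grows as O(x^3)
    for v, t in zip(rows, cols):
        print(f"vehicle {v} -> task {t} (cost {cost[v, t]})")
    print("total cost:", cost[rows, cols].sum())
    ```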

  8. Systems and methods for autonomously controlling agricultural machinery

    DOEpatents

    Hoskinson, Reed L.; Bingham, Dennis N.; Svoboda, John M.; Hess, J. Richard

    2003-07-08

    Systems and methods for autonomously controlling agricultural machinery such as a grain combine. The operation components of a combine that function to harvest the grain have characteristics that are measured by sensors. For example, the combine speed, the fan speed, and the like can be measured. An important sensor is the grain loss sensor, which may be used to quantify the amount of grain expelled out of the combine. The grain loss sensor utilizes the fluorescence properties of the grain kernels and the plant residue to identify when the expelled plant material contains grain kernels. The sensor data, in combination with historical and current data stored in a database, is used to identify optimum operating conditions that will result in increased crop yield. After the optimum operating conditions are identified, an on-board computer can generate control signals that will adjust the operation of the components identified in the optimum operating conditions. The changes result in less grain loss and improved grain yield. Also, because new data is continually generated by the sensor, the system has the ability to continually learn such that the efficiency of the agricultural machinery is continually improved.
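
    A hedged sketch of the control idea, not the patented implementation: a measured grain-loss value is compared with a target derived from historical data, and a component setting, here a hypothetical fan speed, is nudged accordingly. The gain, limits, and even the sign of the correction would in practice come from the learned database.

    ```python
    # Illustrative proportional correction of one operating parameter based on a
    # grain-loss measurement. All names and values are assumptions for the sketch.
    def adjust_fan_speed(fan_rpm: float, grain_loss: float,
                         target_loss: float = 0.5, gain: float = 25.0,
                         rpm_limits: tuple[float, float] = (600.0, 1400.0)) -> float:
        """Loss above target -> reduce fan speed; below target -> increase it."""
        error = grain_loss - target_loss
        new_rpm = fan_rpm - gain * error           # direction assumed for the example
        return max(rpm_limits[0], min(rpm_limits[1], new_rpm))

    # Example: loss above target, so the controller trims the fan speed slightly.
    print(adjust_fan_speed(fan_rpm=1000.0, grain_loss=0.9))
    ```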

  9. The Use of Autonomous Systems in Emergency Medical Services: Bridging Human Intelligence and Technology

    DTIC Science & Technology

    2015-12-01

    The development of autonomous systems (AS), which are technological systems or processes that either support or replace human decision making, will have a significant impact on public safety services, including EMS. EMS provider organizations must be ...

  10. Stereoscopic Machine-Vision System Using Projected Circles

    NASA Technical Reports Server (NTRS)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles (rovers) on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a
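
    For reference, stereo ranging in a rectified camera pair reduces to Z = fB/d; the short sketch below applies that relation to a feature on a projected circle with assumed focal length and baseline values, and is not taken from the development effort described.

    ```python
    # Minimal stereo-triangulation sketch: range from the disparity of one feature
    # seen by both cameras. Focal length and baseline are assumptions.
    def range_from_disparity(x_left_px: float, x_right_px: float,
                             focal_px: float = 800.0, baseline_m: float = 0.30) -> float:
        """Range Z = f * B / d for a rectified stereo pair (d = disparity in pixels)."""
        disparity = x_left_px - x_right_px
        if disparity <= 0:
            raise ValueError("non-positive disparity: feature at infinity or mismatched")
        return focal_px * baseline_m / disparity

    print(range_from_disparity(412.0, 396.0))   # ~15 m for the assumed geometry
    ```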

  11. Changes of autonomic nervous system function in patients with breath-holding spells treated with iron.

    PubMed

    Orii, Kenji E; Kato, Zenichiro; Osamu, Fukutomi; Funato, Michinori; Kubodera, Uniko; Inoue, Ryosuke; Shimozawa, Nobuyuki; Kondo, Naomi

    2002-05-01

    To evaluate the autonomic nervous system of patients with breath-holding spells after iron treatment, we attempted to determine whether a dysregulation of the autonomic nervous system reflexes exists in children with severe cyanotic breath-holding spells. An electrocardiogram for each subject was recorded for 24 hours in the subject's home and parasympathetic activity was investigated by the fast Fourier transform method. Hematologic data and clinical symptoms of all three patients treated with iron improved and attacks of severe breath-holding spells disappeared. After iron treatment was started, the heart rate variability increased during sleep. It appears that supplementation of iron is effective in improving the dysregulation of autonomic nervous system reflexes.

  12. The autonomic nervous system regulates postprandial hepatic lipid metabolism.

    PubMed

    Bruinstroop, Eveline; la Fleur, Susanne E; Ackermans, Mariette T; Foppen, Ewout; Wortel, Joke; Kooijman, Sander; Berbée, Jimmy F P; Rensen, Patrick C N; Fliers, Eric; Kalsbeek, Andries

    2013-05-15

    The liver is a key organ in controlling glucose and lipid metabolism during feeding and fasting. In addition to hormones and nutrients, inputs from the autonomic nervous system are also involved in fine-tuning hepatic metabolic regulation. Previously, we have shown in rats that during fasting an intact sympathetic innervation of the liver is essential to maintain the secretion of triglycerides by the liver. In the current study, we hypothesized that in the postprandial condition the parasympathetic input to the liver inhibits hepatic VLDL-TG secretion. To test our hypothesis, we determined the effect of selective surgical hepatic denervations on triglyceride metabolism after a meal in male Wistar rats. We report that postprandial plasma triglyceride concentrations were significantly elevated in parasympathetically denervated rats compared with control rats (P = 0.008), and VLDL-TG production tended to be increased (P = 0.066). Sympathetically denervated rats also showed a small rise in postprandial triglyceride concentrations (P = 0.045). On the other hand, in rats fed on a six-meals-a-day schedule for several weeks, a parasympathetic denervation resulted in >70% higher plasma triglycerides during the day (P = 0.001), whereas a sympathetic denervation had no effect. Our results show that abolishing the parasympathetic input to the liver results in increased plasma triglyceride levels during postprandial conditions.

  13. Maternal rearing environment impacts autonomic nervous system activity.

    PubMed

    Bliss-Moreau, Eliza; Moadab, Gilda; Capitanio, John P

    2017-04-03

    While it is now well known that social deprivation during early development permanently perturbs affective responding, accumulating evidence suggests that less severe restriction of the early social environment may also have deleterious effects. In the present report, we evaluate the affective responding of rhesus macaque (Macaca mulatta) infants raised by their mothers in restricted social environments or by their mothers in large social groups by indexing autonomic nervous system activity. Following a 25-hr evaluation of biobehavioral organization, electrocardiogram, and an index of respiration were recorded for 10 min. This allowed for an evaluation of both heart rate and respiratory sinus arrhythmia (RSA), an index of parasympathetic activity, during a challenging situation. Three- to four-month-old infants raised in restricted social environments had significantly higher heart rates and lower RSA as compared to infants raised in unrestricted social environments, consistent with a more potent stress response to the procedure. These results are consistent with mounting evidence that the environment in which individuals are raised has important consequences for affective processing.

  14. Bluefin autonomous underwater vehicles: Programs, systems, and acoustic issues

    NASA Astrophysics Data System (ADS)

    Bondaryk, Joseph E.

    2001-05-01

    Bluefin Robotics Corporation has been manufacturing autonomous underwater vehicles (AUVs) since spinning out of the MIT Sea Grant Laboratory in 1997. Bluefin currently makes three different diameter models of AUVs: the 9, 12, and 21, all based on the same free-flooded architecture and vectored-thrust propulsion design. Auxiliary acoustic systems include acoustic abort, ranging beacons, and acoustic modems. Vehicle navigation is aided by a downward-looking acoustic Doppler velocity logger (DVL). Sonar payloads can include: bottom profiler, side-scan sonar, SAS, forward-looking imagers (DIDSON), as well as horizontal and vertical discrete hydrophone arrays. Acoustic issues that arise include: (1) transmission of sound through the ABS plastic vehicle shell; (2) the impact of vehicle self-noise on data; (3) interoperability of sonars with other acoustic emitters present on and off the vehicle; and (4) the impact of navigation on some acoustic operations like SAS. This talk will illustrate these issues with real data collected on various Bluefin vehicles.

  15. Rapid laser prototyping of valves for microfluidic autonomous systems

    NASA Astrophysics Data System (ADS)

    Mohammed, M. I.; Abraham, E.; Y Desmulliez, M. P.

    2013-03-01

    Capillary forces in microfluidics provide a simple yet elegant means to direct liquids through flow channel networks. The ability to manipulate the flow in a truly automated manner has proven more problematic. The majority of valves require some form of flow control devices, which are manually, mechanically or electrically driven. Most demonstrated capillary systems have been manufactured by photolithography, which, despite its high precision and repeatability, can be labour intensive, requires a clean room environment and the use of fixed photomasks, limiting thereby the agility of the manufacturing process to readily examine alternative designs. In this paper, we describe a robust and rapid CO2 laser manufacturing process and demonstrate a range of capillary-driven microfluidic valve structures embedded within a microfluidic network. The manufacturing process described allows for advanced control and manipulation of fluids such that flow can be halted, triggered and delayed based on simple geometrical alterations to a given microchannel. The rapid prototyping methodology has been employed with PMMA substrates and a complete device has been created, ready for use, within 2-3 h. We believe that this agile manufacturing process can be applied to produce a range of complex autonomous fluidic platforms and allows subsequent designs to be rapidly explored.

  16. Fuzzy Logic Based Autonomous Parallel Parking System with Kalman Filtering

    NASA Astrophysics Data System (ADS)

    Panomruttanarug, Benjamas; Higuchi, Kohji

    This paper presents an emulation of fuzzy logic control schemes for an autonomous parallel parking system in a backward maneuver. There are four infrared sensors sending the distance data to a microcontroller for generating an obstacle-free parking path. Two of them mounted on the front and rear wheels on the parking side are used as the inputs to the fuzzy rules to calculate a proper steering angle while backing. The other two attached to the front and rear ends serve for avoiding collision with other cars along the parking space. At the end of parking processes, the vehicle will be in line with other parked cars and positioned in the middle of the free space. Fuzzy rules are designed based upon a wall following process. Performance of the infrared sensors is improved using Kalman filtering. The design method needs extra information from ultrasonic sensors. Starting from modeling the ultrasonic sensor in 1-D state space forms, one makes use of the infrared sensor as a measurement to update the predicted values. Experimental results demonstrate the effectiveness of sensor improvement.
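
    The sensor-fusion step can be illustrated with a one-dimensional Kalman measurement update, in which a predicted distance (from the ultrasonic model) is corrected by the infrared reading; the noise values below are assumptions, not the paper's tuning.

    ```python
    # Minimal 1-D Kalman measurement update: blend a predicted wall distance with
    # an infrared measurement. All variances are illustrative assumptions.
    def kalman_update(x_pred: float, p_pred: float,
                      z_ir: float, r_ir: float) -> tuple[float, float]:
        """Return the corrected distance estimate and its reduced variance."""
        k = p_pred / (p_pred + r_ir)            # Kalman gain
        x_new = x_pred + k * (z_ir - x_pred)    # corrected distance estimate
        p_new = (1.0 - k) * p_pred              # reduced uncertainty
        return x_new, p_new

    x, p = 0.80, 0.05          # predicted wall distance (m) and its variance
    x, p = kalman_update(x, p, z_ir=0.74, r_ir=0.02)
    print(round(x, 3), round(p, 4))
    ```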

  17. Abnormally Malicious Autonomous Systems and their Internet Connectivity

    SciTech Connect

    Shue, Craig A; Kalafut, Prof. Andrew; Gupta, Prof. Minaxi

    2011-01-01

    While many attacks are distributed across botnets, investigators and network operators have recently targeted malicious networks through high profile autonomous system (AS) de-peerings and network shut-downs. In this paper, we explore whether some ASes indeed are safe havens for malicious activity. We look for ISPs and ASes that exhibit disproportionately high malicious behavior using ten popular blacklists, plus local spam data, and extensive DNS resolutions based on the contents of the blacklists. We find that some ASes have over 80% of their routable IP address space blacklisted. Yet others account for large fractions of blacklisted IP addresses. Several ASes regularly peer with ASes associated with significant malicious activity. We also find that malicious ASes as a whole differ from benign ones in other properties not obviously related to their malicious activities, such as more frequent connectivity changes with their BGP peers. Overall, we conclude that examining malicious activity at AS granularity can unearth networks with lax security or those that harbor cybercrime.
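
    A simplified sketch of one of the measurements described, estimating what fraction of an AS's announced address space appears on blacklists; the prefixes and addresses are invented, and the real study aggregated ten blacklists plus local spam data and DNS resolutions.

    ```python
    # Illustrative only: fraction of an AS's routable space that is blacklisted,
    # approximated by counting individual blacklisted addresses inside its prefixes.
    import ipaddress

    def blacklisted_fraction(prefixes: list[str], blacklisted_ips: list[str]) -> float:
        nets = [ipaddress.ip_network(p) for p in prefixes]
        total = sum(net.num_addresses for net in nets)
        hits = sum(
            1 for ip in blacklisted_ips
            if any(ipaddress.ip_address(ip) in net for net in nets)
        )
        return hits / total if total else 0.0

    print(blacklisted_fraction(["192.0.2.0/24"],
                               ["192.0.2.5", "192.0.2.77", "198.51.100.1"]))
    ```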

  18. Visual tracking in stereo. [by computer vision system

    NASA Technical Reports Server (NTRS)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.
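
    The update rule can be sketched as follows: the error between predicted and observed 2-D feature locations is mapped back to a correction of the internal 3-D model through the pseudo-inverse of an assumed, linearized Jacobian. The numbers are illustrative, not from the original tracker.

    ```python
    # Sketch of one tracker iteration: image-plane error -> pseudo-inverse of the
    # Jacobian -> correction of the internal 3-D state. Values are invented.
    import numpy as np

    state = np.array([1.0, 0.5, 4.0])            # e.g., object position (x, y, z)
    J = np.array([[200.0, 0.0, -50.0],           # assumed Jacobian: d(image)/d(state)
                  [0.0, 200.0, -25.0],
                  [190.0, 5.0, -48.0],
                  [5.0, 195.0, -26.0]])           # two cameras -> four image coordinates

    predicted_uv = J @ state                      # linearized feature prediction
    observed_uv = predicted_uv + np.array([2.0, -1.5, 1.0, -0.5])   # "measured" features

    error_2d = observed_uv - predicted_uv         # error signal in the image plane
    state += np.linalg.pinv(J) @ error_2d         # deduce and apply the 3-D correction
    print(np.round(state, 4))
    ```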

  19. Survey of computer vision in roadway transportation systems

    NASA Astrophysics Data System (ADS)

    Manikoth, Natesh; Loce, Robert; Bernal, Edgar; Wu, Wencheng

    2012-01-01

    There is a world-wide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This conference presentation and publication is brief introduction to the field, and will be followed by an in-depth journal paper that provides more details on the imaging systems and algorithms.

  20. 2020 Vision for Tank Waste Cleanup (One System Integration) - 12506

    SciTech Connect

    Harp, Benton; Charboneau, Stacy; Olds, Erik

    2012-07-01

    The mission of the Department of Energy's Office of River Protection (ORP) is to safely retrieve and treat the 56 million gallons of Hanford's tank waste and close the Tank Farms to protect the Columbia River. The millions of gallons of waste are a by-product of decades of plutonium production. After irradiated fuel rods were taken from the nuclear reactors to the processing facilities at Hanford they were exposed to a series of chemicals designed to dissolve away the rod, which enabled workers to retrieve the plutonium. Once those chemicals were exposed to the fuel rods they became radioactive and extremely hot. They also couldn't be used in this process more than once. Because the chemicals are caustic and extremely hazardous to humans and the environment, underground storage tanks were built to hold these chemicals until a more permanent solution could be found. The Cleanup of Hanford's 56 million gallons of radioactive and chemical waste stored in 177 large underground tanks represents the Department's largest and most complex environmental remediation project. Sixty percent by volume of the nation's high-level radioactive waste is stored in the underground tanks grouped into 18 'tank farms' on Hanford's central plateau. Hanford's mission to safely remove, treat and dispose of this waste includes the construction of a first-of-its-kind Waste Treatment Plant (WTP), ongoing retrieval of waste from single-shell tanks, and building or upgrading the waste feed delivery infrastructure that will deliver the waste to and support operations of the WTP beginning in 2019. Our discussion of the 2020 Vision for Hanford tank waste cleanup will address the significant progress made to date and ongoing activities to manage the operations of the tank farms and WTP as a single system capable of retrieving, delivering, treating and disposing Hanford's tank waste. The initiation of hot operations and subsequent full operations of the WTP are not only dependent upon the successful

  1. A Novel Vision Sensing System for Tomato Quality Detection.

    PubMed

    Srivastava, Satyam; Boyat, Sachin; Sadistap, Shashikant

    2014-01-01

    Producing tomatoes is a daunting task, as the crop is exposed to attacks from various microorganisms. The symptoms of these attacks usually include changes in color, bacterial spots, specks, and sunken areas with concentric rings of different colors on the tomato's outer surface. This paper addresses a vision-sensing based system for tomato quality inspection. A novel approach has been developed for tomato fruit detection and disease detection. The developed system consists of a 12.0-megapixel USB camera module interfaced with an ARM-9 processor. A ZigBee module has been interfaced with the system for wireless transmission from the host system to a PC-based server for further processing. Algorithm development consists of three major steps: preprocessing (noise rejection, segmentation, and scaling), classification and recognition, and automatic disease detection and classification. Tomato samples were collected from the local market, and data acquisition was performed for database preparation and the various processing steps. The developed system can detect as well as classify various diseases in tomato samples. Various pattern recognition and soft computing techniques have been implemented for data analysis as well as for predicting parameters such as tomato shelf life, a quality index based on disease detection and classification, freshness, and maturity index, along with suggestions for the detected diseases. Results are validated against an aroma-sensing technique using the commercial Alpha Mos 3000 system. The overall accuracy, calculated from the extracted results, is around 92%.
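
    As an illustration of the preprocessing and segmentation stage only, the following OpenCV sketch thresholds a synthetic image for dark, lesion-like pixels; the color ranges and the defect measure are assumptions and do not reproduce the paper's algorithms.

    ```python
    # Hedged illustration of HSV segmentation on a synthetic "tomato" image.
    import cv2
    import numpy as np

    # Synthetic tomato: a red field with a dark lesion-like patch.
    img = np.full((120, 120, 3), (30, 30, 200), dtype=np.uint8)   # BGR red
    cv2.circle(img, (60, 60), 12, (20, 40, 60), thickness=-1)     # dark spot

    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    ripe_mask = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))     # red, ripe surface
    dark_mask = cv2.inRange(hsv, (0, 0, 0), (180, 255, 75))       # low-value (dark) pixels

    defect_fraction = cv2.countNonZero(dark_mask) / float(img.shape[0] * img.shape[1])
    print(f"dark/defect fraction: {defect_fraction:.3f}")
    ```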

  2. [Influence of vagal stimulation by deep breathing and sham feeding on autonomic system response in healthy women].

    PubMed

    Furgała, Agata; Mazur, Marcel; Kolasińska-Kloch, Władysława; Thor, Piotr J

    2008-01-01

    Sham feeding (SF), unlike deep breathing (DB), is not a standard method for testing the autonomic nervous system, but it is widely used for parasympathetic stimulation of gastrointestinal secretion and motility. Differences in the vagal autonomic response induced by the two methods result from the different localization of the receptors and of the cortical and spinal autonomic centers.

  3. Security Enhancement of Littoral Combat Ship Class Utilizing an Autonomous Mustering and Pier Monitoring System

    DTIC Science & Technology

    2010-03-01

    The proof-of-concept system used MATLAB for facial detection and recognition, and Golden FTP Server (freeware) ... The proposed solution is an autonomous system utilizing facial recognition software to maintain a muster of the ship's crew, while in parallel monitoring the ...

  4. Autonomous Agents: The Origins and Co-Evolution of Reproducing Molecular Systems

    NASA Technical Reports Server (NTRS)

    Kauffman, Stuart

    1999-01-01

    The central aim of this award concerned an investigation into, and adequate formulation of, the concept of an "autonomous agent." If we consider a bacterium swimming upstream in a glucose gradient, we are willing to say of the bacterium that it is going to get food. That is, we are willing, and do, describe the bacterium as acting on its own behalf in an environment. All free living cells are, in this sense, autonomous agents. But the bacterium is "just" a set of molecules. We define an autonomous agent as a physical system able to act on its own behalf in an environment, then ask, "What must a physical system be to be an autonomous agent?" The tentative definition for a molecular autonomous agent is that it must be self-reproducing and carry out at least one thermodynamic work cycle. The work carried out in this grant involved, among other features, the development of a detailed model of a molecular autonomous agent, and study of the kinetics of this system. In particular, a molecular autonomous agent must, by the above tentative definition, not only reproduce, but must carry out at least one work cycle. I took, as a simple example of a self-reproducing molecular system, the single-stranded DNA hexamer 3'CCGCGG5' which can line up and ligate its two complementary trimers, 5'CCG3' and 5'CGG3'. But the two ligated trimers constitute the same molecular sequence in the 3' to 5' direction as the initial hexamer, hence this system is autocatalytic. On the other hand the above system is not yet an autonomous agent. At the minimum, autonomous agents, as I have defined them, are a new class of chemical reaction network. At a maximum, they may constitute a proper definition of life itself.

  5. Synthetic vision system flight test results and lessons learned

    NASA Technical Reports Server (NTRS)

    Radke, Jeffrey

    1993-01-01

    Honeywell Systems and Research Center developed and demonstrated an active 35 GHz Radar Imaging system as part of the FAA/USAF/Industry sponsored Synthetic Vision System Technology Demonstration (SVSTD) Program. The objectives of this presentation are to provide a general overview of flight test results, a system level perspective that encompasses the efforts of the SVSTD and Augmented Visual Display (AVID) programs, and more importantly, provide the AVID workshop participants with Honeywell's perspective on the lessons that were learned from the SVS flight tests. One objective of the SVSTD program was to explore several known system issues concerning radar imaging technology. The program ultimately resolved some of these issues, left others open, and in fact created several new concerns. In some instances, the interested community has drawn improper conclusions from the program by globally attributing implementation-specific issues to radar imaging technology in general. The motivation for this presentation is therefore to provide AVID researchers with a better understanding of the issues that truly remain open, and to identify the perceived issues that are either resolved or were specific to Honeywell's implementation.

  6. Vision for an Open, Global Greenhouse Gas Information System (GHGIS)

    NASA Astrophysics Data System (ADS)

    Duren, R. M.; Butler, J. H.; Rotman, D.; Ciais, P.; Greenhouse Gas Information System Team

    2010-12-01

    Over the next few years, an increasing number of entities ranging from international, national, and regional governments, to businesses and private land-owners, are likely to become more involved in efforts to limit atmospheric concentrations of greenhouse gases. In such a world, geospatially resolved information about the location, amount, and rate of greenhouse gas (GHG) emissions will be needed, as well as the stocks and flows of all forms of carbon through the earth system. The ability to implement policies that limit GHG concentrations would be enhanced by a global, open, and transparent greenhouse gas information system (GHGIS). An operational and scientifically robust GHGIS would combine ground-based and space-based observations, carbon-cycle modeling, GHG inventories, synthesis analysis, and an extensive data integration and distribution system, to provide information about anthropogenic and natural sources, sinks, and fluxes of greenhouse gases at temporal and spatial scales relevant to decision making. The GHGIS effort was initiated in 2008 as a grassroots inter-agency collaboration intended to identify the needs for such a system, assess the capabilities of current assets, and suggest priorities for future research and development. We will present a vision for an open, global GHGIS including latest analysis of system requirements, critical gaps, and relationship to related efforts at various agencies, the Group on Earth Observations, and the Intergovernmental Panel on Climate Change.

  7. Active vision system integrating fast and slow processes

    NASA Astrophysics Data System (ADS)

    Castrillon-Santana, Modesto; Guerra-Artal, C.; Hernandez-Sosa, J.; Dominguez-Brito, A.; Isern-Gonzalez, J.; Cabrera-Gamez, Jorge; Hernandez-Tejera, F. M.

    1998-10-01

    This paper describes an Active Vision System whose design assumes a distinction between fast (reactive) and slow (background) processes. Fast processes need to operate in cycles with critical timeouts that may affect system stability, while slow processes, though necessary, do not compromise system stability if their execution is delayed. Based on this simple taxonomy, a control architecture has been proposed and a prototype implemented that is able to track people in real time with a robotic head while trying to identify the target. In this system, tracking is considered the reactive part of the system, while person identification is treated as a background task. This demonstrator has been developed using a new-generation DSP (TMS320C80) as a specialized coprocessor to deal with fast processes, and a commercial robotic head with a dedicated DSP-based motor controller. These subsystems are hosted by a standard Pentium-Pro PC running Windows NT where slow processes are executed. The flexibility achieved in the design phase and the preliminary results obtained so far seem to validate the approach followed to integrate time-critical and slow tasks on a heterogeneous hardware platform.
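
    The fast/slow split can be sketched in software terms as a time-critical loop that never blocks and a background task fed through a queue; the Python threads below are a schematic stand-in for the DSP and PC processes of the actual prototype.

    ```python
    # Schematic sketch of the fast/slow taxonomy: a reactive tracking loop that
    # keeps its cadence and a slower identification task that consumes frames
    # whenever it can. Not the original DSP/PC implementation.
    import queue
    import threading
    import time

    frames: "queue.Queue[int]" = queue.Queue(maxsize=4)
    stop = threading.Event()

    def fast_tracking_loop():
        """Reactive process: never blocks; drops work rather than stall."""
        frame_id = 0
        while not stop.is_set():
            frame_id += 1
            # ... compute the target position for frame_id and drive the head ...
            try:
                frames.put_nowait(frame_id)      # hand the frame to the slow process
            except queue.Full:
                pass                             # keep the reactive cycle on time
            time.sleep(0.02)                     # ~50 Hz cycle for the example

    def slow_identification_task():
        """Background process: identification may lag without hurting stability."""
        while not stop.is_set():
            try:
                frame_id = frames.get(timeout=0.1)
            except queue.Empty:
                continue
            time.sleep(0.15)                     # stands in for heavy recognition work
            print(f"identified person in frame {frame_id}")

    threads = [threading.Thread(target=fast_tracking_loop),
               threading.Thread(target=slow_identification_task)]
    for t in threads:
        t.start()
    time.sleep(1.0)
    stop.set()
    for t in threads:
        t.join()
    ```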

  8. 78 FR 34935 - Revisions to Operational Requirements for the Use of Enhanced Flight Vision Systems (EFVS) and to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-11

    ... operators to use an Enhanced Flight Vision System (EFVS) in lieu of natural vision to continue descending... proficiency would be required for operators who use EFVS in lieu of natural vision to descend below decision... zone elevation. Natural vision must be used below 100 feet. Sections 121.651(c), 125.325,...

  9. Central and autonomic system signs with in utero drug exposure

    PubMed Central

    Bada, H; Bauer, C; Shankaran, S; Lester, B; Wright, L; Das, A; Poole, K; Smeriglio, V; Finnegan, L; Maza, P

    2002-01-01

    Aims: To determine risk for central nervous system/autonomic nervous system (CNS/ANS) signs following in utero cocaine and opiate exposure. Methods: A multisite study was designed to determine outcomes of in utero cocaine and opiate exposure. A total of 11 811 maternal/infant dyads were enrolled. Drug exposed (EXP) infants were identified by maternal self report of cocaine or opiate use or by meconium testing. Of 1185 EXP, meconium analysis confirmed exposure in 717 to cocaine (CO) only, 100 to opiates (OP), and 92 to opiates plus cocaine (OP+CO); 276 had insufficient or no meconium to confirm maternal self report. Negative exposure history was confirmed in 7442 by meconium analysis and unconfirmed in 3184. Examiners masked to exposure status, assessed each enrolled infant. Using generalised estimating equations, adjusted odds ratios (OR) and 95% confidence intervals (CI) were estimated for manifesting a constellation of CNS/ANS outcomes and for each sign associated with cocaine and opiate exposure. Results: Prevalence of CNS/ANS signs was low in CO, and highest in OP+CO. Signs were significantly related to one another. After controlling for confounders, CO was associated with increased risk of manifesting a constellation of CNS/ANS outcomes, OR (95% CI): 1.7 (1.2 to 2.2), independent of OP effect, OR (95% CI): 2.8 (2.1 to 3.7). OP+CO had additive effects, OR (95% CI): 4.8 (2.9 to 7.9). Smoking also increased the risk for the constellation of CNS/ANS signs, OR (95% CI) of 1.3 (1.04 to 1.55) and 1.4 (1.2 to 1.6), respectively, for use of less than half a pack per day and half a pack per day or more. Conclusion: Cocaine or opiate exposure increases the risk for manifesting a constellation of CNS/ANS outcomes. PMID:12193516

  10. Rapid Onboard Trajectory Design for Autonomous Spacecraft in Multibody Systems

    NASA Astrophysics Data System (ADS)

    Trumbauer, Eric Michael

    This research develops automated, on-board trajectory planning algorithms in order to support current and new mission concepts. These include orbiter missions to Phobos or Deimos, Outer Planet Moon orbiters, and robotic and crewed missions to small bodies. The challenges stem from the limited on-board computing resources which restrict full trajectory optimization with guaranteed convergence in complex dynamical environments. The approach taken consists of leveraging pre-mission computations to create a large database of pre-computed orbits and arcs. Such a database is used to generate a discrete representation of the dynamics in the form of a directed graph, which acts to index these arcs. This allows the use of graph search algorithms on-board in order to provide good approximate solutions to the path planning problem. Coupled with robust differential correction and optimization techniques, this enables the determination of an efficient path between any boundary conditions with very little time and computing effort. Furthermore, the optimization methods developed here based on sequential convex programming are shown to have provable convergence properties, as well as generating feasible major iterates in case of a system interrupt -- a key requirement for on-board application. The outcome of this project is thus the development of an algorithmic framework which allows the deployment of this approach in a variety of specific mission contexts. Test cases related to missions of interest to NASA and JPL such as a Phobos orbiter and a Near Earth Asteroid interceptor are demonstrated, including the results of an implementation on the RAD750 flight processor. This method fills a gap in the toolbox being developed to create fully autonomous space exploration systems.
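
    The on-board search over a database of pre-computed arcs can be illustrated with a plain Dijkstra search over a toy directed graph; the nodes, arcs, and delta-v costs below are invented for the example and are not mission data.

    ```python
    # Illustrative path planning over an indexed arc database: nodes are orbits or
    # arcs, edges carry an assumed delta-v cost for each pre-computed transfer.
    import heapq

    arc_database = {
        "parking_orbit": [("phasing_orbit", 0.12), ("transfer_A", 0.30)],
        "phasing_orbit": [("transfer_A", 0.10), ("transfer_B", 0.25)],
        "transfer_A":    [("target_orbit", 0.18)],
        "transfer_B":    [("target_orbit", 0.05)],
        "target_orbit":  [],
    }

    def cheapest_path(start: str, goal: str) -> tuple[float, list[str]]:
        """Dijkstra over the arc database; returns (total cost, node sequence)."""
        pq = [(0.0, start, [start])]
        best = {}
        while pq:
            cost, node, path = heapq.heappop(pq)
            if node == goal:
                return cost, path
            if node in best and best[node] <= cost:
                continue
            best[node] = cost
            for nxt, dv in arc_database[node]:
                heapq.heappush(pq, (cost + dv, nxt, path + [nxt]))
        raise ValueError("no path in arc database")

    print(cheapest_path("parking_orbit", "target_orbit"))
    ```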

  11. The Application of Lidar to Synthetic Vision System Integrity

    NASA Technical Reports Server (NTRS)

    Campbell, Jacob L.; UijtdeHaag, Maarten; Vadlamani, Ananth; Young, Steve

    2003-01-01

    One goal in the development of a Synthetic Vision System (SVS) is to create a system that can be certified by the Federal Aviation Administration (FAA) for use at various flight criticality levels. As part of NASA's Aviation Safety Program, Ohio University and NASA Langley have been involved in the research and development of real-time terrain database integrity monitors for SVS. Integrity monitors based on a consistency check with onboard sensors may be required if the inherent terrain database integrity is not sufficient for a particular operation. Sensors such as the radar altimeter and weather radar, which are available on most commercial aircraft, are currently being investigated for use in a real-time terrain database integrity monitor. This paper introduces the concept of using a Light Detection And Ranging (LiDAR) sensor as part of a real-time terrain database integrity monitor. A LiDAR system consists of a scanning laser ranger, an inertial measurement unit (IMU), and a Global Positioning System (GPS) receiver. Information from these three sensors can be combined to generate synthesized terrain models (profiles), which can then be compared to the stored SVS terrain model. This paper discusses an initial performance evaluation of the LiDAR-based terrain database integrity monitor using LiDAR data collected over Reno, Nevada. The paper will address the consistency checking mechanism and test statistic, sensitivity to position errors, and a comparison of the LiDAR-based integrity monitor to a radar altimeter-based integrity monitor.
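
    A minimal sketch of the consistency-check idea, assuming mean absolute elevation disparity as the test statistic (the paper evaluates its own statistic and thresholds): a LiDAR-synthesized profile is compared against the stored database profile and an integrity flag is raised when they disagree.

    ```python
    # Hedged sketch of a terrain-database consistency check. All numbers invented.
    import numpy as np

    def terrain_integrity_ok(lidar_profile: np.ndarray,
                             database_profile: np.ndarray,
                             threshold_m: float = 15.0) -> bool:
        """Assumed test statistic: mean absolute elevation disparity."""
        disparity = np.abs(lidar_profile - database_profile)
        return float(disparity.mean()) < threshold_m

    rng = np.random.default_rng(1)
    truth = 1500.0 + 40.0 * np.sin(np.linspace(0, 3, 200))     # synthetic terrain (m)
    lidar = truth + rng.normal(scale=2.0, size=truth.shape)     # noisy LiDAR profile
    corrupt_db = truth + 60.0                                   # database with a bias error

    print(terrain_integrity_ok(lidar, truth))        # True  -> database consistent
    print(terrain_integrity_ok(lidar, corrupt_db))   # False -> raise an integrity flag
    ```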

  12. An Expert Vision System for Medical Image Segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Shiuh-Yung J.; Lin, Wei-Chung; Chen, Chin-Tu

    1989-05-01

    In this paper, an expert vision system is proposed which integrates knowledge from diverse sources for tomographic image segmentation. The system mimics the reasoning process of an expert to divide a tomographic brain image into semantically meaningful entities. These entities can then be related to the fundamental biomedical processes, both in health and in disease, that are of interest or of importance to health care research. The images under study include those acquired from x-ray CT (Computed Tomography), MRI (Magnetic Resonance Imaging), and PET (Positron Emission Tomography). Given a set of three (correlated) images acquired from these three different modalities at the same slicing level and angle of a human brain, the proposed system performs image segmentation based on (1) knowledge about the characteristics of the three different sensors, (2) knowledge about the anatomic structures of human brains, (3) knowledge about brain diseases, and (4) knowledge about image processing and analysis tools. Since the problem domain is characterized by incomplete and uncertain information, the blackboard architecture, which is an opportunistic reasoning model, is adopted as the framework of the proposed system.
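
    The blackboard style of control can be sketched as a shared data structure plus knowledge sources that fire opportunistically when their preconditions hold; the toy Python below illustrates the pattern only and contains none of the segmentation knowledge of the actual system.

    ```python
    # Toy blackboard control loop: knowledge sources post partial results when
    # their preconditions are met. The "knowledge" is deliberately simplified.
    blackboard = {"modality": "MRI", "regions": None, "labels": None}

    def segmenter(bb):
        if bb["regions"] is None:
            bb["regions"] = ["region_1", "region_2"]        # stand-in for segmentation
            return True
        return False

    def anatomy_labeller(bb):
        if bb["regions"] and bb["labels"] is None:
            bb["labels"] = {r: "gray_matter" for r in bb["regions"]}  # stand-in labels
            return True
        return False

    knowledge_sources = [segmenter, anatomy_labeller]

    # Opportunistic control: keep firing whichever source can contribute.
    progress = True
    while progress:
        progress = any(ks(blackboard) for ks in knowledge_sources)

    print(blackboard)
    ```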

  13. Helmet-mounted pilot night vision systems: Human factors issues

    NASA Technical Reports Server (NTRS)

    Hart, Sandra G.; Brickner, Michael S.

    1989-01-01

    Helmet-mounted displays of infrared imagery (forward-looking infrared (FLIR)) allow helicopter pilots to perform low-level missions at night and in low visibility. However, pilots experience high visual and cognitive workload during these missions, and their performance capabilities may be reduced. Human factors problems inherent in existing systems stem from three primary sources: the nature of thermal imagery; the characteristics of specific FLIR systems; and the difficulty of using FLIR systems for flying and/or visually acquiring and tracking objects in the environment. The pilot night vision system (PNVS) in the Apache AH-64 provides a monochrome, 30 by 40 deg helmet-mounted display of infrared imagery. Thermal imagery is inferior to television imagery in both resolution and contrast ratio. Gray shades represent temperature differences rather than brightness variability, and images undergo significant changes over time. The limited field of view, displacement of the sensor from the pilot's eye position, and monocular presentation of a bright FLIR image (while the other eye remains dark-adapted) are all potential sources of disorientation, limitations in depth and distance estimation, sensations of apparent motion, and difficulties in target and obstacle detection. Insufficient information about human perceptual and performance limitations restrains the ability of human factors specialists to provide significantly improved specifications, training programs, or alternative designs. Additional research is required to determine the most critical problem areas and to propose solutions that consider the human as well as the development of technology.

  14. Space station automation study. Volume 1: Executive summary. Autonomous systems and assembly

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The purpose of the Space Station Automation Study (SSAS) was to develop informed technical guidance for NASA personnel in the use of autonomy and autonomous systems to implement space station functions.

  15. Robo-AO: An Autonomous Laser Adaptive Optics and Science System

    NASA Astrophysics Data System (ADS)

    Baranec, Christoph; Riddle, Reed; Ramaprakash, A. N.; Law, Nicholas; Tendulkar, Shriharsh; Kulkarni, Shrinivas; Dekany, Richard; Bui, Khanh; Davis, Jack; Zolkower, Jeff; Fucik, Jason; Burse, Mahesh; Das, Hillol; Chordia, Pravin; Kasliwal, Mansi; Ofek, Eran; Morton, Timothy; Johnson, John

    2011-07-01

    Robo-AO, a fully autonomous, laser guide star adaptive optics and science system, is being commissioned at Palomar Observatory's 60-inch telescope. Here we discuss the instrument, scientific goals and results of initial on-sky operation.

  16. The analysis of measurement accuracy of the parallel binocular stereo vision system

    NASA Astrophysics Data System (ADS)

    Yu, Huan; Xing, Tingwen; Jia, Xin

    2016-09-01

    A parallel binocular stereo vision system is a special form of binocular vision system. In order to simulate the observation state of the human eyes, the two cameras used to obtain images of the target scene are placed parallel to each other. This paper builds a triangular geometric model, analyzes the structure parameters of the parallel binocular stereo vision system and the correlations between them, and discusses the influence of the baseline distance B between the two cameras, the focal length f, the angle of view ω, and other structural parameters on the measurement accuracy. Matlab software was used to evaluate the error function of the parallel binocular stereo vision system under different structure parameters, and the simulation results showed the range of structure parameters for which the errors are small, thereby improving the accuracy of the parallel binocular stereo vision system.
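
    The dependence of measurement error on the structure parameters follows from Z = fB/d, whose first-order sensitivity to a disparity error is dZ ≈ Z²/(fB)·dd. The short Python check below (the authors used Matlab) evaluates that relation for assumed parameter values.

    ```python
    # Depth-error sensitivity of a parallel stereo rig for assumed f, B, and
    # disparity error. Not the authors' Matlab code.
    import numpy as np

    f_px, baseline_m, disp_err_px = 1200.0, 0.12, 0.25     # assumed system parameters

    depths_m = np.array([1.0, 2.0, 5.0, 10.0])
    disparities = f_px * baseline_m / depths_m               # d = f * B / Z
    # First-order sensitivity: dZ ≈ Z^2 / (f * B) * dd
    depth_err_m = depths_m**2 / (f_px * baseline_m) * disp_err_px

    for z, e in zip(depths_m, depth_err_m):
        print(f"Z = {z:4.1f} m -> depth error ≈ {e:.3f} m")
    ```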

  17. New vision solar system mission study. Final report

    SciTech Connect

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    The vision for the future of the planetary exploration program includes the capability to deliver "constellations" or "fleets" of microspacecraft to a planetary destination. These fleets will act in a coordinated manner to gather science data from a variety of locations on or around the target body, thus providing detailed, global coverage without requiring development of a single large, complex and costly spacecraft. Such constellations of spacecraft, coupled with advanced information processing and visualization techniques and high-rate communications, could provide the basis for development of a "virtual presence" in the solar system. A goal could be the near real-time delivery of planetary images and video to a wide variety of users in the general public and the science community. This will be a major step in making the solar system accessible to the public and will help make solar system exploration a part of the human experience on Earth.

  18. A stochastic perturbation theory for non-autonomous systems

    NASA Astrophysics Data System (ADS)

    Moon, Woosok; Wettlaufer, John

    2014-05-01

    We develop a perturbation theory for a class of first order nonlinear non-autonomous stochastic ordinary differential equations that arise in climate physics. The perturbative procedure produces moments in terms of integral delay equations, whose order by order decay is characterized in a Floquet-like sense. Both additive and multiplicative sources of noise are discussed and the question of how the nature of the noise influences the results is addressed theoretically and numerically. By invoking the Martingale property, we rationalize the transformation of the underlying Stratonovich form of the model to an Ito form, independent of whether the noise is additive or multiplicative. The generality of the analysis is demonstrated by developing it both for a Brownian particle moving in a periodically forced quartic potential, which acts as a simple model of stochastic resonance, as well as for our more complex climate physics model. The validity of the approach is shown by comparison with numerical solutions. The particular climate dynamics problem upon which we focus involves a low-order model for the evolution of Arctic sea ice under the influence of increasing greenhouse gas forcing ΔF0. The deterministic model, developed by Eisenman and Wettlaufer (2009), exhibits several transitions as ΔF0 increases and the stochastic analysis is used to understand the manner in which noise influences these transitions and the stability of the system. Eisenman, I., and J. S. Wettlaufer, 'Nonlinear threshold behavior during the loss of Arctic sea ice,' Proc. Natl. Acad. Sci. USA, 106, 28-32, 2009.
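
    For readers who want a concrete instance of the class of models treated, the sketch below integrates a periodically forced Brownian particle in a quartic potential with an Euler-Maruyama scheme; the parameter values are invented, and the code illustrates the model rather than the perturbation theory itself.

    ```python
    # Hedged numerical sketch: Euler-Maruyama integration of a noisy, periodically
    # forced quartic-potential particle (a toy model of stochastic resonance).
    import numpy as np

    rng = np.random.default_rng(2)
    dt, n_steps = 1e-3, 100_000
    a, b = 1.0, 1.0                      # potential V(x) = -a x^2/2 + b x^4/4
    A, omega = 0.25, 2 * np.pi / 50.0    # weak periodic forcing (assumed values)
    sigma = 0.35                         # additive noise amplitude (assumed)

    x = np.empty(n_steps)
    x[0] = 1.0
    for k in range(n_steps - 1):
        t = k * dt
        drift = a * x[k] - b * x[k]**3 + A * np.cos(omega * t)     # -V'(x) + forcing
        x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

    print("mean, std over the run:", x.mean().round(3), x.std().round(3))
    ```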

  19. A stochastic perturbation theory for non-autonomous systems

    NASA Astrophysics Data System (ADS)

    Moon, W.; Wettlaufer, J. S.

    2013-12-01

    We develop a perturbation theory for a class of first order nonlinear non-autonomous stochastic ordinary differential equations that arise in climate physics. The perturbative procedure produces moments in terms of integral delay equations, whose order by order decay is characterized in a Floquet-like sense. Both additive and multiplicative sources of noise are discussed and the question of how the nature of the noise influences the results is addressed theoretically and numerically. By invoking the Martingale property, we rationalize the transformation of the underlying Stratonovich form of the model to an Itô form, independent of whether the noise is additive or multiplicative. The generality of the analysis is demonstrated by developing it both for a Brownian particle moving in a periodically forced quartic potential, which acts as a simple model of stochastic resonance, as well as for our more complex climate physics model. The validity of the approach is shown by comparison with numerical solutions. The particular climate dynamics problem upon which we focus involves a low-order model for the evolution of Arctic sea ice under the influence of increasing greenhouse gas forcing ΔF0. The deterministic model, developed by Eisenman and Wettlaufer ["Nonlinear threshold behavior during the loss of Arctic sea ice," Proc. Natl. Acad. Sci. U.S.A. 106(1), 28-32 (2009)] exhibits several transitions as ΔF0 increases and the stochastic analysis is used to understand the manner in which noise influences these transitions and the stability of the system.

  20. A stochastic perturbation theory for non-autonomous systems

    SciTech Connect

    Moon, W.; Wettlaufer, J. S.

    2013-12-15

    We develop a perturbation theory for a class of first order nonlinear non-autonomous stochastic ordinary differential equations that arise in climate physics. The perturbative procedure produces moments in terms of integral delay equations, whose order by order decay is characterized in a Floquet-like sense. Both additive and multiplicative sources of noise are discussed and the question of how the nature of the noise influences the results is addressed theoretically and numerically. By invoking the Martingale property, we rationalize the transformation of the underlying Stratonovich form of the model to an Ito form, independent of whether the noise is additive or multiplicative. The generality of the analysis is demonstrated by developing it both for a Brownian particle moving in a periodically forced quartic potential, which acts as a simple model of stochastic resonance, as well as for our more complex climate physics model. The validity of the approach is shown by comparison with numerical solutions. The particular climate dynamics problem upon which we focus involves a low-order model for the evolution of Arctic sea ice under the influence of increasing greenhouse gas forcing ΔF0. The deterministic model, developed by Eisenman and Wettlaufer ["Nonlinear threshold behavior during the loss of Arctic sea ice," Proc. Natl. Acad. Sci. U.S.A. 106(1), 28-32 (2009)] exhibits several transitions as ΔF0 increases and the stochastic analysis is used to understand the manner in which noise influences these transitions and the stability of the system.