Science.gov

Sample records for autonomous vision system

  1. Compact Autonomous Hemispheric Vision System

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.

    2012-01-01

    Solar System Exploration camera implementations to date have involved either single cameras with a wide field of view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel with selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
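
    The azimuth coverage claim can be checked with a line of arithmetic: six evenly spaced cameras sit 60° apart, so a 92° FOV gives each pair of neighbors a 32° overlap. A minimal sketch (the even-spacing assumption is ours, not stated in the record):

```python
def neighbor_overlap_deg(num_cams, fov_deg):
    """Overlap between adjacent cameras when num_cams cameras with the
    given FOV are evenly spaced around a full 360-degree ring."""
    spacing = 360.0 / num_cams        # angular pitch between optical axes
    return fov_deg - spacing          # positive -> the ring is fully covered

# Six 92-degree cameras are 60 degrees apart, so adjacent FOVs
# overlap by 32 degrees and the ring covers the full azimuth.
overlap = neighbor_overlap_deg(6, 92.0)
```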

  2. Intelligent vision system for autonomous vehicle operations

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1991-01-01

    A complex optical system is described, consisting of a 4f optical correlator with programmable filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management.
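
    A 4f correlator optically computes the correlation of the input scene with a stored filter. A rough digital analogue is plain 2D cross-correlation; the naive sliding-window version below, with a hypothetical toy scene and kernel, is a sketch of the operation, not the paper's optical implementation:

```python
def xcorr2(scene, kernel):
    """Naive 'valid' 2D cross-correlation: slide the kernel over the
    scene and sum element-wise products at each offset."""
    sh, sw = len(scene), len(scene[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(sh - kh + 1):
        row = []
        for x in range(sw - kw + 1):
            s = 0.0
            for j in range(kh):
                for i in range(kw):
                    s += scene[y + j][x + i] * kernel[j][i]
            row.append(s)
        out.append(row)
    return out

scene = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
kernel = [[1, 1],
          [1, 1]]
peaks = xcorr2(scene, kernel)
# The correlation peak marks where the kernel pattern best matches.
```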

  3. New vision system and navigation algorithm for an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Tann, Hokchhay; Shakya, Bicky; Merchen, Alex C.; Williams, Benjamin C.; Khanal, Abhishek; Zhao, Jiajia; Ahlgren, David J.

    2013-12-01

    Improvements were made to the intelligence algorithms of an autonomously operating ground vehicle, Q, which competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle to first navigate between two white lines on a grassy obstacle course, then pass through eight GPS waypoints, and finally pass through an obstacle field. Modifications to Q included a new vision system with a more effective image-processing algorithm for white-line extraction. The path-planning algorithm was adapted to the new vision system, producing smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of more than 50 teams.
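
    White-line extraction of the kind the record describes is commonly done by brightness thresholding against the darker grass. A minimal pure-Python sketch (the threshold value and row data are illustrative assumptions):

```python
def extract_white_line_columns(gray_row, threshold=200):
    """Return column indices in one grayscale image row whose
    brightness exceeds the white-line threshold."""
    return [x for x, v in enumerate(gray_row) if v >= threshold]

# Hypothetical row of pixel intensities: grass (~80) with a white line.
row = [80, 85, 90, 230, 240, 235, 88, 82]
cols = extract_white_line_columns(row)  # -> [3, 4, 5]
```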

  4. Street Viewer: An Autonomous Vision Based Traffic Tracking System.

    PubMed

    Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano

    2016-06-03

    The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows intensive use of multi-threading and, on the other, improves the overall accuracy and robustness of the system, since each layer refines the information it receives before passing it to the following layers. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode, where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained in several different scenarios and by running the system for long periods of time.
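
    The learning/on-line mode switching described above can be sketched as a small state machine. Everything concrete here (stability criterion, drift threshold, class and method names) is our assumption, not Street Viewer's actual logic:

```python
class FlowModel:
    """Self-adaptive sketch of the learning/on-line switching: learn a
    per-interval vehicle-count baseline, go on-line once stable, and
    fall back to learning when the observed flow drifts."""

    def __init__(self, stable_after=3, drift_ratio=2.0):
        self.mode = "learning"
        self.samples = []
        self.stable_after = stable_after   # samples needed before going on-line
        self.drift_ratio = drift_ratio     # deviation factor that triggers relearning
        self.baseline = None

    def observe(self, count):
        if self.mode == "learning":
            self.samples.append(count)
            if len(self.samples) >= self.stable_after:
                self.baseline = sum(self.samples) / len(self.samples)
                self.mode = "online"
        elif count > self.baseline * self.drift_ratio:
            # Flow no longer matches the learned model: relearn.
            self.mode = "learning"
            self.samples = [count]

m = FlowModel()
for c in (10, 12, 11):
    m.observe(c)            # learning from stable flow
state_after_learning = m.mode
m.observe(40)               # sudden change in observed flow
```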

  7. Semi-autonomous wheelchair developed using a unique camera system configuration biologically inspired by equine vision.

    PubMed

    Nguyen, Jordan S; Tran, Yvonne; Su, Steven W; Nguyen, Hung T

    2011-01-01

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using cameras in a configuration modeled on the vision system of a horse. This new camera configuration utilizes stereoscopic vision for 3-dimensional (3D) depth perception and mapping ahead of the wheelchair, combined with a spherical camera system providing 360 degrees of monocular vision. This unique combination allows static components of an unknown environment to be mapped and any surrounding dynamic obstacles to be detected during real-time autonomous navigation, minimizing blind spots and preventing accidental collisions with people or obstacles. This novel vision system, combined with shared control strategies, provides intelligent assistive guidance during wheelchair navigation and can accompany any hands-free wheelchair control technology. Leading up to experimental trials with patients at the Royal Rehabilitation Centre (RRC) in Ryde, results have demonstrated the effectiveness of this system in assisting the user to navigate safely within the RRC whilst avoiding potential collisions. PMID:22255649

  9. Autonomous Hovering and Landing of a Quad-rotor Micro Aerial Vehicle by Means of on Ground Stereo Vision System

    NASA Astrophysics Data System (ADS)

    Pebrianti, Dwi; Kendoul, Farid; Azrad, Syaril; Wang, Wei; Nonami, Kenzo

    An on-ground stereo vision system is used for autonomous hovering and landing of a quad-rotor Micro Aerial Vehicle (MAV). Such a ground-based system has an advantage over an embedded vision system, which occasionally gives inaccurate distance calculations due to vibration or the unknown geometry of the landing target. Color-based object tracking using the Continuously Adaptive Mean Shift (CAMSHIFT) algorithm was examined. A nonlinear model of the quad-rotor MAV and a PID controller were used for autonomous hovering and landing. The results show that the CAMSHIFT-based object tracking algorithm performs well. Additionally, a comparison between stereo-vision-based and GPS-based autonomous hovering of the quad-rotor MAV shows that the stereo vision system performs better: its accuracy is about 1 meter in the longitudinal and lateral directions when the quad-rotor flies at an altitude of 6 meters, whereas under the same experimental conditions the accuracy of the GPS-based system is about 3 meters. Experiments on autonomous landing also gave reliable results.
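
    The PID control loop mentioned in the record is the textbook form below; the gains, time step, and toy plant are illustrative assumptions, not values from the paper:

```python
class PID:
    """Minimal PID controller of the kind typically used in hover and
    landing loops."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt          # accumulate error over time
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# One proportional-only step: error 2.0 with kp=1.0 commands 2.0.
ctrl = PID(kp=1.0, ki=0.0, kd=0.0, dt=1.0)
u = ctrl.update(5.0, 3.0)
```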

  10. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    PubMed

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Intelligent systems applied to vehicles have grown very rapidly in recent years; their goal is not only to improve safety but also to make autonomous driving possible. Many of these intelligent systems use computer vision to perceive the environment and act accordingly. Being able to estimate the pose of the vision system is of great importance, because the matching between perception-system measurements (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution with respect to the state of the art is the estimation of the pitch angle without being affected by the roll angle. The self-calibration method is validated by comparing it with relevant methods of camera pose estimation, using a synthetic sequence to measure the continuous error against a ground truth. This validation is enriched by experimental results of the method in real traffic environments.
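
    Why the pixel-to-meter matching depends on camera pose can be seen in the standard flat-ground projection: the distance to a ground point imaged at a given row depends directly on camera pitch and height. This is a generic illustration of that dependency, not the paper's estimation method:

```python
import math

def pixel_row_to_distance(row, horizon_row, height_m, focal_px, pitch_rad=0.0):
    """Flat-ground model: distance to the ground point imaged at a
    given row below the horizon line."""
    angle_below = math.atan2(row - horizon_row, focal_px) + pitch_rad
    if angle_below <= 0:
        return float("inf")   # at or above the horizon: no ground intersection
    return height_m / math.tan(angle_below)

# A row 100 px below the horizon, camera 1.2 m high, 800 px focal length.
d = pixel_row_to_distance(row=340, horizon_row=240, height_m=1.2, focal_px=800)
```

    A small pitch error shifts `horizon_row`, so every recovered distance changes, which is why estimating pitch robustly matters.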

  13. Vision-based semi-autonomous outdoor robot system to reduce soldier workload

    NASA Astrophysics Data System (ADS)

    Richardson, Al; Rodgers, Michael H.

    2001-09-01

    Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions, greatly reducing the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.
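
    The teach-and-repeat idea above reduces to recording poses while following the leader and replaying them for the retrace. A minimal sketch (the waypoint spacing and class names are our assumptions):

```python
class RouteRecorder:
    """Teach-and-repeat sketch: record poses while following the
    leader, then emit them in order (or reversed) for retracing."""

    def __init__(self, min_spacing=1.0):
        self.waypoints = []
        self.min_spacing = min_spacing   # drop near-duplicate poses

    def record(self, x, y):
        if not self.waypoints:
            self.waypoints.append((x, y))
            return
        px, py = self.waypoints[-1]
        if ((x - px) ** 2 + (y - py) ** 2) ** 0.5 >= self.min_spacing:
            self.waypoints.append((x, y))

    def retrace_path(self, reverse=False):
        return list(reversed(self.waypoints)) if reverse else list(self.waypoints)

r = RouteRecorder()
for p in [(0, 0), (0.2, 0), (1.5, 0), (3.0, 0.5)]:
    r.record(*p)    # (0.2, 0) is discarded as too close to (0, 0)
```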

  14. Autonomous navigation vehicle system based on robot vision and multi-sensor fusion

    NASA Astrophysics Data System (ADS)

    Wu, Lihong; Chen, Yingsong; Cui, Zhouping

    2011-12-01

    The architecture of an autonomous navigation vehicle based on robot vision and multi-sensor fusion technology is described in this paper. To achieve greater intelligence and robustness, accurate real-time collection and processing of information are realized using this technology. The method for achieving robot vision and multi-sensor fusion is discussed in detail. Results simulated in several operating modes show that this intelligent vehicle performs better in obstacle identification and avoidance and in path planning, providing higher reliability during vehicle operation.
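
    A standard way to fuse redundant measurements of the same quantity, such as a vision-based and a sonar-based range to one obstacle, is inverse-variance weighting. The record does not state its fusion method; this is a generic sketch with hypothetical numbers:

```python
def fuse(measurements):
    """Inverse-variance weighted fusion of redundant estimates;
    each entry is (value, variance). Lower variance -> more weight."""
    num = sum(m / var for m, var in measurements)
    den = sum(1.0 / var for _, var in measurements)
    return num / den

# Vision says 4.0 m (variance 0.25); sonar says 4.4 m (variance 0.5).
fused = fuse([(4.0, 0.25), (4.4, 0.5)])
```

    The fused estimate lands closer to the lower-variance vision reading, as expected.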

  15. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    NASA Astrophysics Data System (ADS)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms applied to images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems based exclusively on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm assesses the corners of field lines using an omni-directional vision system. Particularly in the Middle Size League of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing feature extraction. The process is described as follows: First, radial scan lines were used to process omni-directional images, reducing the computational load and improving system efficiency. The lines were arranged radially around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is a distorted image, which makes it difficult to recognize the
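
    Unwrapping an omni-directional image along radial scan lines is a polar-to-Cartesian coordinate mapping: panorama columns correspond to azimuth, rows to radius. A minimal sketch of that mapping (image center, radii, and output size are hypothetical):

```python
import math

def unwrap_coords(cx, cy, r_min, r_max, width, height):
    """For each pixel (u, v) of an unwrapped panorama, compute the
    source coordinate (x, y) in the omni-directional image along
    radial scan lines around the mirror center (cx, cy)."""
    mapping = []
    for v in range(height):                  # rows <-> radius
        r = r_min + (r_max - r_min) * v / max(height - 1, 1)
        for u in range(width):               # columns <-> azimuth
            theta = 2.0 * math.pi * u / width
            x = cx + r * math.cos(theta)
            y = cy + r * math.sin(theta)
            mapping.append((u, v, x, y))
    return mapping

m = unwrap_coords(cx=100, cy=100, r_min=20, r_max=80, width=8, height=3)
```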

  16. MMW radar enhanced vision systems: the Helicopter Autonomous Landing System (HALS) and Radar-Enhanced Vision System (REVS) are rotary and fixed wing enhanced flight vision systems that enable safe flight operations in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Cross, Jack; Schneider, John; Cariani, Pete

    2013-05-01

    Sierra Nevada Corporation (SNC) has developed rotary- and fixed-wing millimeter wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar-Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cueing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes recent flight test results.

  17. Research on an autonomous vision-guided helicopter

    NASA Technical Reports Server (NTRS)

    Amidi, Omead; Mesaki, Yuji; Kanade, Takeo

    1994-01-01

    Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.

  18. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers

    PubMed Central

    Olivares-Mendez, Miguel A.; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F.; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual

    2015-01-01

    Poaching is an illegal activity that remains out of control in many countries. According to the 2014 report of the United Nations and Interpol, the illegal trade in global wildlife and natural resources amounts to nearly $213 billion every year, and is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, traditional methods of fighting poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new sensor and algorithm technologies, as well as aerial platforms, is crucial to confronting the sharp increase in poaching activities in recent years. Our work focuses on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing. PMID:26703597

  1. Design of a dynamic test platform for autonomous robot vision systems

    NASA Technical Reports Server (NTRS)

    Rich, G. C.

    1980-01-01

    The concept and design of a dynamic test platform for development and evaluation of a robot vision system is discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi-laser/multi-detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform and can then be subjected to a wide variety of simulated Rover motions, which can thus be examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process are treated separately, such as the structure, drive linkages, and motors and transmissions.

  2. Enhanced and synthetic vision system for autonomous all weather approach and landing

    NASA Astrophysics Data System (ADS)

    Korn, Bernd R.

    2007-04-01

    Within its research project ADVISE-PRO (Advanced visual system for situation awareness enhancement - prototype, 2003-2006), presented in this contribution, DLR has combined elements of Enhanced Vision and Synthetic Vision into one integrated system to allow all low-visibility operations independently of the infrastructure on the ground. The core element of this system is the adequate fusion of all information available on board. This fusion process is organized hierarchically. The most important subsystems are: a) sensor-based navigation, which determines the aircraft's position relative to the runway by automatically analyzing sensor data (MMW radar, IR, radar altimeter) without using either (D)GPS or precise knowledge of the airport geometry; b) integrity monitoring of navigation and terrain data, which verifies the on-board navigation data ((D)GPS + INS) against sensor data (MMW radar, IR sensor, radar altimeter) and airport/terrain databases; c) an obstacle detection system; and finally d) a consistent description of the situation and a corresponding HMI for the pilot.
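
    The integrity-monitoring idea in subsystem b) amounts to cross-checking two independent position solutions and flagging disagreement. A minimal sketch (the threshold and function names are our assumptions, not ADVISE-PRO's actual logic):

```python
def integrity_check(gps_pos, sensor_pos, threshold_m=10.0):
    """Compare the GPS/INS position solution with an independently
    sensor-derived position; flag it if they disagree by more than
    the threshold. Positions are (x, y, z) in meters."""
    dx = gps_pos[0] - sensor_pos[0]
    dy = gps_pos[1] - sensor_pos[1]
    dz = gps_pos[2] - sensor_pos[2]
    miss = (dx * dx + dy * dy + dz * dz) ** 0.5
    return miss <= threshold_m, miss

# Solutions a few meters apart pass; a 20 m disagreement is flagged.
ok, miss = integrity_check((120.0, 45.0, 300.0), (118.0, 44.0, 299.0))
```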

  3. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment, make decisions, and learn from experience. The advanced inspection system is planned to control a robotic manipulator arm, an unmanned ground vehicle, and cameras remotely, automatically, and autonomously. There are many computer vision, image processing, and machine learning techniques available as open source for using vision as sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; to identify open-source algorithms and techniques; and to integrate robot hardware.
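
    A framework that integrates heterogeneous subsystem components often reduces to a pipeline where each registered stage refines the data for the next. A tiny sketch of that pattern (stage names and the list-based "frame" are hypothetical placeholders):

```python
class Pipeline:
    """Minimal subsystem-integration framework: stages register in
    order; run() threads a frame through every stage."""

    def __init__(self):
        self.stages = []

    def register(self, name, fn):
        self.stages.append((name, fn))
        return self          # allow chained registration

    def run(self, frame):
        for name, fn in self.stages:
            frame = fn(frame)
        return frame

p = (Pipeline()
     .register("undistort", lambda f: f)            # placeholder stages
     .register("detect", lambda f: f + ["crack"])
     .register("decide", lambda f: f + ["flag_for_repair"]))
result = p.run([])
```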

  4. The study of stereo vision technique for the autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Li, Pei; Wang, Xi; Wang, Jiang-feng

    2015-08-01

    Stereo vision technology using two or more cameras can recover 3D information from the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle to judge pavement conditions within the field of view and to measure obstacles on the road. In this paper, stereo vision technology for obstacle measurement and avoidance in autonomous vehicles is studied, and the key techniques are analyzed and discussed. The system hardware is built, the software is debugged, and the measurement performance is demonstrated with measured data. Experiments show that a 3D reconstruction of the field of view can be obtained effectively with stereo vision technology, providing a basis for judging pavement conditions. Compared with the navigation radar used in unmanned-vehicle measurement systems, the stereo vision system has advantages such as low cost and measurement range, and it has good application prospects.
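
    The 3D recovery underlying any such stereo system rests on the classic pinhole relation Z = f·B/d, where f is the focal length in pixels, B the baseline between cameras, and d the disparity. A one-function sketch with illustrative numbers:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: depth Z = f * B / d."""
    if disparity_px <= 0:
        return float("inf")   # zero disparity: point at infinity
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 0.35 m baseline, 49 px disparity -> 5 m range.
z = depth_from_disparity(700.0, 0.35, 49.0)
```

    The relation also shows why stereo range degrades with distance: at large Z the disparity shrinks, so a fixed pixel error produces a growing depth error.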

  5. A survey of autonomous vision-based See and Avoid for Unmanned Aircraft Systems

    NASA Astrophysics Data System (ADS)

    Mcfadyen, Aaron; Mejias, Luis

    2016-01-01

    This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.

  6. Infrared sensors and systems for enhanced vision/autonomous landing applications

    NASA Technical Reports Server (NTRS)

    Kerr, J. Richard

    1993-01-01

    There exists a large body of data spanning more than two decades, regarding the ability of infrared imagers to 'see' through fog, i.e., in Category III weather conditions. Much of this data is anecdotal, highly specialized, and/or proprietary. In order to determine the efficacy and cost effectiveness of these sensors under a variety of climatic/weather conditions, there is a need for systematic data spanning a significant range of slant-path scenarios. These data should include simultaneous video recordings at visible, midwave (3-5 microns), and longwave (8-12 microns) wavelengths, with airborne weather pods that include the capability of determining the fog droplet size distributions. Existing data tend to show that infrared is more effective than would be expected from analysis and modeling. It is particularly more effective for inland (radiation) fog as compared to coastal (advection) fog, although both of these archetypes are oversimplifications. In addition, as would be expected from droplet size vs wavelength considerations, longwave outperforms midwave, in many cases by very substantial margins. Longwave also benefits from the higher level of available thermal energy at ambient temperatures. The principal attraction of midwave sensors is that staring focal plane technology is available at attractive cost-performance levels. However, longwave technology such as that developed at FLIR Systems, Inc. (FSI), has achieved high performance in small, economical, reliable imagers utilizing serial-parallel scanning techniques. In addition, FSI has developed dual-waveband systems particularly suited for enhanced vision flight testing. These systems include a substantial, embedded processing capability which can perform video-rate image enhancement and multisensor fusion. This is achieved with proprietary algorithms and includes such operations as real-time histograms, convolutions, and fast Fourier transforms.

  7. Insect-Based Vision for Autonomous Vehicles: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Srinivasan, Mandyam V.

    1999-01-01

    The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.

  8. Insect-Based Vision for Autonomous Vehicles: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Srinivasan, Mandyam V.

    1999-01-01

    The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.

  9. Real-time performance of a hands-free semi-autonomous wheelchair system using a combination of stereoscopic and spherical vision.

    PubMed

    Nguyen, Jordan S; Nguyen, Tuan Nghia; Tran, Yvonne; Su, Steven W; Craig, Ashley; Nguyen, Hung T

    2012-01-01

    This paper is concerned with the operational performance of a semi-autonomous wheelchair system named TIM (Thought-controlled Intelligent Machine), which uses cameras in a system configuration modeled on the vision system of a horse. This new camera configuration utilizes stereoscopic vision for 3-Dimensional (3D) depth perception and mapping ahead of the wheelchair, combined with a spherical camera system for 360 degrees of monocular vision. The unique combination allows static components of an unknown environment to be mapped and any surrounding dynamic obstacles to be detected during real-time autonomous navigation, minimizing blind spots and preventing accidental collisions with people or obstacles. Combining this vision system with a shared control strategy provides intelligent assistive guidance during wheelchair navigation, and can accompany any hands-free wheelchair control technology for people with severe physical disability. Testing of this system in crowded dynamic environments has demonstrated its feasibility and real-time performance when assisting hands-free control technologies, in this case a proof-of-concept brain-computer interface (BCI).

  10. INL Autonomous Navigation System

    SciTech Connect

    2005-03-30

    The INL Autonomous Navigation System provides instructions for autonomously navigating a robot. The system permits high-speed autonomous navigation including obstacle avoidance, waypoint navigation and path planning in both indoor and outdoor environments.

  11. Autonomous UV-enhanced-vision system that prevents runway incursions at medium-size airports

    NASA Astrophysics Data System (ADS)

    Norris, Victor J., Jr.

    2001-08-01

    Runway incursions have been declared the nation's foremost aviation safety issue by the National Transportation Safety Board and the Federal Aviation Administration in testimony before congressional aviation committees. Technology solutions to date have been disappointing. After 12 years of development, the frequency of runway incursions shows no sign of abating, even as the cost of such systems continues to rise beyond $9 million per airport. Application of ultraviolet technology offers incremental, low-cost, near-term improvements in runway incursion prevention and other enhancements to aviation safety, as well as increases in airport throughput capability, i.e., a reduction in delays.

  12. Autonomic Nervous System Disorders

    MedlinePlus

    Your autonomic nervous system is the part of your nervous system that controls involuntary actions, such as the beating of your heart ... breathing and swallowing Erectile dysfunction in men Autonomic nervous system disorders can occur alone or as the result ...

  13. Overview of vision-based navigation for autonomous land vehicles 1986. Technical report

    SciTech Connect

    Chandran, S.; Davis, L.S.; Dementhon, D.; Dickenson, S.J.; Gajulapalli, S.

    1987-04-01

    This report describes research performed during the first two years on the project Vision-Based Navigation for Autonomous Vehicles being conducted under DARPA support. The report contains discussion of four main topics: (1) Development of a vision system for autonomous navigation of roads and road network. (2) Support of Martin Marietta Aerospace, Denver, the integrating contractor on DARPA's ALV program. (3) Experimentation with the vision system developed at Maryland on the Martin Marietta ALV, and (4) Development and implementation of parallel algorithms for visual navigation on the parallel computers developed under the DARPA Strategic Computing Program--specifically, the WARP systolic array processor, the Butterfly, and the Connection Machine.

  14. Coherent laser vision system

    SciTech Connect

    Sebastion, R.L.

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semi-autonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
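
    The range measurement behind an FMCW radar can be sketched from the beat frequency between the transmitted and received chirps. This is a minimal sketch assuming an ideal linear chirp; the sweep parameters below are illustrative, not those of the CLVS hardware.

```python
# Hedged sketch of FMCW ranging: the echo's round-trip delay shifts the
# received chirp in frequency, so range follows from the measured beat
# frequency f_b, sweep bandwidth B, and sweep time T: R = c * f_b * T / (2B).

def fmcw_range(beat_hz, sweep_bw_hz, sweep_time_s, c=3.0e8):
    """Target range in meters from the FMCW beat frequency."""
    return c * beat_hz * sweep_time_s / (2.0 * sweep_bw_hz)

# Illustrative numbers: a 100 kHz beat with a 1 GHz sweep over 1 ms.
r = fmcw_range(100e3, 1e9, 1e-3)
print(round(r, 6))  # 15.0  (meters)
```

    The same relation explains the accuracy advantage over AM laser radar: range resolution improves directly with sweep bandwidth rather than with received signal amplitude.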

  15. An autonomous vision-based mobile robot

    NASA Astrophysics Data System (ADS)

    Baumgartner, Eric Thomas

    This dissertation describes estimation and control methods for use in the development of an autonomous mobile robot for structured environments. The navigation of the mobile robot is based on precise estimates of the position and orientation of the robot within its environment. The extended Kalman filter algorithm is used to combine information from the robot's drive wheels with periodic observations of small, wall-mounted visual cues to produce the precise position and orientation estimates. The visual cues are reliably detected by at least one video camera mounted on the mobile robot. Typical position estimates are accurate to within one inch. A path tracking algorithm is also developed to follow desired reference paths taught by a human operator. Because the tracking algorithm is time-independent, the speed at which the vehicle travels along the reference path is specified independently of the tracking algorithm. The estimation and control methods have been applied successfully to two experimental vehicle systems. Finally, an analysis of the linearized closed-loop control system is performed to study the behavior and stability of the system as a function of various control parameters.
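
    The odometry-plus-landmark fusion idea can be illustrated with a scalar Kalman filter, a deliberately simplified stand-in for the extended Kalman filter used in the dissertation; all noise variances and measurement values below are invented.

```python
# Hedged 1-D sketch: dead-reckoned position drifts with each odometry step,
# and a periodic visual-cue fix pulls the estimate back and shrinks its
# variance. Real EKF-based navigation does this for full 2-D pose.

def kf_step(x, P, u, Q, z=None, R=None):
    """One predict(+update) cycle of a scalar Kalman filter.
    x, P : position estimate and its variance
    u, Q : odometry increment and its noise variance
    z, R : optional absolute-position fix from a wall-mounted cue."""
    x, P = x + u, P + Q                 # predict: uncertainty grows
    if z is not None:
        K = P / (P + R)                 # Kalman gain
        x, P = x + K * (z - x), (1.0 - K) * P   # update: variance shrinks
    return x, P

x, P = 0.0, 0.0
for _ in range(10):                     # drive 10 odometry steps of 1.0 m
    x, P = kf_step(x, P, 1.0, 0.04)
# A visual-cue sighting reports the true position is 10.2 m:
x, P = kf_step(x, P, 0.0, 0.0, z=10.2, R=0.01)
```

    After the update, the estimate sits close to the cue measurement and the variance drops by more than an order of magnitude, which is the mechanism behind the one-inch accuracy claimed above.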

  16. Highly Autonomous Systems Workshop

    NASA Technical Reports Server (NTRS)

    Doyle, R.; Rasmussen, R.; Man, G.; Patel, K.

    1998-01-01

    It is our aim by launching a series of workshops on the topic of highly autonomous systems to reach out to the larger community interested in technology development for remotely deployed systems, particularly those for exploration.

  17. Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick

    2012-01-01

    Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.

  18. FPGA implementation of vision algorithms for small autonomous robots

    NASA Astrophysics Data System (ADS)

    Anderson, J. D.; Lee, D. J.; Archibald, J. K.

    2005-10-01

    The use of on-board vision with small autonomous robots has been made possible by the advances in the field of Field Programmable Gate Array (FPGA) technology. By connecting a CMOS camera to an FPGA board, on-board vision has been used to reduce the computation time inherent in vision algorithms. The FPGA board allows the user to create custom hardware in a faster, safer, and more easily verifiable manner that decreases the computation time and allows the vision to be done in real-time. Real-time vision tasks for small autonomous robots include object tracking, obstacle detection and avoidance, and path planning. Competitions were created to demonstrate that our algorithms work with our small autonomous vehicles in dealing with these problems. These competitions include Mouse-Trapped-in-a-Box, where the robot has to detect the edges of a box that it is trapped in and move towards them without touching them; Obstacle Avoidance, where an obstacle is placed at any arbitrary point in front of the robot and the robot has to navigate itself around the obstacle; Canyon Following, where the robot has to move to the center of a canyon and follow the canyon walls trying to stay in the center; the Grand Challenge, where the robot had to navigate a hallway and return to its original position in a given amount of time; and Stereo Vision, where a separate robot had to catch tennis balls launched from an air powered cannon. Teams competed on each of these competitions that were designed for a graduate-level robotic vision class, and each team had to develop their own algorithm and hardware components. This paper discusses one team's approach to each of these problems.

  19. Autonomous Flight Safety System

    NASA Technical Reports Server (NTRS)

    Simpson, James

    2010-01-01

    The Autonomous Flight Safety System (AFSS) is an independent, self-contained subsystem mounted onboard a launch vehicle. AFSS has been developed by and is owned by the US Government. It autonomously makes flight termination/destruct decisions using configurable software-based rules implemented on redundant flight processors, drawing on data from redundant GPS/IMU navigation sensors. AFSS implements rules determined by the appropriate Range Safety officials.

  20. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging for use in machine vision systems has proven prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.
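
    The hyperacuity idea, recovering a target position finer than the sensor spacing from the ratio of overlapping analog responses, can be sketched with two Gaussian receptors. The response model and geometry here are illustrative only, not the actual fly-eye sensor characteristics.

```python
import math

# Hedged sketch: two receptors with overlapping Gaussian sensitivity
# profiles respond to the same point source; inverting the log-ratio of
# their responses recovers the source position with sub-spacing precision.

def receptor(center, sigma, x):
    """Analog response of one photoreceptor to a point source at x."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def localize(r1, r2, c1, c2, sigma):
    """Recover x from two overlapping responses (equal-sigma Gaussians)."""
    return (c1 + c2) / 2 + sigma ** 2 * math.log(r1 / r2) / (c1 - c2)

c1, c2, sigma = 0.0, 1.0, 1.0          # receptor centers one unit apart
x_true = 0.37                          # source between the two centers
x_est = localize(receptor(c1, sigma, x_true),
                 receptor(c2, sigma, x_true), c1, c2, sigma)
print(round(x_est, 6))  # 0.37
```

    The estimate is exact in this noiseless toy; with noisy analog responses the precision degrades gracefully but remains finer than the one-unit receptor spacing.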

  1. Towards an Autonomic Cluster Management System (ACMS) with Reflex Autonomicity

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Hinchey, Mike; Sterritt, Roy

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of providing a fault-tolerant environment and achieving significant computational capabilities for high-performance computing applications. However, the task of manually managing and configuring a cluster quickly becomes daunting as the cluster grows in size. Autonomic computing, with its vision to provide self-management, can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management and its evolution to include reflex reactions via pulse monitoring.

  2. Architecture of autonomous systems

    NASA Technical Reports Server (NTRS)

    Dikshit, Piyush; Guimaraes, Katia; Ramamurthy, Maya; Agrawala, Ashok; Larsen, Ronald L.

    1989-01-01

    Automation of Space Station functions and activities, particularly those involving robotic capabilities with interactive or supervisory human control, is a complex, multi-disciplinary systems design problem. A wide variety of applications using autonomous control can be found in the literature, but none of them seem to address the problem in general. All of them are designed with a specific application in mind. In this report, an abstract model is described which unifies the key concepts underlying the design of automated systems such as those studied by the aerospace contractors. The model has been kept as general as possible. The attempt is to capture all the key components of autonomous systems. With a little effort, it should be possible to map the functions of any specific autonomous system application to the model presented here.

  3. Architecture of autonomous systems

    NASA Technical Reports Server (NTRS)

    Dikshit, Piyush; Guimaraes, Katia; Ramamurthy, Maya; Agrawala, Ashok; Larsen, Ronald L.

    1986-01-01

    Automation of Space Station functions and activities, particularly those involving robotic capabilities with interactive or supervisory human control, is a complex, multi-disciplinary systems design problem. A wide variety of applications using autonomous control can be found in the literature, but none of them seem to address the problem in general. All of them are designed with a specific application in mind. In this report, an abstract model is described which unifies the key concepts underlying the design of automated systems such as those studied by the aerospace contractors. The model has been kept as general as possible. The attempt is to capture all the key components of autonomous systems. With a little effort, it should be possible to map the functions of any specific autonomous system application to the model presented here.

  4. Autonomous Vision-Based Tethered-Assisted Rover Docking

    NASA Technical Reports Server (NTRS)

    Tsai, Dorian; Nesnas, Issa A.D.; Zarzhitsky, Dimitri

    2013-01-01

    Many intriguing science discoveries on planetary surfaces, such as the seasonal flows on crater walls and skylight entrances to lava tubes, are at sites that are currently inaccessible to state-of-the-art rovers. The in situ exploration of such sites is likely to require a tethered platform both for mechanical support and for providing power and communication. Mother/daughter architectures have been investigated where a mother deploys a tethered daughter into extreme terrains. Deploying and retracting a tethered daughter requires undocking and re-docking of the daughter to the mother, with the latter being the challenging part. In this paper, we describe a vision-based tether-assisted algorithm for the autonomous re-docking of a daughter to its mother following an extreme terrain excursion. The algorithm uses fiducials mounted on the mother to improve the reliability and accuracy of estimating the pose of the mother relative to the daughter. The tether that is anchored by the mother helps the docking process and increases the system's tolerance to pose uncertainties by mechanically aligning the mating parts in the final docking phase. A preliminary version of the algorithm was developed and field-tested on the Axel rover in the JPL Mars Yard. The algorithm achieved an 80% success rate in 40 experiments in both firm and loose soils, starting from up to 6 m away at up to 40 deg radial angle and 20 deg relative heading. The algorithm does not rely on an initial estimate of the relative pose. The preliminary results are promising and help retire the risk associated with the autonomous docking process, enabling its consideration for future Martian and lunar missions.

  5. An Improved Vision-based Algorithm for Unmanned Aerial Vehicles Autonomous Landing

    NASA Astrophysics Data System (ADS)

    Zhao, Yunji; Pei, Hailong

    In a vision-based autonomous landing system for a UAV, the efficiency of target detection and tracking directly affects the control system. The improved SURF (Speeded-Up Robust Features) algorithm presented here resolves the inefficiency of the standard SURF algorithm in the autonomous landing system. The improved algorithm is composed of three steps: first, detect the target region using Camshift; second, detect feature points within the acquired region using the SURF algorithm; third, match the template target against the target region in each frame. Experimental results and theoretical analysis confirm the efficiency of the algorithm.
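
    The matching step (step three) can be illustrated with plain normalized cross-correlation of a template against a search region. This is a hedged 1-D sketch: the Camshift and SURF stages are omitted, and all pixel values are invented for the example.

```python
# Hedged sketch of template matching by normalized cross-correlation (NCC):
# slide the template over the search region and keep the offset with the
# highest correlation score. Real systems do this in 2-D on image patches.

def ncc(patch, template):
    """Normalized cross-correlation of two equal-length 1-D signals."""
    n = len(template)
    mp, mt = sum(patch) / n, sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = sum((p - mp) ** 2 for p in patch) ** 0.5
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    return num / (dp * dt) if dp and dt else 0.0

def best_match(region, template):
    """Offset of the best template match within the region."""
    n = len(template)
    scores = [ncc(region[i:i + n], template)
              for i in range(len(region) - n + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

region = [10, 12, 50, 80, 50, 12, 10, 9]   # 1-D slice of the detected region
template = [50, 80, 50]                    # 1-D slice of the landing target
print(best_match(region, template))  # 2
```

    Restricting this search to the Camshift-detected region, as the paper's pipeline does, is what keeps the per-frame matching cost low.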

  6. Autonomous power expert system

    NASA Technical Reports Server (NTRS)

    Walters, Jerry L.; Petrik, Edward J.; Roth, Mary Ellen; Truong, Long Van; Quinn, Todd; Krawczonek, Walter M.

    1990-01-01

    The Autonomous Power Expert (APEX) system was designed to monitor and diagnose fault conditions that occur within the Space Station Freedom Electrical Power System (SSF/EPS) Testbed. APEX is designed to interface with SSF/EPS testbed power management controllers to provide enhanced autonomous operation and control capability. The APEX architecture consists of three components: (1) a rule-based expert system, (2) a testbed data acquisition interface, and (3) a power scheduler interface. Fault detection, fault isolation, justification of probable causes, recommended actions, and incipient fault analysis are the main functions of the expert system component. The data acquisition component requests and receives pertinent parametric values from the EPS testbed and asserts the values into a knowledge base. Power load profile information is obtained from a remote scheduler through the power scheduler interface component. The current APEX design and development work is discussed. Operation and use of APEX by way of the user interface screens is also covered.

  7. Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  8. Vision-based sensing for autonomous in-flight refueling

    NASA Astrophysics Data System (ADS)

    Scott, D.; Toal, M.; Dale, J.

    2007-04-01

    A significant capability of unmanned airborne vehicles (UAVs) is that they can operate tirelessly and at maximum efficiency in comparison to their human pilot counterparts. However, a major limiting factor preventing ultra-long endurance missions is that they must land to refuel. Development effort has been directed at allowing UAVs to refuel automatically in the air using current refueling systems and procedures. The 'hose & drogue' refueling system was targeted as it is considered the more difficult case. Recent flight trials resulted in the first-ever fully autonomous airborne refueling operation. Development has gone into precision GPS-based navigation sensors to maneuver the aircraft into the station-keeping position and onwards to dock with the refueling drogue. However, in the terminal phases of docking, the GPS is operating at its performance limit, and disturbances acting on the flexible hose and basket are not predictable using an open-loop model. Hence there is significant uncertainty in the position of the refueling drogue relative to the aircraft, too great in practical operation to achieve a successful and safe docking. A solution is to augment the GPS-based system with a vision-based sensor component through the terminal phase to visually acquire and track the drogue in 3D space. The higher bandwidth and resolution of camera sensors give significantly better estimates of the state of the drogue position. Disturbances in the actual drogue position caused by subtle aircraft maneuvers and wind gusting can be visually tracked and compensated for, providing an accurate estimate. This paper discusses the issues involved in visually detecting a refueling drogue, selecting an optimum camera viewpoint, and acquiring and tracking the drogue throughout widely varying operating ranges and conditions.

  9. Nemesis Autonomous Test System

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin J.; Lee, Cin-Young; Horvath, Gregory A,; Clement, Bradley J.

    2012-01-01

    A generalized framework has been developed for systems validation that can be applied to both traditional and autonomous systems. The framework consists of an automated test case generation and execution system called Nemesis that rapidly and thoroughly identifies flaws or vulnerabilities within a system. By applying genetic optimization and goal-seeking algorithms on the test equipment side, a "war game" is conducted between a system and its complementary nemesis. The end result of the war games is a collection of scenarios that reveals any undesirable behaviors of the system under test. The software provides a reusable framework to evolve test scenarios using genetic algorithms using an operation model of the system under test. It can automatically generate and execute test cases that reveal flaws in behaviorally complex systems. Genetic algorithms focus the exploration of tests on the set of test cases that most effectively reveals the flaws and vulnerabilities of the system under test. It leverages advances in state- and model-based engineering, which are essential in defining the behavior of autonomous systems. It also uses goal networks to describe test scenarios.

  10. Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision

    NASA Technical Reports Server (NTRS)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-01-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
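
    The planar-homography machinery underlying the target detection can be illustrated by mapping a pixel through a 3x3 homography. The matrix below is a toy translation chosen for a hand-checkable result, not one estimated by the MAV system.

```python
# Hedged sketch: a planar homography H maps pixels on one view of a plane
# to pixels in another view via homogeneous coordinates. Decomposing an
# estimated H (not shown) yields the plane's relative rotation and normal.

def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H (row-major nested lists)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v

# A pure-translation homography shifts every point by (5, -3):
H = [[1.0, 0.0,  5.0],
     [0.0, 1.0, -3.0],
     [0.0, 0.0,  1.0]]
print(apply_homography(H, 10.0, 10.0))  # (15.0, 7.0)
```

    In the landing scenario, H would be estimated from tracked scene features on the planar rooftop, and its decomposition supplies the approach waypoints described above.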

  11. Autonomous landing and ingress of micro-air-vehicles in urban environments based on monocular vision

    NASA Astrophysics Data System (ADS)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-06-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.

  12. Autonomous Flight Safety System

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Santuro, Steve; Simpson, James; Zoerner, Roger; Bull, Barton; Lanzi, Jim

    2004-01-01

    Autonomous Flight Safety System (AFSS) is an independent flight safety system designed for small to medium sized expendable launch vehicles launching from or needing range safety protection while overlying relatively remote locations. AFSS replaces the need for a man-in-the-loop to make decisions for flight termination. AFSS could also serve as the prototype for an autonomous manned flight crew escape advisory system. AFSS utilizes onboard sensors and processors to emulate the human decision-making process using rule-based software logic and can dramatically reduce safety response time during critical launch phases. The Range Safety flight path nominal trajectory, its deviation allowances, limit zones and other flight safety rules are stored in the onboard computers. Position, velocity and attitude data obtained from onboard global positioning system (GPS) and inertial navigation system (INS) sensors are compared with these rules to determine the appropriate action to ensure that people and property are not jeopardized. The final system will be fully redundant and independent with multiple processors, sensors, and dead man switches to prevent inadvertent flight termination. AFSS is currently in Phase III which includes updated algorithms, integrated GPS/INS sensors, large scale simulation testing and initial aircraft flight testing.
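
    The rule-based decision logic described above, comparing navigation state against stored flight safety rules, can be sketched as follows. The rule names and thresholds are invented for illustration and are not the actual AFSS rule set.

```python
# Hedged sketch of rule-based flight safety evaluation: each stored rule is
# a predicate over the current navigation state; any violated rule is
# reported so higher-level logic can decide on termination action.

def evaluate_rules(state, rules):
    """Return the names of all rules the given state violates."""
    return [name for name, check in rules if not check(state)]

# Illustrative rules: stay inside a +/-5 km corridor, below a speed limit.
rules = [
    ("within_corridor", lambda s: abs(s["crossrange_km"]) <= 5.0),
    ("below_max_speed", lambda s: s["speed_ms"] <= 2500.0),
]

nominal = {"crossrange_km": 1.2, "speed_ms": 1800.0}
errant  = {"crossrange_km": 7.9, "speed_ms": 1800.0}
print(evaluate_rules(nominal, rules))  # []
print(evaluate_rules(errant, rules))   # ['within_corridor']
```

    The real system evaluates such rules redundantly on multiple processors against GPS/INS-derived position, velocity and attitude, so that no single sensor or processor fault can trigger an inadvertent termination.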

  13. A design approach for small vision-based autonomous vehicles

    NASA Astrophysics Data System (ADS)

    Edwards, Barrett B.; Fife, Wade S.; Archibald, James K.; Lee, Dah-Jye; Wilde, Doran K.

    2006-10-01

    This paper describes the design of a small autonomous vehicle based on the Helios computing platform, a custom FPGA-based board capable of supporting on-board vision. Target applications for the Helios computing platform are those that require lightweight equipment and low power consumption. To demonstrate the capabilities of FPGAs in real-time control of autonomous vehicles, a 16 inch long R/C monster truck was outfitted with a Helios board. The platform provided by such a small vehicle is ideal for testing and development. The proof of concept application for this autonomous vehicle was a timed race through an environment with obstacles. Given the size restrictions of the vehicle and its operating environment, the only feasible on-board sensor is a small CMOS camera. The single video feed is therefore the only source of information from the surrounding environment. The image is then segmented and processed by custom logic in the FPGA that also controls direction and speed of the vehicle based on visual input.

  14. Hybrid machine vision method for autonomous guided vehicles

    NASA Astrophysics Data System (ADS)

    Lu, Jian; Hamajima, Kyoko; Ishihara, Koji

    2003-05-01

    As a prospective intelligent sensing method for an Autonomous Guided Vehicle (AGV), machine vision is expected to balance coverage of a large space with recognition of the details of important objects. To this end, the hybrid machine vision method proposed here combines stereo vision with a traditional 2D method: the former performs coarse recognition to extract objects over a large space, and the latter performs fine recognition on sub-areas corresponding to important and/or special objects. This paper mainly addresses the coarse recognition. To extract objects in the coarse recognition stage, the disparity image calculated by stereo vision is segmented in two consecutive steps of region expansion and convex split. A 3D measurement of the rough positions and sizes of the extracted objects is then performed from the disparity information of the corresponding segments, and is used to recognize the objects' attributes by means of pattern learning/recognition. The resulting attribute information is further used either to assist fine recognition, by performing gaze control to capture suitable images of the objects of interest, or to directly control the AGV's travel. In our example AGV application, navigation signs are introduced to indicate the travel route. When the attributes show that an object is a navigation sign, the 3D measurement is used to direct gaze toward the sign so that the fine recognition can analyze its specific meaning with the traditional 2D method.
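    The 3D measurement in the coarse stage rests on the standard relation between disparity and depth for a rectified stereo pair. A minimal sketch under assumed camera parameters (the paper does not give its rig's focal length or baseline):

```python
# Minimal stereo-depth sketch: for a rectified pair, Z = f * B / d.
# The rig parameters below are assumptions, not values from the paper.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    if disparity_px <= 0:
        raise ValueError("object at infinity or invalid match")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline, 20 px disparity.
z = depth_from_disparity(20.0, 700.0, 0.12)   # 4.2 m
```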

  15. Merged Vision and GPS Control of a Semi-Autonomous, Small Helicopter

    NASA Technical Reports Server (NTRS)

    Rock, Stephen M.

    1999-01-01

    This final report documents the activities performed during the research period from April 1, 1996 to September 30, 1997. It contains three papers: Carrier Phase GPS and Computer Vision for Control of an Autonomous Helicopter; A Contestant in the 1997 International Aerial Robotics Competition (Aerospace Robotics Laboratory, Stanford University); and Combined CDGPS and Vision-Based Control of a Small Autonomous Helicopter.

  16. [Quality system Vision 2000].

    PubMed

    Pasini, Evasio; Pitocchi, Oreste; de Luca, Italo; Ferrari, Roberto

    2002-12-01

    A recent document of the Italian Ministry of Health points out that all structures which provide services to the National Health System should implement a Quality System according to the ISO 9000 standards. Vision 2000 is the new version of the ISO standard, and is less bureaucratic than the old one. The specific requirements of Vision 2000 are: a) to identify, monitor, and analyze the processes of the structure; b) to measure the results of the processes so as to ensure that they are effective; c) to implement actions necessary to achieve the planned results and the continual improvement of these processes; d) to identify customer requests and to measure customer satisfaction. Specific attention should also be dedicated to the competence and training of the personnel involved in the processes. The principles of Vision 2000 agree with the principles of total quality management. The present article illustrates the Vision 2000 standard and provides practical examples of its implementation in cardiological departments.

  17. CONDOR Advanced Visionics System

    NASA Astrophysics Data System (ADS)

    Kanahele, David L.; Buckanin, Robert M.

    1996-06-01

    The Covert Night/Day Operations for Rotorcraft (CONDOR) program is a collaborative research and development program between the governments of the United States and the United Kingdom of Great Britain and Northern Ireland to develop and demonstrate an advanced visionics concept, coupled with an advanced flight control system, to improve rotorcraft mission effectiveness during day, night, and adverse weather conditions in the Nap-of-the-Earth environment. The Advanced Visionics System for CONDOR is a flight-ruggedized head-mounted display and computer graphics generator intended for exploring, developing, and evaluating proposed visionics concepts for rotorcraft, including the application of color displays, wide field-of-view, enhanced imagery, virtual displays, mission symbology, stereo imagery, and other graphical interfaces.

  18. Autonomous attitude determination systems

    NASA Technical Reports Server (NTRS)

    Lowrie, J. W.

    1979-01-01

    A summary of autonomous attitude determination systems is presented by separating it into four areas: types of attitude determination systems which can be automated, a description of the attitude determination problem and its solution, specific types of sensors, and the processor requirements of two automated systems. The sensors used in attitude determination have been characteristically carried on-board the spacecraft in the past, so the major development requirement of automated systems is in the area of on-board processors. It is concluded that standardization of computers is not as beneficial as the standardization of computer architecture and the basic components which go into making them. It is also concluded that charge-coupled devices (CCD) or other solid state star tracking devices offer considerable advantages over the image-dissector type of star tracker.

  19. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    NASA Astrophysics Data System (ADS)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using an integrated photogrammetry and extended Kalman filter (EKF) algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics, and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
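    The incremental strategy can be illustrated on a toy planar two-link arm: each cycle moves the joints a small, speed-limited step toward the predicted end-effector position rather than solving the full inverse kinematics at once. Everything here (link lengths, gain, limits, the Jacobian-transpose update) is an invented stand-in for the paper's method:

```python
import math

# Toy sketch of incremental inverse kinematics on a planar 2-link arm.
# Link lengths, gain, and speed limit are assumed; the paper's actual
# manipulator model and update law are not reproduced here.

L1, L2 = 1.0, 1.0   # assumed link lengths

def fk(q1, q2):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def ik_step(q1, q2, target, gain=0.5, max_dq=0.1):
    """One incremental step (Jacobian transpose), with joint speed limits."""
    x, y = fk(q1, q2)
    ex, ey = target[0] - x, target[1] - y
    # Jacobian of the planar arm
    j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
    j12 = -L2 * math.sin(q1 + q2)
    j21 = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    j22 = L2 * math.cos(q1 + q2)
    dq1 = gain * (j11 * ex + j21 * ey)    # J^T * error
    dq2 = gain * (j12 * ex + j22 * ey)
    dq1 = max(-max_dq, min(max_dq, dq1))  # emulate joint speed limits
    dq2 = max(-max_dq, min(max_dq, dq2))
    return q1 + dq1, q2 + dq2

q1, q2 = 0.3, 0.6
for _ in range(200):
    q1, q2 = ik_step(q1, q2, target=(1.2, 0.8))
err = math.hypot(fk(q1, q2)[0] - 1.2, fk(q1, q2)[1] - 0.8)
```

    Because each cycle takes only a bounded step from the current configuration, the arm settles into the branch of the inverse kinematics nearest its current pose, which is the sense in which the incremental approach avoids the multiple-solution ambiguity.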

  20. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps second only to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development in both academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - the external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved, and many practical systems are already in place in the food industry.

  1. Bird Vision System

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Bird Vision system is a multicamera photogrammetry software application that runs on a Microsoft Windows XP platform and was developed at Kennedy Space Center by ASRC Aerospace. This software system collects data about the locations of birds within a volume centered on the Space Shuttle and transmits the data in real time to the laptop computer of a test director in the Launch Control Center (LCC) Firing Room.

  2. Cybersecurity for aerospace autonomous systems

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    High-profile breaches have occurred across numerous information systems. One area where attacks are particularly problematic is autonomous control systems. This paper considers the aerospace information system, focusing on elements that interact with autonomous control systems (e.g., onboard UAVs). It discusses the trust placed in autonomous systems and their supporting systems (e.g., navigational aids) and how this trust can be validated. Approaches to remotely detecting UAV compromise without relying on the onboard software (on a potentially compromised system) are discussed, as is how different levels of autonomy (task-based, goal-based, mission-based) impact this remote characterization.

  3. Autonomous power system brassboard

    NASA Technical Reports Server (NTRS)

    Merolla, Anthony

    1992-01-01

    The Autonomous Power System (APS) brassboard is a 20 kHz power distribution system which has been developed at NASA Lewis Research Center, Cleveland, Ohio. The brassboard exists to provide a realistic hardware platform capable of testing artificially intelligent (AI) software. The brassboard's power circuit topology is based upon a Power Distribution Control Unit (PDCU), which is a subset of an advanced development 20 kHz electrical power system (EPS) testbed, originally designed for Space Station Freedom (SSF). The APS program is designed to demonstrate the application of intelligent software as a fault detection, isolation, and recovery methodology for space power systems. This report discusses both the hardware and software elements used to construct the present configuration of the brassboard. The brassboard power components are described. These include the solid-state switches (herein referred to as switchgear), transformers, sources, and loads. Closely linked to this power portion of the brassboard is the first level of embedded control. Hardware used to implement this control and its associated software is discussed. An Ada software program, developed by Lewis Research Center's Space Station Freedom Directorate for their 20 kHz testbed, is used to control the brassboard's switchgear, as well as monitor key brassboard parameters through sensors located within these switches. The Ada code is downloaded from a PC/AT, and is resident within the 8086 microprocessor-based embedded controllers. The PC/AT is also used for smart terminal emulation, capable of controlling the switchgear as well as displaying data from them. Intelligent control is provided through use of a TI Explorer and the Autonomous Power Expert (APEX) LISP software. Real-time load scheduling is implemented through use of a 'C' program-based scheduling engine. The methods of communication between these computers and the brassboard are explored. In order to evaluate the features of both the

  4. Autonomous power system brassboard

    NASA Astrophysics Data System (ADS)

    Merolla, Anthony

    1992-10-01

    The Autonomous Power System (APS) brassboard is a 20 kHz power distribution system which has been developed at NASA Lewis Research Center, Cleveland, Ohio. The brassboard exists to provide a realistic hardware platform capable of testing artificially intelligent (AI) software. The brassboard's power circuit topology is based upon a Power Distribution Control Unit (PDCU), which is a subset of an advanced development 20 kHz electrical power system (EPS) testbed, originally designed for Space Station Freedom (SSF). The APS program is designed to demonstrate the application of intelligent software as a fault detection, isolation, and recovery methodology for space power systems. This report discusses both the hardware and software elements used to construct the present configuration of the brassboard. The brassboard power components are described. These include the solid-state switches (herein referred to as switchgear), transformers, sources, and loads. Closely linked to this power portion of the brassboard is the first level of embedded control. Hardware used to implement this control and its associated software is discussed. An Ada software program, developed by Lewis Research Center's Space Station Freedom Directorate for their 20 kHz testbed, is used to control the brassboard's switchgear, as well as monitor key brassboard parameters through sensors located within these switches. The Ada code is downloaded from a PC/AT, and is resident within the 8086 microprocessor-based embedded controllers. The PC/AT is also used for smart terminal emulation, capable of controlling the switchgear as well as displaying data from them. Intelligent control is provided through use of a TI Explorer and the Autonomous Power Expert (APEX) LISP software. Real-time load scheduling is implemented through use of a 'C' program-based scheduling engine. The methods of communication between these computers and the brassboard are explored. In order to evaluate the features of both the

  5. Algorithmic solution for autonomous vision-based off-road navigation

    NASA Astrophysics Data System (ADS)

    Kolesnik, Marina; Paar, Gerhard; Bauer, Arnold; Ulm, Michael

    1998-07-01

    A vision based navigation system is a basic tool to provide autonomous operations of unmanned vehicles. For off-road navigation, this means that a vehicle equipped with a stereo vision system, and perhaps a laser ranging device, shall be able to maintain a high level of autonomy under various illumination conditions and with little a priori information about the underlying scene. The task becomes particularly important for unmanned planetary exploration with the help of autonomous rovers. For example, in the LEDA Moon exploration project currently under focus by the European Space Agency (ESA), during the autonomous mode the vehicle (rover) should perform the following operations: on-board absolute localization, digital elevation model (DEM) generation, obstacle detection and relative localization, and global path planning and execution. The focus of this article is a computational solution for fully autonomous path planning and path execution. An operational DEM generation method based on stereoscopy is introduced. Self-localization on the DEM and robust natural feature tracking are used as basic navigation steps, supported by inertial sensor systems. The following operations are performed on the basis of stereo image sequences: 3D scene reconstruction, risk map generation, local path planning, camera position update during motion on the basis of landmark tracking, and obstacle avoidance. Experimental verification is done with the help of a laboratory terrain mockup and a high precision camera mounting device. It is shown that standalone tracking using automatically identified landmarks is robust enough to give navigation data for further stereoscopic reconstruction of the surrounding terrain. Iterative tracking and reconstruction leads to a complete description of the vehicle path and its surroundings with an accuracy high enough to meet the specifications for autonomous outdoor navigation.
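    Of the listed operations, risk map generation can be sketched simply: derive a binary traversability map from the DEM by thresholding the local slope between neighbouring cells. The cell size and slope threshold below are invented, not values from the LEDA study:

```python
# Illustrative risk-map step (not the actual LEDA/ESA pipeline): a DEM
# cell is marked risky if the slope to any 4-neighbour exceeds a limit.
# Grid spacing and slope threshold are assumed values.

def risk_map(dem, cell_size_m=0.5, max_slope=0.4):
    """dem: 2D list of elevations (m). Returns a 2D list of booleans."""
    rows, cols = len(dem), len(dem[0])
    risky = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    slope = abs(dem[nr][nc] - dem[r][c]) / cell_size_m
                    if slope > max_slope:
                        risky[r][c] = True
    return risky

dem = [[0.0, 0.0, 0.0],
       [0.0, 0.5, 0.0],      # a 0.5 m bump in flat terrain
       [0.0, 0.0, 0.0]]
risk = risk_map(dem)
```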

  6. Space environment robot vision system

    NASA Technical Reports Server (NTRS)

    Wood, H. John; Eichhorn, William L.

    1990-01-01

    A prototype twin-camera stereo vision system for autonomous robots has been developed at Goddard Space Flight Center. Standard charge coupled device (CCD) imagers are interfaced with commercial frame buffers and direct memory access to a computer. The overlapping portions of the images are analyzed using photogrammetric techniques to obtain information about the position and orientation of objects in the scene. The camera head consists of two 510 x 492 x 8-bit CCD cameras mounted on individually adjustable mounts. The 16 mm efl lenses are designed for minimum geometric distortion. The cameras can be rotated in the pitch, roll, and yaw (pan angle) directions with respect to their optical axes. Calibration routines have been developed which automatically determine the lens focal lengths and the pan angle between the two cameras. The calibration utilizes observations of a calibration structure with known geometry. Test results show the precision attainable is plus or minus 0.8 mm in range at a 2 m distance using a camera separation of 171 mm. To demonstrate a task needed on Space Station Freedom, a target structure with a movable I-beam was built. The camera head can autonomously direct actuators to dock the I-beam to another one so that they can be bolted together.
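    The quoted range precision can be related to the rig geometry through the standard stereo error-propagation formula dZ ≈ Z²·Δd/(f·B). The focal length in pixels and the disparity matching error below are assumptions, since the abstract does not state the imager's pixel pitch:

```python
# Back-of-envelope range-precision sketch: for a rectified stereo pair,
# Z = f*B/d, so a disparity uncertainty dd maps to a range uncertainty
# dZ ~= Z^2 * dd / (f * B). The 1000 px focal length and 0.05 px matching
# error are assumed; only the 171 mm baseline and 2 m range come from
# the abstract.

def range_uncertainty(z_m, focal_px, baseline_m, disparity_err_px):
    return z_m ** 2 * disparity_err_px / (focal_px * baseline_m)

dz = range_uncertainty(z_m=2.0, focal_px=1000.0, baseline_m=0.171,
                       disparity_err_px=0.05)   # on the order of a millimetre
```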

  7. Asteroid Exploration with Autonomic Systems

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Rash, James; Rouff, Christopher; Hinchey, Mike

    2004-01-01

    NASA is studying advanced technologies for a future robotic exploration mission to the asteroid belt. The prospective ANTS (Autonomous Nano Technology Swarm) mission comprises autonomous agents including worker agents (small spacecraft) designed to cooperate in asteroid exploration under the overall authority of at least one ruler agent (a larger spacecraft) whose goal is to cause science data to be returned to Earth. The ANTS team (ruler plus workers and messenger agents), but not necessarily any individual on the team, will exhibit behaviors that qualify it as an autonomic system, where an autonomic system is defined as a system that self-reconfigures, self-optimizes, self-heals, and self-protects. Autonomic system concepts lead naturally to realistic, scalable architectures rich in capabilities and behaviors. In-depth consideration of a major mission like ANTS in terms of autonomic systems brings new insights into alternative definitions of autonomic behavior. This paper gives an overview of the ANTS mission and discusses the autonomic properties of the mission.

  8. Vision-directed path planning, navigation, and control for an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Wong, Andrew K. C.; Gao, Song

    1993-05-01

    This paper presents a model and sensor based path planning, navigation and control system for an autonomous mobile robot (AMR) operating in a known laboratory environment. The goal of this research is to enable the AMR to use its on-board computer vision to: (a) locate its own position in the laboratory environment; (b) plan its path; (c) and navigate itself along the planned path. To determine the position and the orientation of the AMR before and during navigation, the vision system has to first recognize and locate the known landmarks, such as doors and columns, in the laboratory. Once the AMR relates its own position and orientation with the world environment, it is able to plan a path to reach a certain prescribed destination. In order to achieve on-line visual feedback, an autonomous target (landmark) acquisition, recognition, and tracking scheme is used. The AMR system is designed and developed to support flexible manufacturing in general, and surveillance and transporting materials in a hazardous environment as well as an autonomous space robotics project funded by MRCO and the Canadian Space Program related to the Freedom Space Station.
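    Step (b), planning a path through the known laboratory map, can be sketched with a plain breadth-first search over an occupancy grid. The paper does not specify its planner at this level of detail, so this is only a generic stand-in:

```python
from collections import deque

# Generic occupancy-grid path planner (breadth-first search), shown as a
# stand-in for the AMR's planning step; the paper's actual planner is
# not described in the abstract.

def plan_path(grid, start, goal):
    """grid: 2D list, 1 = obstacle. Returns a list of (row, col) or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:        # walk back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and \
                    grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                            # destination unreachable

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (0, 2))    # route around the wall of 1s
```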

  9. Near real-time stereo vision system

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
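    The correlation step can be illustrated in miniature: for a pixel on a rectified scanline pair, pick the disparity that minimizes the sum of squared differences between windows. The pyramid construction and Bayesian confidence estimation of the patent are omitted, and the window and search range are arbitrary:

```python
# Simplified sketch of least-squares (SSD) stereo matching on a single
# rectified scanline pair; the patent's Laplacian pyramids and Bayesian
# confidence ranges are not reproduced here.

def ssd_disparity(left, right, x, window=1, max_disp=4):
    """Disparity at pixel x: the shift d minimizing sum of squared
    differences between left[x-w..x+w] and right[x-d-w..x-d+w]."""
    best_d, best_ssd = 0, float("inf")
    for d in range(max_disp + 1):
        ssd = 0
        for k in range(-window, window + 1):
            if not (0 <= x + k < len(left) and 0 <= x + k - d < len(right)):
                ssd = float("inf")         # window falls off the image
                break
            ssd += (left[x + k] - right[x + k - d]) ** 2
        if ssd < best_ssd:
            best_d, best_ssd = d, ssd
    return best_d

left  = [0, 0, 0, 9, 5, 9, 0, 0]   # feature at index 3..5
right = [0, 9, 5, 9, 0, 0, 0, 0]   # same feature shifted left by 2
d = ssd_disparity(left, right, x=4)
```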

  10. Near real-time stereo vision system

    NASA Astrophysics Data System (ADS)

    Matthies, Larry H.; Anderson, Charles H.

    1991-12-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.

  11. Monocular feature tracker for low-cost stereo vision control of an autonomous guided vehicle (AGV)

    NASA Astrophysics Data System (ADS)

    Pearson, Chris M.; Probert, Penelope J.

    1994-02-01

    We describe a monocular feature tracker (MFT), the first stage of a low-cost stereoscopic vision system for use on an autonomous guided vehicle (AGV) in an indoor environment. The system does not require artificial markings or other beacons, but relies upon accurate knowledge of the AGV motion. Linear array cameras (LAC) are used to reduce the data and processing bandwidths. The limited information given by LAC requires modelling of the expected features. We model an obstacle as a vertical line segment touching the floor, and can distinguish between these obstacles and most other clutter in an image sequence. Detection of these obstacles provides sufficient information for local AGV navigation.

  12. Industrial robot's vision systems

    NASA Astrophysics Data System (ADS)

    Iureva, Radda A.; Raskin, Evgeni O.; Komarov, Igor I.; Maltseva, Nadezhda K.; Fedosovsky, Michael E.

    2016-03-01

    With the improved economic situation in the high-technology sectors, work on the creation of industrial robots and special mobile robotic systems has resumed. Despite this, robotic control systems have mostly remained unchanged, with all the advantages and disadvantages of the existing designs; this is due to a lack of funding for vision capabilities that could greatly facilitate the work of the operator and, in some cases, completely replace it. The paper is concerned with the machine vision complex of a robotic system for monitoring underground pipelines, which collects and analyzes up to 90% of the necessary information. Vision systems are used to identify obstacles in the path of movement along a trajectory and to determine their origin, dimensions, and character. The object is illuminated with structured light, and a TV camera records the projected structure. Distortions of the structure uniquely determine the shape of the object in view of the camera. The reference illumination is synchronized with the camera. The main parameters of the system are the base distance between the light generator and the camera and the parallax angle (the angle between the optical axes of the projection unit and the camera).
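    The structured-light geometry reduces to triangulation over the base distance. A sketch for an assumed rig, with the camera at the origin and the light generator at baseline distance B along the x axis, using the interior angles measured from the baseline (the paper's actual rig parameters are not given):

```python
import math

# Triangulation sketch for an assumed structured-light rig: with interior
# angles alpha (projector) and beta (camera) measured from the baseline,
# the law of sines gives the depth of the illuminated point.

def structured_light_depth(baseline_m, alpha_rad, beta_rad):
    """Perpendicular distance of the lit point from the baseline."""
    return (baseline_m * math.sin(alpha_rad) * math.sin(beta_rad)
            / math.sin(alpha_rad + beta_rad))

# Symmetric 45/45 geometry: the point sits at depth B/2.
z = structured_light_depth(1.0, math.radians(45), math.radians(45))
```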

  13. A simple, inexpensive, and effective implementation of a vision-guided autonomous robot

    NASA Astrophysics Data System (ADS)

    Tippetts, Beau; Lillywhite, Kirt; Fowers, Spencer; Dennis, Aaron; Lee, Dah-Jye; Archibald, James

    2006-10-01

    This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. This implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. A secondhand electric wheelchair, purchased from a local thrift store for $28, served as the robot base. The base was modified to include Kegresse tracks using a friction drum system; this modification allowed the robot to perform better on a variety of terrains, resolving issues with the previous year's design. To control the wheelchair while retaining its robust built-in motor controls, the joystick was simply removed and replaced with a printed circuit board that emulated joystick operation and was capable of receiving commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each of the algorithms used color segmentation methods to interpret data from a digital camera in order to identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
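    Of the three algorithms compared, the potential fields approach is easy to sketch: the goal exerts an attraction, each detected obstacle a repulsion within an influence radius, and the robot steers along the summed vector. The gains and geometry here are illustrative, not the team's tuning:

```python
import math

# Minimal potential-fields sketch (one of the three approaches the paper
# compares); gains, influence radius, and positions are invented.

def potential_field_heading(robot, goal, obstacles,
                            k_att=1.0, k_rep=0.5, influence=2.0):
    """Steering heading (radians) from summed attraction and repulsion."""
    fx = k_att * (goal[0] - robot[0])          # attraction toward goal
    fy = k_att * (goal[1] - robot[1])
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < influence:                  # repulsion inside radius
            mag = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    return math.atan2(fy, fx)

# Goal straight ahead, obstacle slightly left: heading bends right (negative).
heading = potential_field_heading((0, 0), (5, 0), [(1, 0.5)])
```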

  14. Awareness and Responsibility in Autonomous Weapons Systems

    NASA Astrophysics Data System (ADS)

    Bhuta, Nehal; Rotolo, Antonino; Sartor, Giovanni

    The following sections are included: * Introduction * Why Computational Awareness is Important in Autonomous Weapons * Flying Drones and Other Autonomous Weapons * The Impact of Autonomous Weapons Systems * From Autonomy to Awareness: A Perspective from Science Fiction * Summary and Conclusions

  15. vSLAM: vision-based SLAM for autonomous vehicle navigation

    NASA Astrophysics Data System (ADS)

    Goncalves, Luis; Karlsson, Niklas; Ostrowski, Jim; Di Bernardo, Enrico; Pirjanian, Paolo

    2004-09-01

    Among the numerous challenges of building autonomous/unmanned vehicles is that of reliable and autonomous localization in an unknown environment. In this paper we present a system that can efficiently and autonomously solve the robotics 'SLAM' problem, in which a robot placed in an unknown environment must simultaneously localize itself and make a map of the environment. The system is vision-based, and makes use of Evolution Robotics' powerful object recognition technology. As the robot explores the environment, it continuously performs four tasks, using information from acquired images and the drive system odometry. The robot: (1) recognizes previously created 3-D visual landmarks; (2) builds new 3-D visual landmarks; (3) updates the current estimate of its location, using the map; (4) updates the landmark map. In indoor environments, the system can build a map of a 5 m by 5 m area in approximately 20 minutes, and can localize itself with an accuracy of approximately 15 cm in position and 3 degrees in orientation relative to the global reference frame of the landmark map. The same system can be adapted for outdoor, vehicular use.
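    Task (3), updating the location estimate from a recognized landmark, can be caricatured as a one-dimensional Kalman-style fusion of the odometry prediction with the landmark fix. The real vSLAM filter is of course multidimensional, and the variances below are invented:

```python
# 1-D stand-in for the SLAM location update: blend the odometry estimate
# with a recognized landmark's position fix in inverse proportion to
# their (assumed) variances.

def fuse(odometry_est, odometry_var, landmark_fix, landmark_var):
    """Kalman-style fusion of two noisy estimates of the same position."""
    w = landmark_var / (odometry_var + landmark_var)   # weight on odometry
    fused = w * odometry_est + (1.0 - w) * landmark_fix
    fused_var = odometry_var * landmark_var / (odometry_var + landmark_var)
    return fused, fused_var

# Odometry says 5.0 m (var 0.4); landmark recognition says 5.3 m (var 0.1).
pos, var = fuse(5.0, 0.4, 5.3, 0.1)   # result leans toward the landmark
```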

  16. Visions image operating system

    SciTech Connect

    Kohler, R.R.; Hanson, A.R.

    1982-01-01

    The image operating system (IOS) is a complete software environment specifically designed for dynamic experimentation in scene analysis. The IOS consists of a high-level interpretive control language (LISP) with efficient image operators in a noninterpretive language. The image operators are viewed as local operators to be applied in parallel at all pixels of a set of input images. In order to carry out complex image analysis experiments, an environment conducive to such experimentation was needed. This environment is provided by the VISIONS image operating system, based on a computational structure known as a processing cone proposed by Hanson and Riseman (1974, 1980) and implemented on a VAX-11/780 running VMS. 6 references.
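    The computational model of local operators applied (conceptually in parallel) at every pixel can be sketched as a plain loop with a 3x3 averaging operator and edge clamping; nothing about the original VAX/VMS implementation is implied:

```python
# Sketch of the IOS computational model: one local operator evaluated at
# the 3x3 neighbourhood of every pixel. The averaging operator and the
# edge-clamping convention are illustrative choices.

def apply_local_operator(image, op):
    """op receives the 3x3 neighbourhood (edges clamped) of each pixel."""
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            patch = [[image[min(max(r + i, 0), rows - 1)]
                           [min(max(c + j, 0), cols - 1)]
                      for j in (-1, 0, 1)] for i in (-1, 0, 1)]
            out[r][c] = op(patch)
    return out

def mean3x3(patch):
    return sum(sum(row) for row in patch) / 9.0

smoothed = apply_local_operator([[0, 0, 0],
                                 [0, 9, 0],
                                 [0, 0, 0]], mean3x3)
```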

  17. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  18. Self-adaptive Vision System

    NASA Astrophysics Data System (ADS)

    Stipancic, Tomislav; Jerbic, Bojan

    Light conditions are an important part of every vision application. This paper describes an active behavioral scheme for a particular active vision system. The behavioral scheme enables the system to adapt to current environmental conditions by constantly validating the amount of reflected light with a luminance meter and dynamically changing significant vision parameters. The purpose of the experiment was to determine the connections between light conditions and inner vision parameters. As part of the experiment, Response Surface Methodology (RSM) was used to predict values of the vision parameters with respect to luminance input values; RSM approximates an unknown function for which only a few values have been computed. The main output validation parameter is called Match Score, which indicates how well a found object matches the learned model. All obtained data are stored in a local database. By applying new parameters predicted by the RSM in a timely manner, the vision application works in a stable and robust manner.
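    The RSM step, approximating an unknown parameter-versus-luminance function from a few measured values, can be caricatured by fitting the unique quadratic through three calibration samples and querying it at unmeasured luminance levels. The sample values below are invented, not the paper's data:

```python
# Toy stand-in for the response-surface fit: the unique quadratic through
# three (luminance, parameter) samples, in Lagrange form. The calibration
# points are invented.

def quadratic_through(p0, p1, p2):
    """Return f(x) for the quadratic interpolating three (x, y) points."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    def f(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return f

# Invented calibration samples: (luminance in cd/m^2, exposure parameter).
predict = quadratic_through((50, 8.0), (100, 5.0), (200, 3.5))
value = predict(150)   # predicted parameter at an unmeasured luminance
```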

  19. Progress towards autonomous, intelligent systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry; Heer, Ewald

    1987-01-01

    An aggressive program has been initiated to develop, integrate, and implement autonomous systems technologies, starting with today's expert systems and evolving to autonomous, intelligent systems by the end of the 1990s. This program includes core technology developments and demonstration projects for technology evaluation and validation. This paper discusses key operational frameworks in the context of systems autonomy applications and then identifies major technological challenges, primarily in artificial intelligence areas. Program content and progress made towards critical technologies and demonstrations that have been initiated to achieve the required future capabilities in the year 2000 era are discussed.

  20. Gas House Autonomous System Monitoring

    NASA Technical Reports Server (NTRS)

    Miller, Luke; Edsall, Ashley

    2015-01-01

    Gas House Autonomous System Monitoring (GHASM) will employ Integrated System Health Monitoring (ISHM) of cryogenic fluids in the High Pressure Gas Facility at Stennis Space Center. The preliminary focus of development incorporates the passive monitoring and eventual commanding of the Nitrogen System. ISHM offers generic system awareness, adept at using concepts rather than specific error cases. As an enabler for autonomy, ISHM provides capabilities inclusive of anomaly detection, diagnosis, and abnormality prediction. Advancing ISHM and Autonomous Operation functional capabilities enhances quality of data, optimizes safety, improves cost effectiveness, and has direct benefits to a wide spectrum of aerospace applications.

  1. Autonomous navigation system for the Marsokhod rover project

    NASA Technical Reports Server (NTRS)

    Proy, C.; Lamboley, M.; Rastel, L.

    1994-01-01

    This paper presents a general overview of the Marsokhod rover mission. The autonomous navigation of a Mars exploration rover is controlled by a vision system developed on the basis of two CCD cameras, stereovision, and path-planning algorithms. Its performance has been tested on a Mars-like experimental site.

  2. Contingency Software in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn; Patterson-Hine, Ann

    2006-01-01

    This viewgraph presentation reviews the development of contingency software for autonomous systems. Autonomous vehicles currently have a limited capacity to diagnose and mitigate failures. There is a need to be able to handle a broader range of contingencies. The goals of the project are: 1. Speed up diagnosis and mitigation of anomalous situations. 2. Automatically handle contingencies, not just failures. 3. Enable projects to select a degree of autonomy consistent with their needs and to incrementally introduce more autonomy. 4. Augment on-board fault protection with verified contingency scripts.

  3. Intelligent, autonomous systems in space

    NASA Technical Reports Server (NTRS)

    Lum, H.; Heer, E.

    1988-01-01

    The Space Station is expected to be equipped with intelligent, autonomous capabilities; to achieve and incorporate these capabilities, the required technologies need to be identified, developed, and validated within realistic application scenarios. The critical technologies for the development of intelligent, autonomous systems are discussed in the context of a generalized functional architecture. The present state of this technology implies that it be introduced and applied in an evolutionary process which must start during the Space Station design phase. An approach is proposed to accomplish design information acquisition and management for knowledge-base development.

  4. Testing the autonomic nervous system.

    PubMed

    Freeman, Roy; Chapleau, Mark W

    2013-01-01

    Autonomic testing is used to define the role of the autonomic nervous system in diverse clinical and research settings. Because most of the autonomic nervous system is inaccessible to direct physiological testing, in the clinical setting the most widely used techniques entail the assessment of an end-organ response to a physiological provocation. The noninvasive measures of cardiovascular parasympathetic function involve the assessment of heart rate variability while the measures of cardiovascular sympathetic function assess the blood pressure response to physiological stimuli. Tilt-table testing, with or without pharmacological provocation, has become an important tool in the assessment of a predisposition to neurally mediated (vasovagal) syncope, the postural tachycardia syndrome, and orthostatic hypotension. Distal, postganglionic, sympathetic cholinergic (sudomotor) function may be evaluated by provoking axon reflex mediated sweating, e.g., the quantitative sudomotor axon reflex (QSART) or the quantitative direct and indirect axon reflex (QDIRT). The thermoregulatory sweat test provides a nonlocalizing measure of global pre- and postganglionic sudomotor function. Frequency domain analyses of heart rate and blood pressure variability, microneurography, and baroreflex assessment are currently research tools but may find a place in the clinical assessment of autonomic function in the future. PMID:23931777
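    The heart-rate-variability assessment mentioned above is commonly summarized with time-domain statistics such as SDNN and RMSSD, computed from a series of RR intervals. A minimal sketch, using a made-up RR series in milliseconds (the formulas are standard; the data are not from the article):

```python
import math

rr_ms = [812, 845, 790, 860, 835, 805, 870, 825]  # hypothetical RR intervals

mean_rr = sum(rr_ms) / len(rr_ms)
# SDNN: sample standard deviation of all RR intervals (overall variability)
sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (len(rr_ms) - 1))
# RMSSD: root mean square of successive differences, reflecting
# beat-to-beat (largely parasympathetically mediated) variability
diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))

print(sdnn, rmssd)
```

    Frequency-domain indices (e.g., low- and high-frequency spectral power) extend the same RR series analysis, as the abstract notes for research settings.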

  5. A Robust Compositional Architecture for Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Deney, Ewen; Farrell, Kimberley; Giannakopoulos, Dimitra; Jonsson, Ari; Frank, Jeremy; Bobby, Mark; Carpenter, Todd; Estlin, Tara

    2006-01-01

    Space exploration applications can benefit greatly from autonomous systems. Great distances, limited communications, and high costs make direct operations impossible while mandating operations reliability and efficiency beyond what traditional commanding can provide. Autonomous systems can improve reliability and enhance spacecraft capability significantly. However, there is reluctance to utilize autonomous systems. In part this is due to general hesitation about new technologies, but a more tangible concern is the reliability and predictability of autonomous software. In this paper, we describe ongoing work aimed at increasing the robustness and predictability of autonomous software, with the ultimate goal of building trust in such systems. The work combines state-of-the-art technologies and capabilities in autonomous systems with advanced validation and synthesis techniques. The focus of this paper is on the autonomous system architecture that has been defined, and on how it enables the application of validation techniques to the resulting autonomous systems.

  6. Autonomous spacecraft landing through human pre-attentive vision.

    PubMed

    Schiavone, Giuseppina; Izzo, Dario; Simões, Luís F; de Croon, Guido C H E

    2012-06-01

    In this work, we exploit a computational model of human pre-attentive vision to guide the descent of a spacecraft on extraterrestrial bodies. Providing the spacecraft with high degrees of autonomy is a challenge for future space missions. To date, major efforts in this research field have concentrated on hazard-avoidance algorithms and landmark detection, often by reference to a priori maps ranked by scientists according to specific scientific criteria. Here, we present a bio-inspired approach based on the human ability to quickly select intrinsically salient targets in the visual scene; this ability is fundamental for fast decision-making processes in unpredictable and unknown circumstances. The proposed system integrates a simple model of the spacecraft and optimality principles which guarantee minimum fuel consumption during the landing procedure; detected salient sites are used for retargeting the spacecraft trajectory, under safety and reachability conditions. We compare the decisions taken by the proposed algorithm with those of a number of human subjects tested under the same conditions. Our results show that the developed algorithm is indistinguishable from the human subjects with respect to the areas, occurrence, and timing of the retargeting. PMID:22617300

  7. Autonomous spacecraft landing through human pre-attentive vision.

    PubMed

    Schiavone, Giuseppina; Izzo, Dario; Simões, Luís F; de Croon, Guido C H E

    2012-06-01

    In this work, we exploit a computational model of human pre-attentive vision to guide the descent of a spacecraft on extraterrestrial bodies. Providing the spacecraft with high degrees of autonomy is a challenge for future space missions. To date, major efforts in this research field have concentrated on hazard-avoidance algorithms and landmark detection, often by reference to a priori maps ranked by scientists according to specific scientific criteria. Here, we present a bio-inspired approach based on the human ability to quickly select intrinsically salient targets in the visual scene; this ability is fundamental for fast decision-making processes in unpredictable and unknown circumstances. The proposed system integrates a simple model of the spacecraft and optimality principles which guarantee minimum fuel consumption during the landing procedure; detected salient sites are used for retargeting the spacecraft trajectory, under safety and reachability conditions. We compare the decisions taken by the proposed algorithm with those of a number of human subjects tested under the same conditions. Our results show that the developed algorithm is indistinguishable from the human subjects with respect to the areas, occurrence, and timing of the retargeting.

  8. Semi autonomous mine detection system

    SciTech Connect

    Douglas Few; Roelof Versteeg; Herman Herman

    2010-04-01

    CMMAD is a risk reduction effort for the AMDS program. As part of CMMAD, multiple instances of semi-autonomous robotic mine detection systems were created. Each instance consists of a robotic vehicle equipped with the sensors required for navigation and marking, countermine sensors, and a number of integrated software packages which provide for real-time processing of the countermine sensor data as well as integrated control of the robotic vehicle, the sensor actuator, and the sensor. These systems were used to investigate critical interest functions (CIF) related to countermine robotic systems. To address the autonomy CIF, the INL-developed RIK was extended to allow for interaction with a mine sensor processing code (MSPC). In limited field testing this system performed well in detecting, marking, and avoiding both AT and AP mines. Based on the results of the CMMAD investigation we conclude that autonomous robotic mine detection is feasible. In addition, CMMAD contributed critical technical advances with regard to sensing, data processing, and sensor manipulation, which will advance the performance of future fieldable systems. As a result, no substantial technical barriers exist which preclude – from an autonomous robotic perspective – the rapid development and deployment of fieldable systems.

  9. Semi autonomous mine detection system

    NASA Astrophysics Data System (ADS)

    Few, Doug; Versteeg, Roelof; Herman, Herman

    2010-04-01

    CMMAD is a risk reduction effort for the AMDS program. As part of CMMAD, multiple instances of semi-autonomous robotic mine detection systems were created. Each instance consists of a robotic vehicle equipped with the sensors required for navigation and marking, countermine sensors, and a number of integrated software packages which provide for real-time processing of the countermine sensor data as well as integrated control of the robotic vehicle, the sensor actuator, and the sensor. These systems were used to investigate critical interest functions (CIF) related to countermine robotic systems. To address the autonomy CIF, the INL-developed RIK was extended to allow for interaction with a mine sensor processing code (MSPC). In limited field testing this system performed well in detecting, marking, and avoiding both AT and AP mines. Based on the results of the CMMAD investigation we conclude that autonomous robotic mine detection is feasible. In addition, CMMAD contributed critical technical advances with regard to sensing, data processing, and sensor manipulation, which will advance the performance of future fieldable systems. As a result, no substantial technical barriers exist which preclude - from an autonomous robotic perspective - the rapid development and deployment of fieldable systems.

  10. Monocular-Vision-Based Autonomous Hovering for a Miniature Flying Ball

    PubMed Central

    Lin, Junqin; Han, Baoling; Luo, Qingsheng

    2015-01-01

    This paper presents a method for detecting and controlling the autonomous hovering of a miniature flying ball (MFB) based on monocular vision. A camera is employed to estimate the three-dimensional position of the vehicle relative to the ground without auxiliary sensors, such as inertial measurement units (IMUs). An image of the ground captured by the camera mounted directly under the miniature flying ball is set as a reference. The position variations between the subsequent frames and the reference image are calculated by comparing their correspondence points. The Kalman filter is used to predict the position of the miniature flying ball to handle situations, such as a lost or wrong frame. Finally, a PID controller is designed, and the performance of the entire system is tested experimentally. The results show that the proposed method can keep the aircraft in a stable hover. PMID:26057040
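    The lost-frame handling described above can be sketched with a one-dimensional constant-velocity Kalman filter: when a frame is lost or rejected, only the prediction step runs. The frame rate, noise covariances, and measurement sequence below are illustrative assumptions, not the MFB's actual tuning.

```python
import numpy as np

dt = 0.05                                  # assumed 20 Hz frame rate
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity transition
H = np.array([[1.0, 0.0]])                 # only position is measured
Q = np.eye(2) * 1e-4                       # process noise (illustrative)
R = np.array([[1e-2]])                     # measurement noise (illustrative)

x = np.array([[0.0], [0.0]])               # state: [position, velocity]
P = np.eye(2)

def step(z):
    """One predict/update cycle; pass z=None for a lost or rejected frame."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q          # predict (always runs)
    if z is not None:                      # update only on a valid frame
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return float(x[0, 0])

for z in [0.00, 0.05, 0.10, None, 0.20]:   # fourth frame is "lost"
    est = step(z)
```

    The predicted position during the dropped frame is what the PID controller would consume, so the hover loop keeps running at a steady rate even when vision fails momentarily.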

  11. Monocular-Vision-Based Autonomous Hovering for a Miniature Flying Ball.

    PubMed

    Lin, Junqin; Han, Baoling; Luo, Qingsheng

    2015-01-01

    This paper presents a method for detecting and controlling the autonomous hovering of a miniature flying ball (MFB) based on monocular vision. A camera is employed to estimate the three-dimensional position of the vehicle relative to the ground without auxiliary sensors, such as inertial measurement units (IMUs). An image of the ground captured by the camera mounted directly under the miniature flying ball is set as a reference. The position variations between the subsequent frames and the reference image are calculated by comparing their correspondence points. The Kalman filter is used to predict the position of the miniature flying ball to handle situations, such as a lost or wrong frame. Finally, a PID controller is designed, and the performance of the entire system is tested experimentally. The results show that the proposed method can keep the aircraft in a stable hover. PMID:26057040

  12. Knowledge acquisition for autonomous systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry; Heer, Ewald

    1988-01-01

    Knowledge-based capabilities for autonomous aerospace systems, such as the NASA Space Station, must encompass conflict-resolution functions comparable to those of human operators, with all elements of the system working toward system goals in a concurrent, asynchronous-but-coordinated fashion. Knowledge extracted from a design database will support robotic systems by furnishing geometric, structural, and causal descriptions required for repair, disassembly, and assembly. The factual knowledge for these databases will be obtained from a master database through a technical management information system, and it will in many cases have to be augmented by domain-specific heuristic knowledge acquired from domain experts.

  13. Stereo-vision framework for autonomous vehicle guidance and collision avoidance

    NASA Astrophysics Data System (ADS)

    Scott, Douglas A.

    2003-08-01

    During a pre-programmed course to a particular destination, an autonomous vehicle may encounter environments that are unknown at the time of operation. Some regions may contain objects or vehicles that were not anticipated during the mission-planning phase, and user intervention is often not possible or desirable under these circumstances. The onboard navigation system is therefore required to automatically make short-term adjustments to the flight plan and to apply the necessary course corrections: a suitable path is visually navigated through the environment to reliably avoid obstacles without significant deviations from the original course. This paper describes a general low-cost stereo-vision sensor framework for passively estimating the range map between a forward-looking autonomous vehicle and its environment. Typical vehicles may be either unmanned ground or airborne vehicles. The range-map image describes the relative distance from the vehicle to the observed environment and contains information that could be used to compute a navigable flight plan, as well as visual and geometric detail about the environment for other onboard processes or future missions. Aspects relating to information flow through the framework are discussed, along with issues such as robustness, implementation, and other advantages and disadvantages of the framework. An outline of the physical structure of the system is presented, and an overview of the algorithms and applications of the framework is given.
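    The core geometry behind any such passive stereo range map is the disparity-to-depth relation Z = f·B/d. A minimal sketch, with an assumed focal length, baseline, and a tiny hypothetical disparity map (none of these values are from the paper):

```python
import numpy as np

f_px = 700.0        # focal length in pixels (assumed)
baseline_m = 0.12   # stereo baseline in metres (assumed)

# Hypothetical 3x3 disparity map in pixels; 0 marks "no stereo match"
disparity = np.array([[14.0, 10.5, 7.0],
                      [12.0,  0.0, 6.0],
                      [ 9.0,  8.4, 5.6]])

# Range map: Z = f * B / d, with unmatched pixels left at infinity
range_map = np.full_like(disparity, np.inf)
valid = disparity > 0
range_map[valid] = f_px * baseline_m / disparity[valid]

print(range_map.round(2))
```

    Larger disparities (closer objects) map to smaller ranges, which is why nearby obstacles stand out directly in the range image that the navigation logic consumes.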

  14. Multi-agent autonomous system

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor); Dohm, James (Inventor); Tarbell, Mark A. (Inventor)

    2010-01-01

    A multi-agent autonomous system for exploration of hazardous or inaccessible locations. The multi-agent autonomous system includes simple surface-based agents or craft controlled by an airborne tracking and command system. The airborne tracking and command system includes an instrument suite used to image an operational area and any craft deployed within the operational area. The image data is used to identify the craft, targets for exploration, and obstacles in the operational area. The tracking and command system determines paths for the surface-based craft using the identified targets and obstacles and commands the craft using simple movement commands to move through the operational area to the targets while avoiding the obstacles. Each craft includes its own instrument suite to collect information about the operational area that is transmitted back to the tracking and command system. The tracking and command system may be further coupled to a satellite system to provide additional image information about the operational area and provide operational and location commands to the tracking and command system.

  15. Integrated System for Autonomous Science

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Sherwood, Robert; Tran, Daniel; Cichy, Benjamin; Davies, Ashley; Castano, Rebecca; Rabideau, Gregg; Frye, Stuart; Trout, Bruce; Shulman, Seth; Doggett, Thomas; Ip, Felipe; Greeley, Ron; Baker, Victor; Dohn, James; Boyer, Darrell

    2006-01-01

    The New Millennium Program Space Technology 6 Project Autonomous Sciencecraft software implements an integrated system for autonomous planning and execution of scientific, engineering, and spacecraft-coordination actions. A prior version of this software was reported in "The TechSat 21 Autonomous Sciencecraft Experiment" (NPO-30784), NASA Tech Briefs, Vol. 28, No. 3 (March 2004), page 33. This software is now in continuous use aboard the Earth Orbiter 1 (EO-1) spacecraft mission and is being adapted for use in the Mars Odyssey and Mars Exploration Rovers missions. This software enables EO-1 to detect and respond to such events of scientific interest as volcanic activity, flooding, and freezing and thawing of water. It uses classification algorithms to analyze imagery onboard to detect changes, including events of scientific interest. Detection of such events triggers acquisition of follow-up imagery. The mission-planning component of the software develops a response plan that accounts for visibility of targets and operational constraints. The plan is then executed under control by a task-execution component of the software that is capable of responding to anomalies.

  16. The Autonomous Pathogen Detection System

    SciTech Connect

    Dzenitis, J M; Makarewicz, A J

    2009-01-13

    We developed, tested, and now operate a civilian biological defense capability that continuously monitors the air for biological threat agents. The Autonomous Pathogen Detection System (APDS) collects, prepares, reads, analyzes, and reports the results of multiplexed immunoassays and multiplexed PCR assays using Luminex® xMAP technology and a flow cytometer. The mission we conduct is particularly demanding: continuous monitoring, multiple threat agents, high sensitivity, challenging environments, and ultimately extremely low false positive rates. Here, we introduce the mission requirements and metrics, show the system engineering and analysis framework, and describe the progress to date, including early development and current status.

  17. Coherent laser vision system (CLVS)

    SciTech Connect

    1997-02-13

    The purpose of the CLVS research project is to develop a prototype fiber-optic-based Coherent Laser Vision System suitable for DOE's EM Robotics program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update geometric data on the order of once per second. The CLVS project plan required implementation in two phases of the contract, a Base Contract and a continuance option. This is the Base Program Interim Phase Topical Report presenting the results of Phase 1 of the CLVS research project. Test and demonstration results provide a proof of concept for a system with the performance capability required to update 3D geometric data on the order of once per second.

  18. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields, including the neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real time.

  19. APDS: Autonomous Pathogen Detection System

    SciTech Connect

    Langlois, R G; Brown, S; Burris, L; Colston, B; Jones, L; Makarewicz, T; Mariella, R; Masquelier, D; McBride, M; Milanovich, F; Masarabadi, S; Venkateswaran, K; Marshall, G; Olson, D; Wolcott, D

    2002-02-14

    An early warning system to counter bioterrorism, the Autonomous Pathogen Detection System (APDS) continuously monitors the environment for the presence of biological pathogens (e.g., anthrax) and once detected, it sounds an alarm much like a smoke detector warns of a fire. Long before September 11, 2001, this system was being developed to protect domestic venues and events including performing arts centers, mass transit systems, major sporting and entertainment events, and other high profile situations in which the public is at risk of becoming a target of bioterrorist attacks. Customizing off-the-shelf components and developing new components, a multidisciplinary team developed APDS, a stand-alone system for rapid, continuous monitoring of multiple airborne biological threat agents in the environment. The completely automated APDS samples the air, prepares fluid samples in-line, and performs two orthogonal tests: immunoassay and nucleic acid detection. When compared to competing technologies, APDS is unprecedented in terms of flexibility and system performance.

  20. Remote-controlled vision-guided mobile robot system

    NASA Astrophysics Data System (ADS)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle; the vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data are processed by a high-speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.
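    The line-following step described above can be sketched as a proportional controller on the lateral offset of the tracked lane-marker blobs. The image width, gain, sign convention, and blob coordinates below are all illustrative assumptions, not the Bearcat's actual values.

```python
IMAGE_CENTER_X = 320   # assumed 640-pixel-wide image
K_P = 0.005            # proportional steering gain (illustrative)

def steering_command(blob_xs):
    """Steering correction (rad) from the X coordinates of lane-marker blobs.

    Positive output steers toward larger X; an empty list (no line seen)
    holds the current course.
    """
    if not blob_xs:
        return 0.0
    offset = sum(blob_xs) / len(blob_xs) - IMAGE_CENTER_X
    return K_P * offset

cmd = steering_command([300, 305, 298])  # line slightly left of center
```

    A fielded controller would add derivative damping and fuse the ultrasonic obstacle-avoidance output before commanding the steering motor, but the proportional term captures the basic vision-to-actuation path.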

  1. Real-time 3D vision solution for on-orbit autonomous rendezvous and docking

    NASA Astrophysics Data System (ADS)

    Ruel, S.; English, C.; Anctil, M.; Daly, J.; Smith, C.; Zhu, S.

    2006-05-01

    Neptec has developed a vision system for the capture of non-cooperative objects on orbit. This system uses an active TriDAR sensor and a model based tracking algorithm to provide 6 degree of freedom pose information in real-time from mid range to docking. This system was selected for the Hubble Robotic Vehicle De-orbit Module (HRVDM) mission and for a Detailed Test Objective (DTO) mission to fly on the Space Shuttle. TriDAR (triangulation + LIDAR) technology makes use of a novel approach to 3D sensing by combining triangulation and Time-of-Flight (ToF) active ranging techniques in the same optical path. This approach exploits the complementary nature of these sensing technologies. Real-time tracking of target objects is accomplished using 3D model based tracking algorithms developed at Neptec in partnership with the Canadian Space Agency (CSA). The system provides 6 degrees of freedom pose estimation and incorporates search capabilities to initiate and recover tracking. Pose estimation is performed using an innovative approach that is faster than traditional techniques. This performance allows the algorithms to operate in real-time on the TriDAR's flight certified embedded processor. This paper presents results from simulation and lab testing demonstrating that the system's performance meets the requirements of a complete tracking system for on-orbit autonomous rendezvous and docking.
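    A core step common to model-based 3-D tracking of this kind (not Neptec's proprietary algorithm) is recovering the rigid rotation and translation that best align matched model and sensor points, which the SVD-based Kabsch method solves in closed form. A minimal sketch with synthetic data:

```python
import numpy as np

def rigid_pose(model_pts, sensor_pts):
    """Least-squares R, t such that sensor ≈ R @ model + t (Kabsch method)."""
    mc, sc = model_pts.mean(axis=0), sensor_pts.mean(axis=0)
    H = (model_pts - mc).T @ (sensor_pts - sc)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = sc - R @ mc
    return R, t

# Synthetic check: rotate a small "model" 30 deg about Z and translate it
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]],
                 dtype=float)
sensor = model @ R_true.T + np.array([0.5, -0.2, 1.0])
R_est, t_est = rigid_pose(model, sensor)
```

    In a full tracker this pose solve sits inside an iterative loop that re-associates sensor points with the model at each frame; running it on an embedded flight processor in real time is what the abstract's faster-than-traditional approach addresses.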

  2. Autonomous pathogen detection system 2001

    SciTech Connect

    Langlois, R G; Wang, A; Colston, B; Masquelier, D; Jones, L; Venkateswaran, K S; Nasarabadi, S; Brown, S; Ramponi, A; Milanovich, F P

    2001-01-09

    The objective of this project is to design, fabricate and field-demonstrate a fully Autonomous Pathogen Detector (identifier) System (APDS). This will be accomplished by integrating a proven flow cytometer and real-time polymerase chain reaction (PCR) detector with sample collection, sample preparation and fluidics to provide a compact, autonomously operating instrument capable of simultaneously detecting multiple pathogens and/or toxins. The APDS will be designed to operate in fixed locations, where it continuously monitors air samples and automatically reports the presence of specific biological agents. The APDS will utilize both multiplex immuno and nucleic acid assays to provide ''quasi-orthogonal'', multiple agent detection approaches to minimize false positives and increase the reliability of identification. Technical advancements across several fronts must first be made in order to realize the full extent of the APDS. Commercialization will be accomplished through three progressive generations of instruments. The APDS is targeted for domestic applications in which (1) the public is at high risk of exposure to covert releases of bioagent such as in major subway systems and other transportation terminals, large office complexes, and convention centers; and (2) as part of a monitoring network of sensors integrated with command and control systems for wide area monitoring of urban areas and major gatherings (e.g., inaugurations, Olympics, etc.). In this latter application there is potential that a fully developed APDS could add value to Defense Department monitoring architectures.

  3. Autonomous Biological System (ABS) experiments.

    PubMed

    MacCallum, T K; Anderson, G A; Poynter, J E; Stodieck, L S; Klaus, D M

    1998-12-01

    Three space flight experiments have been conducted to test and demonstrate the use of a passively controlled, materially closed, bioregenerative life support system in space. The Autonomous Biological System (ABS) provides an experimental environment for long term growth and breeding of aquatic plants and animals. The ABS is completely materially closed, isolated from human life support systems and cabin atmosphere contaminants, and requires little need for astronaut intervention. Testing of the ABS marked several firsts: the first aquatic angiosperms to be grown in space; the first higher organisms (aquatic invertebrate animals) to complete their life cycles in space; the first completely bioregenerative life support system in space; and, among the first gravitational ecology experiments. As an introduction this paper describes the ABS, its flight performance, advantages and disadvantages.

  4. Real-time vision systems

    SciTech Connect

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee

    1994-11-15

    Many industrial and defense applications require an ability to make instantaneous decisions based on sensor input of a time-varying process. Such systems are referred to as "real-time systems" because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  5. An Autonomous Flight Safety System

    NASA Technical Reports Server (NTRS)

    Bull, James B.; Lanzi, Raymond J.

    2007-01-01

    The Autonomous Flight Safety System (AFSS) being developed by NASA's Goddard Space Flight Center's Wallops Flight Facility and Kennedy Space Center has completed two successful developmental flights and is preparing for a third. AFSS has been demonstrated to be a viable architecture for a completely vehicle-based system capable of protecting life and property in the event of an errant vehicle by terminating the flight or initiating other actions. It is capable of replacing current human-in-the-loop systems or acting in parallel with them. AFSS is configured prior to flight in accordance with a specific rule set agreed upon by the range safety authority and the user to protect the public and assure mission success. This paper discusses the motivation for the project, describes the method of development, and presents an overview of the evolving architecture and the current status.

  6. Testbed for an autonomous system

    NASA Technical Reports Server (NTRS)

    Dikshit, Piyush; Guimaraes, Katia; Ramamurthy, Maya; Agrawala, Ashok K.; Larsen, Ronald L.

    1989-01-01

    In previous works we defined a general architectural model for autonomous systems, which can easily be mapped to describe the functions of any automated system (SDAG-86-01), and we illustrated that model by applying it to the thermal management system of a space station (SDAG-87-01). In this note, we further develop that application and design the details of an implementation of the model. First we present the environment of our application by describing the thermal management problem and an abstraction, called TESTBED, which includes a specific function for each module in the architecture and defines the nature of the interfaces between each pair of blocks.

  7. Autonomic Computing for Spacecraft Ground Systems

    NASA Technical Reports Server (NTRS)

    Li, Zhenping; Savkli, Cetin; Jones, Lori

    2007-01-01

    Autonomic computing for spacecraft ground systems increases system reliability and reduces the cost of spacecraft operations and software maintenance. In this paper, we present an autonomic computing solution for spacecraft ground systems at NASA Goddard Space Flight Center (GSFC), which consists of an open standard for a message-oriented architecture referred to as the GMSEC architecture (Goddard Mission Services Evolution Center), and an autonomic computing tool, the Criteria Action Table (CAT). This solution has been used in many upgraded ground systems for NASA's missions, and provides a framework for developing solutions with higher autonomic maturity.

  8. Semi-autonomous unmanned ground vehicle control system

    NASA Astrophysics Data System (ADS)

    Anderson, Jonathan; Lee, Dah-Jye; Schoenberger, Robert; Wei, Zhaoyi; Archibald, James

    2006-05-01

    Unmanned Ground Vehicles (UGVs) have advantages over people in a number of different applications, including sentry duty, scouting hazardous areas, convoying goods and supplies over long distances, and exploring caves and tunnels. Despite recent advances in electronics, vision, artificial intelligence, and control technologies, fully autonomous UGVs are still far from being a reality. Currently, most UGVs are fielded using tele-operation with a human in the control loop: a user controls the UGV from the relative safety and comfort of a control station and sends commands to the UGV remotely. It is difficult for the user to issue higher-level commands such as "patrol this corridor" or "move to this position while avoiding obstacles." As computer vision algorithms are implemented in hardware, the UGV can easily become partially autonomous. As Field Programmable Gate Arrays (FPGAs) become larger and more powerful, vision algorithms can run at frame rate. With the rapid development of CMOS imagers for consumer electronics, frame rate can reach as high as 200 frames per second with a small region of interest. This increase in the speed of vision processing allows UGVs to become more autonomous, as they are able to recognize and avoid obstacles in their path, track targets, or move to a recognized area. The user is able to focus on giving broad supervisory commands and goals to the UGVs, allowing the user to control multiple UGVs at once while still maintaining the convenience of working from a central base station. In this paper, we describe a novel control system for semi-autonomous UGVs. This control system combines a user interface similar to a simple tele-operation station with a control package, including the FPGA and multiple cameras. The control package interfaces with the UGV and provides the necessary control to guide the UGV.

  9. Information for Successful Interaction with Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Johnson, Kathy A.

    2003-01-01

    Interaction in heterogeneous mission operations teams is not well matched to classical models of coordination with autonomous systems. We describe methods of loose coordination and information management in mission operations. We describe an information agent and information management tool suite for managing information from many sources, including autonomous agents. We present an integrated model of levels of complexity of agent and human behavior, which shows types of information processing and points of potential error in agent activities. We discuss the types of information needed for diagnosing problems and planning interactions with an autonomous system. We discuss types of coordination for which designs are needed for autonomous system functions.

  10. Stereoscopic vision system

    NASA Astrophysics Data System (ADS)

    Király, Zsolt; Springer, George S.; Van Dam, Jacques

    2006-04-01

    In this investigation, an optical system is introduced for inspecting the interiors of confined spaces, such as the walls of containers, cavities, reservoirs, fuel tanks, pipelines, and the gastrointestinal tract. The optical system wirelessly transmits stereoscopic video to a computer that displays the video in real time on the screen, where it is viewed with shutter glasses. To minimize space requirements, the videos from the two cameras (required to produce stereoscopic images) are multiplexed into a single stream for transmission. The video is demultiplexed inside the computer, corrected for fisheye distortion and lens misalignment, and cropped to the proper size. Algorithms are developed that enable the system to perform these tasks. A proof-of-concept device is constructed that demonstrates the operation and practicality of the optical system. Using this device, tests are performed to assess the validity of the concepts and the algorithms.
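The multiplex/demultiplex step described above can be sketched in a few lines. This is a minimal illustration assuming a simple row-interleaving scheme (the paper does not specify its actual multiplexing method), with frames modeled as lists of rows:

```python
def multiplex(left, right):
    # Interleave the two camera frames row by row into one stream frame
    # (an assumed scheme for illustration; not necessarily the paper's).
    return [row for pair in zip(left, right) for row in pair]

def demultiplex(muxed):
    # Recover the left and right frames: even rows vs. odd rows.
    return muxed[0::2], muxed[1::2]
```

A real implementation would multiplex before wireless transmission and apply the fisheye and misalignment corrections after demultiplexing.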

  11. Autonomous power system intelligent diagnosis and control

    NASA Technical Reports Server (NTRS)

    Ringer, Mark J.; Quinn, Todd M.; Merolla, Anthony

    1991-01-01

    The Autonomous Power System (APS) project at NASA Lewis Research Center is designed to demonstrate the application of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution hardware. Knowledge-based software provides a robust method of control for highly complex space-based power systems that conventional methods do not allow. The project consists of three elements: the Autonomous Power Expert System (APEX) for fault diagnosis and control, the Autonomous Intelligent Power Scheduler (AIPS) to determine system configuration, and power hardware (Brassboard) to simulate a space-based power system. The operation of the Autonomous Power System as a whole is described and the responsibilities of the three elements - APEX, AIPS, and Brassboard - are characterized. A discussion of the methodologies used in each element is provided. Future plans for the growth of the Autonomous Power System are discussed.

  12. Autonomous navigation system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2009-09-08

    A robot platform includes perceptors, locomotors, and a system controller, which executes instructions for autonomously navigating a robot. The instructions repeat, on each iteration through an event timing loop, the acts of defining an event horizon based on the robot's current velocity, detecting a range to obstacles around the robot, testing for an event horizon intrusion by determining if any range to the obstacles is within the event horizon, and adjusting rotational and translational velocity of the robot accordingly. If the event horizon intrusion occurs, rotational velocity is modified by a proportion of the current rotational velocity reduced by a proportion of the range to the nearest obstacle and translational velocity is modified by a proportion of the range to the nearest obstacle. If no event horizon intrusion occurs, translational velocity is set as a ratio of a speed factor relative to a maximum speed.
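The patent's event-timing loop can be sketched as follows. The gains `horizon_gain` and `speed_factor` and the 0.5 proportions are illustrative assumptions, not values from the patent:

```python
def navigation_step(ranges, v_trans, v_rot, max_speed,
                    horizon_gain=0.5, speed_factor=0.8):
    # Define the event horizon from the robot's current translational velocity.
    event_horizon = horizon_gain * v_trans
    nearest = min(ranges)  # range to the nearest detected obstacle
    if nearest < event_horizon:  # event horizon intrusion
        # Increase rotation by a proportion of the current rotational velocity,
        # reduced by a proportion of the range to the nearest obstacle.
        v_rot = v_rot + 0.5 * v_rot * (1.0 - nearest / event_horizon)
        # Slow down in proportion to the range to the nearest obstacle.
        v_trans = 0.5 * nearest
    else:
        # No intrusion: cruise at a ratio of a speed factor to maximum speed.
        v_trans = speed_factor * max_speed
    return v_trans, v_rot
```

One call corresponds to one iteration through the event timing loop; the perceptors supply `ranges` on each pass.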

  13. Autonomic nervous system and immune system interactions.

    PubMed

    Kenney, M J; Ganta, C K

    2014-07-01

    The present review assesses the current state of literature defining integrative autonomic-immune physiological processing, focusing on studies that have employed electrophysiological, pharmacological, molecular biological, and central nervous system experimental approaches. Central autonomic neural networks are informed of peripheral immune status via numerous communicating pathways, including neural and non-neural. Cytokines and other immune factors affect the level of activity and responsivity of discharges in sympathetic and parasympathetic nerves innervating diverse targets. Multiple levels of the neuraxis contribute to cytokine-induced changes in efferent parasympathetic and sympathetic nerve outflows, leading to modulation of peripheral immune responses. The functionality of local sympathoimmune interactions depends on the microenvironment created by diverse signaling mechanisms involving integration between sympathetic nervous system neurotransmitters and neuromodulators; specific adrenergic receptors; and the presence or absence of immune cells, cytokines, and bacteria. Functional mechanisms contributing to the cholinergic anti-inflammatory pathway likely involve novel cholinergic-adrenergic interactions at peripheral sites, including autonomic ganglion and lymphoid targets. Immune cells express adrenergic and nicotinic receptors. Neurotransmitters released by sympathetic and parasympathetic nerve endings bind to their respective receptors located on the surface of immune cells and initiate immune-modulatory responses. Both sympathetic and parasympathetic arms of the autonomic nervous system are instrumental in orchestrating neuroimmune processes, although additional studies are required to understand dynamic and complex adrenergic-cholinergic interactions. Further understanding of regulatory mechanisms linking the sympathetic nervous, parasympathetic nervous, and immune systems is critical for understanding relationships between chronic disease

  14. System Engineering of Autonomous Space Vehicles

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Johnson, Stephen B.; Trevino, Luis

    2014-01-01

    Human exploration of the solar system requires fully autonomous systems when travelling more than 5 light minutes from Earth. This autonomy is necessary to manage a large, complex spacecraft with limited crew members and skills available. The communication latency requires the vehicle to deal with events with only limited crew interaction in most cases. The engineering of these systems requires an extensive knowledge of the spacecraft systems, information theory, and autonomous algorithm characteristics. The characteristics of the spacecraft systems must be matched with the autonomous algorithm characteristics to reliably monitor and control the system. This presents a large system engineering problem. Recent work on product-focused, elegant system engineering will be applied to this application, looking at the full autonomy stack, the matching of autonomous systems to spacecraft systems, and the integration of different types of algorithms. Each of these areas will be outlined and a general approach defined for system engineering to provide the optimal solution to the given application context.

  15. Vision system for telerobotics operation

    NASA Astrophysics Data System (ADS)

    Wong, Andrew K. C.; Li, Li-Wei; Liu, Wei-Cheng

    1992-10-01

    This paper presents a knowledge-based vision system for a telerobotics guidance project. The system is capable of recognizing and locating 3-D objects from unrestricted viewpoints in a simulated unconstrained space environment. It constructs object representations for vision tasks from wireframe models; recognizes and locates objects in a 3-D scene; and provides world modeling capability to establish, maintain, and update a 3-D environment description for telerobotic manipulation. In this paper, an object model is represented by an attributed hypergraph which contains direct structural (relational) information, with features grouped according to their multiple views so that the interpretation of the 3-D object and its 2-D projections is coupled. With this representation, object recognition is directed by a knowledge-directed hypothesis refinement strategy. The strategy starts with the identification of 2-D local feature characteristics to initiate feature and relation matching. Next, it refines the matching by adding 2-D features from the image according to viewpoint and geometric consistency. Finally, it links the successful matches back to the 3-D model to recover the feature, relation, and location information of the recognized object. The paper also presents the implementation and experimental evaluation of the vision prototype.

  16. Autonomous Operations System: Development and Application

    NASA Technical Reports Server (NTRS)

    Toro Medina, Jaime A.; Wilkins, Kim N.; Walker, Mark; Stahl, Gerald M.

    2016-01-01

    Autonomous control systems provide the ability of self-governance beyond conventional control. As the complexity of mechanical and electrical systems increases, there is a natural drive to develop robust control systems to manage complicated operations. By bridging the gap between conventional automated systems and knowledge-based self-aware systems, nominal control of operations can evolve to rely on safety-critical mitigation processes to handle any off-nominal behavior. Current research and development efforts led by the Autonomous Propellant Loading (APL) group at NASA Kennedy Space Center aim to improve cryogenic propellant transfer operations by developing an automated control and health monitoring system. As an integrated system, the center aims to produce an Autonomous Operations System (AOS) capable of integrating health management operations with automated control to produce a fully autonomous system.

  17. Autonomous intelligent cruise control system

    NASA Astrophysics Data System (ADS)

    Baret, Marc; Bomer, Thierry T.; Calesse, C.; Dudych, L.; L'Hoist, P.

    1995-01-01

    Autonomous intelligent cruise control (AICC) systems do more than control a vehicle's speed: by acting on the throttle and eventually on the brakes, they can automatically maintain the relative speed and distance between two vehicles in the same lane. Beyond comfort, these new systems should improve safety on highways. Applying a technique derived from space research carried out by MATRA, a sensor based on a charge-coupled device (CCD) was designed to acquire pulsed laser-diode emission reflected from standard-mounted car reflectors. The CCD works in a unique mode called flash during transfer (FDT), which allows identification of target patterns in severe optical environments. It provides high accuracy for the distance and angular position of targets. The absence of moving mechanical parts ensures high reliability for this sensor. The large field of view and the high measurement rate give a global situation assessment and a short reaction time. Tracking and filtering algorithms have then been developed to select the target from which the equipped vehicle determines its safe distance and speed, taking into account its own maneuvering and the behavior of other vehicles.
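The safe-distance-and-speed logic can be illustrated with a constant time-headway policy, a common AICC control law. The headway, standstill gap, and gains below are illustrative assumptions, not values from the paper:

```python
def safe_gap(speed_mps, headway_s=1.8, standstill_m=2.0):
    # Constant time-headway rule: keep a gap that grows with own speed.
    return standstill_m + headway_s * speed_mps

def speed_command(own_speed, lead_speed, gap, k_gap=0.2, k_speed=0.6):
    # Proportional control on the gap error and the relative speed
    # of the tracked lead vehicle.
    gap_error = gap - safe_gap(own_speed)
    return own_speed + k_gap * gap_error + k_speed * (lead_speed - own_speed)
```

The sensor's distance and angular measurements, after tracking and filtering, would feed `gap` and `lead_speed` here.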

  18. Basic design principles of colorimetric vision systems

    NASA Astrophysics Data System (ADS)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper, and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject far exceeds the limitations of a journal paper, so only the most important aspects are discussed. An overview of the major areas of application for colorimetric vision systems is given. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.
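One basic colorimetric principle the paper alludes to is that camera RGB must be mapped into a device-independent CIE space before color can be measured. As a minimal sketch, the standard linear-sRGB-to-XYZ transform (IEC 61966-2-1, D65 reference white) looks like this; a real colorimetric vision system would need its own camera characterization matrix:

```python
def srgb_to_xyz(r, g, b):
    # Standard linear sRGB -> CIE XYZ matrix (D65 reference white).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Y is luminance
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z
```

Gamma linearization of the camera signal must be applied before this matrix; skipping it is exactly the kind of colorimetry error the paper warns against.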

  19. Vision inspection system and method

    NASA Technical Reports Server (NTRS)

    Huber, Edward D. (Inventor); Williams, Rick A. (Inventor)

    1997-01-01

    An optical vision inspection system (4) and method for multiplexed illuminating, viewing, analyzing and recording a range of characteristically different kinds of defects, depressions, and ridges in a selected material surface (7) with first and second alternating optical subsystems (20, 21) illuminating and sensing successive frames of the same material surface patch. To detect the different kinds of surface features including abrupt as well as gradual surface variations, correspondingly different kinds of lighting are applied in time-multiplexed fashion to the common surface area patches under observation.

  20. The Secure, Transportable, Autonomous Reactor System

    SciTech Connect

    Brown, N.W.; Hassberger, J.A.; Smith, C.; Carelli, M.; Greenspan, E.; Peddicord, K.L.; Stroh, K.; Wade, D.C.; Hill, R.N.

    1999-05-27

    The Secure, Transportable, Autonomous Reactor (STAR) system is a development architecture for implementing a small nuclear power system, specifically aimed at meeting the growing energy needs of much of the developing world. It simultaneously provides very high standards for safety, proliferation resistance, ease and economy of installation, operation, and ultimate disposition. The STAR system accomplishes these objectives through a combination of modular design, factory manufacture, long lifetime without refueling, autonomous control, and high reliability.

  1. VISION 21 SYSTEMS ANALYSIS METHODOLOGIES

    SciTech Connect

    G.S. Samuelsen; A. Rao; F. Robson; B. Washom

    2003-08-11

    Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into power plant systems that meet performance and emission goals of the Vision 21 program. The study efforts have narrowed down the myriad fuel processing, power generation, and emission control technologies to selected scenarios that identify those combinations having the potential to achieve the Vision 21 program goals of high efficiency and minimized environmental impact while using fossil fuels. The technology levels considered are based on projected technical and manufacturing advances being made in industry and on advances identified in current and future government supported research. Included in these advanced systems are solid oxide fuel cells and advanced cycle gas turbines. The results of this investigation will serve as a guide for the U.S. Department of Energy in identifying the research areas and technologies that warrant further support.

  2. Advanced integrated enhanced vision systems

    NASA Astrophysics Data System (ADS)

    Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha

    2003-09-01

    In anticipation of the ultimate role of Enhanced Vision Systems (EVS) in transport, business, and rotary-wing aircraft, we clarify how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.
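The optimal Bayesian multi-sensor fusion that the neural approach approximates reduces, for independent Gaussian estimates, to inverse-variance weighting. A minimal one-dimensional sketch of that textbook rule (not the paper's implementation):

```python
def fuse(mu1, var1, mu2, var2):
    # Inverse-variance weighting: the precision-weighted mean is the
    # Bayes-optimal combination of two independent Gaussian estimates.
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)  # fused variance is never worse than either input
    return mu, var
```

In an EVS context, `mu1`/`mu2` might be position estimates from radar and infrared sensors with different noise levels.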

  3. An introduction to autonomous control systems

    NASA Technical Reports Server (NTRS)

    Antsaklis, Panos J.; Passino, Kevin M.; Wang, S. J.

    1991-01-01

    The functions, characteristics, and benefits of autonomous control are outlined. An autonomous control functional architecture for future space vehicles that incorporates the concepts and characteristics described is presented. The controller is hierarchical, with an execution level (the lowest level), coordination level (middle level), and management and organization level (highest level). The general characteristics of the overall architecture, including those of the three levels, are explained, and an example to illustrate their functions is given. Mathematical models for autonomous systems, including 'logical' discrete event system models, are discussed. An approach to the quantitative, systematic modeling, analysis, and design of autonomous controllers is also discussed. It is a hybrid approach since it uses conventional analysis techniques based on difference and differential equations and new techniques for the analysis of the systems described with a symbolic formalism such as finite automata. Some recent results from the areas of planning and expert systems, machine learning, artificial neural networks, and the area of restructurable controls are briefly outlined.

  4. A Vision-Based Relative Navigation Approach for Autonomous Multirotor Aircraft

    NASA Astrophysics Data System (ADS)

    Leishman, Robert C.

    Autonomous flight in unstructured, confined, and unknown GPS-denied environments is a challenging problem. Solutions could be tremendously beneficial for scenarios that require information about areas that are difficult to access and that present a great amount of risk. The goal of this research is to develop a new framework that enables improved solutions to this problem and to validate the approach with experiments using a hardware prototype. In Chapter 2 we examine the consequences and practical aspects of using an improved dynamic model for multirotor state estimation, using only IMU measurements. The improved model correctly explains the measurements available from the accelerometers on a multirotor. We provide hardware results demonstrating the improved attitude, velocity and even position estimates that can be achieved through the use of this model. We propose a new architecture to simplify some of the challenges that constrain GPS-denied aerial flight in Chapter 3. At the core, the approach combines visual graph-SLAM with a multiplicative extended Kalman filter (MEKF). More importantly, we depart from the common practice of estimating global states and instead keep the position and yaw states of the MEKF relative to the current node in the map. This relative navigation approach provides a tremendous benefit compared to maintaining estimates with respect to a single global coordinate frame. We discuss the architecture of this new system and provide important details for each component. We verify the approach with goal-directed autonomous flight-test results. The MEKF is the basis of the new relative navigation approach and is detailed in Chapter 4. We derive the relative filter and show how the states must be augmented and marginalized each time a new node is declared. The relative estimation approach is verified using hardware flight test results accompanied by comparisons to motion capture truth. Additionally, flight results with estimates in the control

  5. Musca domestica inspired machine vision system with hyperacuity

    NASA Astrophysics Data System (ADS)

    Riley, Dylan T.; Harman, William M.; Tomberlin, Eric; Barrett, Steven F.; Wilcox, Michael; Wright, Cameron H. G.

    2005-05-01

    Musca domestica, the common house fly, has a simple yet powerful and accessible vision system. Cajal indicated in 1885 that the fly's vision system is the same as in the human retina. The house fly has some intriguing vision system features such as fast, analog, parallel operation. Furthermore, it has the ability to detect movement and objects at far better resolution than predicted by photoreceptor spacing, termed hyperacuity. We are investigating the mechanisms behind these features and incorporating them into next-generation vision systems. We have developed a prototype sensor that employs a fly-inspired arrangement of photodetectors sharing a common lens. The Gaussian-shaped acceptance profile of each sensor, coupled with overlapped sensor fields of view, provides the necessary configuration for obtaining hyperacuity data. The sensor is able to detect object movement with far greater resolution than that predicted by photoreceptor spacing. We have exhaustively tested and characterized the sensor to determine its practical resolution limit. Our tests, coupled with theory from Bucklew and Saleh (1985), indicate that the limit to the hyperacuity response may only be related to target contrast. We have also implemented an array of these prototype sensors which will allow for two-dimensional position location. These high-resolution, low-contrast-capable sensors are being developed for use as a vision system for an autonomous robot and the next generation of smart wheelchairs. However, they are easily adapted for biological endoscopy, downhole monitoring in oil wells, and other applications.

  6. Computer vision for driver assistance systems

    NASA Astrophysics Data System (ADS)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking, and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is the integrative coupling of different algorithms providing partly redundant information.

  7. Autonomous control systems - Architecture and fundamental issues

    NASA Technical Reports Server (NTRS)

    Antsaklis, P. J.; Passino, K. M.; Wang, S. J.

    1988-01-01

    A hierarchical functional autonomous controller architecture is introduced. In particular, the architecture for the control of future space vehicles is described in detail; it is designed to ensure the autonomous operation of the control system and it allows interaction with the pilot and crew/ground station, and the systems on board the autonomous vehicle. The fundamental issues in autonomous control system modeling and analysis are discussed. It is proposed to utilize a hybrid approach to modeling and analysis of autonomous systems. This will incorporate conventional control methods based on differential equations and techniques for the analysis of systems described with a symbolic formalism. In this way, the theory of conventional control can be fully utilized. It is stressed that autonomy is the design requirement, and intelligent control methods appear, at present, to offer some of the necessary tools to achieve autonomy. A conventional approach may evolve and replace some or all of the `intelligent' functions. It is shown that in addition to conventional controllers, the autonomous control system incorporates planning, learning, and FDI (fault detection and identification).

  8. Comparative anatomy of the autonomic nervous system.

    PubMed

    Nilsson, Stefan

    2011-11-16

    This short review aims to point out the general anatomical features of the autonomic nervous systems of non-mammalian vertebrates. In addition it attempts to outline the similarities and also the increased complexity of the autonomic nervous patterns from fish to tetrapods. With the possible exception of the cyclostomes, perhaps the most striking feature of the vertebrate autonomic nervous system is the similarity between the vertebrate classes. An evolution of the complexity of the system can be seen, with the segmental ganglia of elasmobranchs incompletely connected longitudinally, while well developed paired sympathetic chains are present in teleosts and the tetrapods. In some groups the sympathetic chains may be reduced (dipnoans and caecilians), and have yet to be properly described in snakes. Cranial autonomic pathways are present in the oculomotor (III) and vagus (X) nerves of gnathostome fish and the tetrapods, and with the evolution of salivary and lachrymal glands in the tetrapods, also in the facial (VII) and glossopharyngeal (IX) nerves.

  9. Autonomous underwater pipeline monitoring navigation system

    NASA Astrophysics Data System (ADS)

    Mitchell, Byrel; Mahmoudian, Nina; Meadows, Guy

    2014-06-01

    This paper details the development of an autonomous motion-control and navigation algorithm for an autonomous underwater vehicle, the Ocean Server IVER3, to track long linear features such as underwater pipelines. As part of this work, the Nonlinear and Autonomous Systems Laboratory (NAS Lab) developed an algorithm that utilizes inputs from the vehicle's state-of-the-art sensor package, which includes digital imaging, digital 3-D Sidescan Sonar, and Acoustic Doppler Current Profilers. The resulting algorithms should tolerate real-world waterways with episodic strong currents, low visibility, high sediment content, and a variety of small and large vessel traffic.

  10. Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-01-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model that a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
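The map-cell contents listed above (elevation, classification, roughness, cost, confidence, no-go flag) and the merge of single-frame maps into a world map can be sketched as below. The confidence-weighted blending is an illustrative temporal filter, not JPL's actual rule:

```python
from dataclasses import dataclass

@dataclass
class MapCell:
    # Fields follow the abstract: elevation, terrain classification,
    # roughness, traversability cost, confidence, and a no-go label.
    elevation: float = 0.0
    terrain_class: str = "unknown"
    roughness: float = 0.0
    cost: float = 0.0
    confidence: float = 0.0
    no_go: bool = False

def merge(world: MapCell, frame: MapCell) -> MapCell:
    # Cells that contain obstacles stay no-go regions with infinite cost.
    if frame.no_go:
        return MapCell(frame.elevation, frame.terrain_class,
                       frame.roughness, float("inf"), 1.0, True)
    w = world.confidence + frame.confidence
    if w == 0:
        return frame
    # Confidence-weighted average as a simple temporal filter (assumption).
    blend = lambda a, b: (world.confidence * a + frame.confidence * b) / w
    return MapCell(blend(world.elevation, frame.elevation),
                   frame.terrain_class,
                   blend(world.roughness, frame.roughness),
                   blend(world.cost, frame.cost),
                   min(1.0, w), False)
```

A path planner would then read the fused `cost` field, treating `no_go` cells as impassable.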

  11. Towards autonomic computing in machine vision applications: techniques and strategies for in-line 3D reconstruction in harsh industrial environments

    NASA Astrophysics Data System (ADS)

    Molleda, Julio; Usamentiaga, Rubén; García, Daniel F.; Bulnes, Francisco G.

    2011-03-01

    Machine vision applications today require skilled users to configure, tune, and maintain them. Because such users are scarce, the robustness and reliability of applications often suffer significantly. Autonomic computing offers a set of principles, such as self-monitoring, self-regulation, and self-repair, which can be used to partially overcome these problems. Systems that include self-monitoring observe their internal states and extract features about them. Systems with self-regulation are capable of regulating their internal parameters to provide the best quality of service under the prevailing operational conditions and environment. Finally, self-repairing systems are able to detect anomalous working behavior and provide strategies to deal with such conditions. Machine vision applications are an ideal field in which to apply autonomic computing techniques. This type of application has strong constraints on reliability and robustness, especially when working in industrial environments, and must provide accurate results even under changing conditions such as luminance or noise. To exploit the autonomic approach in a machine vision application, we believe the architecture of the system must be designed as a set of orthogonal modules. In this paper, we describe how autonomic computing techniques can be applied to machine vision systems, using a real application as an example: 3D reconstruction in harsh industrial environments based on laser range finding. The application is based on modules with different responsibilities at three layers: image acquisition and processing (low level), monitoring (middle level), and supervision (high level). High-level modules supervise the execution of low-level modules. Based on the information gathered by mid-level modules, they regulate low-level modules to optimize the global quality of service, tuning module parameters according to operational conditions and the environment. Regulation actions involve …
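The three-layer regulation loop can be sketched with a toy low-level module whose single parameter a supervisor tunes against a monitored quality metric. The class, the metric, and the proportional update rule are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the autonomic loop: a low-level module exposes a tunable
# parameter, a mid-level monitor extracts an error feature, and a high-level
# supervisor regulates the parameter. All names and rules are hypothetical.

class SegmentationModule:              # low level: image processing
    def __init__(self, threshold=128.0):
        self.threshold = threshold
    def quality(self, mean_luminance):
        # Toy quality metric: best when the threshold tracks scene luminance.
        return 1.0 - abs(self.threshold - mean_luminance) / 255.0

def supervise(module, mean_luminance, gain=0.5):
    """High level: nudge the low-level parameter toward better quality."""
    error = mean_luminance - module.threshold   # mid level: monitored feature
    module.threshold += gain * error            # self-regulation step
    return module.quality(mean_luminance)

seg = SegmentationModule()
q = 0.0
for _ in range(20):        # scene luminance drifts to 200; the loop adapts
    q = supervise(seg, 200.0)
```

Each iteration halves the residual error, so after a few frames the module has re-tuned itself without operator intervention, which is the self-regulation behavior described above.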

  12. Lessons learned from the Autonomous Power System

    NASA Technical Reports Server (NTRS)

    Ringer, Mark J.; Quinn, Todd M.; Merolla, Anthony

    1992-01-01

    The Autonomous Power System (APS) project at the NASA Lewis Research Center is designed to demonstrate the application of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution systems. The project consists of three elements: the Autonomous Power Expert System (APEX) for Fault Diagnosis, Isolation, and Recovery (FDIR); the Autonomous Intelligent Power Scheduler (AIPS) to efficiently assign activity start times and resources; and power hardware (Brassboard) to emulate a space-based power system. The APS project has been through one design iteration. Each of the three elements has been designed, tested, and integrated into a complete working system. After these three portions were completed, an evaluation period was initiated. Each piece of the system was critiqued based on individual performance as well as its ability to interact with the other portions of the APS project. These critiques were then used to determine guidelines for new and improved components of the APS system.

  13. Position estimation and driving of an autonomous vehicle by monocular vision

    NASA Astrophysics Data System (ADS)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Real-time automatic adaptive tracking for target recognition provided autonomous control of a scale-model electric truck. The two-wheel-drive truck was modified as an autonomous rover testbed for vision-based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent, relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and could be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.
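The two error-monitoring strategies described above, a situation-independent confidence gate and a motion-consistency filter, might look like this in outline. The function name and thresholds are hypothetical.

```python
# Sketch of detection gating for a moving platform: reject a recognition
# result when its confidence is low, or when the target's image position
# jumps farther than the platform's motion could explain. The thresholds
# are illustrative assumptions, not the paper's tuned values.

def accept_detection(conf, prev_xy, new_xy, max_pixel_step, min_conf=0.6):
    if conf < min_conf:                       # situation-independent gate
        return False
    dx = new_xy[0] - prev_xy[0]
    dy = new_xy[1] - prev_xy[1]
    # Motion-consistency gate: anomalous jumps between frames are filtered out.
    return (dx * dx + dy * dy) ** 0.5 <= max_pixel_step
```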

  14. Autonomous Attitude Determination System (AADS). Volume 1: System description

    NASA Technical Reports Server (NTRS)

    Saralkar, K.; Frenkel, Y.; Klitsch, G.; Liu, K. S.; Lefferts, E.; Tasaki, K.; Snow, F.; Garrahan, J.

    1982-01-01

    Information necessary to understand the Autonomous Attitude Determination System (AADS) is presented. Topics include AADS requirements, program structure, algorithms, and system generation and execution.

  15. 77 FR 2342 - Seventeenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-17

    Federal Aviation Administration, Department of Transportation (DOT). ACTION: Notice of the seventeenth meeting of RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS).

  16. Sensorpedia: Information Sharing Across Autonomous Sensor Systems

    SciTech Connect

    Gorman, Bryan L; Resseguie, David R; Tomkins-Tinch, Christopher H

    2009-01-01

    The concept of adapting social media technologies is introduced as a means of achieving information sharing across autonomous sensor systems. Historical examples of interoperability as an underlying principle in loosely coupled systems are compared and contrasted with corresponding tightly coupled, integrated systems. Examples of ad hoc information sharing solutions based on Web 2.0 social networks, mashups, blogs, wikis, and data tags are presented and discussed. The underlying technologies of these solutions are isolated and defined, and Sensorpedia is presented as a formalized application for implementing sensor information sharing across large-scale enterprises with incompatible autonomous sensor systems.

  17. CYCLOPS: A mobile robotic platform for testing and validating image processing and autonomous navigation algorithms in support of artificial vision prostheses.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2009-12-01

    While artificial vision prostheses are quickly becoming a reality, actual testing time with visual prosthesis carriers is at a premium. Moreover, it is helpful to have a more realistic functional approximation of a blind subject. Instead of a normal subject with a healthy retina looking at a low-resolution (pixelated) image on a computer monitor or head-mounted display, a more realistic approximation is achieved by employing a subject-independent mobile robotic platform that uses a pixelated view as its sole visual input for navigation purposes. We introduce CYCLOPS: an AWD, remotely controllable mobile robotic platform that serves as a testbed for real-time image processing and autonomous navigation systems, for the purpose of enhancing the visual experience afforded to visual prosthesis carriers. Complete with wireless Internet connectivity and a fully articulated digital camera with a wireless video link, CYCLOPS supports both interactive tele-commanding via joystick and autonomous self-commanding. Due to its onboard computing capabilities and extended battery life, CYCLOPS can perform complex and numerically intensive calculations, such as image processing and autonomous navigation algorithms, in addition to interfacing to additional sensors. Its Internet connectivity renders CYCLOPS a worldwide-accessible testbed for researchers in the field of artificial vision systems. CYCLOPS enables subject-independent evaluation and validation of image processing and autonomous navigation systems with respect to their utility and efficiency in supporting and enhancing visual prostheses, while potentially reducing to a necessary minimum the need for valuable testing time with actual visual prosthesis carriers. PMID:19651459
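The pixelated visual input such a platform navigates by can be approximated with simple block averaging of a grayscale image. The grid size and helper name are illustrative, not CYCLOPS's actual pipeline.

```python
# Sketch of producing a "pixelated view": block-average a grayscale image
# down to a low-resolution grid, approximating what a visual-prosthesis
# carrier perceives. Output resolution is an illustrative choice.

def pixelate(image, out_rows=2, out_cols=2):
    """image: list of rows of grayscale values; returns out_rows x out_cols grid."""
    rows, cols = len(image), len(image[0])
    bh, bw = rows // out_rows, cols // out_cols
    grid = []
    for r in range(out_rows):
        row = []
        for c in range(out_cols):
            block = [image[r * bh + i][c * bw + j]
                     for i in range(bh) for j in range(bw)]
            row.append(sum(block) // len(block))  # mean intensity of the block
        grid.append(row)
    return grid
```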

  19. Part identification in robotic assembly using vision system

    NASA Astrophysics Data System (ADS)

    Balabantaray, Bunil Kumar; Biswal, Bibhuti Bhusan

    2013-12-01

    A machine vision system plays an important role in making a robotic assembly system autonomous. Identification of the correct part is an important task, which needs to be done carefully by the vision system to feed the robot correct information for further processing. This process consists of many sub-processes wherein image capturing, digitizing, and enhancement account for reconstructing the part for subsequent operations. Interest point detection in the grabbed image therefore plays an important role in the entire image processing activity, and the correct tool must be chosen for the process with respect to the given environment. In this paper, an analysis of three major corner detection algorithms is performed on the basis of their accuracy, speed, and robustness to noise. The work is performed in Matlab R2012a. An attempt has been made to find the best algorithm for the problem.
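As one concrete example of the detector families such comparisons cover, a minimal Harris corner response (a standard method, not necessarily one of the paper's three) can be computed from a local structure tensor:

```python
# Minimal Harris corner response at a single pixel. Gradients use central
# differences, the window is a flat 3x3, and k = 0.04 is the usual
# empirical constant. This is a textbook sketch, not an optimized detector.

def harris_response(img, r, c, k=0.04):
    # Structure tensor accumulated over a 3x3 window around (r, c).
    sxx = sxy = syy = 0.0
    for i in range(r - 1, r + 2):
        for j in range(c - 1, c + 2):
            ix = (img[i][j + 1] - img[i][j - 1]) / 2.0
            iy = (img[i + 1][j] - img[i - 1][j]) / 2.0
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace   # large positive response at corners
```

A comparison like the paper's would run such detectors over test images with added noise and measure localization accuracy and runtime.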

  20. A production peripheral vision display system

    NASA Technical Reports Server (NTRS)

    Heinmiller, B.

    1984-01-01

    A small number of peripheral vision display systems in three significantly different configurations were evaluated in various aircraft and simulator situations. The use of these development systems enabled the gathering of much subjective and quantitative data regarding this concept of flight deck instrumentation. However, much was also learned about the limitations of this equipment which needs to be addressed prior to wide spread use. A program at Garrett Manufacturing Limited in which the peripheral vision display system is redesigned and transformed into a viable production avionics system is discussed. Modular design, interchangeable units, optical attenuators, and system fault detection are considered with respect to peripheral vision display systems.

  1. Vision-guided landing of an autonomous helicopter in hazardous terrain

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew E.; Montgomery, Jim

    2005-01-01

    Future robotic space missions will employ a precision soft-landing capability that will enable exploration of previously inaccessible sites that have strong scientific significance. To enable this capability, a fully autonomous onboard system that identifies and avoids hazardous features such as steep slopes and large rocks is required. Such a system will also provide greater functionality in unstructured terrain to unmanned aerial vehicles. This paper describes an algorithm for landing hazard avoidance based on images from a single moving camera. The core of the algorithm is an efficient application of structure from motion to generate a dense elevation map of the landing area. Hazards are then detected in this map and a safe landing site is selected. The algorithm has been implemented on an autonomous helicopter testbed and demonstrated four times resulting in the first autonomous landing of an unmanned helicopter in unknown and hazardous terrain.
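The hazard-detection and site-selection step over the dense elevation map might be sketched as below. The relief and slope thresholds, and the "maximize clearance to the nearest hazard" selection rule, are illustrative assumptions, not the flight algorithm.

```python
# Sketch of map-based landing hazard avoidance: flag cells whose local
# relief or slope exceeds lander tolerances, then pick the safe interior
# cell farthest (Manhattan distance) from any hazard. All thresholds and
# the site-selection rule are hypothetical.

def find_safe_site(elev, cell_size, max_slope=0.3, max_relief=0.5):
    rows, cols = len(elev), len(elev[0])
    hazard = [[False] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            nbrs = [elev[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            relief = max(nbrs) - min(nbrs)        # steps and large rocks
            slope = relief / (2 * cell_size)      # crude slope proxy
            hazard[r][c] = relief > max_relief or slope > max_slope
    def clearance(r, c):
        return min((abs(r - hr) + abs(c - hc)
                    for hr in range(rows) for hc in range(cols) if hazard[hr][hc]),
                   default=rows + cols)
    safe = [(clearance(r, c), r, c)
            for r in range(1, rows - 1) for c in range(1, cols - 1)
            if not hazard[r][c]]
    return max(safe)[1:] if safe else None
```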

  2. Development of a vision system for an intelligent ground vehicle

    NASA Astrophysics Data System (ADS)

    Nagel, Robert L.; Perry, Kenneth; Stone, Robert B.; McAdams, Daniel A.

    2009-01-01

    The development of a vision system for an autonomous ground vehicle designed and constructed for the Intelligent Ground Vehicle Competition (IGVC) is discussed. The requirements for the vehicle's vision system are explored via functional analysis, considering the flows (materials, energies, and signals) into the vehicle and the changes required of each flow within the vehicle system. Functional analysis leads to a vision system based on a laser range finder (LIDAR) and a camera. Input from the vision system is processed via a ray-casting algorithm whereby the camera data and the LIDAR data are analyzed as a single array of points representing obstacle locations, which, for the IGVC, consist of white lines on the horizontal plane and construction markers on the vertical plane. Functional analysis also leads to a multithreaded application where the ray-casting algorithm is a single thread of the vehicle's software, which consists of multiple threads controlling motion, providing feedback, and processing the data from the camera and LIDAR. LIDAR data are collected as distances and angles from the front of the vehicle to obstacles. Camera data are processed using an adaptive threshold algorithm to identify color changes within the collected image; the image is also corrected for camera-angle distortion, adjusted to the global coordinate system, and processed using a least-squares method to identify white boundary lines. Our IGVC robot, MAX, serves as the running example for all methods discussed in the paper, and all testing and results provided are based on it.
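The least-squares boundary-line step reduces to the standard normal equations for y = m·x + b. A minimal sketch, with the point list assumed to come from the thresholding and distortion-correction stages described above:

```python
# Ordinary least-squares fit of a line y = m*x + b to pixels flagged as
# white-line points. Pure-Python normal equations; degenerate (vertical)
# lines are not handled in this sketch.

def fit_line(points):
    """points: list of (x, y); returns (slope, intercept) minimizing squared error."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    denom = n * sxx - sx * sx
    m = (n * sxy - sx * sy) / denom
    b = (sy - m * sx) / n
    return m, b
```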

  3. Passive millimeter wave camera for enhanced vision systems

    NASA Astrophysics Data System (ADS)

    Shoucri, Merit; Dow, G. Samuel; Fornaca, Steven W.; Hauss, Bruce I.; Yujiri, Larry; Shannon, James; Summers, Leland

    1996-05-01

    Passive millimeter wave (PMMW) sensors have been proposed as forward vision sensors for enhanced vision systems used in low-visibility aircraft landing. This work reports on progress achieved to date in the development and manufacturing of a demonstration PMMW camera. The unit is designed to be ground and flight tested starting in 1996. The camera displays, on a head-up or head-down display unit, a real-time true image of the forward scene. With appropriate head-up symbology and accurate navigation guidance provided by global positioning satellite receivers on board the aircraft, pilots can autonomously (without ground assist) execute category 3 low-visibility take-offs and landings on non-equipped runways. We discuss the utility of fielding these systems to airlines and other users.

  4. Autonomic dysreflexia

    MedlinePlus

    Autonomic hyperreflexia; Spinal cord injury - autonomic dysreflexia; SCI - autonomic dysreflexia. The most common cause of autonomic dysreflexia (AD) is spinal cord injury. The nervous system of people with AD …

  5. Environmental Recognition and Guidance Control for Autonomous Vehicles using Dual Vision Sensor and Applications

    NASA Astrophysics Data System (ADS)

    Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki

    We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor, and navigation control based on binocular images. As an application of these techniques, we consider developing a guide robot that can play the role of a guide dog, aiding people such as the visually impaired or the aged. This paper presents a recognition algorithm that finds the line of a series of Braille blocks and the boundary line between a sidewalk and a roadway where a difference in level exists, using binocular images obtained from a pair of parallel-arrayed CCD cameras. It also presents a tracking algorithm with which the guide robot traces along a series of Braille blocks and avoids obstacles and unsafe areas in the path of a person accompanied by the guide robot.
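The ranging such a parallel-arrayed binocular rig provides follows the standard stereo relation Z = f·B/d for focal length f (pixels), baseline B (meters), and disparity d (pixels). A minimal sketch with illustrative parameter values:

```python
# Parallel-stereo depth from disparity: Z = f * B / d. The focal length,
# baseline, and disparity below are illustrative, not the robot's calibration.

def stereo_depth(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        return float("inf")        # feature at (or beyond) effective infinity
    return focal_px * baseline_m / disparity_px
```

Depth resolution degrades quadratically with distance, which is why curb edges and Braille blocks near the robot are the reliable cues.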

  6. Robot soccer anywhere: achieving persistent autonomous navigation, mapping, and object vision tracking in dynamic environments

    NASA Astrophysics Data System (ADS)

    Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques

    2005-06-01

    The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.

  7. Shared vision and autonomous motivation vs. financial incentives driving success in corporate acquisitions

    PubMed Central

    Clayton, Byron C.

    2015-01-01

    Successful corporate acquisitions require their managers to achieve substantial performance improvements in order to sufficiently cover acquisition premiums, the expected return of debt and equity investors, and the additional resources needed to capture synergies and accelerate growth. Acquirers understand that achieving the performance improvements necessary to cover these costs and create value for investors will most likely require a significant effort from mergers and acquisitions (M&A) management teams. This understanding drives the common and longstanding practice of offering hefty performance incentive packages to key managers, assuming that financial incentives will induce in-role and extra-role behaviors that drive organizational change and growth. The present study debunks the assumptions of this common M&A practice, providing quantitative evidence that shared vision and autonomous motivation are far more effective drivers of managerial performance than financial incentives. PMID:25610406

  9. Far and proximity maneuvers of a constellation of service satellites and autonomous pose estimation of customer satellite using machine vision

    NASA Astrophysics Data System (ADS)

    Arantes, Gilberto, Jr.; Marconi Rocco, Evandro; da Fonseca, Ijar M.; Theil, Stephan

    2010-05-01

    Space robotics has a substantial interest in achieving on-orbit satellite servicing operations autonomously, e.g. rendezvous and docking/berthing (RVD) with customer and malfunctioning satellites. An on-orbit servicing vehicle requires the ability to estimate position and attitude in situations where the target is uncooperative, as when the target is damaged. In this context, this work presents a robust autonomous pose estimation system applied to RVD missions. Our approach is based on computer vision, using a single camera and some prior knowledge of the target, i.e. the customer spacecraft. A rendezvous mission analysis tool for an autonomous service satellite has been developed and is presented, covering far maneuvers (e.g. distances above 1 km from the target) and close maneuvers. The far operations consist of orbit transfer using the Lambert formulation. The close operations include the inspection phase (during which the pose estimate is computed) and the final approach phase. The Lambert problem is used for far maneuvers, and the Hill equations are used to simulate and analyze the approach and final trajectory between target and chaser during the last phase of the rendezvous operation. A method for optimally estimating the relative orientation and position between the camera system and the target is presented in detail. The target is modelled as an assembly of points. The pose of the target is represented by a dual quaternion in order to develop a simple quadratic error function, in such a way that the pose estimation task becomes a least-squares minimization problem. The pose problem is solved, and several methods of non-linear least-squares optimization (Newton, Gauss-Newton, and Levenberg-Marquardt) are compared and discussed in terms of accuracy and computational cost.
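For the close-operations simulation, the Hill (Clohessy-Wiltshire) equations admit a closed-form in-plane solution. A sketch, with x radial, y along-track, and n the target's orbital mean motion; the axis and sign conventions are one common textbook choice, not necessarily the paper's:

```python
import math

# Closed-form in-plane Clohessy-Wiltshire (Hill) solution for the relative
# motion of a chaser about a target in circular orbit. x is radial (outward),
# y is along-track, n is the target's mean motion in rad/s.

def cw_propagate(x0, y0, vx0, vy0, n, t):
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (vx0 / n) * s + (2 * vy0 / n) * (1 - c)
    y = (6 * (s - n * t)) * x0 + y0 + (2 * vx0 / n) * (c - 1) \
        + (vy0 / n) * (4 * s - 3 * n * t)
    return x, y
```

A chaser parked purely along-track with zero relative velocity stays put, while any radial offset induces a secular along-track drift; both properties fall directly out of the solution.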

  10. Vision based control of unmanned aerial vehicles with applications to an autonomous four-rotor helicopter, quadrotor

    NASA Astrophysics Data System (ADS)

    Altug, Erdinc

    Our work proposes a vision-based stabilization and output-tracking control method for a model helicopter, as part of our effort to produce a rotorcraft-based autonomous Unmanned Aerial Vehicle (UAV). Due to the desired maneuvering ability, a four-rotor helicopter has been chosen as the testbed. In previous research on flying vehicles, vision has usually been used as a secondary sensor. Unlike that work, our goal is to use visual feedback as the main sensor, responsible not only for detecting where ground objects are but also for helicopter localization. A novel two-camera method has been introduced for estimating the full six-degrees-of-freedom (DOF) pose of the helicopter. This two-camera system consists of a pan-tilt ground camera and an onboard camera. The pose estimation algorithm is compared through simulation to other methods, such as the four-point and stereo methods, and is shown to be less sensitive to feature detection errors. Helicopters are highly unstable flying vehicles; although this is good for agility, it makes control harder. To build an autonomous helicopter, two methods of control are studied: one using a series of mode-based, feedback-linearizing controllers and the other using a back-stepping control law. Various simulations with 2D and 3D models demonstrate the implementation of these controllers. We also show global convergence of the 3D quadrotor controller even with large calibration errors or the presence of large errors on the image plane. Finally, we present initial flight experiments in which the proposed pose estimation algorithm and non-linear control techniques were implemented on a remote-controlled helicopter. The helicopter was restricted by a tether to vertical and yaw motions and limited x and y translations.

  11. Advanced Autonomous Systems for Space Operations

    NASA Astrophysics Data System (ADS)

    Gross, A. R.; Smith, B. D.; Muscettola, N.; Barrett, A.; Mjolssness, E.; Clancy, D. J.

    2002-01-01

    New missions of exploration and space operations will require unprecedented levels of autonomy to successfully accomplish their objectives. Inherently high levels of complexity, cost, and communication distances will preclude the degree of human involvement common to current and previous space flight missions. With exponentially increasing capabilities of computer hardware and software, including networks and communication systems, a new balance of work is being developed between humans and machines. This new balance holds the promise of not only meeting the greatly increased space exploration requirements, but simultaneously dramatically reducing the design, development, test, and operating costs. New information technologies, which take advantage of knowledge-based software, model-based reasoning, and high performance computer systems, will enable the development of a new generation of design and development tools, schedulers, and vehicle and system health management capabilities. Such tools will provide a degree of machine intelligence and associated autonomy that has previously been unavailable. These capabilities are critical to the future of advanced space operations, since the science and operational requirements specified by such missions, as well as the budgetary constraints, will limit the current practice of monitoring and controlling missions by a standing army of ground-based controllers. System autonomy capabilities have made great strides in recent years, for both ground and space flight applications. Autonomous systems have flown on advanced spacecraft, providing new levels of spacecraft capability and mission safety. Such onboard systems operate by utilizing model-based reasoning that provides the capability to work from high-level mission goals, while deriving the detailed system commands internally, rather than having such commands transmitted from Earth. This enables missions of such complexity and communication distances as are not otherwise feasible.

  12. Autonomic nervous system functions in obese children.

    PubMed

    Yakinci, C; Mungen, B; Karabiber, H; Tayfun, M; Evereklioglu, C

    2000-05-01

    Childhood obesity is a complex syndrome, probably owing to the multiplicity of contributing factors and the contradictory literature on etiology, prognosis, prevention, and treatment. In recent reports, autonomic nervous system (ANS) dysfunction has been documented in adult obesity, but autonomic nervous system function in obese children is not clear. This study was planned to investigate autonomic nervous system function in childhood (7-13 years of age) obesity. The study and control groups consisted of 33 children with simple obesity (23 boys and 10 girls, mean age 9.5+/-1.4 years) and 30 healthy children (18 boys and 12 girls, mean age 10.1+/-1.8 years), respectively. Four non-invasive autonomic nervous system function tests (orthostatic test, Valsalva ratio, 30/15 ratio, and heart rate response to deep breathing) and a general ophthalmic examination were performed on both groups. The difference between the obese and control groups was statistically significant for the Valsalva ratio, 30/15 ratio, and heart rate response to deep breathing (P<0.025), and insignificant for the orthostatic test (P>0.05). Ophthalmic examinations were normal. The results of these tests suggested normal activity of the sympathetic, and hypoactivity of the parasympathetic, nervous system, implying parasympathetic nervous system dysfunction as a risk factor or associated finding in childhood obesity. PMID:10814895
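For reference, the 30/15 ratio used above is conventionally the longest RR interval around the 30th heartbeat after standing divided by the shortest around the 15th. A sketch with assumed search-window widths:

```python
# 30/15 ratio from a sequence of RR intervals (seconds) recorded after
# standing. The +/- 3-beat search windows are an assumption; clinical
# protocols vary in exactly which beats they scan.

def ratio_30_15(rr_intervals):
    """rr_intervals: RR intervals for beats 1..n after standing, in order."""
    around_15 = rr_intervals[12:18]   # roughly beats 13-18
    around_30 = rr_intervals[27:33]   # roughly beats 28-33
    return max(around_30) / min(around_15)
```

A healthy parasympathetic response (brief tachycardia near beat 15, rebound bradycardia near beat 30) yields a ratio above 1; a blunted ratio is the parasympathetic hypoactivity the study reports.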

  13. Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    NASA Technical Reports Server (NTRS)

    Zornetzer, Steve; Gage, Douglas

    2005-01-01

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

  14. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  15. Advances in autonomous systems for space exploration missions

    NASA Technical Reports Server (NTRS)

    Smith, B. D.; Gross, A. R.; Clancy, D. J.; Cannon, H. N.; Barrett, A.; Mjolssness, E.; Muscettola, N.; Chien, S.; Johnson, A.

    2001-01-01

    This paper focuses on new and innovative software for remote, autonomous, space systems flight operation, including distributed autonomous systems, flight test results, and implications and directions for future systems.

  16. Improving CAR Navigation with a Vision-Based System

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    Real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single-photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with the other sensory data in a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet accurate and reliable navigation systems required for intelligent or autonomous vehicles.
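In one dimension, the fusion idea reduces to a predict step driven by in-vehicle sensors and a correct step driven by whichever absolute fix (GPS or image georeferencing) is available. A minimal Kalman sketch with illustrative noise values; the actual system is a full extended Kalman filter over position and attitude:

```python
# Toy 1D Kalman filter illustrating odometry-predict / fix-correct fusion.
# State is scalar position with variance p; q and r are assumed noise values.

class FusionFilter:
    def __init__(self, x0=0.0, p0=100.0):
        self.x, self.p = x0, p0           # state estimate and its variance
    def predict(self, dx, q=1.0):
        self.x += dx                       # in-vehicle odometry increment
        self.p += q                        # process noise grows uncertainty
    def update(self, z, r):
        k = self.p / (self.p + r)          # Kalman gain
        self.x += k * (z - self.x)         # pull estimate toward the fix
        self.p *= (1 - k)                  # fix shrinks uncertainty

f = FusionFilter()
f.predict(5.0)             # odometry says we moved 5 m
f.update(4.0, r=1.0)       # vision/GPS fix at 4 m, assumed ~1 m^2 noise
```

When GPS drops out, `update` is simply skipped (or fed only image-georeferencing fixes), and the variance growth in `predict` quantifies the drift the paper reports bounding to about 15 m.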

  17. Autonomous proximity operations using machine vision for trajectory control and pose estimation

    NASA Technical Reports Server (NTRS)

    Cleghorn, Timothy F.; Sternberg, Stanley R.

    1991-01-01

    A machine vision algorithm was developed that permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation, running on an 80386-based personal computer and using a ModelMATE CAD package to render the target vehicle. However, the algorithm is sufficiently simple that, following off-line training on a known target vehicle, it should run in real time on existing vision hardware. The basis of the algorithm is a sequence of single-camera images of the target vehicle, upon which radial transforms are performed. Selected points of the resulting radial signatures are fed through a decision tree to determine whether the signature matches the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicle can be calculated, and adjustments made to the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.
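    A radial transform of the kind described can be sketched as follows: sample the distance from a silhouette's centroid to its outer edge at fixed angles, yielding a rotation-indexed signature that can be compared against stored references. The grid, angle count, and spread-threshold matching rule are illustrative assumptions, not the paper's trained decision tree.

```python
import math

def radial_signature(mask, n_angles=8):
    """Distance from the silhouette centroid to its far edge per angular sector."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    sig = []
    for i in range(n_angles):
        a = 2 * math.pi * i / n_angles
        best = 0.0
        for x, y in pts:
            d = math.hypot(x - cx, y - cy)
            if d == 0:
                continue
            # wrapped angular difference between this pixel and the sector center
            diff = (math.atan2(y - cy, x - cx) - a + math.pi) % (2 * math.pi) - math.pi
            if abs(diff) < math.pi / n_angles:
                best = max(best, d)
        sig.append(best)
    return sig

square = [[1] * 5 for _ in range(5)]          # 5x5 square silhouette
sig = radial_signature(square)
# a square's signature alternates edge and corner distances; a crude
# decision-tree-style leaf checks the spread against a threshold
matches = max(sig) - min(sig) < 1.5
print([round(s, 2) for s in sig], matches)
```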

  18. Autonomous system for launch vehicle range safety

    NASA Astrophysics Data System (ADS)

    Ferrell, Bob; Haley, Sam

    2001-02-01

    The Autonomous Flight Safety System (AFSS) is a launch vehicle subsystem whose ultimate goal is an autonomous capability to assure range safety (people and valuable resources), flight personnel safety, flight assets safety (recovery of valuable vehicles and cargo), and global coverage with a dramatic simplification of range infrastructure. The AFSS is capable of determining current vehicle position and predicting the impact point with respect to flight restriction zones. Additionally, it is able to discern whether or not the launch vehicle is an immediate threat to public safety and initiate the appropriate range safety response. These features provide a dramatic cost reduction in range operations and improved reliability of mission success.
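    The two checks described, predicting the impact point and testing it against a flight restriction zone, can be sketched as below. A drag-free ballistic model and a ray-casting point-in-polygon test stand in for the real AFSS algorithms; all coordinates and the vacuum-trajectory assumption are illustrative.

```python
import math

G = 9.81  # m/s^2

def impact_point(x, y, alt, vx, vy, vz):
    """Predicted ground impact (x, y), assuming no thrust and no drag."""
    t = (vz + math.sqrt(vz * vz + 2 * G * alt)) / G   # time to reach the ground
    return x + vx * t, y + vy * t

def inside(poly, px, py):
    """Ray-casting point-in-polygon test for a restriction zone."""
    hit = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > py) != (y2 > py) and px < (x2 - x1) * (py - y1) / (y2 - y1) + x1:
            hit = not hit
    return hit

# hypothetical keep-out zone, 900-1100 m downrange
zone = [(900.0, -100.0), (1100.0, -100.0), (1100.0, 100.0), (900.0, 100.0)]
ip = impact_point(x=0.0, y=0.0, alt=1000.0, vx=70.0, vy=0.0, vz=0.0)
print(ip, inside(zone, *ip))   # a hit would trigger the range safety response
```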

  19. An Expert System for Autonomous Spacecraft Control

    NASA Technical Reports Server (NTRS)

    Sherwood, Rob; Chien, Steve; Tran, Daniel; Cichy, Benjamin; Castano, Rebecca; Davies, Ashley; Rabideau, Gregg

    2005-01-01

    The Autonomous Sciencecraft Experiment (ASE), part of the New Millennium Space Technology 6 Project, is flying onboard the Earth Orbiter 1 (EO-1) mission. The ASE software enables EO-1 to autonomously detect and respond to science events such as volcanic activity, flooding, and water freeze/thaw. ASE uses classification algorithms to analyze imagery onboard to detect change and science events. Detection of these events is then used to trigger follow-up imagery. Onboard mission planning software then develops a response plan that accounts for target visibility and operations constraints. This plan is then executed using a task execution system that can deal with run-time anomalies. In this paper we describe the autonomy flight software and how it enables a new paradigm of autonomous science and mission operations. We also describe the current experiment status and future plans.

  20. COHERENT LASER VISION SYSTEM (CLVS) OPTION PHASE

    SciTech Connect

    Robert Clark

    1999-11-18

    The purpose of this research project was to develop a prototype fiber-optic based Coherent Laser Vision System (CLVS) suitable for DOE's EM Robotic program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update the dimensional spatial data on the order of once per second. The system has total immunity to ambient lighting conditions.

  1. [Emotion, amygdala, and autonomic nervous system].

    PubMed

    Ueyama, Takashi

    2012-10-01

    Emotion refers to the dynamic changes of feeling accompanied by the alteration of physical and visceral activities. The autonomic nervous system (sympathetic and parasympathetic) regulates the visceral activities. Therefore, monitoring and analyzing autonomic nervous activity helps in understanding emotional changes. To this end, surveys of the expression of immediate early genes (IEGs), such as c-Fos, in the brain and target organs, and the viral transneuronal labeling method using the pseudorabies virus (PRV), have enabled the visualization of the neurocircuitry of emotion. By comparing c-Fos expression with data from PRV or other neuroanatomical labeling techniques, the central sites that regulate emotional stress-induced autonomic activation can be deduced. Such regions have been identified in the limbic system (e.g., the extended amygdaloid complex; lateral septum; and infralimbic, insular, and ventromedial temporal cortical regions), as well as in several hypothalamic and brainstem nuclei. The amygdala is structurally diverse and comprises several subnuclei, which play a role in emotional processing via projections from the cortex and a variety of subcortical structures. All amygdaloid subnuclei receive psychological information from other limbic systems, while the lateral and central subnuclei receive peripheral and sensory information. Output to the hypothalamus and peripheral sympathetic system mainly originates from the medial amygdala. As estrogen receptor α, estrogen receptor β, and androgen receptor are expressed in the medial amygdala, sex steroids may modulate autonomic nervous activities.

  2. Comparative anatomy of the autonomic nervous system.

    PubMed

    Nilsson, Stefan

    2011-11-16

    This short review aims to point out the general anatomical features of the autonomic nervous systems of non-mammalian vertebrates. In addition, it attempts to outline the similarities and also the increased complexity of the autonomic nervous patterns from fish to tetrapods. With the possible exception of the cyclostomes, perhaps the most striking feature of the vertebrate autonomic nervous system is the similarity between the vertebrate classes. An evolution of the complexity of the system can be seen, with the segmental ganglia of elasmobranchs incompletely connected longitudinally, while well-developed paired sympathetic chains are present in teleosts and the tetrapods. In some groups the sympathetic chains may be reduced (dipnoans and caecilians), and they have yet to be properly described in snakes. Cranial autonomic pathways are present in the oculomotor (III) and vagus (X) nerves of gnathostome fish and the tetrapods, and, with the evolution of salivary and lachrymal glands in the tetrapods, also in the facial (VII) and glossopharyngeal (IX) nerves. PMID:20444653

  3. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
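    The pixelation step described, reducing a megapixel camera feed to the dimensions of an electrode array while preserving contrast transitions, can be sketched as below. The contrast-stretch preprocessing and the average-pooling rule are illustrative stand-ins for the AVS(2) image processing modules; the frame and grid sizes are arbitrary.

```python
def contrast_stretch(img):
    """Normalize intensities to [0, 1] so edges survive downsampling."""
    lo, hi = min(map(min, img)), max(map(max, img))
    span = (hi - lo) or 1
    return [[(v - lo) / span for v in row] for row in img]

def pixelate(img, rows, cols):
    """Average-pool an image (list of lists) to a rows x cols electrode grid."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(rows):
        out.append([])
        for c in range(cols):
            block = [img[y][x]
                     for y in range(r * h // rows, (r + 1) * h // rows)
                     for x in range(c * w // cols, (c + 1) * w // cols)]
            out[r].append(sum(block) / len(block))
    return out

frame = [[50] * 8 + [200] * 8 for _ in range(16)]  # bright object on the right
grid = pixelate(contrast_stretch(frame), 4, 4)     # e.g., a 4x4 electrode array
print(grid[0])
```

The bright/dark boundary in the source frame survives as a full-contrast transition in the coarse electrode grid, which is the behavior the enhancement modules aim to guarantee.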

  4. Exercise and the autonomic nervous system.

    PubMed

    Fu, Qi; Levine, Benjamin D

    2013-01-01

    The autonomic nervous system plays a crucial role in the cardiovascular response to acute (dynamic) exercise in animals and humans. During exercise, oxygen uptake is a function of the triple-product of heart rate and stroke volume (i.e., cardiac output) and arterial-mixed venous oxygen difference (the Fick principle). The degree to which each of the variables can increase determines maximal oxygen uptake (V˙O2max). Both "central command" and "the exercise pressor reflex" are important in determining the cardiovascular response and the resetting of the arterial baroreflex during exercise to precisely match systemic oxygen delivery with metabolic demand. In general, patients with autonomic disorders have low levels of V˙O2max, indicating reduced physical fitness and exercise capacity. Moreover, the vast majority of these patients have a blunted or abnormal cardiovascular response to exercise, especially during maximal exercise. There is now convincing evidence that some of the protective and therapeutic effects of chronic exercise training are related to its impact on the autonomic nervous system. Additionally, training-induced improvements in vascular function, blood volume expansion, cardiac remodeling, insulin resistance, and renal-adrenal function may also contribute to the protection and treatment of cardiovascular, metabolic, and autonomic disorders. Exercise training also improves mental health, helps to prevent depression, and promotes or maintains positive self-esteem. Moderate-intensity exercise for at least 30 minutes per day on at least 5 days per week is recommended for the vast majority of people. Supervised exercise training is preferable to maximize functional capacity, and may be particularly important for patients with autonomic disorders. PMID:24095123

  5. Development of a Commercially Viable, Modular Autonomous Robotic Systems for Converting any Vehicle to Autonomous Control

    NASA Technical Reports Server (NTRS)

    Parish, David W.; Grabbe, Robert D.; Marzwell, Neville I.

    1994-01-01

    A Modular Autonomous Robotic System (MARS) is being developed, consisting of a modular autonomous vehicle control system that can be retrofitted onto any vehicle to convert it to autonomous control, and that supports a modular payload for multiple applications. The MARS design is scalable, reconfigurable, and cost effective due to the use of modern open system architecture design methodologies, including serial control bus technology to simplify system wiring and enhance scalability. The design is augmented with modular, object-oriented (C++) software implementing a hierarchy of five levels of control: teleoperated, continuous guidepath following, periodic guidepath following, absolute position autonomous navigation, and relative position autonomous navigation. The present effort is focused on producing a system that is commercially viable for routine autonomous patrolling of known, semistructured environments, such as environmental monitoring of chemical and petroleum refineries, exterior physical security and surveillance, perimeter patrolling, and intrafacility transport applications.

  6. Autonomous Flight Safety System - Phase III

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Autonomous Flight Safety System (AFSS) is a joint KSC and Wallops Flight Facility project that uses tracking and attitude data from onboard Global Positioning System (GPS) and inertial measurement unit (IMU) sensors and configurable rule-based algorithms to make flight termination decisions. AFSS objectives are to increase launch capabilities by permitting launches from locations without range safety infrastructure, reduce costs by eliminating some downrange tracking and communication assets, and reduce the reaction time for flight termination decisions.

  7. Applying neural networks in autonomous systems

    NASA Astrophysics Data System (ADS)

    Thornbrugh, Allison L.; Layne, J. D.; Wilson, James M., III

    1992-03-01

    Autonomous and teleautonomous operations have been defined in a variety of ways by different groups involved with remote robotic operations. For example, Conway describes architectures for producing intelligent actions in teleautonomous systems. Applying neural nets in such systems is similar to applying them in general. However, for autonomy, learning or learned behavior may become a significant system driver. Thus, artificial neural networks are being evaluated as components in fully autonomous and teleautonomous systems. Feed-forward networks may be trained to perform adaptive signal processing, pattern recognition, data fusion, and function approximation, as in control subsystems. Certain components of particular autonomous systems become more amenable to implementation using a neural net due to a match between the net's attributes and desired attributes of the system component. Criteria have been developed for distinguishing such applications and then implementing them. The success of hardware implementation is a crucial part of this application evaluation process. Three basic applications of neural nets -- autoassociation, classification, and function approximation -- are used to exemplify this process and to highlight procedures that are followed during the requirements, design, and implementation phases. This paper assumes some familiarity with basic neural network terminology and concentrates upon the use of different neural network types while citing references that cover the underlying mathematics and related research.
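    The function-approximation role assigned to feed-forward networks can be illustrated with a minimal one-hidden-layer net trained by stochastic gradient descent. The target function (x squared), layer sizes, and learning rate are arbitrary illustrative choices, not taken from the paper.

```python
import math, random

random.seed(0)
H = 8                                             # hidden tanh units
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

# training set: approximate f(x) = x^2 on [-1, 1]
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]
lr = 0.05
for _ in range(2000):
    for x, t in data:
        y, h = forward(x)
        err = y - t                               # dL/dy for L = 0.5*err^2
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * h[j]
            b1[j] -= lr * grad_h
            w1[j] -= lr * grad_h * x
        b2 -= lr * err

mse = sum((forward(x)[0] - t) ** 2 for x, t in data) / len(data)
print(round(mse, 4))
```

After training, the mean squared error over the training set is small, showing that even a tiny feed-forward net can approximate a smooth control-style mapping.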

  8. Autonomous collaborative mission systems (ACMS) for multi-UAV missions

    NASA Astrophysics Data System (ADS)

    Chen, Y.-L.; Peot, M.; Lee, J.; Sundareswaran, V.; Altshuler, T.

    2005-05-01

    UAVs are a key element of the Army's vision for Force Transformation, and are expected to be employed in large numbers per FCS Unit of Action (UoA). This necessitates a multi-UAV level of autonomous collaboration behavior capability that meets RSTA and other mission needs of FCS UoAs. Autonomous Collaborative Mission Systems (ACMS) is a scalable architecture and behavior planning/collaboration approach to achieve this level of capability. The architecture is modular and the modules may be run in different locations/platforms to accommodate the constraints of available hardware, processing resources, and mission needs. The Mission Management Module determines the role of member autonomous entities by employing collaboration mechanisms (e.g., market-based, etc.), the individual Entity Management Modules work with the Mission Manager in determining the role and task of the entity, the individual Entity Execution Modules monitor task execution and platform navigation and sensor control, and the World Model Module hosts local and global versions of the environment and the Common Operating Picture (COP). The modules and uniform interfaces provide a consistent and platform-independent baseline mission collaboration mechanism and signaling protocol across different platforms. Further, the modular design allows flexible and convenient addition of new autonomous collaborative behaviors to the ACMS through: adding new behavioral templates in the Mission Planner component, adding new components in appropriate ACMS modules to provide new mission-specific functionality, adding or modifying constraints or parameters of the existing components, or any combination of these. In this report we describe the ACMS architecture, its main features, current development status, and future plans for simulations.
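    A market-based collaboration mechanism of the kind the Mission Management Module is said to employ can be sketched as a sequential auction: each entity bids its cost for a task and the lowest bidder wins. The entity names, task names, and straight-line-distance cost model are illustrative assumptions.

```python
import math

# hypothetical UAV positions and task locations (x, y)
uavs = {"uav1": (0.0, 0.0), "uav2": (10.0, 0.0), "uav3": (5.0, 5.0)}
tasks = {"recon_A": (1.0, 1.0), "recon_B": (9.0, 1.0), "relay_C": (5.0, 6.0)}

def bid(pos, goal):
    """Bid = estimated cost to perform the task (here, travel distance)."""
    return math.dist(pos, goal)

assignment = {}
busy = set()
for task, goal in tasks.items():
    bids = {u: bid(p, goal) for u, p in uavs.items() if u not in busy}
    winner = min(bids, key=bids.get)   # lowest-cost bidder wins the task
    assignment[task] = winner
    busy.add(winner)                   # one task per UAV in this simple round

print(assignment)
```

Each UAV ends up with the task nearest to it, which is the load-balancing behavior market-based allocation is meant to produce without a central optimizer.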

  9. Autonomous microexplosives subsurface tracing system final report.

    SciTech Connect

    Engler, Bruce Phillip; Nogan, John; Melof, Brian Matthew; Uhl, James Eugene; Dulleck, George R., Jr.; Ingram, Brian V.; Grubelich, Mark Charles; Rivas, Raul R.; Cooper, Paul W.; Warpinski, Norman Raymond; Kravitz, Stanley H.

    2004-04-01

    The objective of the autonomous micro-explosive subsurface tracing system is to image the location and geometry of hydraulically induced fractures in subsurface petroleum reservoirs. This system is based on the insertion of a swarm of autonomous micro-explosive packages during the fracturing process, with subsequent triggering of the energetic material to create an array of micro-seismic sources that can be detected and analyzed using existing seismic receiver arrays and analysis software. The project included investigations of energetic mixtures, triggering systems, package size and shape, and seismic output. Given the current absence of any technology capable of such high-resolution mapping of subsurface structures, this technology has the potential for major impact on the petroleum industry, which spends approximately $1 billion per year on hydraulic fracturing operations in the United States alone.

  10. Compact Through-The-Torch Vision System

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Gutow, David A.

    1992-01-01

    Changes in gas/tungsten-arc welding torch equipped with through-the-torch vision system make it smaller and more resistant to welding environment. Vision subsystem produces image of higher quality, flow of gas enhanced, and parts replaced more quickly and easily. Coaxial series of lenses and optical components provides overhead view of joint and weld puddle for real-time control. Designed around miniature high-resolution video camera. Smaller size enables torch to weld joints formerly inaccessible.

  11. Autonomous system for cross-country navigation

    NASA Astrophysics Data System (ADS)

    Stentz, Anthony; Brumitt, Barry L.; Coulter, R. C.; Kelly, Alonzo

    1993-05-01

    Autonomous cross-country navigation is essential for outdoor robots moving about in unstructured environments. Most existing systems use range sensors to determine the shape of the terrain, plan a trajectory that avoids obstacles, and then drive the trajectory. Performance has been limited by the range and accuracy of sensors, insufficient vehicle-terrain interaction models, and the availability of high-speed computers. As these elements improve, higher-speed navigation on rougher terrain becomes possible. We have developed a software system for autonomous navigation that provides greater capability. The perception system supports a large braking distance by fusing multiple range images to build a map of the terrain in front of the vehicle. The system identifies range shadows and interpolates undersampled regions to account for rough-terrain effects. The motion planner reduces computational complexity by investigating a minimum number of trajectories. Speeds along the trajectory are set to provide dynamic stability. The entire system was tested in simulation, and a subset of the capability was demonstrated on a real vehicle. Results to date include a continuous 5.1-kilometer run across moderate terrain with obstacles. This paper begins with the applications, prior work, limitations, and current paradigms for autonomous cross-country navigation, and then describes our contribution to the area.
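    The perception step described, fusing multiple range images into a terrain map and filling range shadows by interpolation, can be sketched on a one-dimensional elevation grid. The averaging fusion rule, the 1-D reduction, and the nearest-neighbor interpolation are illustrative assumptions.

```python
def fuse(scans):
    """Average several noisy elevation scans cell by cell; None = range shadow."""
    n = len(scans[0])
    fused = []
    for i in range(n):
        vals = [s[i] for s in scans if s[i] is not None]
        fused.append(sum(vals) / len(vals) if vals else None)
    return fused

def interpolate(cells):
    """Fill cells no scan covered from their nearest known neighbors."""
    out = cells[:]
    for i, v in enumerate(out):
        if v is None:
            left = next((out[j] for j in range(i - 1, -1, -1)
                         if out[j] is not None), None)
            right = next((cells[j] for j in range(i + 1, len(cells))
                          if cells[j] is not None), None)
            known = [k for k in (left, right) if k is not None]
            out[i] = sum(known) / len(known) if known else 0.0
    return out

scan_a = [0.0, 0.1, None, None, 0.5]   # None marks a range shadow
scan_b = [0.0, 0.3, None, 0.4, 0.5]
terrain = interpolate(fuse([scan_a, scan_b]))
print(terrain)
```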

  12. Mission planning for autonomous systems

    NASA Technical Reports Server (NTRS)

    Pearson, G.

    1987-01-01

    Planning is a necessary task for intelligent, adaptive systems operating independently of human controllers. A mission planning system that performs task planning by decomposing a high-level mission objective into subtasks and synthesizing a plan for those tasks at varying levels of abstraction is discussed. Researchers use a blackboard architecture to partition the search space and direct the focus of attention of the planner. Using advanced planning techniques, they can control plan synthesis for the complex planning tasks involved in mission planning.

  13. Autonomous omnidirectional spacecraft antenna system

    NASA Technical Reports Server (NTRS)

    Taylor, T. H.

    1983-01-01

    The development of a low-gain Electronically Switchable Spherical Array Antenna is discussed. This antenna provides roughly 7 dBic gain for receive/transmit operation between user satellites and the Tracking and Data Relay Satellite System. When used as a pair, the antennas provide spherical coverage. The antenna was tested in its primary operating modes: directed beam, retrodirective, and omnidirectional.

  14. System for autonomous monitoring of bioagents

    SciTech Connect

    Langlois, Richard G.; Milanovich, Fred P.; Colston, Jr, Billy W.; Brown, Steve B.; Masquelier, Don A.; Mariella, Jr., Raymond P.; Venkateswaran, Kodomudi

    2015-06-09

    An autonomous monitoring system for monitoring for bioagents. A collector gathers the air, water, soil, or substance being monitored. A sample preparation means for preparing a sample is operatively connected to the collector. A detector for detecting the bioagents in the sample is operatively connected to the sample preparation means. One embodiment of the present invention includes confirmation means for confirming the bioagents in the sample.

  15. Autonomous Deicing System For Airplane Wing

    NASA Technical Reports Server (NTRS)

    Hickman, G. A.; Gerardi, J. J.

    1993-01-01

    Prototype autonomous deicing system for airplane includes network of electronic and electromechanical modules at various locations in wings, connected to central data-processing unit. Small, integrated solid-state device uses long coils installed under leading edge, exciting small vibrations to detect ice and larger vibrations to knock ice off. In extension of concept, outputs of vibration sensors and other sensors used to detect rivet-line fractures, fatigue cracks, and other potentially dangerous defects.

  16. A multilayer perceptron hazard detector for vision-based autonomous planetary landing

    NASA Astrophysics Data System (ADS)

    Lunghi, Paolo; Ciarambino, Marco; Lavagna, Michèle

    2016-07-01

    A hazard detection and target selection algorithm for autonomous spacecraft planetary landing, based on Artificial Neural Networks, is presented. From a single image of the landing area, acquired by a VIS camera during the descent, the system computes a hazard map, which is exploited to select the best target in terms of safety, guidance constraints, and scientific interest. The generalization properties of ANNs allow the system to operate correctly even in conditions not explicitly considered during calibration. The network architecture design, training, verification, and results are critically presented. Performance is assessed in terms of recognition accuracy and selected-target safety. Results for a lunar landing scenario are discussed to highlight the effectiveness of the system.
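    The hazard-map-then-select pipeline can be sketched as below: per-site features extracted from the descent image feed a small perceptron that scores hazard, and the safest site is chosen. The features (slope, roughness), the hand-set weights, and the site names are illustrative assumptions standing in for the paper's trained multilayer network and calibration.

```python
import math

def score(slope, roughness, w=(3.0, 2.0), bias=-1.0):
    """Sigmoid hazard probability from two normalized terrain features."""
    z = w[0] * slope + w[1] * roughness + bias
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical candidate sites with (slope, roughness) in [0, 1]
sites = {
    "crater_rim": (0.8, 0.6),
    "flat_plain": (0.1, 0.1),
    "boulder_field": (0.2, 0.9),
}
hazard = {name: score(*feat) for name, feat in sites.items()}
target = min(hazard, key=hazard.get)   # select the lowest-hazard landing target
print(hazard, target)
```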

  17. Autonomous grain combine control system

    DOEpatents

    Hoskinson, Reed L.; Kenney, Kevin L.; Lucas, James R.; Prickel, Marvin A.

    2013-06-25

    A system for controlling a grain combine having a rotor/cylinder, a sieve, a fan, a concave, a feeder, a header, an engine, and a control system. The feeder of the grain combine is engaged and the header is lowered. A separator loss target, engine load target, and a sieve loss target are selected. Grain is harvested with the lowered header passing the grain through the engaged feeder. Separator loss, sieve loss, engine load and ground speed of the grain combine are continuously monitored during the harvesting. If the monitored separator loss exceeds the selected separator loss target, the speed of the rotor/cylinder, the concave setting, the engine load target, or a combination thereof is adjusted. If the monitored sieve loss exceeds the selected sieve loss target, the speed of the fan, the size of the sieve openings, or the engine load target is adjusted.
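    The control logic the patent describes, comparing monitored losses against selected targets each cycle and adjusting the corresponding settings, can be sketched as a simple feedback step. The step sizes, adjustment directions, and setting names below are illustrative assumptions, not the patented tuning.

```python
def control_step(state, separator_loss, sieve_loss,
                 separator_target=2.0, sieve_target=1.5):
    """One monitoring cycle: nudge settings whose loss exceeds its target."""
    if separator_loss > separator_target:
        state["rotor_speed"] -= 10      # adjust rotor/cylinder speed
        state["concave"] += 1           # and the concave setting
    if sieve_loss > sieve_target:
        state["fan_speed"] += 20        # adjust fan speed
        state["sieve_opening"] += 1     # and the size of the sieve openings
    return state

state = {"rotor_speed": 900, "concave": 4, "fan_speed": 700, "sieve_opening": 8}
state = control_step(state, separator_loss=3.1, sieve_loss=1.2)
print(state)   # separator loss exceeded its target, sieve loss did not
```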

  18. Volumetric imaging system for the ionosphere (VISION)

    NASA Astrophysics Data System (ADS)

    Dymond, Kenneth F.; Budzien, Scott A.; Nicholas, Andrew C.; Thonnard, Stefan E.; Fortna, Clyde B.

    2002-01-01

    The Volumetric Imaging System for the Ionosphere (VISION) is designed to use limb and nadir images to reconstruct the three-dimensional distribution of electrons over a 1000 km wide by 500 km high slab beneath the satellite with 10 km x 10 km x 10 km voxels. The primary goal of the VISION is to map and monitor global and mesoscale (> 10 km) electron density structures, such as the Appleton anomalies and field-aligned irregularity structures. The VISION consists of three UV limb imagers, two UV nadir imagers, a dual frequency Global Positioning System (GPS) receiver, and a coherently emitting three frequency radio beacon. The limb imagers will observe the O II 83.4 nm line (daytime electron density), O I 135.6 nm line (nighttime electron density and daytime O density), and the N2 Lyman-Birge-Hopfield (LBH) bands near 143.0 nm (daytime N2 density). The nadir imagers will observe the O I 135.6 nm line (nighttime electron density and daytime O density) and the N2 LBH bands near 143.0 nm (daytime N2 density). The GPS receiver will monitor the total electron content between the satellite containing the VISION and the GPS constellation. The three frequency radio beacon will be used with ground-based receiver chains to perform computerized radio tomography below the satellite containing the VISION. The measurements made using the two radio frequency instruments will be used to validate the VISION UV measurements.

  19. Information capacity of electronic vision systems

    NASA Astrophysics Data System (ADS)

    Taubkin, Igor I.; Trishenkov, Mikhail A.

    1996-10-01

    The comparison of various electronic-optical vision systems has been conducted based on the criterion of ultimate information capacity, C, limited by fluctuations of the flux of quanta. The information capacity of daylight, night, and thermal vision systems is determined first of all by the number of picture elements, M, in the optical system. Each element, under a sufficient level of irradiation, can transfer about one byte of information in the standard frame time, so C ≈ M bytes per frame. The proportionality factor of one byte per picture element applies to daylight and thermal vision systems, in which the photocharge in a unit cell of the imager is limited by storage capacity; in general it varies within a small interval, from 0.5 byte per element per frame for night vision systems to 2 bytes per element per frame for ideal thermal imagers. The ultimate specific information capacity, C*, of electronic vision systems under low irradiation levels rises with increasing density of optical channels until the number of distinguishable irradiance gradations falls below two in each channel. In this case, the maximum value of C* turns out to be proportional to the flux of quanta coming from the object under observation. Under a high level of irradiation, C* is limited by diffraction effects and amounts to 1/λ² bytes/cm² per frame.
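    The rule of thumb C ≈ M bytes per frame, together with the quoted bounds of 0.5 and 2 bytes per element, gives a quick worked example. The pixel count below is an arbitrary illustrative value, not one from the paper.

```python
def capacity_bytes_per_frame(m_elements, bytes_per_element=1.0):
    """Ultimate information capacity per frame: C ~ M * (bytes per element)."""
    return m_elements * bytes_per_element

m = 640 * 480                                   # picture elements (illustrative)
daylight = capacity_bytes_per_frame(m)          # ~1 byte per element
night = capacity_bytes_per_frame(m, 0.5)        # lower bound quoted for night vision
thermal = capacity_bytes_per_frame(m, 2.0)      # upper bound for ideal thermal imagers
print(daylight, night, thermal)
```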

  20. A digital head-up display system as part of an integrated autonomous landing system concept

    NASA Astrophysics Data System (ADS)

    Wisely, Paul L.

    2008-04-01

    Considerable interest continues in both the aerospace industry and the military in the concept of autonomous landing guidance. As previously reported, BAE Systems has been engaged for some time in an internally funded program to replace the high voltage power supply, tube, and deflection amplifiers of its head-up displays with an all-digital, solid-state illuminated image system, based on research into the requirements for such a display as part of an integrated Enhanced Vision System. This paper describes the progress made to date in realising and testing a weather-penetrating system incorporating an all-digital head-up display as its pilot-machine interface.

  1. Why Computer-Based Systems Should be Autonomic

    NASA Technical Reports Server (NTRS)

    Sterritt, Roy; Hinchey, Mike

    2005-01-01

    The objective of this paper is to discuss why computer-based systems should be autonomic, where autonomicity implies self-managing, often conceptualized in terms of being self-configuring, self-healing, self-optimizing, self-protecting and self-aware. We look at motivations for autonomicity, examine how more and more systems are exhibiting autonomic behavior, and finally look at future directions.

  2. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.

    PubMed

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan

    2016-03-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
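    The inverse perspective mapping (IPM) cue at the heart of the method can be sketched geometrically: under the flat-floor assumption, a pixel's ray from a camera of known height intersects the ground at a predictable distance, and obstacle pixels violate that prediction. The pinhole parameters below are illustrative assumptions, not the paper's calibration.

```python
import math

def ground_distance(v, cam_height=0.3, pitch=math.radians(15),
                    fy=500.0, cy=240.0):
    """Distance along the floor to the point imaged at image row v,
    for a camera at cam_height tilted down by pitch (flat-floor assumption)."""
    angle = pitch + math.atan2(v - cy, fy)   # ray angle below horizontal
    if angle <= 0:
        return float("inf")                  # ray at/above horizon: no ground hit
    return cam_height / math.tan(angle)

# rows lower in the image (larger v) map to nearer floor points;
# a tracked point whose apparent distance disagrees with this mapping
# between frames is evidence of an obstacle rising off the floor
for v in (260, 320, 420):
    print(v, round(ground_distance(v), 2))
```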

  3. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.

    PubMed

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540

  4. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    PubMed Central

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540
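As an illustration of the IPM idea above, the following sketch warps the previous frame through an assumed ground-plane homography and flags pixels whose intensity disagrees with the current frame; floor pixels obey the homography while obstacle pixels above the floor do not. The homography `H` and the intensity threshold are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def apply_homography(H, pts):
    """Map an (N, 2) array of pixel coordinates through a 3x3 homography."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

def ipm_obstacle_mask(img_prev, img_curr, H, thresh=30.0):
    """Label pixels whose appearance disagrees after ground-plane warping.

    Floor pixels follow the ground homography between frames, so their
    warped intensities match; obstacle pixels do not.
    """
    h, w = img_curr.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src = apply_homography(H, np.stack([xs.ravel(), ys.ravel()], axis=1))
    sx = np.clip(np.round(src[:, 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(src[:, 1]).astype(int), 0, h - 1)
    warped_prev = img_prev[sy, sx].reshape(h, w)
    return np.abs(img_curr.astype(float) - warped_prev.astype(float)) > thresh
```

In the paper, a mask of this kind would seed the Markov random field segmentation together with the floor appearance model.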

  5. Flight Testing an Integrated Synthetic Vision System

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low-visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development and demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved weather radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream G-V aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and for an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS, and DIME) were integrated into the larger SVS concept design. This paper presents the experimental methods and high-level results of this flight test.

  6. Flight testing an integrated synthetic vision system

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III

    2005-05-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low-visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development and demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved weather radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream G-V aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and for an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS, and DIME) were integrated into the larger SVS concept design. This paper presents the experimental methods and high-level results of this flight test.

  7. Video-image-based neural network guidance system with adaptive view-angles for autonomous vehicles

    NASA Astrophysics Data System (ADS)

    Luebbers, Paul G.; Pandya, Abhijit S.

    1991-08-01

    This paper describes the guidance function of an autonomous vehicle based on a neural network controller using video images with adaptive view angles for sensory input. The guidance function for an autonomous vehicle provides the low-level control required for maintaining the autonomous vehicle on a prescribed trajectory. Neural networks possess unique properties such as the ability to perform sensor fusion, the ability to learn, and fault tolerant architectures, qualities which are desirable for autonomous vehicle applications. To demonstrate the feasibility of using neural networks in this type of an application, an Intelledex 405 robot fitted with a video camera and vision system was used to model an autonomous vehicle with a limited range of motion. In addition to fixed-angle video images, a set of images using adaptively varied view angles based on speed are used as the input to the neural network controller. It was shown that the neural network was able to control the autonomous vehicle model along a path composed of path segments unlike the exemplars with which it was trained. This system was designed to assess only the guidance system, and it was assumed that other functions employed in autonomous vehicle control systems (mission planning, navigation, and obstacle avoidance) are to be implemented separately and are providing a desired path to the guidance system. The desired path trajectory is presented to the robot in the form of a two-dimensional path, with centerline, that is to be followed. A video camera and associated vision system provides video image data as control feedback to the guidance system. The neural network controller uses Gaussian curves for the output vector to facilitate interpolation and generalization of the output space.
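The Gaussian output coding mentioned above can be illustrated with a small sketch; the neuron count, steering range, and width sigma are illustrative assumptions, not the paper's values:

```python
import numpy as np

def encode_gaussian(angle, centers, sigma=0.1):
    """Target output vector: a Gaussian bump centered on the desired angle."""
    return np.exp(-0.5 * ((centers - angle) / sigma) ** 2)

def decode_gaussian(activations, centers):
    """Recover a continuous angle as the activation-weighted mean of centers."""
    return float(np.sum(activations * centers) / np.sum(activations))

centers = np.linspace(-1.0, 1.0, 21)   # 21 output neurons over the steering range
recovered = decode_gaussian(encode_gaussian(0.35, centers), centers)
```

Because the bump spans several neurons, the decoded angle interpolates between neuron centers, which is the interpolation and generalization property the abstract attributes to the Gaussian output vectors.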

  8. Seizures and brain regulatory systems: Consciousness, sleep, and autonomic systems

    PubMed Central

    Sedigh-Sarvestani, Madineh; Blumenfeld, Hal; Loddenkemper, Tobias; Bateman, Lisa M

    2014-01-01

    Research into the physiological underpinnings of epilepsy has revealed reciprocal relationships between seizures and the activity of several regulatory systems in the brain, including those governing sleep, consciousness and autonomic functions. This review highlights recent progress in understanding and utilizing the relationships between seizures and the arousal or consciousness system, the sleep-wake and associated circadian system, and the central autonomic network. PMID:25233249

  9. Autonomous Flight Safety System Road Test

    NASA Technical Reports Server (NTRS)

    Simpson, James C.; Zoemer, Roger D.; Forney, Chris S.

    2005-01-01

On February 3, 2005, Kennedy Space Center (KSC) conducted the first Autonomous Flight Safety System (AFSS) test on a moving vehicle -- a van driven around the KSC industrial area. A subset of the Phase III design was used, consisting of a single computer, GPS receiver, and GPS antenna. The description and results of this road test are presented in this report. AFSS is a joint KSC and Wallops Flight Facility project that is in its third phase of development. AFSS is an independent subsystem intended for use with Expendable Launch Vehicles that uses tracking data from redundant onboard sensors to autonomously make flight termination decisions using software-based rules implemented on redundant flight processors. The goals of this project are to increase capabilities by allowing launches from locations that do not have or cannot afford extensive ground-based range safety assets, to decrease range costs, and to decrease reaction time for special situations.
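A minimal sketch of the kind of software-based rule voting the abstract describes, assuming a simple rectangular safety corridor and a two-of-N quorum (both are hypothetical simplifications of the actual AFSS rules):

```python
def outside_corridor(position, boundary):
    """A single rule: has this tracking source left the safe corridor?"""
    x, y = position
    xmin, ymin, xmax, ymax = boundary
    return not (xmin <= x <= xmax and ymin <= y <= ymax)

def terminate_flight(sensor_positions, boundary, quorum=2):
    """Vote across redundant onboard sensors; terminate only on a quorum,
    so a single faulty sensor cannot by itself trigger termination."""
    votes = sum(outside_corridor(p, boundary) for p in sensor_positions)
    return votes >= quorum
```

The quorum requirement mirrors the role of the redundant sensors and flight processors: no single-point failure decides termination.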

  10. Agent Technology, Complex Adaptive Systems, and Autonomic Systems: Their Relationships

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Rash, James; Rouff, Christopher; Hinchey, Mike

    2004-01-01

    To reduce the cost of future spaceflight missions and to perform new science, NASA has been investigating autonomous ground and space flight systems. These cost-reduction goals have been further complicated by nanosatellites for future science data-gathering, which will have large communications delays and at times be out of contact with ground control for extended periods. This paper describes two prototype agent-based systems developed at NASA Goddard Space Flight Center (GSFC) to demonstrate autonomous operations of future space flight missions: the Lights-out Ground Operations System (LOGOS) and the Agent Concept Testbed (ACT). The paper discusses the architecture of the two agent-based systems, operational scenarios of both, and the two systems' autonomic properties.

  11. Three-Dimensional Robotic Vision System

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1989-01-01

    Stereoscopy and motion provide clues to the outlines of objects. A digital image-processing system acts as an "intelligent" automatic machine-vision system by processing views from stereoscopic television cameras into the three-dimensional coordinates of a moving object in view. An epipolar-line technique is used to find corresponding points in the stereoscopic views. The robotic vision system analyzes views from two television cameras to detect rigid three-dimensional objects and reconstruct them numerically in terms of the coordinates of their corner points. Stereoscopy and the effects of motion on the two images complement each other in providing the image-analyzing subsystem with clues to the natures and locations of principal features.

  12. A stereo vision-based obstacle detection system in vehicles

    NASA Astrophysics Data System (ADS)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance functions, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint, and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle, and a vehicle cutting into the lane. Then, the position parameters of the obstacles and leading vehicles can be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
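Once corresponding pairs are matched in a rectified stereo pair, obstacle range follows from the standard relation Z = fB/d; a minimal sketch with illustrative focal length and baseline values (not from the paper):

```python
def disparity_to_range(disparity_px, focal_px, baseline_m):
    """Depth from a rectified stereo pair: Z = f * B / d.

    disparity_px: horizontal pixel offset between the matched pair
    focal_px:     focal length expressed in pixels
    baseline_m:   distance between the two camera centers in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_px * baseline_m / disparity_px

# e.g. f = 800 px, B = 0.3 m, d = 8 px gives a range of 30 m
```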

  13. Sustainable and Autonomic Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Sterritt, Roy; Rouff, Christopher; Rash, James L.; Truszkowski, Walter

    2006-01-01

    Visions for future space exploration have long-term science missions in sight, resulting in the need for sustainable missions. Survivability is a critical property of sustainable systems and may be addressed through autonomicity, an emerging paradigm for the self-management of future computer-based systems inspired by the human autonomic nervous system. This paper examines some of the ongoing research efforts to realize these survivable-systems visions, with specific emphasis on developments in autonomic policies.

  14. Radiation impacts on star-tracker performance and vision systems in space

    NASA Astrophysics Data System (ADS)

    Jørgensen, John L.; Thuesen, Gøsta G.; Betto, Maurizio; Riis, Troels

    2000-03-01

    CCD chips are widely used in spacecraft applications due to their inherent high resolution, linearity, and sensitivity and their small size and power consumption, and irrespective of their rather poor handling of ionizing radiation. One of the experiments onboard the Teamsat satellite, the payload of the prototype Ariane 502, was the Autonomous Vision System (AVS), a fully autonomous star-tracker with several advanced vision features. The main objective of the AVS was to study autonomous operations during severe radiation flux and after appreciable total dose. The AVS experiment and the radiation experienced onboard Teamsat are described. Examples of various radiation impacts on the AVS instrument are given and compared to ground-based radiation tests.

  15. Early light vision isomorphic singular (ELVIS) system

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz P.; Ternovskiy, Igor V.; DeBacker, Theodore A.; Caulfield, H. John

    2000-07-01

    In shallow-water military scenarios, UUVs (Unmanned Underwater Vehicles) are required to protect assets against mines, swimmers, and other underwater military objects. It would be desirable if such UUVs could see autonomously in a way similar to humans, at least at the level of the primary visual cortex. This paper proposes an approach to developing such a UUV system.

  16. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    PubMed

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

    This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.

  17. Vision-based map building and trajectory planning to enable autonomous flight through urban environments

    NASA Astrophysics Data System (ADS)

    Watkins, Adam S.

    The desire to use Unmanned Air Vehicles (UAVs) in a variety of complex missions has motivated the need to increase the autonomous capabilities of these vehicles. This research presents autonomous vision-based mapping and trajectory planning strategies for a UAV navigating in an unknown urban environment. It is assumed that the vehicle's inertial position is unknown because GPS in unavailable due to environmental occlusions or jamming by hostile military assets. Therefore, the environment map is constructed from noisy sensor measurements taken at uncertain vehicle locations. Under these restrictions, map construction becomes a state estimation task known as the Simultaneous Localization and Mapping (SLAM) problem. Solutions to the SLAM problem endeavor to estimate the state of a vehicle relative to concurrently estimated environmental landmark locations. The presented work focuses specifically on SLAM for aircraft, denoted as airborne SLAM, where the vehicle is capable of six degree of freedom motion characterized by highly nonlinear equations of motion. The airborne SLAM problem is solved with a variety of filters based on the Rao-Blackwellized particle filter. Additionally, the environment is represented as a set of geometric primitives that are fit to the three-dimensional points reconstructed from gathered onboard imagery. The second half of this research builds on the mapping solution by addressing the problem of trajectory planning for optimal map construction. Optimality is defined in terms of maximizing environment coverage in minimum time. The planning process is decomposed into two phases of global navigation and local navigation. The global navigation strategy plans a coarse, collision-free path through the environment to a goal location that will take the vehicle to previously unexplored or incompletely viewed territory. 
The local navigation strategy plans detailed, collision-free paths within the currently sensed environment that maximize local coverage.

  18. Processing system for an enhanced vision system

    NASA Astrophysics Data System (ADS)

    Yelton, Dennis J.; Bernier, Ken L.; Sanders-Reed, John N.

    2004-08-01

    An Enhanced Vision System (EVS) combines imagery from multiple sensors, possibly running at different frame rates and pixel counts, onto a display. In the case of a Helmet Mounted Display (HMD), the user's line of sight is continuously changing, with the result that the sensor pixels rendered on the display change in real time. In an EVS, the various sensors provide overlapping fields of view, which requires stitching imagery together to provide a seamless mosaic to the user. Further, different modality sensors may be present, requiring the fusion of imagery from the sensors. All of this takes place in a dynamic flight environment where the aircraft (with fixed-mounted sensors) is changing position and orientation while the users independently change their lines of sight. In order to provide well-registered, seamless imagery, very low throughput latencies are required while dealing with huge volumes of data. This poses both algorithmic and processing challenges which must be overcome to provide a suitable system. This paper discusses the system architecture, efficient stitching and fusing algorithms, and hardware implementation issues.
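The seam-stitching step can be sketched as linear feathering across the overlap between two sensors' images; this is a toy illustration of the idea, not the system's actual algorithm:

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Stitch two equally tall images whose last/first `overlap` columns
    cover the same scene, feathering linearly across the seam so no
    visible join appears in the mosaic."""
    h, wl = left.shape
    w = np.linspace(1.0, 0.0, overlap)            # left weight ramps 1 -> 0
    seam = left[:, wl - overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :wl - overlap], seam, right[:, overlap:]])
```

A real EVS must also resample each sensor into a common projection before blending; this sketch assumes that registration has already been done.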

  19. MARVEL: A system that recognizes world locations with stereo vision

    SciTech Connect

    Braunegg, D.J. (Artificial Intelligence Lab.)

    1993-06-01

    MARVEL is a system that supports autonomous navigation by building and maintaining its own models of world locations and using these models and stereo vision input to recognize its location in the world and its position and orientation within that location. The system emphasizes the use of simple, easily derivable features for recognition, whose aggregate identifies a location, instead of complex features that also require recognition. MARVEL is designed to be robust with respect to input errors and to respond to a gradually changing world by updating its world location models. In over 1,000 recognition tests using real-world data, MARVEL yielded a false negative rate under 10% with zero false positives.
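The aggregate-of-simple-features idea can be sketched as histogram-style matching of observed feature counts against stored location models; the feature names and threshold here are hypothetical, not MARVEL's actual representation:

```python
def match_location(observed, models, threshold=0.8):
    """Score each stored location model by the overlap between its
    simple-feature counts and the observed aggregate; report no match
    (unknown place) when every score falls below the threshold."""
    best, best_score = None, 0.0
    for name, model in models.items():
        common = sum(min(observed.get(f, 0), c) for f, c in model.items())
        score = common / max(sum(model.values()), 1)
        if score > best_score:
            best, best_score = name, score
    return best if best_score >= threshold else None
```

Rejecting low-scoring matches rather than forcing a decision is one way to obtain the zero-false-positive behavior the abstract reports.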

  20. Approach to constructing reconfigurable computer vision system

    NASA Astrophysics Data System (ADS)

    Xue, Jianru; Zheng, Nanning; Wang, Xiaoling; Zhang, Yongping

    2000-10-01

    In this paper, we propose an approach to constructing a reconfigurable vision system. We found that timely and efficient execution of early vision tasks can significantly enhance the performance of whole computer vision tasks, so we abstract out a set of basic, computationally intensive stream operations that may be performed in parallel and embody them in a series of specialized front-end processors. These processors, which are based on FPGAs (field programmable gate arrays), can be reprogrammed to support a range of feature maps, such as edge detection and linking, and image filtering. The front-end processors and a powerful DSP constitute a computing platform that can perform many computer vision tasks. Additionally, we adopt focus-of-attention techniques to reduce I/O and computational demands by performing early vision processing only within a particular region of interest. We then implement a multi-page, dual-ported image memory interface between the image input and the computing platform (including the front-end processors and DSP). Early vision features are loaded into banks of dual-ported image memory arrays, which are continually raster-scan updated at high speed from the input image or video data stream. Moreover, the computing platform has completely asynchronous, random access to the image data and any other early vision feature maps through the dual-ported memory banks. In this way, computing platform resources can be properly allocated to a region of interest and decoupled from the task of dealing with a high-speed serial raster-scan input. Finally, we choose the PCI bus as the main channel between the PC and the computing platform. Consequently, the front-end processors' control registers and the DSP's program memory are mapped into the PC's memory space, which allows the user to reconfigure the system at any time. We also present test results of a computer vision application based on the system.
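The focus-of-attention idea of running early vision only inside a region of interest can be sketched as follows, using a simple horizontal-gradient operator as a software stand-in for the FPGA feature maps:

```python
import numpy as np

def edges_in_roi(image, roi):
    """Apply a simple horizontal-gradient operator only inside the ROI,
    leaving the rest of the frame untouched (zero response).

    roi: (y0, y1, x0, x1) bounds of the region of interest.
    """
    y0, y1, x0, x1 = roi
    out = np.zeros_like(image, dtype=float)
    patch = image[y0:y1, x0:x1].astype(float)
    out[y0:y1, x0:x1 - 1] = np.abs(np.diff(patch, axis=1))  # |I(x+1) - I(x)|
    return out
```

Restricting the operator to the ROI is what cuts the I/O and computational load: only the attended pixels are ever fetched and processed.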

  1. Verification of Autonomous Systems for Space Applications

    NASA Technical Reports Server (NTRS)

    Brat, G.; Denney, E.; Giannakopoulou, D.; Frank, J.; Jonsson, A.

    2006-01-01

    Autonomous software, especially if it is model-based, can play an important role in future space applications. For example, it can help streamline ground operations, assist in autonomous rendezvous and docking operations, or even help recover from problems (e.g., planners can be used to explore the space of recovery actions for a power subsystem and implement a solution without, or with minimal, human intervention). In general, the exploration capabilities of model-based systems give them great flexibility. Unfortunately, this also makes them unpredictable to our human eyes, both in terms of their execution and their verification. Traditional verification techniques are inadequate for these systems since they are mostly based on testing, which implies a very limited exploration of their behavioral space. In our work, we explore how advanced V&V techniques, such as static analysis, model checking, and compositional verification, can be used to gain trust in model-based systems. We also describe how synthesis can be used in the context of system reconfiguration and in the context of verification.

  2. Malicious Hubs: Detecting Abnormally Malicious Autonomous Systems

    SciTech Connect

    Kalafut, Andrew J.; Shue, Craig A; Gupta, Prof. Minaxi

    2010-01-01

    While many attacks are distributed across botnets, investigators and network operators have recently targeted malicious networks through high profile autonomous system (AS) de-peerings and network shut-downs. In this paper, we explore whether some ASes indeed are safe havens for malicious activity. We look for ISPs and ASes that exhibit disproportionately high malicious behavior using 12 popular blacklists. We find that some ASes have over 80% of their routable IP address space blacklisted and others account for large fractions of blacklisted IPs. Overall, we conclude that examining malicious activity at the AS granularity can unearth networks with lax security or those that harbor cybercrime.
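The per-AS concentration metric described above (the fraction of an AS's routable address space that is blacklisted) can be computed roughly as follows; the prefix and IP lists are toy data, and a real implementation would aggregate BGP announcements and the twelve blacklist feeds:

```python
import ipaddress

def blacklisted_fraction(announced_prefixes, blacklisted_ips):
    """Fraction of an AS's routable address space appearing on blacklists.

    announced_prefixes: CIDR strings the AS originates (e.g. "192.0.2.0/30")
    blacklisted_ips:    individual blacklisted addresses as strings
    """
    nets = [ipaddress.ip_network(p) for p in announced_prefixes]
    total = sum(n.num_addresses for n in nets)
    hits = sum(
        1 for ip in blacklisted_ips
        if any(ipaddress.ip_address(ip) in n for n in nets)
    )
    return hits / total
```

An AS scoring above, say, 0.8 on this metric would correspond to the "over 80% of their routable IP address space blacklisted" cases the study reports.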

  3. Physiology of the Autonomic Nervous System

    PubMed Central

    2007-01-01

    This manuscript discusses the physiology of the autonomic nervous system (ANS). The following topics are presented: regulation of activity; efferent pathways; sympathetic and parasympathetic divisions; neurotransmitters, their receptors and the termination of their activity; functions of the ANS; and the adrenal medullae. In addition, the application of this material to the practice of pharmacy is of special interest. Two case studies regarding insecticide poisoning and pheochromocytoma are included. The ANS and the accompanying case studies are discussed over 5 lectures and 2 recitation sections during a 2-semester course in Human Physiology. The students are in the first-professional year of the doctor of pharmacy program. PMID:17786266

  4. PET imaging of the autonomic nervous system.

    PubMed

    Thackeray, James T; Bengel, Frank M

    2016-12-01

    The autonomic nervous system is the primary extrinsic control of heart rate and contractility, and is subject to adaptive and maladaptive changes in cardiovascular disease. Consequently, noninvasive assessment of neuronal activity and function is an attractive target for molecular imaging. A myriad of targeted radiotracers have been developed over the last 25 years for imaging various components of the sympathetic and parasympathetic signal cascades. While routine clinical use remains somewhat limited, a number of larger scale studies in recent years have supplied momentum to molecular imaging of autonomic signaling. Specifically, the findings of the ADMIRE HF trial directly led to United States Food and Drug Administration approval of 123I-metaiodobenzylguanidine (MIBG) for Single Photon Emission Computed Tomography (SPECT) assessment of sympathetic neuronal innervation, and comparable results have been reported using the analogous PET agent 11C-meta-hydroxyephedrine (HED). Due to the inherent capacity for dynamic quantification and higher spatial resolution, regional analysis may be better served by PET. In addition, preliminary clinical and extensive preclinical experience has provided a broad foundation of cardiovascular applications for PET imaging of the autonomic nervous system. Recent years have witnessed the growth of novel quantification techniques, expansion of multiple tracer studies, and improved understanding of the uptake of different radiotracers, such that the transitional biology of dysfunctional subcellular catecholamine handling can be distinguished from complete denervation. As a result, sympathetic neuronal molecular imaging is poised to play a role in individualized patient care, by stratifying cardiovascular risk, visualizing underlying biology, and guiding and monitoring therapy. PMID:27611712

  5. Autonomous Underwater Vehicle Magnetic Mapping System

    NASA Astrophysics Data System (ADS)

    Steigerwalt, R.; Johnson, R. M.; Trembanis, A. C.; Schmidt, V. E.; Tait, G.

    2012-12-01

    An Autonomous Underwater Vehicle (AUV) Magnetic Mapping (MM) System has been developed and tested for military munitions detection as well as pipeline locating, wreck searches, and geologic surveys in underwater environments. The system comprises a high-sensitivity Geometrics G-880AUV cesium vapor magnetometer integrated with a Teledyne Gavia AUV and associated Doppler-enabled inertial navigation, further utilizing traditional acoustic bathymetric and side scan imaging. All onboard sensors and associated electronics are managed through customized crew members to operate autonomously through the vehicle's primary control module. Total-field magnetic measurements are recorded with asynchronous time-stamped data logs which include position, altitude, heading, pitch, roll, and electrical current usage. Pre-planned mission information can be uploaded to the system by operators to define data collection metrics, including speed, height above the seafloor, and lane or transect spacing, specifically designed to meet data quality objectives for the survey. As a result of the AUV's modular design, autonomous navigation, and rapid deployment capabilities, the AUV MM System provides cost savings over current surface vessel surveys by reducing the mobilization/demobilization effort, thus requiring less manpower for operation and reducing or eliminating the need for a surface support vessel altogether. When the system completes its mission, data can be remotely downloaded via W-LAN and exported for use in advanced signal processing platforms. Magnetic compensation software has been concurrently developed to accept electrical current measurements directly from the AUV to address distortions from permanent and induced magnetization effects on the magnetometer. Maneuver and electrical current compensation terms can be extracted from the magnetic survey missions to perform automated post-process corrections. Considerable suppression of system noise has been observed over traditional
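A much-simplified sketch of the electrical-current compensation step: fit the total-field record against the logged current draw and subtract the correlated component. Real maneuver compensation (with permanent and induced magnetization terms) is considerably more involved than this single-regressor fit:

```python
import numpy as np

def compensate_current(field_nT, current_A):
    """Remove the component of the total-field record that is linearly
    correlated with vehicle current draw, leaving the ambient field."""
    A = np.column_stack([current_A, np.ones_like(current_A)])
    slope_intercept, *_ = np.linalg.lstsq(A, field_nT, rcond=None)
    return field_nT - slope_intercept[0] * current_A  # drop current-correlated term only
```

This is why the compensation software accepts the AUV's electrical current log directly: the regressor must be sampled alongside the magnetometer data.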

  6. The MAP Autonomous Mission Control System

    NASA Technical Reports Server (NTRS)

    Breed, Julie; Coyle, Steven; Blahut, Kevin; Dent, Carolyn; Shendock, Robert; Rowe, Roger

    2000-01-01

    The Microwave Anisotropy Probe (MAP) mission is the second mission in NASA's Office of Space Science low-cost, Medium-class Explorers (MIDEX) program. The Explorers Program is designed to accomplish frequent, low-cost, high-quality space science investigations utilizing innovative, streamlined, efficient management, design, and operations approaches. The MAP spacecraft will produce an accurate full-sky map of the cosmic microwave background temperature fluctuations with high sensitivity and angular resolution. The MAP spacecraft is planned for launch in early 2001 and will be staffed for only single-shift operations. During the rest of the time the spacecraft must be operated autonomously, with personnel available only on an on-call basis. Four (4) innovations will work cooperatively to enable a significant reduction in operations costs for the MAP spacecraft: first, the use of a common ground system for Spacecraft Integration and Test (I&T) as well as Operations; second, the use of Finite State Modeling for intelligent autonomy; third, the integration of a graphical planning engine to drive the autonomous systems without an intermediate manual step; and fourth, the ability for distributed operations via Web and pager access.

  7. Model-based vision system for mobile robot position estimation

    NASA Astrophysics Data System (ADS)

    D'Orazio, Tiziana; Capozzo, Liborio; Ianigro, Massimo; Distante, Arcangelo

    1994-02-01

    The development of an autonomous mobile robot is a central problem in artificial intelligence and robotics. A vision system can be used to recognize naturally occurring landmarks located in known positions. The problem considered here is that of finding the location and orientation of a mobile robot using an image taken by a CCD camera located on the robot. The naturally occurring landmarks that we use are the corners of the room, extracted by an edge detection algorithm from a 2-D image of the indoor scene. The location and orientation of the vehicle are then calculated from perspective information about the landmarks in the scene of the room where the robot moves.
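As a minimal 2-D illustration of localizing from landmarks in known positions, two landmarks plus their measured absolute bearings fix the robot's position by intersecting the bearing rays. The paper's method uses perspective information from room corners, so this is only an analogous sketch, not its actual algorithm:

```python
import math

def localize_from_bearings(l1, b1, l2, b2):
    """Intersect two bearing rays cast back from known landmarks l1, l2.

    Each landmark is seen from the robot at absolute bearing b_i (radians),
    so the robot lies on the line through l_i with direction b_i + pi.
    """
    d1 = (math.cos(b1 + math.pi), math.sin(b1 + math.pi))  # ray back toward robot
    d2 = (math.cos(b2 + math.pi), math.sin(b2 + math.pi))
    # Solve l1 + t1*d1 = l2 + t2*d2 for t1 (Cramer's rule on the 2x2 system)
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = l2[0] - l1[0], l2[1] - l1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (l1[0] + t1 * d1[0], l1[1] + t1 * d1[1])
```

With more than two landmarks, a least-squares intersection of all rays would also estimate (and average out) the bearing noise.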

  8. Mobile robot on-board vision system

    SciTech Connect

    McClure, V.W.; Nai-Yung Chen.

    1993-06-15

    An automatic robot system is described comprising: an AGV that transports and transfers work pieces; a control computer on board the AGV; a process machine for working on work pieces; and a flexible robot arm ending in a gripper with two fingers, controllable by the control computer to engage a work piece, pick it up, and set it down and release it at a commanded location. Locating beacons mounted on the process machine mark the places where work pieces are picked up and set down. A vision system, including a camera fixed in the coordinate system of the gripper and attached to the robot arm near the gripper so that the space between the gripper fingers lies within its field of view, detects the locating beacons and provides the control computer with visual information about their location, from which the computer calculates the pick-up and set-down place on the process machine. That place is a nest, which also serves to hold a work piece in place while it is worked on. The nest contains nest beacons, detectable by the vision system, that inform the control computer whether or not a work piece is present in the nest.

  9. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual-reality flight procedure vision system is introduced in this paper. The digital flight map database is built on a Geographic Information System (GIS) and high-definition satellite remote-sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from remote-sensing photos and aerial photographs at various levels of detail. Flight navigation information is linked to the database according to the flight approach procedure, so the approach area view can be displayed dynamically as the designed procedure unfolds. The approach area images are rendered in two channels, one for left-eye images and the other for right-eye images; through a polarized stereoscopic projection system, pilots and aircrew obtain a vivid 3D view of the destination approach area. Used during preflight preparation, the system gives aircrew richer information about the approach area, improving the aviator's confidence before the mission and, accordingly, flight safety. The system is also useful for validating visual flight procedure designs and aids flight procedure design.

  10. Prototype Optical Correlator For Robotic Vision System

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1993-01-01

    Known and unknown images fed in electronically at high speed. Optical correlator and associated electronic circuitry developed for vision system of robotic vehicle. System recognizes features of landscape by optical correlation between input image of scene viewed by video camera on robot and stored reference image. Optical configuration is Vander Lugt correlator, in which Fourier transform of scene formed in coherent light and spatially modulated by hologram of reference image to obtain correlation.
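
    The correlation the optical system forms in coherent light can be sketched digitally: the Fourier transform of the scene is multiplied by the conjugate spectrum of the reference (the role the hologram plays), and the inverse transform's peak marks where the reference occurs. A minimal numpy version with hypothetical images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference pattern, and a scene containing it at a known offset.
ref = np.zeros((64, 64))
ref[:16, :16] = rng.random((16, 16))
scene = np.roll(ref, (20, 11), axis=(0, 1))   # pattern moved to row 20, col 11

# Fourier-plane correlation: F(scene) * conj(F(ref)), then inverse FFT.
corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(ref))).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
# peak == (20, 11): the correlation peak reports the pattern's location
```

    The Vander Lugt device performs exactly this multiplication optically, with the reference spectrum stored as a hologram, so the correlation is produced at the speed of light rather than by digital FFTs.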

  11. Zoom Vision System For Robotic Welding

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Hudyma, Russell M.

    1990-01-01

    Rugged zoom lens subsystem proposed for use in along-the-torch vision system of robotic welder. Enables system to adapt, via simple mechanical adjustments, to gas cups of different lengths, electrodes of different protrusions, and/or different distances between end of electrode and workpiece. Unnecessary to change optical components to accommodate changes in geometry. Easy to calibrate with respect to object in view. Provides variable focus and variable magnification.

  12. Vision enhanced navigation for unmanned systems

    NASA Astrophysics Data System (ADS)

    Wampler, Brandon Loy

    A vision-based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique by which a vehicle builds a map of an environment while concurrently keeping track of its location within that map, without a priori knowledge. The work in this thesis focuses on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a navigation solution beyond GPS alone is needed is presented first; this work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., who dub their algorithm MonoSLAM [1--4]. A new approach using the pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, unlike the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment; for the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, in its several forms, is implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed, the different forms of the filter are compared, and conclusions are drawn.
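
    The thesis uses OpenCV's pyramidal implementation (`cv2.calcOpticalFlowPyrLK`); the least-squares step at the heart of each pyramid level can be sketched in numpy for a single level. Here a synthetic smooth frame is shifted by a known amount and the Lucas-Kanade normal equations recover the displacement (the images and shift are invented for illustration):

```python
import numpy as np

# Synthetic smooth frame (Gaussian blob) and a copy shifted by (1, 2) pixels.
y, x = np.mgrid[0:64, 0:64]
frame1 = np.exp(-((y - 32) ** 2 + (x - 32) ** 2) / (2 * 8.0 ** 2))
true_d = (1, 2)                                # (rows, cols)
frame2 = np.roll(frame1, true_d, axis=(0, 1))

# Single-level Lucas-Kanade: solve sum(grad grad^T) d = -sum(It grad),
# the linearization of I2(p) = I1(p - d).
Iy, Ix = np.gradient(frame1)
It = frame2 - frame1
A = np.array([[np.sum(Iy * Iy), np.sum(Iy * Ix)],
              [np.sum(Ix * Iy), np.sum(Ix * Ix)]])
b = -np.array([np.sum(Iy * It), np.sum(Ix * It)])
d = np.linalg.solve(A, b)                      # estimated (row, col) displacement
```

    The pyramidal variant repeats this coarse-to-fine so that large motions stay within the small-displacement linearization at each level.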

  13. A Proposal of Autonomous Robotic Systems Educative Environment

    NASA Astrophysics Data System (ADS)

    Ierache, Jorge; Garcia-Martinez, Ramón; de Giusti, Armando

    This work presents our experiences implementing a laboratory of autonomous robotic systems for the training of beginner and advanced students in a Computer Engineering degree course, taking into account the specific technologies, robots, autonomous toys, and programming languages involved. Such laboratories provide a strategic opportunity for human resource development by involving aspects that range from specification elaboration and modeling to the software development, implementation, and testing of an autonomous robotic system.

  14. Intelligent data reduction for autonomous power systems

    NASA Technical Reports Server (NTRS)

    Floyd, Stephen A.

    1988-01-01

    Since 1984, Marshall Space Flight Center has been actively engaged in research and development concerning autonomous power systems. Much of the work in this domain has dealt with the development and application of knowledge-based, or expert, systems to perform tasks previously accomplished only through intensive human involvement. One such task is health status monitoring of electrical power systems, a manpower-intensive task that is vital to mission success. The Hubble Space Telescope testbed and its associated Nickel Cadmium Battery Expert System (NICBES) were designated as the system on which the initial proof of concept for intelligent power system monitoring would be established. The key function performed by an engineer engaged in system monitoring is to analyze the raw telemetry data and identify from the whole only those elements that can be considered significant. This function requires engineering expertise on the functionality of the system, its mode of operation, and the efficient and effective reading of telemetry data. Application of this expertise to extract the significant components of the data is referred to as data reduction. Such a function possesses characteristics that make it a prime candidate for the application of knowledge-based systems technologies. Such applications are investigated, and recommendations are offered for the development of intelligent data reduction systems.

  15. Bioinspired minimal machine multiaperture apposition vision system.

    PubMed

    Davis, John D; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2008-01-01

    Traditional machine vision systems have an inherent data bottleneck that arises because data collected in parallel must be serialized for transfer from the sensor to the processor. Furthermore, much of this data is not useful for information extraction. This project takes inspiration from the visual system of the house fly, Musca domestica, to reduce this bottleneck by employing early (up front) analog preprocessing to limit the data transfer. This is a first step toward an all analog, parallel vision system. While the current implementation has serial stages, nothing would prevent it from being fully parallel. A one-dimensional photo sensor array with analog pre-processing is used as the sole sensory input to a mobile robot. The robot's task is to chase a target car while avoiding obstacles in a constrained environment. Key advantages of this approach include passivity and the potential for very high effective "frame rates."
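
    The abstract does not specify the analog preprocessing circuit; fly-eye-inspired designs commonly use lateral inhibition, where each photoreceptor is excited by its own input and inhibited by its neighbors, so only local contrast survives. A hypothetical 1-D numpy sketch of the resulting data reduction:

```python
import numpy as np

# 1-D photoreceptor array viewing a scene with one luminance step at index 40.
scene = np.concatenate([np.full(40, 0.2), np.full(24, 0.9)])

# Lateral inhibition: center minus half of each neighbor, computed per cell.
# Flat regions cancel to zero; only cells straddling the edge respond.
response = scene[1:-1] - 0.5 * (scene[:-2] + scene[2:])
edges = np.flatnonzero(np.abs(response) > 0.1) + 1   # above-threshold cells only
# edges == [39, 40]: two samples transmitted instead of 64
```

    In the analog implementation this differencing happens in parallel at the sensor, so the serial link only ever carries the handful of edge samples.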

  16. Missileborne Artificial Vision System (MAVIS)

    NASA Technical Reports Server (NTRS)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-01-01

    Several years ago, when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high-density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system-level computational density. The Naval Air Warfare Center, China Lake, has developed a real-time hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard-size 6U VME card featuring 256 fixed-point RISC processors running at 20 MHz in a SIMD configuration, and has a companion board built to support a real-time VSB interface to an imaging seeker, an NTSC camera, and other COHO boards. The system is designed to have multiple SIMD machines, each performing a different corticomorphic function. System-level software has been developed that allows a high-level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real-time hardware system is designed to be shrunk into a volume compatible with air-launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  17. Visual Turing test for computer vision systems

    PubMed Central

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-01-01

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262
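
    The paper's query engine learns statistical constraints from annotated training images; the core selection rule can be sketched as "ask the unasked question whose answer, conditioned on the history so far, is closest to 50/50." A toy version with invented annotations (the real engine handles instantiated objects and far richer question types):

```python
# Toy annotated training set: each image is represented by its set of labels.
images = [
    {"person", "car"}, {"person", "tree"}, {"car", "tree"},
    {"person", "car", "tree"}, {"dog"}, {"person", "dog"},
]

def next_question(images, history):
    """Pick the unasked label whose P(present | history) is nearest 0.5."""
    consistent = [im for im in images
                  if all((label in im) == answer for label, answer in history)]
    asked = {label for label, _ in history}
    candidates = sorted({label for im in images for label in im} - asked)
    def predictability(label):
        p = sum(label in im for im in consistent) / len(consistent)
        return abs(p - 0.5)
    return min(candidates, key=predictability)

# After learning a person is present, "car" and "tree" are each 50/50 among the
# four consistent images, while "dog" (1 of 4) is too predictable to be useful.
q = next_question(images, [("person", True)])
```

    Maximally unpredictable answers are exactly the ones that carry information about the image rather than about the statistics of the world, which is what makes the test "only about vision."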

  18. Design of optimal correlation filters for hybrid vision systems

    NASA Technical Reports Server (NTRS)

    Rajan, Periasamy K.

    1990-01-01

    Research is underway at the NASA Johnson Space Center on the development of vision systems that recognize objects and estimate their position by processing their images. This is a crucial task in many space applications such as autonomous landing at Mars sites, satellite inspection and repair, and docking of the space shuttle and space station. Currently available algorithms and hardware are too slow to be suitable for these tasks. Electronic digital hardware exhibits superior performance in computing and control; however, it takes too much time to carry out important signal processing operations such as Fourier transformation of image data and calculation of the correlation between two images. Fortunately, because of their inherent parallelism, optical devices can carry out these operations very fast, although they are not well suited to computation- and control-type operations. Hence, investigations are being conducted on the development of hybrid vision systems that use optical techniques and digital processing jointly to carry out object recognition tasks in real time. Algorithms for the design of optimal filters for use in hybrid vision systems were developed. Specifically, an algorithm was developed for the design of real-valued frequency-plane correlation filters. Furthermore, research was conducted on designing correlation filters that are optimal in the sense of providing maximum signal-to-noise ratio when noise is present in the detectors in the correlation plane. Algorithms were developed for the design of different types of optimal filters: complex filters, real-valued filters, phase-only filters, ternary-valued filters, and coupled filters. This report presents some of these algorithms in detail along with their derivations.
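
    The report's derivations are not reproduced here, but one filter family it names, the phase-only filter, is easy to sketch: keep only the phase of the reference spectrum, conj(F_ref)/|F_ref|, which whitens the amplitude and yields a sharp correlation peak. A numpy illustration with a hypothetical scene:

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference object, and a scene containing it at a known offset.
ref = np.zeros((64, 64))
ref[:16, :16] = rng.random((16, 16))
scene = np.roll(ref, (9, 23), axis=(0, 1))

# Phase-only filter: discard amplitude, keep the conjugate phase of F_ref.
F_ref = np.fft.fft2(ref)
pof = np.conj(F_ref) / (np.abs(F_ref) + 1e-12)   # eps guards zero bins

corr = np.fft.ifft2(np.fft.fft2(scene) * pof).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
# peak == (9, 23)
```

    Compared with the classical matched filter, the phase-only filter trades some noise optimality for a much narrower, higher peak, which is why the report treats peak sharpness and detector-noise SNR as separate design criteria.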

  19. Multitask neural network for vision machine systems

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1991-02-01

    A multi-task dynamic neural network that can be programmed for storing, processing, and encoding spatio-temporal visual information is presented in this paper. This dynamic neural network, called the PN-network, is comprised of numerous densely interconnected neural subpopulations residing in one of two coupled sublayers, P or N. The subpopulations in the P-sublayer transmit an excitatory, or positive, influence onto all interconnected units, whereas the subpopulations in the N-sublayer transmit an inhibitory, or negative, influence. The dynamical activity generated by each subpopulation is given by a nonlinear first-order system. By varying the coupling strength between the subpopulations it is possible to generate three distinct modes of dynamical behavior useful for performing vision-related tasks. It is postulated that the PN-network can function as a basic programmable processor for novel vision machine systems.
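
    The abstract gives only "nonlinear first-order system," so the sketch below uses a generic Wilson-Cowan-style excitatory/inhibitory pair with invented coupling strengths, not the paper's actual PN-network equations. With these parameters the pair settles to a fixed point; other couplings would give the oscillatory or switching modes the abstract alludes to.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Assumed coupling strengths and drive (not from the paper).
w_pp, w_pn, w_np, w_nn = 1.5, 1.0, 1.0, 0.5
drive, tau, dt = 0.5, 10.0, 0.1

# Euler-integrate the coupled first-order activity equations:
#   tau * dp/dt = -p + S(w_pp*p - w_pn*n + drive)
#   tau * dn/dt = -n + S(w_np*p - w_nn*n)
p, n = 0.1, 0.1
trace = []
for _ in range(5000):
    dp = (-p + sigmoid(w_pp * p - w_pn * n + drive)) / tau
    dn = (-n + sigmoid(w_np * p - w_nn * n)) / tau
    p, n = p + dt * dp, n + dt * dn
    trace.append((p, n))
```

    The sigmoid keeps each population's activity bounded in (0, 1), and the sign structure (P excites, N inhibits) is what produces the distinct dynamical regimes as coupling varies.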

  20. Applications of Augmented Vision Head-Mounted Systems in Vision Rehabilitation

    PubMed Central

    Peli, Eli; Luo, Gang; Bowers, Alex; Rensing, Noa

    2007-01-01

    Vision loss typically affects either the wide peripheral vision (important for mobility), or central vision (important for seeing details). Traditional optical visual aids usually recover the lost visual function, but at a high cost for the remaining visual function. We have developed a novel concept of vision-multiplexing using augmented vision head-mounted display systems to address vision loss. Two applications are discussed in this paper. In the first, minified edge images from a head-mounted video camera are presented on a see-through display providing visual field expansion for people with peripheral vision loss, while still enabling the full resolution of the residual central vision to be maintained. The concept has been applied in daytime and nighttime devices. A series of studies suggested that the system could help with visual search, obstacle avoidance, and nighttime mobility. Subjects were positive in their ratings of device cosmetics and ergonomics. The second application is for people with central vision loss. Using an on-axis aligned camera and display system, central visibility is enhanced with 1:1 scale edge images, while still enabling the wide field of the unimpaired peripheral vision to be maintained. The registration error of the system was found to be low in laboratory testing. PMID:18172511

  1. Stereoscopic Vision System For Robotic Vehicle

    NASA Technical Reports Server (NTRS)

    Matthies, Larry H.; Anderson, Charles H.

    1993-01-01

    Distances estimated from images by cross-correlation. Two-camera stereoscopic vision system with onboard processing of image data developed for use in guiding robotic vehicle semiautonomously. Combination of semiautonomous guidance and teleoperation useful in remote and/or hazardous operations, including clean-up of toxic wastes, exploration of dangerous terrain on Earth and other planets, and delivery of materials in factories where unexpected hazards or obstacles can arise.
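
    The onboard algorithm is not detailed in this summary; the basic operation it names, distance from cross-correlation, can be sketched in 1-D: slide one scanline against the other, take the shift with the best match (here minimum sum of absolute differences) as the disparity, and convert to range with an assumed focal length and baseline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Matching scanlines from the two cameras; the right view sees the scene
# shifted by the (unknown) disparity we want to recover.
left = rng.random(128)
true_disp = 7
right = left[true_disp:]                      # right[x] == left[x + true_disp]

def best_disparity(left, right, max_disp=20):
    n = len(right) - max_disp                 # common valid region for all shifts
    sads = [np.sum(np.abs(left[d:d + n] - right[:n]))
            for d in range(max_disp + 1)]
    return int(np.argmin(sads))

d = best_disparity(left, right)

# Range from disparity, assuming focal length f (pixels) and baseline B (m).
f_pixels, baseline_m = 500.0, 0.12
depth = f_pixels * baseline_m / d
```

    Real block matchers correlate 2-D windows along epipolar lines and interpolate for sub-pixel disparity, but the depth relation Z = fB/d is the same.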

  2. Autonomous Robot System for Sensor Characterization

    SciTech Connect

    David Bruemmer; Douglas Few; Frank Carney; Miles Walton; Heather Hunting; Ron Lujan

    2004-03-01

    This paper discusses an innovative application of new Markov localization techniques that combats the problem of odometry drift, allowing a novel control architecture developed at the Idaho National Engineering and Environmental Laboratory (INEEL) to be utilized within a sensor characterization facility developed at the Remote Sensing Laboratory (RSL) in Nevada. The new robotic capability provided by the INEEL will allow RSL to test and evaluate a wide variety of sensors, including radiation detection systems, machine vision systems, and sensors that can detect and track heat sources (e.g., human bodies, machines, chemical plumes). By accurately moving a target at varying speeds along designated paths, the robotic solution allows the detection abilities of a wide variety of sensors to be recorded and analyzed.
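
    A minimal 1-D histogram version of Markov localization shows the idea: maintain a probability over map cells, shift it on motion, and multiply by a sensor likelihood on each observation, so landmark sightings correct what odometry alone would let drift. The map and sensor model below are invented for illustration.

```python
import numpy as np

world = np.zeros(10); world[[1, 5, 6]] = 1     # hypothetical landmark map
p_hit_lm, p_hit_empty = 0.9, 0.1               # assumed sensor model

def sense(belief, saw_landmark):
    """Bayes update: weight each cell by how well it explains the reading."""
    like = np.where(world == 1, p_hit_lm, p_hit_empty)
    if not saw_landmark:
        like = 1.0 - like
    belief = belief * like
    return belief / belief.sum()

def move(belief, step):
    """Motion update in a cyclic world (exact odometry for simplicity)."""
    return np.roll(belief, step)

# Robot truly starts at cell 0 and drives right, sensing at each cell.
belief = np.full(10, 0.1)                      # start fully uncertain
belief = sense(belief, False)   # cell 0: no landmark
belief = move(belief, 1)
belief = sense(belief, True)    # cell 1: landmark
belief = move(belief, 1)
belief = sense(belief, False)   # cell 2: no landmark
# argmax(belief) == 2: the observation sequence singles out the true cell
```

    In practice the motion update also convolves the belief with an odometry-noise kernel; that spreading is exactly the drift the sensing step keeps pulling back in.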

  3. The nature of the autonomic dysfunction in multiple system atrophy

    NASA Technical Reports Server (NTRS)

    Parikh, Samir M.; Diedrich, Andre; Biaggioni, Italo; Robertson, David

    2002-01-01

    The concept that multiple system atrophy (MSA, Shy-Drager syndrome) is a disorder of the autonomic nervous system is several decades old. While there has been renewed interest in the movement disorder associated with MSA, two recent consensus statements confirm the centrality of the autonomic disorder to the diagnosis. Here, we reexamine the autonomic pathophysiology in MSA. Whereas MSA is often thought of as "autonomic failure", new evidence indicates substantial persistence of functioning sympathetic and parasympathetic nerves even in clinically advanced disease. These findings help explain some of the previously poorly understood features of MSA. Recognition that MSA entails persistent, constitutive autonomic tone requires a significant revision of our concepts of its diagnosis and therapy. We will review recent evidence bearing on autonomic tone in MSA and discuss their therapeutic implications, particularly in terms of the possible development of a bionic baroreflex for better control of blood pressure.

  4. Autonomous Formations of Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Dhali, Sanjana; Joshi, Suresh M.

    2013-01-01

    Autonomous formation control of multi-agent dynamic systems has a number of applications that include ground-based and aerial robots and satellite formations. For air vehicles, formation flight ("flocking") has the potential to significantly increase airspace utilization as well as fuel efficiency. This presentation addresses two main problems in multi-agent formations: optimal role assignment to minimize the total cost (e.g., combined distance traveled by all agents); and maintaining formation geometry during flock motion. The Kuhn-Munkres ("Hungarian") algorithm is used for optimal assignment, and a consensus-based leader-follower control architecture is used to maintain formation shape despite the leader's independent movements. The methods are demonstrated by animated simulations.
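
    The role-assignment problem can be illustrated with a brute-force search over permutations for a tiny flock (positions invented). The Kuhn-Munkres algorithm the presentation uses solves the same minimization in polynomial time; scipy's `linear_sum_assignment` is a practical implementation for larger formations.

```python
from itertools import permutations
import math

# Agent and formation-slot positions (hypothetical coordinates, metres).
agents = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
slots = [(1.0, 0.0), (0.0, 1.0), (4.0, 1.0)]

def total_cost(assignment):
    """Total distance travelled when agent i flies to slots[assignment[i]]."""
    return sum(math.dist(agents[i], slots[j]) for i, j in enumerate(assignment))

# Exhaustive search: optimal for tiny n; Kuhn-Munkres scales to large flocks.
best = min(permutations(range(len(slots))), key=total_cost)
```

    For these positions the optimum sends agent 0 to slot 0, agent 1 to slot 2, and agent 2 to slot 1, for a combined distance of 4 m; brute force is O(n!) while Kuhn-Munkres is O(n^3), which is why the latter matters beyond a handful of agents.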

  5. Autonomous underwater systems for survey operations

    NASA Astrophysics Data System (ADS)

    Doelling, Norman; Gowell, Elizabeth T.

    1987-06-01

    An autonomous underwater vehicle (AUV) that can be released at sea, find a harbor, perform a task, and return to a designated location is highly desirable. The military applications of such a system are obvious. Mine clearing and mine laying come to mind. Other applications could include oceanographic surveys, mineral exploration, fish population studies, and underwater equipment repair. In 1987, the Naval Surface Weapons Center (NSWC) posed the development of such a vehicle as a research problem, and asked the NOAA Office of Sea Grant to recommend several Sea Grant Institutions with expertise in AUVs to investigate. MIT Sea Grant was invited to submit a proposal and was one of three Sea Grant Programs awarded a one-year grant by NSWC. The study developed a vehicle concept and outlined a plan of research necessary for its development. The findings of the MIT research team are summarized here.

  6. Progress in building a cognitive vision system

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Yue, Hong

    2016-05-01

    We are building a cognitive vision system for mobile robots that works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion to create a local dynamic spatial model. These local 3D models are composed to create an overall 3D model of the robot and its environment. This approach turns the computer vision problem into a search problem whose goal is the acquisition of sufficient spatial understanding for the robot to succeed at its tasks. The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals. For example, if the goal is to navigate through a room, the model needs to contain any obstacles that would be encountered, giving their approximate positions and sizes. Other information does not need to be rendered into the virtual world, so this approach trades model accuracy for speed.

  7. Mechanical deployment system on aries an autonomous mobile robot

    SciTech Connect

    Rocheleau, D.N.

    1995-12-01

    ARIES (Autonomous Robotic Inspection Experimental System) is under development for the Department of Energy (DOE) to survey and inspect drums containing low-level radioactive waste stored in warehouses at DOE facilities. This paper focuses on the mechanical deployment system, referred to as the camera positioning system (CPS), used in the project. The CPS positions four identical but separate camera packages consisting of vision cameras and other required sensors such as bar-code readers and light stripe projectors. The CPS is attached to the top of a mobile robot and consists of two mechanisms. The first is a lift mechanism composed of five interlocking rail elements that starts from a retracted position and extends upward to simultaneously position three separate camera packages to inspect the top three drums of a column of four. The second is a parallelogram (special-case Grashof) four-bar mechanism used for positioning a camera package on drums on the floor. Both mechanisms are the subject of this paper, with the lift mechanism discussed in detail.

  8. Networks for Autonomous Formation Flying Satellite Systems

    NASA Technical Reports Server (NTRS)

    Knoblock, Eric J.; Konangi, Vijay K.; Wallett, Thomas M.; Bhasin, Kul B.

    2001-01-01

    The performance of three communications networks to support autonomous multi-spacecraft formation flying systems is presented. All systems are comprised of a ten-satellite formation arranged in a star topology, with one of the satellites designated as the central or "mother ship." All data is routed through the mother ship to the terrestrial network. The first system uses a TCP/IP-over-ATM protocol architecture within the formation; the second system uses the IEEE 802.11 protocol architecture within the formation; and the last system uses both of the previous architectures with a constellation of geosynchronous satellites serving as an intermediate point of contact between the formation and the terrestrial network. The simulations consist of file transfers using either the File Transfer Protocol (FTP) or the Simple Automatic File Exchange (SAFE) protocol. The results compare the IP queuing delay and IP processing delay at the mother ship, as well as application-level round-trip time, for all systems. In all cases, using IEEE 802.11 within the formation yields less delay. Also, the throughput exhibited by SAFE is better than that of FTP.

  9. Contingency Software in Autonomous Systems: Technical Level Briefing

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.; Patterson-Hines, Ann

    2006-01-01

    Contingency management is essential to the robust operation of complex systems such as spacecraft and Unpiloted Aerial Vehicles (UAVs). Automatic contingency handling allows a faster response to unsafe scenarios with reduced human intervention on low-cost and extended missions. Results, applied to the Autonomous Rotorcraft Project and Mars Science Lab, pave the way to more resilient autonomous systems.

  10. Autonomous Control of Space Reactor Systems

    SciTech Connect

    Belle R. Upadhyaya; K. Zhao; S.R.P. Perillo; Xiaojia Xu; M.G. Na

    2007-11-30

    Autonomous and semi-autonomous control is a key element of space reactor design in order to meet the mission requirements of safety, reliability, survivability, and life expectancy. In terrestrial nuclear power plants, human operators are available to perform the intelligent control functions that are necessary for both normal and abnormal operational conditions.

  11. Autonomous photovoltaic-diesel power system design

    NASA Astrophysics Data System (ADS)

    Calloway, T. M.

    A methodology for designing an autonomous photovoltaic power system in conjunction with a diesel-fueled electric generator and a battery has been developed. Any photovoltaic array energy not utilized immediately by the load is stored in the battery bank. The diesel generator set is operated periodically at 14-day intervals to ensure its availability and occasionally as needed during winter to supplement combined output from the array and battery. It is hypothesized that logistical support is infrequent, so the hybrid photovoltaic-diesel power system is designed to consume only 10% as much fuel as would a diesel-only system. This constraint is used to generate a set of possible combinations of array area and battery energy storage capacity. For each combination, a battery-life model predicts the time interval between battery replacements by deducting the fraction of total life consumed each day. An economic model then produces life-cycle system cost. Repeating this process for different combinations of array area and battery capacity identifies the minimum-cost system design.
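
    The design loop described above can be sketched end to end: simulate a year of daily energy balance for each candidate array-area/battery-capacity pair, keep only combinations whose diesel consumption meets the 10% fuel constraint, and pick the cheapest. Every number below (load, efficiency, insolation curve, fuel rate, prices) is invented for illustration; the report's battery-life and life-cycle cost models are richer.

```python
import math

LOAD_KWH = 10.0          # daily load (assumed)
EFF = 0.15               # array efficiency (assumed)
FUEL_PER_KWH = 0.3       # litres of diesel per kWh generated (assumed)

def insolation(day):
    """kWh/m^2/day, smooth seasonal curve peaking near day 172 (assumed)."""
    return 5.0 + 2.5 * math.cos(2 * math.pi * (day - 172) / 365)

def yearly_fuel(area_m2, batt_kwh):
    """Simulate daily energy balance; diesel covers any battery shortfall."""
    soc, fuel = batt_kwh, 0.0
    for day in range(365):
        soc += area_m2 * EFF * insolation(day) - LOAD_KWH
        if soc < 0:                        # deficit met by the generator
            fuel += -soc * FUEL_PER_KWH
            soc = 0.0
        soc = min(soc, batt_kwh)           # surplus beyond capacity is lost
    return fuel

diesel_only = 365 * LOAD_KWH * FUEL_PER_KWH
cost = lambda a, b: 400.0 * a + 150.0 * b  # $/m^2 and $/kWh (assumed)

designs = [(a, b) for a in (10, 15, 20, 25, 30) for b in (5, 10, 20, 40)
           if yearly_fuel(a, b) <= 0.1 * diesel_only]
best = min(designs, key=lambda ab: cost(*ab))
```

    The 10%-fuel constraint carves out the feasible set of (area, capacity) pairs, and the economic model then selects among them, which mirrors the two-stage structure the abstract describes.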

  12. Digital Autonomous Terminal Access Communication (DATAC) system

    NASA Technical Reports Server (NTRS)

    Novacki, Stanley M., III

    1987-01-01

    In order to accommodate the increasing number of computerized subsystems aboard today's more fuel-efficient aircraft, the Boeing Co. has developed the DATAC (Digital Autonomous Terminal Access Communication) bus to minimize the need for point-to-point wiring to interconnect these various systems, thereby reducing total aircraft weight and maintaining an economical flight configuration. The DATAC bus is essentially a local area network providing interconnections for any of the flight management and control systems aboard the aircraft. The task of developing a Bus Monitor Unit was broken down into four subtasks: (1) providing a hardware interface between the DATAC bus and the Z8000-based microcomputer system to be used as the bus monitor; (2) establishing a communication link between the Z8000 system and a CP/M-based computer system; (3) generating data reduction and display software to output data to the console device; and (4) developing a DATAC Terminal Simulator to facilitate testing of the hardware and software that transfer data between the DATAC bus and the operator's console in a near real-time environment. These tasks are briefly discussed.

  13. Systems Architecture for Fully Autonomous Space Missions

    NASA Technical Reports Server (NTRS)

    Esper, Jamie; Schnurr, R.; VanSteenberg, M.; Brumfield, Mark (Technical Monitor)

    2002-01-01

    The NASA Goddard Space Flight Center is working to develop a revolutionary new system architecture concept in support of fully autonomous missions. As part of GSFC's contribution to the New Millennium Program (NMP) Space Technology 7 Autonomy and on-Board Processing (ST7-A) Concept Definition Study, the system incorporates the latest commercial Internet and software development ideas and extends them into NASA ground and space segment architectures. The unique challenges facing the exploration of remote and inaccessible locales, and the need to incorporate the corresponding autonomy technologies at reasonable cost, necessitate the re-thinking of traditional mission architectures. A measure of the resiliency of this architecture in its application to a broad range of future autonomy missions will depend on its effectiveness in leveraging commercial tools developed for the personal computer and Internet markets. Specialized test stations and supporting software become things of the past as spacecraft take advantage of the extensive tools and research investments of billion-dollar commercial ventures. The projected improvements of the Internet and supporting infrastructure go hand-in-hand with market pressures that provide continuity in research. By taking advantage of consumer-oriented methods and processes, space-flight missions will continue to leverage investments tailored to provide better services at reduced cost. The application of ground and space segment architectures each based on Local Area Networks (LAN), the use of personal computer-based operating systems, and the execution of activities and operations through a Wide Area Network (Internet) enable a revolution in spacecraft mission formulation, implementation, and flight operations. Hardware and software design, development, integration, test, and flight operations are all tied closely to a common thread that enables smooth transitions between program phases. The application of commercial software

  14. Low-power smart vision system-on-a-chip design for ultrafast machine vision applications

    NASA Astrophysics Data System (ADS)

    Fang, Wai-Chi

    1998-03-01

    In this paper, an ultra-fast smart vision system-on-a-chip design is proposed to provide effective solutions for real-time machine vision applications by taking advantage of recent advances in integrated sensing/processing designs, electronic neural networks, advanced microprocessors, and sub-micron VLSI technology. The smart vision system mimics what is inherent in biological vision systems. It is programmable to perform vision processing at all levels, such as image acquisition, image fusion, image analysis, and scene interpretation. A system-on-a-chip implementation of this smart vision system is shown to be feasible by integrating the whole system into a 3-cm by 3-cm chip design in a 0.18-micrometer CMOS technology. The system achieves one tera-operation-per-second computing power, a two-order-of-magnitude increase over state-of-the-art microcomputer and DSP chips. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation. This highly integrated smart vision system can be used for various NASA scientific missions and other military, industrial, or commercial vision applications.

  15. Autonomous power expert system advanced development

    NASA Technical Reports Server (NTRS)

    Quinn, Todd M.; Walters, Jerry L.

    1991-01-01

    The autonomous power expert (APEX) system is being developed at Lewis Research Center to function as a fault diagnosis advisor for a space power distribution test bed. APEX is a rule-based system capable of detecting faults and isolating their probable causes. APEX also has a justification facility to provide natural language explanations about conclusions reached during fault isolation. To help maintain the health of the power distribution system, additional capabilities were added to APEX. These capabilities allow detection and isolation of incipient faults and enable the expert system to recommend actions/procedures to correct the suspected fault conditions. New capabilities for incipient fault detection consist of storage and analysis of historical data and new user interface displays. After the cause of a fault is determined, appropriate recommended actions are selected by rule-based inferencing, which provides corrective/extended test procedures. Color graphics displays and improved mouse-selectable menus were also added to provide a friendlier user interface. A discussion of APEX in general and a more detailed description of the incipient fault detection, recommended actions, and user interface developments during the last year are presented.
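
    The kind of rule-based fault isolation and action recommendation APEX performs can be sketched as a forward-chaining match over symptom sets. The rules, symptom names, and recommended actions below are hypothetical illustrations, not taken from APEX itself:

```python
# Minimal rule-based fault-diagnosis sketch (hypothetical rules).
# Each rule maps a set of observed symptoms to a suspected cause
# and a recommended corrective/extended test procedure.
RULES = [
    ({"bus_voltage_low", "load_current_high"},
     "probable short circuit on load branch",
     "open affected switchgear and run extended insulation test"),
    ({"bus_voltage_low", "load_current_normal"},
     "probable source degradation",
     "switch to backup power source"),
]

def diagnose(symptoms):
    """Return (cause, action) pairs for every rule whose conditions
    are all present in the observed symptom set."""
    findings = []
    for conditions, cause, action in RULES:
        if conditions <= symptoms:  # all rule conditions observed
            findings.append((cause, action))
    return findings

print(diagnose({"bus_voltage_low", "load_current_high"}))
```

    A real expert system would add certainty factors and the justification trace APEX uses for its natural language explanations; this sketch shows only the matching step.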

  16. Closed-loop autonomous docking system

    NASA Technical Reports Server (NTRS)

    Dabney, Richard W. (Inventor); Howard, Richard T. (Inventor)

    1992-01-01

    An autonomous docking system is provided which produces commands for the steering and propulsion system of a chase vehicle used in the docking of that chase vehicle with a target vehicle. The docking system comprises a passive optical target affixed to the target vehicle and comprising three reflective areas including a central area mounted on a short post, and tracking sensor and process controller apparatus carried by the chase vehicle. The latter apparatus comprises a laser diode array for illuminating the target so as to cause light to be reflected from the reflective areas of the target; a sensor for detecting the light reflected from the target and for producing an electrical output signal in accordance with an image of the reflected light; a signal processor for processing the electrical output signal and for producing, based thereon, output signals relating to the relative range, roll, pitch, yaw, azimuth, and elevation of the chase and target vehicles; and a docking process controller, responsive to the output signals produced by the signal processor, for producing command signals for controlling the steering and propulsion system of the chase vehicle.
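
    The geometry behind such a signal processor can be sketched with a pinhole-camera model: bearing follows from a reflector spot's image coordinates, and range from the imaged spacing of reflectors whose physical separation is known. The focal length, spot positions, and reflector baseline below are illustrative assumptions, not values from the patented system:

```python
import math

def bearing_from_centroid(u, v, focal_px):
    """Azimuth and elevation (radians) of a reflector spot at image
    coordinates (u, v), measured from the optical axis of an ideal
    pinhole camera with focal length focal_px (in pixels)."""
    azimuth = math.atan2(u, focal_px)
    elevation = math.atan2(v, focal_px)
    return azimuth, elevation

def range_from_spacing(baseline_m, spacing_px, focal_px):
    """Range to a target whose outer reflectors are baseline_m apart
    and appear spacing_px apart in the image (small-angle model):
    Z = B * f / s."""
    return baseline_m * focal_px / spacing_px

az, el = bearing_from_centroid(100.0, 0.0, 1000.0)  # spot right of axis
rng = range_from_spacing(0.5, 50.0, 1000.0)         # 0.5 m target baseline
```

    Roll, pitch, and yaw would come from the apparent displacement of the post-mounted central reflector relative to the outer pair, which this sketch omits.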

  17. Unified formalism for higher order non-autonomous dynamical systems

    NASA Astrophysics Data System (ADS)

    Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso

    2012-03-01

    This work is devoted to giving a geometric framework for describing higher order non-autonomous mechanical systems. The starting point is to extend the Lagrangian-Hamiltonian unified formalism of Skinner and Rusk for these kinds of systems, generalizing previous developments for higher order autonomous mechanical systems and first-order non-autonomous mechanical systems. Then, we use this unified formulation to derive the standard Lagrangian and Hamiltonian formalisms, including the Legendre-Ostrogradsky map and the Euler-Lagrange and the Hamilton equations, both for regular and singular systems. As applications of our model, two examples of regular and singular physical systems are studied.
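
    For a kth-order Lagrangian L(t, q, q', ..., q^(k)), the equations of motion that such a unified formalism recovers can be written in the standard higher-order (Euler-Lagrange/Ostrogradsky) form; this is the textbook statement, not the paper's intrinsic geometric formulation:

```latex
\sum_{j=0}^{k} (-1)^{j} \frac{d^{j}}{dt^{j}}
  \left( \frac{\partial L}{\partial q^{(j)\,i}} \right) = 0 ,
  \qquad i = 1, \dots, n .
```

    For k = 1 this reduces to the familiar first-order Euler-Lagrange equations, and the Legendre-Ostrogradsky map supplies the momenta conjugate to each derivative order on the Hamiltonian side.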

  18. Evaluation of stereo vision obstacle detection algorithms for off-road autonomous navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry

    2005-01-01

    Reliable detection of non-traversable hazards is a key requirement for off-road autonomous navigation. A detailed description of each obstacle detection algorithm and its performance on the surveyed obstacle course is presented in this paper.
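
    Stereo obstacle detection of this kind rests on triangulating range from disparity and then testing ranged surface points against a ground model. A minimal sketch, with an illustrative pinhole model and height threshold (not the algorithms evaluated in the paper):

```python
def stereo_range(disparity_px, focal_px, baseline_m):
    """Range from stereo disparity under a pinhole model: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def is_obstacle(surface_height_m, ground_height_m, threshold_m=0.3):
    """Flag a ranged point as a hazard when it rises above the local
    ground estimate by more than a height threshold (illustrative)."""
    return (surface_height_m - ground_height_m) > threshold_m

z = stereo_range(10.0, 500.0, 0.3)  # 10 px disparity -> 15 m range
```
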

  19. Vision system testing for teleoperated vehicles

    SciTech Connect

    McGovern, D.E.; Miller, D.P.

    1989-03-01

    This study compared three forward-looking vision systems: a fixed-mount black-and-white video camera system, a fixed-mount color video camera system, and a steering-slaved color video camera system. Subjects were exposed to a variety of objects and obstacles over a marked, off-road course while either viewing videotape or performing actual teleoperation of the vehicle. The subjects were required to detect and identify those objects which might require action while driving, such as slowing down or maneuvering around the object. Two modes of driver interaction were tested: (1) actual remote driving, and (2) noninteractive video simulation using the same video systems as in the driving task. Remote driving has the advantage of realism, but is subject to variability in driving strategies and can be hazardous to equipment. Video simulation provides a more controlled environment in which to compare vision-system parameters, but at the expense of some realism. Results demonstrated that relative differences in performance among the vision systems are generally consistent across the two test modes of remote driving and simulation. A detection-range metric was found to be sensitive enough to demonstrate performance differences when viewing large objects. It was also found that subjects typically overestimated distances and, when in error judging clearance, tended to overestimate the gap between the objects. 11 refs., 26 figs., 4 tabs.

  20. Active State Model for Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Park, Han; Chien, Steve; Zak, Michail; James, Mark; Mackey, Ryan; Fisher, Forest

    2003-01-01

    The concept of the active state model (ASM) is an architecture for the development of advanced integrated fault-detection-and-isolation (FDI) systems for robotic land vehicles, pilotless aircraft, exploratory spacecraft, or other complex engineering systems that will be capable of autonomous operation. An FDI system based on the ASM concept would not only provide traditional diagnostic capabilities, but would also integrate the FDI system under a unified framework and provide a mechanism for sharing of information between FDI subsystems to fully assess the overall health of the system. The ASM concept begins with definitions borrowed from psychology, wherein a system is regarded as active when it possesses self-image, self-awareness, and an ability to make decisions by itself, such that it is able to perform purposeful motions and other transitions with some degree of autonomy from the environment. For an engineering system, self-image would manifest itself as the ability to determine nominal values of sensor data by use of a mathematical model of itself, and self-awareness would manifest itself as the ability to relate sensor data to their nominal values. The ASM for such a system may start with the closed-loop control dynamics that describe the evolution of state variables. As soon as this model was supplemented with nominal values of sensor data, it would possess self-image. The ability to process the current sensor data and compare them with the nominal values would represent self-awareness. On the basis of self-image and self-awareness, the ASM provides the capability for self-identification, detection of abnormalities, and self-diagnosis.
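
    The self-image/self-awareness pairing described above amounts to comparing model-predicted nominal sensor values against current measurements and flagging out-of-tolerance residuals. A minimal sketch; the channel names, values, and tolerances are illustrative, not from the ASM:

```python
def detect_abnormality(nominal, measured, tolerance):
    """Compare measured sensor data against model-predicted nominal
    values ('self-image') and flag channels whose residual exceeds
    its tolerance ('self-awareness'). Returns {channel: residual}."""
    faults = {}
    for channel, nom in nominal.items():
        residual = measured[channel] - nom
        if abs(residual) > tolerance[channel]:
            faults[channel] = residual
    return faults

faults = detect_abnormality(
    nominal={"wheel_current_A": 1.2, "bus_voltage_V": 28.0},
    measured={"wheel_current_A": 2.9, "bus_voltage_V": 28.1},
    tolerance={"wheel_current_A": 0.5, "bus_voltage_V": 0.5},
)
# only the wheel-current channel exceeds its tolerance
```
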

  1. Lightweight autonomous chemical identification system (LACIS)

    NASA Astrophysics Data System (ADS)

    Lozos, George; Lin, Hai; Burch, Timothy

    2012-06-01

    Smiths Detection and Intelligent Optical Systems have developed prototypes for the Lightweight Autonomous Chemical Identification System (LACIS) for the US Department of Homeland Security. LACIS is to be a handheld detection system for Chemical Warfare Agents (CWAs) and Toxic Industrial Chemicals (TICs). LACIS is designed to have a low limit of detection and rapid response time for use by emergency responders, and could allow determination of areas having dangerous concentration levels and of whether protective garments will be required. Procedures for protecting responders in hazardous materials incidents require the use of protective equipment until the hazard can be assessed; such accurate analysis can accelerate operations and increase effectiveness. LACIS is to be an improved point detector employing novel CBRNE detection modalities that include a military-proven ruggedized ion mobility spectrometer (IMS) with an array of electro-resistive sensors to extend the range of chemical threats detected in a single device. It uses a novel sensor data fusion and threat classification architecture to interpret the independent sensor responses and provide robust detection at low levels in complex backgrounds with minimal false alarms. The performance of LACIS prototypes has been characterized in independent third-party laboratory tests at the Battelle Memorial Institute (BMI, Columbus, OH) and in indoor and outdoor field tests at the Nevada National Security Site (NNSS). LACIS prototypes will be entering operational assessment by key government emergency response groups to determine their capabilities versus requirements.
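
    One way independent sensor responses can be fused to suppress false alarms is to require agreement between channels before declaring a detection. The voting scheme, thresholds, and channel names below are purely illustrative; LACIS's actual classifier architecture is not described at this level in the record:

```python
def fused_alarm(ims_score, resistive_scores, ims_threshold=0.8,
                resistive_threshold=0.6, min_agreeing=2):
    """Declare a detection only when the IMS channel fires AND at
    least min_agreeing electro-resistive channels agree. Requiring
    agreement between independent modalities trades a little
    sensitivity for a much lower false-alarm rate."""
    agreeing = sum(s >= resistive_threshold for s in resistive_scores)
    return ims_score >= ims_threshold and agreeing >= min_agreeing
```
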

  2. APDS: The Autonomous Pathogen Detection System

    SciTech Connect

    Hindson, B; Makarewicz, A; Setlur, U; Henderer, B; McBride, M; Dzenitis, J

    2004-10-04

    We have developed and tested a fully autonomous pathogen detection system (APDS) capable of continuously monitoring the environment for airborne biological threat agents. The system was developed to provide early warning to civilians in the event of a bioterrorism incident and can be used at high profile events for short-term, intensive monitoring or in major public buildings or transportation nodes for long-term monitoring. The APDS is completely automated, offering continuous aerosol sampling, in-line sample preparation fluidics, multiplexed detection and identification immunoassays, and nucleic-acid based polymerase chain reaction (PCR) amplification and detection. Highly multiplexed antibody-based and duplex nucleic acid-based assays are combined to reduce false positives to a very low level, lower reagent costs, and significantly expand the detection capabilities of this biosensor. This article provides an overview of the current design and operation of the APDS. Certain sub-components of the APDS are described in detail, including the aerosol collector, the automated sample preparation module that performs multiplexed immunoassays with confirmatory PCR, and the data monitoring and communications system. Data obtained from an APDS that operated continuously for seven days in a major U.S. transportation hub are reported.

  3. APDS: the autonomous pathogen detection system.

    PubMed

    Hindson, Benjamin J; Makarewicz, Anthony J; Setlur, Ujwal S; Henderer, Bruce D; McBride, Mary T; Dzenitis, John M

    2005-04-15

    We have developed and tested a fully autonomous pathogen detection system (APDS) capable of continuously monitoring the environment for airborne biological threat agents. The system was developed to provide early warning to civilians in the event of a bioterrorism incident and can be used at high profile events for short-term, intensive monitoring or in major public buildings or transportation nodes for long-term monitoring. The APDS is completely automated, offering continuous aerosol sampling, in-line sample preparation fluidics, multiplexed detection and identification immunoassays, and nucleic acid-based polymerase chain reaction (PCR) amplification and detection. Highly multiplexed antibody-based and duplex nucleic acid-based assays are combined to reduce false positives to a very low level, lower reagent costs, and significantly expand the detection capabilities of this biosensor. This article provides an overview of the current design and operation of the APDS. Certain sub-components of the APDS are described in detail, including the aerosol collector, the automated sample preparation module that performs multiplexed immunoassays with confirmatory PCR, and the data monitoring and communications system. Data obtained from an APDS that operated continuously for 7 days in a major U.S. transportation hub are reported.

  4. Autonomous Segmentation of Outcrop Images Using Computer Vision and Machine Learning

    NASA Astrophysics Data System (ADS)

    Francis, R.; McIsaac, K.; Osinski, G. R.; Thompson, D. R.

    2013-12-01

    As planetary exploration missions become increasingly complex and capable, the motivation grows for improved autonomous science. New capabilities for onboard science data analysis may relieve radio-link data limits and provide greater throughput of scientific information. Adaptive data acquisition, storage and downlink may ultimately hold implications for mission design and operations. For surface missions, geology remains an essential focus, and the investigation of in place, exposed geological materials provides the greatest scientific insight and context for the formation and history of planetary materials and processes. The goal of this research program is to develop techniques for autonomous segmentation of images of rock outcrops. Recognition of the relationships between different geological units is the first step in mapping and interpreting a geological setting. Applications of automatic segmentation include instrument placement and targeting and data triage for downlink. Here, we report on the development of a new technique in which a photograph of a rock outcrop is processed by several elementary image processing techniques, generating a feature space which can be interrogated and classified. A distance metric learning technique (Multiclass Discriminant Analysis, or MDA) is tested as a means of finding the best numerical representation of the feature space. MDA produces a linear transformation that maximizes the separation between data points from different geological units. This 'training step' is completed on one or more images from a given locality. Then we apply the same transformation to improve the segmentation of new scenes containing similar materials to those used for training. The technique was tested using imagery from Mars analogue settings at the Cima volcanic flows in the Mojave Desert, California; impact breccias from the Sudbury impact structure in Ontario, Canada; and an outcrop showing embedded mineral veins in Gale Crater on Mars.
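
    The discriminant-analysis step can be illustrated in its simplest two-class, two-feature form: the Fisher direction w = Sw⁻¹(m₁ − m₀), which maximizes between-class separation relative to within-class scatter. This is a minimal stand-in for the multiclass MDA used in the paper, with hand-made toy "pixel feature" clusters:

```python
def mean2(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def scatter2(pts, m):
    """Entries (sxx, sxy, syy) of the 2x2 scatter matrix about mean m."""
    sxx = sxy = syy = 0.0
    for x, y in pts:
        dx, dy = x - m[0], y - m[1]
        sxx += dx * dx; sxy += dx * dy; syy += dy * dy
    return sxx, sxy, syy

def fisher_direction(class_a, class_b):
    """Two-class Fisher discriminant direction w = Sw^-1 (mb - ma),
    returned unit-normalized; a toy stand-in for multiclass MDA."""
    ma, mb = mean2(class_a), mean2(class_b)
    axx, axy, ayy = scatter2(class_a, ma)
    bxx, bxy, byy = scatter2(class_b, mb)
    sxx, sxy, syy = axx + bxx, axy + bxy, ayy + byy
    dx, dy = mb[0] - ma[0], mb[1] - ma[1]
    det = sxx * syy - sxy * sxy
    # solve Sw * w = (dx, dy) by Cramer's rule for the 2x2 system
    wx = (syy * dx - sxy * dy) / det
    wy = (sxx * dy - sxy * dx) / det
    norm = (wx * wx + wy * wy) ** 0.5
    return wx / norm, wy / norm

unit_a = [(0.0, 0.0), (0.1, 0.1), (-0.1, 0.1), (0.1, -0.1), (-0.1, -0.1)]
unit_b = [(1.0, 0.0), (1.1, 0.1), (0.9, 0.1), (1.1, -0.1), (0.9, -0.1)]
w = fisher_direction(unit_a, unit_b)  # points along the separating axis
```

    Projecting new pixels' feature vectors onto w (or onto the MDA transform, in the multiclass case) is what sharpens the segmentation of fresh scenes.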

  5. Autonomous and Autonomic Systems: A Paradigm for Future Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walter F.; Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2004-01-01

    NASA increasingly will rely on autonomous systems concepts, not only in the mission control centers on the ground, but also on spacecraft and on rovers and other assets on extraterrestrial bodies. Autonomy enables not only reduced operations costs, but also adaptable goal-driven functionality of mission systems. Space missions lacking autonomy will be unable to achieve the full range of advanced mission objectives, given that human control under dynamic environmental conditions will not be feasible due, in part, to the unavoidably high signal propagation latency and constrained data rates of mission communications links. While autonomy cost-effectively supports accomplishment of mission goals, autonomicity supports survivability of remote mission assets, especially when human tending is not feasible. Autonomic system properties (which ensure self-configuring, self-optimizing, self-healing, and self-protecting behavior) conceptually may enable space missions of a higher order than any previously flown. Analysis of two NASA agent-based systems previously prototyped, and of a proposed future mission involving numerous cooperating spacecraft, illustrates how autonomous and autonomic system concepts may be brought to bear on future space missions.

  6. Vision-aided inertial navigation system for robotic mobile mapping

    NASA Astrophysics Data System (ADS)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology on the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping” where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy that are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for feature-coordinate determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features that are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo-pair. Due to its autonomous nature, the SLAM's performance is further affected by the quality of IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.
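
    The KF correction step at the heart of such an architecture fuses the inertial prediction with the external photogrammetric position fix. A scalar (one-coordinate) sketch of the measurement update; the state, variances, and values are illustrative, not from the prototype:

```python
def kf_update(x_pred, p_pred, z, r):
    """Scalar Kalman filter measurement update: blend the inertially
    predicted position x_pred (variance p_pred) with an external fix
    z (variance r), as the LSA resection output corrects the INS."""
    k = p_pred / (p_pred + r)       # Kalman gain: trust ratio
    x = x_pred + k * (z - x_pred)   # corrected state estimate
    p = (1.0 - k) * p_pred          # reduced state uncertainty
    return x, p

x, p = kf_update(x_pred=10.0, p_pred=4.0, z=11.0, r=1.0)
# gain 0.8: the estimate moves most of the way toward the precise fix
```

    The full system runs this over a position/velocity/attitude state vector with matrix covariances, but the trust-weighted blend is the same.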

  7. Vision-based obstacle recognition system for automated lawn mower robot development

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system involve some challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images are very important for any intelligent system, such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. Focus was given to the study of different types and sizes of obstacles, the development of the vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement, and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
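
    The edge-detection step in such a pipeline is commonly a Sobel gradient-magnitude pass. A minimal pure-Python instance on a 2-D list of grey levels (the test image is illustrative; the paper does not specify its operator):

```python
def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels over the interior of
    a 2-D list of grey levels; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical dark/bright boundary produces a strong response:
img = [[0, 0, 9, 9] for _ in range(4)]
edges = sobel_magnitude(img)
```

    Thresholding the resulting magnitudes and grouping connected responses is what turns edges into candidate obstacle regions.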

  8. A bio-inspired apposition compound eye machine vision sensor system.

    PubMed

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-12-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision system of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm. PMID:19901450
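
    A "simple control algorithm" for target following from a compound-eye-like sensor can be as little as steering toward the brightness centroid of a one-dimensional photoreceptor array. The centroid rule and gain below are illustrative assumptions, not the control law used on the Wyoming robot:

```python
def steering_command(photoreceptors, gain=1.0):
    """Proportional steering from a 1-D array of photoreceptor
    intensities: positive output means turn toward the right end of
    the array, negative toward the left, zero means on target."""
    total = sum(photoreceptors)
    if total == 0:
        return 0.0                      # no target visible
    n = len(photoreceptors)
    center = (n - 1) / 2.0
    centroid = sum(i * v for i, v in enumerate(photoreceptors)) / total
    return gain * (centroid - center)

cmd = steering_command([0, 0, 1, 5, 2])  # bright target right of centre
```

    Obstacle avoidance would add a repulsive term of opposite sign on channels flagged as obstacles; the attraction/repulsion sum is the classic reactive scheme.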

  9. A bio-inspired apposition compound eye machine vision sensor system.

    PubMed

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-12-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision system of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm.

  10. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  11. Synthetic vision systems: operational considerations simulation experiment

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-04-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  12. Real-time enhanced vision system

    NASA Astrophysics Data System (ADS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-05-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.
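
    The core of single-scale Retinex is subtracting the log of a local illumination estimate from the log of the signal, which lifts low-contrast detail independently of the overall light level. A 1-D sketch using a box mean in place of the Gaussian surround of the LaRC algorithm (the window size and signal are illustrative):

```python
import math

def single_scale_retinex(row, window=3):
    """1-D single-scale Retinex sketch: log(signal) minus log(local
    mean). A box mean stands in for the Gaussian surround; windows
    are clipped at the array ends."""
    half = window // 2
    out = []
    for i, v in enumerate(row):
        lo, hi = max(0, i - half), min(len(row), i + half + 1)
        local_mean = sum(row[lo:hi]) / (hi - lo)
        out.append(math.log(v) - math.log(local_mean))
    return out

# Halving the illumination leaves the Retinex output unchanged,
# which is exactly the low-contrast-lifting property exploited here:
a = single_scale_retinex([10, 12, 50, 12, 10])
b = single_scale_retinex([5, 6, 25, 6, 5])
```

    The production multi-scale Retinex averages several surround scales and adds color restoration, which this sketch omits.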

  13. Real-time Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-01-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.

  14. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  15. Enhanced vision system for laparoscopic surgery.

    PubMed

    Tamadazte, Brahim; Fiard, Gaelle; Long, Jean-Alexandre; Cinquin, Philippe; Voros, Sandrine

    2013-01-01

    Laparoscopic surgery offers benefits to the patients but poses new challenges to the surgeons, including a limited field of view. In this paper, we present an innovative vision system that can be combined with a traditional laparoscope and provides the surgeon with a global view of the abdominal cavity, bringing him or her closer to open surgery conditions. We present our first experiments performed on a test bench mimicking a laparoscopic setup: they demonstrate an important time gain in performing a complex task consisting of bringing a thread into the field of view of the laparoscope.

  16. The Autonomous Pathogen Detection System (APDS)

    SciTech Connect

    Morris, J; Dzenitis, J

    2004-09-22

    Shaped like a mailbox on wheels, it's been called a bioterrorism ''smoke detector.'' It can be found in transportation hubs such as airports and subways, and it may be coming to a location near you. Formally known as the Autonomous Pathogen Detection System, or APDS, this latest tool in the war on bioterrorism was developed at Lawrence Livermore National Laboratory to continuously sniff the air for airborne pathogens and toxins such as anthrax or plague. The APDS is the modern-day equivalent of the canaries miners took underground with them to test for deadly carbon monoxide gas. But this canary can test for numerous bacteria, viruses, and toxins simultaneously, report results every hour, and confirm positive samples and guard against false positive results by using two different tests. The fully automated system collects and prepares air samples around the clock, does the analysis, and interprets the results. It requires no servicing or human intervention for an entire week. Unlike its feathered counterpart, when an APDS unit encounters something deadly in the air, that's when it begins singing, quietly. The APDS unit transmits a silent alert and sends detailed data to public health authorities, who can order evacuation and begin treatment of anyone exposed to toxic or biological agents. It is the latest in a series of biodefense detectors developed at DOE/NNSA national laboratories. The manual predecessor to APDS, called BASIS (for Biological Aerosol Sentry and Information System), was developed jointly by Los Alamos and Lawrence Livermore national laboratories. That system was modified to become BioWatch, the Department of Homeland Security's biological urban monitoring program. A related laboratory instrument, the Handheld Advanced Nucleic Acid Analyzer (HANAA), was first tested successfully at LLNL in September 1997. Successful partnering with private industry has been a key factor in the rapid advancement and deployment of biodefense instruments such as these.

  17. DLP™-based dichoptic vision test system

    NASA Astrophysics Data System (ADS)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state-of-the-art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3% remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.
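Interocular crosstalk figures like those quoted above are conventionally computed as the luminance leaking into the unintended eye as a fraction of the intended signal, with the display black level subtracted from both. A minimal sketch (the luminance readings in the usage note are hypothetical, not measurements from the paper):

```python
def crosstalk_percent(leakage_lum, signal_lum, black_lum=0.0):
    """Percentage crosstalk: luminance leaking into the unintended eye,
    relative to the intended signal, both above the display black level."""
    return 100.0 * (leakage_lum - black_lum) / (signal_lum - black_lum)
```

For example, 1.4 cd/m² of leakage against a 10 cd/m² signal (with negligible black level) corresponds to the 14% figure cited for the CRT system.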

  18. Online updating of synthetic vision system databases

    NASA Astrophysics Data System (ADS)

    Simard, Philippe

    In aviation, synthetic vision systems render artificial views of the world (using a database of the world and pose information) to support navigation and situational awareness in low visibility conditions. The database needs to be periodically updated to ensure its consistency with reality, since it reflects at best a nominal state of the environment. This thesis presents an approach for automatically updating the geometry of synthetic vision system databases and 3D models in general. The approach is novel in that it profits from all of the available prior information: intrinsic/extrinsic camera parameters and geometry of the world. Geometric inconsistencies (or anomalies) between the model and reality are quickly localized; this localization serves to significantly reduce the complexity of the updating problem. Given a geometric model of the world, a sample image and known camera motion, a predicted image can be generated based on a differential approach. Model locations where predictions do not match observations are assumed to be incorrect. The updating is then cast as an optimization problem where differences between observations and predictions are minimized. To cope with system uncertainties, a mechanism that automatically infers their impact on prediction validity is derived. This method not only renders the anomaly detection process robust but also prevents the overfitting of the data. The updating framework is examined at first using synthetic data and further tested in both a laboratory environment and using a helicopter in flight. Experimental results show that the algorithm is effective and robust across different operating conditions.
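The anomaly-localization idea, flagging model locations where observations depart from predictions by more than the expected uncertainty, can be illustrated with a toy residual-thresholding sketch. The function name and the 3-sigma threshold are assumptions for illustration, not the thesis's actual formulation:

```python
import numpy as np

def flag_anomalies(observed, predicted, sigma, k=3.0):
    """Mark locations where the observed image departs from the
    model-based prediction by more than k standard deviations of the
    expected (prediction + sensor) noise."""
    residual = np.abs(np.asarray(observed, float) - np.asarray(predicted, float))
    return residual > k * sigma
```

Folding the uncertainty sigma into the threshold is what keeps the detector robust and prevents overfitting: only residuals that cannot be explained by noise trigger a geometry update.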

  19. Forward Obstacle Detection System by Stereo Vision

    NASA Astrophysics Data System (ADS)

    Iwata, Hiroaki; Saneyoshi, Keiji

    Forward obstacle detection is needed to prevent car accidents. We have developed a forward obstacle detection system with good detectability and distance accuracy using only stereo vision. The system runs in real time on a stereo processing system based on a Field-Programmable Gate Array (FPGA). Road surfaces are detected and the space to drive can be limited. A smoothing filter is also used. Owing to these, the distance accuracy is improved. In the experiments, this system could detect forward obstacles 100 m away. Its distance error up to 80 m was less than 1.5 m. It could immediately detect cutting-in objects.
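The distance estimate in such a stereo system follows the standard triangulation relation Z = f·B/d (focal length times baseline over disparity). A minimal sketch with illustrative parameter names, not the FPGA implementation:

```python
def stereo_distance(disparity_px, focal_px, baseline_m):
    """Distance from stereo disparity: Z = focal * baseline / disparity.
    Focal length and disparity in pixels; baseline and result in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Because Z varies as 1/d, a fixed sub-pixel disparity error grows roughly quadratically with range, which is why smoothing and road-surface constraints matter for keeping the error small at 80 m.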

  20. Function-based design process for an intelligent ground vehicle vision system

    NASA Astrophysics Data System (ADS)

    Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.

    2010-10-01

    An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
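A ray-casting pass of the kind described, marching rays outward through fused camera/laser data to find unobstructed headings, can be sketched over a 2-D occupancy grid. The grid representation and step size are assumptions for illustration, not the competition robot's actual code:

```python
import math

def ray_is_clear(grid, x0, y0, angle_rad, max_range, step=0.5):
    """March along a ray through a 2-D occupancy grid (1 = obstacle);
    return True if no occupied cell is hit within max_range."""
    d = 0.0
    while d < max_range:
        x = x0 + d * math.cos(angle_rad)
        y = y0 + d * math.sin(angle_rad)
        col, row = int(x), int(y)
        if not (0 <= row < len(grid) and 0 <= col < len(grid[0])):
            break                      # ray left the mapped area
        if grid[row][col]:
            return False               # obstacle blocks this heading
        d += step
    return True
```

Casting a fan of such rays and picking the longest clear ones yields candidate paths, which is cheap enough to run as one thread alongside motion control and sensing.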

  1. Robot vision system programmed in Prolog

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Hack, Ralf

    1995-10-01

    This is the latest in a series of publications which develop the theme of programming a machine vision system using the artificial intelligence language Prolog. The article states the long-term objective of the research program of which this work forms part. Many but not yet all of the goals laid out in this plan have already been achieved in an integrated system, which uses a multi-layer control hierarchy. The purpose of the present paper is to demonstrate that a system based upon a Prolog controller is capable of making complex decisions and operating a standard robot. The authors chose, as a vehicle for this exercise, the task of playing dominoes against a human opponent. This game was selected for this demonstration since it models a range of industrial assembly tasks, where parts are to be mated together. (For example, a 'daisy chain' of electronic equipment and the interconnecting cables/adapters may be likened to a chain of dominoes.)

  2. Autonomous landing of a helicopter UAV with a ground-based multisensory fusion system

    NASA Astrophysics Data System (ADS)

    Zhou, Dianle; Zhong, Zhiwei; Zhang, Daibing; Shen, Lincheng; Yan, Chengping

    2015-02-01

    This paper focuses on the vision-based autonomous landing problem for a helicopter unmanned aerial vehicle (UAV) and proposes a ground-based multisensory fusion system for autonomous landing. The system includes an infrared camera, an ultra-wideband radar that measures the distance between the UAV and the ground-based system, and a pan-tilt unit (PTU). Infrared cameras are used so that the UAV can be identified in all weather conditions. To reduce the complexity of computing the target's three-dimensional coordinates with stereovision or a single camera, the ultra-wideband radar's distance measurements provide depth information; real-time image-based PTU tracking follows the UAV and its three-dimensional coordinates are computed. Test results compared against DGPS show that the proposed approach is effective and robust.
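Given the radar range and the pan/tilt angles at which the tracking loop holds the UAV centered, the target's Cartesian coordinates follow from a spherical-to-Cartesian conversion. A sketch under the assumption that pan is measured in the horizontal plane and tilt upward from it (the paper's actual frame conventions may differ):

```python
import math

def target_position(range_m, pan_rad, tilt_rad):
    """Convert radar range plus PTU pointing angles into Cartesian
    (x, y, z) coordinates in the ground-station frame."""
    horizontal = range_m * math.cos(tilt_rad)   # ground-plane projection
    return (horizontal * math.cos(pan_rad),
            horizontal * math.sin(pan_rad),
            range_m * math.sin(tilt_rad))
```

This is why a single ranging measurement removes the need for stereo: one camera fixes the bearing (pan/tilt), and the radar supplies the missing depth along that bearing.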

  3. Physics Simulation Software for Autonomous Propellant Loading and Gas House Autonomous System Monitoring

    NASA Technical Reports Server (NTRS)

    Regalado Reyes, Bjorn Constant

    2015-01-01

    1. Kennedy Space Center (KSC) is developing a mobile launching system with autonomous propellant loading capabilities for liquid-fueled rockets. An autonomous system will be responsible for monitoring and controlling the storage, loading and transferring of cryogenic propellants. The Physics Simulation Software will reproduce the sensor data seen during the delivery of cryogenic fluids including valve positions, pressures, temperatures and flow rates. The simulator will provide insight into the functionality of the propellant systems and demonstrate the effects of potential faults. This will provide verification of the communications protocols and the autonomous system control. 2. The High Pressure Gas Facility (HPGF) stores and distributes hydrogen, nitrogen, helium and high pressure air. The hydrogen and nitrogen are stored in cryogenic liquid state. The cryogenic fluids pose several hazards to operators and the storage and transfer equipment. Constant monitoring of pressures, temperatures and flow rates are required in order to maintain the safety of personnel and equipment during the handling and storage of these commodities. The Gas House Autonomous System Monitoring software will be responsible for constantly observing and recording sensor data, identifying and predicting faults and relaying hazard and operational information to the operators.

  4. 78 FR 5557 - Twenty-First Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-25

    ... Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation Administration (FAA), U.S. Department... Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public.../Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held February 5-7, 2013 from 9:00...

  5. Next generation enhanced vision system processing

    NASA Astrophysics Data System (ADS)

    Bernhardt, M.; Cowell, C.; Riley, T.

    2008-04-01

    The use of multiple, high sensitivity sensors can be usefully exploited within military airborne enhanced vision systems (EVS) to provide enhanced situational awareness. To realise such benefits, the imagery from the discrete sensors must be accurately combined and enhanced prior to image presentation to the aircrew. Furthermore, great care must be taken to not introduce artefacts or false information through the image processing routines. This paper outlines developments made to a specific system that uses three collocated low light level cameras. As well as seamlessly merging the individual images, sophisticated processing techniques are used to enhance image quality as well as to remove optical and sensor artefacts such as vignetting and CCD charge smear. The techniques have been designed and tested to be robust across a wide range of scenarios and lighting conditions, and the results presented here highlight the increased performance of the new algorithms over standard EVS image processing techniques.
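Vignetting removal of the kind mentioned is commonly done by flat-field division: each frame is divided by the normalized response of a uniformly illuminated reference image. A generic sketch, not the system's proprietary routine:

```python
import numpy as np

def correct_vignetting(frame, flat_field):
    """Flat-field correction: divide the frame by the per-pixel gain
    map derived from a uniformly illuminated reference image."""
    gain = flat_field / np.mean(flat_field)    # normalize to unit mean
    return frame / gain
```

A frame whose shading matches the flat field comes out uniform, so lens fall-off toward the image corners is removed before the three camera images are merged.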

  6. Application of aircraft navigation sensors to enhanced vision systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.

    1993-01-01

    In this presentation, the applicability of various aircraft navigation sensors to enhanced vision system design is discussed. First, the accuracy requirements of the FAA for precision landing systems are presented, followed by the current navigation systems and their characteristics. These systems include Instrument Landing System (ILS), Microwave Landing System (MLS), Inertial Navigation, Altimetry, and Global Positioning System (GPS). Finally, the use of navigation system data to improve enhanced vision systems is discussed. These applications include radar image rectification, motion compensation, and image registration.

  7. Conducting IPN actuators for biomimetic vision system

    NASA Astrophysics Data System (ADS)

    Festin, Nicolas; Plesse, Cedric; Chevrot, Claude; Teyssié, Dominique; Pirim, Patrick; Vidal, Frederic

    2011-04-01

    In recent years, many studies on electroactive polymer (EAP) actuators have been reported. One promising technology is the elaboration of electronic conducting polymers based actuators with Interpenetrating Polymer Networks (IPNs) architecture. Their many advantageous properties as low working voltage, light weight and high lifetime (several million cycles) make them very attractive for various applications including robotics. Our laboratory recently synthesized new conducting IPN actuators based on high molecular Nitrile Butadiene Rubber, poly(ethylene oxide) derivative and poly(3,4-ethylenedioxithiophene). The presence of the elastomer greatly improves the actuator performances such as mechanical resistance and output force. In this article we present the IPN and actuator synthesis, characterizations and design allowing their integration in a biomimetic vision system.

  8. Formal Methods for Autonomic and Swarm-based Systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Vanderbilt, Amy; Hinchey, Mike; Truszkowski, Walt; Rash, James

    2004-01-01

    Swarms of intelligent rovers and spacecraft are being considered for a number of future NASA missions. These missions will provide NASA scientists and explorers greater flexibility and the chance to gather more science than traditional single-spacecraft missions. These swarms of spacecraft are intended to operate for long periods of time without contact with the Earth. To do this, they must be highly autonomous, have autonomic properties, and utilize sophisticated artificial intelligence. The Autonomous Nano Technology Swarm (ANTS) mission is an example of the swarm type of mission NASA is considering. This mission will explore the asteroid belt using an insect colony analogy, cataloging the mass, density, morphology, and chemical composition of the asteroids, including any anomalous concentrations of specific minerals. Verifying such a system would be a huge task. This paper discusses ongoing work to develop a formal method for verifying swarm and autonomic systems.

  9. Hi-Vision telecine system using pickup tube

    NASA Astrophysics Data System (ADS)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  10. Flight Test Comparison Between Enhanced Vision (FLIR) and Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-01-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  11. Technological process supervising using vision systems cooperating with the LabVIEW vision builder

    NASA Astrophysics Data System (ADS)

    Hryniewicz, P.; Banaś, W.; Gwiazda, A.; Foit, K.; Sękala, A.; Kost, G.

    2015-11-01

    One of the most important tasks in a production process is to supervise its proper functioning. Lack of the required supervision over the production process can lead to incorrect manufacturing of the final element, to production line downtime and hence to financial losses; the worst outcome is damage to the equipment involved in the manufacturing process. Engineers supervising the correctness of the production flow use a great range of sensors to support the supervision of a manufactured element. Vision systems are one such family of sensors. In recent years, thanks to the accelerated development of electronics as well as easier access to electronic products and attractive prices, they have become a cheap and universal type of sensor. These sensors detect practically all objects, regardless of their shape or even state of matter; the only problematic cases are transparent or mirrored objects viewed from the wrong angle. By integrating the vision system with LabVIEW Vision and the LabVIEW Vision Builder, it is possible to determine not only the position of a given element but also its orientation relative to any point in the analyzed space. The paper presents an example of automated inspection of the manufacturing process in a production workcell using the vision supervising system. The aim of the work is to elaborate a vision system that could integrate different applications and devices used in different production systems to control the manufacturing process.

  12. Flight test comparison between enhanced vision (FLIR) and synthetic vision systems

    NASA Astrophysics Data System (ADS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-05-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  13. Nuclear bimodal new vision solar system missions

    SciTech Connect

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    This paper presents an analysis of the potential mission capability using space reactor bimodal systems for planetary missions. Missions of interest include the main belt asteroids, Jupiter, Saturn, Neptune, and Pluto. The space reactor bimodal system, defined by an Air Force study for Earth orbital missions, provides 10 kWe power, 1000 N thrust, 850 s Isp, with a 1500 kg system mass. Trajectories to the planetary destinations were examined and optimal direct and gravity-assisted trajectories were selected. A conceptual design for a spacecraft using the space reactor bimodal system for propulsion and power, capable of performing the missions of interest, is defined. End-to-end mission conceptual designs for bimodal orbiter missions to Jupiter and Saturn are described. All missions considered use the Delta 3 class or Atlas 2AS launch vehicles. The space reactor bimodal power and propulsion system offers both new-vision ''constellation''-type missions, in which the space reactor bimodal spacecraft acts as a carrier and communication spacecraft for a fleet of microspacecraft deployed at different scientific targets, and conventional missions with only a space reactor bimodal spacecraft and its science payload. © 1996 American Institute of Physics.

  14. Targeting the autonomic nervous system: measuring autonomic function and novel devices for heart failure management.

    PubMed

    Patel, Hitesh C; Rosen, Stuart D; Lindsay, Alistair; Hayward, Carl; Lyon, Alexander R; di Mario, Carlo

    2013-12-10

    Neurohumoral activation, in which enhanced activity of the autonomic nervous system (ANS) is a key component, plays a pivotal role in heart failure. The neurohumoral system affects several organs and currently our knowledge of the molecular and systemic pathways involved in the neurohumoral activation is incomplete. All the methods of assessing the degree of activation of the autonomic system have limitations and they are not interchangeable. The methods considered include noradrenaline spillover, microneurography, radiotracer imaging and analysis of heart rate and blood pressure (heart rate variability, baroreceptor sensitivity, heart rate turbulence). Despite the difficulties, medications that affect the ANS have been shown to improve mortality in heart failure and the mechanism is related to attenuation of the sympathetic nervous system (SNS) and stimulation of the parasympathetic nervous system. However, limitations of compliance with medication, side effects and inadequate SNS attenuation are issues of concern with the pharmacological approach. The newer device based therapies for sympathetic modulation are showing encouraging results. As they directly influence the autonomic nervous system, more mechanistic information can be gleaned if appropriate investigations are performed at the time of the outcome trials. However, clinicians should be reminded that the ANS is an evolutionary survival mechanism and therefore there is a need to proceed with caution when trying to completely attenuate its effects. So our enthusiasm for the application of these devices in heart failure should be controlled, especially as none of the devices have trial data powered to assess effects on mortality or cardiovascular events.

  15. Development of an Automatic Identification System Autonomous Positioning System.

    PubMed

    Hu, Qing; Jiang, Yi; Zhang, Jingbo; Sun, Xiaowen; Zhang, Shufang

    2015-11-11

    In order to overcome the vulnerability of the global navigation satellite system (GNSS) and provide robust position, navigation and time (PNT) information in marine navigation, the autonomous positioning system based on ranging-mode Automatic Identification System (AIS) is presented in the paper. The principle of the AIS autonomous positioning system (AAPS) is investigated, including the position algorithm, the signal measurement technique, the geometric dilution of precision, the time synchronization technique and the additional secondary factor correction technique. In order to validate the proposed AAPS, a verification system has been established in the Xinghai sea region of Dalian (China). Static and dynamic positioning experiments are performed. The original function of the AIS in the AAPS is not influenced. The experimental results show that the positioning precision of the AAPS is better than 10 m in the area with good geometric dilution of precision (GDOP) by the additional secondary factor correction technology. This is the most economical solution for a land-based positioning system to complement the GNSS for the navigation safety of vessels sailing along coasts.
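The geometric dilution of precision that conditions the quoted 10 m figure can be computed from the unit line-of-sight vectors to the ranging stations. A simplified 2-D sketch in the standard GNSS-style formulation, with a clock-bias column (an illustration, not the AAPS's actual algorithm):

```python
import numpy as np

def gdop(unit_vectors):
    """GDOP from 2-D unit line-of-sight vectors to the ranging stations.
    Each row of H is [ux, uy, 1] (position plus receiver clock bias);
    GDOP = sqrt(trace((H^T H)^-1))."""
    u = np.asarray(unit_vectors, dtype=float)
    H = np.hstack([u, np.ones((u.shape[0], 1))])
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))
```

Four stations spread at the cardinal directions give a GDOP near 1.1 (good geometry); stations bunched on one side inflate the value, degrading the achievable positioning precision for the same ranging error.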

  16. Development of an Automatic Identification System Autonomous Positioning System.

    PubMed

    Hu, Qing; Jiang, Yi; Zhang, Jingbo; Sun, Xiaowen; Zhang, Shufang

    2015-01-01

    In order to overcome the vulnerability of the global navigation satellite system (GNSS) and provide robust position, navigation and time (PNT) information in marine navigation, the autonomous positioning system based on ranging-mode Automatic Identification System (AIS) is presented in the paper. The principle of the AIS autonomous positioning system (AAPS) is investigated, including the position algorithm, the signal measurement technique, the geometric dilution of precision, the time synchronization technique and the additional secondary factor correction technique. In order to validate the proposed AAPS, a verification system has been established in the Xinghai sea region of Dalian (China). Static and dynamic positioning experiments are performed. The original function of the AIS in the AAPS is not influenced. The experimental results show that the positioning precision of the AAPS is better than 10 m in the area with good geometric dilution of precision (GDOP) by the additional secondary factor correction technology. This is the most economical solution for a land-based positioning system to complement the GNSS for the navigation safety of vessels sailing along coasts. PMID:26569258

  17. Development of an Automatic Identification System Autonomous Positioning System

    PubMed Central

    Hu, Qing; Jiang, Yi; Zhang, Jingbo; Sun, Xiaowen; Zhang, Shufang

    2015-01-01

    In order to overcome the vulnerability of the global navigation satellite system (GNSS) and provide robust position, navigation and time (PNT) information in marine navigation, the autonomous positioning system based on ranging-mode Automatic Identification System (AIS) is presented in the paper. The principle of the AIS autonomous positioning system (AAPS) is investigated, including the position algorithm, the signal measurement technique, the geometric dilution of precision, the time synchronization technique and the additional secondary factor correction technique. In order to validate the proposed AAPS, a verification system has been established in the Xinghai sea region of Dalian (China). Static and dynamic positioning experiments are performed. The original function of the AIS in the AAPS is not influenced. The experimental results show that the positioning precision of the AAPS is better than 10 m in the area with good geometric dilution of precision (GDOP) by the additional secondary factor correction technology. This is the most economical solution for a land-based positioning system to complement the GNSS for the navigation safety of vessels sailing along coasts. PMID:26569258

  18. Range gated active night vision system for automobiles.

    PubMed

    David, Ofer; Kopeika, Norman S; Weizer, Boaz

    2006-10-01

    Night vision for automobiles is an emerging automotive safety feature. We develop what we believe is an innovative night vision system using gated imaging principles. The concept of gated imaging is described along with its basic advantages, including the backscatter reduction mechanism for improved vision through fog, rain, and snow. Evaluation of performance is presented by analyzing bar pattern modulation and comparing Johnson chart predictions.
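The gating principle itself reduces to round-trip timing: the camera opens its gate a delay of 2R/c after the laser pulse, so only light returning from around range R is integrated and backscatter from nearby fog or rain is excluded. A minimal sketch of that timing calculation:

```python
C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def gate_delay_ns(range_m):
    """Gate-opening delay for a target at range_m: the laser pulse
    must travel out and back at c before the sensor is enabled."""
    return 2.0 * range_m / C_M_PER_S * 1e9
```

A target at 150 m thus needs a gate delay of about a microsecond; the gate width then sets the depth of the imaged slice.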

  19. Intelligent Computer Vision System for Automated Classification

    NASA Astrophysics Data System (ADS)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.
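
    As a minimal illustration of the preprocessing stage (dimensionality reduction before NN classification), the sketch below extracts the leading principal component by power iteration on the covariance matrix. This is not the paper's pipeline; the data are synthetic stand-ins for cork-tile texture features.

```python
import random

random.seed(0)

def pca_first_component(data, iters=100):
    """Return the mean and dominant principal direction of the data."""
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - mean[j] for j in range(d)] for row in data]
    # Sample covariance matrix (d x d).
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):  # power iteration -> dominant eigenvector
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return mean, v

# Synthetic 2-D features stretched along the first axis.
data = [[random.gauss(0, 5), random.gauss(0, 1)] for _ in range(200)]
mean, pc1 = pca_first_component(data)
```

    The recovered direction aligns with the high-variance axis, which is exactly the property that makes PCA useful for compressing feature vectors before training a classifier.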

  20. Intelligent Computer Vision System for Automated Classification

    SciTech Connect

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-21

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.

  1. Enhanced Flight Vision Systems and Synthetic Vision Systems for NextGen Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.

    2013-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with efficiency equivalent to visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory standards and design guidance to support introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision from 100 feet above the touchdown zone elevation to touchdown and rollout in visibilities as low as 1000 feet RVR appears to be viable, as touchdown performance was acceptable without any apparent workload penalties. A lower DH of 150 feet and/or possibly reduced visibility minima using SVS appears to be viable when implemented on a Head-Up Display, but the landing data suggest that head-down implementations require further study.

  2. Achieving safe autonomous landings on Mars using vision-based approaches

    NASA Technical Reports Server (NTRS)

    Pien, Homer

    1992-01-01

    Autonomous landing capabilities will be critical to the success of planetary exploration missions, and in particular to the exploration of Mars. Past studies have indicated that the probability of failure associated with open-loop landings is unacceptably high. Two approaches to achieving autonomous landings with higher probabilities of success are currently under analysis. If a landing site has been certified as hazard free, then navigational aids can be used to facilitate a precision landing. When only limited surface knowledge is available and landing areas cannot be certified as hazard free, then a hazard detection and avoidance approach can be used, in which the vehicle selects hazard free landing sites in real-time during its descent. Issues pertinent to both approaches, including sensors and algorithms, are presented. Preliminary results indicate that one promising approach to achieving high accuracy precision landing is to correlate optical images of the terrain acquired during the terminal descent phase with a reference image. For hazard detection scenarios, a sensor suite comprised of a passive intensity sensor and a laser ranging sensor appears promising as a means of achieving robust landings.
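
    The correlation-based precision-landing idea can be sketched as template matching: find where the descent image best matches a reference map by normalized cross-correlation. The tiny synthetic images below are stand-ins for real terrain frames.

```python
import random

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    n = len(a) * len(a[0])
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / n, sum(fb) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = sum((x - ma) ** 2 for x in fa) ** 0.5
    db = sum((y - mb) ** 2 for y in fb) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_match(reference, template):
    """Slide the template over the reference; return best offset and score."""
    th, tw = len(template), len(template[0])
    best, where = -2.0, (0, 0)
    for r in range(len(reference) - th + 1):
        for c in range(len(reference[0]) - tw + 1):
            patch = [row[c:c + tw] for row in reference[r:r + th]]
            s = ncc(patch, template)
            if s > best:
                best, where = s, (r, c)
    return where, best

random.seed(1)
reference = [[random.random() for _ in range(8)] for _ in range(8)]
template = [row[3:6] for row in reference[2:5]]  # patch taken at (2, 3)
loc, score = best_match(reference, template)
```

    In a real system the reference map comes from orbital imagery and the search must be robust to scale, rotation, and illumination changes, which this sketch ignores.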

  3. Achieving safe autonomous landings on Mars using vision-based approaches

    NASA Astrophysics Data System (ADS)

    Pien, Homer

    1992-03-01

    Autonomous landing capabilities will be critical to the success of planetary exploration missions, and in particular to the exploration of Mars. Past studies have indicated that the probability of failure associated with open-loop landings is unacceptably high. Two approaches to achieving autonomous landings with higher probabilities of success are currently under analysis. If a landing site has been certified as hazard free, then navigational aids can be used to facilitate a precision landing. When only limited surface knowledge is available and landing areas cannot be certified as hazard free, then a hazard detection and avoidance approach can be used, in which the vehicle selects hazard free landing sites in real-time during its descent. Issues pertinent to both approaches, including sensors and algorithms, are presented. Preliminary results indicate that one promising approach to achieving high accuracy precision landing is to correlate optical images of the terrain acquired during the terminal descent phase with a reference image. For hazard detection scenarios, a sensor suite comprised of a passive intensity sensor and a laser ranging sensor appears promising as a means of achieving robust landings.

  4. Role of the autonomic nervous system in modulating cardiac arrhythmias.

    PubMed

    Shen, Mark J; Zipes, Douglas P

    2014-03-14

    The autonomic nervous system plays an important role in the modulation of cardiac electrophysiology and arrhythmogenesis. Decades of research have contributed to a better understanding of the anatomy and physiology of the cardiac autonomic nervous system and provided evidence supporting the relationship of autonomic tone to clinically significant arrhythmias. The mechanisms by which autonomic activation is arrhythmogenic or antiarrhythmic are complex and differ for specific arrhythmias. In atrial fibrillation, simultaneous sympathetic and parasympathetic activations are the most common trigger. In contrast, in ventricular fibrillation in the setting of cardiac ischemia, sympathetic activation is proarrhythmic, whereas parasympathetic activation is antiarrhythmic. In inherited arrhythmia syndromes, sympathetic stimulation precipitates ventricular tachyarrhythmias and sudden cardiac death, except in Brugada and J-wave syndromes, where it can prevent them. The identification of specific autonomic triggers in different arrhythmias has prompted the idea of modulating autonomic activities for both preventing and treating these arrhythmias. This has been achieved by either neural ablation or stimulation. Neural modulation as a treatment for arrhythmias has been well established in certain diseases, such as long QT syndrome. However, in most other arrhythmia diseases, it is still an emerging modality and under investigation. Recent preliminary trials have yielded encouraging results. Further larger-scale clinical studies are necessary before widespread application can be recommended.

  5. The Function of the Autonomic Nervous System during Spaceflight

    PubMed Central

    Mandsager, Kyle Timothy; Robertson, David; Diedrich, André

    2015-01-01

    Introduction: Despite decades of study, a clear understanding of autonomic nervous system activity in space remains elusive. Differential interpretation of fundamental data has driven divergent theories of sympathetic activation and vasorelaxation. Methods: This paper will review the available in-flight autonomic and hemodynamic data in an effort to resolve these discrepancies. The NASA NEUROLAB mission, the most comprehensive assessment of autonomic function in microgravity to date, will be highlighted. The mechanisms responsible for altered autonomic activity during spaceflight, which include the effects of hypovolemia, cardiovascular deconditioning, and altered central processing, will be presented. Results: The NEUROLAB experiments demonstrated increased sympathetic activity and impairment of vagal baroreflex function during short-duration spaceflight. Subsequent non-invasive studies of autonomic function during spaceflight have largely reinforced these findings, and provide strong evidence that sympathetic activity is increased in space relative to the supine position on Earth. Others have suggested that microgravity induces a state of relative vasorelaxation and increased vagal activity when compared to upright posture on Earth. These ostensibly disparate theories are not mutually exclusive, but rather directly reflect different pre-flight postural controls. Conclusion: When these results are taken together, they demonstrate that the effectual autonomic challenge of spaceflight is small, and represents an orthostatic stress less than that of upright posture on Earth. In-flight countermeasures, including aerobic and resistance exercise, as well as short-arm centrifugation, have been successfully deployed to counteract these mechanisms. Despite subtle changes in autonomic activity during spaceflight, underlying neurohumoral mechanisms of the autonomic nervous system remain intact and cardiovascular function remains stable during long-duration flight. PMID:25820827

  6. Improving Human/Autonomous System Teaming Through Linguistic Analysis

    NASA Technical Reports Server (NTRS)

    Meszaros, Erica L.

    2016-01-01

    An area of increasing interest for the next generation of aircraft is autonomy and the integration of increasingly autonomous systems into the national airspace. Such integration requires humans to work closely with autonomous systems, forming human and autonomous agent teams. The intention behind such teaming is that a team composed of both humans and autonomous agents will operate better than homogeneous teams. Procedures exist for licensing pilots to operate in the national airspace system, and current work is being done to define methods for validating the function of autonomous systems; however, there is no method in place for assessing the interaction of these two disparate systems. Moreover, these systems are currently operated primarily by subject matter experts, limiting their use and the benefits of such teams. Providing additional information about the ongoing mission to the operator can lead to increased usability and allow for operation by non-experts. Linguistic analysis of the context of verbal communication provides insight into the intended meaning of commonly heard phrases such as "What's it doing now?" Analyzing the semantic sphere surrounding these common phrases enables the prediction of the operator's intent and allows the interface to supply the operator's desired information.

  7. A design strategy for autonomous systems

    NASA Technical Reports Server (NTRS)

    Forster, Pete

    1989-01-01

    Some solutions to crucial issues regarding the competent performance of an autonomously operating robot are identified; namely, that of handling multiple and variable data sources containing overlapping information and maintaining coherent operation while responding adequately to changes in the environment. Support for the ideas developed for the construction of such behavior are extracted from speculations in the study of cognitive psychology, an understanding of the behavior of controlled mechanisms, and the development of behavior-based robots in a few robot research laboratories. The validity of these ideas is supported by some simple simulation experiments in the field of mobile robot navigation and guidance.

  8. Advances in Autonomous Systems for Missions of Space Exploration

    NASA Astrophysics Data System (ADS)

    Gross, A. R.; Smith, B. D.; Briggs, G. A.; Hieronymus, J.; Clancy, D. J.

    New missions of space exploration will require unprecedented levels of autonomy to successfully accomplish their objectives. Both inherent complexity and communication distances will preclude levels of human involvement common to current and previous space flight missions. With exponentially increasing capabilities of computer hardware and software, including networks and communication systems, a new balance of work is being developed between humans and machines. This new balance holds the promise of meeting the greatly increased space exploration requirements, along with dramatically reduced design, development, test, and operating costs. New information technologies, which take advantage of knowledge-based software, model-based reasoning, and high performance computer systems, will enable the development of a new generation of design and development tools, schedulers, and vehicle and system health monitoring and maintenance capabilities. Such tools will provide a degree of machine intelligence and associated autonomy that has previously been unavailable. These capabilities are critical to the future of space exploration, given the science and operational requirements specified by such missions and the budgetary constraints that limit the ability to monitor and control these missions with a standing army of ground-based controllers. System autonomy capabilities have made great strides in recent years, for both ground and space flight applications. Autonomous systems have flown on advanced spacecraft, providing new levels of spacecraft capability and mission safety. Such systems operate by utilizing model-based reasoning that provides the capability to work from high-level mission goals, while deriving the detailed system commands internally, rather than having to have such commands transmitted from Earth. This enables missions of such complexity and communications distance as are not otherwise possible, as well as many more efficient and low cost

  9. Autonomic Cluster Management System (ACMS): A Demonstration of Autonomic Principles at Work

    NASA Technical Reports Server (NTRS)

    Baldassari, James D.; Kopec, Christopher L.; Leshay, Eric S.; Truszkowski, Walt; Finkel, David

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of achieving significant computational capabilities for high-performance computing applications, while simultaneously affording the ability to increase that capability simply by adding more (inexpensive) processors. However, the task of manually managing and configuring a cluster quickly becomes impossible as the cluster grows in size. Autonomic computing is a relatively new approach to managing complex systems that can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management.

  10. Autonomous rendezvous and feature detection system using TV imagery

    NASA Technical Reports Server (NTRS)

    Rice, R. B., Jr.

    1977-01-01

    Algorithms and equations are used for the conversion of standard television imaging system information into directly usable spatial and dimensional information. The system allows a spacecraft imaging system to be used as a sensor in operations such as deriving spacecraft steering signals, tracking, autonomous rendezvous and docking, and ranging.
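
    One way such a conversion from imagery to dimensional information can work is the pinhole-camera relation between an object's known physical size, its apparent size on the sensor, and its range. The focal length and pixel pitch below are assumed values for illustration, not figures from the report.

```python
# Range from apparent size under a pinhole camera model: an object of
# known physical size subtending a measured number of pixels yields range,
# usable as a rendezvous steering or ranging input.

def range_from_apparent_size(object_m, pixels, focal_mm=50.0, pitch_um=10.0):
    image_size_mm = pixels * pitch_um * 1e-3
    # Similar triangles: range / object_size = focal_length / image_size.
    return object_m * focal_mm / image_size_mm

# A 4 m target spanning 100 pixels -> 1 mm on the sensor -> 200 m away.
r = range_from_apparent_size(4.0, 100)
```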

  11. ROVER: A prototype active vision system

    NASA Astrophysics Data System (ADS)

    Coombs, David J.; Marsh, Brian D.

    1987-08-01

    The Roving Eyes project is an experiment in active vision. We present the design and implementation of a prototype that tracks colored balls in images from an on-line charge coupled device (CCD) camera. Rover is designed to keep up with its rapidly changing environment by handling best and average case conditions and ignoring the worst case. This allows Rover's techniques to be less sophisticated and consequently faster. Each of Rover's major functional units is relatively isolated from the others, and an executive which knows all the functional units directs the computation by deciding which jobs would be most effective to run. This organization is realized with a priority queue of jobs and their arguments. Rover's structure not only allows it to adapt its strategy to the environment, but also makes the system extensible. A capability can be added to the system by adding a functional module with a well defined interface and by modifying the executive to make use of the new module. The current implementation is discussed in the appendices.
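
    Rover's executive organization, a priority queue of jobs posted by functional units, can be sketched as follows; the job names and priorities are invented for illustration (lower number = higher priority).

```python
import heapq

class Executive:
    """Executive that always runs the job judged most effective next."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker so equal priorities run FIFO

    def post(self, priority, job, *args):
        heapq.heappush(self._queue, (priority, self._seq, job, args))
        self._seq += 1

    def run_all(self):
        results = []
        while self._queue:
            _, _, job, args = heapq.heappop(self._queue)
            results.append(job(*args))
        return results

execu = Executive()
execu.post(2, lambda: "predict ball position")
execu.post(1, lambda: "grab camera frame")
execu.post(3, lambda: "update display")
order = execu.run_all()
```

    Extending the system then amounts to registering a new functional module and teaching the executive when to enqueue its jobs, which matches the modular design the abstract describes.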

  12. Systems, methods and apparatus for quiescence of autonomic systems with self action

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Sterritt, Roy (Inventor)

    2011-01-01

    Systems, methods and apparatus are provided in which an autonomic unit or element is quiesced. A quiesce component of an autonomic unit can cause the autonomic unit to self-destruct if a stay-alive reprieve signal is not received after a predetermined time.
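
    The stay-alive mechanism can be sketched as a watchdog: the unit quiesces unless a reprieve signal resets its timer before the deadline. Simulated clock ticks keep the example deterministic; all names are illustrative, not from the patent.

```python
class AutonomicUnit:
    """Unit that self-quiesces unless a stay-alive reprieve arrives in time."""

    def __init__(self, deadline_ticks):
        self.deadline = deadline_ticks
        self.ticks_since_reprieve = 0
        self.alive = True

    def reprieve(self):
        """Stay-alive signal from the managing element resets the timer."""
        self.ticks_since_reprieve = 0

    def tick(self):
        if not self.alive:
            return
        self.ticks_since_reprieve += 1
        if self.ticks_since_reprieve > self.deadline:
            self.alive = False  # quiesce / self-destruct

unit = AutonomicUnit(deadline_ticks=3)
for _ in range(3):
    unit.tick()
unit.reprieve()      # signal arrives just in time; the unit survives
survived = unit.alive
for _ in range(4):
    unit.tick()      # no further reprieve -> the unit quiesces
```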

  13. Vision-based real-time obstacle detection and tracking for autonomous vehicle guidance

    NASA Astrophysics Data System (ADS)

    Yang, Ming; Yu, Qian; Wang, Hong; Zhang, Bo

    2002-03-01

    The ability to detect and track obstacles is essential for the safe visual guidance of autonomous vehicles, especially in urban environments. In this paper, we first review different plane projective transformation (PPT) based obstacle detection approaches under the planar ground assumption. Then, we give a simple proof of this approach with relative affine, a unified framework that includes the Euclidean, projective and affine frameworks by generalization and specialization. Next, we present a real-time hybrid obstacle detection method, which combines the PPT based method with a region segmentation based method to provide more accurate locations of obstacles. Finally, with the vehicle's position information, a Kalman filter is applied to track obstacles from frame to frame. This method has been tested on THMR-V (Tsinghua Mobile Robot V). Through various experiments we successfully demonstrate its real-time performance, high accuracy, and high robustness.
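
    The frame-to-frame tracking step can be sketched as a constant-velocity Kalman filter on one spatial axis. The noise levels and measurement sequence below are invented; the paper's actual filter design is not reproduced.

```python
# Hand-rolled 2x2 Kalman filter: state is (position, velocity), the
# measurement is the obstacle position detected in each frame.

def kalman_track(measurements, dt=0.1, q=1e-2, r=0.5):
    x, v = measurements[0], 0.0           # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    for z in measurements[1:]:
        # Predict with the constant-velocity model F = [[1, dt], [0, 1]].
        x, v = x + v * dt, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with the position measurement z (H = [1, 0]).
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        innov = z - x
        x, v = x + k0 * innov, v + k1 * innov
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, v

# Obstacle receding at 0.2 m per frame interval (dt = 0.1 s).
zs = [0.2 * k for k in range(10)]
pos, vel = kalman_track(zs)
```

    The filter's velocity estimate is what lets the tracker re-associate an obstacle across frames even when a detection is noisy or briefly missing.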

  14. Autonomous navigation system based on GPS and magnetometer data

    NASA Technical Reports Server (NTRS)

    Julie, Thienel K. (Inventor); Richard, Harman R. (Inventor); Bar-Itzhack, Itzhack Y. (Inventor)

    2004-01-01

    This invention is drawn to an autonomous navigation system using the Global Positioning System (GPS) and magnetometers for low Earth orbit satellites. Because a magnetometer is reliable and always provides information on spacecraft attitude, rate, and orbit, the magnetometer-GPS configuration solves the GPS initialization problem, decreasing the convergence time for the navigation estimate and improving the overall accuracy. Ultimately the magnetometer-GPS configuration enables the system to avoid a costly and inherently less reliable gyro for rate estimation. Being autonomous, this invention provides black-box spacecraft navigation, producing attitude, orbit, and rate estimates without any ground input, with high accuracy and reliability.

  15. Guidance, navigation and control system for autonomous proximity operations and docking of spacecraft

    NASA Astrophysics Data System (ADS)

    Lee, Daero

    This study develops an integrated guidance, navigation and control system for use in autonomous proximity operations and docking of spacecraft. A new approach strategy is proposed based on a modified system developed for use with the International Space Station. It is composed of three "V-bar hops" in the closing transfer phase, two periods of stationkeeping and a "straight line V-bar" approach to the docking port. Guidance, navigation and control functions are independently designed and are then integrated in the form of linear quadratic Gaussian-type control. The translational maneuvers are determined through the integration of the state-dependent Riccati equation control formulated using the nonlinear relative motion dynamics with the weight matrices adjusted at the steady state condition. The reference state is provided by a guidance function, and the relative navigation is performed using a rendezvous laser vision system and a vision sensor system, where a sensor mode change is made along the approach in order to provide effective navigation. The rotational maneuvers are determined through a linear quadratic Gaussian-type control using star trackers, gyros, and a vision sensor. The attitude estimation mode change is made from absolute estimation to relative attitude estimation during the stationkeeping phase inside the approach corridor. The rotational controller provides precise attitude control using weight matrices adjusted at the steady state condition, including the uncertainty of the moment of inertia and external disturbance torques. A six degree-of-freedom simulation demonstrates that the newly developed GNC system successfully performs autonomous proximity operations and meets the conditions for entering the final docking phase.
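
    The steady-state Riccati machinery behind such controllers can be illustrated, in drastically simplified scalar form, by iterating the discrete Riccati recursion to a stabilizing feedback gain. The plant and weights below are illustrative numbers only, not the spacecraft model from the study.

```python
# Scalar discrete algebraic Riccati equation for x[k+1] = a*x[k] + b*u[k],
# cost sum(q*x^2 + r*u^2): iterate p = q + a^2 p - (a b p)^2 / (r + b^2 p).

def dare_scalar(a, b, q, r, iters=200):
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    k = a * b * p / (r + b * b * p)   # steady-state gain for u = -k*x
    return p, k

# Mildly unstable illustrative plant; the gain must stabilize it.
p, k = dare_scalar(a=1.05, b=0.1, q=1.0, r=0.1)
closed_loop = 1.05 - 0.1 * k          # closed-loop pole a - b*k
```

    The state-dependent Riccati equation approach in the abstract generalizes this by re-linearizing the dynamics at the current state and resolving for the gain online.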

  16. Autonomous Dispersed Control System for Independent Micro Grid

    NASA Astrophysics Data System (ADS)

    Kawasaki, Kensuke; Matsumura, Shigenori; Iwabu, Koichi; Fujimura, Naoto; Iima, Takahito

    In this paper, we show an autonomous dispersed control system for an independent micro grid whose performance has been substantiated in China by Shikoku Electric Power Co. and its subsidiary companies under a trust from NEDO (New Energy and Industrial Technology Development Organization). For the control of grid-interconnected generators, an exclusive information line is very important for saving fuel cost and maintaining high frequency quality in the electric power supply, but it is relatively expensive in such a small micro grid. We contrived an autonomous dispersed control system without any exclusive information line for dispatching control and adjusting supply control. We have confirmed through the substantiation project in China that this autonomous dispersed control system for an independent micro grid performs very satisfactorily in terms of fuel consumption and electric power quality.
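
    A classical way to coordinate generators without a dedicated information line is frequency droop, where each unit reacts only to the locally measured grid frequency. The abstract does not state that the project used exactly this law, so the sketch below is a generic illustration with invented gains and setpoints.

```python
# Frequency droop: each generator raises its output in proportion to the
# frequency sag it measures locally; the shared frequency itself carries
# the dispatch information, so no communication line is needed.
F_NOM = 50.0  # nominal grid frequency, Hz

def droop_output(p_set_kw, droop_kw_per_hz, f_meas):
    """Generator output from its setpoint and the locally measured frequency."""
    return p_set_kw + droop_kw_per_hz * (F_NOM - f_meas)

# Two units share a load step when frequency sags to 49.9 Hz; the unit
# with the stiffer droop characteristic picks up more of the load.
g1 = droop_output(100.0, 200.0, 49.9)
g2 = droop_output(100.0, 100.0, 49.9)
```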

  17. High Speed Research - External Vision System (EVS)

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Imagine flying a supersonic passenger jet (like the Concorde) at 1500 mph with no front windows in the cockpit - it may one day be a reality, as seen in this animation still. NASA engineers are working to develop technology that would replace the forward cockpit windows in future supersonic passenger jets with large sensor displays. These displays would use video images, enhanced by computer-generated graphics, to take the place of the view out the front windows. The envisioned eXternal Visibility System (XVS) would guide pilots to an airport, warn them of other aircraft near their path, and provide additional visual aids for airport approaches, landings and takeoffs. Currently, supersonic transports like the Anglo-French Concorde droop the front of the jet (the 'nose') downward to allow the pilots to see forward during takeoffs and landings. By enhancing the pilots' vision with high-resolution video displays, future supersonic transport designers could eliminate the heavy, expensive, mechanically drooped nose. A future U.S. supersonic passenger jet, as envisioned by NASA's High-Speed Research (HSR) program, would carry 300 passengers more than 5000 nautical miles at more than 1500 miles per hour (more than twice the speed of sound). Traveling from Los Angeles to Tokyo would take only four hours, with an anticipated fare increase of only 20 percent over current ticket prices for substantially slower subsonic flights. Animation by Joey Ponthieux, Computer Sciences Corporation, Inc.

  18. Demonstration of a semi-autonomous hybrid brain-machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic.

    PubMed

    McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E

    2014-07-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914
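
    For reference, the balanced accuracy the abstract reports is the mean of sensitivity and specificity, which stays at 0.5 for a classifier that trivially predicts one class. The confusion-matrix counts below are invented to illustrate the formula, not the study's data.

```python
def balanced_accuracy(tp, fn, tn, fp):
    """Mean of sensitivity (recall on positives) and specificity."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2.0

# Illustrative counts for a movement-detection classifier.
ba = balanced_accuracy(tp=26, fn=2, tn=95, fp=5)

# A degenerate "never detect movement" classifier scores exactly 0.5.
ba_trivial = balanced_accuracy(tp=0, fn=10, tn=100, fp=0)
```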

  1. Faddeev-Jackiw quantization of non-autonomous singular systems

    NASA Astrophysics Data System (ADS)

    Belhadi, Zahir; Bérard, Alain; Mohrbach, Hervé

    2016-10-01

    We extend the quantization à la Faddeev-Jackiw for non-autonomous singular systems. This leads to a generalization of the Schrödinger equation for those systems. The method is exemplified by the quantization of the damped harmonic oscillator and the relativistic particle in an external electromagnetic field.
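The generic Faddeev-Jackiw construction that the paper extends can be sketched in its standard textbook form (this is the autonomous starting point, not the authors' non-autonomous generalization): starting from a first-order Lagrangian, one reads off the symplectic matrix and, when it is invertible, the brackets,

```latex
L(\xi,\dot{\xi},t) = a_i(\xi,t)\,\dot{\xi}^i - H(\xi,t), \qquad
f_{ij} = \frac{\partial a_j}{\partial \xi^i} - \frac{\partial a_i}{\partial \xi^j}, \qquad
\{\xi^i,\xi^j\} = \left(f^{-1}\right)^{ij}.
```

If \(f_{ij}\) is singular, its zero modes generate constraints that are fed back into the Lagrangian until an invertible symplectic matrix is obtained; promoting the resulting brackets to commutators yields the quantum theory, here generalized to a Schrödinger equation for non-autonomous systems.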

  2. Space station automation study: Autonomous systems and assembly, volume 2

    NASA Technical Reports Server (NTRS)

    Bradford, K. Z.

    1984-01-01

    This final report, prepared by Martin Marietta Denver Aerospace, provides the technical results of their input to the Space Station Automation Study, the purpose of which is to develop informed technical guidance in the use of autonomous systems to implement space station functions, many of which can be programmed in advance and are well suited for automated systems.

  3. Autonomous satellite navigation with the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Fuchs, A. J.; Wooden, W. H., II; Long, A. C.

    1977-01-01

    This paper discusses the potential of using the Global Positioning System (GPS) to provide autonomous navigation capability to NASA satellites in the 1980 era. Some of the driving forces motivating autonomous navigation are presented. These include such factors as advances in attitude control systems, onboard science annotation, and onboard gridding of imaging data. Simulation results which demonstrate baseline orbit determination accuracies using GPS data on Seasat, Landsat-D, and the Solar Maximum Mission are presented. Emphasis is placed on identifying error sources such as GPS time, GPS ephemeris, user timing biases, and user orbit dynamics, and in a parametric sense on evaluating their contribution to the orbit determination accuracies.

  4. Autonomous Control and Diagnostics of Space Reactor Systems

    SciTech Connect

    Upadhyaya, B.R.; Xu, X.; Perillo, S.R.P.; Na, M.G.

    2006-07-01

    This paper describes three key features of the development of an autonomous control strategy for space reactor systems. These include the development of a reactor simulation model for transient analysis, development of model-predictive control as part of the autonomous control strategy, and a fault detection and isolation module. The latter is interfaced with the control supervisor as part of a hierarchical control system. The approach has been applied to the nodal model of the SP-100 reactor with a thermo-electric generator. The results of application demonstrate the effectiveness of the control approach and its ability to reconfigure the control mode under fault conditions. (authors)

  5. Turning a remotely controllable observatory into a fully autonomous system

    NASA Astrophysics Data System (ADS)

    Swindell, Scott; Johnson, Chris; Gabor, Paul; Zareba, Grzegorz; Kubánek, Petr; Prouza, Michael

    2014-08-01

    We describe the complex process needed to turn an existing, old, operational observatory - the Steward Observatory's 61" Kuiper Telescope - into a fully autonomous system that observes without an observer. For this purpose, we employed RTS2, an open-source, Linux-based observatory control system, together with other open-source programs and tools (GNU compilers, the Python language for scripting, JQuery UI for the Web user interface). This presentation provides a guide, with time estimates, for newcomers to the field undertaking the challenging task of fully autonomous observatory operations.

  6. Vision aided inertial navigation system augmented with a coded aperture

    NASA Astrophysics Data System (ADS)

    Morrison, Jamie R.

    Navigation through a three-dimensional indoor environment is a formidable challenge for an autonomous micro air vehicle. A main obstacle to indoor navigation is maintaining a robust navigation solution (i.e. air vehicle position and attitude estimates) given the inadequate access to satellite positioning information. A MEMS (micro-electro-mechanical system) based inertial navigation system provides a small, power efficient means of maintaining a vehicle navigation solution; however, unmitigated error propagation from relatively noisy MEMS sensors results in the loss of a usable navigation solution over a short period of time. Several navigation systems use camera imagery to diminish error propagation by measuring the direction to features in the environment. Changes in feature direction provide information regarding direction for vehicle movement, but not the scale of movement. Movement scale information is contained in the depth to the features. Depth-from-defocus is a classic technique proposed to derive depth from a single image that involves analysis of the blur inherent in a scene with a narrow depth of field. A challenge to this method is distinguishing blurriness caused by the focal blur from blurriness inherent to the observed scene. In 2007, MIT's Computer Science and Artificial Intelligence Laboratory demonstrated replacing the traditional rounded aperture with a coded aperture to produce a complex blur pattern that is more easily distinguished from the scene. A key to measuring depth using a coded aperture then is to correctly match the blur pattern in a region of the scene with a previously determined set of blur patterns for known depths. As the depth increases from the focal plane of the camera, the observable change in the blur pattern for small changes in depth is generally reduced. Consequently, as the depth of a feature to be measured using a depth-from-defocus technique increases, the measurement performance decreases. However, a Fresnel zone
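The blur-pattern matching step described above can be sketched as follows. This is a minimal illustration, assuming Gaussian kernels as stand-ins for the measured coded-aperture blur patterns; depth estimation picks the kernel from a depth-indexed bank that best explains the observed blur:

```python
import numpy as np

def gaussian_kernel(sigma, size=9):
    # Stand-in PSF; a real coded-aperture system would use its measured patterns.
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
    return k / k.sum()

def blur(img, k):
    # 'Valid' 2-D correlation via explicit loops (fine for symmetric kernels).
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i+kh, j:j+kw] * k).sum()
    return out

def estimate_depth(sharp_patch, observed, depth_sigmas):
    # Match the observed blur against the bank of depth-indexed PSFs.
    errs = [((blur(sharp_patch, gaussian_kernel(s)) - observed)**2).sum()
            for s in depth_sigmas]
    return int(np.argmin(errs))

rng = np.random.default_rng(0)
patch = rng.random((32, 32))
sigmas = [0.5, 1.0, 2.0, 4.0]            # blur widths standing in for depths
observed = blur(patch, gaussian_kernel(sigmas[2]))   # true depth index 2
print(estimate_depth(patch, observed, sigmas))       # → 2
```

As the abstract notes, the spacing of the candidate blur widths matters: far from the focal plane, adjacent depths produce nearly identical blur and the error surface flattens.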

  7. Implicit numerical integration for periodic solutions of autonomous nonlinear systems

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.

    1982-01-01

    A change of variables that stabilizes numerical computations for periodic solutions of autonomous systems is derived. Computation of the period is decoupled from the rest of the problem for conservative systems of any order and for any second-order system. Numerical results are included for a second-order conservative system under a suddenly applied constant load. Near the critical load for the system, a small increment in load amplitude results in a large increase in amplitude of the response.
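The core numerical task, computing the period of an autonomous oscillation, can be illustrated on the simplest conservative second-order system, x'' = -x, whose exact period is 2π. This is an illustration of period measurement by time-marching, not the paper's change-of-variables method:

```python
import math

def rk4_step(f, t, y, h):
    # Classic fourth-order Runge-Kutta step for y' = f(t, y).
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def oscillator(t, y):
    x, v = y
    return [v, -x]          # x'' = -x, exact period 2*pi

# March the solution and record upward zero crossings of x(t).
h, t, y = 1e-3, 0.0, [1.0, 0.0]
crossings, prev = [], y[0]
while t < 20.0:
    y = rk4_step(oscillator, t, y, h)
    t += h
    if prev < 0.0 <= y[0]:
        crossings.append(t)
    prev = y[0]
period = crossings[1] - crossings[0]
print(abs(period - 2*math.pi) < 2e-3)   # → True
```

The paper's point is precisely that for autonomous systems the period is an unknown of the problem; decoupling its computation, rather than reading it off a simulation as here, is what stabilizes the numerics.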

  8. Advanced helmet vision system (AHVS) integrated night vision helmet mounted display (HMD)

    NASA Astrophysics Data System (ADS)

    Ashcraft, Todd W.; Atac, Robert

    2012-06-01

    Gentex Corporation, under contract to Naval Air Systems Command (AIR 4.0T), designed the Advanced Helmet Vision System to provide aircrew with 24-hour, visor-projected binocular night vision and HMD capability. AHVS integrates numerous key technologies, including high brightness Light Emitting Diode (LED)-based digital light engines, advanced lightweight optical materials and manufacturing processes, and innovations in graphics processing software. This paper reviews the current status of miniaturization and integration with the latest two-part Gentex modular helmet, highlights the lessons learned from previous AHVS phases, and discusses plans for qualification and flight testing.

  9. Synthetic vision as an integrated element of an enhanced vision system

    NASA Astrophysics Data System (ADS)

    Jennings, Chad W.; Alter, Keith W.; Barrows, Andrew K.; Bernier, Ken L.; Guell, Jeff J.

    2002-07-01

    Enhanced Vision Systems (EVS) and Synthetic Vision Systems (SVS) have the potential to allow vehicle operators to benefit from the best that various image sources have to offer. The ability to see in all directions, even in reduced visibility conditions, offers considerable benefits for operational effectiveness and safety. Nav3D and The Boeing Company are conducting development work on an Enhanced Vision System with an integrated Synthetic Vision System. The EVS consists of several imaging sensors whose outputs are digitally fused to give a pilot a better view of the outside world even in challenging visual conditions. The EVS is limited, however, to providing imagery within the viewing frustum of the imaging sensors. The SVS can provide a rendered image of an a priori database in any direction the pilot chooses to look, and thus can provide information on terrain and flight path that is outside the purview of the EVS. Design concepts of the system will be discussed. In addition, the ground and flight testing of the system will be described.

  10. Using Multimodal Input for Autonomous Decision Making for Unmanned Systems

    NASA Technical Reports Server (NTRS)

    Neilan, James H.; Cross, Charles; Rothhaar, Paul; Tran, Loc; Motter, Mark; Qualls, Garry; Trujillo, Anna; Allen, B. Danette

    2016-01-01

    Autonomous decision making in the presence of uncertainty is a deeply studied problem space, particularly in the area of autonomous systems operations for land, air, sea, and space vehicles. Various techniques, ranging from single-algorithm solutions to complex ensemble classifier systems, have been utilized in a research context to solve mission-critical flight decisions. Realizing such systems on actual autonomous hardware, however, is a difficult systems integration problem, constituting a majority of applied robotics development timelines. The ability to reliably and repeatedly classify objects during a vehicle's mission execution is vital for mitigating both static and dynamic environmental concerns, so that the mission may be completed successfully and the vehicle may operate and return safely. In this paper, the Autonomy Incubator proposes and discusses an ensemble learning and recognition system planned for our autonomous framework, AEON, in selected domains, which fuses decision criteria using prior experience on both the individual classifier layer and the ensemble layer to mitigate environmental uncertainty during operation.
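The ensemble-layer fusion of decision criteria weighted by prior experience can be sketched in a few lines. Labels and weights below are invented for illustration; the abstract does not describe AEON's actual fusion rule:

```python
from collections import defaultdict

def fuse(predictions, weights):
    # Weighted vote: each classifier's label is weighted by its prior accuracy,
    # a simple stand-in for "prior experience on the individual classifier layer".
    scores = defaultdict(float)
    for label, w in zip(predictions, weights):
        scores[label] += w
    return max(scores, key=scores.get)

# Three hypothetical classifiers voting on a terrain patch.
preds = ["rock", "vegetation", "rock"]
weights = [0.9, 0.6, 0.7]          # prior accuracies (illustrative)
print(fuse(preds, weights))        # rock: 1.6 vs vegetation: 0.6 → "rock"
```

Updating the weights from observed per-classifier performance during the mission would give the adaptive behavior the abstract describes.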

  11. Expert system issues in automated, autonomous space vehicle rendezvous

    NASA Technical Reports Server (NTRS)

    Goodwin, Mary Ann; Bochsler, Daniel C.

    1987-01-01

    The problems involved in automated autonomous rendezvous are briefly reviewed, and the Rendezvous Expert (RENEX) expert system is discussed with reference to its goals, the approach used, and its knowledge structure and contents. RENEX has been developed to support streamlining operations for the Space Shuttle and Space Station programs and to aid definition of mission requirements for the autonomous portions of rendezvous for the Mars Surface Sample Return and Comet Nucleus Sample Return unmanned missions. The experience with RENEX to date and recommendations for further development are presented.

  12. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  13. Autonomous Frequency-Domain System-Identification Program

    NASA Technical Reports Server (NTRS)

    Yam, Yeung; Mettler, Edward; Bayard, David S.; Hadaegh, Fred Y.; Milman, Mark H.; Scheid, Robert E.

    1993-01-01

    The Autonomous Frequency Domain Identification (AU-FREDI) computer program implements a system of methods, algorithms, and software developed for identifying the parameters of mathematical models of flexible-structure dynamics and for characterizing such models, dynamics, and structures (regarded as systems) by means of system transfer functions. The software is a collection of routines modified and reassembled to suit system-identification and control experiments on large flexible structures.
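The transfer-function characterization mentioned above can be illustrated with a minimal frequency-domain identification sketch: excite an "unknown" system with noise and estimate its frequency response from the input/output spectra. The FIR system here is an invented stand-in, not AU-FREDI's structural models:

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([0.5, 0.3, 0.2])       # "unknown" impulse response to recover
n = 1024
x = rng.standard_normal(n)          # excitation signal
y = np.convolve(x, h)               # measured output (full linear convolution)

# Empirical transfer-function estimate: ratio of output to input spectra,
# with the input zero-padded to the output length so the convolution
# theorem holds exactly.
m = len(y)
H_est = np.fft.rfft(y) / np.fft.rfft(x, m)
H_true = np.fft.rfft(h, m)
print(np.max(np.abs(H_est - H_true)) < 1e-6)   # → True
```

Real structural identification must additionally average over realizations and handle measurement noise; this sketch only shows the basic spectral-ratio idea.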

  14. The Tactile Vision Substitution System: Applications in Education and Employment

    ERIC Educational Resources Information Center

    Scadden, Lawrence A.

    1974-01-01

    The Tactile Vision Substitution System converts the visual image from a narrow-angle television camera to a tactual image on a 5-inch square, 100-point display of vibrators placed against the abdomen of the blind person. (Author)

  15. Building Artificial Vision Systems with Machine Learning

    SciTech Connect

    LeCun, Yann

    2011-02-23

    Three questions pose the next challenge for Artificial Intelligence (AI), robotics, and neuroscience. How do we learn perception (e.g. vision)? How do we learn representations of the perceptual world? How do we learn visual categories from just a few examples?

  16. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    SciTech Connect

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implication for security, (5) Digital rights management, (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  17. Random attractor of non-autonomous stochastic Boussinesq lattice system

    SciTech Connect

    Zhao, Min; Zhou, Shengfan

    2015-09-15

    In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations affected by time-dependent coupled coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of the random attractors as the intensity of the noise approaches zero.

  18. Glaucoma and concomitant status of autonomic nervous system.

    PubMed

    Kumar, R; Ahuja, V M

    1998-01-01

    There is much clinical evidence to suggest that certain types of glaucoma are related to the activity of the autonomic nervous system (ANS). Although some local changes have been documented, a systemic association has not so far been established. Hence, the present study was initiated in an attempt to bring out the association, if any, of systemic autonomic functions with glaucoma, especially primary closed angle glaucoma (PCAG). This study was carried out in the Department of Physiology, Maulana Azad Medical College, in association with the Glaucoma Clinic of Guru Nanak Eye Centre, New Delhi, from June 1993 to August 1994. ANS function tests were conducted using the Polyrite-8 Medicare System. The subjects were confirmed cases of PCAG with IOP 22.1 +/- 4.4 mmHg, and the possibility of autonomic neuropathy due to any other cause was ruled out. They were matched with normal subjects for age and anthropometry and were compared for sympathetic ANS activity by Galvanic Skin Resistance (GSR), Cold Pressor Response (CPR), corrected QT interval (QTc), and T-wave amplitude (TWA), and for parasympathetic ANS activity by Resting Heart Rate (RHR), Standing to Lying Ratio (SLR), and Valsalva Ratio; the data were analysed statistically using the standard 't' test. The results indicated increased sympathetic activity in 61% of PCAG subjects and decreased parasympathetic activity in 80% of PCAG subjects when compared with the control group, suggesting an association of autonomic neuropathy with PCAG.
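The standard 't' test used in the study can be reproduced in a few lines. The scores below are invented for illustration and are not the study's data:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    # Unbiased sample variance.
    m = mean(xs)
    return sum((x - m)**2 for x in xs) / (len(xs) - 1)

def t_statistic(a, b):
    # Student's two-sample t with pooled variance (the standard 't' test).
    na, nb = len(a), len(b)
    sp2 = ((na - 1)*var(a) + (nb - 1)*var(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1/na + 1/nb))

# Illustrative (made-up) sympathetic-activity scores for two groups.
glaucoma = [12.1, 13.4, 12.8, 14.0, 13.2]
control  = [10.2, 10.9, 11.1, 10.5, 10.8]
t = t_statistic(glaucoma, control)
print(t > 2.0)   # → True: group means differ by more than the pooled noise
```

The computed statistic would then be compared against the t distribution with n_a + n_b - 2 degrees of freedom for significance.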

  19. Correlation functions of an autonomous stochastic system with time delays

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Mei, Dong Cheng

    2014-03-01

    The auto-correlation and cross-correlation functions of an autonomous stochastic system with time delays are investigated. We obtain the distribution curves of the auto-correlation functions Cx(s) and Cy(s) and the cross-correlation functions Cxy(s) and Cyx(s) of the stochastic dynamic variables by the stochastic simulation method. The delay time prominently changes the behavior of the dynamical variables of an autonomous stochastic system, making the auto-correlation and cross-correlation alternately oscillate periodically from positive to negative, or from negative to positive, decrease gradually, and finally tend to zero with the decay time. The delay time and the noise strength have important impacts on the auto-correlation and cross-correlation of the autonomous stochastic delay system: the delay time enhances both, while the noise strength, on the contrary, lowers them. Under the time delay, we further show, by comparison, differences in the auto-correlation and cross-correlation between the dynamical variables x and y.
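The stochastic simulation method referred to above can be sketched for a generic linear delay system; the model, coefficients, and delay here are illustrative, not the paper's equations. Delayed negative feedback shows up as a negative correlation one delay later:

```python
import numpy as np

# Simulate x_{t+1} = a * x_{t - tau} + noise and estimate its normalized
# autocorrelation by direct simulation.
rng = np.random.default_rng(2)
tau, a, n = 10, -0.9, 20000
x = np.zeros(n)
for t in range(tau, n - 1):
    x[t + 1] = a * x[t - tau] + 0.1 * rng.standard_normal()

def autocorr(x, lag):
    # Normalized sample autocorrelation at the given lag.
    x = x - x.mean()
    return float(np.dot(x[:len(x) - lag], x[lag:]) / np.dot(x, x))

print(round(autocorr(x, 0), 6))      # 1.0 by construction
print(autocorr(x, tau + 1) < 0.0)    # negative feedback appears one delay later
```

Sweeping the delay and the noise amplitude in such a simulation is how the dependence of the correlation functions on both parameters is mapped out.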

  20. Is There Anything "Autonomous" in the Nervous System?

    ERIC Educational Resources Information Center

    Rasia-Filho, Alberto A.

    2006-01-01

    The terms "autonomous" or "vegetative" are currently used to identify one part of the nervous system composed of sympathetic, parasympathetic, and gastrointestinal divisions. However, the concepts that are under the literal meaning of these words can lead to misconceptions about the actual nervous organization. Some clear-cut examples indicate…

  1. Human Factors And Safety Considerations Of Night Vision Systems Flight

    NASA Astrophysics Data System (ADS)

    Verona, Robert W.; Rash, Clarence E.

    1989-03-01

    Military aviation night vision systems greatly enhance the capability to operate during periods of low illumination. After flying with night vision devices, most aviators are apprehensive about returning to unaided night flight. Current night vision imaging devices allow aviators to fly during ambient light conditions which would be extremely dangerous, if not impossible, with unaided vision. However, the visual input afforded by these devices does not approach that experienced using the unencumbered, unaided eye during periods of daylight illumination. Many visual parameters, e.g., acuity, field-of-view, depth perception, etc., are compromised when night vision devices are used. The inherent characteristics of image-intensification-based sensors introduce new problems associated with the interpretation of visual information whose spatial and spectral content differs from that of unaided vision. In addition, the mounting of these devices onto the helmet is accompanied by concerns of fatigue resulting from increased head-supported weight and a shift in center-of-gravity. All of these concerns have produced numerous human factors and safety issues relating to the use of night vision systems. These issues are identified and discussed in terms of their possible effects on user performance and safety.

  2. Panoramic stereo sphere vision

    NASA Astrophysics Data System (ADS)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost in the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system, built from a special combined fish-eye lens module, that is capable of producing 3D coordinate information from the whole global observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model, and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments, and attitude estimation are some of the applications that will benefit from PSSV.

  3. Power supply of autonomous systems using solar modules

    NASA Astrophysics Data System (ADS)

    Yurchenko, A. V.; Zotov, L. G.; Mekhtiev, A. D.; Yugai, V. V.; Tatkeeva, G. G.

    2015-04-01

    The article shows methods of constructing autonomous decentralized energy systems from solar modules. It describes the operation of a step-up DC inverter and demonstrates the effectiveness of DC inverters with a varying structure. The system has high efficiency and a low level of conducted impulse noise, while remaining practically feasible. Electrical processes have been analyzed to determine the characteristics of the operating modes of the main circuit elements, and recommendations on using the converters are given.

  4. Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle

    PubMed Central

    Chen, Long; Li, Qingquan; Li, Ming; Zhang, Liang; Mao, Qingzhou

    2012-01-01

    This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. The system uses cooperating lasers and cameras to realize several functions necessary for autonomous navigation: road curb detection, lane detection, and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on a Z-variance method. Vision-based lane detection is realized by a two-scan method combined with an image model. A Haar-like-feature-based method is applied for traffic sign detection, and a SURF matching method is used for sign classification. Experimental results validate the effectiveness of the proposed algorithms and the whole system.
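Haar-like features of the kind used for traffic sign detection are computed in constant time from an integral image. The sketch below shows the mechanism on a toy image; it is not the SmartV-II implementation:

```python
import numpy as np

def integral_image(img):
    # Summed-area table with a zero guard row/column for easy indexing.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def rect_sum(ii, r, c, h, w):
    # Sum of img[r:r+h, c:c+w] in O(1) using four integral-image lookups.
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    # Horizontal two-rectangle Haar-like feature: left half minus right half.
    return rect_sum(ii, r, c, h, w // 2) - rect_sum(ii, r, c + w // 2, h, w // 2)

img = np.zeros((8, 8))
img[:, :4] = 1.0            # bright left half, dark right half
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 8, 8))   # → 32.0
```

A cascade of thousands of such features, evaluated at every window position and scale, is what makes Haar-based detection fast enough for a vehicle-mounted camera.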

  5. An autonomous rendezvous and docking system using cruise missile technologies

    NASA Technical Reports Server (NTRS)

    Jones, Ruel Edwin

    1991-01-01

    In November 1990, the Autonomous Rendezvous & Docking (AR&D) system was first demonstrated for members of NASA's Strategic Avionics Technology Working Group. This simulation utilized prototype hardware from the Cruise Missile and Advanced Centaur avionics systems. The object was to show that all the accuracy, reliability, and operational requirements established for a spacecraft to dock with Space Station Freedom could be met by the proposed system. The rapid prototyping capabilities of the Advanced Avionics Systems Development Laboratory were used to evaluate the proposed system in a real-time, hardware-in-the-loop simulation of the rendezvous and docking reference mission. The simulation permits manual, supervised automatic, and fully autonomous operations to be evaluated. It is also being upgraded to be able to test an Autonomous Approach and Landing (AA&L) system. The AA&L and AR&D systems are very similar. Both use inertial guidance and control systems supplemented by GPS. Both use an Image Processing System (IPS) for target recognition and tracking. The IPS includes a general-purpose multiprocessor computer and a selected suite of sensors that provide the required relative position and orientation data. Graphic displays can also be generated by the computer, providing the astronaut/operator with real-time guidance and navigation data with enhanced video or sensor imagery.

  6. Blackboard architectures and their relationship to autonomous space systems

    NASA Technical Reports Server (NTRS)

    Thornbrugh, Allison

    1988-01-01

    The blackboard architecture provides a powerful paradigm for the autonomy expected in future spaceborne systems, especially SDI and Space Station. Autonomous systems will require skill in both the classic task of information analysis and the newer tasks of decision making, planning and system control. Successful blackboard systems have been built to deal with each of these tasks separately. The blackboard paradigm achieves success in difficult domains through its ability to integrate several uncertain sources of knowledge. In addition to flexible behavior during autonomous operation, the system must also be capable of incrementally growing from semiautonomy to full autonomy. The blackboard structure allows this development. The blackboard's ability to handle error, its flexible execution, and variants of this paradigm are discussed as they apply to specific problems of the space environment.

  7. Three-dimensional imaging system combining vision and ultrasonics

    NASA Astrophysics Data System (ADS)

    Wykes, Catherine; Chou, Tsung N.

    1994-11-01

    Vision systems are being applied to a wide range of inspection problems in manufacturing. In 2D systems, a single video camera captures an image of the object, and the application of suitable image processing techniques enables information about dimension, shape, and the presence of features and flaws to be extracted from the image. This can be used to recognize, inspect, and/or measure the part. 3D measurement is also possible with vision systems but requires either two or more cameras or structured lighting (i.e., stripes or grids), and the processing of such images is necessarily considerably more complex, and therefore slower and more expensive, than 2D imaging. Ultrasonic imaging is widely used in medical and NDT applications to give 3D images; in these systems, the ultrasound is propagated into a liquid or a solid. Imaging using air-borne ultrasound is much less advanced, mainly due to the limited availability of suitable sensors. Unique 2D ultrasonic ranging systems using in-house built phased arrays have been developed in Nottingham which enable both the range and bearing of targets to be measured. The ultrasonic/vision system will combine the excellent lateral resolution of a vision system with the straightforward range acquisition of the ultrasonic system. The system is expected to extend the use of vision systems in automation, particularly in the area of automated assembly, where it can eliminate the need for expensive jigs and orienting part-feeders.
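The range-and-bearing measurement described above reduces to pulse-echo time-of-flight and far-field array geometry. A small sketch with illustrative numbers (the Nottingham arrays' actual parameters are not given in the abstract):

```python
import math

C = 343.0   # approximate speed of sound in air at ~20 °C, m/s

def range_from_tof(round_trip_s):
    # Pulse-echo range: the sound travels out to the target and back.
    return C * round_trip_s / 2.0

def bearing_from_delay(delta_t_s, element_spacing_m):
    # Far-field bearing from the inter-element arrival delay in a
    # two-element array: sin(theta) = c * dt / d.
    return math.degrees(math.asin(C * delta_t_s / element_spacing_m))

print(round(range_from_tof(0.01), 3))            # → 1.715 (metres)
print(round(bearing_from_delay(5e-5, 0.05), 1))  # → 20.1 (degrees)
```

A full phased array repeats the delay measurement across many element pairs, which is what gives usable bearing resolution despite the long acoustic wavelength.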

  8. Immune systems are not just for making you feel better: they are for controlling autonomous robots

    NASA Astrophysics Data System (ADS)

    Rosenblum, Mark

    2005-05-01

    The typical algorithm for robot autonomous navigation in off-road complex environments involves building a 3D map of the robot's surrounding environment using a 3D sensing modality such as stereo vision or active laser scanning, and generating an instantaneous plan to navigate around hazards. Although there has been steady progress using these methods, these systems suffer from several limitations that cannot be overcome with 3D sensing and planning alone. Geometric sensing alone has no ability to distinguish between compressible and non-compressible materials. As a result, these systems have difficulty in heavily vegetated environments and require sensitivity adjustments across different terrain types. On the planning side, these systems have no ability to learn from their mistakes and avoid problematic environmental situations on subsequent encounters. We have implemented an adaptive terrain classification system based on the Artificial Immune System (AIS) computational model, which is loosely based on the biological immune system, that combines various forms of imaging sensor inputs to produce a "feature labeled" image of the scene categorizing areas as benign or detrimental for autonomous robot navigation. Because of the qualities of the AIS computation model, the resulting system will be able to learn and adapt on its own through interaction with the environment by modifying its interpretation of the sensor data. The feature labeled results from the AIS analysis are inserted into a map and can then be used by a planner to generate a safe route to a goal point. The coupling of diverse visual cues with the malleable AIS computational model will lead to autonomous robotic ground vehicles that require less human intervention for deployment in novel environments and more robust operation as a result of the system's ability to improve its performance through interaction with the environment.
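The negative-selection step at the heart of many AIS classifiers can be sketched as follows. The 2-D feature vectors, match radius, and labels are invented for illustration; the system described above fuses far richer imaging cues:

```python
import random

# Negative selection: generate random detectors and discard any that match
# "self" (benign terrain signatures); the survivors flag anomalous terrain.

def matches(detector, sample, radius=0.25):
    # Euclidean match within a fixed radius.
    return sum((d - s)**2 for d, s in zip(detector, sample)) ** 0.5 < radius

random.seed(3)
self_set = [(0.2, 0.2), (0.25, 0.15), (0.18, 0.22)]   # benign signatures

detectors = []
while len(detectors) < 200:
    cand = (random.random(), random.random())
    if not any(matches(cand, s) for s in self_set):
        detectors.append(cand)          # survives negative selection

def is_detrimental(sample):
    return any(matches(d, sample) for d in detectors)

print(is_detrimental((0.2, 0.2)))    # a self sample: benign
print(is_detrimental((0.8, 0.9)))    # far from self: flagged
```

Adaptation, as described in the abstract, would come from updating the self set and detector population as the robot experiences new terrain.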

  9. Thinking Ahead: Autonomic Buildings

    SciTech Connect

    Brambley, Michael R.

    2002-08-31

    The time has come for the commercial buildings industries to reconsider the very nature of the systems installed in facilities today and to establish a vision for future buildings that differs from anything in the history of human shelter. Drivers for this examination include reductions in building operation staffs; uncertain costs and reliability of electric power; growing interest in energy-efficient and resource-conserving "green" and "high-performance" commercial buildings; and a dramatic increase in security concerns since the tragic events of September 11. This paper introduces a new paradigm, "autonomic buildings," which parallels the concept of autonomic computing, introduced by IBM as a fundamental change in the way computer networks work. Modeled after the human nervous system, "autonomic systems" themselves take responsibility for a large portion of their own operation and even maintenance. For commercial buildings, autonomic systems could provide environments that afford occupants greater opportunity to focus on the things we do in buildings rather than on operation of the building itself, while achieving higher performance levels, increased security, and better use of energy and other natural resources. The author uses the human body and computer networking to introduce and illustrate this new paradigm for high-performance commercial buildings. He provides a vision for the future of commercial buildings based on autonomicity, identifies current research that could contribute to this future, and highlights research and technological gaps. The paper concludes with a set of issues and needs that are key to converting this idealized future into reality.

  10. Latency in Visionic Systems: Test Methods and Requirements

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays, or latencies, including those of the imaging sensors and display systems, can critically degrade utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated depending upon the piloting task, the role the visionics device plays in this task, and the characteristics of the visionics cockpit display device, including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.
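The latency being standardized here reduces, in the simplest measurement scheme, to capture-to-display timestamp differences. A toy computation with invented timestamps (real test methods must also account for clock alignment and display refresh quantization):

```python
# End-to-end latency as the capture-to-display timestamp difference.
# Timestamp values (in milliseconds) are invented for illustration.
capture_ms = [0, 33, 67, 100]       # when each frame was sensed
display_ms = [18, 52, 85, 120]      # when each frame reached the display

latencies = [d - c for c, d in zip(capture_ms, display_ms)]
mean_latency = sum(latencies) / len(latencies)
worst = max(latencies)
print(mean_latency, worst)   # → 18.75 20
```

Against a 20 ms requirement for a guidance-critical head-worn display, it is the worst-case figure, not the mean, that matters.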

  11. Global Positioning System Synchronized Active Light Autonomous Docking System

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor); Bell, Joseph L. (Inventor)

    1996-01-01

    A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprising at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.
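    A common way to exploit flash-synchronized imaging of the kind the GPSSALADS patent describes is synchronous frame differencing: with exposures timed to the controlled flash frequency, static background cancels between lights-on and lights-off frames and only the flashing target remains. The sketch below is illustrative only, assuming plain 2-D intensity lists; the function name and threshold are not from the patent.

```python
def detect_flashing_targets(frame_on, frame_off, threshold=50):
    """Return (row, col) pixels that brighten when the target lights flash.

    frame_on / frame_off: 2-D lists of pixel intensities captured with the
    sensor exposure synchronized to the lights-on and lights-off phases
    (hypothetical stand-ins for the GPS-disciplined timing in the patent).
    """
    hits = []
    for r, (row_on, row_off) in enumerate(zip(frame_on, frame_off)):
        for c, (p_on, p_off) in enumerate(zip(row_on, row_off)):
            if p_on - p_off > threshold:  # static background cancels out
                hits.append((r, c))
    return hits
```

Because the differencing rejects anything that does not blink at the synchronized frequency, the tracker can lock onto the target array even against a cluttered, sunlit background.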

  12. Global Positioning System Synchronized Active Light Autonomous Docking System

    NASA Technical Reports Server (NTRS)

    Howard, Richard (Inventor)

    1994-01-01

    A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprises at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.

  13. Global Positioning System Synchronized Active Light Autonomous Docking System

    NASA Astrophysics Data System (ADS)

    Howard, Richard

    1994-08-01

    A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprises at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.

  14. Scheduling lessons learned from the Autonomous Power System

    NASA Technical Reports Server (NTRS)

    Ringer, Mark J.

    1992-01-01

    The Autonomous Power System (APS) project at NASA LeRC is designed to demonstrate the applications of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution systems. The project consists of three elements: the Autonomous Power Expert System (APEX) for Fault Diagnosis, Isolation, and Recovery (FDIR); the Autonomous Intelligent Power Scheduler (AIPS) to efficiently assign start times and resources to activities; and power hardware (Brassboard) to emulate a space-based power system. The AIPS scheduler was tested within the APS system. This scheduler is able to efficiently assign available power to the requesting activities and share this information with other software agents within the APS system in order to implement the generated schedule. The AIPS scheduler is also able to cooperatively recover from fault situations by rescheduling the affected loads on the Brassboard in conjunction with the APEX FDIR system. AIPS served as a learning tool and an initial scheduling testbed for the integration of FDIR and automated scheduling systems. Many lessons were learned from the AIPS scheduler and are now being integrated into a new scheduler called SCRAP (Scheduler for Continuous Resource Allocation and Planning). This paper serves three purposes: an overview of the AIPS implementation, lessons learned from the AIPS scheduler, and a brief section on how these lessons are being applied to the new SCRAP scheduler.
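    The core scheduling problem the abstract describes, assigning start times to activities so that total power draw never exceeds the available supply, can be sketched with a simple greedy heuristic. This is an illustrative toy, not the actual AIPS or SCRAP algorithm; the tuple layout and names are assumptions.

```python
def greedy_power_schedule(activities, power_limit, horizon):
    """Assign each activity the earliest start time at which its power
    demand fits under the limit for its whole duration.

    activities: list of (name, power_draw, duration) tuples.
    Returns {name: start_time}; activities that never fit are omitted.
    """
    usage = [0.0] * horizon           # power already committed per time slot
    schedule = {}
    for name, power, duration in activities:
        for start in range(horizon - duration + 1):
            window = usage[start:start + duration]
            if all(u + power <= power_limit for u in window):
                for t in range(start, start + duration):
                    usage[t] += power
                schedule[name] = start
                break
    return schedule
```

A real scheduler would also handle priorities, rescheduling after faults (as AIPS does with APEX), and resources other than power, but the resource-profile bookkeeping above is the common kernel.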

  15. Optical 3D laser measurement system for navigation of autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Básaca-Preciado, Luis C.; Sergiyenko, Oleg Yu.; Rodríguez-Quinonez, Julio C.; García, Xochitl; Tyrsa, Vera V.; Rivas-Lopez, Moises; Hernandez-Balbuena, Daniel; Mercorelli, Paolo; Podrygalo, Mikhail; Gurko, Alexander; Tabakova, Irina; Starostenko, Oleg

    2014-03-01

    In our current research, we are developing a practical autonomous mobile robot navigation system which is capable of performing obstacle-avoidance tasks in an unknown environment. Therefore, in this paper, we propose a robot navigation system which works using a high-accuracy localization scheme by dynamic triangulation. Our two main ideas are (1) the integration of two principal systems, a 3D laser scanning technical vision system (TVS) and a mobile robot (MR) navigation system, and (2) a novel MR navigation scheme, which benefits from the precise triangulation localization of obstacles, in contrast to the more familiar camera-oriented vision systems. For practical use, mobile robots are required to continue their tasks safely and with high accuracy under temporary occlusion conditions. Prototype II of the TVS presented in this work is significantly improved over prototype I of our previous publications in the aspects of laser-ray alignment, parasitic-torque decrease, and friction reduction of moving parts. The kinematic model of the MR used in this work is designed considering the optimal data acquisition from the TVS, with the main goal of obtaining, in real time, the necessary values for the kinematic model of the MR immediately during the calculation of obstacles based on the TVS data.
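    The triangulation step behind such a system can be illustrated with the textbook planar case: a point is located from two interior angles measured at the ends of a known baseline. This is a simplified sketch under assumed geometry (emitter and receiver on the x-axis), not the actual TVS design.

```python
import math

def triangulate(baseline, alpha, beta):
    """Locate a point from two interior angles (radians) measured at the
    ends of a baseline on the x-axis: emitter at (0, 0), receiver at
    (baseline, 0). Planar sketch of dynamic-triangulation localization.
    """
    # Intersection of the sight lines y = x*tan(alpha) and
    # y = (baseline - x)*tan(beta).
    y = (baseline * math.tan(alpha) * math.tan(beta)
         / (math.tan(alpha) + math.tan(beta)))
    x = y / math.tan(alpha)
    return x, y
```

With equal 45-degree angles the point sits halfway along the baseline at a height of half the baseline, which is a quick sanity check on the formula.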

  16. The organization of an autonomous learning system

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1988-01-01

    The organization of systems that learn from experience is examined, human beings and animals being prime examples of such systems. How is their information processing organized? They build an internal model of the world and base their actions on the model. The model is dynamic and predictive, and it includes the systems' own actions and their effects. In modeling such systems, a large pattern of features represents a moment of the system's experience. Some of the features are provided by the system's senses, some control the system's motors, and the rest have no immediate external significance. A sequence of such patterns then represents the system's experience over time. By storing such sequences appropriately in memory, the system builds a world model based on experience. In addition to the essential function of memory, fundamental roles are played by a sensory system that makes raw information about the world suitable for memory storage and by a motor system that affects the world. The relation of sensory and motor systems to the memory is discussed, together with how favorable actions can be learned and unfavorable actions can be avoided. Results in classical learning theory are explained in terms of the model, more advanced forms of learning are discussed, and the relevance of the model to the frame problem of robotics is examined.
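    The idea of building a predictive world model by storing sequences of experience can be caricatured with a toy successor memory: record which pattern followed which, then predict the most frequent successor. This dictionary-based sketch is only a stand-in for the kind of sequence storage the abstract discusses, not Kanerva's sparse distributed memory itself.

```python
from collections import defaultdict, Counter

class SequenceMemory:
    """Toy predictive world model: remembers which pattern followed
    which during experience and predicts the most frequent successor."""
    def __init__(self):
        self.successors = defaultdict(Counter)

    def experience(self, sequence):
        # Store each adjacent (current, next) pair from the sequence.
        for current, nxt in zip(sequence, sequence[1:]):
            self.successors[current][nxt] += 1

    def predict(self, pattern):
        followers = self.successors.get(pattern)
        if not followers:
            return None
        return followers.most_common(1)[0][0]
```

Even this crude model exhibits the property the abstract emphasizes: prediction emerges from stored experience rather than from rules given in advance.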

  17. Forward residue harmonic balance for autonomous and non-autonomous systems with fractional derivative damping

    NASA Astrophysics Data System (ADS)

    Leung, A. Y. T.; Guo, Zhongjin

    2011-04-01

    Both the autonomous and non-autonomous systems with fractional derivative damping are investigated by the harmonic balance method in which the residue resulting from the truncated Fourier series is reduced iteratively. The first approximation using a few Fourier terms is obtained by solving a set of nonlinear algebraic equations. The unbalanced residues due to Fourier truncation are considered iteratively by solving linear algebraic equations to improve the accuracy and increase the number of Fourier terms of the solutions successively. Multiple solutions, representing the occurrences of jump phenomena, supercritical pitchfork bifurcation and symmetry breaking phenomena are predicted analytically. The interactions of the excitation frequency, the fractional order, amplitude, phase angle and the frequency amplitude response are examined. The forward residue harmonic balance method is presented to obtain the analytical approximations to the angular frequency and limit cycle for fractional order van der Pol oscillator. Numerical results reveal that the method is very effective for obtaining approximate solutions of nonlinear systems having fractional order derivatives.
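    The first harmonic-balance approximation the abstract builds on can be checked numerically for the classical (integer-order) van der Pol oscillator: under the ansatz x = A cos(t), projecting the residual of x'' - mu(1 - x^2)x' + x = 0 onto sin(t) and driving it to zero recovers the well-known limit-cycle amplitude A = 2. The fractional-derivative case treated in the paper is beyond this sketch.

```python
import math

def sin_residual(A, mu=1.0, n=2000):
    """Residual of the van der Pol equation under x = A*cos(t),
    projected onto sin(t) over one period (Riemann sum)."""
    total = 0.0
    dt = 2 * math.pi / n
    for i in range(n):
        t = i * dt
        x = A * math.cos(t)
        xd = -A * math.sin(t)
        xdd = -A * math.cos(t)
        r = xdd - mu * (1 - x * x) * xd + x
        total += r * math.sin(t) * dt
    return total

def limit_cycle_amplitude(lo=1.0, hi=3.0, tol=1e-8):
    """Bisect the projected residual to find the first harmonic-balance
    amplitude of the van der Pol limit cycle (analytically A = 2)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sin_residual(lo) * sin_residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Analytically the projection equals mu*A*pi*(1 - A*A/4), so the residual changes sign exactly at A = 2; the forward residue method in the paper then refines such a first approximation iteratively.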

  18. Large autonomous spacecraft electrical power system (LASEPS)

    NASA Technical Reports Server (NTRS)

    Dugal-Whitehead, Norma R.; Johnson, Yvette B.

    1992-01-01

    NASA - Marshall Space Flight Center is creating a large high-voltage electrical power system testbed called LASEPS. This testbed is being developed to simulate an end-to-end power system from power generation and source to loads. When the system is completed, it will have several power configurations, including several battery configurations: two 120 V batteries, one or two 150 V batteries, and one 250 to 270 V battery. This breadboard encompasses varying levels of autonomy, from remote power converters to conventional software control to expert system control of the power system elements. In this paper, the construction and provisions of this breadboard are discussed.

  19. Machine vision systems using machine learning for industrial product inspection

    NASA Astrophysics Data System (ADS)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages: Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspection products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products, Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
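    The two-stage LIF/OLI split can be sketched with the simplest possible instance: learn a golden template (pixel-wise mean of known-good images) off-line, then inspect on-line by counting pixels that deviate beyond a tolerance. The names and thresholds are illustrative assumptions, far simpler than the feature learning SMV actually performs.

```python
def learn_template(good_samples):
    """LIF stage (sketch): pixel-wise mean of known-good images,
    each a 2-D list of intensities of identical shape."""
    n = len(good_samples)
    rows, cols = len(good_samples[0]), len(good_samples[0][0])
    return [[sum(img[r][c] for img in good_samples) / n for c in range(cols)]
            for r in range(rows)]

def inspect(image, template, tolerance=20, max_defects=0):
    """OLI stage (sketch): pass the part only if no more than
    `max_defects` pixels deviate from the template by > `tolerance`."""
    defects = sum(1 for r, row in enumerate(image)
                  for c, p in enumerate(row)
                  if abs(p - template[r][c]) > tolerance)
    return defects <= max_defects
```

The interactive learning the paper mentions would correspond to letting an operator relabel borderline parts and feed them back into `learn_template`.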

  20. A Vision for Systems Engineering Applied to Wind Energy (Presentation)

    SciTech Connect

    Felker, F.; Dykes, K.

    2015-01-01

    This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.

  1. Neural associative memories for the integration of language, vision and action in an autonomous agent.

    PubMed

    Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G

    2009-03-01

    Language understanding is a long-standing problem in computer science. However, the human brain is capable of processing complex languages with seemingly no difficulties. This paper shows a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities on the single-word and grammatical level. The language system is embedded into a robot in order to demonstrate the correct semantic understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms. PMID:19203859
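    The associative memories such models are built from can be illustrated with a minimal Willshaw-style binary heteroassociative memory: storage ORs the outer product of each (cue, target) pair into a weight matrix, and recall thresholds the weighted sum by the number of active cue bits. This is a generic textbook construction, not the specific networks of the paper.

```python
class BinaryAssociativeMemory:
    """Willshaw-style heteroassociative memory over binary vectors."""
    def __init__(self, cue_size, target_size):
        self.w = [[0] * target_size for _ in range(cue_size)]

    def store(self, cue, target):
        # OR the outer product of the pair into the weight matrix.
        for i, ci in enumerate(cue):
            if ci:
                for j, tj in enumerate(target):
                    if tj:
                        self.w[i][j] = 1

    def recall(self, cue):
        # A target bit fires if every active cue bit supports it.
        active = sum(cue)
        sums = [sum(self.w[i][j] for i, ci in enumerate(cue) if ci)
                for j in range(len(self.w[0]))]
        return [1 if active > 0 and s >= active else 0 for s in sums]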

  2. Neural associative memories for the integration of language, vision and action in an autonomous agent.

    PubMed

    Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G

    2009-03-01

    Language understanding is a long-standing problem in computer science. However, the human brain is capable of processing complex languages with seemingly no difficulties. This paper shows a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities on the single-word and grammatical level. The language system is embedded into a robot in order to demonstrate the correct semantic understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.

  3. Challenges of Embedded Computer Vision in Automotive Safety Systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Dhua, Arnab S.; Kiselewich, Stephen J.; Bauson, William A.

    Vision-based automotive safety systems have received considerable attention over the past decade. Such systems have advantages compared to those based on other types of sensors such as radar, because of the availability of low-cost and high-resolution cameras and abundant information contained in video images. However, various technical challenges exist in such systems. One of the most prominent challenges lies in running sophisticated computer vision algorithms on low-cost embedded systems at frame rate. This chapter discusses these challenges through vehicle detection and classification in a collision warning system.

  4. Science requirements for PRoViScout, a robotics vision system for planetary exploration

    NASA Astrophysics Data System (ADS)

    Hauber, E.; Pullan, D.; Griffiths, A.; Paar, G.

    2011-10-01

    The robotic exploration of planetary surfaces, including missions of interest for geobiology (e.g., ExoMars), will be the precursor of human missions within the next few decades. Such exploration will require platforms which are much more self-reliant and capable of exploring long distances with limited ground support in order to advance planetary science objectives in a timely manner. The key to this objective is the development of planetary robotic onboard vision processing systems, which will enable the autonomous on-site selection of scientific and mission-strategic targets, and the access thereto. The EU-funded research project PRoViScout (Planetary Robotics Vision Scout) is designed to develop a unified and generic approach for robotic vision onboard processing, namely the combination of navigation and scientific target selection. Any such system needs to be "trained", i.e. it needs (a) scientific requirements which the system needs to address, and (b) a database of scientifically representative target scenarios which can be analysed. We present our preliminary list of science requirements, based on previous experience from landed Mars missions.

  5. Potential Autonomic Nervous System Effects of Statins in Heart Failure

    PubMed Central

    Horwich, Tamara; Middlekauff, Holly

    2008-01-01

    Synopsis Sympathetic nervous system activation in heart failure, as indexed by elevated norepinephrine levels, higher muscle sympathetic nerve activity and reduced heart rate variability, is associated with pathologic ventricular remodeling, increased arrhythmias, sudden death, and increased mortality. Recent evidence suggests that HMG-CoA reductase inhibitor (statin) therapy may provide survival benefit in heart failure of both ischemic and non-ischemic etiology, and one potential mechanism of benefit of statins in heart failure is modulation of the autonomic nervous system. Animal models of heart failure demonstrate reduced sympathetic activation and improved sympathovagal balance with statin therapy. Initial human studies have reported mixed results. Ongoing translational studies and outcomes trials will help delineate the potentially beneficial effects of statins on the autonomic nervous system in heart failure. PMID:18433696

  6. Multi-agent autonomous system and method

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor); Dohm, James (Inventor); Tarbell, Mark A. (Inventor)

    2010-01-01

    A method of controlling a plurality of crafts in an operational area includes providing a command system, a first craft in the operational area coupled to the command system, and a second craft in the operational area coupled to the command system. The method further includes determining a first desired destination and a first trajectory to the first desired destination, sending a first command from the command system to the first craft to move a first distance along the first trajectory, and moving the first craft according to the first command. A second desired destination and a second trajectory to the second desired destination are determined and a second command is sent from the command system to the second craft to move a second distance along the second trajectory.

  7. TVS: An Environment For Building Knowledge-Based Vision Systems

    NASA Astrophysics Data System (ADS)

    Weymouth, Terry E.; Amini, Amir A.; Tehrani, Saeid

    1989-03-01

    Advances in the field of knowledge-guided computer vision require the development of large scale projects and experimentation with them. One factor which impedes such development is the lack of software environments which combine standard image processing and graphics abilities with the ability to perform symbolic processing. In this paper, we describe a software environment that assists in the development of knowledge-based computer vision projects. We have built, upon Common LISP and C, a software development environment which combines standard image processing tools and a standard blackboard-based system, with the flexibility of the LISP programming environment. This environment has been used to develop research projects in knowledge-based computer vision and dynamic vision for robot navigation.

  8. A Test-Bed Configuration: Toward an Autonomous System

    NASA Astrophysics Data System (ADS)

    Ocaña, F.; Castillo, M.; Uranga, E.; Ponz, J. D.; TBT Consortium

    2015-09-01

    In the context of the Space Situational Awareness (SSA) program of ESA, it is foreseen to deploy several large robotic telescopes in remote locations to provide surveillance and tracking services for man-made as well as natural near-Earth objects (NEOs). The present project, termed Telescope Test Bed (TBT), is being developed under ESA's General Studies and Technology Programme, and shall implement a test-bed for the validation of an autonomous optical observing system in a realistic scenario, consisting of two telescopes located in Spain and Australia, to collect representative test data for precursor NEO services. In order to fulfill all the security requirements for the TBT project, the use of an autonomous emergency system (AES) is foreseen to monitor the control system. The AES will monitor remotely the health of the observing system and the internal and external environment. It will incorporate both autonomous and interactive actuators to force the protection of the system (e.g., an emergency dome close-out).
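    An emergency monitor of the kind described reduces, at its core, to a watchdog: if the control system stops sending heartbeats, or an environment flag goes bad, trigger the protective action. The sketch below is a generic illustration under assumed names and thresholds, not the TBT AES design; the callback stands in for a dome close-out command.

```python
import time

class EmergencyWatchdog:
    """Minimal autonomous emergency monitor (sketch): trigger a
    protective action when heartbeats stop or the environment is unsafe."""
    def __init__(self, timeout_s, close_dome):
        self.timeout_s = timeout_s
        self.close_dome = close_dome          # protective-action callback
        self.last_heartbeat = time.monotonic()
        self.triggered = False

    def heartbeat(self):
        """Called periodically by the healthy control system."""
        self.last_heartbeat = time.monotonic()

    def check(self, weather_ok=True, now=None):
        """Poll the watchdog; fires the callback at most once."""
        now = time.monotonic() if now is None else now
        if not self.triggered and (
                not weather_ok or now - self.last_heartbeat > self.timeout_s):
            self.triggered = True
            self.close_dome()
        return self.triggered
```

Latching the trigger (`self.triggered`) ensures the close-out command is issued exactly once, after which only an interactive operator reset would re-arm the system.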

  9. Multiple-channel Streaming Delivery for Omnidirectional Vision System

    NASA Astrophysics Data System (ADS)

    Iwai, Yoshio; Nagahara, Hajime; Yachida, Masahiko

    An omnidirectional vision system is an imaging system that can capture the surrounding scene in all directions by using a hyperbolic mirror and a conventional CCD camera. This paper proposes a streaming server that can efficiently transfer movies captured by an omnidirectional vision system through the Internet. The proposed system uses multiple channels to deliver multiple movies synchronously. Through this method, the system enables clients to view different directions of the omnidirectional movies and also supports changing the view area during playback. Our evaluation experiments show that our proposed streaming server can effectively deliver multiple movies via multiple channels.

  10. Mathematical biomarkers for the autonomic regulation of cardiovascular system.

    PubMed

    Campos, Luciana A; Pereira, Valter L; Muralikrishna, Amita; Albarwani, Sulayma; Brás, Susana; Gouveia, Sónia

    2013-10-07

    Heart rate and blood pressure are the most important vital signs in diagnosing disease. Both heart rate and blood pressure are characterized by a high degree of short term variability from moment to moment, medium term over the normal day and night as well as in the very long term over months to years. The study of new mathematical algorithms to evaluate the variability of these cardiovascular parameters has a high potential in the development of new methods for early detection of cardiovascular disease, to establish differential diagnosis with possible therapeutic consequences. The autonomic nervous system is a major player in the general adaptive reaction to stress and disease. The quantitative prediction of the autonomic interactions in multiple control loops pathways of cardiovascular system is directly applicable to clinical situations. Exploration of new multimodal analytical techniques for the variability of cardiovascular system may detect new approaches for deterministic parameter identification. A multimodal analysis of cardiovascular signals can be studied by evaluating their amplitudes, phases, time domain patterns, and sensitivity to imposed stimuli, i.e., drugs blocking the autonomic system. The causal effects, gains, and dynamic relationships may be studied through dynamical fuzzy logic models, such as the discrete-time model and discrete-event model. We expect an increase in accuracy of modeling and a better estimation of the heart rate and blood pressure time series, which could be of benefit for intelligent patient monitoring. We foresee that identifying quantitative mathematical biomarkers for autonomic nervous system will allow individual therapy adjustments to aim at the most favorable sympathetic-parasympathetic balance.
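    Two of the simplest quantitative markers of the heart-rate variability discussed above are the standard time-domain indices SDNN and RMSSD, computed directly from a series of RR intervals. The sketch below shows these standard definitions; it is not the multimodal or fuzzy-logic modeling the paper proposes.

```python
import math
import statistics

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV markers from RR intervals (ms):
    SDNN, the standard deviation of all intervals (overall variability),
    and RMSSD, the root mean square of successive differences
    (short-term, vagally mediated variability)."""
    sdnn = statistics.stdev(rr_ms)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"SDNN": sdnn, "RMSSD": rmssd}
```

Because RMSSD weights beat-to-beat changes, it tracks parasympathetic (vagal) activity more closely than SDNN, which is one reason the two indices are reported together.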

  11. Mathematical biomarkers for the autonomic regulation of cardiovascular system.

    PubMed

    Campos, Luciana A; Pereira, Valter L; Muralikrishna, Amita; Albarwani, Sulayma; Brás, Susana; Gouveia, Sónia

    2013-01-01

    Heart rate and blood pressure are the most important vital signs in diagnosing disease. Both heart rate and blood pressure are characterized by a high degree of short term variability from moment to moment, medium term over the normal day and night as well as in the very long term over months to years. The study of new mathematical algorithms to evaluate the variability of these cardiovascular parameters has a high potential in the development of new methods for early detection of cardiovascular disease, to establish differential diagnosis with possible therapeutic consequences. The autonomic nervous system is a major player in the general adaptive reaction to stress and disease. The quantitative prediction of the autonomic interactions in multiple control loops pathways of cardiovascular system is directly applicable to clinical situations. Exploration of new multimodal analytical techniques for the variability of cardiovascular system may detect new approaches for deterministic parameter identification. A multimodal analysis of cardiovascular signals can be studied by evaluating their amplitudes, phases, time domain patterns, and sensitivity to imposed stimuli, i.e., drugs blocking the autonomic system. The causal effects, gains, and dynamic relationships may be studied through dynamical fuzzy logic models, such as the discrete-time model and discrete-event model. We expect an increase in accuracy of modeling and a better estimation of the heart rate and blood pressure time series, which could be of benefit for intelligent patient monitoring. We foresee that identifying quantitative mathematical biomarkers for autonomic nervous system will allow individual therapy adjustments to aim at the most favorable sympathetic-parasympathetic balance. PMID:24109456

  12. Cloud Absorption Radiometer Autonomous Navigation System - CANS

    NASA Technical Reports Server (NTRS)

    Kahle, Duncan; Gatebe, Charles; McCune, Bill; Hellwig, Dustan

    2013-01-01

    CAR (cloud absorption radiometer) acquires spatial reference data from host aircraft navigation systems. This poses various problems during CAR data reduction, including navigation data format, accuracy of position data, accuracy of airframe inertial data, and navigation data rate. Incorporating its own navigation system, which included GPS (Global Positioning System), roll axis inertia and rates, and three-axis acceleration, CANS expedites data reduction and increases the accuracy of the CAR end data product. CANS provides a self-contained navigation system for the CAR, using inertial reference and GPS positional information. The intent of the software application was to correct the sensor with respect to aircraft roll in real time based upon inputs from a precision navigation sensor. In addition, the navigation information (including GPS position), attitude data, and sensor position details are all streamed to a remote system for recording and later analysis. CANS comprises a commercially available inertial navigation system with integral GPS capability (Attitude Heading Reference System AHRS) integrated into the CAR support structure and data system. The unit is attached to the bottom of the tripod support structure. The related GPS antenna is located on the P-3 radome immediately above the CAR. The AHRS unit provides a RS-232 data stream containing global position and inertial attitude and velocity data to the CAR, which is recorded concurrently with the CAR data. This independence from aircraft navigation input provides for position and inertial state data that accounts for very small changes in aircraft attitude and position, sensed at the CAR location as opposed to aircraft state sensors typically installed close to the aircraft center of gravity. More accurate positional data enables quicker CAR data reduction with better resolution. The CANS software operates in two modes: initialization/calibration and operational. In the initialization/calibration mode
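    The real-time roll correction described above amounts to rotating the sensor's body-frame pointing vector by the measured roll angle. The sketch below shows only the single-axis case under an assumed body frame (x forward, y right, z down); the real CANS correction would use the full 3-D attitude from the AHRS.

```python
import math

def roll_correct(vector, roll_deg):
    """Rotate a body-frame viewing vector about the roll (x) axis to
    undo the aircraft's instantaneous roll (single-axis sketch)."""
    r = math.radians(-roll_deg)  # apply the inverse of the roll
    x, y, z = vector
    return (x,
            y * math.cos(r) - z * math.sin(r),
            y * math.sin(r) + z * math.cos(r))
```

Applying this per-sample, with roll read from the AHRS stream, keeps each CAR measurement referenced to the earth frame rather than to the rolling airframe.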

  13. Autonomous rendezvous targeting techniques for national launch system application

    NASA Technical Reports Server (NTRS)

    Lomas, James J.; Deaton, A. Wayne

    1991-01-01

    The rendezvous targeting techniques that can be utilized to achieve autonomous guidance for delivering a cargo to Space Station Freedom (SSF) using the National Launch System's (NLS) Heavy Lift Launch Vehicle (HLLV) and the on-orbit Cargo Transfer Vehicle (CTV) are described. This capability is made possible by advancements in autonomous navigation (Global Positioning System - GPS) on-board the CTV and SSF as well as the new generation flight computers. How the HLLV launch window can be decoupled from the CTV phasing window is described. The performance trades that have to be made to determine the length of the launch window and the phasing window between the CTV and SSF are identified and recommendations made that affect mission timelines.
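    The phasing-window calculation the abstract alludes to rests on a simple relation: a chaser in a lower, faster circular orbit closes a phase angle on its target at the difference of the two mean motions. The sketch below shows only this two-circular-orbit approximation, with assumed parameter names; it is not the NLS/CTV targeting law.

```python
def phasing_time(phase_deg, chaser_period_s, target_period_s):
    """Coast time for a chaser in the faster (shorter-period) circular
    orbit to close `phase_deg` of phase angle on the target."""
    n_chaser = 360.0 / chaser_period_s   # mean motion, deg/s
    n_target = 360.0 / target_period_s
    closing_rate = n_chaser - n_target
    if closing_rate <= 0:
        raise ValueError("chaser must be in the faster (shorter) orbit")
    return phase_deg / closing_rate
```

Trading the altitude difference (and hence the closing rate) against allowable coast time is exactly the kind of performance trade between the launch window and the phasing window that the paper describes.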

  14. A modular real-time vision system for humanoid robots

    NASA Astrophysics Data System (ADS)

    Trifan, Alina L.; Neves, António J. R.; Lau, Nuno; Cunha, Bernardo

    2012-01-01

    Robotic vision is nowadays one of the most challenging branches of robotics. In the case of a humanoid robot, a robust vision system has to provide an accurate representation of the surrounding world and to cope with all the constraints imposed by the hardware architecture and the locomotion of the robot. Usually humanoid robots have low computational capabilities that limit the complexity of the developed algorithms. Moreover, their vision system should perform in real time, therefore a compromise between complexity and processing times has to be found. This paper presents a reliable implementation of a modular vision system for a humanoid robot to be used in color-coded environments. From image acquisition, to camera calibration and object detection, the system that we propose integrates all the functionalities needed for a humanoid robot to accurately perform given tasks in color-coded environments. The main contributions of this paper are the implementation details that allow the use of the vision system in real-time, even with low processing capabilities, the innovative self-calibration algorithm for the most important parameters of the camera and its modularity that allows its use with different robotic platforms. Experimental results have been obtained with a NAO robot produced by Aldebaran, which is currently the robotic platform used in the RoboCup Standard Platform League, as well as with a humanoid build using the Bioloid Expert Kit from Robotis. As practical examples, our vision system can be efficiently used in real time for the detection of the objects of interest for a soccer playing robot (ball, field lines and goals) as well as for navigating through a maze with the help of color-coded clues. In the worst case scenario, all the objects of interest in a soccer game, using a NAO robot, with a single core 500Mhz processor, are detected in less than 30ms. 
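As an illustration of the color-coded detection pipeline described above, the following sketch (hypothetical function names and threshold values, not the authors' implementation) thresholds an HSV image and localizes an object by the centroid of the resulting binary mask:

```python
import numpy as np

def detect_colored_object(hsv_image, h_range, s_min=80, v_min=60):
    """Return the centroid (row, col) of pixels inside an HSV color range,
    or None if no pixels match."""
    h, s, v = hsv_image[..., 0], hsv_image[..., 1], hsv_image[..., 2]
    mask = (h >= h_range[0]) & (h <= h_range[1]) & (s >= s_min) & (v >= v_min)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (float(ys.mean()), float(xs.mean()))

# Synthetic 64x64 HSV frame with an "orange ball" patch (hue ~15).
frame = np.zeros((64, 64, 3), dtype=np.uint8)
frame[20:30, 40:50] = (15, 200, 200)
print(detect_colored_object(frame, h_range=(10, 20)))  # -> (24.5, 44.5)
```

On a real robot the HSV ranges would come from a color-calibration step, and connected-component filtering would reject small noise blobs before taking the centroid.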

  15. Mobile Autonomous Humanoid Assistant

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.

    2004-01-01

A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway™ Robotic Mobility Platform, yielding a dexterous, maneuverable humanoid well suited for aiding human co-workers in a range of environments. This system uses stereo vision to locate human teammates and tools, and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.

  16. Database integrity monitoring for synthetic vision systems using machine vision and SHADE

    NASA Astrophysics Data System (ADS)

    Cooper, Eric G.; Young, Steven D.

    2005-05-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.
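To give a flavor of the shadow-domain comparison, the sketch below predicts radar shadow along a single range line from elevation data and scores agreement against a measured mask. It is a simplified flat-geometry illustration (all names and numbers are hypothetical), not the SHADE algorithm itself:

```python
import numpy as np

def shadow_mask_from_elevation(elevation, sensor_height):
    """Predict radar shadow along one range line: a cell is shadowed if the
    line of sight from the sensor (at range cell 0) is blocked by nearer
    terrain. Simplified flat-earth geometry."""
    n = len(elevation)
    shadowed = np.zeros(n, dtype=bool)
    max_angle = -np.inf  # steepest line-of-sight slope seen so far
    for r in range(1, n):
        angle = (elevation[r] - sensor_height) / r
        if angle <= max_angle:
            shadowed[r] = True
        else:
            max_angle = angle
    return shadowed

def agreement(predicted, measured):
    """Fraction of range cells where predicted and measured shadow agree."""
    return float(np.mean(predicted == measured))

terrain = np.array([0, 0, 5, 0, 0, 0, 2, 0, 0, 0], dtype=float)
pred = shadow_mask_from_elevation(terrain, sensor_height=10)
print([int(i) for i in np.nonzero(pred)[0]])  # -> [3, 4, 7]: cells behind the two ridges
print(agreement(pred, pred))                  # -> 1.0 for a perfect match
```

In practice the comparison runs over full two-dimensional masks, one derived from the onboard terrain database and one from the radar returns, and sustained disagreement flags a potential database integrity fault.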

  17. Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Young, Steven D.

    2005-01-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.

  18. Robotic reactions: delay-induced patterns in autonomous vehicle systems.

    PubMed

    Orosz, Gábor; Moehlis, Jeff; Bullo, Francesco

    2010-02-01

    Fundamental design principles are presented for vehicle systems governed by autonomous cruise control devices. By analyzing the corresponding delay differential equations, it is shown that for any car-following model short-wavelength oscillations can appear due to robotic reaction times, and that there are tradeoffs between the time delay and the control gains. The analytical findings are demonstrated on an optimal velocity model using numerical continuation and numerical simulation.
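The class of models analyzed can be sketched as follows: an optimal velocity car-following model on a ring road, where each vehicle's acceleration responds to headway and speed delayed by the reaction time tau. The range policy and all parameter values below are illustrative, not taken from the paper:

```python
import math

def optimal_velocity(h, v_max=30.0, h_stop=5.0, h_go=35.0):
    """Range policy: desired speed as a smooth function of headway h."""
    if h <= h_stop:
        return 0.0
    if h >= h_go:
        return v_max
    return v_max * 0.5 * (1 - math.cos(math.pi * (h - h_stop) / (h_go - h_stop)))

def simulate_ring(n_cars=4, ring_length=160.0, alpha=1.0, tau=0.5,
                  dt=0.01, t_end=30.0):
    """Euler integration of the delayed optimal velocity model on a ring:
    a_i(t) = alpha * (V(h_i(t - tau)) - v_i(t - tau))."""
    steps = int(t_end / dt)
    delay = int(tau / dt)
    x = [[i * ring_length / n_cars for i in range(n_cars)]]
    v = [[optimal_velocity(ring_length / n_cars)] * n_cars]
    for k in range(steps):
        xk, vk = x[-1], v[-1]
        xd, vd = x[max(0, k - delay)], v[max(0, k - delay)]  # delayed state
        xn, vn = [], []
        for i in range(n_cars):
            h = (xd[(i + 1) % n_cars] - xd[i]) % ring_length
            a = alpha * (optimal_velocity(h) - vd[i])
            xn.append(xk[i] + vk[i] * dt)
            vn.append(vk[i] + a * dt)
        x.append(xn); v.append(vn)
    return x, v

x, v = simulate_ring()
print(round(v[-1][0], 2))  # -> 30.0 (uniform flow is an equilibrium)
```

Starting from the uniform-flow equilibrium the speeds stay constant; perturbing one vehicle's initial speed and raising tau (or the gain alpha) past the stability boundary lets short-wavelength oscillations grow, which is the delay-induced pattern the paper analyzes.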

  19. Robotic reactions: Delay-induced patterns in autonomous vehicle systems

    NASA Astrophysics Data System (ADS)

    Orosz, Gábor; Moehlis, Jeff; Bullo, Francesco

    2010-02-01

    Fundamental design principles are presented for vehicle systems governed by autonomous cruise control devices. By analyzing the corresponding delay differential equations, it is shown that for any car-following model short-wavelength oscillations can appear due to robotic reaction times, and that there are tradeoffs between the time delay and the control gains. The analytical findings are demonstrated on an optimal velocity model using numerical continuation and numerical simulation.

  20. Robotic reactions: delay-induced patterns in autonomous vehicle systems.

    PubMed

    Orosz, Gábor; Moehlis, Jeff; Bullo, Francesco

    2010-02-01

    Fundamental design principles are presented for vehicle systems governed by autonomous cruise control devices. By analyzing the corresponding delay differential equations, it is shown that for any car-following model short-wavelength oscillations can appear due to robotic reaction times, and that there are tradeoffs between the time delay and the control gains. The analytical findings are demonstrated on an optimal velocity model using numerical continuation and numerical simulation. PMID:20365620

  1. Autonomous Systems and Robotics: 2000-2004

    NASA Technical Reports Server (NTRS)

    2004-01-01

This custom bibliography from the NASA Scientific and Technical Information Program lists a sampling of records found in the NASA Aeronautics and Space Database. The scope of this topic includes technologies to monitor, maintain, and where possible, repair complex space systems. This area of focus is one of the enabling technologies as defined by NASA's Report of the President's Commission on Implementation of United States Space Exploration Policy, published in June 2004.

  2. Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems

    NASA Technical Reports Server (NTRS)

    Bujorianu, Marius C.; Bujorianu, Manuela L.

    2009-01-01

In this paper, we sketch a framework for interdisciplinary modeling of space systems, by proposing a holistic view. We consider different system dimensions and their interaction. Specifically, we study the interactions between computation, physics, communication, uncertainty and autonomy. The most comprehensive computational paradigm that supports a holistic perspective on autonomous space systems is given by cyber-physical systems. For these, the state of the art consists of collaborating multi-engineering efforts that call for an adequate formal foundation. To achieve this, we propose leveraging the traditional content of formal modeling through a co-engineering process.

  3. A system architecture for a highly autonomous Mars rover

    NASA Astrophysics Data System (ADS)

    Rosenthal, Donald A.; Johnston, Mark D.

    1992-08-01

    A system to enable highly autonomous robotic exploration has been designed and prototyped. The system uses the concept of a centralized executive to control and coordinate the activities of the various component subsystems. The plan mediation, or coordination, is enabled by a highly efficient constraint-based scheduling system. This scheduler, which is the main component of the centralized executive, generates timelines that accommodate as many of the highest priority goals as possible. As plans are executed or tasks fail, and as new goals are received by the system, the executive continually reworks the schedule to reflect the current complement of plans and resource availability and utilization.
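The executive's priority-driven timeline generation can be caricatured by a greedy sketch (hypothetical structure, far simpler than a real constraint-based scheduler): goals are placed in priority order into the earliest conflict-free slot within their time windows, and the timeline can simply be rebuilt whenever new goals arrive or tasks fail:

```python
def build_timeline(goals, horizon):
    """Greedy priority scheduler sketch: highest-priority goals are placed
    first, each in the earliest conflict-free slot inside its time window.
    goals: list of (name, priority, duration, earliest_start, latest_end)."""
    timeline = []  # (start, end, name), kept sorted by start time
    for name, priority, duration, earliest, latest_end in sorted(
            goals, key=lambda g: -g[1]):
        t = earliest
        while t + duration <= min(latest_end, horizon):
            conflicts = [e for s, e, _ in timeline
                         if not (t + duration <= s or t >= e)]
            if not conflicts:
                timeline.append((t, t + duration, name))
                timeline.sort()
                break
            t = min(conflicts)  # jump past the earliest conflicting activity
        # goals that do not fit are dropped; a real executive would retry
        # them as the schedule is continually reworked
    return timeline

goals = [("image_rock", 5, 3, 0, 10),
         ("drive",      9, 4, 0, 10),
         ("comm_pass",  7, 2, 2, 8)]
print(build_timeline(goals, horizon=12))
# -> [(0, 4, 'drive'), (4, 6, 'comm_pass'), (6, 9, 'image_rock')]
```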

  4. On non-autonomous dynamical systems

    SciTech Connect

    Anzaldo-Meneses, A.

    2015-04-15

In usual realistic classical dynamical systems, the Hamiltonian depends explicitly on time. In this work, a class of classical systems with time-dependent nonlinear Hamiltonians is analyzed. This type of problem allows one to find invariants by a family of Veronese maps. The motivation to develop this method results from the observation that the Poisson-Lie algebra of monomials in the coordinates and momenta is clearly defined in terms of its brackets and leads naturally to an infinite linear set of differential equations, under certain circumstances. To perform explicit analytic and numerical calculations, two examples are presented to estimate the trajectories, the first given by a nonlinear problem and the second by a quadratic Hamiltonian with three time-dependent parameters. In the nonlinear problem, the Veronese approach using jets is shown to be equivalent to a direct procedure using elliptic function identities, and linear invariants are constructed. For the second example, linear and quadratic invariants as well as stability conditions are given. Explicit solutions are also obtained for stepwise constant forces. For the quadratic Hamiltonian, an appropriate set of coordinates relates the geometric setting to that of the three-dimensional manifold of central conic sections. It is shown further that the quantum mechanical problem of scattering in a superlattice leads to mathematically equivalent equations for the wave function, if the classical time is replaced by the space coordinate along the superlattice. The mathematical method used to compute the trajectories for stepwise constant parameters can be applied to both problems. It is the standard method in quantum scattering calculations, as known for locally periodic systems including a space-dependent effective mass.
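For orientation, a textbook example of the kind of quadratic invariant such time-dependent quadratic Hamiltonians admit is the Lewis invariant for the time-dependent oscillator (a standard result, not a formula quoted from this paper):

```latex
% Time-dependent oscillator: H(t) = \tfrac12 p^2 + \tfrac12 \omega^2(t)\, q^2
% Lewis invariant, with \rho(t) any solution of the Ermakov equation:
I = \frac{1}{2}\left[\left(\frac{q}{\rho}\right)^{2}
      + \bigl(\rho\,\dot{q} - \dot{\rho}\,q\bigr)^{2}\right],
\qquad
\ddot{\rho} + \omega^{2}(t)\,\rho = \rho^{-3},
\qquad
\frac{dI}{dt} = 0 .
```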

  5. On non-autonomous dynamical systems

    NASA Astrophysics Data System (ADS)

    Anzaldo-Meneses, A.

    2015-04-01

In usual realistic classical dynamical systems, the Hamiltonian depends explicitly on time. In this work, a class of classical systems with time-dependent nonlinear Hamiltonians is analyzed. This type of problem allows one to find invariants by a family of Veronese maps. The motivation to develop this method results from the observation that the Poisson-Lie algebra of monomials in the coordinates and momenta is clearly defined in terms of its brackets and leads naturally to an infinite linear set of differential equations, under certain circumstances. To perform explicit analytic and numerical calculations, two examples are presented to estimate the trajectories, the first given by a nonlinear problem and the second by a quadratic Hamiltonian with three time-dependent parameters. In the nonlinear problem, the Veronese approach using jets is shown to be equivalent to a direct procedure using elliptic function identities, and linear invariants are constructed. For the second example, linear and quadratic invariants as well as stability conditions are given. Explicit solutions are also obtained for stepwise constant forces. For the quadratic Hamiltonian, an appropriate set of coordinates relates the geometric setting to that of the three-dimensional manifold of central conic sections. It is shown further that the quantum mechanical problem of scattering in a superlattice leads to mathematically equivalent equations for the wave function, if the classical time is replaced by the space coordinate along the superlattice. The mathematical method used to compute the trajectories for stepwise constant parameters can be applied to both problems. It is the standard method in quantum scattering calculations, as known for locally periodic systems including a space-dependent effective mass.

  6. Disorders of the Autonomic Nervous System after Hemispheric Cerebrovascular Disorders: An Update

    PubMed Central

    Al-Qudah, Zaid A.; Yacoub, Hussam A.; Souayah, Nizar

    2015-01-01

Autonomic and cardiac dysfunction may occur after vascular brain injury without any evidence of primary heart disease. During acute stroke, autonomic dysfunction, for example, elevated arterial blood pressure, arrhythmia, and ischemic cardiac damage, has been reported, which may worsen the prognosis. Autonomic dysfunction after a stroke may involve the cardiovascular, respiratory, sudomotor, and sexual systems, but the exact mechanism is not fully understood. In this review paper, we will discuss the anatomy and physiology of the autonomic nervous system and discuss the mechanism(s) suggested to cause autonomic dysfunction after stroke. We will further elaborate on the different cerebral regions involved in autonomic dysfunction complications of stroke. Autonomic nervous system modulation is emerging as a new therapeutic target for stroke management. Understanding the pathogenesis and molecular mechanism(s) of parasympathetic and sympathetic dysfunction after stroke will facilitate the implementation of preventive and therapeutic strategies to antagonize the clinical manifestation of autonomic dysfunction and improve the outcome of stroke. PMID:26576215

  7. The autonomic nervous system at high altitude

    PubMed Central

    Drinkhill, Mark J.; Rivera-Chira, Maria

    2007-01-01

The effects of hypobaric hypoxia in visitors depend not only on the actual elevation but also on the rate of ascent. Sympathetic activity increases and there are increases in blood pressure and heart rate. Pulmonary vasoconstriction leads to pulmonary hypertension, particularly during exercise. The sympathetic excitation results from hypoxia, partly through chemoreceptor reflexes and partly through altered baroreceptor function. High pulmonary arterial pressures may also cause reflex systemic vasoconstriction. Most permanent high altitude dwellers show excellent adaptation, although there are differences between populations in the extent of the ventilatory drive and the erythropoiesis. Some altitude dwellers, particularly Andeans, may develop chronic mountain sickness, the most prominent characteristic of which is excessive polycythaemia. Excessive hypoxia due to peripheral chemoreceptor dysfunction has been suggested as a cause. The hyperviscous blood leads to pulmonary hypertension, symptoms of cerebral hypoperfusion, and eventually right heart failure and death. PMID:17264976

  8. Navigation system for autonomous mapper robots

    NASA Astrophysics Data System (ADS)

    Halbach, Marc; Baudoin, Yvan

    1993-05-01

This paper describes the conception and realization of a fast, robust, and general navigation system for a mobile (wheeled or legged) robot. A database, representing a high-level map of the environment, is generated and continuously updated. The first part describes the legged target vehicle and the hexapod robot being developed. The second section deals with spatial and temporal sensor fusion for dynamic environment modeling within an obstacle/free-space probabilistic classification grid. Ultrasonic sensors are used, others are expected to be integrated, and a priori knowledge is also exploited. The ultrasonic sensors are controlled by the path-planning module. The third part concerns path planning, and a simulation of a wheeled robot is also presented.

  9. Autonomous system for pathogen detection and identification

    SciTech Connect

    Belgrader, P.; Benett, W.; Bergman, W.; Langlois, R.; Mariella, R.; Milanovich, F.; Miles, R.; Venkateswaran, K.; Long, G.; Nelson, W.

    1998-09-24

The purpose of this project is to build a prototype instrument that will, running unattended, detect, identify, and quantify BW agents. To accomplish this, we have chosen to start with the world's leading, proven assays for pathogens: surface-molecular recognition assays, such as antibody-based assays, implemented on a high-performance, identification (ID)-capable flow cytometer, and the polymerase chain reaction (PCR) for nucleic-acid based assays. With these assays, we must integrate the capability to: collect samples from aerosols, water, or surfaces; perform sample preparation prior to the assays; incubate the prepared samples, if necessary, for a period of time; transport the prepared, incubated samples to the assays; perform the assays; and interpret and report the results of the assays. Issues such as reliability, sensitivity and accuracy, quantity of consumables, maintenance schedule, etc. must be addressed satisfactorily for the end user. The highest possible sensitivity and specificity of the assay must be combined with no false alarms. Today, we have assays that can, in under 30 minutes, detect and identify simulants for BW agents at concentrations of a few hundred colony-forming units per ml of solution. If the bio-aerosol sampler of this system collects 1,000 l/min and concentrates the respirable particles into 1 ml of solution with 70% processing efficiency over a period of 5 minutes, then this translates to a detection/ID capability of under 0.1 agent-containing particle per liter of air.
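The sensitivity figure quoted above can be checked with a quick back-of-the-envelope calculation (an assay limit of 300 CFU/ml is an assumed value standing in for "a few hundred"; the other numbers are from the abstract):

```python
# Back-of-the-envelope check of the detection/ID figure quoted above.
assay_limit_per_ml = 300.0   # assumed detectable concentration in solution
flow_l_per_min = 1000.0      # sampler air flow
minutes = 5.0                # collection period
efficiency = 0.70            # processing efficiency

air_sampled_l = flow_l_per_min * minutes        # 5,000 liters of air
effective_air_l = air_sampled_l * efficiency    # 3,500 liters effectively captured
min_detectable_per_l = assay_limit_per_ml / effective_air_l
print(round(min_detectable_per_l, 3))  # -> 0.086 particles per liter, under 0.1
```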

  10. Autonomous Control Capabilities for Space Reactor Power Systems

    NASA Astrophysics Data System (ADS)

    Wood, Richard T.; Neal, John S.; Brittain, C. Ray; Mullens, James A.

    2004-02-01

    The National Aeronautics and Space Administration's (NASA's) Project Prometheus, the Nuclear Systems Program, is investigating a possible Jupiter Icy Moons Orbiter (JIMO) mission, which would conduct in-depth studies of three of the moons of Jupiter by using a space reactor power system (SRPS) to provide energy for propulsion and spacecraft power for more than a decade. Terrestrial nuclear power plants rely upon varying degrees of direct human control and interaction for operations and maintenance over a forty to sixty year lifetime. In contrast, an SRPS is intended to provide continuous, remote, unattended operation for up to fifteen years with no maintenance. Uncertainties, rare events, degradation, and communications delays with Earth are challenges that SRPS control must accommodate. Autonomous control is needed to address these challenges and optimize the reactor control design. In this paper, we describe an autonomous control concept for generic SRPS designs. The formulation of an autonomous control concept, which includes identification of high-level functional requirements and generation of a research and development plan for enabling technologies, is among the technical activities that are being conducted under the U.S. Department of Energy's Space Reactor Technology Program in support of the NASA's Project Prometheus. The findings from this program are intended to contribute to the successful realization of the JIMO mission.

  11. Autonomous Control Capabilities for Space Reactor Power Systems

    SciTech Connect

    Wood, Richard T.; Neal, John S.; Brittain, C. Ray; Mullens, James A.

    2004-02-04

    The National Aeronautics and Space Administration's (NASA's) Project Prometheus, the Nuclear Systems Program, is investigating a possible Jupiter Icy Moons Orbiter (JIMO) mission, which would conduct in-depth studies of three of the moons of Jupiter by using a space reactor power system (SRPS) to provide energy for propulsion and spacecraft power for more than a decade. Terrestrial nuclear power plants rely upon varying degrees of direct human control and interaction for operations and maintenance over a forty to sixty year lifetime. In contrast, an SRPS is intended to provide continuous, remote, unattended operation for up to fifteen years with no maintenance. Uncertainties, rare events, degradation, and communications delays with Earth are challenges that SRPS control must accommodate. Autonomous control is needed to address these challenges and optimize the reactor control design. In this paper, we describe an autonomous control concept for generic SRPS designs. The formulation of an autonomous control concept, which includes identification of high-level functional requirements and generation of a research and development plan for enabling technologies, is among the technical activities that are being conducted under the U.S. Department of Energy's Space Reactor Technology Program in support of the NASA's Project Prometheus. The findings from this program are intended to contribute to the successful realization of the JIMO mission.

  12. Enabling autonomous control for space reactor power systems

    SciTech Connect

    Wood, R. T.

    2006-07-01

The application of nuclear reactors for space power and/or propulsion presents some unique challenges regarding the operations and control of the power system. Terrestrial nuclear reactors employ varying degrees of human control and decision-making for operations and benefit from periodic human interaction for maintenance. In contrast, the control system of a space reactor power system (SRPS) employed for deep space missions must be able to accommodate unattended operations due to communications delays and periods of planetary occlusion while adapting to evolving or degraded conditions with no opportunity for repair or refurbishment. Thus, an SRPS control system must provide for operational autonomy. Oak Ridge National Laboratory (ORNL) has conducted an investigation of the state of the technology for autonomous control to determine the experience base in the nuclear power application domain, both for space and terrestrial use. It was found that control systems with varying levels of autonomy have been employed in robotic, transportation, spacecraft, and manufacturing applications. However, autonomous control has not been implemented for an operating terrestrial nuclear power plant, nor has there been any experience beyond automating simple control loops for space reactors. Current automated control technologies for nuclear power plants are reasonably mature, and basic control for an SRPS is clearly feasible under optimum circumstances. However, autonomous control is primarily intended to account for the non-optimum circumstances when degradation, failure, and other off-normal events challenge the performance of the reactor and near-term human intervention is not possible. Thus, the development and demonstration of autonomous control capabilities for the specific domain of space nuclear power operations is needed. This paper will discuss the findings of the ORNL study and provide a description of the concept of autonomy, its key characteristics, and a prospective

  13. Human vision simulation for evaluation of enhanced and synthetic vision systems

    NASA Astrophysics Data System (ADS)

    Doll, Theodore J.; Home, Richard; Cooke, Kevin J.; Wasilewski, Anthony A.; Sheerin, David T.; Hetzler, Morris C.

    2003-09-01

    One of the key problems in developing Enhanced and Synthetic Vision Systems is evaluating their effectiveness in enhancing human visual performance. A validated simulation of human vision would provide a means of avoiding costly and time-consuming testing of human observers. We describe an image-based simulation of human visual search, detection, and identification, and efforts to further validate and refine this simulation. One of the advantages of an image-based simulation is that it can predict performance for exactly the same visual stimuli seen by human operators. This makes it possible to assess aspects of the imagery, such as particular types and amounts of background clutter and sensor distortions, that are not usually considered in non-image based models. We present two validation studies - one showing that the simulation accurately predicts human color discrimination, and a second showing that it produces probabilities of detection (Pd's) that closely match Blackwell-type human threshold data.

  14. System safety analysis of an autonomous mobile robot

    SciTech Connect

    Bartos, R.J.

    1994-08-01

Analysis of the safety of operating and maintaining the Stored Waste Autonomous Mobile Inspector (SWAMI) II in a hazardous environment at the Fernald Environmental Management Project (FEMP) was completed. The SWAMI II is a version of a commercial robot, the HelpMate™ robot produced by the Transitions Research Corporation, which is being updated to incorporate the systems required for inspecting mixed toxic chemical and radioactive waste drums at the FEMP. It also has modified obstacle detection and collision avoidance subsystems. The robot will autonomously travel down the aisles in storage warehouses to record images of containers and collect other data which are transmitted to an inspector at a remote computer terminal. A previous study showed the SWAMI II has economic feasibility. The SWAMI II will more accurately locate radioactive contamination than human inspectors. This thesis includes a System Safety Hazard Analysis and a quantitative Fault Tree Analysis (FTA). The objectives of the analyses are to prevent potentially serious events and to derive a comprehensive set of safety requirements from which the safety of the SWAMI II and other autonomous mobile robots can be evaluated. The Computer-Aided Fault Tree Analysis (CAFTA©) software is utilized for the FTA. The FTA shows that more than 99% of the safety risk occurs during maintenance, and that when the derived safety requirements are implemented the rate of serious events is reduced to below one event per million operating hours. Training and procedures in SWAMI II operation and maintenance provide an added safety margin. This study will promote the safe use of the SWAMI II and other autonomous mobile robots in the emerging technology of mobile robotic inspection.
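The quantitative side of a fault tree analysis reduces to combining basic-event probabilities through AND/OR gates. The sketch below uses hypothetical event names and rates purely for illustration; it is not drawn from the SWAMI II tree:

```python
def and_gate(probs):
    """Probability that all independent basic events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """Probability that at least one independent basic event occurs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical top event: a person is in the aisle AND either the
# obstacle-detection or the collision-avoidance subsystem fails.
p_sensor_fail = 1e-3     # illustrative per-hour failure probabilities
p_software_fail = 1e-3
p_person_present = 0.05
p_top = and_gate([p_person_present, or_gate([p_sensor_fail, p_software_fail])])
print(p_top)  # about 1e-4 with these illustrative numbers
```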

  15. Enhanced/Synthetic Vision Systems for Advanced Flight Decks

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Jenkins, James; Statler, Irving C. (Technical Monitor)

    1994-01-01

    One of the most challenging arenas for enhanced and synthetic vision systems is the flight deck. Here, pilots must perform active and supervisory control behaviors based on imagery generated in real time or transduced from imaging sensors. Although enhanced and synthetic vision technologies have been used in military vehicles for more than two decades, they have only recently been considered for civilian transport aircraft. In this paper we discuss the human performance issues still to be resolved for these systems, and consider the special constraints that must be considered for their use in the transport domain.

  16. A Laser-Based Vision System for Weld Quality Inspection

    PubMed Central

    Huang, Wei; Kovacevic, Radovan

    2011-01-01

Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems are studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through the visual analysis of the acquired 3D profiles of the weld, the presence, as well as the positions and sizes, of weld defects can be accurately identified, and therefore non-destructive weld quality inspection can be achieved. PMID:22344308
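The triangulation principle can be sketched as follows (a simplified configuration with the laser parallel to the camera's optical axis; all values are hypothetical, not the paper's sensor geometry): a spot at range z images at offset x = f*b/z, so range follows from the measured offset, and scanning the laser line across the weld yields a 3D profile:

```python
def range_from_spot_offset(offset_px, focal_px, baseline_mm):
    """Active triangulation with the laser beam parallel to the optical
    axis at baseline b: a spot at range z images at x = f*b/z, so
    z = f*b/x. offset_px is the laser line's image offset in pixels."""
    if offset_px <= 0:
        raise ValueError("spot must appear offset from the optical axis")
    return focal_px * baseline_mm / offset_px

# Scan across a weld bead: per image column, the laser line's offset gives
# range; a dip in range reveals bead height, a spike reveals a pit.
offsets = [40, 40, 44, 46, 44, 40, 40]  # hypothetical line positions (px)
profile = [range_from_spot_offset(o, focal_px=800, baseline_mm=50)
           for o in offsets]
print([round(z) for z in profile])  # -> [1000, 1000, 909, 870, 909, 1000, 1000]
```

Defect detection then amounts to comparing the extracted profile against the expected bead geometry and flagging deviations beyond tolerance.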

  17. Autonomous and Autonomic Swarms

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Truszkowski, Walter F.; Rouff, Christopher A.; Sterritt, Roy

    2005-01-01

A watershed in systems engineering is represented by the advent of swarm-based systems that accomplish missions through cooperative action by a (large) group of autonomous individuals each having simple capabilities and no global knowledge of the group's objective. Such systems, with individuals capable of surviving in hostile environments, pose unprecedented challenges to system developers. Design, testing, and verification at much higher levels will be required, together with the corresponding tools, to bring such systems to fruition. Concepts for possible future NASA space exploration missions include autonomous, autonomic swarms. Engineering swarm-based missions begins with understanding autonomy and autonomicity and how to design, test, and verify systems that have those properties and, simultaneously, the capability to accomplish prescribed mission goals. Formal methods-based technologies, both projected and in development, are described in terms of their potential utility to swarm-based system developers.

  18. Research on vision control system for inverted pendulum

    NASA Astrophysics Data System (ADS)

    Jin, Xiaolin; Bian, Yongming; Jiang, Jia; Li, Anhu; Jiang, Xuchun; Zhao, Fangwei

    2010-10-01

This paper focuses on the study and experiment of a vision control system for an inverted pendulum. To solve some key technical problems, the hardware platform and the software flow of the control system have been designed. The whole control system is composed of a vision module and a motion control module. The vision module is based on a CCD camera, the motion control module is based on a motion control card, servo driver and servo motor, and the software is based on LabView. The main research contents and contributions of this paper are summarized as follows: (1) Analyze the functional requirements of the vision control system for the inverted pendulum, developing the hardware platform and planning the overall arrangement of the system; (2) Design the image processing flow and the recognition and tracking process for the moving objects. The accurate position of the pendulum can be obtained from the image through this flow, which includes image pretreatment, image segmentation and image post-processing; (3) Design the software structure of the control system and write the program code. It is convenient to update and maintain the control software due to the modularity of the system. Some key technical problems in the software have been solved, so the flexibility and reliability of the control system are improved; (4) Build the experimental platform and set the key parameters of the vision control system through experiments. It is proved that the chosen scheme of this paper is feasible. The experiment provides the basis for the development and application of the whole control system.
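The position extracted by the image-processing stage feeds the motion control loop. A minimal sketch (hypothetical image coordinates and gains; the paper does not publish its controller) converts two tracked image points into a pendulum angle and a PD command for the cart:

```python
import math

def pendulum_angle(pivot_px, tip_px):
    """Angle of the pendulum from vertical (radians), from the image
    coordinates (col, row) of the pivot and the rod tip; image rows
    grow downward, so the tip above the pivot has a smaller row."""
    dx = tip_px[0] - pivot_px[0]
    dy = pivot_px[1] - tip_px[1]  # positive when the tip is above the pivot
    return math.atan2(dx, dy)

def pd_command(angle, angle_rate, kp=40.0, kd=8.0):
    """PD control on the angle error (upright = 0): push the cart toward
    the side the pendulum is falling. Gains are illustrative."""
    return kp * angle + kd * angle_rate

print(round(pendulum_angle((320, 400), (320, 200)), 3))  # upright -> 0.0
print(round(pendulum_angle((320, 400), (340, 202)), 3))  # leaning right -> 0.101
```

In the actual system the angle estimate would be produced per frame by the segmentation stage, differenced (or filtered) to get the angle rate, and sent to the motion control card as a velocity or torque command.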

  19. Multiple-Agent Air/Ground Autonomous Exploration Systems

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Chao, Tien-Hsin; Tarbell, Mark; Dohm, James M.

    2007-01-01

    Autonomous systems of multiple-agent air/ground robotic units for exploration of the surfaces of remote planets are undergoing development. Modified versions of these systems could be used on Earth to perform tasks in environments dangerous or inaccessible to humans: examples of tasks could include scientific exploration of remote regions of Antarctica, removal of land mines, cleanup of hazardous chemicals, and military reconnaissance. A basic system according to this concept (see figure) would include a unit, suspended by a balloon or a blimp, that would be in radio communication with multiple robotic ground vehicles (rovers) equipped with video cameras and possibly other sensors for scientific exploration. The airborne unit would be free-floating, controlled by thrusters, or tethered either to one of the rovers or to a stationary object in or on the ground. Each rover would contain a semi-autonomous control system for maneuvering and would function under the supervision of a control system in the airborne unit. The rover maneuvering control system would utilize imagery from the onboard camera to navigate around obstacles. Avoidance of obstacles would also be aided by readout from an onboard (e.g., ultrasonic) sensor. Together, the rover and airborne control systems would constitute an overarching closed-loop control system to coordinate scientific exploration by the rovers.

  20. Brain, mind, body and society: autonomous system in robotics.

    PubMed

    Shimoda, Motomu

    2013-12-01

    In this paper I examine the issues related to robots with minds. Creating a robot with a mind aims to recreate neural function by engineering. A robot with a mind is expected not only to process external information through its built-in program and behave accordingly, but also to achieve conscious activity that responds to multiple conditions, along with flexible and interactive communication skills for coping with unknown situations. That prospect is based on developments in artificial intelligence, in which self-organizing and self-emergent functions have become available in recent years. To date, the controllable aspects of robotics have been restricted to data preparation and the programming of cognitive abilities, while conscious activity and communication skills have been regarded as uncontrollable due to their contingency and uncertainty. However, some robotics researchers claim that every activity of the mind can be recreated by engineering and is therefore controllable. Based on the development of children's cognitive abilities and the findings of neuroscience, researchers have attempted to produce the latest artificial intelligence with autonomous learning systems. I conclude that controllability is inconsistent with autonomy in the genuine sense, and that autonomous robots recreated by engineering cannot be autonomous partners of humans. PMID:24558734

  2. The role of the autonomic nervous system in Tourette Syndrome

    PubMed Central

    Hawksley, Jack; Cavanna, Andrea E.; Nagai, Yoko

    2015-01-01

    Tourette Syndrome (TS) is a neurodevelopmental disorder consisting of multiple involuntary movements (motor tics) and one or more vocal (phonic) tics. It affects up to one percent of children worldwide, of whom about one third continue to experience symptoms into adulthood. The central neural mechanisms of tic generation are not clearly understood; however, recent neuroimaging investigations suggest impaired cortico-striato-thalamo-cortical activity during motor control. In the current manuscript, we tackle the relatively under-investigated role of the peripheral autonomic nervous system, and its central influences, on tic activity. There is emerging evidence that both sympathetic and parasympathetic nervous activity influence tic expression. Pharmacological treatments which act on sympathetic tone are often helpful: for example, Clonidine (an alpha-2 adrenoreceptor agonist) is often used as the first-choice medication for treating TS in children due to its good tolerability profile and potential usefulness for co-morbid attention-deficit hyperactivity disorder. Clonidine suppresses sympathetic activity, reducing the triggering of motor tics. A general elevation of sympathetic tone is reported in patients with TS compared to healthy people; however, this observation may reflect transient responses coupled to tic activity. Thus, the presence of autonomic impairments in patients with TS remains unclear. The effect of autonomic afferent input to the cortico-striato-thalamo-cortical circuit is discussed schematically. We additionally review how TS is affected by modulation of central autonomic control through biofeedback and Vagus Nerve Stimulation (VNS). Biofeedback training can enable a patient to gain voluntary control over covert physiological responses by making these responses explicit. Electrodermal biofeedback training to elicit a reduction in sympathetic tone has a demonstrated association with reduced tic frequency. VNS, achieved through an implanted device

  3. System control of an autonomous planetary mobile spacecraft

    NASA Technical Reports Server (NTRS)

    Dias, William C.; Zimmerman, Barbara A.

    1990-01-01

    The goal is to suggest the scheduling and control functions necessary for accomplishing the mission objectives of a fairly autonomous interplanetary mobile spacecraft while maximizing reliability: an extensible, reliable system that is conservative in its use of on-board resources, gets full value from subsystem autonomy, and avoids the lure of ground micromanagement. A functional layout consisting of four basic elements is proposed: GROUND and SYSTEM EXECUTIVE system functions, and RESOURCE CONTROL and ACTIVITY MANAGER subsystem functions. The system executive includes six subfunctions: SYSTEM MANAGER, SYSTEM FAULT PROTECTION, PLANNER, SCHEDULE ADAPTER, EVENT MONITOR and RESOURCE MONITOR. The full configuration is needed for autonomous operation on the Moon or Mars, whereas a reduced version without the planning, schedule adaptation and event monitoring functions could be appropriate for lower-autonomy use on the Moon. An implementation concept is suggested which is conservative in its use of system resources and consists of modules combined with a network communications fabric. A language concept termed a scheduling calculus, for rapidly performing essential on-board schedule adaptation functions, is introduced.

  4. Component-Oriented Behavior Extraction for Autonomic System Design

    NASA Technical Reports Server (NTRS)

    Bakera, Marco; Wagner, Christian; Margaria, Tiziana; Hinchey, Mike; Vassev, Emil; Steffen, Bernhard

    2009-01-01

    Rich and multifaceted domain-specific specification languages like the Autonomic System Specification Language (ASSL) help to design reliable systems with self-healing capabilities. The GEAR game-based model checker has been used successfully to investigate properties of the ESA ExoMars Rover in depth. We show here how to enable GEAR's game-based verification techniques for ASSL via systematic model extraction from a behavioral subset of the language, and illustrate it on a description of the Voyager II space mission.

  5. Role of the autonomic nervous system in tumorigenesis and metastasis

    PubMed Central

    Magnon, Claire

    2015-01-01

    Convergence of multiple stromal cell types is required to develop a tumorigenic niche that nurtures the initial development of cancer and its dissemination. Although the immune and vascular systems have been shown to have strong influences on cancer, a growing body of evidence points to a role of the nervous system in promoting cancer development. This review discusses past and current research that shows the intriguing role of autonomic nerves, aided by neurotrophic growth factors and axon cues, in creating a favorable environment for the promotion of tumor formation and metastasis. PMID:27308436

  6. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three-dimensional laser radar imagery, for use with a robotic-type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide it with the information needed to fetch and grasp targets in a space-type scenario.

  7. Experimental study on a smart wheelchair system using a combination of stereoscopic and spherical vision.

    PubMed

    Nguyen, Jordan S; Su, Steven W; Nguyen, Hung T

    2013-01-01

    This paper is concerned with an experimental study of the performance of a smart wheelchair system named TIM (Thought-controlled Intelligent Machine), which uses a unique camera configuration for vision. Included in this configuration are stereoscopic cameras for 3-dimensional (3D) depth perception and mapping ahead of the wheelchair, and a spherical camera system for 360 degrees of monocular vision. The camera combination provides obstacle detection and mapping in unknown environments during real-time autonomous navigation of the wheelchair. With the integration of hands-free wheelchair control technology, designed as a control method for people with severe physical disability, the smart wheelchair system can assist the user with automated guidance during navigation. An experimental study of this system was conducted with a total of 10 participants, consisting of 8 able-bodied subjects and 2 tetraplegic (C-6 to C-7) subjects. The hands-free control technologies utilized for this testing were a head-movement controller (HMC) and a brain-computer interface (BCI). The results showed that the assistance of TIM's automated guidance system had a statistically significant reduction effect (p-value = 0.000533) on the completion times of the obstacle course presented in the experimental study, as compared to the test runs conducted without the assistance of TIM.
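
    The reported effect is a paired comparison of completion times with and without guidance (p = 0.000533). A minimal sketch of the underlying paired t statistic, using invented times rather than the study's measurements:

```python
# Hedged sketch: paired-samples t statistic, computed from hypothetical
# obstacle-course completion times. The data are illustrative only.
import math
import statistics

def paired_t(before, after):
    """Paired-samples t: mean of the differences over its standard error."""
    diffs = [b - a for b, a in zip(before, after)]
    se = statistics.stdev(diffs) / math.sqrt(len(diffs))
    return statistics.mean(diffs) / se

unassisted = [92.0, 101.0, 88.0, 110.0, 97.0]  # seconds, hypothetical
assisted = [74.0, 80.0, 71.0, 85.0, 79.0]
print(round(paired_t(unassisted, assisted), 2))  # large positive t: faster with guidance
```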

  8. Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Mount, Frances; Carreon, Patricia; Torney, Susan E.

    2001-01-01

    The Engineering and Mission Operations Directorates at NASA Johnson Space Center are combining laboratories and expertise to establish the Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations. This is a testbed for human centered design, development and evaluation of intelligent autonomous and assistant systems that will be needed for human exploration and development of space. This project will improve human-centered analysis, design and evaluation methods for developing intelligent software. This software will support human-machine cognitive and collaborative activities in future interplanetary work environments where distributed computer and human agents cooperate. We are developing and evaluating prototype intelligent systems for distributed multi-agent mixed-initiative operations. The primary target domain is control of life support systems in a planetary base. Technical approaches will be evaluated for use during extended manned tests in the target domain, the Bioregenerative Advanced Life Support Systems Test Complex (BIO-Plex). A spinoff target domain is the International Space Station (ISS) Mission Control Center (MCC). Products of this project include human-centered intelligent software technology, innovative human interface designs, and human-centered software development processes, methods and products. The testbed uses adjustable autonomy software and life support systems simulation models from the Adjustable Autonomy Testbed, to represent operations on the remote planet. Ground operations prototypes and concepts will be evaluated in the Exploration Planning and Operations Center (ExPOC) and Jupiter Facility.

  9. Systems, methods and apparatus for quiescence of autonomic safety devices with self-action

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Sterritt, Roy (Inventor)

    2011-01-01

    Systems, methods and apparatus are provided through which, in some embodiments, an autonomic environmental safety device may be quiesced. In at least one embodiment, a method for managing an autonomic safety device, such as a smoke detector, based on the functioning state and operating status of the autonomic safety device includes processing received signals from the autonomic safety device to obtain an analysis of its condition, generating one or more stay-awake signals based on the functioning state and operating status of the autonomic safety device, transmitting the stay-awake signal, transmitting self health/urgency data, and transmitting environment health/urgency data. A quiesce component of an autonomic safety device can render the autonomic safety device inactive for a specific amount of time or until a challenging situation has passed.
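
    The quiesce behavior in the claim can be sketched as a tiny state machine. The class name, method names, and signal vocabulary below are illustrative assumptions, not the patent's interfaces:

```python
# Hedged sketch: a safety device that normally emits a stay-awake heartbeat,
# but goes silent while quiesced for a fixed interval or while a challenging
# situation is active.
import time

class AutonomicSafetyDevice:
    def __init__(self):
        self.quiesced_until = 0.0  # monotonic-clock deadline; 0 means active

    def quiesce(self, seconds):
        """Render the device inactive for `seconds` (self-action quiesce)."""
        self.quiesced_until = time.monotonic() + seconds

    def signal(self, challenge_active=False):
        """Return the outgoing signal for this cycle, or None while silent."""
        if challenge_active or time.monotonic() < self.quiesced_until:
            return None                  # stay silent while quiesced
        return "stay-awake"              # normal heartbeat

device = AutonomicSafetyDevice()
print(device.signal())   # -> stay-awake
device.quiesce(60)
print(device.signal())   # -> None (inactive for the next minute)
```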

  10. Development of a machine vision guidance system for automated assembly of space structures

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Sydow, P. Daniel

    1992-01-01

    The topics are presented in viewgraph form and include: automated structural assembly robot vision; machine vision requirements; vision targets and hardware; reflective efficiency; target identification; pose estimation algorithms; triangle constraints; truss node with joint receptacle targets; end-effector mounted camera and light assembly; vision system results from optical bench tests; and future work.

  11. Draper Laboratory small autonomous aerial vehicle

    NASA Astrophysics Data System (ADS)

    DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.

    1997-06-01

    The Charles Stark Draper Laboratory, Inc. and students from Massachusetts Institute of Technology and Boston University have cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture and subsystem designs for the entry. This entry represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, an inertial measurement unit, a sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground. A ground-based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.

  12. Measuring Cardiac Autonomic Nervous System (ANS) Activity in Children

    PubMed Central

    van Eijsden, Manon; Gemke, Reinoud J. B. J.; Vrijkotte, Tanja G. M.; de Geus, Eco J.

    2013-01-01

    The autonomic nervous system (ANS) controls mainly automatic bodily functions engaged in homeostasis, such as heart rate, digestion, respiratory rate, salivation, perspiration and renal function. The ANS has two main branches: the sympathetic nervous system, which prepares the body for action in times of danger and stress, and the parasympathetic nervous system, which regulates the resting state of the body. ANS activity can be measured invasively, for instance by radiotracer techniques or microelectrode recording from superficial nerves, or non-invasively, by using changes in an organ's response as a proxy for changes in ANS activity, for instance of the sweat glands or the heart. Invasive measurements have the highest validity but are poorly feasible in large-scale samples, where non-invasive measures are the preferred approach. Autonomic effects on the heart can be reliably quantified by recording the electrocardiogram (ECG) in combination with the impedance cardiogram (ICG), which reflects the changes in thorax impedance in response to respiration and the ejection of blood from the ventricle into the aorta. From the respiration and ECG signals, respiratory sinus arrhythmia can be extracted as a measure of cardiac parasympathetic control. From the ECG and the left ventricular ejection signals, the pre-ejection period can be extracted as a measure of cardiac sympathetic control. ECG and ICG recording is mostly done in laboratory settings. However, having subjects report to a laboratory greatly reduces ecological validity, is not always feasible in large-scale epidemiological studies, and can be intimidating for young children. An ambulatory device for ECG and ICG simultaneously resolves these three problems. Here, we present a study design for a minimally invasive and rapid assessment of cardiac autonomic control in children, using a validated ambulatory device 1-5, the VU University Ambulatory Monitoring System (VU

  13. Vision system for combustion analysis and diagnosis in gas turbines

    NASA Astrophysics Data System (ADS)

    Sassi, Giancarlo; Corbani, Franco; Graziadio, Mario; Novelli, Giuliano

    1995-09-01

    This paper describes the flame vision system developed by CISE, on behalf of the Thermal Research Division of ENEL, allowing a non-intrusive analysis and a probabilistic classification of the combustion process inside gas turbines. The system is composed of a vision probe designed for working in hostile environments and installed inside the combustion chamber, an optical element housing a videocamera, and a personal computer equipped with a frame grabber board. The main goal of the system is flame classification, in order to evaluate the occurrence of deviations from the optimal combustion conditions and to generate warning messages for power plant personnel. This is obtained by comparing geometrical features (barycenter, inertia axes, area, orientation, etc.) extracted from the flame area of the images with templates derived during the training stage, and classifying them probabilistically using a Bayesian algorithm. The vision system, now at the test stage, is intended to be a useful tool for combustion monitoring, gas turbine set-up and periodic survey, and for collecting information concerning burner efficiency and reliability; moreover, the vision probe's flexibility allows other applications such as particle image velocimetry and spectral and thermal analysis.
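
    The template-matching Bayesian step can be sketched with a single geometric feature. The classes, template means and variances below are invented for illustration; the real system compares several features against trained templates:

```python
# Hedged sketch: classify a flame-area measurement against per-class
# Gaussian templates with Bayes' rule. All numbers are illustrative.
import math

TEMPLATES = {             # per-class (mean, std) of the flame-area feature
    "optimal": (1000.0, 50.0),
    "degraded": (700.0, 120.0),
}

def classify(area, priors=None):
    """Return (best_class, posterior probability) for one measurement."""
    priors = priors or {c: 1.0 / len(TEMPLATES) for c in TEMPLATES}
    post = {}
    for cls, (mu, sd) in TEMPLATES.items():
        lik = math.exp(-0.5 * ((area - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
        post[cls] = lik * priors[cls]
    total = sum(post.values())
    best = max(post, key=post.get)
    return best, post[best] / total

print(classify(980.0))  # near the optimal template
print(classify(650.0))  # closer to the degraded template
```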

  14. 75 FR 17202 - Eighth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-05

    ... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation Administration (FAA.../Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of a... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems...

  15. Supervised autonomous rendezvous and docking system technology evaluation

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.

    1991-01-01

    Technology for manned space flight is mature and has an extensive history of the use of man-in-the-loop rendezvous and docking, but there is no history of automated rendezvous and docking. Sensors exist that can operate in the space environment. The Shuttle radar can be used for ranges down to 30 meters, Japan and France are developing laser rangers, and considerable work is going on in the U.S. However, there is a need to validate a flight qualified sensor for the range of 30 meters to contact. The number of targets and illumination patterns should be minimized to reduce operation constraints with one or more sensors integrated into a robust system for autonomous operation. To achieve system redundancy, it is worthwhile to follow a parallel development of qualifying and extending the range of the 0-12 meter MSFC sensor and to simultaneously qualify the 0-30(+) meter JPL laser ranging system as an additional sensor with overlapping capabilities. Such an approach offers a redundant sensor suite for autonomous rendezvous and docking. The development should include the optimization of integrated sensory systems, packaging, mission envelopes, and computer image processing to mimic brain perception and real-time response. The benefits of the Global Positioning System in providing real-time positioning data of high accuracy must be incorporated into the design. The use of GPS-derived attitude data should be investigated further and validated.
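
    The redundant suite above overlaps a 0-12 meter sensor with a 0-30 meter laser ranger. One simple fusion rule for the overlap region is to average whichever readings fall within each sensor's rated range; the rule, ranges, and readings below are illustrative assumptions, not the paper's design:

```python
# Hedged sketch: range-dependent fusion of two overlapping rendezvous
# sensors. None marks a sensor with no valid return.

def fused_range(short_m, long_m, short_max=12.0, long_max=30.0):
    """Average the readings that lie within each sensor's rated range."""
    valid = []
    if short_m is not None and short_m <= short_max:
        valid.append(short_m)
    if long_m is not None and long_m <= long_max:
        valid.append(long_m)
    if not valid:
        return None                  # no sensor covers this distance
    return sum(valid) / len(valid)   # averaged inside the 0-12 m overlap

print(fused_range(8.0, 8.4))    # both sensors valid: averaged estimate
print(fused_range(None, 25.0))  # beyond the short sensor: laser only
```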

  16. Technology forecast and applications for autonomous, intelligent systems

    NASA Astrophysics Data System (ADS)

    Lum, Henry; Heer, Ewald

    Since 1984, the National Aeronautics and Space Administration's (NASA) Office of Aeronautics and Space Technology (OAST) has aggressively supported a research and development program for the development and demonstration of autonomous system technologies for aerospace applications. Significant research products are emerging from this program and have been evaluated in various space science and aerospace mission environments. Systems technology demonstrations such as the Space Station Thermal Control System, the Space Shuttle Integrated Communications Officer Station for ground mission operations, and the Space Shuttle Launch Processing Facility are being conducted in mission operations environments this year and have already provided additional focus and research directions to the OAST Systems Autonomy Technology Program. The preliminary software for the system technology demonstrations has been integrated into the respective operational environments for the demonstrations scheduled for late 1988. The results obtained to date have changed the original direction and focus of the Systems Autonomy Technology Program.

  17. Recent CESAR (Center for Engineering Systems Advanced Research) research activities in sensor based reasoning for autonomous machines

    SciTech Connect

    Pin, F.G.; de Saussure, G.; Spelt, P.F.; Killough, S.M.; Weisbin, C.R.

    1988-01-01

    This paper describes recent research activities at the Center for Engineering Systems Advanced Research (CESAR) in the area of sensor-based reasoning, with emphasis given to their application and implementation on our HERMIES-IIB autonomous mobile vehicle. These activities, including navigation and exploration in a priori unknown and dynamic environments, goal recognition, vision-guided manipulation and sensor-driven machine learning, are discussed within the framework of a scenario in which an autonomous robot is asked to navigate through an unknown dynamic environment, explore, find and dock at a process control panel, read and understand the status of the panel's meters and dials, learn the functioning of the panel, and successfully manipulate its control devices to solve a maintenance emergency problem. A demonstration of the successful implementation of the algorithms on our HERMIES-IIB autonomous robot for resolution of this scenario is presented. Conclusions are drawn concerning the applicability of the methodologies to more general classes of problems, and implications for future work on sensor-driven reasoning for autonomous robots are discussed. 8 refs., 3 figs.

  18. Application of edge detection algorithm for vision guided robotics assembly system

    NASA Astrophysics Data System (ADS)

    Balabantaray, Bunil Kumar; Jha, Panchanand; Biswal, Bibhuti Bhusan

    2013-12-01

    A machine vision system plays a major role in making a robotic assembly system autonomous. Detection and identification of the correct part are important tasks which must be done carefully by the vision system to initiate the process. This process consists of many sub-processes wherein image capturing, digitizing, enhancing, etc. account for reconstructing the part for subsequent operations. Edge detection of the grabbed image therefore plays an important role in the entire image processing activity, so one needs to choose the correct tool for the process with respect to the given environment. In this paper, a comparative study of edge detection algorithms for object grasping in a robot assembly system is presented. The work is performed in Matlab R2010a Simulink. Four algorithms are compared: the Canny, Roberts, Prewitt and Sobel edge detectors. An attempt has been made to find the best algorithm for the problem. It is found that the Canny edge detection algorithm gives the best results and minimum error for the intended task.
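
    One of the four compared detectors, the Sobel operator, can be sketched directly. This is a pure-Python re-implementation on a toy image (the paper's own study ran in Matlab R2010a Simulink); the image is invented for illustration:

```python
# Hedged sketch: Sobel gradient magnitude at the interior pixels of a
# 2-D grayscale image stored as nested lists.
import math

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel

def sobel_magnitude(img):
    """Gradient magnitude sqrt(gx^2 + gy^2); borders are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

# Vertical step edge: left half dark, right half bright.
img = [[0, 0, 255, 255]] * 4
mag = sobel_magnitude(img)
print(mag[1])  # -> [0.0, 1020.0, 1020.0, 0.0]: strong response at the step
```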

  19. Autonomic nervous system testing may not distinguish multiple system atrophy from Parkinson's disease

    PubMed Central

    Riley, D; Chelimsky, T

    2003-01-01

    Background: Formal laboratory testing of autonomic function is reported to distinguish between patients with Parkinson's disease and those with multiple system atrophy (MSA), but such studies segregate patients according to clinical criteria that select those with autonomic dysfunction for the MSA category. Objective: To characterise the profiles of autonomic disturbances in patients in whom the diagnosis of Parkinson's disease or MSA used criteria other than autonomic dysfunction. Methods: 47 patients with parkinsonism and autonomic symptoms who had undergone autonomic laboratory testing were identified and their case records reviewed for non-autonomic features. They were classified clinically into three diagnostic groups: Parkinson's disease (19), MSA (14), and uncertain (14). The performance of the patients with Parkinson's disease was compared with that of the MSA patients on five autonomic tests: RR variation on deep breathing, heart rate changes with the Valsalva manoeuvre, tilt table testing, the sudomotor axon reflex test, and thermoregulatory sweat testing. Results: None of the tests distinguished one group from the other with any statistical significance, alone or in combination. Parkinson's disease and MSA patients showed similar patterns of autonomic dysfunction on formal testing of cardiac sympathetic and parasympathetic, vasomotor, and central and peripheral sudomotor functions. Conclusions: This study supports the clinical observation that Parkinson's disease is often indistinguishable from MSA when it involves the autonomic nervous system. The clinical combination of parkinsonism and dysautonomia is as likely to be caused by Parkinson's disease as by MSA. Current clinical criteria for Parkinson's disease and MSA that direct patients with dysautonomia into the MSA group may be inappropriate. PMID:12486267

  20. Autonomic dysfunction and microvascular damage in systemic sclerosis.

    PubMed

    Di Franco, Manuela; Paradiso, Michele; Riccieri, Valeria; Basili, Stefania; Mammarella, Antonio; Valesini, Guido

    2007-08-01

    Systemic sclerosis (SSc) is a connective tissue disease characterized by vascular damage and interstitial fibrosis of many organs. Our interest was focused on the evaluation of cardiac autonomic function by measurement of heart rate variability (HRV), and of microvascular damage detected by nailfold capillaroscopy (NC), in SSc patients. We examined 25 consecutive outpatients affected by systemic sclerosis and 25 healthy controls. Exclusion criteria were the presence of cardiac disease, hypertension, diabetes mellitus, or neurological disease. All subjects underwent 24-h ambulatory ECG Holter recording and NC examination. Heart rate variability was evaluated in the time domain, using appropriate software, by computing the time series of all normal-to-normal (NN) QRS intervals throughout the 24-h recording period. A semiquantitative rating scale was adopted to score the NC abnormalities, as well as a rating system for avascular areas and morphological NC patterns. In SSc patients, HRV analysis showed significantly lower values of SDNN (standard deviation of all NN intervals) (p=0.009), SDANN (standard deviation of the averages of NN intervals in all 5-min segments of the entire recording) (p=0.01), and pNN50 (the percentage of adjacent NN intervals that differ by more than 50 ms) (p=0.02), compared to the control group. These parameters in SSc patients decreased significantly with the worsening of the semiquantitative capillaroscopy score. In conclusion, abnormal autonomic nervous control of the heart might help to identify subclinical cardiac involvement in SSc patients. The coexistence of autonomic dysfunction with more severe microvascular damage could be considered a potential prognostic tool in the identification of those patients particularly at risk of cardiac mortality.
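
    The three time-domain HRV measures named above follow directly from their definitions. A minimal sketch on a made-up NN-interval series (the study used 24-h Holter recordings):

```python
# Hedged sketch: SDNN, SDANN and pNN50 computed from NN intervals in
# milliseconds. The interval series is invented for illustration.
import statistics

def sdnn(nn):
    """Standard deviation of all NN intervals."""
    return statistics.stdev(nn)

def sdann(segments):
    """Standard deviation of per-segment (e.g. 5-min) mean NN intervals."""
    return statistics.stdev([statistics.mean(s) for s in segments])

def pnn50(nn):
    """Percentage of adjacent NN intervals differing by more than 50 ms."""
    diffs = [abs(b - a) for a, b in zip(nn, nn[1:])]
    return 100.0 * sum(d > 50 for d in diffs) / len(diffs)

nn = [800, 810, 900, 770, 780, 850]  # hypothetical NN intervals, ms
print(round(sdnn(nn), 1))
print(pnn50(nn))  # diffs 10, 90, 130, 10, 70 -> 3 of 5 exceed 50 ms -> 60.0
```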

  1. Autonomous Pathogen Detection System - FY02 Annual Progress Report

    SciTech Connect

    Colston, B; Brown, S; Burris, K; Elkin, C; Hindson, B; Langlois, R; Masquelier, D; McBride, M; Metz, T; Nasarabadi, S; Makarewicz, T; Milznovich, F; Venkateswaran, K S; Visuri, S

    2002-11-11

    The objective of this project is to design, fabricate and field demonstrate a biological agent detection and identification capability, the Autonomous Pathogen Detector System (APDS). Integrating a flow cytometer and real-time polymerase chain reaction (PCR) detector with sample collection, sample preparation and fluidics will provide a compact, autonomously operating instrument capable of simultaneously detecting multiple pathogens and/or toxins. The APDS will operate in fixed locations, continuously monitoring air samples and automatically reporting the presence of specific biological agents. The APDS will utilize both multiplex immunoassays and nucleic acid assays to provide ''quasi-orthogonal'' multiple agent detection approaches to minimize false positives and increase the reliability of identification. Technical advances across several fronts must occur, however, to realize the full extent of the APDS. The end goal of a commercially available system for civilian biological weapon defense will be accomplished through three progressive generations of APDS instruments. The APDS is targeted for civilian applications in which the public is at high risk of exposure to covert releases of bioagent, such as major subway systems and other transportation terminals, large office complexes and convention centers. APDS is also designed to be part of a monitoring network of sensors integrated with command and control systems for wide-area monitoring of urban areas and major public gatherings. In this latter application there is potential that a fully developed APDS could add value to DoD monitoring architectures.

  2. G2 Autonomous Control for Cryogenic Delivery Systems

    NASA Technical Reports Server (NTRS)

    Dito, Scott J.

    2014-01-01

    The Independent System Health Management-Autonomous Control (ISHM-AC) application development for cryogenic delivery systems is intended to create an expert system that will require minimal operator involvement and ultimately allow for complete autonomy when fueling a space vehicle in the time prior to launch. The G2-Autonomous Control project is the development of a model, simulation, and ultimately a working application that will control and monitor the cryogenic fluid delivery to a rocket for testing purposes. To develop this application, the project is using the programming language/environment Gensym G2. The environment is an all-inclusive application that allows development, testing, modeling, and finally operation of the unique application through graphical and programmatic methods. We have learned G2 through training classes and subsequent application development, and are now in the process of building the application that will soon be used to test on cryogenic loading equipment here at the Kennedy Space Center Cryogenics Test Laboratory (CTL). The G2 ISHM-AC application will bring with it a safer and more efficient propellant loading system for the future launches at Kennedy Space Center and eventually mobile launches from all over the world.

  3. The Spacecraft Emergency Response System (SERS) for Autonomous Mission Operations

    NASA Technical Reports Server (NTRS)

    Breed, Julia; Chu, Kai-Dee; Baker, Paul; Starr, Cynthia; Fox, Jeffrey; Baitinger, Mick

    1998-01-01

    Today, most mission operations are geared toward lowering cost through unmanned operations. 7-day/24-hour operations are reduced to either 5-day/8-hour operations or become totally autonomous, especially for deep-space missions. Proper and effective notification during a spacecraft emergency can mean success or failure for an entire mission. The Spacecraft Emergency Response System (SERS) is a tool designed for autonomous mission operations. The SERS automatically contacts on-call personnel as needed when crises occur, either on board the spacecraft or within the automated ground systems. In addition, the SERS provides a groupware solution to facilitate the work of the person(s) contacted. The SERS is independent of the spacecraft's automated ground system. It receives and catalogues reports from various ground system components in near real time. Then, based on easily configurable parameters, the SERS determines whom, if anyone, should be alerted. Alerts may be issued via Sky-Tel 2-way pager, telephony, or e-mail. The alerted personnel can then review and respond to the spacecraft anomalies through the Netscape Internet Web Browser, or directly review and respond from the Sky-Tel 2-way pager.

  4. Imagery and the autonomic nervous system: some methodological issues.

    PubMed

    Di Giusto, E L; Bond, N W

    1979-04-01

    The present paper is concerned with the role played by image content in the mediation of autonomic nervous system (ANS) arousal. The minimum methodological requirements of such studies are described, including controls for imaging, image content, and expectancy effects. Studies meeting these requirements are then reviewed. It is concluded that image content can be a significant modifier of ANS arousal and that this property is not restricted to images containing affective, e.g., phobic, content. These conclusions have relevance to research into techniques such as biofeedback, Transcendental Meditation, and progressive relaxation, where imagery may have a profound influence but where it has received little direct empirical attention.

  5. Verification and Validation of Model-Based Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Pecheur, Charles; Koga, Dennis (Technical Monitor)

    2001-01-01

    This paper presents a three-year project (FY99 to FY01) on the verification and validation of model-based autonomous systems. The topics include: 1) Project Profile; 2) Model-Based Autonomy; 3) The Livingstone MIR; 4) MPL2SMV; 5) Livingstone to SMV Translation; 6) Symbolic Model Checking; 7) From Livingstone Models to SMV Models; 8) Application: In-Situ Propellant Production; 9) Closed-Loop Verification Principle; 10) Livingstone PathFinder (LPF); 11) Publications and Presentations; and 12) Future Directions. This paper is presented in viewgraph form.

  6. A Model-Driven Architecture Approach for Modeling, Specifying and Deploying Policies in Autonomous and Autonomic Systems

    NASA Technical Reports Server (NTRS)

    Pena, Joaquin; Hinchey, Michael G.; Sterritt, Roy; Ruiz-Cortes, Antonio; Resinas, Manuel

    2006-01-01

    Autonomic Computing (AC), self-management based on high level guidance from humans, is increasingly gaining momentum as the way forward in designing reliable systems that hide complexity and conquer IT management costs. Effectively, AC may be viewed as Policy-Based Self-Management. The Model Driven Architecture (MDA) approach focuses on building models that can be transformed into code in an automatic manner. In this paper, we look at ways to implement Policy-Based Self-Management by means of models that can be converted to code using transformations that follow the MDA philosophy. We propose a set of UML-based models to specify autonomic and autonomous features along with the necessary procedures, based on modification and composition of models, to deploy a policy as an executing system.

  7. Configuration assistant for versatile vision-based inspection systems

    NASA Astrophysics Data System (ADS)

    Huesser, Olivier; Huegli, Heinz

    2001-01-01

    Nowadays, vision-based inspection systems are present in many stages of the industrial manufacturing process. Their versatility, which permits them to accommodate a broad range of inspection requirements, is, however, limited by the time-consuming system setup performed at each production change. This work aims at providing a configuration assistant that helps speed up this system setup, considering the peculiarities of industrial vision systems. The pursued principle, maximizing the discriminating power of the features involved in the inspection decision, leads to an optimization problem over a high-dimensional objective function. Several objective functions based on various metrics are proposed, and their optimization is performed with the help of search heuristics such as genetic algorithms and simulated annealing. Experimental results obtained with an industrial inspection system are presented. They show the effectiveness of the presented approach and validate the configuration assistant.
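
    The setup-optimization idea described above, maximizing a feature-discriminability objective with a stochastic search heuristic, can be sketched with a minimal simulated-annealing skeleton over a toy objective (the paper's actual objective functions and metrics are not reproduced here; everything below is an illustrative assumption):

```python
import math
import random

random.seed(0)  # reproducible run for this sketch

def anneal_maximize(objective, x0, step=0.1, t0=1.0, cooling=0.95, iters=2000):
    """Maximize `objective` over a real parameter vector by simulated annealing."""
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        # Perturb one randomly chosen parameter.
        cand = list(x)
        i = random.randrange(len(cand))
        cand[i] += random.uniform(-step, step)
        fc = objective(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if fc >= fx or random.random() < math.exp((fc - fx) / t):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy stand-in for a "discriminating power" score, peaked at (1, -2).
score = lambda p: -((p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2)
params, value = anneal_maximize(score, [0.0, 0.0])
```

    In the real assistant, `objective` would measure how well the configured features separate good parts from defective ones on a training set.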

  8. Development and testing of the EVS 2000 enhanced vision system

    NASA Astrophysics Data System (ADS)

    Way, Scott P.; Kerr, Richard; Imamura, Joe J.; Arnoldy, Dan; Zeylmaker, Richard; Zuro, Greg

    2003-09-01

    An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts to provide a single image from uncooled infrared imagers in both the LWIR and SWIR. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for EVS systems.
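
    The dual-band fusion idea, combining LWIR terrain context with SWIR detail such as airport lighting in one picture, can be illustrated with a deliberately naive pixel-level blend (a sketch only: the EVS 2000's patented fusion algorithm is not described in this abstract, and the per-band normalization and equal weighting here are assumptions):

```python
import numpy as np

def fuse_dual_band(lwir, swir, w=0.5):
    """Naive pixel-level fusion of two co-registered IR bands.

    Each band is normalized to [0, 1] so its full dynamic range
    contributes, then the bands are blended with weight `w`.
    """
    def normalize(img):
        img = img.astype(np.float64)
        lo, hi = img.min(), img.max()
        return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img, dtype=np.float64)
    return w * normalize(lwir) + (1.0 - w) * normalize(swir)

# Synthetic frames: LWIR sees a terrain gradient, SWIR sees bright runway lights.
lwir = np.tile(np.linspace(0, 255, 8), (8, 1))   # 8-bit-style thermal scene
swir = np.zeros((8, 8))
swir[4, 2:6] = 4095                               # 12-bit hot spots (lights)
fused = fuse_dual_band(lwir, swir)
```

    Production systems typically use multi-resolution (e.g. Laplacian-pyramid) fusion rather than a global blend, but the normalization step above is what lets sensors with very different dynamic ranges share one display.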

  9. Head-aimed vision system improves tele-operated mobility

    NASA Astrophysics Data System (ADS)

    Massey, Kent

    2004-12-01

    A head-aimed vision system greatly improves situational awareness and decision speed for tele-operation of mobile robots. With head-aimed vision, the tele-operator wears a head-mounted display and a small three-axis head-position measuring device. Wherever the operator looks, the remote sensing system "looks". When the system is properly designed, the operator's occipital lobes are "fooled" into believing that the operator is actually on the remote robot. The result is at least a doubling of situational awareness, threat identification speed, and target tracking ability. Proper system design must take into account precisely matching fields of view, optical gain, and latency below 100 milliseconds. When properly designed, a head-aimed system does not cause nausea, even with prolonged use.

  10. Enhanced vision systems: results of simulation and operational tests

    NASA Astrophysics Data System (ADS)

    Hecker, Peter; Doehler, Hans-Ullrich

    1998-07-01

    Today's aircrews have to handle more and more complex situations. The most critical tasks in civil aviation are landing approaches and taxiing, and under bad weather conditions in particular the crew faces a tremendous workload. DLR's Institute of Flight Guidance has therefore developed a concept for an enhanced vision system (EVS), which increases the performance and safety of the aircrew and provides comprehensive situational awareness. Some elements of this concept have been presented in previous contributions, e.g. 'Simulation of Imaging Radar for Obstacle Detection and Enhanced Vision' by Doehler and Bollmeyer (1996). The presented paper gives an overview of DLR's enhanced vision concept and research approach, which consists of two main components: simulation and experimental evaluation. First, the simulation environment for enhanced vision research with a pilot in the loop is introduced. An existing fixed-base flight simulator is supplemented by real-time simulations of imaging sensors, i.e. imaging radar and infrared. By applying methods of data fusion, an enhanced vision display is generated that combines different levels of information, such as terrain model data, processed sensor images, aircraft state vectors, and data transmitted via datalink. The second part of this contribution presents experimental results. In cooperation with Daimler-Benz Aerospace Sensorsystems Ulm, a test van and a test aircraft were equipped with a prototype of an imaging millimeter-wave radar. This sophisticated HiVision radar is to date one of the most promising sensors for all-weather operations. Images acquired by this sensor are shown, as well as results of data fusion processes based on digital terrain models. The contribution is concluded by a short video presentation.

  11. 77 FR 16890 - Eighteenth Meeting: RTCA Special Committee 213, Enhanced Flight Visions Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-22

    ... Federal Aviation Administration Eighteenth Meeting: RTCA Special Committee 213, Enhanced Flight Visions... of Transportation (DOT). ACTION: Notice of meeting RTCA Special Committee 213, Enhanced Flight... public of the eighteenth meeting of RTCA Special Committee 213, Enhanced Flight Visions...

  12. Road boundary detection for autonomous vehicle navigation

    SciTech Connect

    Davis, L.S.; Kushner, T.R.; LeMoigne, J.J.; Waxman, A.M.

    1986-03-01

    For the past year, the Computer Vision Laboratory at the University of Maryland has been developing a computer vision system for autonomous ground navigation of roads and road networks for the Defense Advanced Research Projects Agency's Strategic Computing Program. The complete system runs on a VAX 11/785, but certain parts of it have been reimplemented on a VICOM image processing system for experimentation on an autonomous vehicle built by the Martin Marietta Corp., Aerospace Division, in Denver, Colorado. A brief overview is given of the principal software components of the system, and the VICOM implementation is described in detail.

  13. Crew and Display Concepts Evaluation for Synthetic / Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III

    2006-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor in civil aircraft accidents and to replicate the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, with the principal difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly, as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under the newly adopted FAA rules, which provide operating credit for EVS. Overall, the experimental data showed that the integration and/or fusion of synthetic and enhanced vision technologies could provide significant improvements in situation awareness, without concomitant increases in workload and display clutter, for the pilot flying and the pilot not flying.

  14. FLILO (flying infrared for low-level operations): an enhanced vision system

    NASA Astrophysics Data System (ADS)

    Guell, Jeff J.

    2000-06-01

    FLILO is an Enhanced Vision System (EVS) that enhances situational awareness for safe low-level/night-time and moderate-weather flight operations (including take-off/landing, taxiing, approaches, drop zone identification, Short Austere Air Field operations, etc.) by providing electronic, real-time vision to the pilots. It consists of a series of imaging sensors, an image processor, and a wide field-of-view (FOV) see-through Helmet Mounted Display (HMD) integrated with a head tracker. The current solution for safe night-time/low-level military flight operations is the Turret-FLIR (Forward-Looking InfraRed). This system requires an additional operator/crew member (navigator) who controls the turret's movement and relays the information to the pilots, and its image is presented on a head-down display. FLILO presents the information directly to the pilots on an HMD, so each pilot has an independent view controlled by his or her head position, while utilizing the same sensors, which are static and fixed to the aircraft structure. Since there are no moving parts, the system provides high reliability while remaining more affordable than the Turret-FLIR solution. FLILO does not require a ball turret, so there is no extra drag or range impact on the aircraft's performance. Furthermore, with future use of real-time multi-band/multi-sensor image fusion, FLILO is the right step toward safe autonomous landing guidance/0-0 flight operations capability.

  15. A machine vision system for the calibration of digital thermometers

    NASA Astrophysics Data System (ADS)

    Vázquez-Fernández, Esteban; Dacal-Nieto, Angel; González-Jorge, Higinio; Martín, Fernando; Formella, Arno; Alvarez-Valado, Victor

    2009-06-01

    Automation is a key point in many industrial tasks such as calibration and metrology. In this context, machine vision has been shown to be a useful tool for automation support, especially when no other option is available. A system for the calibration of portable measurement devices has been developed. The system uses machine vision to obtain the numerical values shown on displays. A new approach based on human perception of digits, which works in parallel with other more classical classifiers, has been created. The results show the benefits of the system in terms of usability and robustness, with a success rate higher than 99% in display recognition. The system saves time and effort, and offers the possibility of scheduling calibration tasks without excessive attention by the laboratory technicians.
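
    For illustration, the final stage of such a display-reading system, mapping detected lit segments of a seven-segment digit to its numeral, can be sketched as a simple lookup (the paper's perception-based classifier is more sophisticated; the segment labels a-g below follow the usual convention and are an assumption):

```python
# Standard seven-segment labels: a=top, b=top-right, c=bottom-right,
# d=bottom, e=bottom-left, f=top-left, g=middle.
SEGMENT_TABLE = {
    frozenset("abcdef"): 0,  frozenset("bc"): 1,
    frozenset("abdeg"): 2,   frozenset("abcdg"): 3,
    frozenset("bcfg"): 4,    frozenset("acdfg"): 5,
    frozenset("acdefg"): 6,  frozenset("abc"): 7,
    frozenset("abcdefg"): 8, frozenset("abcdfg"): 9,
}

def decode_display(lit_segments_per_digit):
    """Map each digit's set of lit segments to a numeral; None if unrecognized."""
    return [SEGMENT_TABLE.get(frozenset(s)) for s in lit_segments_per_digit]

reading = decode_display(["bcfg", "abcdefg", "abdeg"])  # → [4, 8, 2]
```

    A robust reader would run this lookup alongside statistical classifiers, as the paper does, and flag disagreements for the technician.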

  16. Fiber optic coherent laser radar 3D vision system

    SciTech Connect

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-12-31

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. The 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame size at one frame per second, with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second, such as decontamination and decommissioning operations in which robotic systems are altering the scene, as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  17. Autonomously acquiring declarative and procedural knowledge for ICAT systems

    NASA Technical Reports Server (NTRS)

    Kovarik, Vincent J., Jr.

    1993-01-01

    The construction of Intelligent Computer Aided Training (ICAT) systems is critically dependent on the ability to define and encode knowledge. This knowledge engineering effort can be broadly divided into two categories: domain knowledge and expert or task knowledge. Domain knowledge refers to the physical environment or system with which the expert interacts. Expert knowledge consists of the set of procedures and heuristics employed by the expert in performing their task. Both these areas are a significant bottleneck in the acquisition of knowledge for ICAT systems. This paper presents a research project in the area of autonomous knowledge acquisition using a passive observation concept. The system observes an expert and then generalizes the observations into production rules representing the domain expert's knowledge.
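
    A toy version of the passive-observation idea, generalizing repeated (state, action) observations into production rules by keeping only the conditions that held every time an action was taken, might look like this (a sketch under that stated assumption; the project's actual generalization method is not detailed in the abstract):

```python
def generalize_rules(observations):
    """Generalize (state, action) observations into production rules.

    For each action, retain only the state attributes that held in *every*
    observed instance of that action (dropping-condition generalization).
    """
    rules = {}
    for state, action in observations:
        conds = set(state.items())
        rules[action] = conds if action not in rules else rules[action] & conds
    return {action: dict(conds) for action, conds in rules.items()}

# Two observations of the same expert action in slightly different states.
obs = [
    ({"valve": "open", "pressure": "high", "pump": "on"}, "close_valve"),
    ({"valve": "open", "pressure": "high", "pump": "off"}, "close_valve"),
]
rules = generalize_rules(obs)
# 'pump' varies across the examples, so it is dropped from the rule's conditions.
```

    The resulting rule reads "IF valve=open AND pressure=high THEN close_valve", i.e. the expert's behavior restated as a production rule.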

  18. Method and system for providing autonomous control of a platform

    NASA Technical Reports Server (NTRS)

    Seelinger, Michael J. (Inventor); Yoder, John-David (Inventor)

    2012-01-01

    The present application provides a system for enabling instrument placement from distances on the order of five meters, for example, and increases accuracy of the instrument placement relative to visually-specified targets. The system provides precision control of a mobile base of a rover and onboard manipulators (e.g., robotic arms) relative to a visually-specified target using one or more sets of cameras. The system automatically compensates for wheel slippage and kinematic inaccuracy ensuring accurate placement (on the order of 2 mm, for example) of the instrument relative to the target. The system provides the ability for autonomous instrument placement by controlling both the base of the rover and the onboard manipulator using a single set of cameras. To extend the distance from which the placement can be completed to nearly five meters, target information may be transferred from navigation cameras (used for long-range) to front hazard cameras (used for positioning the manipulator).

  19. Non-autonomous lattice systems with switching effects and delayed recovery

    NASA Astrophysics Data System (ADS)

    Han, Xiaoying; Kloeden, Peter E.

    2016-09-01

    The long-term behavior of a type of non-autonomous lattice dynamical system is investigated, where these systems have a diffusive nearest-neighborhood interaction and discontinuous reaction terms with recoverable delays. This problem is of both biological and mathematical interest, due to its application in systems of excitable cells as well as general biological systems involving delayed recovery. The problem is formulated as an evolution inclusion with delays, and the existence of weak and strong solutions is established. It is then shown that the solutions generate a set-valued non-autonomous dynamical system and that this non-autonomous dynamical system possesses a non-autonomous global pullback attractor.
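
    An illustrative (not the authors' exact) model of this kind couples lattice sites diffusively and switches the reaction term on the delayed state:

```latex
\dot{u}_i(t) = \nu\bigl(u_{i-1}(t) - 2u_i(t) + u_{i+1}(t)\bigr)
             + f\bigl(u_i(t)\bigr)\,H\bigl(u_i(t-\rho)\bigr), \qquad i \in \mathbb{Z},
```

    where $\nu > 0$ is the diffusion rate, $\rho > 0$ the recovery delay, and $H$ a discontinuous (Heaviside-type) switching term; the discontinuity in $H$ is what forces the evolution-inclusion formulation mentioned above.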

  20. Stereo vision based hand-held laser scanning system design

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Wang, Jinming

    2011-11-01

    Although 3D scanning systems are used ever more broadly in many fields, such as computer animation, computer-aided design, and digital museums, a convenient scanning device is too expensive for most people to afford. On the other hand, imaging devices are becoming cheaper, and a stereo vision system with two video cameras costs little. In this paper, a hand-held laser scanning system is designed based on the stereo vision principle. The two video cameras are fixed together and calibrated in advance. The scanned object, with coded markers attached, is placed in front of the stereo system; its position and orientation can be changed freely as scanning requires. During scanning, the operator sweeps a line laser source, projecting it on the object. At the same time, the stereo vision system captures the projected lines and reconstructs their 3D shapes. The coded markers are used to translate the coordinate systems between scanned points under different views. Two methods are used to obtain more accurate results. One is to use NURBS curves to interpolate the sections of the laser lines to obtain accurate central points; a thin-plate spline then approximates the central points, yielding an exact laser central line that guarantees an accurate correspondence between the two cameras. The other is to incorporate the constraint of the laser sweep plane on the reconstructed 3D curves via a PCA (Principal Component Analysis) algorithm, which yields more accurate results. Some examples are given to verify the system.
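
    The laser-sweep-plane constraint mentioned above amounts to fitting a plane to reconstructed 3D points by PCA and projecting the noisy curve points onto it. A minimal sketch (assuming one captured laser line lies in a single plane):

```python
import numpy as np

def fit_plane_pca(points):
    """Fit a plane to an Nx3 cloud: the normal is the least-variance direction."""
    centroid = points.mean(axis=0)
    # SVD of the centered cloud; the last right-singular vector spans the
    # direction of smallest variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def project_onto_plane(points, centroid, normal):
    """Snap noisy reconstructed curve points onto the fitted sweep plane."""
    d = (points - centroid) @ normal       # signed distance of each point
    return points - np.outer(d, normal)    # remove the out-of-plane component

# Noisy samples near the plane z = 0 (stand-in for one captured laser line).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       rng.normal(0.0, 0.01, 200)])
centroid, normal = fit_plane_pca(pts)
flat = project_onto_plane(pts, centroid, normal)
```

    After projection, every point satisfies the plane equation exactly, which is the constraint the paper exploits to clean up the reconstructed curves.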

  1. Enhanced and synthetic vision system (ESVS) flight demonstration

    NASA Astrophysics Data System (ADS)

    Sanders-Reed, John N.; Bernier, Ken; Güell, Jeff

    2008-04-01

    Boeing has developed and flight demonstrated a distributed aperture enhanced and synthetic vision system for integrated situational awareness. The system includes 10 sensors, 2 simultaneous users with head mounted displays (one via a wireless remote link), and intelligent agents for hostile fire detection, ground moving target detection and tracking, and stationary personnel and vehicle detection. Flight demonstrations were performed in 2006 and 2007 on a MD-530 "Little Bird" helicopter.

  2. Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

    PubMed Central

    2010-01-01

    Background Dexterous prosthetic hands that were developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and the thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands. Methods The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: 1) the user triggers the system and controls the orientation of the hand; 2) a high-level controller automatically selects the grasp type and size; and 3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used Cyberhand, attached to the forearm, to grasp and transport 18 objects placed at two different distances. Results The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size were different from the optimal ones, but they were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only). Conclusions The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and size). 
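
    The high-level controller's job, choosing a grasp type and hand pre-shape from estimated object properties, can be caricatured as a rule table (the thresholds and grasp names below are purely hypothetical; the CVS derives its decisions from the vision pipeline described in the paper):

```python
def select_grasp(width_mm):
    """Map an estimated object width to a grasp type and hand aperture (toy rules)."""
    if width_mm < 30.0:
        grasp = "pinch"    # small objects: precision grip
    elif width_mm < 80.0:
        grasp = "palmar"   # medium objects: fingers plus palm
    else:
        grasp = "power"    # large objects: whole-hand wrap
    # Pre-shape the hand slightly wider than the object, capped by the hand span.
    aperture_mm = min(width_mm * 1.2, 120.0)
    return grasp, aperture_mm

grasp, aperture = select_grasp(25.0)
```

    The embedded hand controller would then execute the selected grasp under closed-loop position/force control, as in the paper's hierarchy.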

  3. Autonomic and Climatic Impacts on the Dutch Coastal Groundwater System

    NASA Astrophysics Data System (ADS)

    van Baaren, E. S.; Oude Essink, G. H.

    2008-12-01

    Half of the Netherlands is located below sea level, and land subsidence is still taking place. As saline groundwater is found within a couple of meters below the ground surface, salinization of the freshwater resources is occurring. This process, together with anthropogenic activities like groundwater exploitation and differentiated water level management, is called the autonomic process. As a consequence, salt seepage affects the quality of surface water and reduces the freshwater volume necessary for drinking, environmental, industrial, and agricultural purposes. Apart from this autonomic process, the Dutch delta will be jeopardized by climate change through two effects: sea level rise and a combination of changing precipitation and evapotranspiration. Calculations with a regional density-dependent 3D model for the coastal province of Zuid-Holland show increasing piezometric heads for all implemented climate scenarios due to sea level rise. This will, however, only happen in areas less than 10-20 km from the coastline or large rivers. Up to 5 km from the coast, the piezometric heads will increase by more than 50% of the sea level rise. In the inland areas, land subsidence causes decreasing piezometric heads. Salinization of the groundwater system will take place in most parts of the Dutch delta. Around the islands of Zuid-Holland, the main cause of salinization is sea level rise; the autonomic process, on the other hand, dominates the salinization of the polders. Due to increasing piezometric heads and salinization, the salt seepage will increase by up to 20% for inland polders and up to 75% for coastal polders. The effects of the changes in recharge and evapotranspiration are small in general and depend on the climate scenario and area. Adaptive and mitigative activities like offshore land reclamation and desalinization of saline groundwater show some positive effects on the chloride concentrations of the groundwater. Nevertheless, this cannot reverse the

  4. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two stage process; a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data, and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and on-orbit space shuttle attitude controller.
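
    The FCM relocation step that AFLC borrows, updating memberships from distances and then recomputing centroids as membership-weighted means, can be sketched as follows (a generic fuzzy c-means iteration, not the AFLC control structure itself):

```python
import numpy as np

def fcm_step(X, centers, m=2.0, eps=1e-9):
    """One fuzzy c-means iteration: update memberships, then centroids."""
    # Distances from every sample to every cluster center (n x c).
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    # Membership u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    u = 1.0 / ratio.sum(axis=2)
    # Centroids are membership^m-weighted means of the samples.
    w = u ** m
    centers_new = (w.T @ X) / w.sum(axis=0)[:, None]
    return u, centers_new

# Two obvious clusters around (0, 0) and (5, 5); deliberately poor initial centers.
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
centers = np.array([[1.0, 1.0], [4.0, 4.0]])
for _ in range(20):
    u, centers = fcm_step(X, centers)
```

    AFLC wraps this update inside an ART-style vigilance test that decides whether an input joins an existing cluster or spawns a new one.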

  5. Development of a machine vision system for automated structural assembly

    NASA Technical Reports Server (NTRS)

    Sydow, P. Daniel; Cooper, Eric G.

    1992-01-01

    Research is being conducted at LaRC to develop a telerobotic assembly system designed to construct large space truss structures. This research program was initiated within the past several years, and a ground-based test-bed was developed to evaluate and expand the state of the art. Test-bed operations currently use predetermined ('taught') points for truss structural assembly. Total dependence on taught points for joint receptacle capture and strut installation is neither robust nor reliable enough for space operations. Therefore, a machine vision sensor guidance system is being developed to locate and guide the robot to a passive target mounted on the truss joint receptacle. The vision system hardware includes a miniature video camera, passive targets mounted on the joint receptacles, target illumination hardware, and an image processing system. Discrimination of the target from background clutter is accomplished through standard digital processing techniques. Once the target is identified, a pose estimation algorithm is invoked to determine the location, in three-dimensional space, of the target relative to the robot's end-effector. Preliminary test results of the vision system in the Automated Structural Assembly Laboratory with a range of lighting and background conditions indicate that it is fully capable of successfully identifying joint receptacle targets throughout the required operational range. Controlled optical bench test results indicate that the system can also provide the pose estimation accuracy needed to define the target position.

  6. Novel Corrosion Sensor for Vision 21 Systems

    SciTech Connect

    Heng Ban

    2005-12-01

    Advanced sensor technology is identified as a key component of advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high-temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the metal loss caused by chemical reactions on surfaces exposed to the combustion environment. Such corrosion is the leading mechanism for boiler tube failures and has emerged as a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall objective of this project is to develop a technology for on-line corrosion monitoring based on a new concept. This objective is to be achieved by laboratory development of the sensor and instrumentation, testing of the measurement system in a laboratory muffle furnace, and eventually testing the system in a coal-fired furnace. The initial plan for testing at a coal-fired pilot-scale furnace was replaced by testing in a power plant, because operation at the power plant is continuous and more stable. The first two-year effort was completed with the successful development of the sensor and measurement system, and successful testing in a muffle furnace. Because of the potentially high cost of sensor fabrication, a different type of sensor was used and tested in a power plant burning eastern bituminous coals. This report summarizes the experiences and results of the first two years of the three-year project, which include laboratory

  7. Novel Corrosion Sensor for Vision 21 Systems

    SciTech Connect

    Heng Ban; Bharat Soni

    2007-03-31

    Advanced sensor technology is identified as a key component of advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high-temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the leading mechanism for boiler tube failures and has emerged as a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall goal of this project is to develop a technology for on-line fireside corrosion monitoring. This objective is achieved by the laboratory development of sensors and instrumentation, testing them in a laboratory muffle furnace, and eventually testing the system in a coal-fired furnace. This project successfully developed two types of sensors and measurement systems and tested them in a muffle furnace in the laboratory. The capacitance sensor had a high fabrication cost and might be more appropriate for other applications. The low-cost resistance sensor was tested in a power plant burning eastern bituminous coals. The results show that the fireside corrosion measurement system can be used to determine the corrosion rate at waterwall and superheater locations. Electron microscope analysis of the corroded sensor surface provided a detailed picture of the corrosion process.
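
    A resistance-based corrosion sensor typically works like a standard electrical-resistance (ER) probe: as metal is lost, the sensing element thins and its resistance rises. A generic sketch of that relation (the project's actual sensor design and constants are not given in the abstract, so the numbers below are illustrative):

```python
def er_probe_metal_loss(r0_ohm, r_ohm, thickness0_um):
    """Metal loss of an electrical-resistance (ER) probe element.

    For a conducting element of fixed length and width, R = rho*L/(w*t),
    so thickness scales inversely with resistance: t = t0 * R0 / R.
    """
    return thickness0_um - thickness0_um * r0_ohm / r_ohm

def corrosion_rate_um_per_yr(loss_um, elapsed_hours):
    """Average corrosion rate, extrapolated to one year (8760 h)."""
    return loss_um / elapsed_hours * 8760.0

# A 5% resistance rise on a 500-um element after one month of exposure.
loss = er_probe_metal_loss(1.000, 1.050, 500.0)   # ≈ 23.8 um of metal lost
rate = corrosion_rate_um_per_yr(loss, 720.0)      # ≈ 290 um/yr average rate
```

    In practice, temperature changes also shift resistance, which is why ER probes pair the exposed element with a shielded reference element at the same temperature.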

  8. Development of a distributed vision system for industrial conditions

    NASA Astrophysics Data System (ADS)

    Weiss, Michael; Schiller, Arnulf; O'Leary, Paul; Fauster, Ewald; Schalk, Peter

    2003-04-01

    This paper presents a prototype system for monitoring quality-relevant aspects of a hot glowing wire during the rolling process. To this end, a measurement system based on machine vision, together with a communication framework integrating distributed measurement nodes, is introduced. As the technological approach, machine vision is used to evaluate the wire quality parameters; for this purpose, an image processing algorithm based on dual Grassmannian coordinates, fitting parallel lines by singular value decomposition, is formulated. Furthermore, a communication framework is presented that implements anonymous tuplespace communication, a private network based on TCP/IP, and a consistent Java implementation of all components used. Additionally, industrial requirements such as real-time communication with IEC 61131-conformant digital I/Os (Modbus TCP/IP protocol), the implementation of a watchdog pattern, and the integration of multiple operating systems (Linux, QNX, and Windows) are outlined. The deployment of this framework to the real-world problem of the wire rolling mill is presented.

  9. Image processing in an enhanced and synthetic vision system

    NASA Astrophysics Data System (ADS)

    Mueller, Rupert M.; Palubinskas, Gintautas; Gemperlein, Hans

    2002-07-01

    'Synthetic Vision' and 'Sensor Vision' complement each other to form an ideal system for the pilot's situation awareness. To fuse the two data sets, the sensor images are first segmented by a k-means algorithm, and features are then extracted by blob analysis. These image features are compared with the features of the projected airport data using fuzzy logic in order to identify the runway in the sensor image and to improve the aircraft navigation data. This step is necessary because the input data, i.e., the position and attitude of the aircraft, are inaccurate. After the runway has been identified, obstacles can be detected using the sensor image. The extracted information is presented to the pilot's display system and combined with the appropriate information from the MMW radar sensor in a subsequent fusion processor. A real-time image processing procedure is discussed and demonstrated with IR measurements of a FLIR system during landing approaches.
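The first step described, k-means segmentation of the sensor image, can be sketched in miniature (this is a generic 1-D k-means over pixel intensities, not the authors' implementation):

```python
# Minimal k-means sketch: cluster scalar pixel intensities into k
# classes, the segmentation step the abstract applies to sensor images.
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Cluster scalar pixel values into k intensity classes."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    labels = [0] * len(values)
    for _ in range(iters):
        # assign each pixel to the nearest cluster center
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # recompute each center as the mean of its assigned pixels
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

# Toy "image": dark background around 10, bright runway region around 200
pixels = [8, 12, 9, 11, 198, 202, 201, 199, 10, 200]
labels, centers = kmeans_1d(pixels, k=2)
```

On real imagery the same loop runs over all pixels (or multichannel features), and the resulting label map feeds the blob-analysis stage.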

  10. Low Cost Night Vision System for Intruder Detection

    NASA Astrophysics Data System (ADS)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionality as well as lower costs. This has made previously more expensive systems, such as night vision, affordable for more businesses and end users. We designed and implemented robust, low-cost night vision systems based on red-green-blue (RGB) colour histograms for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel-compatible notebook computers running the Ubuntu Linux operating system with less than 8 GB of RAM. The systems were tested against human intruders under low-light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
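The histogram-based detection idea can be illustrated with a small sketch (names and the threshold are invented for illustration; the paper's OpenCV pipeline is not reproduced here): compare the colour histogram of the current frame against a background reference and raise an alarm on large deviations.

```python
# Hedged sketch of histogram-based intruder detection: an intruder
# changes the frame's intensity distribution relative to the stored
# background histogram. Threshold and data below are illustrative.

def histogram(channel_values, bins=16, vmax=256):
    """Normalised histogram of one colour channel (values 0..255)."""
    counts = [0] * bins
    for v in channel_values:
        counts[min(v * bins // vmax, bins - 1)] += 1
    total = len(channel_values)
    return [c / total for c in counts]

def hist_distance(h1, h2):
    """L1 distance between two normalised histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def intruder_present(background, frame, threshold=0.5):
    return hist_distance(histogram(background), histogram(frame)) > threshold

# Dark background pixels; a bright intruder shifts the histogram
background = [20] * 90 + [30] * 10
frame_with_intruder = [20] * 60 + [30] * 10 + [220] * 30
```

In practice the same comparison runs per RGB channel on each video frame, with the background histogram updated slowly to track lighting changes.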

  11. Forest fire autonomous decision system based on fuzzy logic

    NASA Astrophysics Data System (ADS)

    Lei, Z.; Lu, Jianhua

    2010-11-01

    The proposed system integrates GPS/pseudolite/IMU and a thermal camera in order to autonomously process imagery through the identification, extraction, and tracking of forest fires or hot spots. The airborne detection platform, the graph-based algorithms, and the signal processing framework are analyzed in detail; in particular, the rules of the decision function are expressed in terms of fuzzy logic, an appropriate method for expressing imprecise knowledge. The membership functions and weights of the rules are fixed through a supervised learning process. The perception system is based on a network of sensorial stations and central stations. The sensorial stations collect data including infrared and visual images and meteorological information; the central stations exchange data to perform distributed analysis. The experimental results show that the working procedure of the detection system is reasonable and that it can accurately output detection alarms and the computation of infrared oscillations.
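How a fuzzy decision function combines imprecise evidence can be sketched as follows (the membership shapes, rules, and weights here are invented for illustration; the paper fixes its own through supervised learning):

```python
# Illustrative Mamdani-style fuzzy rule evaluation: AND = min,
# rule aggregation = weighted max. Inputs are normalised sensor
# readings; all parameters below are made up for the example.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fire_alarm_degree(ir_intensity, smoke_level):
    """Degree of membership in 'fire alarm' from two fuzzy rules."""
    ir_high = tri(ir_intensity, 0.5, 1.0, 1.5)      # "IR is high"
    smoke_high = tri(smoke_level, 0.4, 1.0, 1.6)    # "smoke is high"
    rule1 = min(ir_high, smoke_high)     # IF IR high AND smoke high
    rule2 = 0.6 * ir_high                # IF IR high alone, weight 0.6
    return max(rule1, rule2)

degree_fire = fire_alarm_degree(0.9, 0.9)    # strong evidence
degree_quiet = fire_alarm_degree(0.1, 0.1)   # no evidence
```

A supervised learning pass, as in the paper, would tune the triangle parameters and rule weights against labelled fire/no-fire data.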

  12. Forest fire autonomous decision system based on fuzzy logic

    NASA Astrophysics Data System (ADS)

    Lei, Z.; Lu, Jianhua

    2009-09-01

    The proposed system integrates GPS/pseudolite/IMU and a thermal camera in order to autonomously process imagery through the identification, extraction, and tracking of forest fires or hot spots. The airborne detection platform, the graph-based algorithms, and the signal processing framework are analyzed in detail; in particular, the rules of the decision function are expressed in terms of fuzzy logic, an appropriate method for expressing imprecise knowledge. The membership functions and weights of the rules are fixed through a supervised learning process. The perception system is based on a network of sensorial stations and central stations. The sensorial stations collect data including infrared and visual images and meteorological information; the central stations exchange data to perform distributed analysis. The experimental results show that the working procedure of the detection system is reasonable and that it can accurately output detection alarms and the computation of infrared oscillations.

  13. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
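The evolve-and-compete loop the abstract describes can be sketched with a minimal differential-evolution implementation (minimising a toy 2-D objective, not one of the engineering simulators the ECM actually drives):

```python
# Minimal differential-evolution sketch: mutate parameter vectors from
# population differences, then let each trial compete with its parent.
import random

def sphere(x):
    """Toy objective standing in for a computational engineering model."""
    return sum(v * v for v in x)

def differential_evolution(f, dim, pop_size=20, gens=200,
                           F=0.7, CR=0.9, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            # mutation a + F*(b - c), with per-dimension crossover
            trial = [a[d] + F * (b[d] - c[d]) if rng.random() < CR
                     else pop[i][d] for d in range(dim)]
            # competition: the trial replaces its parent only if fitter
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(sphere, dim=2)
```

In the ECM setting, each `f` evaluation would be a full simulator run, which is why the method is deployed on large parallel clusters.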

  14. Sensing, Control, and System Integration for Autonomous Vehicles: A Series of Challenges

    NASA Astrophysics Data System (ADS)

    Özgüner, Ümit; Redmill, Keith

    One of the important examples of mechatronic systems can be found in autonomous ground vehicles. Autonomous ground vehicles provide a series of challenges in sensing, control and system integration. In this paper we consider off-road autonomous vehicles, automated highway systems and urban autonomous driving and indicate the unifying aspects. We specifically consider our own experience during the last twelve years in various demonstrations and challenges in attempting to identify unifying themes. Such unifying themes can be observed in basic hierarchies, hybrid system control approaches and sensor fusion techniques.

  15. The Systemic Vision of the Educational Learning

    ERIC Educational Resources Information Center

    Lima, Nilton Cesar; Penedo, Antonio Sergio Torres; de Oliveira, Marcio Mattos Borges; de Oliveira, Sonia Valle Walter Borges; Queiroz, Jamerson Viegas

    2012-01-01

    As the sophistication of technology increases, so does the demand for quality in education. The expectation of quality has promoted a broad range of products and systems, including in education. These factors include the increased diversity in the student body, which requires greater emphasis that allows a simple and dynamic model in the…

  16. NOVEL CORROSION SENSOR FOR VISION 21 SYSTEMS

    SciTech Connect

    Heng Ban

    2004-12-01

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the metal loss caused by chemical reactions on surfaces exposed to the combustion environment. Such corrosion is the leading mechanism for boiler tube failures and has emerged to be a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall objective of this proposed project is to develop a technology for on-line corrosion monitoring based on a new concept. This report describes the initial results from the first-year effort of the three-year study that include laboratory development and experiment, and pilot combustor testing.

  17. HMD digital night vision system for fixed wing fighters

    NASA Astrophysics Data System (ADS)

    Foote, Bobby D.

    2013-05-01

    Digital night sensor technology offers both advantages and disadvantages over standard analog systems. As digital night sensor technology matures and its disadvantages are overcome, the transition away from analog sensors will accelerate with new programs. In response to this growing need, RCEVS is actively investing in digital night vision systems that will provide the performance needed for the future. Rockwell Collins and Elbit Systems of America continue to invest in digital night technology and have completed laboratory, ground, and preliminary flight testing to evaluate the key factors for night vision. These evaluations have led to a summary of the maturity of digital night capability and of the status of the key performance gap between analog and digital systems. Digital night vision systems appear in the roadmaps of future fixed-wing and rotorcraft programs beginning in 2015. This will bring a new set of capabilities that will enhance the pilot's ability to perform night operations with no loss of performance.

  18. Displacement measurement system for inverters using computer micro-vision

    NASA Astrophysics Data System (ADS)

    Wu, Heng; Zhang, Xianmin; Gan, Jinqiang; Li, Hai; Ge, Peng

    2016-06-01

    We propose a practical system for noncontact displacement measurement of inverters using computer micro-vision at the sub-micron scale. The measuring method of the proposed system is based on a fast template matching algorithm with an optical microscope. A laser interferometer measurement (LIM) system is built for comparison. Experimental results demonstrate that the proposed system achieves the same performance as the LIM system while offering higher operability and stability. The measuring accuracy is 0.283 μm.
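Template matching for displacement measurement can be illustrated with normalised cross-correlation, one common matching criterion (shown here in 1-D for brevity; the paper's fast algorithm and 2-D details are not reproduced): the displacement is the shift that maximises the match score.

```python
# Sketch of displacement estimation by template matching: slide the
# template over the signal and report the offset with the highest
# zero-mean normalised cross-correlation (NCC).
import math

def ncc(a, b):
    """Zero-mean normalised cross-correlation of equal-length signals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def find_displacement(signal, template):
    """Return the offset at which the template best matches the signal."""
    scores = [ncc(signal[i:i + len(template)], template)
              for i in range(len(signal) - len(template) + 1)]
    return max(range(len(scores)), key=lambda i: scores[i])

template = [1, 5, 9, 5, 1]               # intensity profile of a marker
signal = [0, 0, 0, 1, 5, 9, 5, 1, 0, 0]  # marker shifted by 3 samples
shift = find_displacement(signal, template)
```

Sub-pixel (and hence sub-micron) resolution is then typically obtained by interpolating the score peak, e.g. with a parabolic fit over neighbouring offsets.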

  19. Telerobotic rendezvous and docking vision system architecture

    NASA Technical Reports Server (NTRS)

    Gravely, Ben; Myers, Donald; Moody, David

    1992-01-01

    This research program has successfully demonstrated a new target label architecture that allows a microcomputer to determine the position, orientation, and identity of an object. It contains a CAD-like database with specific geometric information about the object for approach, grasping, and docking maneuvers. Successful demonstrations were performed selecting and docking an ORU box with either of two ORU receptacles. Small but significant differences were seen between the two camera types used in the program, and camera-sensitive program elements have been identified. The software has been formatted into a new co-autonomy system which provides various levels of operator interaction and promises to allow effective application of telerobotic systems while code improvements continue.

  20. Motor execution detection based on autonomic nervous system responses.

    PubMed

    Marchal-Crespo, Laura; Zimmermann, Raphael; Lambercy, Olivier; Edelmann, Janis; Fluet, Marie-Christine; Wolf, Martin; Gassert, Roger; Riener, Robert

    2013-01-01

    Triggered assistance has been shown to be a successful robotic strategy for provoking motor plasticity, probably because it requires neurologic patients' active participation to initiate a movement involving their impaired limb. Triggered assistance, however, requires sufficient residual motor control to activate the trigger and, thus, is not applicable to individuals with severe neurologic injuries. In these situations, brain- and body-computer interfaces have emerged as promising solutions to control robotic devices. In this paper, we investigate the feasibility of a body-machine interface to detect motor execution by monitoring only the autonomic nervous system (ANS) response. Four physiological signals were measured (blood pressure, breathing rate, skin conductance response, and heart rate) during an isometric pinching task and used to train a classifier based on hidden Markov models. We performed an experiment with six healthy subjects to test the effectiveness of the classifier in detecting rest and active pinching periods. The results showed that movement execution can be accurately classified based only on peripheral autonomic signals, with an accuracy of 84.5%, sensitivity of 83.8%, and specificity of 85.2%. These results are encouraging for further research on the use of the ANS response in body-machine interfaces. PMID:23248174
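The hidden-Markov-model classification idea can be sketched with a toy two-state discrete HMM: score an observation sequence under a "rest" model and an "active" model using the forward algorithm and pick the likelier one. All probabilities below are invented for illustration, not the trained values from the study.

```python
# Toy HMM classifier sketch: forward-algorithm likelihood under two
# competing models. Observations: 0 = low physiological arousal,
# 1 = high arousal. Model parameters are illustrative only.

def forward_likelihood(obs, start, trans, emit):
    """P(obs | model) for a discrete HMM via the forward algorithm."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[p] * trans[p][s] for p in range(n))
                 for s in range(n)]
    return sum(alpha)

rest_model = dict(start=[0.9, 0.1],
                  trans=[[0.9, 0.1], [0.5, 0.5]],
                  emit=[[0.9, 0.1], [0.6, 0.4]])
active_model = dict(start=[0.2, 0.8],
                    trans=[[0.5, 0.5], [0.1, 0.9]],
                    emit=[[0.4, 0.6], [0.1, 0.9]])

def classify(obs):
    """Label a window of quantised physiological readings."""
    lr = forward_likelihood(obs, **rest_model)
    la = forward_likelihood(obs, **active_model)
    return "active" if la > lr else "rest"

label_active = classify([1, 1, 1, 0, 1])
label_rest = classify([0, 0, 0, 1, 0])
```

In the study's setting, the observations would be feature vectors derived from the four ANS signals and the model parameters would come from training data.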

  1. Autonomous Flight Safety System September 27, 2005, Aircraft Test

    NASA Technical Reports Server (NTRS)

    Simpson, James C.

    2005-01-01

    This report describes the first aircraft test of the Autonomous Flight Safety System (AFSS). The test was conducted on September 27, 2005, near Kennedy Space Center (KSC) using a privately-owned single-engine plane and evaluated the performance of several basic flight safety rules using real-time data onboard a moving aerial vehicle. This test follows the first road test of AFSS conducted in February 2005 at KSC. AFSS is a joint KSC and Wallops Flight Facility (WFF) project that is in its third phase of development. AFSS is an independent subsystem intended for use with Expendable Launch Vehicles that uses tracking data from redundant onboard sensors to autonomously make flight termination decisions using software-based rules implemented on redundant flight processors. The goals of this project are to increase capabilities by allowing launches from locations that do not have or cannot afford extensive ground-based range safety assets, to decrease range costs, and to decrease reaction time for special situations. The mission rules are configured for each operation by the responsible Range Safety authorities and can be loosely grouped in four major categories: Parameter Threshold Violations; Physical Boundary Violations (present position and instantaneous impact point); Gate Rules (static and dynamic); and a Green-Time Rule. Examples of each of these rules were evaluated during this aircraft test.

  2. Low Temperature Shape Memory Alloys for Adaptive, Autonomous Systems Project

    NASA Technical Reports Server (NTRS)

    Falker, John; Zeitlin, Nancy; Williams, Martha; Benafan, Othmane; Fesmire, James

    2015-01-01

    The objective of this joint activity between Kennedy Space Center (KSC) and Glenn Research Center (GRC) is to develop and evaluate the applicability of 2-way SMAs in proof-of-concept, low-temperature adaptive autonomous systems. As part of this low technology readiness level (TRL) activity, we will develop and train novel low-temperature, 2-way shape memory alloys (SMAs) with actuation temperatures ranging from 0 °C to 150 °C. These experimental alloys will also be preliminarily tested to evaluate their performance parameters and transformation (actuation) temperatures in low-temperature or cryogenic adaptive proof-of-concept systems. The challenge will be in the development, design, and training of the alloys for 2-way actuation at those temperatures.

  3. High accuracy autonomous navigation using the global positioning system (GPS)

    NASA Technical Reports Server (NTRS)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to improving the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the Small Satellite Technology Initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. The position accuracy is expected to improve to 2 m if corrections are provided by the GPS wide area augmentation system.

  4. A VISION of Advanced Nuclear System Cost Uncertainty

    SciTech Connect

    J'Tia Taylor; David E. Shropshire; Jacob J. Jacobson

    2008-08-01

    VISION (VerifIable fuel cycle SImulatiON) is the Advanced Fuel Cycle Initiative’s and Global Nuclear Energy Partnership Program’s nuclear fuel cycle systems code designed to simulate the US commercial reactor fleet. The code is a dynamic stock and flow model that tracks the mass of materials at the isotopic level through the entire nuclear fuel cycle. As VISION is run, it calculates the decay of 70 isotopes including uranium, plutonium, minor actinides, and fission products. VISION.ECON is a sub-model of VISION that was developed to estimate fuel cycle and reactor costs. The sub-model uses the mass flows generated by VISION for each of the fuel cycle functions (referred to as modules) and calculates the annual cost based on cost distributions provided by the Advanced Fuel Cycle Cost Basis Report. Costs are aggregated for each fuel cycle module, and the modules are aggregated into front end, back end, recycling, reactor, and total fuel cycle costs. The software also has the capability to perform system sensitivity analysis, which may be used to analyze the impacts on costs due to system uncertainty. This paper provides a preliminary evaluation of the cost uncertainty effects attributable to 1) key reactor and fuel cycle system parameters and 2) scheduling variations. The evaluation focuses on the uncertainty in the total cost of electricity and fuel cycle costs. First, a single light water reactor (LWR) using mixed oxide fuel is examined to ascertain the effects of simple parameter changes. Three system parameters (burnup, capacity factor, and reactor power) are varied from nominal values, and the effect on the total cost of electricity is measured. These simple parameter changes are then measured in more complex scenarios: 2-tier systems including LWRs with mixed fuel and fast recycling reactors using transuranic fuel. Other system parameters are evaluated and results will be presented in the paper. Secondly, the uncertainty due to
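The kind of uncertainty propagation described, sampling per-module cost distributions and aggregating them, can be sketched as a small Monte Carlo exercise. The module names and triangular ranges below are invented for illustration and are not values from the Cost Basis Report.

```python
# Hedged Monte Carlo sketch: draw each fuel-cycle module's unit cost
# from a triangular distribution, sum to a total, and inspect the
# spread. All numbers are illustrative placeholders.
import random

def sample_total_cost(modules, rng):
    """One Monte Carlo draw of total cost from per-module triangles."""
    return sum(rng.triangular(lo, hi, mode)
               for (lo, mode, hi) in modules.values())

modules = {           # (low, mode, high) in mills/kWh, illustrative
    "front_end": (4.0, 6.0, 9.0),
    "reactor":   (30.0, 40.0, 55.0),
    "back_end":  (2.0, 4.0, 8.0),
}
rng = random.Random(42)
draws = [sample_total_cost(modules, rng) for _ in range(5000)]
mean_cost = sum(draws) / len(draws)
spread = max(draws) - min(draws)
```

Repeating such draws over VISION.ECON's actual mass flows and distributions is what yields the cost-uncertainty bands the paper evaluates.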

  5. International Border Management Systems (IBMS) Program : visions and strategies.

    SciTech Connect

    McDaniel, Michael; Mohagheghi, Amir Hossein

    2011-02-01

    Sandia National Laboratories (SNL), International Border Management Systems (IBMS) Program is working to establish a long-term border security strategy with United States Central Command (CENTCOM). Efforts are being made to synthesize border security capabilities and technologies maintained at the Laboratories, and coordinate with subject matter expertise from both the New Mexico and California offices. The vision for SNL is to provide science and technology support for international projects and engagements on border security.

  6. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  7. The advantages of stereo vision in a face recognition system

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2014-06-01

    Humans recognize a face with binocular vision, while computers typically use a single face image. It is known that the performance of face recognition (by a computer) can be improved using score fusion of multimodal images and multiple algorithms. A natural question is: can we apply stereo vision to a face recognition system? Human binocular vision has many advantages, such as stereopsis (3D vision), binocular summation, and singleness of vision including fusion of binocular images (the cyclopean image). For face recognition, a 3D face or 3D facial features are typically computed from a pair of stereo images. In human visual processing, binocular summation and singleness of vision are similar to image fusion processes. In this paper, we propose an advanced face recognition system with stereo imaging capability, comprising two 2-in-1 multispectral (visible and thermal) cameras and three recognition algorithms (circular Gaussian filter, face pattern byte, and linear discriminant analysis [LDA]). Specifically, we present and compare stereo fusion at three levels (images, features, and scores) using the stereo images from the left and right cameras. Image fusion is achieved with three methods (Laplacian pyramid, wavelet transform, average); feature fusion is done with three logical operations (AND, OR, XOR); and score fusion is implemented with four classifiers (LDA, k-nearest neighbor, support vector machine, binomial logistic regression). System performance is measured by the probability of correct classification (PCC) rate (reported as the accuracy rate in this paper) and the false accept rate (FAR). The proposed approaches were validated with a multispectral stereo face dataset from 105 subjects. Experimental results show that any type of stereo fusion can improve the PCC while reducing the FAR. Stereo image/feature fusion appears superior to stereo score fusion in terms of recognition performance. Further score fusion after image
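Score-level fusion, the simplest of the three fusion levels compared, can be sketched as follows (a plain per-subject score average stands in for the paper's trained classifiers; names and scores are illustrative):

```python
# Hedged sketch of score-level stereo fusion: each camera's matcher
# produces one similarity score per enrolled subject, and the fused
# decision takes the identity with the highest mean score.

def fuse_scores(score_lists):
    """Average matcher scores element-wise across cameras/algorithms."""
    n = len(score_lists)
    return [sum(s) / n for s in zip(*score_lists)]

def identify(score_lists):
    """Return the index of the enrolled subject with the top fused score."""
    fused = fuse_scores(score_lists)
    return max(range(len(fused)), key=lambda i: fused[i])

# Similarity scores for 3 enrolled subjects, from left and right cameras
left_scores = [0.40, 0.70, 0.30]
right_scores = [0.45, 0.90, 0.20]
subject = identify([left_scores, right_scores])
```

Image-level and feature-level fusion work further upstream: the left/right images (or extracted features) are combined first and a single matcher then scores the fused representation.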

  8. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision

    NASA Astrophysics Data System (ADS)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-01

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike in conventional active vision systems that use a large number of images with variations of projected patterns for dense range map acquisition or from conventional passive vision systems that work well on specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
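The dynamic-programming matching mentioned for the regions between laser patterns can be illustrated with a generic scanline disparity DP (this is a standard Viterbi-style formulation, not the paper's exact cost function): each pixel's disparity trades off the left/right intensity match against smoothness between neighbours.

```python
# Illustrative scanline stereo DP: pick one disparity per pixel that
# minimises matching cost plus a smoothness penalty, via Viterbi-style
# dynamic programming with backtracking.

def dp_scanline_disparity(left, right, max_disp=3, smooth=1.0):
    n = len(left)
    disps = range(max_disp + 1)
    BIG = float("inf")

    def match(i, d):
        # cost of matching left pixel i to right pixel i - d
        return abs(left[i] - right[i - d]) if i - d >= 0 else BIG

    cost = [match(0, d) for d in disps]   # best cost ending in disparity d
    back = []
    for i in range(1, n):
        prev, cost, ptr = cost, [], []
        for d in disps:
            best_p = min(disps, key=lambda p: prev[p] + smooth * abs(p - d))
            cost.append(match(i, d) + prev[best_p] + smooth * abs(best_p - d))
            ptr.append(best_p)
        back.append(ptr)
    # backtrack the minimising disparity sequence
    d = min(disps, key=lambda dd: cost[dd])
    out = [d]
    for ptr in reversed(back):
        d = ptr[d]
        out.append(d)
    return out[::-1]

# Right scanline is the left one shifted by 2 pixels
left_line = [0, 0, 10, 50, 90, 50, 10, 0]
right_line = [10, 50, 90, 50, 10, 0, 0, 0]
disparities = dp_scanline_disparity(left_line, right_line)
```

In the paper's fusion scheme, the laser-pattern correspondences found in the first stage would anchor this DP, constraining the search in the textureless regions between patterns.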

  9. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike in conventional active vision systems that use a large number of images with variations of projected patterns for dense range map acquisition or from conventional passive vision systems that work well on specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  10. Machine vision system for automated detection of stained pistachio nuts

    NASA Astrophysics Data System (ADS)

    Pearson, Tom C.

    1995-01-01

    A machine vision system was developed to separate stained pistachio nuts, which comprise about 5% of the California crop, from unstained nuts. The system may be used to reduce the labor involved with manual grading or to remove aflatoxin-contaminated product from low-grade process streams. The system was tested on two different pistachio process streams: the bi-chromatic color sorter reject stream and the small nut shelling stock stream. The system had a minimum overall error rate of 14% for the bi-chromatic sorter reject stream and 15% for the small shelling stock stream.

  11. Smart LED light source driver for machine vision system

    NASA Astrophysics Data System (ADS)

    Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2008-02-01

    The unique properties of LEDs offer significant advantages in terms of lifetime, intensity and color control, response time, and efficiency, all of which are very important for illumination in machine vision applications. However, there are some drawbacks of LEDs, such as the high thermal dependency and temporal degradation of the intensity and color. Dealing with these drawbacks requires complex LED drivers, which are able to compensate for the abovementioned changes in the intensity and color, thereby maintaining higher stability over a wide range of ambient temperatures throughout the lifetime of an LED light source. Moreover, state-of-the-art machine vision systems usually consist of a large number of independent LED light sources that enable real-time switching between different illumination setups at frequencies of up to 100 kHz. In this paper, we discuss the concepts of smart LED drivers with emphasis on flexibility and applicability. All the most important characteristics are considered and discussed in detail: the accurate generation of high-frequency waveforms, the efficiency of the current driver, thermal and temporal stabilization of the LED intensity and color, communication with a camera and personal computer or embedded system, and practicalities of implementing a large number of independent drive channels. Finally, a practical solution addressing all of the abovementioned issues is proposed with the aim of providing a flexible and highly stable smart LED light source driver for state-of-the-art machine vision systems.

  12. Vector disparity sensor with vergence control for active vision systems.

    PubMed

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performances in terms of frame rate, resource utilization, and accuracy of the presented approaches are discussed. On the basis of these results, our study indicates that the gradient-based approach leads to the best trade-off choice for the integration with the active vision system.
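The gradient-based (luminance) disparity idea the paper favours can be sketched in 1-D (a generic Lucas-Kanade-style least-squares estimate, not the FPGA implementation): for a small horizontal shift d, I_right(x) ≈ I_left(x) + d·dI/dx, so d follows from the image gradients over a window.

```python
# Minimal gradient-based disparity sketch: estimate a constant
# horizontal shift between two scanlines by least squares on the
# brightness-constancy linearisation I_r(x) ~ I_l(x) + d * dI/dx.

def gradient_disparity(left, right, lo, hi):
    """Least-squares shift estimate over window [lo, hi)."""
    num = den = 0.0
    for x in range(lo, hi):
        ix = (left[x + 1] - left[x - 1]) / 2.0   # central spatial gradient
        it = right[x] - left[x]                   # left/right difference
        num += ix * it
        den += ix * ix
    return num / den

# A smooth intensity ramp shifted by exactly one pixel between views
left_line = [x * x / 10.0 for x in range(12)]
right_line = [(x + 1) * (x + 1) / 10.0 for x in range(12)]
d_est = gradient_disparity(left_line, right_line, 2, 9)
```

The linearisation only holds for small shifts, which is why the paper's engines use multiscale (coarse-to-fine) versions to cover larger disparities.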

  13. Vector Disparity Sensor with Vergence Control for Active Vision Systems

    PubMed Central

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P.; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performances in terms of frame rate, resource utilization, and accuracy of the presented approaches are discussed. On the basis of these results, our study indicates that the gradient-based approach leads to the best trade-off choice for the integration with the active vision system. PMID:22438737

  14. Advanced data management design for autonomous telerobotic systems in space using spaceborne symbolic processors

    NASA Technical Reports Server (NTRS)

    Goforth, Andre

    1987-01-01

    The use of computers in autonomous telerobots is reaching the point where advanced distributed processing concepts and techniques are needed to support the functioning of Space Station era telerobotic systems. Three major issues that affect the design of data management functions in a telerobot are covered. The paper also presents a design concept that incorporates an intelligent systems manager (ISM) running on a spaceborne symbolic processor (SSP) to address these issues. The first issue is the support of a system-wide control architecture or control philosophy. Salient features of two candidates are presented that impose constraints on data management design. The second issue is the role of data management in terms of system integration. This refers to providing shared or coordinated data processing and storage resources to a variety of telerobotic components such as vision, mechanical sensing, real-time coordinated multiple limb and end effector control, and planning and reasoning. The third issue is hardware that supports symbolic processing in conjunction with standard data I/O and numeric processing. An SSP that is currently seen to be technologically feasible and is being developed is described and used as a baseline in the design concept.

  15. Application of an autonomous landing guidance system for civil and military aircraft

    NASA Astrophysics Data System (ADS)

    Franklin, Michael R.

    1995-06-01

    The ever-increasing demand in the airline industry to reduce the costs associated with weather-related flight delays and cancellations has resulted in the need to be able to land an aircraft in low visibility. This has influenced research in recent years on the development of enhanced vision systems which allow all-weather operations by providing both visual cues to the pilot and an independent integrity monitor. This research has focused on providing aircraft users with both enhanced performance and a cost-effective landing solution with less dependence on ground systems, and has interested both the military and civil aircraft operator communities. The Autonomous Landing Guidance (ALG) system provides the capability to land in low visibility by displaying to the pilot an image of the real world, without the need for an onboard Category II or III (CAT II/III) autoland system and without the associated ground facilities normally required. Besides the inherent advantage of avoiding the cost of expensive installations at airports, ALG also helps address the airport capacity problem, weather-related delays and diversions, and airport closures. Low visibility conditions typically cause the complete shutdown of smaller regional airports and reduce the availability of runways at major hubs, which creates a capacity problem for airlines.

  16. Autonomous control systems: applications to remote sensing and image processing

    NASA Astrophysics Data System (ADS)

    Jamshidi, Mohammad

    2001-11-01

    One of the main challenges of any control (or image processing) paradigm is being able to handle complex systems under unforeseen uncertainties. A system may be called complex here if its dimension (order) is too high and its model (if available) is nonlinear and interconnected, and information on the system is uncertain, such that classical techniques cannot easily handle the problem. Examples of complex systems are power networks, space robotic colonies, the national air traffic control system, an integrated manufacturing plant, the Hubble Telescope, the International Space Station, etc. Soft computing, a consortium of methodologies such as fuzzy logic, neuro-computing, genetic algorithms and genetic programming, has proven to be a powerful tool for adding autonomy and semi-autonomy to many complex systems. For such systems the size of the soft computing control architecture will be nearly infinite. In this paper, new paradigms using soft computing approaches are utilized to design autonomous controllers and image enhancers for a number of application areas. These applications are satellite array formations for synthetic aperture radar interferometry (InSAR) and the enhancement of analog and digital images.

  17. [A Meridian Visualization System Based on Impedance and Binocular Vision].

    PubMed

    Su, Qiyan; Chen, Xin

    2015-03-01

    To ensure that the meridian can be measured and displayed correctly on the human body surface, a visualization method based on impedance and binocular vision is proposed. First, an alternating constant-current source injects a current signal into the human skin surface; then, exploiting the low-impedance characteristics of the meridian, a multi-channel detecting instrument measures the voltage across each pair of electrodes, thereby locating the channel of the meridian, and the data are transmitted to the host computer over a serial port. Secondly, the intrinsic and extrinsic parameters of the cameras are obtained by Zhang's camera calibration method, 3D information on the meridian location is obtained by corner selection and matching of the optical target, and the coordinates of this 3D information are then transformed according to the binocular vision principle. Finally, curve fitting and image fusion technology are used to realize the meridian visualization. Test results show that the system can realize real-time detection and accurate display of the meridian. PMID:26524777
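
    The "binocular vision principle" step above, recovering 3D coordinates from matched corners in two calibrated views, is commonly implemented by linear (DLT) triangulation. Below is a minimal numpy sketch, assuming the two 3x4 camera projection matrices have already been assembled from Zhang-calibrated intrinsics and extrinsics; all numeric values in the usage example are illustrative, not from the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation: a pixel observation (u, v) under a 3x4
    projection P gives two linear constraints u*P[2]-P[0] and v*P[2]-P[1]
    on the homogeneous 3D point X; the SVD null vector solves A X = 0."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # right singular vector of smallest sigma
    return X[:3] / X[3]           # dehomogenize
```

    With noisy corner matches the SVD solution minimizes an algebraic residual; a practical system would refine it with a reprojection-error minimization.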

  18. Scene segmentation in a machine vision system for histopathology

    NASA Astrophysics Data System (ADS)

    Thompson, Deborah B.; Bartels, H. G.; Haddad, J. W.; Bartels, Peter H.

    1990-07-01

    Algorithms and procedures employed to attain reliable and exhaustive segmentation in histopathologic imagery of colon and prostate sections are detailed. The algorithms are controlled and selectively called by a scene segmentation expert system as part of a machine vision system for the diagnostic interpretation of histopathologic sections. At this time, effective segmentation of scenes of glandular tissues is produced, with the system being conservative in the identification of glands; for the segmentation of overlapping glandular nuclei an overall success rate of approximately 90% has been achieved.

  19. Fiber optic coherent laser radar 3d vision system

    SciTech Connect

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-12-31

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
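
    As background on the FMCW principle this sensor relies on: the echo, delayed by the round-trip time 2R/c, beats against the outgoing chirp at a frequency proportional to range, so ranging reduces to a frequency measurement. A minimal sketch follows; the sweep parameters in the test are illustrative, not the paper's radar parameters.

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz, sweep_bw_hz, sweep_time_s):
    """FMCW ranging: a linear chirp of bandwidth B over time T has slope
    B/T; an echo delayed by 2R/c beats against it at f_b = (B/T)*(2R/c),
    so R = c * f_b * T / (2 * B)."""
    round_trip_s = f_beat_hz * sweep_time_s / sweep_bw_hz
    return C * round_trip_s / 2.0
```

    For example, a 100 GHz sweep over 1 ms puts a 10 m target at a beat frequency of roughly 6.7 MHz, comfortably within digital processing rates.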

  20. Autonomic nervous system dysfunction: implication in sickle cell disease.

    PubMed

    Connes, Philippe; Coates, Thomas D

    2013-03-01

    Sickle cell disease is an inherited hemoglobinopathy caused by a single amino acid substitution in the β chain of hemoglobin that causes the hemoglobin to polymerize in the deoxy state. The resulting rigid, sickle-shaped red cells obstruct blood flow causing hemolytic anemia, tissue damage, and premature death. Hemolysis is continual. However, acute exacerbations of sickling called vaso-occlusive crises (VOC) resulting in severe pain occur, often requiring hospitalization. Blood rheology, adhesion of cellular elements of blood to vascular endothelium, inflammation, and activation of coagulation decrease microvascular flow and increase likelihood of VOC. What triggers the transition from steady state to VOC is unknown. This review discusses the interaction of blood rheological factors and the role that autonomic nervous system (ANS) induced vasoconstriction may have in triggering crisis as well as the mechanism of ANS dysfunction in SCD. PMID:23643396

  1. Autonomous satellite navigation methods using the Global Positioning Satellite System

    NASA Technical Reports Server (NTRS)

    Murata, M.; Tapley, B. D.; Schutz, B. E.

    1982-01-01

    This investigation considers the problem of autonomous satellite navigation using the NAVSTAR Global Positioning System (GPS). The major topics covered include the design, implementation, and validation of onboard navigation filter algorithms by means of computer simulations. The primary errors that the navigation filter design must minimize are computational effects and modeling inaccuracies due to the limited capability of the onboard computer. The minimization of the effect of these errors is attained by applying the sequential extended Kalman filter using a factored covariance implementation with Q-matrix or dynamical model compensations. Performance evaluation of the navigation filter design is carried out using both the CDC Cyber 170/750 computer and the PDP-11/60 computer. The results are obtained assuming the Phase I GPS constellation, consisting of six satellites, and a Landsat-D type spacecraft as the model for the user satellite orbit.
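
    The filter structure described, a sequential Kalman filter sized for a weak onboard computer, processes measurements one scalar at a time so that no matrix inversion is needed in the update. A linear toy sketch of that sequential update, plus a Q-matrix time update, is given below; the factored-covariance bookkeeping (e.g. a UDU^T decomposition) and the actual GPS measurement model are omitted, and the toy model in the test is hypothetical.

```python
import numpy as np

def sequential_update(x, P, zs, hs, r):
    """Process each scalar measurement z = h.x + v in turn; the
    innovation variance s is a scalar, so no matrix inversion occurs."""
    for z, h in zip(zs, hs):
        s = h @ P @ h + r            # innovation variance (scalar)
        k = P @ h / s                # Kalman gain (vector)
        x = x + k * (z - h @ x)
        P = P - np.outer(k, h @ P)
    return x, P

def predict(x, P, F, Q):
    """Time update; the Q matrix compensates unmodeled dynamics."""
    return F @ x, F @ P @ F.T + Q
```

    An extended Kalman filter replaces F and h with Jacobians of the orbit dynamics and pseudorange model evaluated along the current estimate; the scalar-at-a-time structure is unchanged.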

  2. The Montana ALE (Autonomous Lunar Excavator) Systems Engineering Report

    NASA Technical Reports Server (NTRS)

    Hull, Bethanne J.

    2012-01-01

    On May 21-26, 2012, the third annual NASA Lunabotics Mining Competition will be held at the Kennedy Space Center in Florida. This event brings together student teams from universities around the world to compete in an engineering challenge. Each team must design, build and operate a robotic excavator that can collect artificial lunar soil and deposit it at a target location. Montana State University, Bozeman, is one of the institutions selected to field a team this year. This paper will summarize the goals of MSU's lunar excavator project, known as the Autonomous Lunar Excavator (ALE), along with the engineering process that the MSU team is using to fulfill these goals, according to NASA's systems engineering guidelines.

  3. Existence and continuation of periodic solutions of autonomous Newtonian systems

    NASA Astrophysics Data System (ADS)

    Fura, Justyna; Ratajczak, Anna; Rybicki, Sławomir

    In this article, we study the existence and the continuation of periodic solutions of autonomous Newtonian systems. To prove the results we apply the infinite-dimensional version of the degree for SO(2)-equivariant gradient operators defined by the third author in Nonlinear Anal. Theory Methods Appl. 23(1) (1994) 83-102 and developed in Topol. Meth. Nonlinear Anal. 9(2) (1997) 383-417. Using the results due to Rabier [Symmetries, Topological degree and a Theorem of Z.Q. Wang, J. Math. 24(3) (1994) 1087-1115] and Wang [Symmetries and calculation of the degree, Chinese Ann. Math. 10 (1989) 520-536] we show that the Leray-Schauder degree is not applicable in the proofs of our theorems, because it vanishes.

  4. Challenges in verification and validation of autonomous systems for space exploration

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Jonsson, Ari

    2005-01-01

    Space exploration applications offer a unique opportunity for the development and deployment of autonomous systems, due to limited communications, large distances, and great expense of direct operation. At the same time, the risk and cost of space missions leads to reluctance to taking on new, complex and difficult-to-understand technology. A key issue in addressing these concerns is the validation of autonomous systems. In recent years, higher-level autonomous systems have been applied in space applications. In this presentation, we will highlight those autonomous systems, and discuss issues in validating these systems. We will then look to future demands on validating autonomous systems for space, identify promising technologies and open issues.

  5. Autonomous self-powered structural health monitoring system

    NASA Astrophysics Data System (ADS)

    Qing, Xinlin P.; Anton, Steven R.; Zhang, David; Kumar, Amrita; Inman, Daniel J.; Ooi, Teng K.

    2010-03-01

    Structural health monitoring technology is perceived as a revolutionary method of determining the integrity of structures involving the use of multidisciplinary fields including sensors, materials, system integration, signal processing and interpretation. The core of the technology is the development of self-sufficient systems for the continuous monitoring, inspection and damage detection of structures with minimal labor involvement. A major drawback of the existing technology for real-time structural health monitoring is the requirement for external electrical power input. For some applications, such as missiles or combat vehicles in the field, this factor can drastically limit the use of the technology. Having an on-board electrical power source that is independent of the vehicle power system can greatly enhance the SHM system and make it a completely self-contained system. In this paper, using the SMART layer technology as a basis, an Autonomous Self-powered (ASP) Structural Health Monitoring (SHM) system has been developed to solve the major challenge facing the transition of SHM systems into field applications. The architecture of the self-powered SHM system was first designed. There are four major components included in the SHM system: SMART Layer with sensor network, low power consumption diagnostic hardware, rechargeable battery with energy harvesting device, and host computer with supporting software. A prototype of the integrated self-powered active SHM system was built for performance and functionality testing. Results from the evaluation tests demonstrated that a fully charged battery system is capable of powering the SHM system for active scanning up to 10 hours.

  6. ANTS: Exploring the Solar System with an Autonomous Nanotechnology Swarm

    NASA Technical Reports Server (NTRS)

    Clark, P. E.; Curtis, S.; Rilee, M.; Truszkowski, W.; Marr, G.

    2002-01-01

    ANTS (Autonomous Nano-Technology Swarm), a NASA advanced mission concept, calls for a large (1000 member) swarm of pico-class (1 kg) totally autonomous spacecraft to prospect the asteroid belt. Additional information is contained in the original extended abstract.

  7. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    SciTech Connect

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-12-31

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as 'acceptable' or 'suspect'. Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  8. 75 FR 38863 - Tenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-06

    ... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation Administration (FAA.../Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of a... Systems/Synthetic Vision Systems (EFVS/SVS) meeting. The agenda will include: Tuesday, 27 July...

  9. Low Cost Vision Based Personal Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Amami, M. M.; Smith, M. J.; Kokkas, N.

    2014-03-01

    Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited due to the high cost and dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, using low-cost GNSS and inertial sensors to provide the bundle adjustment solution with initial values. The system has the potential to be used indoors and outdoors. It has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates with an accuracy better than 10 cm.

  10. R-MASTIF: robotic mobile autonomous system for threat interrogation and object fetch

    NASA Astrophysics Data System (ADS)

    Das, Aveek; Thakur, Dinesh; Keller, James; Kuthirummal, Sujit; Kira, Zsolt; Pivtoraiko, Mihail

    2013-01-01

    Autonomous robotic "fetch" operation, where a robot is shown a novel object and then asked to locate it in the field, retrieve it and bring it back to the human operator, is a challenging problem that is of interest to the military. The CANINE competition presented a forum for several research teams to tackle this challenge using the state of the art in robotics technology. The SRI-UPenn team fielded a modified Segway RMP 200 robot with multiple cameras and lidars. We implemented a unique computer vision based approach for textureless colored object training and detection to robustly locate previously unseen objects out to 15 meters on moderately flat terrain. We integrated SRI's state-of-the-art Visual Odometry for GPS-denied localization on our robot platform. We also designed a unique scooping mechanism which allowed retrieval of up to basketball-sized objects with a reciprocating four-bar linkage mechanism. Further, all software, including a novel target localization and exploration algorithm, was developed using ROS (Robot Operating System), which is open source and well adopted by the robotics community. We present a description of the system, our key technical contributions and experimental results.

  11. Laser rangefinders for autonomous intelligent cruise control systems

    NASA Astrophysics Data System (ADS)

    Journet, Bernard A.; Bazin, Gaelle

    1998-01-01

    The purpose of this paper is to show for which kinds of applications laser rangefinders can be used within Autonomous Intelligent Cruise Control systems. Even though laser systems perform well, the safety and technical constraints are very restrictive. As the system is used outdoors, the emitted average output power must respect the rather low Class 1 limit. Obstacle detection and collision avoidance require a 200 meter range. Moreover, bad weather conditions, like rain or fog, are disastrous. We have conducted measurements on laser rangefinders using different targets and at different distances. We can infer that, except for cooperative targets, low-power laser rangefinders are not powerful enough for long-distance measurement. Radars, like 77 GHz systems, are better adapted to such cases. But for short-distance measurement, with a range around 10 meters and a minimum distance around twenty centimeters, laser rangefinders are really useful, with good resolution and rather low cost. Applications include following white lines on the road (the target being easily made cooperative) and detecting vehicles in the vicinity, i.e., car convoy traffic control or parking assistance, where the target surface is indifferent at short distances.

  12. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information, from which the 3D teat position is computed. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system; the best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  13. Configuration assistant for versatile vision-based inspection systems

    NASA Astrophysics Data System (ADS)

    Huesser, Olivier; Hugli, Heinz

    2000-03-01

    Nowadays, vision-based inspection systems are present at many stages of the industrial manufacturing process. Their versatility, which permits them to accommodate a broad range of inspection requirements, is however limited by the time-consuming system setup performed at each production change. This work aims at providing a configuration assistant that helps speed up this system setup, considering the peculiarities of industrial vision systems. The pursued principle, which is to maximize the discriminating power of the features involved in the inspection decision, leads to an optimization problem based on a high-dimensional objective function. Several objective functions based on various metrics are proposed, their optimization being performed with the help of various search heuristics such as genetic methods and simulated annealing methods. The experimental results obtained with an industrial inspection system are presented, considering the particular case of the visual inspection of markings found on top of molded integrated circuits. These results show the effectiveness of the presented objective functions and search methods, and validate the configuration assistant as well.
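
    Of the search heuristics mentioned, simulated annealing is the simplest to sketch. Below is a generic annealing maximizer over a configuration vector; the objective `score` merely stands in for the paper's discriminating-power metrics, which the abstract does not specify, and the step size, cooling schedule and step count are arbitrary toy choices.

```python
import math, random

def anneal(score, w0, steps=2000, t0=1.0, seed=0):
    """Generic simulated annealing maximizer: accept worse configurations
    with probability exp(delta/T), so the search can escape local optima
    early on and behaves greedily as the temperature T cools."""
    rng = random.Random(seed)
    w, best = list(w0), list(w0)
    s = s_best = score(w0)
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9              # linear cooling
        cand = [x + rng.gauss(0, 0.1) for x in w]    # local perturbation
        sc = score(cand)
        if sc > s or rng.random() < math.exp((sc - s) / t):
            w, s = cand, sc
            if s > s_best:
                best, s_best = list(w), s
    return best, s_best
```

    A genetic method would instead evolve a population of configurations; both only require the objective to be evaluable, not differentiable, which suits inspection-score objectives.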

  14. Research on machine vision system of monitoring injection molding processing

    NASA Astrophysics Data System (ADS)

    Bai, Fan; Zheng, Huifeng; Wang, Yuebing; Wang, Cheng; Liao, Si'an

    2016-01-01

    With the wide adoption of the injection molding process, an embedded monitoring system based on machine vision has been developed to automatically monitor abnormalities in injection molding processing. First, the hardware system and embedded software system were designed. Then camera calibration was carried out to establish an accurate model of the camera and correct distortion. Next, a segmentation algorithm was applied to extract the monitored objects of the injection molding process system. The system procedure includes initialization, process monitoring and product detail detection. Finally, the experimental results were analyzed, including the detection rate for various kinds of abnormality. The system can realize multi-zone monitoring and product detail detection of the injection molding process with high accuracy and good stability.

  15. Autonomous perception and decision making in cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Sarkar, Soumik

    2011-07-01

    The cyber-physical system (CPS) is a relatively new interdisciplinary technology area that includes the general class of embedded and hybrid systems. CPSs require integration of computation and physical processes that involves aspects of physical quantities such as time, energy and space during information processing and control. The physical space is the source of information, and the cyber space makes use of the generated information to make decisions. This dissertation proposes an overall architecture for autonomous perception-based decision & control of complex cyber-physical systems. Perception involves the recently developed framework of Symbolic Dynamic Filtering for abstraction of the physical world in the cyber space. For example, under this framework, sensor observations from a physical entity are discretized temporally and spatially to generate blocks of symbols, also called words, that form a language. A grammar of a language is the set of rules that determine the relationships among words to build sentences. Subsequently, a physical system is conjectured to be a linguistic source that is capable of generating a specific language. The proposed technology is validated on various (experimental and simulated) case studies that include health monitoring of aircraft gas turbine engines, detection and estimation of fatigue damage in polycrystalline alloys, and parameter identification. Control of complex cyber-physical systems involves distributed sensing, computation and control, as well as complexity analysis. A novel statistical mechanics-inspired complexity analysis approach is proposed in this dissertation. In such a scenario of networked physical systems, the distribution of physical entities determines the underlying network topology, and the interaction among the entities forms the abstract cyber space. It is envisioned that the general contributions made in this dissertation will be useful for potential application areas such as smart power grids and
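
    The perception layer described, Symbolic Dynamic Filtering, can be caricatured in a few lines: discretize the signal into a symbol alphabet, count fixed-length words, and measure how far a test signal's word distribution drifts from nominal. The toy numpy sketch below simplifies the published framework: the partition is re-derived per signal, and plain total-variation distance stands in for the framework's anomaly measures.

```python
import numpy as np

def symbolize(x, alphabet=4):
    """Uniform partitioning of the signal range into `alphabet` symbols.
    (SDF typically fixes the partition from nominal data; re-derived
    per signal here for brevity.)"""
    edges = np.linspace(x.min(), x.max(), alphabet + 1)[1:-1]
    return np.digitize(x, edges)

def word_distribution(sym, alphabet=4, depth=2):
    """Probability vector over all length-`depth` words -- the 'language'
    generated by the source."""
    counts = np.zeros(alphabet ** depth)
    for i in range(len(sym) - depth + 1):
        idx = 0
        for s in sym[i:i + depth]:
            idx = idx * alphabet + int(s)   # word as a base-`alphabet` number
        counts[idx] += 1
    return counts / counts.sum()

def anomaly(nominal, test, alphabet=4, depth=2):
    """Total-variation distance between nominal and test word distributions."""
    p = word_distribution(symbolize(nominal, alphabet), alphabet, depth)
    q = word_distribution(symbolize(test, alphabet), alphabet, depth)
    return 0.5 * np.abs(p - q).sum()
```

    In a monitoring setting the nominal distribution is learned once from healthy data, and the anomaly measure is tracked online as new windows of sensor data arrive.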

  16. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. 
To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem
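
    For reference, the stereoscopy step in such a pipeline rests on the rectified-stereo relation between disparity and depth. A minimal sketch follows; the focal length and baseline in the test are illustrative values, not this system's calibration.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified stereo: a point at depth Z projects with horizontal
    disparity d = f * B / Z, so Z = f * B / d (larger shift = closer).
    Zero or negative disparities map to infinite depth (no match)."""
    d = np.asarray(disparity_px, dtype=float)
    z = np.full(d.shape, np.inf)
    np.divide(focal_px * baseline_m, d, out=z, where=d > 0)
    return z
```

    Applying this per pixel turns the disparity map into the dense 3D point field that the optical-flow stage then combines with 2D image motion.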

  17. Lagrangian-Hamiltonian unified formalism for autonomous higher order dynamical systems

    NASA Astrophysics Data System (ADS)

    Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso

    2011-09-01

    The Lagrangian-Hamiltonian unified formalism of Skinner and Rusk was originally stated for autonomous dynamical systems in classical mechanics. It has been generalized for non-autonomous first-order mechanical systems, as well as for first-order and higher order field theories. However, a complete generalization to higher order mechanical systems is yet to be described. In this work, after reviewing the natural geometrical setting and the Lagrangian and Hamiltonian formalisms for higher order autonomous mechanical systems, we develop a complete generalization of the Lagrangian-Hamiltonian unified formalism for these kinds of systems, and we use it to analyze some physical models from this new point of view.

  18. 75 FR 71183 - Twelfth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation Administration... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of a meeting of Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight...

  19. An Approach to Autonomous Control for Space Nuclear Power Systems

    SciTech Connect

    Wood, Richard Thomas; Upadhyaya, Belle R.

    2011-01-01

    Under Project Prometheus, the National Aeronautics and Space Administration (NASA) investigated deep space missions that would utilize space nuclear power systems (SNPSs) to provide energy for propulsion and spacecraft power. The initial study involved the Jupiter Icy Moons Orbiter (JIMO), which was proposed to conduct in-depth studies of three Jovian moons. Current radioisotope thermoelectric generator (RTG) and solar power systems cannot meet expected mission power demands, which include propulsion, scientific instrument packages, and communications. Historically, RTGs have provided long-lived, highly reliable, low-power-level systems. Solar power systems can provide much greater levels of power, but power density levels decrease dramatically at ~1.5 astronomical units (AU) and beyond. Alternatively, an SNPS can supply high sustained power for space applications that is both reliable and mass efficient. Terrestrial nuclear reactors employ varying degrees of human control and decision-making for operations and benefit from periodic human interaction for maintenance. In contrast, the control system of an SNPS must be able to provide continuous operation for the mission duration with limited immediate human interaction and no opportunity for hardware maintenance or sensor calibration. In effect, the SNPS control system must be able to independently operate the power plant while maintaining power production even when subject to off-normal events and component failure. This capability is critical because it will not be possible to rely upon continuous, immediate human interaction for control due to communications delays and periods of planetary occlusion. In addition, uncertainties, rare events, and component degradation combine with the aforementioned inaccessibility and unattended operation to pose unique challenges that an SNPS control system must accommodate.
Autonomous control is needed to address these challenges and optimize the reactor control design.

  20. Vision-Based People Detection System for Heavy Machine Applications

    PubMed Central

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-01

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838
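The sequential LiDAR-then-vision cascade described in this abstract can be sketched as follows. This is an illustrative Python sketch under assumed interfaces (`classify_roi` and the cluster fields are stand-ins), not the authors' implementation:

```python
# Sequential fusion sketch: LiDAR clusters propose regions of interest (ROIs),
# and the expensive image classifier runs only inside them, cutting both
# false detections and computational cost.

def lidar_rois(points, max_range=15.0):
    """Keep only clusters close enough to the machine to matter."""
    return [p for p in points if p["range_m"] <= max_range]

def detect_people(points, classify_roi):
    """Run the vision classifier only on LiDAR-proposed ROIs."""
    return [p for p in lidar_rois(points) if classify_roi(p["roi"])]

# Toy classifier: pretend ROIs labelled "person" are positives.
clusters = [
    {"range_m": 4.2, "roi": "person"},
    {"range_m": 9.0, "roi": "pole"},
    {"range_m": 40.0, "roi": "person"},   # too far: never sent to the classifier
]
hits = detect_people(clusters, classify_roi=lambda roi: roi == "person")
```

The design choice mirrors the abstract: the LiDAR gate runs first, so the vision stage only ever sees a small, range-filtered set of candidates.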

  2. Scene interpretation module for an active vision system

    NASA Astrophysics Data System (ADS)

    Remagnino, P.; Matas, J.; Illingworth, John; Kittler, Josef

    1993-08-01

    In this paper an implementation of a high level symbolic scene interpreter for an active vision system is considered. The scene interpretation module uses low level image processing and feature extraction results to achieve object recognition and to build up a 3D environment map. The module is structured to exploit spatio-temporal context provided by existing partial world interpretations and has spatial reasoning to direct gaze control and thereby achieve efficient and robust processing using spatial focus of attention. The system builds and maintains an awareness of an environment which is far larger than a single camera view. Experiments on image sequences have shown that the system can: establish its position and orientation in a partially known environment, track simple moving objects such as cups and boxes, temporally integrate recognition results to establish or forget object presence, and utilize spatial focus of attention to achieve efficient and robust object recognition. The system has been extensively tested using images from a single steerable camera viewing a simple table top scene containing box and cylinder-like objects. Work is currently progressing to further develop its competences and interface it with the Surrey active stereo vision head, GETAFIX.
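The "temporally integrate recognition results to establish or forget object presence" behaviour can be sketched with a simple belief accumulator. This is an illustrative Python sketch; the gain, decay, and thresholds are invented, not taken from the paper:

```python
# Belief about an object's presence rises when a view re-detects it and
# decays when it does not, so presence is established after repeated
# detections and forgotten after repeated misses.

def integrate(belief, detected, gain=0.4, decay=0.15):
    belief = belief + gain * (1 - belief) if detected else belief - decay
    return max(0.0, min(1.0, belief))

belief = 0.0
history = []
# Three detections establish the object; five misses forget it.
for detected in [True, True, True, False, False, False, False, False]:
    belief = integrate(belief, detected)
    history.append(belief)
```

With these illustrative constants the belief crosses an "established" level (say 0.7) after the third detection and falls below a "forgotten" level (say 0.2) by the final miss.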

  3. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    SciTech Connect

    Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind

    2016-01-01

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
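The perturb-and-retest idea at the heart of this abstract can be sketched as follows. This is an illustrative Python sketch: the `detector` is a toy brightness threshold, not the OpenCV HOG detector the paper actually tests.

```python
# Apply perturbations far too small to be visible, re-run the detector, and
# count disagreements between original and perturbed outputs. A high
# disagreement rate flags a non-robust implementation.

import random

def detector(frame):
    """Toy 'human present' rule: mean brightness above a hard threshold."""
    return sum(frame) / len(frame) > 0.5

def robustness_failures(frames, eps=1e-3, trials=100, seed=0):
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        frame = rng.choice(frames)
        perturbed = [x + rng.uniform(-eps, eps) for x in frame]
        if detector(frame) != detector(perturbed):
            failures += 1
    return failures

# Frames sitting near the decision boundary can flip under tiny noise.
frames = [[0.4999, 0.5001], [0.9, 0.8], [0.1, 0.2]]
n_fail = robustness_failures(frames)
```

The fixed seed makes the count reproducible; the paper's contribution is to find such boundary-sitting inputs symbolically rather than by random sampling.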

  4. MOBLAB: a mobile laboratory for testing real-time vision-based systems in path monitoring

    NASA Astrophysics Data System (ADS)

    Cumani, Aldo; Denasi, Sandra; Grattoni, Paolo; Guiducci, Antonio; Pettiti, Giuseppe; Quaglia, Giorgio

    1995-01-01

    In the framework of the EUREKA PROMETHEUS European Project, a Mobile Laboratory (MOBLAB) has been equipped for studying, implementing and testing real-time algorithms which monitor the path of a vehicle moving on roads. Its goal is the evaluation of systems suitable to map the position of the vehicle within the environment where it moves, to detect obstacles, to estimate motion, to plan the path and to warn the driver about unsafe conditions. MOBLAB has been built with the financial support of the National Research Council and will be shared with teams working in the PROMETHEUS Project. It consists of a van equipped with an autonomous power supply, a real-time image processing system, workstations and PCs, B/W and color TV cameras, and TV equipment. This paper describes the laboratory outline and presents the computer vision system and the strategies that have been studied and are being developed at I.E.N. 'Galileo Ferraris'. The system is based on several tasks that cooperate to integrate information gathered from different processes and sources of knowledge. Some preliminary results are presented showing the performance of the system.

  5. DualTrust: A Distributed Trust Model for Swarm-Based Autonomic Computing Systems

    SciTech Connect

    Maiden, Wendy M.; Dionysiou, Ioanna; Frincke, Deborah A.; Fink, Glenn A.; Bakken, David E.

    2011-02-01

    For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, trust management is important for the acceptance of the mobile agent sensors and to protect the system from malicious behavior by insiders and entities that have penetrated network defenses. This paper examines the trust relationships, evidence, and decisions in a representative system and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. We then propose the DualTrust conceptual trust model. By addressing the autonomic manager’s bi-directional primary relationships in the ACS architecture, DualTrust is able to monitor the trustworthiness of the autonomic managers, protect the sensor swarm in a scalable manner, and provide global trust awareness for the orchestrating autonomic manager.
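The scalability argument above (monitor the few autonomic managers, not the many swarming sensors) can be illustrated with a toy trust score. This is an illustrative Python sketch, not the DualTrust specification; the update rule and floor are invented:

```python
# Each autonomic manager's trust is an exponentially weighted average of
# good/bad interaction reports; the orchestrating manager excludes any
# manager whose score falls below a floor.

def update_trust(trust, outcome_good, alpha=0.2):
    """EWMA of binary interaction outcomes."""
    return (1 - alpha) * trust + alpha * (1.0 if outcome_good else 0.0)

def trusted_managers(scores, floor=0.5):
    return {m for m, t in scores.items() if t >= floor}

scores = {"mgrA": 0.9, "mgrB": 0.9}
for _ in range(6):                          # mgrB repeatedly misbehaves
    scores["mgrB"] = update_trust(scores["mgrB"], outcome_good=False)
active = trusted_managers(scores)           # mgrB eventually excluded
```

Because only managers are scored, the bookkeeping grows with the (small) number of managers rather than the (large, churning) population of swarm agents.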

  6. Beam Splitter For Welding-Torch Vision System

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.

    1991-01-01

    Compact welding torch equipped with along-the-torch vision system includes cubic beam splitter to direct preview light on weldment and to reflect light coming from welding scene for imaging. Beam splitter integral with torch; requires no external mounting brackets. Rugged and withstands vibrations and wide range of temperatures. Commercially available, reasonably priced, comes in variety of sizes and optical qualities with antireflection and interference-filter coatings on desired faces. Can provide 50 percent transmission and 50 percent reflection of incident light to exhibit minimal ghosting of image.

  8. Autonomous mine detection system (AMDS) neutralization payload module

    NASA Astrophysics Data System (ADS)

    Majerus, M.; Vanaman, R.; Wright, N.

    2010-04-01

    The Autonomous Mine Detection System (AMDS) program is developing a landmine and explosive hazards standoff detection, marking, and neutralization system for dismounted soldiers. The AMDS Capabilities Development Document (CDD) has identified the requirement to deploy three payload modules for small robotic platforms: mine detection and marking, explosives detection and marking, and neutralization. This paper addresses the neutralization payload module. There are a number of challenges that must be overcome for the neutralization payload module to be successfully integrated into AMDS. The neutralizer must meet stringent size, weight, and power (SWaP) requirements to be compatible with a small robot. The neutralizer must be effective against a broad threat, to include metal and plastic-cased Anti-Personnel (AP) and Anti-Tank (AT) landmines, explosive devices, and Unexploded Explosive Ordnance (UXO). It must adapt to a variety of threat concealments, overburdens, and emplacement methods, to include soil, gravel, asphalt, and concrete. A unique neutralization technology is being investigated for adaptation to the AMDS Neutralization Module. This paper will review this technology and describe how the other two payload modules influence its design to minimize SWaP. Recent modeling and experimental efforts will be included.

  9. Systems and methods for autonomously controlling agricultural machinery

    DOEpatents

    Hoskinson, Reed L.; Bingham, Dennis N.; Svoboda, John M.; Hess, J. Richard

    2003-07-08

    Systems and methods for autonomously controlling agricultural machinery such as a grain combine. The operation components of a combine that function to harvest the grain have characteristics that are measured by sensors. For example, the combine speed, the fan speed, and the like can be measured. An important sensor is the grain loss sensor, which may be used to quantify the amount of grain expelled out of the combine. The grain loss sensor utilizes the fluorescence properties of the grain kernels and the plant residue to identify when the expelled plant material contains grain kernels. The sensor data, in combination with historical and current data stored in a database, is used to identify optimum operating conditions that will result in increased crop yield. After the optimum operating conditions are identified, an on-board computer can generate control signals that will adjust the operation of the components identified in the optimum operating conditions. The changes result in less grain loss and improved grain yield. Also, because new data is continually generated by the sensor, the system has the ability to continually learn such that the efficiency of the agricultural machinery is continually improved.
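The learn-from-history control loop described in this patent abstract can be sketched as follows. This is an illustrative Python sketch: the fan-speed parameter and step size are hypothetical stand-ins for whatever operating components the combine actually adjusts.

```python
# Read the grain-loss sensor, consult past (setting, loss) observations, and
# nudge the current setting toward the one that historically lost the least
# grain. New observations keep extending the history, so the loop keeps learning.

def best_setting(history):
    """history: list of (fan_rpm, grain_loss) observations."""
    return min(history, key=lambda obs: obs[1])[0]

def adjust_fan(current_rpm, history, step=25):
    """Move one bounded step toward the historically best setting."""
    target = best_setting(history)
    if target > current_rpm:
        return current_rpm + step
    if target < current_rpm:
        return current_rpm - step
    return current_rpm

history = [(900, 3.1), (950, 2.2), (1000, 2.6)]
rpm = adjust_fan(current_rpm=900, history=history)   # steps toward 950
```

The bounded step mirrors the cautious, continuous adjustment the abstract describes: the machine converges on the low-loss operating point rather than jumping to it.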

  10. Lost among the trees? The autonomic nervous system and paediatrics.

    PubMed

    Rees, Corinne A

    2014-06-01

    The autonomic nervous system (ANS) has been strikingly neglected in Western medicine. Despite its profound importance for regulation, adjustment and coordination of body systems, it lacks priority in training and practice and receives scant attention in numerous major textbooks. The ANS is integral to manifestations of illness, underlying familiar physical and psychological symptoms. When ANS activity is itself dysfunctional, usual indicators of acute illness may prove deceptive. Recognising the relevance of the ANS can involve seeing the familiar through fresh eyes, challenging assumptions in clinical assessment and in approaches to practice. Its importance extends from physical and psychological well-being to parenting and safeguarding, public services and the functioning of society. Exploration of its role in conditions ranging from neurological, gastrointestinal and connective tissue disorders, diabetes and chronic fatigue syndrome, to autism, behavioural and mental health difficulties may open therapeutic avenues. The ANS offers a mechanism for so-called functional illnesses and illustrates the importance of recognising that 'stress' takes many forms, physical, psychological and environmental, desirable and otherwise. Evidence of intrauterine and post-natal programming of ANS reactivity suggests that neonatal care and safeguarding practice may offer preventive opportunity, as may greater understanding of epigenetic change of ANS activity through, for example, accidental or psychological trauma or infection. The aim of this article is to accelerate recognition of the importance of the ANS throughout paediatrics, and of the potential physical and psychological cost of neglecting it. PMID:24573884

  11. Minimal brain dysfunction, stimulant drugs, and autonomic nervous system activity.

    PubMed

    Zahn, T P; Abate, F; Little, B C; Wender, P H

    1975-03-01

    Autonomic base levels and responsivity to stimuli were investigated in normal and minimally brain dysfunctioned (MBD) children. Continuous recordings of skin conductance, heart rate, skin temperature, and respiration rate were made during rest, at presentation of tones, and when performing a reaction time task. No significant differences in base levels were obtained between normal and MBD children when not taking drugs, but stimulant medication increased skin conductance and heart rate and decreased skin temperature and reaction time. The MBD children were less reactive, autonomically, to all types of stimuli. Stimulant drugs decreased electrodermal responsivity, which was predictable from concurrent changes in baseline skin conductance and skin temperature. The MBD performance deficits are not related to lower autonomic responsivity or lower absolute base levels of arousal, but MBD children may perform better at relatively high autonomic base levels.

  12. Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.

    1992-01-01

    This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.

  13. An active vision system for multitarget surveillance in dynamic environments.

    PubMed

    Bakhtari, Ardevan; Benhabib, Beno

    2007-02-01

    This paper presents a novel agent-based method for the dynamic coordinated selection and positioning of active-vision cameras for the simultaneous surveillance of multiple objects-of-interest as they travel through a cluttered environment with a-priori unknown trajectories. The proposed system dynamically adjusts not only the orientation but also the position of the cameras in order to maximize the system's performance by avoiding occlusions and acquiring images with preferred viewing angles. Sensor selection and positioning are accomplished through an agent-based approach. The proposed sensing-system reconfiguration strategy has been verified via simulations and implemented on an experimental prototype setup for automated facial recognition. Both simulations and experimental analyses have shown that the use of dynamic sensors along with an effective online dispatching strategy may tangibly improve the surveillance performance of a sensing system.
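The agent-based camera-to-target dispatching described above can be sketched as a greedy auction. This is an illustrative Python sketch; the view-quality scores are made-up numbers standing in for the occlusion and viewing-angle criteria the paper uses:

```python
# Each camera "bids" a view-quality score for each target (lower for occluded
# or oblique views); a greedy auction assigns cameras to targets one-to-one,
# best bid first.

def assign_cameras(scores):
    """scores[(camera, target)] -> quality; greedy one-to-one assignment."""
    assignment = {}
    used_cams, used_targets = set(), set()
    for (cam, tgt), q in sorted(scores.items(), key=lambda kv: -kv[1]):
        if cam not in used_cams and tgt not in used_targets:
            assignment[tgt] = cam
            used_cams.add(cam)
            used_targets.add(tgt)
    return assignment

scores = {
    ("cam1", "A"): 0.9, ("cam1", "B"): 0.4,
    ("cam2", "A"): 0.7, ("cam2", "B"): 0.8,
}
plan = assign_cameras(scores)
```

Re-running the auction each frame, with scores recomputed from the targets' predicted positions, gives the online reconfiguration behaviour the abstract reports.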

  14. Computer-aided 3D display system and its application in 3D vision test

    NASA Astrophysics Data System (ADS)

    Shen, XiaoYun; Ma, Lan; Hou, Chunping; Wang, Jiening; Tang, Da; Li, Chang

    1998-08-01

    The computer aided 3D display system, flicker-free field sequential stereoscopic image display system, is newly developed. This system is composed of personal computer, liquid crystal glasses driving card, stereoscopic display software and liquid crystal glasses. It can display field sequential stereoscopic images at refresh rate of 70 Hz to 120 Hz. A typical application of this system, 3D vision test system, is mainly discussed in this paper. This stereoscopic vision test system can test stereoscopic acuity, cross disparity, uncross disparity and dynamic stereoscopic vision quantitatively. We have taken the use of random-dot- stereograms as stereoscopic vision test charts. Through practical test experiment between Anaglyph Stereoscopic Vision Test Charts and this stereoscopic vision test system, the statistical figures and test result is given out.
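How a random-dot-stereogram test chart works can be sketched directly. This is an illustrative Python sketch; the image size and disparity are arbitrary values, not the parameters of the described test system:

```python
# The two eye images are identical random dot fields except that a central
# patch in the right image is shifted horizontally by the disparity under
# test, so the patch is visible only with stereoscopic fusion.

import random

def random_dot_pair(width=16, height=8, disparity=2, seed=1):
    rng = random.Random(seed)
    left = [[rng.randint(0, 1) for _ in range(width)] for _ in range(height)]
    right = [row[:] for row in left]
    # Shift the central patch in the right image by `disparity` pixels.
    for y in range(height // 4, 3 * height // 4):
        for x in range(width // 4, 3 * width // 4):
            right[y][(x + disparity) % width] = left[y][x]
    return left, right

left, right = random_dot_pair()
```

Because neither eye's image contains any monocular cue to the patch, varying `disparity` isolates stereoscopic acuity, which is why such charts suit quantitative stereo vision testing.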

  15. 75 FR 28852 - Ninth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-24

    ... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation Administration (FAA.../Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of a meeting of Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision...

  16. Vision-Based SLAM System for Unmanned Aerial Vehicles

    PubMed Central

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-01-01

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy. PMID:26999131
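The EKF fusion at the core of this system can be illustrated in one dimension. This is a minimal sketch: the paper's filter tracks full pose, velocities, and landmark locations, whereas here a single scalar position stands in for the whole state, and the noise values are invented:

```python
# Scalar Kalman predict/update cycle: prediction grows the uncertainty, and
# each fused measurement (GPS-like fix, then a more precise camera-derived
# one) shrinks it and pulls the estimate toward the data.

def predict(x, p, q=0.1):
    """Constant-position motion model: state unchanged, uncertainty grows."""
    return x, p + q

def update(x, p, z, r):
    """Fuse one measurement z with variance r."""
    k = p / (p + r)                 # Kalman gain
    return x + k * (z - x), (1 - k) * p

x, p = 0.0, 1.0
x, p = predict(x, p)
x, p = update(x, p, z=1.0, r=0.5)   # noisy position-sensor fix
x, p = predict(x, p)
x, p = update(x, p, z=0.9, r=0.1)   # more precise camera-derived measurement
```

The variance after the second update is far below the prior, which is the quantitative sense in which "the inclusion of camera measurements considerably improves the estimate" in the abstract.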

  18. Autonomic nervous system pulmonary vasoregulation after hypoperfusion in conscious dogs.

    PubMed

    Clougherty, P W; Nyhan, D P; Chen, B B; Goll, H M; Murray, P A

    1988-05-01

    We investigated the role of the autonomic nervous system (ANS) in the pulmonary vascular response to increasing cardiac index after a period of hypoperfusion (defined as reperfusion) in conscious dogs. Base-line and reperfusion pulmonary vascular pressure-cardiac index (P/Q) plots were generated by stepwise constriction and release, respectively, of an inferior vena caval occluder to vary Q. Surprisingly, after 10-15 min of hypoperfusion (Q decreased from 139 +/- 9 to 46 +/- 3 ml.min-1.kg-1), the pulmonary vascular pressure gradient (pulmonary arterial pressure-pulmonary capillary wedge pressure) was unchanged over a broad range of Q during reperfusion compared with base line when the ANS was intact. In contrast, pulmonary vasoconstriction was observed during reperfusion after combined sympathetic beta-adrenergic and cholinergic receptor block, after beta-block alone, but not after cholinergic block alone. The pulmonary vasoconstriction during reperfusion was entirely abolished by combined sympathetic alpha- and beta-block. Although sympathetic alpha-block alone caused pulmonary vasodilation compared with the intact, base-line P/Q relationship, no further vasodilation was observed during reperfusion. Thus the ANS actively regulates the pulmonary circulation during reperfusion in conscious dogs. With the ANS intact, sympathetic beta-adrenergic vasodilation offsets alpha-adrenergic vasoconstriction and prevents pulmonary vasoconstriction during reperfusion. PMID:2896465

  19. Fuzzy Logic Based Autonomous Parallel Parking System with Kalman Filtering

    NASA Astrophysics Data System (ADS)

    Panomruttanarug, Benjamas; Higuchi, Kohji

    This paper presents an emulation of fuzzy logic control schemes for an autonomous parallel parking system in a backward maneuver. There are four infrared sensors sending the distance data to a microcontroller for generating an obstacle-free parking path. Two of them, mounted on the front and rear wheels on the parking side, are used as the inputs to the fuzzy rules to calculate a proper steering angle while backing. The other two, attached to the front and rear ends, serve for avoiding collision with other cars along the parking space. At the end of the parking process, the vehicle will be in line with other parked cars and positioned in the middle of the free space. Fuzzy rules are designed based upon a wall-following process. Performance of the infrared sensors is improved using Kalman filtering, which draws on extra information from ultrasonic sensors: starting from a 1-D state-space model of the ultrasonic sensor, the infrared reading is used as a measurement to update the predicted values. Experimental results demonstrate the effectiveness of the sensor improvement.
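The wall-following fuzzy rule base can be sketched with triangular memberships over the two parking-side distances. This is an illustrative Python sketch; the membership breakpoints, target distance, and output gain are invented, not the paper's tuned rules:

```python
# The front and rear parking-side IR distances define a lateral error and a
# heading cue; a few triangular membership rules blend into one steering angle.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steering_angle(front_cm, rear_cm, target_cm=50.0):
    err = ((front_cm + rear_cm) / 2.0) - target_cm   # positive: too far out
    skew = front_cm - rear_cm                        # positive: nose away
    # Rule strengths: steer toward the wall when far, away from it when near.
    toward = max(tri(err, 0, 30, 60), tri(skew, 0, 20, 40))
    away = max(tri(err, -60, -30, 0), tri(skew, -40, -20, 0))
    return 15.0 * (toward - away)    # degrees; memberships are in [0, 1]

angle = steering_angle(front_cm=80, rear_cm=80)   # too far out: steer toward wall
```

Running the same rule on Kalman-filtered distances rather than raw IR readings is exactly where the abstract's sensor improvement pays off: smoother inputs give a smoother steering command.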

  20. Abnormally Malicious Autonomous Systems and their Internet Connectivity

    SciTech Connect

    Shue, Craig A; Kalafut, Prof. Andrew; Gupta, Prof. Minaxi

    2011-01-01

    While many attacks are distributed across botnets, investigators and network operators have recently targeted malicious networks through high profile autonomous system (AS) de-peerings and network shut-downs. In this paper, we explore whether some ASes indeed are safe havens for malicious activity. We look for ISPs and ASes that exhibit disproportionately high malicious behavior using ten popular blacklists, plus local spam data, and extensive DNS resolutions based on the contents of the blacklists. We find that some ASes have over 80% of their routable IP address space blacklisted. Yet others account for large fractions of blacklisted IP addresses. Several ASes regularly peer with ASes associated with significant malicious activity. We also find that malicious ASes as a whole differ from benign ones in other properties not obviously related to their malicious activities, such as more frequent connectivity changes with their BGP peers. Overall, we conclude that examining malicious activity at AS granularity can unearth networks with lax security or those that harbor cybercrime.
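The paper's core measurement, the fraction of an AS's routable address space that is blacklisted, can be sketched with the standard library. This is an illustrative Python sketch; the prefixes below are documentation-range example values, not data from the study:

```python
# Fraction of an AS's announced address space covered by blacklisted prefixes.
# An AS with over 80% of its space blacklisted is the kind of outlier the
# paper flags as a potential safe haven.

import ipaddress

def blacklisted_fraction(announced, blacklisted):
    """Both arguments are lists of CIDR strings attributed to one AS."""
    total = sum(ipaddress.ip_network(p).num_addresses for p in announced)
    bad = sum(ipaddress.ip_network(p).num_addresses for p in blacklisted)
    return bad / total

announced = ["198.51.100.0/24", "203.0.113.0/24"]
blacklisted = ["198.51.100.0/24", "203.0.113.0/25"]
frac = blacklisted_fraction(announced, blacklisted)
```

A real measurement would additionally deduplicate overlapping prefixes before counting; the sketch assumes the blacklist entries are disjoint and contained in the announced space.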