Feasibility Study of a Vision-Based Landing System for Unmanned Fixed-Wing Aircraft
2017-06-01
This thesis examines the feasibility of applying computer vision techniques and visual feedback in the control loop for an autonomous system, and its integration into an autonomous aircraft control system. Subject terms: autonomous systems, auto-land, computer vision, image processing.
Sustainable and Autonomic Space Exploration Missions
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Sterritt, Roy; Rouff, Christopher; Rash, James L.; Truszkowski, Walter
2006-01-01
Visions for future space exploration have long term science missions in sight, resulting in the need for sustainable missions. Survivability is a critical property of sustainable systems and may be addressed through autonomicity, an emerging paradigm for self-management of future computer-based systems based on inspiration from the human autonomic nervous system. This paper examines some of the ongoing research efforts to realize these survivable systems visions, with specific emphasis on developments in Autonomic Policies.
Research on an autonomous vision-guided helicopter
NASA Technical Reports Server (NTRS)
Amidi, Omead; Mesaki, Yuji; Kanade, Takeo
1994-01-01
Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.
Vision Based Autonomous Robotic Control for Advanced Inspection and Repair
NASA Technical Reports Server (NTRS)
Wehner, Walter S.
2014-01-01
The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.
NASA Astrophysics Data System (ADS)
Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.
2018-04-01
Vision-based navigation has become an attractive solution for autonomous navigation in planetary exploration. This paper presents our work in designing and building an autonomous, vision-based, GPS-denied unmanned vehicle and developing an ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software package for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master computer, and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out, using outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has potential for future planetary exploration.
Neuromorphic vision sensors and preprocessors in system applications
NASA Astrophysics Data System (ADS)
Kramer, Joerg; Indiveri, Giacomo
1998-09-01
A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high- dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.
The study of stereo vision technique for the autonomous vehicle
NASA Astrophysics Data System (ADS)
Li, Pei; Wang, Xi; Wang, Jiang-feng
2015-08-01
Stereo vision technology, using two or more cameras, can recover 3D information of the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle to judge the pavement conditions within the field of view and to measure the obstacles on the road. In this paper, stereo vision for obstacle-avoidance measurement on an autonomous vehicle is studied, and the key techniques are analyzed and discussed. The system hardware was built and the software debugged, and the measurement performance is illustrated with measured data. Experiments show that the 3D structure within the field of view can be reconstructed effectively by stereo vision, providing the basis for pavement condition judgment. Compared with the radar used in unmanned-vehicle navigation and measurement systems, the stereo vision system has the advantages of low cost and long range, and it has good application prospects.
New vision system and navigation algorithm for an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Tann, Hokchhay; Shakya, Bicky; Merchen, Alex C.; Williams, Benjamin C.; Khanal, Abhishek; Zhao, Jiajia; Ahlgren, David J.
2013-12-01
Improvements were made to the intelligence algorithms of an autonomously operating ground vehicle, Q, which competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle to first navigate between two white lines on a grassy obstacle course, then pass through eight GPS waypoints, and pass through a final obstacle field. Modifications to Q included a new vision system with a more effective image processing algorithm for white line extraction (a typical pipeline is sketched below). The path-planning algorithm was adapted to the new vision system, creating smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of over 50 teams.
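As context for the white-line extraction step, here is a minimal Python/OpenCV sketch of a typical pipeline for this task; the HSV thresholds and Hough parameters are illustrative assumptions, not the team's tuned values, and the paper's actual algorithm is not given in this abstract.

    import cv2
    import numpy as np

    def extract_white_lines(bgr_frame):
        # Hypothetical extractor: threshold bright, low-saturation pixels,
        # clean up the mask, then fit line segments with a Hough transform.
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 0, 200), (180, 60, 255))  # assumed bounds
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        edges = cv2.Canny(mask, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                                minLineLength=60, maxLineGap=20)
        return [] if lines is None else [tuple(l[0]) for l in lines]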
Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles
NASA Technical Reports Server (NTRS)
Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick
2012-01-01
Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.
Draper Laboratory small autonomous aerial vehicle
NASA Astrophysics Data System (ADS)
DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.
1997-06-01
The Charles Stark Draper Laboratory, Inc. and students from the Massachusetts Institute of Technology and Boston University cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture, and subsystem designs for the entry. The entry represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, an inertial measurement unit, a sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground. A ground-based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.
The use of multisensor data for robotic applications
NASA Technical Reports Server (NTRS)
Abidi, M. A.; Gonzalez, R. C.
1990-01-01
The feasibility of realistic autonomous space manipulation tasks using multisensory information is shown through two experiments involving a fluid interchange system and a module interchange system. In both cases, autonomous location of the mating element, autonomous location of the guiding light target, mating, and demating of the system were performed. Specifically, vision-driven techniques were implemented to determine the arbitrary two-dimensional position and orientation of the mating elements as well as the arbitrary three-dimensional position and orientation of the light targets. The robotic system was also equipped with a force/torque sensor that continuously monitored the six components of force and torque exerted on the end effector. Using vision, force, torque, proximity, and touch sensors, the two experiments were completed successfully and autonomously.
A digital retina-like low-level vision processor.
Mertoguno, S; Bourbakis, N G
2003-01-01
This correspondence presents the basic design and simulation of a low-level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used by visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, where each of them is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low-level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition, and region-graph generation. At each layer, the array processor is a 2D array of k × m identical, autonomous hexagonal cells that simultaneously execute certain low-level vision tasks. The hardware design and transistor-level simulation of the processing elements (PEs) of the retina-like processor, together with its simulated functionality and illustrative examples, are provided in this paper.
Autonomous landing and ingress of micro-air-vehicles in urban environments based on monocular vision
NASA Astrophysics Data System (ADS)
Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire
2011-06-01
Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
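The abstract names a planar homography decomposition as the core of target detection. As a hedged sketch of that idea, the OpenCV call below recovers the candidate rotation/translation/plane-normal solutions from a homography; the intrinsics K and homography H here are placeholder values, not the authors' calibration, and their full implementation is not public in this abstract.

    import cv2
    import numpy as np

    def plane_pose_candidates(H, K):
        # decomposeHomographyMat returns up to four {R, t, n} solutions;
        # cheirality (points in front of both cameras) prunes spurious ones.
        _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
        return list(zip(rotations, translations, normals))

    K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
    H = np.eye(3)  # identity homography as a trivial placeholder
    for R, t, n in plane_pose_candidates(H, K):
        print(t.ravel(), n.ravel())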
Mechanics of Multifunctional Materials & Microsystems
2012-03-09
Topics: Mechanics of Materials; Life Prediction (Materials & Micro-devices); Sensing, Precognition & Diagnosis; Multifunctional Design of Autonomic Systems. Vision: expanded, site-specific, autonomic aerospace structures, incorporating sensing & precognition and self-... Approved for public release; distribution is unlimited.
An Improved Vision-based Algorithm for Unmanned Aerial Vehicles Autonomous Landing
NASA Astrophysics Data System (ADS)
Zhao, Yunji; Pei, Hailong
In the vision-based autonomous landing system of a UAV, the efficiency of target detection and tracking directly affects the control system. An improved SURF (Speeded-Up Robust Features) algorithm resolves the inefficiency of the standard SURF algorithm in the autonomous landing system. The improved algorithm is composed of three steps: first, detect the region of the target using CamShift; second, detect feature points within the acquired region using the SURF algorithm; third, match the template target against the target region in each frame. The results of experiments and theoretical analysis testify to the efficiency of the algorithm.
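A hedged sketch of the three-step scheme follows; since SURF is patented and shipped only in opencv-contrib, ORB stands in for it here, and all parameters are illustrative rather than the authors' values.

    import cv2

    def track_and_match(frame_bgr, template_gray, roi):
        # Step 1: CamShift narrows the search to the target region.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        x, y, w, h = roi
        hist = cv2.calcHist([hsv[y:y+h, x:x+w]], [0], None, [16], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        _, roi = cv2.CamShift(backproj, roi, crit)
        # Steps 2-3: detect features only inside the region, match to template.
        x, y, w, h = roi
        gray = cv2.cvtColor(frame_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
        detector = cv2.ORB_create()              # stand-in for patented SURF
        kp_r, des_r = detector.detectAndCompute(gray, None)
        kp_t, des_t = detector.detectAndCompute(template_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        if des_r is None or des_t is None:
            return roi, []
        return roi, matcher.match(des_t, des_r)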
NASA Astrophysics Data System (ADS)
Cross, Jack; Schneider, John; Cariani, Pete
2013-05-01
Sierra Nevada Corporation (SNC) has developed rotary- and fixed-wing millimeter wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.
Insect-Based Vision for Autonomous Vehicles: A Feasibility Study
NASA Technical Reports Server (NTRS)
Srinivasan, Mandyam V.
1999-01-01
The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.
Autonomic Computing: Panacea or Poppycock?
NASA Technical Reports Server (NTRS)
Sterritt, Roy; Hinchey, Mike
2005-01-01
Autonomic Computing arose out of a need for a means to cope with the rapidly growing complexity of integrating, managing, and operating computer-based systems, as well as a need to reduce the total cost of ownership of today's systems. Autonomic Computing (AC) as a discipline was proposed by IBM in 2001, with the vision of developing self-managing systems. As the name implies, the influence for the new paradigm is the human body's autonomic nervous system, which regulates vital bodily functions such as the control of heart rate, body temperature, and blood flow, all without conscious effort. The vision is to create self-ware through self-* properties. The initial set of properties, in terms of objectives, were self-configuring, self-healing, self-optimizing, and self-protecting, along with attributes of self-awareness, self-monitoring, and self-adjusting. This self-* list has grown: self-anticipating, self-critical, self-defining, self-destructing, self-diagnosis, self-governing, self-organized, self-reflecting, and self-simulation, for instance.
A trunk ranging system based on binocular stereo vision
NASA Astrophysics Data System (ADS)
Zhao, Xixuan; Kan, Jiangming
2017-07-01
Trunk ranging is an essential function for autonomous forestry robots. Traditional trunk ranging systems based on personal computers are not convenient in practical applications. This paper examines the implementation of a trunk ranging system based on binocular stereo vision theory on TI's DaVinci DM37x system. The system is smaller and more reliable than one implemented on a personal computer. It calculates three-dimensional information from the images acquired by the binocular cameras, producing the targeting and ranging results. The experimental results show that the measurement error is small and that the system design is feasible for autonomous forestry robots.
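For reference, binocular ranging reduces to Z = f*B/d (focal length times baseline over disparity). A minimal sketch follows, with placeholder calibration values rather than the paper's; block matching stands in for whatever correspondence method the authors used.

    import cv2
    import numpy as np

    FOCAL_PX, BASELINE_M = 800.0, 0.12   # assumed calibration, not the paper's

    def trunk_range_m(left_gray, right_gray, trunk_box):
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        # StereoBM returns fixed-point disparities scaled by 16.
        disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
        x, y, w, h = trunk_box
        d = disp[y:y+h, x:x+w]
        d = d[d > 0]                      # keep valid disparities only
        return None if d.size == 0 else FOCAL_PX * BASELINE_M / np.median(d)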
Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission
NASA Technical Reports Server (NTRS)
Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.
2004-01-01
In January 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a downlooking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.
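DIMES itself is flight software and not public; the sketch below only illustrates the underlying idea of estimating horizontal velocity by tracking ground features between down-looking frames and scaling the pixel shift by altitude (pinhole model; all parameters are assumptions).

    import cv2

    def horizontal_velocity(img0_gray, img1_gray, altitude_m, focal_px, dt_s):
        pts0 = cv2.goodFeaturesToTrack(img0_gray, maxCorners=50,
                                       qualityLevel=0.01, minDistance=10)
        if pts0 is None:
            return None
        pts1, status, _ = cv2.calcOpticalFlowPyrLK(img0_gray, img1_gray,
                                                   pts0, None)
        ok = status.ravel() == 1
        flow_px = (pts1[ok] - pts0[ok]).reshape(-1, 2).mean(axis=0)
        # meters per pixel on the ground plane, then per second
        return flow_px * (altitude_m / focal_px) / dt_s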
Johnson, Michael P; Abrams, Sandra L
2005-10-01
As a part of the American Physical Therapy Association's (APTA) vision statement, by the year 2020, physical therapists "will hold all privileges of autonomous practice." This vision statement and the ideals held within it are elemental to the direction of our continued growth as a profession. Many members and nonmembers, however, appear confused and perhaps even intimidated by the concept of autonomous practice. This paper will review and discuss the processes used by other health care professions to gain autonomy within the US health care system, in particular the processes used by physicians, which were extremely effective and have been used as a template by many other health professions, including physical therapy. Further discussion will focus on the physical therapy profession, emphasizing the parallels with medicine and considering many issues relevant to the goal of autonomous practice. By understanding the past and considering the present, readers will develop an appreciation of (1) the foundation for autonomous practice in health care, (2) the vision of the APTA and why the profession is well positioned to achieve this vision, and (3) the factors we need to consider to hold (and maintain) all privileges of autonomous practice.
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and the location of each point in three-dimensional space can then be calculated from the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
Autonomous docking system for space structures and satellites
NASA Astrophysics Data System (ADS)
Prasad, Guru; Tajudeen, Eddie; Spenser, James
2005-05-01
Aximetric proposes a Distributed Command and Control (C2) architecture for autonomous on-orbit assembly in space with a unique vision- and sensor-driven docking mechanism. Aximetric is currently working on IP-based distributed control strategies, a docking/mating plate, alignment and latching mechanisms, umbilical structure/cord designs, and hardware/software in a closed-loop architecture for a smart autonomous demonstration utilizing proven developments in sensor and docking technology. These technologies can be effectively applied to many transferring/conveying and on-orbit servicing applications, including the capturing and coupling of space-bound vehicles and components. The autonomous system will be a "smart" system that incorporates a vision system used for identifying, tracking, locating, and mating the transferring device to the receiving device. A robustly designed coupler for the transfer of fuel will be integrated. Advanced sealing technology will be utilized for isolation and purging of cavities resulting from the mating process and/or from the incorporation of other electrical and data acquisition devices used as part of the overall smart system.
Recursive Gradient Estimation Using Splines for Navigation of Autonomous Vehicles.
1985-07-01
C. N. Shen, US Army Armament Research and Development Center, Large Caliber Weapon Systems Laboratory, July 1985 (final report). ...which require autonomous vehicles. Essential to these robotic vehicles is an adequate and efficient computer vision system. A potentially more...
A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application.
Vivacqua, Rafael; Vassallo, Raquel; Martins, Felipe
2017-10-16
Autonomous driving on public roads requires precise localization within the range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping, and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive, and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost architecture of sensors and a data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.
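The dead-reckoning half of the approach can be illustrated with a minimal planar propagation step. This is a sketch, not the authors' filter; the speed and yaw-rate inputs are assumed to come from in-vehicle sensors.

    import math

    def dead_reckon(x, y, heading, v_mps, yaw_rate_rps, dt_s):
        # Integrate planar pose from speed and yaw rate over one time step.
        heading += yaw_rate_rps * dt_s
        x += v_mps * math.cos(heading) * dt_s
        y += v_mps * math.sin(heading) * dt_s
        return x, y, heading

    pose = (0.0, 0.0, 0.0)
    for _ in range(100):                  # one second of 100 Hz odometry
        pose = dead_reckon(*pose, v_mps=5.0, yaw_rate_rps=0.1, dt_s=0.01)
    print(pose)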
A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application
Vassallo, Raquel
2017-01-01
Autonomous driving in public roads requires precise localization within the range of few centimeters. Even the best current precise localization system based on the Global Navigation Satellite System (GNSS) can not always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finder and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDARs) are very expensive sensors and stereo vision requires powerful dedicated hardware to process the cameras information. In this context, this article presents a low-cost architecture of sensors and data fusion algorithm capable of autonomous driving in narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings in the vehicle’s backwards. This information is used to localize the vehicle in a map, that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system on a real autonomous driving situation. PMID:29035334
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.
1991-01-01
Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.
A robotic vision system to measure tree traits
USDA-ARS?s Scientific Manuscript database
The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...
Remote-controlled vision-guided mobile robot system
NASA Astrophysics Data System (ADS)
Ande, Raymond; Samu, Tayib; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle, while vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device that communicates the X, Y coordinates of blobs along the lane markers to the computer. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outdoor test track, with positive results showing that at five mph the vehicle can follow a line while avoiding obstacles.
Design of a Vision-Based Sensor for Autonomous Pig House Cleaning
NASA Astrophysics Data System (ADS)
Braithwaite, Ian; Blanke, Mogens; Zhang, Guo-Qiang; Carstensen, Jens Michael
2005-12-01
Current pig house cleaning procedures are hazardous to the health of farm workers, and yet necessary if the spread of disease between batches of animals is to be satisfactorily controlled. Autonomous cleaning using robot technology offers salient benefits. This paper addresses the feasibility of designing a vision-based system to locate dirty areas and subsequently direct a cleaning robot to remove dirt. Novel results include the characterisation of the spectral properties of real surfaces and dirt in a pig house and the design of illumination to obtain discrimination of clean from dirty areas with a low probability of misclassification. A Bayesian discriminator is shown to be efficient in this context and implementation of a prototype tool demonstrates the feasibility of designing a low-cost vision-based sensor for autonomous cleaning.
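The abstract names a Bayesian discriminator over spectral measurements without giving its features; a minimal two-class Gaussian Bayes classifier of that general shape might look as follows (the per-pixel feature layout is an assumption).

    import numpy as np

    class BayesDirtDiscriminator:
        # Two-class Gaussian Bayes classifier over per-pixel spectral
        # features: class 0 = clean, class 1 = dirty.
        def fit(self, X_clean, X_dirty):
            self.params = [(X.mean(axis=0), np.cov(X.T), len(X))
                           for X in (X_clean, X_dirty)]
            total = sum(n for _, _, n in self.params)
            self.log_priors = [np.log(n / total) for _, _, n in self.params]
            return self

        def _loglik(self, X, mean, cov):
            d = X - mean
            inv = np.linalg.inv(cov)
            # Log-density up to a constant shared by both classes.
            return -0.5 * (np.einsum('ij,jk,ik->i', d, inv, d)
                           + np.log(np.linalg.det(cov)))

        def predict(self, X):
            scores = [self._loglik(X, m, c) + lp
                      for (m, c, _), lp in zip(self.params, self.log_priors)]
            return np.argmax(scores, axis=0)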
Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks
NASA Astrophysics Data System (ADS)
DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.
2017-03-01
By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.
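The abstract describes feature detection and description feeding a microstructural image representation. One generic realization of such a pipeline is a bag of visual words; in this hedged sketch the ORB detector, vocabulary size, and SVM classifier are stand-ins, not the paper's actual choices.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    orb = cv2.ORB_create(nfeatures=500)

    def descriptors(gray_img):
        _, des = orb.detectAndCompute(gray_img, None)
        return np.zeros((0, 32), np.uint8) if des is None else des

    def bovw_histograms(images, vocab):
        hists = []
        for img in images:
            words = vocab.predict(descriptors(img).astype(np.float32))
            hists.append(np.bincount(words, minlength=vocab.n_clusters))
        return np.array(hists, dtype=np.float32)

    def train(images, labels, k=50):
        # Build a k-word visual vocabulary, then classify word histograms.
        all_des = np.vstack([descriptors(i) for i in images]).astype(np.float32)
        vocab = KMeans(n_clusters=k, n_init=10).fit(all_des)
        clf = SVC(kernel='rbf').fit(bovw_histograms(images, vocab), labels)
        return vocab, clf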
LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval
NASA Astrophysics Data System (ADS)
Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan
2013-01-01
As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
A smart telerobotic system driven by monocular vision
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.
1994-01-01
A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
The "Biologically-Inspired Computing" Column
NASA Technical Reports Server (NTRS)
Hinchey, Mike
2007-01-01
Self-managing systems, whether viewed from the perspective of Autonomic Computing or from that of another initiative, offer a holistic vision for the development and evolution of biologically-inspired computer-based systems. They aim to bring new levels of automation and dependability to systems, while simultaneously hiding their complexity and reducing costs. A case can certainly be made that all computer-based systems should exhibit autonomic properties [6], and we envisage greater interest in, and uptake of, autonomic principles in future system development.
NASA Technical Reports Server (NTRS)
Nechyba, Michael C.; Ettinger, Scott M.; Ifju, Peter G.; Wazak, Martin
2002-01-01
Recently, substantial progress has been made towards designing, building, and test-flying remotely piloted Micro Air Vehicles (MAVs). This progress in overcoming the aerodynamic obstacles to flight at very small scales has, unfortunately, not been matched by similar progress in autonomous MAV flight. Thus, we propose a robust, vision-based horizon detection algorithm as the first step towards autonomous MAVs. In this paper, we first motivate the use of computer vision for the horizon detection task by examining the flight of birds (biological MAVs) and considering other practical factors. We then describe our vision-based horizon detection algorithm, which has been demonstrated at 30 Hz with over 99.9% correct horizon identification, over terrain that includes roads, buildings large and small, meadows, wooded areas, and a lake. We conclude with some sample horizon detection results and preview a companion paper, where the work discussed here forms the core of a complete autonomous flight stability system.
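The published detector models sky and ground appearance and searches horizon-line parameters in real time; the toy sketch below only illustrates one common separation criterion (minimal within-class color scatter over candidate lines) with a brute-force search, far slower than the 30 Hz flight system.

    import numpy as np

    def partition_score(img, sky_mask):
        # Lower is better: summed determinant of the color covariance of the
        # sky and ground pixel sets (compact, well-separated classes).
        score = 0.0
        for region in (img[sky_mask], img[~sky_mask]):
            if len(region) < 10:
                return np.inf
            score += np.linalg.det(np.cov(region.T))
        return score

    def detect_horizon(img_small):
        h, w, _ = img_small.shape
        ys, xs = np.mgrid[0:h, 0:w]
        best_score, best_line = np.inf, None
        for slope in np.linspace(-1.0, 1.0, 21):
            for intercept in np.linspace(0.0, h, 21):
                sky_mask = ys < slope * (xs - w / 2) + intercept
                s = partition_score(img_small, sky_mask)
                if s < best_score:
                    best_score, best_line = s, (slope, intercept)
        return best_line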
NASA Astrophysics Data System (ADS)
Dong, Gangqi; Zhu, Z. H.
2016-04-01
This paper proposes a new incremental inverse-kinematics-based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using an integrated photogrammetry and EKF (extended Kalman filter) algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics, and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and of the incremental control strategy for the robotic manipulator.
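The paper's exact control law is not given in this abstract; a standard realization of an incremental, limit-respecting update is damped least squares on the task-space error, sketched here with an assumed 6 x n Jacobian function.

    import numpy as np

    def incremental_ik_step(q, x_current, x_desired, jacobian,
                            q_dot_max, dt, damping=0.05):
        # Move the joints a bounded increment toward the predicted target
        # pose, avoiding closed-form IK and its multiple solutions.
        J = jacobian(q)                              # 6 x n task Jacobian
        err = x_desired - x_current                  # 6-vector pose error
        dq = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(6), err)
        dq = np.clip(dq, -q_dot_max * dt, q_dot_max * dt)   # speed limits
        return q + dq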
Connected and autonomous vehicles 2040 vision.
DOT National Transportation Integrated Search
2014-07-01
The Pennsylvania Department of Transportation (PennDOT) commissioned a one-year project, Connected and Autonomous Vehicles 2040 Vision, with researchers at Carnegie Mellon University (CMU) to assess the implications of connected and autonomous ve...
A New Technique for Robot Vision in Autonomous Underwater Vehicles Using the Color Shift in Underwater Imaging
Jones, Jake A.
2017-06-01
Master's thesis. Developing a technique for underwater robot vision is a key factor in establishing autonomy in underwater vehicles. A new technique is developed and...
Landmark navigation and autonomous landing approach with obstacle detection for aircraft
NASA Astrophysics Data System (ADS)
Fuerst, Simon; Werner, Stefan; Dickmanns, Dirk; Dickmanns, Ernst D.
1997-06-01
A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensors like gyros, accelerometers, an artificial horizon, aerodynamic measuring devices, and GPS with vision data taken by conventional CCD cameras mounted on a pan-and-tilt platform, the position of the craft can be determined as well as its relative position to runways and natural landmarks. The vision data of natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark should be focused on by the vision system, depending on the distance to the landmark and the aspect conditions. More complex landmarks like runways are modeled with different levels of detail that are activated dependent on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system, e.g. due to poor visibility, and tries to reinitialize the vision system or to set focus on another available landmark. During landing approach, obstacles like trucks and airplanes can be detected on the runway. The system has been tested in real-time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors have been fed into the machine perception system; the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.
The Pixhawk Open-Source Computer Vision Framework for MAVs
NASA Astrophysics Data System (ADS)
Meier, L.; Tanskanen, P.; Fraundorfer, F.; Pollefeys, M.
2011-09-01
Unmanned aerial vehicles (UAV) and micro air vehicles (MAV) are already intensively used in geodetic applications. State-of-the-art autonomous systems are, however, geared towards application at safe and obstacle-free altitudes greater than 30 meters. Applications at lower altitudes still require a human pilot. A new application field will be the reconstruction of structures and buildings, including facades and roofs, with semi-autonomous MAVs. Ongoing research in the MAV robotics field is focusing on enabling this system class to operate at lower altitudes in proximity to nearby obstacles and humans. PIXHAWK is an open-source and open-hardware toolkit for this purpose. The quadrotor design is optimized for onboard computer vision and can connect up to four cameras to its onboard computer. The validity of the system design is shown with a fully autonomous capture flight along a building.
Mobile Autonomous Humanoid Assistant
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.
2004-01-01
A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway (trademark) Robotic Mobility Platform, yielding a dexterous, maneuverable humanoid well suited for aiding human co-workers in a range of environments. This system uses stereo vision to locate human teammates and tools, and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
Grounding Robot Autonomy in Emotion and Self-awareness
NASA Astrophysics Data System (ADS)
Sanz, Ricardo; Hernández, Carlos; Hernando, Adolfo; Gómez, Jaime; Bermejo, Julita
Much is being done in an attempt to transfer emotional mechanisms from reverse-engineered biology into social robots. There are two basic approaches: the imitative display of emotion (e.g., to make robots appear more human-like) and the provision of architectures with intrinsic emotion (in the hope of enhancing behavioral aspects). This paper focuses on the second approach, describing a core vision regarding the integration of cognitive, emotional, and autonomic aspects in social robot systems. This vision has evolved as a result of efforts to consolidate the models extracted from rat emotion research and to implement them in technical use cases, based on a general systemic analysis in the framework of the ICEA and C3 projects. The generality of the approach is intended to yield universal theories of integrated (autonomic, emotional, cognitive) behavior. The proposed conceptualizations and architectural principles are then captured in a theoretical framework: ASys, the Autonomous Systems Framework.
Merged Vision and GPS Control of a Semi-Autonomous, Small Helicopter
NASA Technical Reports Server (NTRS)
Rock, Stephen M.
1999-01-01
This final report documents the activities performed during the research period from April 1, 1996 to September 30, 1997. It contains three papers: Carrier Phase GPS and Computer Vision for Control of an Autonomous Helicopter; A Contestant in the 1997 International Aerospace Robotics Laboratory Stanford University; and Combined CDGPS and Vision-Based Control of a Small Autonomous Helicopter.
Improving Car Navigation with a Vision-Based System
NASA Astrophysics Data System (ADS)
Kim, H.; Choi, K.; Lee, I.
2015-08-01
The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employed a single photo resection process to derive the position and attitude of the camera, and thus those of the car. The image georeferencing results are combined with other sensory data under a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
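Single photo resection is the photogrammetric counterpart of the perspective-n-point problem, so a hedged sketch of the vision measurement can be written with cv2.solvePnP; the ground control points, their image measurements, and the intrinsic matrix K are all assumed inputs, not the authors' data.

    import cv2
    import numpy as np

    def resect_camera(object_pts, image_pts, K):
        # object_pts: Nx3 world coordinates; image_pts: Nx2 pixels; N >= 4.
        ok, rvec, tvec = cv2.solvePnP(object_pts.astype(np.float64),
                                      image_pts.astype(np.float64), K, None)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)
        position = (-R.T @ tvec).ravel()   # camera center in world frame
        return position, R                 # pose to feed the Kalman filter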
Research Institute for Autonomous Precision Guided Systems
2007-03-08
...research on agile autonomous munitions, in direct support of the Air Force Research Laboratory Munitions Directorate (AFRL/MN). The grant was awarded with a... Flight had five research task areas: 1. Aeroforms and Actuation for Small and Micro Agile Air Vehicles; 2. Sensing for Autonomous Control and... critical barriers in AAM, but are not covered in the scope of the AVCAAF (Vision-Based Control of Agile, Autonomous Micro Air Vehicles and Small UAVs)...
A Vision in Jeopardy: Royal Navy Maritime Autonomous Systems (MAS)
2017-03-31
Chapter 6 will propose a new MAS vision for the RN. However, before doing so, a fresh look at the problem is required. Consensus of the Problem, Not the... Despite continuous investment and assessment, the RN has failed to deliver any sustainable MAS operational capability. A vision for MAS finally materialized in 2014. Yet, the vision...
NASA Astrophysics Data System (ADS)
Crawford, Bobby Grant
In an effort to field smaller and cheaper Uninhabited Aerial Vehicles (UAVs), the Army has expressed an interest in an ability of the vehicle to autonomously detect and avoid obstacles. Current systems are not suitable for small aircraft. NASA Langley Research Center has developed a vision sensing system that uses small semiconductor cameras. The feasibility of using this sensor for the purpose of autonomous obstacle avoidance by a UAV is the focus of the research presented in this document. The vision sensor characteristics are modeled and incorporated into guidance and control algorithms designed to generate flight commands based on obstacle information received from the sensor. The system is evaluated by simulating the response to these flight commands using a six degree-of-freedom, non-linear simulation of a small, fixed wing UAV. The simulation is written using the MATLAB application and runs on a PC. Simulations were conducted to test the longitudinal and lateral capabilities of the flight control for a range of airspeeds, camera characteristics, and wind speeds. Results indicate that the control system is suitable for obstacle avoiding flight control using the simulated vision system. In addition, a method for designing and evaluating the performance of such a system has been developed that allows the user to easily change component characteristics and evaluate new systems through simulation.
Square tracking sensor for autonomous helicopter hover stabilization
NASA Astrophysics Data System (ADS)
Oertel, Carl-Henrik
1995-06-01
Sensors for synthetic vision are needed to extend the mission profiles of helicopters. A special task for various applications is the autonomous position hold of a helicopter above a ground-fixed or moving target. As a proof of concept for a general synthetic vision solution, a restricted machine vision system, capable of locating and tracking a special target, was developed by the Institute of Flight Mechanics of Deutsche Forschungsanstalt für Luft- und Raumfahrt e.V. (i.e., German Aerospace Research Establishment). This sensor, which is specialized to detect and track a square, was integrated in the fly-by-wire helicopter ATTHeS (i.e., Advanced Technology Testing Helicopter System). An existing model-following controller for the forward flight condition was adapted to the hover and low-speed requirements of the flight vehicle. The special target, a black square with a side length of one meter, was mounted on top of a car. Flight tests demonstrated the automatic stabilization of the helicopter above the moving car by synthetic vision.
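The specialized sensor ran on dedicated real-time hardware; purely as an illustration of the detection task it performed, a dark square target can be located with thresholding and contour approximation (all parameters here are assumptions).

    import cv2

    def find_square(gray):
        # Otsu threshold, inverted because the target square is dark, then
        # search for the largest convex four-sided contour.
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in sorted(contours, key=cv2.contourArea, reverse=True):
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            if len(approx) == 4 and cv2.isContourConvex(approx):
                return approx.reshape(4, 2)     # corner pixel coordinates
        return None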
Design of a dynamic test platform for autonomous robot vision systems
NASA Technical Reports Server (NTRS)
Rich, G. C.
1980-01-01
The concept and design of a dynamic test platform for the development and evaluation of a robot vision system is discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi-laser/multi-detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform, where it can be subjected to a wide variety of simulated motions and thus examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process are treated separately, such as the structure, driving linkages, and motors and transmissions.
Semi-autonomous unmanned ground vehicle control system
NASA Astrophysics Data System (ADS)
Anderson, Jonathan; Lee, Dah-Jye; Schoenberger, Robert; Wei, Zhaoyi; Archibald, James
2006-05-01
Unmanned Ground Vehicles (UGVs) have advantages over people in a number of different applications, ranging from sentry duty and scouting hazardous areas to convoying goods and supplies over long distances and exploring caves and tunnels. Despite recent advances in electronics, vision, artificial intelligence, and control technologies, fully autonomous UGVs are still far from being a reality. Currently, most UGVs are fielded using tele-operation with a human in the control loop. Using tele-operation, a user controls the UGV from the relative safety and comfort of a control station and sends commands to the UGV remotely. It is difficult for the user to issue higher-level commands such as "patrol this corridor" or "move to this position while avoiding obstacles." As computer vision algorithms are implemented in hardware, the UGV can easily become partially autonomous. As Field Programmable Gate Arrays (FPGAs) become larger and more powerful, vision algorithms can run at frame rate. With the rapid development of CMOS imagers for consumer electronics, the frame rate can reach as high as 200 frames per second with a small region of interest. This increase in the speed of vision algorithm processing allows UGVs to become more autonomous, as they are able to recognize and avoid obstacles in their path, track targets, or move to a recognized area. The user is able to focus on giving broad supervisory commands and goals to the UGVs, allowing the user to control multiple UGVs at once while still maintaining the convenience of working from a central base station. In this paper, we describe a novel control system for the control of semi-autonomous UGVs. This control system combines a user interface similar to a simple tele-operation station with a control package, including the FPGA and multiple cameras. The control package interfaces with the UGV and provides the necessary control to guide the UGV.
3-D Vision Techniques for Autonomous Vehicles
1988-08-01
Martial Hebert, Takeo Kanade, Inso Kweon. CMU-RI-TR-88-12, The Robotics Institute, Carnegie Mellon University, Pittsburgh.
Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.
Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo
2016-09-14
Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.
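For intuition about why camera pose matters for pixel-to-meter matching, the pinhole model ties pitch to the image row of the horizon by v = cy - fy*tan(pitch). The sketch below illustrates only that monocular geometry; the paper's stereo self-calibration is a different method, notable precisely for estimating pitch without being affected by roll.

    import numpy as np

    def pitch_from_horizon_row(v_horizon, cy, fy):
        # Positive result = camera pitched down (horizon above image center).
        return np.arctan2(cy - v_horizon, fy)    # radians

    print(np.degrees(pitch_from_horizon_row(v_horizon=200.0, cy=240.0, fy=700.0)))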
Integrated 3-D vision system for autonomous vehicles
NASA Astrophysics Data System (ADS)
Hou, Kun M.; Shawky, Mohamed; Tu, Xiaowei
1992-03-01
Autonomous vehicles have become a multidisciplinary field whose evolution takes advantage of recent technological progress in computer architectures. As development tools become more sophisticated, the trend is toward more specialized, or even dedicated, architectures. In this paper we focus on a parallel vision subsystem integrated into the overall system architecture. The system modules work in parallel, communicating through a hierarchical blackboard, an extension of the 'tuple space' from the LINDA concept, through which they exchange data and synchronization messages. The general-purpose processing elements have different skills: high-level processing is built around 40 MHz Intel i860 RISC processors, while low-level processing uses pipelined systolic array processors based on PLAs or FPGAs.
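For readers unfamiliar with the LINDA model the blackboard extends, here is a minimal tuple-space sketch: modules post tuples with `out` and block on pattern-matched consumption with `take`. The hierarchy and the synchronization-message details of the paper's blackboard are omitted, and all names are illustrative.

```python
import threading
from typing import Any, Tuple

class Blackboard:
    """Minimal LINDA-style tuple space: modules post ('out') and
    consume ('take') tuples; None in a pattern matches any field."""

    def __init__(self) -> None:
        self._tuples: list = []
        self._cv = threading.Condition()

    def out(self, tup: Tuple[Any, ...]) -> None:
        with self._cv:
            self._tuples.append(tup)
            self._cv.notify_all()

    def _match(self, pattern):
        for t in self._tuples:
            if len(t) == len(pattern) and all(
                    p is None or p == x for p, x in zip(pattern, t)):
                return t
        return None

    def take(self, pattern: Tuple[Any, ...]) -> Tuple[Any, ...]:
        with self._cv:
            t = self._match(pattern)
            while t is None:           # block until a producer posts a match
                self._cv.wait()
                t = self._match(pattern)
            self._tuples.remove(t)
            return t

# e.g. a low-level module posts ("edges", frame_id, data) and a
# high-level module blocks on take(("edges", None, None)).
```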
SAMURAI: Polar AUV-Based Autonomous Dexterous Sampling
NASA Astrophysics Data System (ADS)
Akin, D. L.; Roberts, B. J.; Smith, W.; Roderick, S.; Reves-Sohn, R.; Singh, H.
2006-12-01
While autonomous undersea vehicles are increasingly being used for surveying and mapping missions, as of yet there has been little concerted effort to create a system capable of performing physical sampling or other manipulation of the local environment. This type of activity has typically been performed under teleoperated control from ROVs, which provides high-bandwidth, real-time human direction of the manipulation activities. Manipulation from an AUV will require a completely autonomous sampling system, which implies not only advanced technologies such as machine vision and autonomous target designation, but also dexterous robot manipulators to perform the actual sampling without human intervention. As part of the NASA Astrobiology Science and Technology for Exploring the Planets (ASTEP) program, the University of Maryland Space Systems Laboratory has been adapting and extending robotics technologies developed for spacecraft assembly and maintenance to the problem of autonomous sampling of biologicals and soil samples around hydrothermal vents. The Sub-polar ice Advanced Manipulator for Universal Sampling and Autonomous Intervention (SAMURAI) system comprises a 6000-meter-capable six-degree-of-freedom dexterous manipulator, along with an autonomous vision system, a multi-level control system, and sampling end effectors and storage mechanisms to allow collection of samples from vent fields. SAMURAI will be integrated onto the Woods Hole Oceanographic Institution (WHOI) Jaguar AUV and used in the Arctic during the fall of 2007 for autonomous vent field sampling on the Gakkel Ridge. Under the current operations concept, the JAGUAR and PUMA AUVs will survey the water column and localize on hydrothermal vents. Early mapping missions will create photomosaics of the vents and local surroundings, allowing scientists on the mission to designate desirable sampling targets. Based on physical characteristics such as size, shape, and coloration, the targets will be loaded into the SAMURAI control system, and JAGUAR (with SAMURAI mounted to the lower forward hull) will return to the designated target areas. Once on site, vehicle control will be turned over to the SAMURAI controller, which will perform vision-based guidance to the sampling site and will then ground the AUV to the sea bottom for stability. The SAMURAI manipulator will collect samples, such as sessile biologicals, geological samples, and (potentially) vent fluids, and store them for the return trip. After several hours of sampling operations on one or several sites, JAGUAR control will be returned to the WHOI onboard controller for the return to the support ship. (Operational details of AUV operations on the Gakkel Ridge mission are presented in other papers at this conference.) Between sorties, SAMURAI end effectors can be changed out on the surface for specific targets, such as push cores or larger biologicals such as tube worms. In addition to the obvious challenges in autonomous vision-based manipulator control from a free-flying support vehicle, significant development challenges have included the design of a highly capable robotic arm within the mass limitations (both wet and dry) of the JAGUAR vehicle, the development of a highly robust manipulator with modular maintenance units for extended polar operations, and the creation of a robot-based sample collection and holding system for multiple heterogeneous samples on a single extended sortie.
Towards an Autonomic Cluster Management System (ACMS) with Reflex Autonomicity
NASA Technical Reports Server (NTRS)
Truszkowski, Walt; Hinchey, Mike; Sterritt, Roy
2005-01-01
Cluster computing, whereby a large number of simple processors or nodes are combined to function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of providing a fault-tolerant environment and achieving significant computational capabilities for high-performance computing applications. However, the task of manually managing and configuring a cluster quickly becomes daunting as the cluster grows in size. Autonomic computing, with its vision of self-management, can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management, and its evolution to include reflex reactions via pulse monitoring.
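The paper does not publish its pulse-monitoring code, so the following is only a plausible sketch of the reflex idea: nodes emit periodic heartbeats ("pulses") carrying a coarse health summary, and a watcher fires a reflex callback when pulses stop or health degrades. Class and method names are invented.

```python
import time
import threading

class PulseMonitor:
    """Hypothetical ACMS-style reflex: react to missed or degraded pulses."""

    def __init__(self, timeout: float, reflex) -> None:
        self.timeout = timeout      # seconds without a pulse -> reflex fires
        self.reflex = reflex        # callback(node_id, reason)
        self.last_beat = {}
        self.lock = threading.Lock()

    def pulse(self, node_id: str, health: str = "OK") -> None:
        """Called by (or on behalf of) each node at its heartbeat rate."""
        with self.lock:
            self.last_beat[node_id] = time.monotonic()
        if health != "OK":
            self.reflex(node_id, f"degraded: {health}")

    def watch(self) -> None:
        """Run in a daemon thread; checks for stale nodes twice per timeout."""
        while True:
            now = time.monotonic()
            with self.lock:
                stale = [n for n, t in self.last_beat.items()
                         if now - t > self.timeout]
            for node in stale:
                self.reflex(node, "missed pulse")
            time.sleep(self.timeout / 2)
```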
NASA Astrophysics Data System (ADS)
Kelkar, Nikhal; Samu, Tayib; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The controller incorporates a fuzzy logic approach for steering and speed control, a neuro-fuzzy approach for ultrasound sensing (not discussed in this paper), and an overall expert system. The advantages of a modular system are portability and transportability, i.e. any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. The cart has full speed control, with guidance provided by a vision system and obstacle avoidance using ultrasonic sensors. The speed and steering fuzzy logic controller is supervised by a 486 computer through a multi-axis motion controller. The obstacle avoidance system is based on a micro-controller interfaced with six ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a steering angle correction back to the computer via the serial line. This design yields a portable, independent system in which high-speed computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data is collected by a vision tracking device that transmits the X, Y coordinates of the lane marker to the control computer. Simulation and testing of these systems yielded promising results. This design, in its modularity, creates a portable autonomous fuzzy logic controller applicable to any mobile vehicle with only minor adaptations.
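As a toy illustration of the fuzzy steering idea (the paper's actual rule base, membership functions, and gains are not given; the breakpoints and output singletons below are invented), this sketch maps the lateral offset reported by the vision tracker to a steering correction using triangular memberships and Sugeno-style weighted-average defuzzification.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(offset_m: float) -> float:
    """Map lateral offset from the lane marker (m, +left) to a steering
    correction (deg, +right); three rules, weighted-average defuzzification."""
    mu = {
        "left":   tri(offset_m, 0.0, 1.0, 2.0),    # vehicle drifted left
        "center": tri(offset_m, -1.0, 0.0, 1.0),
        "right":  tri(offset_m, -2.0, -1.0, 0.0),  # vehicle drifted right
    }
    out_peak = {"left": 15.0, "center": 0.0, "right": -15.0}  # deg, invented
    num = sum(mu[r] * out_peak[r] for r in mu)
    den = sum(mu.values()) or 1.0
    return num / den

# e.g. fuzzy_steer(0.5) -> 7.5: half a meter left gives a gentle right correction.
```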
Development Of Autonomous Systems
NASA Astrophysics Data System (ADS)
Kanade, Takeo
1989-03-01
In the last several years at the Robotics Institute of Carnegie Mellon University, we have been working on two projects for developing autonomous systems: Navlab for the Autonomous Land Vehicle and Ambler for a Mars Rover. These two systems serve different purposes: the Navlab is a four-wheeled vehicle (van) for road and open-terrain navigation, and the Ambler is a six-legged locomotor for Mars exploration. The two projects, however, share many common aspects. Both are large-scale integrated systems for navigation. In addition to the development of individual components (e.g., construction and control of the vehicle, vision and perception, and planning), integration of those component technologies into a system by means of an appropriate architecture is a major issue.
Liu, Yu-Ting; Pal, Nikhil R.; Marathe, Amar R.; Wang, Yu-Kai; Lin, Chin-Teng
2017-01-01
A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems. PMID:28676734
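The FDMF's exact fuzzy rules are not reproduced in the abstract; the sketch below shows only the general idea of uncertainty-weighted fusion of human (BCI) and machine (computer vision) class probabilities. The weighting scheme and the numbers are illustrative assumptions, not the published fuser.

```python
import numpy as np

def fuse_decisions(p_human: np.ndarray, p_machine: np.ndarray,
                   u_human: float, u_machine: float) -> np.ndarray:
    """Blend two class-probability vectors, discounting each source by
    its uncertainty in [0, 1] (1 = no confidence). A generic
    uncertainty-weighted fuser, not the paper's exact FDMF rules."""
    w_h, w_m = 1.0 - u_human, 1.0 - u_machine
    fused = w_h * p_human + w_m * p_machine
    return fused / fused.sum()

# RSVP example: the BCI weakly favors "target", computer vision is more
# certain; the fused decision follows the less uncertain source.
p_bci = np.array([0.55, 0.45])   # [target, non-target]
p_cv  = np.array([0.20, 0.80])
print(fuse_decisions(p_bci, p_cv, u_human=0.6, u_machine=0.2))
```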
2011-02-07
Sensor UGVs (SUGV) or Disruptor UGVs, depending on their payload. The SUGVs included vision, GPS/IMU, and LIDAR systems for identifying and tracking... employed by all the MAGICian research groups. Objects of interest were tracked using standard LIDAR and computer vision template-based feature-tracking approaches. Mapping was solved through multi-agent particle-filter-based Simultaneous Localization and Mapping (SLAM). Our system contains
Intelligent vision system for autonomous vehicle operations
NASA Technical Reports Server (NTRS)
Scholl, Marija S.
1991-01-01
A complex optical system is described, consisting of a 4f optical correlator with programmable filters under the control of a digital on-board computer; the system operates at video rates for filter generation, storage, and management.
TU-FG-201-04: Computer Vision in Autonomous Quality Assurance of Linear Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, H; Jenkins, C; Yu, S
Purpose: Routine quality assurance (QA) of linear accelerators represents a critical and costly element of a radiation oncology center. Recently, a system was developed to autonomously perform routine quality assurance on linear accelerators. The purpose of this work is to extend this system with computer vision techniques for obtaining quantitative measurements for a monthly multi-leaf collimator (MLC) QA test specified by TG-142, namely leaf position accuracy, and to demonstrate extensibility for additional routines. Methods: Grayscale images of a picket fence delivery on a radioluminescent-phosphor-coated phantom are captured using a CMOS camera. Collected images are processed to correct for camera distortions, rotation, and alignment, to reduce noise, and to enhance contrast. The location of each MLC leaf is determined through logistic fitting and a priori modeling based on knowledge of the delivered beams. Using the collected data and the criteria from TG-142, a decision is made on whether the leaf position accuracy of the MLC passes or fails. Results: The locations of all MLC leaf edges are found for three different picket fence images in a picket fence routine to 0.1 mm/1 pixel precision. The program to correct for image alignment and determine leaf positions requires a runtime of 21-25 seconds for a single picket, and 44-46 seconds for a group of three pickets, on a standard workstation CPU (2.2 GHz Intel Core i7). Conclusion: MLC leaf edges were successfully found using computer vision techniques. With the addition of computer vision techniques to the previously described autonomous QA system, the system is able to quickly perform complete QA routines with minimal human contribution.
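A minimal sketch of the logistic-fitting step, under the assumption that the leaf edge appears as a sigmoidal intensity step in a 1-D profile taken perpendicular to the picket; the tolerance handling at the end is a hypothetical TG-142-style check, not the paper's exact criteria.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k, lo, hi):
    """Sigmoid model of the intensity step across an MLC leaf edge."""
    return lo + (hi - lo) / (1.0 + np.exp(-k * (x - x0)))

def leaf_edge_mm(profile: np.ndarray, mm_per_px: float) -> float:
    """Fit a logistic to a 1-D intensity profile; the inflection point
    x0 is taken as the leaf edge position. Assumes intensity increases
    across the edge (the fit can absorb a decrease via negative k)."""
    x = np.arange(profile.size, dtype=float)
    p0 = (profile.size / 2.0, 1.0, float(profile.min()), float(profile.max()))
    (x0, _, _, _), _ = curve_fit(logistic, x, profile, p0=p0)
    return x0 * mm_per_px

# Hypothetical pass/fail check against the planned picket position:
# passed = abs(leaf_edge_mm(profile, 0.1) - expected_mm) <= 1.0  # ~1 mm tol
```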
A simple, inexpensive, and effective implementation of a vision-guided autonomous robot
NASA Astrophysics Data System (ADS)
Tippetts, Beau; Lillywhite, Kirt; Fowers, Spencer; Dennis, Aaron; Lee, Dah-Jye; Archibald, James
2006-10-01
This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. The implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course of white boundary lines and orange obstacles in the autonomous competition. A second-hand electric wheelchair, purchased from a local thrift store for $28, served as the robot base. The base was modified to include Kegresse tracks driven by a friction drum system; this modification allowed the robot to perform better on a variety of terrains, resolving issues with the previous year's design. To control the wheelchair while retaining its robust motor controls, the joystick was removed and replaced with a printed circuit board that emulated joystick operation and received commands over a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each of the algorithms used color segmentation to interpret data from a digital camera and identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
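A minimal sketch of the color-segmentation front end common to all three navigation approaches, written with OpenCV HSV thresholds; the threshold values are invented and would need course-specific tuning.

```python
import cv2
import numpy as np

def segment_course(bgr: np.ndarray):
    """Segment white boundary lines and orange obstacles in HSV space.
    Returns two binary masks (lines, cones). Thresholds are illustrative."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # White lines: low saturation, high value.
    lines = cv2.inRange(hsv, (0, 0, 200), (180, 40, 255))
    # Orange obstacles: hue roughly 5-25, strong saturation.
    cones = cv2.inRange(hsv, (5, 120, 120), (25, 255, 255))
    # Clean up speckle before handing masks to a reactive or
    # potential-fields planner.
    kernel = np.ones((5, 5), np.uint8)
    lines = cv2.morphologyEx(lines, cv2.MORPH_OPEN, kernel)
    cones = cv2.morphologyEx(cones, cv2.MORPH_OPEN, kernel)
    return lines, cones
```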
Implementation of a robotic flexible assembly system
NASA Technical Reports Server (NTRS)
Benton, Ronald C.
1987-01-01
As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory-controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high-speed force control, 2.5-D vision alignment and control, and multiple-processor architectures. We describe the subsequent design of a flexible, programmable, sensor-controlled robotic assembly system for small electromechanical devices based on these technologies, along with ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high-speed part mating, in-process monitoring/verification of expected results, and autonomous recovery from some errors. It is programmed off line with semiautomatic action planning.
Automatic rule generation for high-level vision
NASA Technical Reports Server (NTRS)
Rhee, Frank Chung-Hoon; Krishnapuram, Raghu
1992-01-01
Many high-level vision systems use rule-based approaches to solving problems such as autonomous navigation and image understanding. The rules are usually elaborated by experts. However, this procedure may be rather tedious. In this paper, we propose a method to generate such rules automatically from training data. The proposed method is also capable of filtering out irrelevant features and criteria from the rules.
Vision systems for manned and robotic ground vehicles
NASA Astrophysics Data System (ADS)
Sanders-Reed, John N.; Koon, Phillip L.
2010-04-01
A Distributed Aperture Vision System for ground vehicles is described. An overview of the hardware including sensor pod, processor, video compression, and displays is provided. This includes a discussion of the choice between an integrated sensor pod and individually mounted sensors, open architecture design, and latency issues as well as flat panel versus head mounted displays. This technology is applied to various ground vehicle scenarios, including closed-hatch operations (operator in the vehicle), remote operator tele-operation, and supervised autonomy for multi-vehicle unmanned convoys. In addition, remote vision for automatic perimeter surveillance using autonomous vehicles and automatic detection algorithms is demonstrated.
Vision-based obstacle recognition system for automated lawn mower robot development
NASA Astrophysics Data System (ADS)
Mohd Zin, Zalhan; Ibrahim, Ratnawati
2011-06-01
Digital image processing (DIP) techniques have recently been widely used in many types of application. Classification and recognition of a specific object using a vision system involve challenging tasks in the fields of image processing and artificial intelligence. The ability of a vision system to capture and process images efficiently is very important for any intelligent system such as an autonomous robot. This paper focuses on the development of a vision system that could contribute to an automated, vision-based lawn mower robot. The work involves implementing DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was on studying different types and sizes of obstacles, developing the vision-based obstacle recognition system, and evaluating the system's performance. Image processing techniques such as image filtering, segmentation, enhancement, and edge detection have been applied in the system. The results show that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
NASA Astrophysics Data System (ADS)
Gao, Shibo; Cheng, Yongmei; Song, Chunhua
2013-09-01
Vision-based probe-and-drogue autonomous aerial refueling is a challenging task in modern aviation for both manned and unmanned aircraft. A key issue is to determine the relative orientation and position of the drogue and the probe accurately for the relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is difficult because of the disorderly motion of the drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, the problem of drogue detection is treated as a moving object detection problem, and a drogue detection algorithm based on low-rank and sparse decomposition with local multiple features is proposed. The global and local information of the drogue is introduced into the detection model in a unified way. Experimental results on real autonomous aerial refueling videos show that the proposed drogue detection algorithm is effective.
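To make the low-rank-plus-sparse idea concrete, here is a simplified RPCA-style decomposition by alternating singular-value and entrywise soft thresholding: stacking vectorized frames as columns of D, the quasi-static background is absorbed into the low-rank term L and erratically moving drogue pixels into the sparse term S. This illustrative solver and its step-size heuristic are assumptions, not the paper's formulation (which additionally uses local multiple features).

```python
import numpy as np

def lowrank_sparse(D: np.ndarray, lam=None, n_iter: int = 50):
    """Split D (pixels x frames) into low-rank L plus sparse S by
    alternating shrinkage; a simplified RPCA-style scheme."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))     # common RPCA default
    S = np.zeros_like(D)
    mu = 1.25 / np.linalg.norm(D, 2)          # step-size heuristic
    for _ in range(n_iter):
        # Low-rank step: singular-value soft thresholding of D - S.
        U, sig, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse step: entrywise soft thresholding of the residual.
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
    return L, S   # moving drogue pixels concentrate in S
```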
Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers
Olivares-Mendez, Miguel A.; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F.; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual
2015-01-01
Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade in global wildlife and natural resources amounts to nearly $213 billion every year, and it is even helping to fund armed conflicts. Poaching activities around the world are pushing many animal species to the brink of extinction. Unfortunately, the traditional methods of fighting poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new sensor and algorithm technologies, as well as aerial platforms, is crucial to confronting the sharp increase in poaching activity of the last few years. Our work focuses on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing. PMID:26703597
Damage Detection and Verification System (DDVS) for In-Situ Health Monitoring
NASA Technical Reports Server (NTRS)
Williams, Martha K.; Lewis, Mark; Szafran, J.; Shelton, C.; Ludwig, L.; Gibson, T.; Lane, J.; Trautwein, T.
2015-01-01
Project presentation for the Game Changing Program Smart Book Release. The Damage Detection and Verification System (DDVS) expands the Flat Surface Damage Detection System (FSDDS) sensory panels' damage detection capabilities and includes an autonomous inspection capability utilizing cameras and dynamic computer vision algorithms to verify system health. The objectives of this formulation task are to establish the concept of operations, formulate the system requirements for a potential ISS flight experiment, and develop a preliminary design of an autonomous inspection capability system that will be demonstrated as a proof-of-concept, ground-based damage detection and inspection system.
Multidisciplinary unmanned technology teammate (MUTT)
NASA Astrophysics Data System (ADS)
Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark
2013-01-01
The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated that only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return it to the operator. Each of the six phases became more difficult, adding clutter of the same color or shape as the object, moving and stationary obstacles, and an operator who moved from the starting location to a new location. The Robotic Research team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close-to-natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant of and relevant to real-world applications.
Vertically integrated photonic multichip module architecture for vision applications
NASA Astrophysics Data System (ADS)
Tanguay, Armand R., Jr.; Jenkins, B. Keith; von der Malsburg, Christoph; Mel, Bartlett; Holt, Gary; O'Brien, John D.; Biederman, Irving; Madhukar, Anupam; Nasiatka, Patrick; Huang, Yunsong
2000-05-01
The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia- based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.
Near real-time stereo vision system
NASA Technical Reports Server (NTRS)
Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)
1993-01-01
The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
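As a rough, single-level illustration of the least-squares correlation step (the Laplacian-pyramid bandpass filtering and the Bayesian confidence ranges of the patent are omitted, and the window size and disparity range are arbitrary):

```python
import numpy as np
from scipy.signal import convolve2d

def disparity_ssd(left: np.ndarray, right: np.ndarray,
                  max_d: int = 16, win: int = 5) -> np.ndarray:
    """Disparity by least-squares (SSD) block correlation on one
    pyramid level; left/right are float grayscale images."""
    h, w = left.shape
    kernel = np.ones((win, win)) / (win * win)
    disp = np.zeros((h, w), dtype=np.float32)
    best = np.full((h, w), np.inf, dtype=np.float32)
    for d in range(max_d):
        # Shift the right image by d so candidate matches align columnwise.
        shifted = np.zeros_like(right)
        shifted[:, d:] = right[:, :w - d]
        # Window-summed squared error at every pixel for this shift.
        ssd = convolve2d((left - shifted) ** 2, kernel, mode="same")
        better = ssd < best
        disp[better] = d
        best[better] = ssd[better]
    return disp
```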
Vision-based semi-autonomous outdoor robot system to reduce soldier workload
NASA Astrophysics Data System (ADS)
Richardson, Al; Rodgers, Michael H.
2001-09-01
Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions, greatly reducing the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.
Deployable reconnaissance from a VTOL UAS in urban environments
NASA Astrophysics Data System (ADS)
Barnett, Shane; Bird, John; Culhane, Andrew; Sharkasi, Adam; Reinholtz, Charles
2007-04-01
Reconnaissance collection in unknown or hostile environments can be a dangerous and life-threatening task. To reduce this risk, the Unmanned Systems Group at Virginia Tech has produced a fully autonomous reconnaissance system able to provide live video reconnaissance from outside and inside unknown structures. The system consists of an autonomous helicopter, which launches a small reconnaissance pod into a building, and an operator control unit (OCU) on a ground station. The helicopter is a modified Bergen Industrial Twin using a Rotomotion flight controller and can fly missions of up to one half hour. The mission-planning OCU can control the helicopter remotely through teleoperation or fully autonomously by GPS waypoints. A forward-facing camera and template matching aid navigation by identifying the target building. Once the target structure is identified, vision algorithms center the UAS adjacent to open windows or doorways; tunable parameters in the vision algorithm account for varying launch distances and opening sizes. Launch of the reconnaissance pod may be initiated remotely by a human in the loop or autonomously. Compressed air propels the half-pound stationary pod or the larger mobile pod through the open portals. Once inside the building, the reconnaissance pod transmits live video back to the helicopter, which acts as a repeater node for increased video range and simpler communication back to the ground station.
The Right Track for Vision Correction
NASA Technical Reports Server (NTRS)
2003-01-01
More and more people are putting away their eyeglasses and contact lenses as a result of laser vision correction surgery. LASIK, the most widely performed version of this surgical procedure, improves vision by reshaping the cornea, the clear front surface of the eye, using an excimer laser. One excimer laser system, Alcon's LADARVision 4000, utilizes a laser radar (LADAR) eye tracking device that gives it unmatched precision. During LASIK surgery, laser pulses must be accurately placed to reshape the cornea. A challenge to this procedure is the patient's constant eye movement. A person's eyes make small, involuntary movements known as saccadic movements about 100 times per second. Since the saccadic movements will not stop during LASIK surgery, most excimer laser systems use an eye tracking device that measures the movements and guides the placement of the laser beam. LADARVision's eye tracking device stems from the LADAR technology originally developed through several Small Business Innovation Research (SBIR) contracts with NASA's Johnson Space Center and the U.S. Department of Defense's Ballistic Missile Defense Office (BMDO). In the 1980s, Johnson awarded Autonomous Technologies Corporation a Phase I SBIR contract to develop technology for autonomous rendezvous and docking of space vehicles to service satellites. During Phase II of the Johnson SBIR contract, Autonomous Technologies developed a prototype range and velocity imaging LADAR to demonstrate technology that could be used for this purpose.
Panoramic stereo sphere vision
NASA Astrophysics Data System (ADS)
Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian
2013-01-01
Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost in the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built around a special combined fish-eye lens module that is capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and one static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model, and parameter calibration method in this paper. Video surveillance, autonomous robot navigation, virtual reality, driving assistance, multiple-maneuvering-target tracking, automatic mapping of environments, and attitude estimation are some of the applications that will benefit from PSSV.
NASA Astrophysics Data System (ADS)
Shatravin, V.; Shashev, D. V.
2018-05-01
Robots are increasingly being used in every industry, and one of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. Research worldwide has demonstrated the effectiveness of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available on the robotic device. This paper describes the results of applying an original approach to image processing on reconfigurable computing environments, using morphological operations over grayscale images as an example. The approach is promising for realizing complex image processing algorithms and real-time image analysis in autonomous robotic devices.
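For reference, grayscale erosion and dilation reduce to neighborhood minimum and maximum under a flat structuring element; the plain-loop sketch below mirrors the per-cell operations a reconfigurable computing environment would parallelize (the paper's actual cell architecture is not shown here).

```python
import numpy as np

def gray_erode(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Grayscale erosion with a flat k x k structuring element:
    each output pixel is the neighborhood minimum."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y + k, x:x + k].min()
    return out

def gray_dilate(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Dual operation: neighborhood maximum."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y + k, x:x + k].max()
    return out
```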
NASA Astrophysics Data System (ADS)
Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.
1989-03-01
The development of autonomous land vehicles (ALVs) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task if it requires programming a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields, including the neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real time.
Nian, Rui; Liu, Fang; He, Bo
2013-07-16
Underwater vision is one of the dominant senses and has shown great promise in ocean investigations. In this paper, a hierarchical Independent Component Analysis (ICA) framework is established to explore and understand the functional roles of higher-order statistical structures of the visual stimulus in an underwater artificial vision system. The model is inspired by characteristics of the early human visual system such as modality, redundancy reduction, sparseness, and independence; its successive layers appear to capture Gabor-like basis functions, shape contours, and complicated textures, respectively. Simulation results show the effectiveness and consistency of the proposed approach on underwater images collected by autonomous underwater vehicles (AUVs). PMID:23863855
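As a single-layer illustration of how such Gabor-like bases emerge from ICA on image patches (patch size, component count, and sampling below are arbitrary choices, and the paper's hierarchical stacking is omitted):

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_basis(image: np.ndarray, patch: int = 8, n_comp: int = 49,
              n_patches: int = 5000, seed: int = 0) -> np.ndarray:
    """Learn ICA basis functions from random grayscale image patches;
    on natural (and underwater) imagery the components tend to be
    Gabor-like. Returns n_comp basis images of shape (patch, patch)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ys = rng.integers(0, h - patch, n_patches)
    xs = rng.integers(0, w - patch, n_patches)
    X = np.stack([image[y:y + patch, x:x + patch].ravel()
                  for y, x in zip(ys, xs)]).astype(float)
    X -= X.mean(axis=0)          # center before unmixing
    ica = FastICA(n_components=n_comp, max_iter=500, random_state=seed)
    ica.fit(X)
    return ica.components_.reshape(n_comp, patch, patch)
```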
Autonomic dysfunction in pediatric patients with headache: migraine versus tension-type headache.
Rabner, Jonathan; Caruso, Alessandra; Zurakowski, David; Lazdowsky, Lori; LeBel, Alyssa
2016-12-01
To examine symptoms indicating central nervous system (CNS) autonomic dysfunction in pediatric patients with migraine and tension-type headache. A retrospective chart review assessed six symptoms (constipation, insomnia, dizziness, blurry vision, abnormal blood pressure, and cold and clammy palms and soles) indicating CNS autonomic dysfunction in 231 patients, ages 5-18 years, diagnosed with migraine, tension-type headache (TTH), or idiopathic scoliosis (IS). Higher frequencies of "insomnia," "dizziness," and "cold and clammy palms and soles" were found for both migraine and TTH patients compared to the IS control group (P < 0.001). Frequencies of all six symptoms were greater in TTH than migraine patients, with "cold and clammy palms and soles" reaching significance (P < 0.001). The need for prospective research investigating autonomic dysfunction in pediatric headache patients is discussed.
Experimental results in autonomous landing approaches by dynamic machine vision
NASA Astrophysics Data System (ADS)
Dickmanns, Ernst D.; Werner, Stefan; Kraus, S.; Schell, R.
1994-07-01
The 4-D approach to dynamic machine vision, exploiting full spatio-temporal models of the process to be controlled, has been applied to on-board autonomous landing approaches of aircraft. Aside from image sequence processing, for which it was developed initially, it is also used for data fusion from a range of sensors. By prediction-error feedback, an internal representation of the aircraft state relative to the runway in 3-D space and time is servo-maintained in the interpretation process, from which the required control applications are derived. The validity and efficiency of the approach have been proven both in hardware-in-the-loop simulations and in flight experiments with a twin turboprop aircraft Do128 under perturbations from cross winds and wind gusts. The software package has been ported to `C' and onto a new transputer image processing platform; the system has been expanded for bifocal vision with two cameras of different focal length mounted fixed relative to each other on a two-axis platform for viewing-direction control.
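The abstract describes prediction-error feedback servo-maintaining an internal state representation; below is a toy linear observer in that spirit. The state, matrices, and gain are invented for illustration and are far simpler than the 4-D system's full spatio-temporal models.

```python
import numpy as np

# Toy prediction-error-feedback loop: a dynamical model predicts the next
# state, a measured image feature is compared against the prediction, and
# the innovation corrects the estimate. All numbers are illustrative.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # e.g. [lateral offset; offset rate]
H = np.array([[1.0, 0.0]])               # vision measures the offset only
K = np.array([0.5, 0.8])                  # fixed observer gain

def step(x_est: np.ndarray, z_meas: float) -> np.ndarray:
    x_pred = A @ x_est                     # prediction from the process model
    innov = z_meas - (H @ x_pred)[0]       # prediction error in the image
    return x_pred + K * innov              # servo-maintained estimate
```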
Function-based design process for an intelligent ground vehicle vision system
NASA Astrophysics Data System (ADS)
Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.
2010-10-01
An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
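The paper's ray-casting thread operates on fused camera and laser range finder data; the sketch below shows the generic idea on a boolean occupancy grid (grid layout, ray count, and step size are invented for illustration).

```python
import math
import numpy as np

def cast_rays(grid: np.ndarray, origin, n_rays: int = 31,
              fov_deg: float = 120.0, max_range: float = 40.0):
    """Cast rays through an occupancy grid (True = blocked) built from
    camera/laser data; return the free range along each heading."""
    oy, ox = origin
    half = math.radians(fov_deg) / 2.0
    angles = np.linspace(-half, half, n_rays)     # 0 = straight ahead
    ranges = []
    for a in angles:
        r = 0.0
        while r < max_range:
            y = int(round(oy - r * math.cos(a)))  # forward = decreasing rows
            x = int(round(ox + r * math.sin(a)))
            if not (0 <= y < grid.shape[0] and 0 <= x < grid.shape[1]):
                break
            if grid[y, x]:
                break                              # ray hit an obstacle cell
            r += 0.5                               # step size in cells
        ranges.append(r)
    return angles, np.array(ranges)

# A planner would then steer toward the angle with the longest free range.
```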
NASA Astrophysics Data System (ADS)
Hofmann, Ulrich; Siedersberger, Karl-Heinz
2003-09-01
When driving cross-country, detection of, and state estimation relative to, negative obstacles like ditches and creeks is mandatory for safe operation. Very often, ditches can be detected both by different photometric properties (soil vs. vegetation) and by range (disparity) discontinuities. Algorithms should therefore use both the photometric and the geometric properties to detect obstacles reliably. This has been achieved in UBM's EMS-Vision system (Expectation-based, Multifocal, Saccadic) for autonomous vehicles. The perception system uses Sarnoff's image processing hardware for real-time stereo vision. This sensor provides both gray value and disparity information for each pixel at high resolution and frame rates. In order to perform an autonomous jink, the boundaries of an obstacle have to be measured accurately to calculate a safe driving trajectory. Ditches in particular are often very extended, so, given the restricted field of view of the cameras, active gaze control is necessary to explore the boundaries of an obstacle. For successful measurement of image features, the system has to satisfy conditions defined by the perception expert. It has to deal with the time constraints of the active camera platform while performing saccades and keep the geometric conditions defined by the locomotion expert for performing a jink. The experts therefore have to cooperate. This cooperation is controlled by a central decision unit (CD), which has knowledge about the mission, the capabilities available in the system, and their limitations. The central decision unit reacts to the result of situation assessment by starting, parameterizing, or stopping actions (instances of capabilities). The approach has been tested with the 5-ton van VaMoRs. Experimental results are shown for driving in a typical off-road scenario.
Downey, John E; Weiss, Jeffrey M; Muelling, Katharina; Venkatraman, Arun; Valois, Jean-Sebastien; Hebert, Martial; Bagnell, J Andrew; Schwartz, Andrew B; Collinger, Jennifer L
2016-03-18
Recent studies have shown that brain-machine interfaces (BMIs) offer great potential for restoring upper limb function. However, grasping objects is a complicated task and the signals extracted from the brain may not always be capable of driving these movements reliably. Vision-guided robotic assistance is one possible way to improve BMI performance. We describe a method of shared control where the user controls a prosthetic arm using a BMI and receives assistance with positioning the hand when it approaches an object. Two human subjects with tetraplegia used a robotic arm to complete object transport tasks with and without shared control. The shared control system was designed to provide a balance between BMI-derived intention and computer assistance. An autonomous robotic grasping system identified and tracked objects and defined stable grasp positions for these objects. The system identified when the user intended to interact with an object based on the BMI-controlled movements of the robotic arm. Using shared control, BMI-controlled movements and autonomous grasping commands were blended to ensure secure grasps. Both subjects were more successful on object transfer tasks when using shared control compared to BMI control alone. Movements made using shared control were more accurate, more efficient, and less difficult. One participant attempted a task with multiple objects and successfully lifted one of two closely spaced objects in 92% of trials, demonstrating the potential for users to accurately execute their intention while using shared control. Integration of BMI control with vision-guided robotic assistance led to improved performance on object transfer tasks. Providing assistance while maintaining generalizability will make BMI systems more attractive to potential users. Trial registration: NCT01364480 and NCT01894802.
Application of parallelized software architecture to an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Shakya, Rahul; Wright, Adam; Shin, Young Ho; Momin, Orko; Petkovsek, Steven; Wortman, Paul; Gautam, Prasanna; Norton, Adam
2011-01-01
This paper presents improvements made to Q, an autonomous ground vehicle designed to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2010 IGVC, Q was upgraded with a new parallelized software architecture and a new vision processor. Improvements were made to the power system, reducing the number of batteries required for operation from six to one. In previous years, a single state machine was used to execute the bulk of processing activities including sensor interfacing, data processing, path planning, navigation algorithms, and motor control. This inefficient approach led to poor software performance and made the software difficult to maintain or modify. For IGVC 2010, the team implemented a modular parallel architecture using the National Instruments (NI) LabVIEW programming language. The new architecture divides all the necessary tasks - motor control, navigation, sensor data collection, etc. - into well-organized components that execute in parallel, providing considerable flexibility and facilitating efficient use of processing power. Computer vision is used to detect white lines on the ground and determine their location relative to the robot. With the new vision processor and some optimization of the image processing algorithm used last year, two frames can be acquired and processed in 70 ms. With all these improvements, Q placed 2nd in the autonomous challenge.
Vision requirements for Space Station applications
NASA Technical Reports Server (NTRS)
Crouse, K. R.
1985-01-01
Problems that will be encountered by computer vision systems in Space Station operations are discussed, along with solutions being examined at the Johnson Space Center. Lighting cannot be controlled in space, nor can the random presence of reflective surfaces. Task-oriented capabilities are to include docking to moving objects, identification of unexpected objects during autonomous flights to different orbits, and diagnosis of damage and repair requirements for autonomous Space Station inspection robots. The approaches being examined to provide these and other capabilities include television and IR sensors, advanced pattern recognition programs feeding on data from laser probes, laser radar for robot eyesight, and arrays of SMART sensors for automated location and tracking of target objects. Attention is also being given to liquid crystal light valves for optical processing of images for comparison with on-board electronic libraries of images.
Vision-based vehicle detection and tracking algorithm design
NASA Astrophysics Data System (ADS)
Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi
2009-12-01
Vision-based detection of vehicles in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. Practical vehicle detection from a passenger car requires accurate and robust sensing performance. A multi-vehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filtering, feature detection, template matching, and epipolar-constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes a tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained from the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.
Data acquisition and analysis of range-finding systems for space construction
NASA Technical Reports Server (NTRS)
Shen, C. N.
1981-01-01
For future space missions, completely autonomous robotic machines will be required to free astronauts from routine chores of equipment maintenance, servicing of faulty systems, etc., and to extend human capabilities in hazardous environments full of cosmic and other harmful radiation. In places with high radiation and uncontrollable ambient illumination, TV-camera-based vision systems cannot work effectively. However, a vision system utilizing directly measured range information from a time-of-flight laser rangefinder can successfully operate in these environments. Such a system is independent of illumination conditions, and the interfering effects of intense radiation of all kinds are eliminated by the tuned input of the laser instrument. Processing the range data according to certain decision, stochastic estimation, and heuristic schemes, the laser-based vision system will recognize known objects and thus provide sufficient information to the robot's control system, which can develop strategies for various objectives.
Vision-guided landing of an autonomous helicopter in hazardous terrain
NASA Technical Reports Server (NTRS)
Johnson, Andrew E.; Montgomery, Jim
2005-01-01
Future robotic space missions will employ a precision soft-landing capability that will enable exploration of previously inaccessible sites that have strong scientific significance. To enable this capability, a fully autonomous onboard system that identifies and avoids hazardous features such as steep slopes and large rocks is required. Such a system will also provide greater functionality in unstructured terrain to unmanned aerial vehicles. This paper describes an algorithm for landing hazard avoidance based on images from a single moving camera. The core of the algorithm is an efficient application of structure from motion to generate a dense elevation map of the landing area. Hazards are then detected in this map and a safe landing site is selected. The algorithm has been implemented on an autonomous helicopter testbed and demonstrated four times, resulting in the first autonomous landing of an unmanned helicopter in unknown and hazardous terrain.
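A hedged sketch of the hazard-detection step on the resulting elevation map: threshold local slope and roughness and pick the lowest-cost cell. The thresholds and the smoothing window are placeholders; the paper's actual hazard criteria and site-selection logic are more involved.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def safe_landing_site(elev: np.ndarray, cell_m: float,
                      max_slope_deg: float = 10.0,
                      max_rough_m: float = 0.3):
    """Flag hazards in an elevation map (m) and return the safest cell
    index plus the hazard mask. Thresholds are lander-dependent guesses."""
    gy, gx = np.gradient(elev, cell_m)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    # Roughness: local deviation from a smoothed surface (rock proxy).
    rough = np.abs(elev - uniform_filter(elev, size=5))
    hazard = (slope > max_slope_deg) | (rough > max_rough_m)
    score = slope / max_slope_deg + rough / max_rough_m
    score[hazard] = np.inf               # never land on a flagged cell
    return np.unravel_index(np.argmin(score), score.shape), hazard
```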
Connectionist model-based stereo vision for telerobotics
NASA Technical Reports Server (NTRS)
Hoff, William; Mathis, Donald
1989-01-01
Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.
Autonomous Energy Grids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroposki, Benjamin D; Dall-Anese, Emiliano; Bernstein, Andrey
With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads, just to mention a few - being deployed into power systems, the data deluge from pervasive metering of energy grids, and the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs): systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, can be extremely secure and resilient (self-healing), and self-optimize in real time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid, as well as partake in the optimal operation of a larger grid when interconnected. To realize this vision, this paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary.
ALHAT COBALT: CoOperative Blending of Autonomous Landing Technology
NASA Technical Reports Server (NTRS)
Carson, John M.
2015-01-01
The COBALT project is a flight demonstration of two NASA ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) capabilities that are key for future robotic or human landing GN&C (Guidance, Navigation and Control) systems. The COBALT payload integrates the Navigation Doppler Lidar (NDL) for ultraprecise velocity and range measurements with the Lander Vision System (LVS) for Terrain Relative Navigation (TRN) position estimates. Terrestrial flight tests of the COBALT payload in an open-loop and closed-loop GN&C configuration will be conducted onboard a commercial, rocket-propulsive Vertical Test Bed (VTB) at a test range in Mojave, CA.
Development of a mobile robot for the 1995 AUVS competition
NASA Astrophysics Data System (ADS)
Matthews, Bradley O.; Ruthemeyer, Michael A.; Perdue, David; Hall, Ernest L.
1995-12-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The advantages of a modular system are related to portability and the fact that any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. This cart has full speed control, with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. The speed and steering control are supervised by a 486 computer through a 3-axis motion controller. The obstacle avoidance system is based on a micro-controller interfaced with six ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a steering angle correction back to the computer via the serial line. This design yields a portable, independent system in which even computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data are collected through a commercial tracking device, which communicates the X,Y coordinates of the lane marker to the computer. Testing of these systems yielded positive results, showing that at five mph the vehicle can follow a line while avoiding obstacles. This design, in its modularity, creates a portable autonomous controller applicable to any mobile vehicle with only minor adaptations.
Xu, Zirui; Yang, Wei; You, Kaiming; Li, Wei; Kim, Young-Il
2017-01-01
This paper presents a vehicle autonomous localization method for local areas of coal mine tunnels based on vision sensors and ultrasonic sensors. Barcode tags are deployed in pairs on both sides of the tunnel walls at certain intervals as artificial landmarks. The barcode coding is designed based on the UPC-A code. The global coordinates of the upper-left inner corner point of the feature frame of each barcode tag deployed in the tunnel are uniquely represented by the barcode. Two on-board vision sensors are used to recognize each pair of barcode tags on both sides of the tunnel walls. The distance between the upper-left inner corner point of the feature frame of each barcode tag and the vehicle center point can be determined by using a visual distance projection model. The on-board ultrasonic sensors are used to measure the distance from the vehicle center point to the left side of the tunnel walls. Once the spatial geometric relationship between the barcode tags and the vehicle center point is established, the 3D coordinates of the vehicle center point in the tunnel's global coordinate system can be calculated. Experiments in a straight corridor and an underground tunnel have shown that the proposed method not only quickly recognizes the barcode tags affixed to the tunnel walls, but also keeps the average localization errors in the vehicle center point's plane and vertical coordinates small enough to meet the positioning requirements of autonomous unmanned vehicles in local areas of coal mine tunnels.
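The abstract does not give the projection model, but the geometry it describes can be sketched as follows under simplifying assumptions: a tunnel frame with x along the tunnel, y lateral from the left wall, z vertical, a known tag corner coordinate, a vision-measured tag-to-vehicle distance, and an ultrasonic lateral distance. All names and the sign convention are hypothetical.

```python
import math

def vehicle_position(tag_xyz, d_tag, y_ultrasonic, z_vehicle=0.0):
    """
    tag_xyz      : global (x, y, z) of the tag's upper-left inner corner point
    d_tag        : vision-measured distance from that corner to the vehicle center
    y_ultrasonic : ultrasonic distance from vehicle center to the left wall (y = 0)
    z_vehicle    : vehicle center height (assumed known from the platform)
    """
    tx, ty, tz = tag_xyz
    dy = y_ultrasonic - ty          # lateral offset between vehicle and tag
    dz = z_vehicle - tz             # vertical offset
    along2 = d_tag**2 - dy**2 - dz**2
    if along2 < 0:
        raise ValueError("inconsistent measurements")
    # Vehicle assumed to be past the tag along +x; the sign is a convention.
    return (tx + math.sqrt(along2), y_ultrasonic, z_vehicle)

print(vehicle_position((12.0, 0.0, 1.5), d_tag=3.2, y_ultrasonic=1.8, z_vehicle=1.0))
```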
Vision Based Navigation for Autonomous Cooperative Docking of CubeSats
NASA Astrophysics Data System (ADS)
Pirat, Camille; Ankersen, Finn; Walker, Roger; Gass, Volker
2018-05-01
A realistic rendezvous and docking navigation solution applicable to CubeSats is investigated. A scalability analysis of the ESA Automated Transfer Vehicle Guidance, Navigation & Control (GNC) performances and of the Russian docking system shows that the docking of two CubeSats would require a lateral control performance of the order of 1 cm. Line-of-sight constraints and multipath effects affecting Global Navigation Satellite System (GNSS) measurements in close proximity prevent the use of this sensor for the final approach. This consideration and the high control accuracy requirement led to the use of vision sensors for the final 10 m of the rendezvous and docking sequence. A single monocular camera on the chaser satellite and various sets of Light-Emitting Diodes (LEDs) on the target vehicle ensure the observability of the system throughout the approach trajectory. The simple and novel formulation of the measurement equations unambiguously differentiates rotations from translations between the target and chaser docking ports and allows a navigation performance better than 1 mm at docking. Furthermore, the non-linear measurement equations can be solved to provide an analytic navigation solution. This solution can be used to monitor the navigation filter solution and ensure its stability, adding an extra layer of robustness for autonomous rendezvous and docking. The navigation filter initialization is addressed in detail. The proposed method is able to differentiate LED signals from Sun reflections, as demonstrated by experimental data. The navigation filter uses comprehensive linearised coupled rotation/translation dynamics describing the chaser-to-target docking port motion. The handover between GNSS and vision sensor measurements is assessed. The performance of the navigation function along the approach trajectory is discussed.
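For readers unfamiliar with camera-plus-marker pose estimation, the following sketch shows the generic version of the problem the abstract describes, solved here with OpenCV's PnP solver rather than the paper's analytic formulation. The LED coordinates, pixel detections, and camera intrinsics are illustrative placeholders.

```python
import numpy as np
import cv2

# LED positions on the target docking port, in the target body frame (meters).
# A non-coplanar fifth point helps disambiguate the pose.
led_body = np.array([[0.00, 0.00, 0.0],
                     [0.05, 0.00, 0.0],
                     [0.05, 0.05, 0.0],
                     [0.00, 0.05, 0.0],
                     [0.025, 0.025, 0.02]], dtype=np.float64)

# Detected LED centroids in the chaser camera image (pixels, placeholder data).
led_image = np.array([[320.0, 240.0], [400.0, 238.0], [402.0, 318.0],
                      [318.0, 322.0], [360.0, 275.0]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],     # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                     # assume a calibrated, distortion-free lens

ok, rvec, tvec = cv2.solvePnP(led_body, led_image, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)         # rotation of target frame in camera frame
    print("relative position [m]:", tvec.ravel())
```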
Recent Advances in Bathymetric Surveying of Continental Shelf Regions Using Autonomous Vehicles
NASA Astrophysics Data System (ADS)
Holland, K. T.; Calantoni, J.; Slocum, D.
2016-02-01
Obtaining bathymetric observations on the continental shelf in areas close to shore is often time-consuming and dangerous, especially when uncharted shoals and rocks present safety concerns to survey ships and launches. However, surveys in these regions are critically important to numerical simulation of oceanographic processes, as bathymetry serves as the bottom boundary condition in operational forecasting models. We will present recent progress in bathymetric surveying using both traditional vessels retrofitted for autonomous operations and relatively inexpensive, small-team-deployable Autonomous Underwater Vehicles (AUVs). Both systems include either high-resolution multibeam echo sounders or interferometric sidescan sonar sensors with integrated inertial navigation system capabilities consistent with present commercial-grade survey operations. The advantages and limitations of these two configurations, employing both unmanned and autonomous strategies, are compared using results from several recent survey operations. We will demonstrate how sensor data collected from unmanned platforms can augment or even replace traditional data collection technologies. Oceanographic observations (e.g., sound speed, temperature and currents) collected simultaneously with bathymetry using autonomous technologies provide additional opportunities for advanced data assimilation in numerical forecasts. Discussion focuses on our vision for unmanned and autonomous systems working in conjunction with manned or in-situ systems to optimally and simultaneously collect data in environmentally hostile or difficult-to-reach areas.
Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph
2017-09-26
Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems that can operate micro/nanorobots in complex and dynamically changing environments remains an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning to generate optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system for micro/nanoscale vehicles, relying on vision-based closed-loop control and path planning, is highly promising for autonomous operation in the complex, dynamic, and unpredictable settings expected in a variety of realistic nanoscale applications.
State-Estimation Algorithm Based on Computer Vision
NASA Technical Reports Server (NTRS)
Bayard, David; Brugarolas, Paul
2007-01-01
An algorithm, and software to implement it, are being developed as a means of estimating the state (that is, the position and velocity) of an autonomous vehicle relative to a visible nearby target object, to provide guidance for maneuvering the vehicle. In the original intended application, the autonomous vehicle would be a spacecraft and the nearby object would be a small astronomical body (typically, a comet or asteroid) to be explored by the spacecraft. The algorithm could also be used on Earth in analogous applications -- for example, for guiding underwater robots near such objects of interest as sunken ships, mineral deposits, or submerged mines. It is assumed that the robot would be equipped with a vision system that would include one or more electronic cameras, image-digitizing circuitry, and an image-data-processing computer that would generate feature-recognition data products.
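The abstract does not spell out the estimator, so the following is only a minimal sketch of the general idea: a constant-velocity Kalman filter that fuses vision-derived relative-position fixes into a position/velocity state. All tuning values and the measurement model are illustrative assumptions.

```python
import numpy as np

dt = 0.1
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])      # constant-velocity transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])       # vision measures position only
Q = 1e-4 * np.eye(6)                               # process noise (assumed)
R = 1e-2 * np.eye(3)                               # vision measurement noise (assumed)

x = np.zeros(6)                                    # [position, velocity]
P = np.eye(6)

def step(x, P, z):
    # Predict the state forward one time step.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with a vision fix z: 3D position relative to the target object.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = step(x, P, np.array([10.0, -2.0, 0.5]))
print("estimated position:", x[:3], "velocity:", x[3:])
```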
Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle
Chen, Long; Li, Qingquan; Li, Ming; Zhang, Liang; Mao, Qingzhou
2012-01-01
This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. This system utilizes the cooperation of multiple lasers and cameras to realize several functions necessary for autonomous navigation: road curb detection, lane detection and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on the Z-variance method. Vision-based lane detection is realized by a two-scan method combined with an image model. A Haar-like feature-based method is applied for traffic sign detection, and a SURF matching method is used for sign classification. The results of experiments validate the effectiveness of the proposed algorithms and the whole system.
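A hedged sketch of the detect-then-classify pipeline named above: a Haar cascade proposes sign candidates, then local-feature matching labels them. OpenCV does not ship a traffic-sign cascade, so "signs.xml" is a hypothetical trained cascade, and ORB stands in for SURF (which requires the non-free opencv-contrib build).

```python
import cv2

cascade = cv2.CascadeClassifier("signs.xml")       # hypothetical trained cascade
orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def classify(frame, templates):
    """Detect sign candidates, then label each by best template match count.

    templates: dict mapping sign name -> precomputed ORB descriptors.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    labels = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=4):
        _, desc = orb.detectAndCompute(gray[y:y + h, x:x + w], None)
        if desc is None:
            continue
        # Pick the template whose descriptors match the candidate best.
        scores = {name: len(matcher.match(desc, tdesc))
                  for name, tdesc in templates.items() if tdesc is not None}
        if scores:
            labels.append(((x, y, w, h), max(scores, key=scores.get)))
    return labels
```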
Telerobotic controller development
NASA Technical Reports Server (NTRS)
Otaguro, W. S.; Kesler, L. O.; Land, Ken; Rhoades, Don
1987-01-01
To meet the needs and growth of NASA's Space Station, a modular and generic approach to robotic control was developed that provides near-term implementation with low development cost and the capability to grow into more autonomous systems. The method uses a vision-based robotic controller and a compliant hand integrated with the Remote Manipulator System arm on the Orbiter. A description of the hardware and its system integration is presented.
System of technical vision for autonomous unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Bondarchuk, A. S.
2018-05-01
This paper is devoted to the implementation of an image recognition algorithm using the LabVIEW software. The created virtual instrument is designed to detect objects in frames from a camera mounted on a UAV. The trained classifier is invariant to rotation, as well as to small changes in the camera's viewing angle. Finding objects in the image using particle analysis allows regions of different sizes to be classified. This method allows the technical vision system to determine more accurately the location of objects of interest and their movement relative to the camera.
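The paper's implementation is in LabVIEW; the following is merely an OpenCV analogue of the particle-analysis step it describes: threshold the frame, label connected components ("particles"), and keep regions whose size falls in a plausible range. The thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def find_particles(frame, min_area=50, max_area=5000):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Otsu picks a global threshold separating objects from background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(
        binary, connectivity=8)
    regions = []
    for i in range(1, n):                      # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:       # size-based classification
            regions.append((tuple(centroids[i]), int(area)))
    return regions

print(find_particles(np.full((120, 160, 3), 255, np.uint8)))
```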
Active vision in satellite scene analysis
NASA Technical Reports Server (NTRS)
Naillon, Martine
1994-01-01
In earth observation or planetary exploration it is necessary to have more and more autonomous systems, able to adapt to unpredictable situations. This imposes the use, in artificial systems, of new concepts in cognition, based on the fact that perception should not be separated from recognition and decision-making levels. This means that low-level signal processing (the perception level) should interact with symbolic and high-level processing (the decision level). This paper describes the new concept of active vision, implemented in Distributed Artificial Intelligence by Dassault Aviation following a 'structuralist' principle. An application to spatial image interpretation is given, oriented toward flexible robotics.
Three-Dimensional Images For Robot Vision
NASA Astrophysics Data System (ADS)
McFarland, William D.
1983-12-01
Robots are attracting increased attention in the industrial productivity crisis. As one significant approach for this nation to maintain technological leadership, the need for robot vision has become critical. The 'blind' robot, while occupying an economical niche at present, is severely limited and job-specific, being only one step up from numerically controlled machines. To successfully satisfy robot vision requirements, a three-dimensional representation of a real scene must be provided. Several image acquisition techniques are discussed, with emphasis on laser radar type instruments. The autonomous vehicle is also discussed as a robot form, and the requirements for these applications are considered. The total computer vision system requirement is reviewed with some discussion of the major techniques in the literature for three-dimensional scene analysis.
NASA Technical Reports Server (NTRS)
Brockers, Roland; Susca, Sara; Zhu, David; Matthies, Larry
2012-01-01
Direct-lift micro air vehicles have important applications in reconnaissance. In order to conduct persistent surveillance in urban environments, it is essential that these systems can perform autonomous landing maneuvers on elevated surfaces that provide high vantage points, without the help of any external sensor and with a fully contained on-board software solution. In this paper, we present a micro air vehicle that uses vision feedback from a single downward-looking camera to navigate autonomously and detect an elevated landing platform as a surrogate for a rooftop. Our method requires no special preparation (labels or markers) of the landing location. Rather, leveraging the planar character of urban structure, the landing platform detection system uses a planar homography decomposition to detect landing targets and produce approach waypoints for autonomous landing. The vehicle control algorithm uses a Kalman filter based approach for pose estimation to fuse visual SLAM (PTAM) position estimates with IMU data to correct for high-latency SLAM inputs and to increase the position estimate update rate in order to improve control stability. Scale recovery is achieved using inputs from a sonar altimeter. In experimental runs, we demonstrate a real-time implementation running on-board a micro aerial vehicle that is fully self-contained and independent from any external sensor information. With this method, the vehicle is able to search autonomously for a landing location and perform precision landing maneuvers on the detected targets.
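The planar-homography step mentioned above can be sketched generically with OpenCV: match features between two views of a planar surface, estimate the homography with RANSAC, and decompose it into candidate plane-relative motions. The camera matrix is a placeholder, and selecting the physically valid decomposition (e.g., via the visibility constraint) is omitted; this is not the paper's implementation.

```python
import cv2
import numpy as np

K = np.array([[500.0, 0.0, 320.0],     # assumed camera intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def plane_motion(pts_prev, pts_cur):
    """pts_prev, pts_cur: Nx2 float arrays of matched image points."""
    H, inliers = cv2.findHomography(pts_prev, pts_cur, cv2.RANSAC, 3.0)
    if H is None:
        return None
    # Returns up to four (R, t, n) candidates; t is scaled by plane distance.
    n_sol, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    return [(Rs[i], ts[i], normals[i]) for i in range(n_sol)]
```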
Autonomous Aerial Refueling Ground Test Demonstration—A Sensor-in-the-Loop, Non-Tracking Method
Chen, Chao-I; Koseluk, Robert; Buchanan, Chase; Duerner, Andrew; Jeppesen, Brian; Laux, Hunter
2015-01-01
An essential capability for an unmanned aerial vehicle (UAV) to extend its airborne duration without increasing the size of the aircraft is autonomous aerial refueling (AAR). This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue style autonomous aerial refueling tasks by combining sensitivity adjustments of a 3D Flash LIDAR camera with computer vision based image-processing techniques. The method overcomes the inherent ambiguity issues of reconstructing 3D information from traditional 2D images by taking advantage of ready-to-use 3D point cloud data from the camera, followed by well-established computer vision techniques. These techniques include curve fitting algorithms and outlier removal with the random sample consensus (RANSAC) algorithm to reliably estimate the drogue center in 3D space, as well as to establish the relative position between the probe and the drogue. To demonstrate the feasibility of the proposed method on a real system, a ground navigation robot was designed and fabricated. Results presented in the paper show that, using images acquired from a 3D Flash LIDAR camera as real-time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between the robot and the target autonomously. PMID:25970254
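A hedged sketch of the RANSAC stage the abstract names: fit a circle to candidate drogue-rim points (here assumed already projected into the drogue plane) and estimate its center while rejecting outliers. Sample counts and tolerances are illustrative, not the paper's values.

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Circumcenter and radius of the circle through three 2D points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        return None                         # collinear sample, no circle
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    c = np.array([ux, uy])
    return c, np.linalg.norm(np.asarray(p1) - c)

def ransac_circle(pts, iters=200, tol=0.01, rng=np.random.default_rng(0)):
    """pts: Nx2 array of rim candidates; returns (center, radius) of best fit."""
    best, best_inliers = None, 0
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        fit = circle_from_3pts(*sample)
        if fit is None:
            continue
        c, r = fit
        # Inliers lie within tol of the hypothesized circle.
        inliers = np.sum(np.abs(np.linalg.norm(pts - c, axis=1) - r) < tol)
        if inliers > best_inliers:
            best, best_inliers = (c, r), inliers
    return best
```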
GAELIC: Consortial Strategies for Survival.
ERIC Educational Resources Information Center
Edwards, Heather
This paper discusses GAELIC (Gauteng and Environs Library Consortium), a regional consortium of South African higher education libraries with the vision of creating a virtual library by linking together autonomous libraries via networks. Topics covered include: (1) GAELIC objectives; (2) selection of INNOPAC as the common library system for GAELIC…
NASA Astrophysics Data System (ADS)
Digney, Bruce L.
2007-04-01
Unmanned vehicle systems are an attractive technology for the military, but their promises have remained largely undelivered. There currently exist fielded remote-controlled UGVs and high-altitude UAVs whose benefits are based on standoff operation in low-complexity environments, with control reaction-time requirements low enough to allow teleoperation. While effective within their limited operational niche, such systems do not meet the vision of future military UxV scenarios. Such scenarios envision unmanned vehicles operating effectively in complex environments and situations with high levels of independence and effective coordination with other machines and humans pursuing high-level, changing and sometimes conflicting goals. While these aims are clearly ambitious, they do provide necessary targets and inspiration, with hopes of fielding useful near-term semi-autonomous unmanned systems. Autonomy involves many fields of research, including machine vision, artificial intelligence, control theory, machine learning and distributed systems, all of which are intertwined and share the goal of creating more versatile, broadly applicable algorithms. Cohort is a major Applied Research Program (ARP) led by Defence R&D Canada (DRDC) Suffield, and its aim is to develop coordinated teams of unmanned vehicles (UxVs) for urban environments. This paper will discuss the critical science being addressed by DRDC in developing semi-autonomous systems.
A stereo vision-based obstacle detection system in vehicles
NASA Astrophysics Data System (ADS)
Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun
2008-02-01
Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane. Then, the position parameters of the obstacles and leading vehicles can be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
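As background to the abstract above, the ranging that stereo obstacle detection rests on can be sketched with OpenCV's semi-global matcher: compute a disparity map and convert disparity to depth via Z = f * B / d. Focal length and baseline are placeholder calibration values, not the paper's rig.

```python
import cv2
import numpy as np

FOCAL_PX = 700.0        # focal length in pixels (assumed calibration)
BASELINE_M = 0.25       # stereo baseline in meters (assumed rig)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)

def depth_map(left_gray, right_gray):
    """Rectified 8-bit grayscale pair -> per-pixel depth in meters."""
    # SGBM returns fixed-point disparity scaled by 16.
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full(disp.shape, np.inf, np.float32)
    valid = disp > 0
    depth[valid] = FOCAL_PX * BASELINE_M / disp[valid]
    return depth            # nearby obstacles have small depth values
```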
Networks for image acquisition, processing and display
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.
1990-01-01
The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.
Machine Vision Applied to Navigation of Confined Spaces
NASA Technical Reports Server (NTRS)
Briscoe, Jeri M.; Broderick, David J.; Howard, Ricky; Corder, Eric L.
2004-01-01
The reliability of space-related assets has been emphasized after the second loss of a Space Shuttle. The intricate nature of the hardware being inspected often requires complete disassembly to perform a thorough inspection, which can be difficult as well as costly. Furthermore, it is imperative that the hardware under inspection not be altered in any manner other than that which is intended. In these cases the use of machine vision can allow for inspection with greater frequency using less intrusive methods. Such systems can provide feedback to guide not only manually controlled instrumentation but autonomous robotic platforms as well. This paper details a method of using machine vision to provide such sensing capabilities in a compact package. A single camera is used in conjunction with a projected reference grid to ascertain precise distance measurements. The design of the sensor focuses on the use of conventional components in an unconventional manner, with the goal of providing a solution for systems that do not require or cannot accommodate more complex vision systems.
NASA Technical Reports Server (NTRS)
Ettinger, Scott M.; Nechyba, Michael C.; Ifju, Peter G.; Wazak, Martin
2002-01-01
Substantial progress has been made recently towards designing, building and test-flying remotely piloted Micro Air Vehicles (MAVs). We seek to complement this progress in overcoming the aerodynamic obstacles to flight at very small scales with a vision-based stability and autonomy system. The developed system is based on a robust horizon detection algorithm, which we discuss in greater detail in a companion paper. In this paper, we first motivate the use of computer vision for MAV autonomy, arguing that, given current sensor technology, vision may be the only practical approach to the problem. We then briefly review our statistical vision-based horizon detection algorithm, which has been demonstrated at 30 Hz with over 99.9% correct horizon identification. Next we develop robust schemes for the detection of extreme MAV attitudes, where no horizon is visible, and for the detection of horizon estimation errors due to external factors such as video transmission noise. Finally, we discuss our feedback controller for self-stabilized flight, and report results on vision-based autonomous flights of duration exceeding ten minutes.
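A minimal sketch in the spirit of statistical horizon detection: for each candidate line, split the image into "sky" and "ground" pixel sets and score how internally uniform the two classes are (here via the determinants of their RGB covariances; smaller is tighter). The search grid and scoring details are illustrative, not the 30 Hz flight implementation.

```python
import numpy as np

def line_score(img, m, b):
    """Lower score = candidate line y = m*x + b separates two tight classes."""
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sky = img[ys < m * xs + b].reshape(-1, 3)
    ground = img[ys >= m * xs + b].reshape(-1, 3)
    if len(sky) < 100 or len(ground) < 100:
        return np.inf                      # degenerate split, reject
    return (np.linalg.det(np.cov(sky.T)) +
            np.linalg.det(np.cov(ground.T)))

def find_horizon(img, slopes=np.linspace(-1, 1, 21),
                 offsets=np.linspace(0, 1, 21)):
    h = img.shape[0]
    candidates = [(m, b * h) for m in slopes for b in offsets]
    return min(candidates, key=lambda mb: line_score(img, *mb))

frame = np.random.rand(60, 80, 3)          # stand-in for a video frame
print("horizon (slope, intercept):", find_horizon(frame))
```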
Machine vision and appearance based learning
NASA Astrophysics Data System (ADS)
Bernstein, Alexander
2017-03-01
Smart algorithms are used in machine vision to organize and extract high-level information from the available data. The resulting high-level understanding of the content of images, received from a given visual sensing system and belonging to an appearance space, is only a key first step in solving specific tasks such as mobile robot navigation in uncertain environments, road detection in autonomous driving systems, etc. Appearance-based learning has become very popular in the field of machine vision. In general, the appearance of a scene is a function of the scene content, the lighting conditions, and the camera position. The mobile robot localization problem is considered in a machine learning framework via appearance space analysis. This problem is reduced to a regression-on-an-appearance-manifold problem, and new regression-on-manifolds methods are used for its solution.
An Expert Vision System for Autonomous Land Vehicle Road Following.
1988-01-01
TR-138, Center for Automation Research, University of Maryland, July 1985. [Minsky] Minsky, Marvin, "A Framework for Representing Knowledge", in ... relationships, frames have been chosen to model objects [Minsky]. A frame is a data structure containing a set of slots (or attributes) which encapsulate
Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase
Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling
2015-01-01
In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where GPS visibility is often low, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. Nevertheless, the challenging issue is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate. PMID:26378533
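For orientation, the residual-based consistency check that RAIM-style methods build on can be sketched as follows: solve least squares over all measurements (GPS pseudoranges plus, in VA-RAIM, vision-derived landmark ranges) and raise an alarm when the residual sum of squares exceeds a chi-square threshold. The geometry matrix, noise level, and false-alarm probability below are illustrative, not the paper's values.

```python
import numpy as np
from scipy.stats import chi2

def raim_test(G, y, sigma=1.0, p_fa=1e-3):
    """G: n x 4 linearized geometry matrix, y: n measurement residuals."""
    n = len(y)
    x, *_ = np.linalg.lstsq(G, y, rcond=None)     # position + clock correction
    r = y - G @ x                                  # post-fit residuals
    test_stat = float(r @ r) / sigma**2
    threshold = chi2.ppf(1 - p_fa, df=n - 4)      # 4 solved-for states
    return x, test_stat, test_stat > threshold    # True -> integrity alarm

rng = np.random.default_rng(1)
G = np.hstack([rng.normal(size=(8, 3)), np.ones((8, 1))])   # 8 observations
y = rng.normal(scale=1.0, size=8)
print(raim_test(G, y))
```

Adding vision landmarks simply appends rows to G and y, which is why the extra measurements improve both availability and fault detection.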
A parallel implementation of a multisensor feature-based range-estimation method
NASA Technical Reports Server (NTRS)
Suorsa, Raymond E.; Sridhar, Banavar
1993-01-01
There are many proposed vision-based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, will require very high processing rates to achieve real-time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second, depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and a shared-memory parallel computer.
Autonomous Cryogenics Loading Operations Simulation Software: Knowledgebase Autonomous Test Engineer
NASA Technical Reports Server (NTRS)
Wehner, Walter S., Jr.
2013-01-01
Working on the ACLO (Autonomous Cryogenics Loading Operations) project, I have had the opportunity to add functionality to the physics simulation software known as KATE (Knowledgebase Autonomous Test Engineer), create a new application allowing WYSIWYG (what-you-see-is-what-you-get) creation of KATE schematic files, and begin a preliminary design and implementation of a new subsystem that will provide vision services on the IHM (Integrated Health Management) bus. The functionality I added to KATE over the past few months includes a dynamic visual representation of the fluid height in a pipe, based on the number of gallons of fluid in the pipe, and an implementation of the IHM bus connection within KATE. I also fixed a broken feature in the system called the Browser Display, implemented many bug fixes and made changes to the GUI (Graphical User Interface).
Building a robust vehicle detection and classification module
NASA Astrophysics Data System (ADS)
Grigoryev, Anton; Khanipov, Timur; Koptelov, Ivan; Bocharov, Dmitry; Postnikov, Vassily; Nikolaev, Dmitry
2015-12-01
The growing adoption of intelligent transportation systems (ITS) and autonomous driving requires robust real-time solutions for various event and object detection problems. Most real-world systems still cannot rely on computer vision algorithms and employ a wide range of costly additional hardware such as LIDARs. In this paper we explore engineering challenges encountered in building a highly robust visual vehicle detection and classification module that works under a broad range of environmental and road conditions. The resulting technology is competitive with traditional non-visual means of traffic monitoring. The main focus of the paper is on software and hardware architecture, algorithm selection and domain-specific heuristics that help the computer vision system avoid implausible answers.
NASA Astrophysics Data System (ADS)
van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario
2017-11-01
Although machine learning holds enormous promise for autonomous space robots, it is currently not employed because of the inherently uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment, even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo vision equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment performed on board the International Space Station (ISS) on October 8th, 2015, with the MIT/NASA SPHERES VERTIGO satellite. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First, astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
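The self-supervised scheme described above can be sketched in a few lines: while stereo works, each monocular feature vector is paired with the stereo average depth as a free training label, and a simple regressor then predicts depth from one camera alone. The linear model and synthetic data below are illustrative stand-ins for the experiment's richer visual features.

```python
import numpy as np

class MonoDepthSSL:
    """Linear regressor trained on (monocular features, stereo depth) pairs."""
    def __init__(self, dim):
        self.w = np.zeros(dim + 1)               # weights plus bias term

    def fit(self, feats, stereo_depths):
        # Stereo depths act as trusted ground truth gathered during operation.
        X = np.hstack([feats, np.ones((len(feats), 1))])
        self.w, *_ = np.linalg.lstsq(X, stereo_depths, rcond=None)

    def predict(self, feat):
        # Usable even after one camera fails: only monocular features needed.
        return float(np.append(feat, 1.0) @ self.w)

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 16))               # stand-in monocular features
depths = feats @ rng.normal(size=16) + 5.0       # stand-in stereo "ground truth"
model = MonoDepthSSL(16)
model.fit(feats, depths)
print(model.predict(feats[0]), depths[0])
```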
Open-Loop Performance of COBALT Precision Landing Payload on a Commercial Sub-Orbital Rocket
NASA Technical Reports Server (NTRS)
Restrepo, Carolina I.; Carson, John M., III; Amzajerdian, Farzin; Seubert, Carl R.; Lovelace, Ronney S.; McCarthy, Megan M.; Tse, Teming; Stelling, Richard; Collins, Steven M.
2018-01-01
An open-loop flight test campaign of the NASA COBALT (CoOperative Blending of Autonomous Landing Technologies) platform was conducted onboard the Masten Xodiac suborbital rocket testbed. The COBALT platform integrates NASA Guidance, Navigation and Control (GN&C) sensing technologies for autonomous, precise soft landing, including the Navigation Doppler Lidar (NDL) velocity and range sensor and the Lander Vision System (LVS) Terrain Relative Navigation (TRN) system. A specialized navigation filter running onboard COBALT fuses the NDL and LVS data in real time to produce a navigation solution that is independent of GPS and suitable for future, autonomous, planetary, landing systems. COBALT was a passive payload during the open loop tests. COBALT's sensors were actively taking data and processing it in real time, but the Xodiac rocket flew with its own GPS-navigation system as a risk reduction activity in the maturation of the technologies towards space flight. A future closed-loop test campaign is planned where the COBALT navigation solution will be used to fly its host vehicle.
Trauma Pod: a semi-automated telerobotic surgical system.
Garcia, Pablo; Rosen, Jacob; Kapoor, Chetan; Noakes, Mark; Elbert, Greg; Treat, Michael; Ganous, Tim; Hanson, Matt; Manak, Joe; Hasser, Chris; Rohler, David; Satava, Richard
2009-06-01
The Trauma Pod (TP) vision is to develop a rapidly deployable robotic system to perform critical acute stabilization and/or surgical procedures, autonomously or in a teleoperative mode, on wounded soldiers in the battlefield who might otherwise die before treatment in a combat hospital could be provided. In the first phase of a project pursuing this vision, a robotic TP system was developed and its capability demonstrated by performing selected surgical procedures on a patient phantom. The system demonstrates the feasibility of performing acute stabilization procedures with the patient being the only human in the surgical cell. The teleoperated surgical robot is supported by autonomous robotic arms and subsystems that carry out scrub-nurse and circulating-nurse functions. Tool change and supply delivery are performed automatically and at least as fast as performed manually by nurses. Tracking and counting of the supplies is performed automatically. The TP system also includes a tomographic X-ray facility for patient diagnosis and two-dimensional (2D) fluoroscopic data to support interventions. The vast amount of clinical protocol data generated in the TP system is recorded automatically. Automation and teleoperation capabilities form the basis for a more comprehensive acute diagnostic and management platform that will provide life-saving care in environments where surgical personnel are not present.
ACE [Adult and Community Education] into the 21st Century: A Vision.
ERIC Educational Resources Information Center
Adult, Community, and Further Education Board, Melbourne (Australia).
This document outlines a vision for adult and community education (ACE) in Victoria for the next 3 years and provides a broad map of how to reach that vision. A description of the context is followed by the ACE vision statement: ACE delivers accessible, quality, and timely learning in autonomous, community settings as a valued and essential…
NASA Astrophysics Data System (ADS)
Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan
2017-12-01
Autonomous aerial refueling is a key technology that can significantly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is the key to such a capability. A drogue pose estimation method based on an infrared vision sensor is introduced, with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least-squares ellipse fitting and convex hulls in OpenCV, a feature point matching and interference point elimination method is proposed. In addition, considering conditions in which some infrared LEDs are damaged or occluded, a missing point estimation method based on perspective transformation and affine transformation is designed. Finally, an accurate and robust pose estimation algorithm, improved by the runner-root algorithm, is proposed. The feasibility of the designed visual measurement system is demonstrated by flight test, and the results indicate that our proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even in poor conditions.
Autonomous onboard optical processor for driving aid
NASA Astrophysics Data System (ADS)
Attia, Mondher; Servel, Alain; Guibert, Laurent
1995-01-01
We take advantage of recent technological advances in the field of ferroelectric liquid crystal on silicon backplane optoelectronic devices. These are well suited to performing massively parallel processing tasks. That choice enables the design of low-cost vision systems and allows the implementation of an on-board system. We focus on transport applications such as road sign recognition. Preliminary in-car experimental results are presented.
Ground vehicle control at NIST: From teleoperation to autonomy
NASA Technical Reports Server (NTRS)
Murphy, Karl N.; Juberts, Maris; Legowik, Steven A.; Nashman, Marilyn; Schneiderman, Henry; Scott, Harry A.; Szabo, Sandor
1994-01-01
NIST is applying its Real-time Control System (RCS) methodology to the control of ground vehicles, both for the U.S. Army Research Lab, as part of the DOD's Unmanned Ground Vehicles program, and for the Department of Transportation's Intelligent Vehicle/Highway Systems (IVHS) program. The actuated vehicle, a military HMMWV, has motors for steering, brake, throttle, etc. and sensors for the dashboard gauges. For military operations, the vehicle has two modes of operation: a teleoperation mode, where an operator remotely controls the vehicle over an RF communications network; and a semi-autonomous mode called retro-traverse, where the control system uses an inertial navigation system to steer the vehicle along a prerecorded path. For the IVHS work, intelligent vision processing elements replace the human teleoperator to achieve autonomous, visually guided road following.
Steering of an automated vehicle in an unstructured environment
NASA Astrophysics Data System (ADS)
Kanakaraju, Sampath; Shanmugasundaram, Sathish K.; Thyagarajan, Ramesh; Hall, Ernest L.
1999-08-01
The purpose of this paper is to describe a high-level path planning logic which processes the data from a vision system and an ultrasonic obstacle avoidance system and steers an autonomous mobile robot between obstacles. The test bed was an autonomous robot built at the University of Cincinnati, and this logic was tested and debugged on this machine. Attempts have already been made to incorporate a fuzzy system on a similar robot, and this paper extends them to take advantage of the robot's ZTR capability. Using the integrated vision system, the vehicle senses its location and orientation. A rotating ultrasonic sensor is used to map the location and size of possible obstacles. With these inputs, the fuzzy logic controls the speed and steering decisions of the robot. With the incorporation of this logic, Bearcat II has been very successful in avoiding obstacles. This was demonstrated in the Ground Robotics Competition conducted by the AUVS in June 1999, where it travelled a distance of 154 feet on a 10-ft-wide path riddled with obstacles. This logic proved to be a significant contributing factor in this feat of Bearcat II.
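A hedged sketch of a Mamdani-style fuzzy steering rule base of the kind the paper describes: inputs are the vision-derived line offset and the sonar obstacle bearing; the output is a steering command. The membership shapes and rules are illustrative, not the Bearcat II tuning.

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def steer(line_offset, obstacle_bearing):
    """Inputs normalized to [-1, 1]; returns steering in [-0.8, 0.8]."""
    # Fuzzify: how strongly is each input "left", "center/none", "right"?
    off = {"L": tri(line_offset, -1.0, -1.0, 0.0),
           "C": tri(line_offset, -0.5, 0.0, 0.5),
           "R": tri(line_offset, 0.0, 1.0, 1.0)}
    obs = {"L": tri(obstacle_bearing, -1.0, -1.0, 0.0),
           "N": tri(obstacle_bearing, -0.2, 0.0, 0.2),
           "R": tri(obstacle_bearing, 0.0, 1.0, 1.0)}
    # Rules: steer back toward the line, but away from obstacles.
    cmds = {-0.8: max(off["R"], obs["R"]),     # steer hard left
             0.0: min(off["C"], obs["N"]),     # hold course
             0.8: max(off["L"], obs["L"])}     # steer hard right
    # Centroid defuzzification of the weighted rule outputs.
    num = sum(c * w for c, w in cmds.items())
    den = sum(cmds.values()) + 1e-9
    return num / den

print(steer(line_offset=0.4, obstacle_bearing=0.0))   # drifted right -> steer left
```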
Navigation of military and space unmanned ground vehicles in unstructured terrains
NASA Technical Reports Server (NTRS)
Lescoe, Paul; Lavery, David; Bedard, Roger
1991-01-01
Development of unmanned vehicles for local navigation in terrains unstructured by humans is reviewed. Modes of navigation include teleoperation or remote control, computer assisted remote driving (CARD), and semiautonomous navigation (SAN). A first implementation of a CARD system was successfully tested using the Robotic Technology Test Vehicle developed by Jet Propulsion Laboratory. Stereo pictures were transmitted to a remotely located human operator, who performed the sensing, perception, and planning functions of navigation. A computer provided range and angle measurements and the path plan was transmitted to the vehicle which autonomously executed the path. This implementation is to be enhanced by providing passive stereo vision and a reflex control system for autonomously stopping the vehicle if blocked by an obstacle. SAN achievements include implementation of a navigation testbed on a six wheel, three-body articulated rover vehicle, development of SAN algorithms and code, integration of SAN software onto the vehicle, and a successful feasibility demonstration that represents a step forward towards the technology required for long-range exploration of the lunar or Martian surface. The vehicle includes a passive stereo vision system with real-time area-based stereo image correlation, a terrain matcher, a path planner, and a path execution planner.
Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles
Zhang, Linhuan; Ahamed, Tofael; Zhang, Yan; Gao, Pengbo; Takigawa, Tomohiro
2016-01-01
The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle. The follower vehicle automatically tracks the leader vehicle. With such a system, a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and the follower vehicle, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced by using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle. A proportional–integral–derivative (PID) controller was introduced to maintain the required distance between the leader and the follower vehicle. Field experiments were conducted to evaluate the sensing and tracking performances of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. In the case of linear trajectory tracking, the RMS errors were 6.5 cm, 8.9 cm and 16.4 cm for straight, turning and zigzag paths, respectively. Again, for parallel trajectory tracking, the root mean square (RMS) errors were found to be 7.1 cm, 14.6 cm and 14.0 cm for straight, turning and zigzag paths, respectively. The navigation performances indicated that the autonomous follower vehicle was able to follow the leader vehicle, and the tracking accuracy was found to be satisfactory. Therefore, the developed leader-follower system can be implemented for the harvesting of grains, using a combine as the leader and an unloader as the autonomous follower vehicle. PMID:27110793
Autonomous navigation and control of a Mars rover
NASA Technical Reports Server (NTRS)
Miller, D. P.; Atkinson, D. J.; Wilcox, B. H.; Mishkin, A. H.
1990-01-01
A Mars rover will need to be able to navigate autonomously for kilometers at a time. This paper outlines the sensing, perception, planning, and execution-monitoring systems that are currently being designed for the rover. The sensing is based around stereo vision. The interpretation of the images uses a registration of the depth map with a global height map provided by an orbiting spacecraft. Safe, low-energy paths are then planned through the map, and expectations of what the rover's articulation sensors should sense are generated. These expectations are then used to ensure that the planned path is being executed correctly.
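The "safe, low-energy paths" idea above can be sketched generically as a graph search over a height-map grid where each step's cost grows with climb, so gentle routes win. Dijkstra search, the grid resolution, and the cost weighting are illustrative assumptions, not the rover's planner.

```python
import heapq
import numpy as np

def plan_path(height, start, goal, climb_weight=10.0):
    """Dijkstra over a 2D height map; cost = distance + weighted climb."""
    h, w = height.shape
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, np.inf):
            continue                          # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < h and 0 <= nc < w):
                continue
            climb = abs(height[nr, nc] - height[r, c])
            nd = d + 1.0 + climb_weight * climb   # energy term penalizes slopes
            if nd < dist.get((nr, nc), np.inf):
                dist[(nr, nc)] = nd
                prev[(nr, nc)] = cell
                heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [], goal
    while cell in prev:                       # walk back from goal to start
        path.append(cell)
        cell = prev[cell]
    return [start] + path[::-1]

terrain = np.abs(np.random.default_rng(2).normal(size=(20, 20)))
print(plan_path(terrain, (0, 0), (19, 19))[:5])
```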
Supervised autonomous robotic soft tissue surgery.
Shademan, Azad; Decker, Ryan S; Opfermann, Justin D; Leonard, Simon; Krieger, Axel; Kim, Peter C W
2016-05-04
The current paradigm of robot-assisted surgeries (RASs) depends entirely on an individual surgeon's manual capability. Autonomous robotic surgery, removing the surgeon's hands, promises enhanced efficacy, safety, and improved access to optimized surgical techniques. Surgeries involving soft tissue have not been performed autonomously because of technological limitations, including lack of vision systems that can distinguish and track the target tissues in dynamic surgical environments and lack of intelligent algorithms that can execute complex surgical tasks. We demonstrate in vivo supervised autonomous soft tissue surgery in an open surgical setting, enabled by a plenoptic three-dimensional and near-infrared fluorescent (NIRF) imaging system and an autonomous suturing algorithm. Inspired by the best human surgical practices, a computer program generates a plan to complete complex surgical tasks on deformable soft tissue, such as suturing and intestinal anastomosis. We compared metrics of anastomosis (including the consistency of suturing informed by the average suture spacing, the pressure at which the anastomosis leaked, the number of mistakes that required removing the needle from the tissue, completion time, and lumen reduction in intestinal anastomoses) between our supervised autonomous system, manual laparoscopic surgery, and clinically used RAS approaches. Despite dynamic scene changes and tissue movement during surgery, we demonstrate that the outcome of supervised autonomous procedures is superior to surgery performed by expert surgeons and RAS techniques in ex vivo porcine tissues and in living pigs. These results demonstrate the potential for autonomous robots to improve the efficacy, consistency, functional outcome, and accessibility of surgical techniques. Copyright © 2016, American Association for the Advancement of Science.
A computer architecture for intelligent machines
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, G. N.
1992-01-01
The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
ROS-based ground stereo vision detection: implementation and experiments.
Hu, Tianjiang; Zhao, Boxin; Tang, Dengqing; Zhang, Daibing; Kong, Weiwei; Shen, Lincheng
This article concentrates on an open-source implementation of flying object detection in cluttered scenes. It is of significance for ground stereo-aided autonomous landing of unmanned aerial vehicles. The ground stereo vision guidance system is presented with details on system architecture and workflow. The Chan-Vese detection algorithm is further considered and implemented in the Robot Operating System (ROS) environment. A data-driven interactive scheme is developed to collect datasets for parameter tuning and performance evaluation. Outdoor flying-vehicle experiments captured the stereo sequential image dataset and recorded simultaneous data from the pan-and-tilt unit, onboard sensors and differential GPS. Experimental results using the collected dataset validate the effectiveness of the published ROS-based detection algorithm.
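For flavor, here is a hedged sketch of a minimal ROS node in the spirit of the system above: subscribe to a ground-camera image topic, run Chan-Vese segmentation (scikit-image's implementation stands in for the published one), and log the segmented foreground size. The node name and topic are assumptions; the published package has its own architecture and tuning.

```python
import rospy
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
from skimage.segmentation import chan_vese

bridge = CvBridge()

def image_cb(msg):
    # Convert the ROS image message to a grayscale array.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="mono8")
    # Chan-Vese expects a float image; segment the flying object from the sky.
    seg = chan_vese(frame.astype(np.float64) / 255.0)
    rospy.loginfo("foreground pixels: %d", int(seg.sum()))

if __name__ == "__main__":
    rospy.init_node("chan_vese_detector")
    rospy.Subscriber("/ground_stereo/left/image_raw", Image, image_cb)
    rospy.spin()
```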
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.; de Saussure, G.; Spelt, P.F.
1988-01-01
This paper describes recent research activities at the Center for Engineering Systems Advanced Research (CESAR) in the area of sensor-based reasoning, with emphasis given to their application and implementation on our HERMIES-IIB autonomous mobile vehicle. These activities, including navigation and exploration in a-priori unknown and dynamic environments, goal recognition, vision-guided manipulation and sensor-driven machine learning, are discussed within the framework of a scenario in which an autonomous robot is asked to navigate through an unknown dynamic environment, explore, find and dock at a control panel, read and understand the status of the panel's meters and dials, learn the functioning of the process control panel, and successfully manipulate the control devices of the panel to solve a maintenance emergency problem. A demonstration of the successful implementation of the algorithms on our HERMIES-IIB autonomous robot for resolution of this scenario is presented. Conclusions are drawn concerning the applicability of the methodologies to more general classes of problems, and implications for future work on sensor-driven reasoning for autonomous robots are discussed. 8 refs., 3 figs.
Automation and robotics for Space Station in the twenty-first century
NASA Technical Reports Server (NTRS)
Willshire, K. F.; Pivirotto, D. L.
1986-01-01
Space Station telerobotics will evolve beyond the initial capability into a smarter and more capable system as we enter the twenty-first century. Current technology programs, including several proposed ground and flight experiments to enable development of this system, are described. Advancements in the areas of machine vision, smart sensors, advanced control architecture, manipulator joint design, end effector design, and artificial intelligence will provide increasingly more autonomous telerobotic systems.
Speed control for a mobile robot
NASA Astrophysics Data System (ADS)
Kolli, Kaylan C.; Mallikarjun, Sreeram; Kola, Krishnamohan; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of a speed control for a modular autonomous mobile robot controller. Speed control of the traction motor is essential for safe operation of a mobile robot; autonomous operation of a vehicle requires safe, runaway-free, and collision-free motion. A mobile robot test-bed has been constructed using a golf cart base. The computer-controlled speed control has been implemented and works with guidance provided by a vision system and obstacle avoidance provided by ultrasonic sensor systems. A 486 computer supervises the speed control through a 3-axis motion controller, and the traction motor is driven via the computer by an EV-1 speed control. Testing of the system was done both in the lab and on an outdoor course with positive results. This design is a prototype, and suggestions for improvements are also given. The autonomous speed controller is applicable to any computer-controlled electric-drive mobile vehicle.
Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio
2016-12-17
Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as those relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) by exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility of exploiting magnetic- and inertial-independent accurate attitude information.
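The core of this abstract is a virtual navigation sensor whose output is folded into an Extended Kalman Filter. As a minimal, hedged sketch of that fusion cycle (not the authors' implementation; the constant-velocity state model, noise values, and names below are illustrative assumptions), the generic predict/update loop looks like this:

```python
import numpy as np

class SimpleEKF:
    """Bare-bones predict/update cycle; F and H stand in for the
    (linearized) state-transition and measurement Jacobians."""

    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, F):
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, H):
        y = z - H @ self.x                    # innovation
        S = H @ self.P @ H.T + self.R         # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(self.x.size) - K @ H) @ self.P

# Illustrative single-axis constant-velocity model: inertial prediction at
# step dt, position fix from a DGPS/Vision-style virtual sensor.
dt = 0.1
ekf = SimpleEKF(x0=np.zeros(2), P0=np.eye(2),
                Q=0.01 * np.eye(2), R=np.array([[0.25]]))
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
ekf.predict(F)
ekf.update(np.array([1.2]), H)
```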
Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, W.J.; Chun, W.H.
1990-01-01
The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.
Shared Perception for Autonomous Systems
2015-08-24
minivan or sport utility vehicle (SUV) may be around 1.8 meters tall. Next, a height distribution of ~ 1.5, 0.3 was used to project the car detections...Vision, vol. 60, no. 2, 2004, pp. 91–110. 4. N. Snavely, S.M. Seitz, and R. Szeliski, "Photo Tourism: Exploring Photo Collections in 3D," Proceedings of
What Would a Socially Just Education System Look like?
ERIC Educational Resources Information Center
Nandy, Lisa
2012-01-01
The Coalition came to power in 2010 with a radical, controversial vision for education and set about transforming the UK landscape. The Secretary of State described how, in future, autonomous schools and colleges would compete with one another to raise standards, high performers would expand and others improve or disappear. His aim is for children…
Teacher Researchers in Action Research in a Heavily Centralized Education System
ERIC Educational Resources Information Center
Kayaoglu, M. Naci
2015-01-01
Action research is characterized by a new paradigm of empowering teachers to monitor their own practices in a more autonomous manner with a vision of challenging and improving their own techniques of teaching through their own participatory research. Yet in spite of this apparently radical shift in the function of the teacher from the constant…
Magician Simulator. A Realistic Simulator for Heterogenous Teams of Autonomous Robots
2011-01-18
IMU, and LIDAR systems for identifying and tracking mobile OOI at long range (>20m), providing early warnings and allowing neutralization from a... LIDAR and Computer Vision template-based feature tracking approaches. Mapping was solved through Multi-Agent particle-filter based Simultaneous...Localization and Mapping (SLAM). Our system contains two maps, a physical map and an influence map (location of hostile OOI, explored and unexplored
A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system
NASA Astrophysics Data System (ADS)
Ge, Zhuo; Zhu, Ying; Liang, Guanhao
2017-01-01
To provide 3D environment information for the quadruped robot autonomous navigation system while walking over rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the stereo-matched edge pixel pairs, the 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs the 3D scene quickly and efficiently.
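For a rectified stereo pair, the final step described above, estimating 3D coordinates from matched edge pixel pairs via the binocular imaging model, reduces to the standard triangulation Z = fB/d. A small illustrative sketch under that assumed geometry (parameter names are ours, not the paper's):

```python
import numpy as np

def triangulate(pts_left, pts_right, f, B, cx, cy):
    """3D points from matched pixel pairs under a rectified binocular model.
    f: focal length in pixels, B: baseline in meters, (cx, cy): principal point."""
    pl = np.asarray(pts_left, dtype=float)
    pr = np.asarray(pts_right, dtype=float)
    d = np.maximum(pl[:, 0] - pr[:, 0], 1e-6)   # disparity, guarded against zero
    Z = f * B / d
    X = (pl[:, 0] - cx) * Z / f
    Y = (pl[:, 1] - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)          # one XYZ row per matched pair
```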
Tick, David; Satici, Aykut C; Shen, Jinglin; Gans, Nicholas
2013-08-01
This paper presents a novel navigation and control system for autonomous mobile robots that includes path planning, localization, and control. A unique vision-based pose and velocity estimation scheme utilizing both the continuous and discrete forms of the Euclidean homography matrix is fused with inertial and optical encoder measurements to estimate the pose, orientation, and velocity of the robot and ensure accurate localization and control signals. A depth estimation system is integrated to overcome the loss of scale inherent in vision-based estimation. A path-following control system is introduced that is capable of guiding the robot along a designated curve. Stability analysis is provided for the control system, and experimental results demonstrate that the combined localization and control system performs with high accuracy.
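The discrete Euclidean homography mentioned in the abstract can be estimated and decomposed with off-the-shelf tools. A hedged OpenCV sketch follows; the RANSAC threshold and the candidate-selection step are assumptions, not the authors' scheme:

```python
import cv2
import numpy as np

def pose_from_homography(pts0, pts1, K):
    """Estimate the discrete homography between two views of a plane and
    decompose it into candidate (R, t, n) solutions. pts0/pts1: Nx2 float
    pixel coordinates; K: 3x3 camera intrinsic matrix."""
    H, inliers = cv2.findHomography(pts0, pts1, cv2.RANSAC, 3.0)
    # Up to four mathematically valid decompositions are returned; picking
    # the physical one (positive depths, expected plane normal) is left to
    # the application, as in any homography-based scheme.
    n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    return Rs, ts, normals
```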
Learning for autonomous navigation : extrapolating from underfoot to the far field
NASA Technical Reports Server (NTRS)
Matthies, Larry; Turmon, Michael; Howard, Andrew; Angelova, Anelia; Tang, Benyang; Mjolsness, Eric
2005-01-01
Autonomous off-road navigation of robotic ground vehicles has important applications on Earth and in space exploration. Progress in this domain has been retarded by the limited lookahead range of 3-D sensors and by the difficulty of preprogramming systems to understand the traversability of the wide variety of terrain they can encounter. Enabling robots to learn from experience may alleviate both of these problems. We define two paradigms for this, learning from 3-D geometry and learning from proprioception, and describe initial instantiations of them we have developed under DARPA and NASA programs. Field test results show promise for learning traversability of vegetated terrain, learning to extend the lookahead range of the vision system, and learning how slip varies with slope.
Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review.
Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F
2016-03-05
In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes the accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future work.
Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review
Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F.
2016-01-01
In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes the accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future work. PMID:26959030
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Editor)
1990-01-01
Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.
PRoViScout: a planetary scouting rover demonstrator
NASA Astrophysics Data System (ADS)
Paar, Gerhard; Woods, Mark; Gimkiewicz, Christiane; Labrosse, Frédéric; Medina, Alberto; Tyler, Laurence; Barnes, David P.; Fritz, Gerald; Kapellos, Konstantinos
2012-01-01
Mobile systems exploring planetary surfaces in the future will require more autonomy than today. The EU FP7-SPACE project PRoViScout (2010-2012) establishes the building blocks of such autonomous exploration systems in terms of robotic vision by a decision-based combination of navigation and scientific target selection, and integrates them into a framework ready for, and exposed to, field demonstration. The PRoViScout on-board system consists of mission management components such as an Executive, a Mars Mission On-Board Planner and Scheduler, a Science Assessment Module, and Navigation & Vision Processing modules. The platform hardware consists of the rover with the sensors and pointing devices. We report on the major building blocks and their functions and interfaces, with emphasis on the computer vision parts such as image acquisition (using a novel zoomed 3D-Time-of-Flight & RGB camera), mapping from 3D-TOF data, panoramic image & stereo reconstruction, hazard and slope maps, visual odometry, and the recognition of potentially scientifically interesting targets.
Integrated navigation, flight guidance, and synthetic vision system for low-level flight
NASA Astrophysics Data System (ADS)
Mehler, Felix E.
2000-06-01
Future military transport aircraft will require a new approach to the avionics suite to fulfill an ever-changing variety of missions. The most demanding phases of these missions are typically the low-level flight segments, including tactical terrain following/avoidance, payload drop, and/or on-board autonomous landing at forward operating strips without ground-based infrastructure. As a consequence, individual components and systems must become more integrated to offer a higher degree of reliability, integrity, flexibility and autonomy over existing systems while reducing crew workload. The integration of digital terrain data not only introduces synthetic vision into the cockpit, but also enhances navigation and guidance capabilities. At DaimlerChrysler Aerospace AG Military Aircraft Division (Dasa-M), an integrated navigation, flight guidance and synthetic vision system, based on digital terrain data, has been developed to fulfill the requirements of the Future Transport Aircraft (FTA). The fusion of three independent navigation sensors provides a more reliable and precise solution for both the 4D flight guidance and the display components, which comprise a Head-up and a Head-down Display with synthetic vision. This paper presents the system, its integration into the DLR's VFW 614 Advanced Technology Testing Aircraft System (ATTAS), and the results of the flight-test campaign.
NASA Technical Reports Server (NTRS)
Kearney, Lara
2004-01-01
In January 2004, the President announced a new Vision for Space Exploration. NASA's Office of Exploration Systems has identified Extravehicular Activity (EVA) as a critical capability for supporting the Vision for Space Exploration. EVA is required for all phases of the Vision, both in-space and planetary. Supporting the human outside the protective environment of the vehicle or habitat and allowing him/her to perform efficient and effective work requires an integrated EVA "System of systems." The EVA System includes EVA suits, airlocks, tools and mobility aids, and human rovers. At the core of the EVA System is the highly technical EVA suit, which is comprised mainly of a life support system and a pressure/environmental protection garment. The EVA suit, in essence, is a miniature spacecraft, which combines together many different sub-systems such as life support, power, communications, avionics, robotics, pressure systems and thermal systems, into a single autonomous unit. Development of a new EVA suit requires technology advancements similar to those required in the development of a new space vehicle. A majority of the technologies necessary to develop advanced EVA systems are currently at a low Technology Readiness Level of 1-3. This is particularly true for the long-pole technologies of the life support system.
Enabling Autonomous Navigation for Affordable Scooters.
Liu, Kaikai; Mulky, Rajathswaroop
2018-06-05
Despite the technical success of existing assistive technologies, for example, electric wheelchairs and scooters, they are still far from effective enough in helping those in need navigate to their destinations in a hassle-free manner. In this paper, we propose to improve the safety and autonomy of navigation by designing a cutting-edge autonomous scooter, thus allowing people with mobility challenges to ambulate independently and safely in possibly unfamiliar surroundings. We focus on indoor navigation scenarios for the autonomous scooter where the current location, maps, and nearby obstacles are unknown. To achieve semi-LiDAR functionality, we leverage gyro-based pose data to compensate the laser motion in real time and create synthetic mapping of simple environments with regular shapes and deep hallways. Laser range finders are suitable for long ranges with limited resolution. Stereo vision, on the other hand, provides 3D structural data of nearby complex objects. To achieve simultaneous fine-grained resolution and long range coverage in the mapping of cluttered and complex environments, we dynamically fuse the measurements from the stereo vision camera system, the synthetic laser scanner, and the LiDAR. We propose solutions to self-correct errors in data fusion and create a hybrid map to assist the scooter in achieving collision-free navigation in an indoor environment.
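One simple way to realize the conservative multi-sensor fusion this abstract describes is to keep, per bearing bin, the nearest return among the stereo-derived ranges, the synthetic laser scan, and the LiDAR scan. A sketch under that assumption (the common-bin representation and names are illustrative, not the authors' fusion scheme):

```python
import numpy as np

def fuse_scans(*scans):
    """Each scan: ranges indexed by a common bearing bin, np.inf where the
    sensor saw nothing. Keeping the minimum per bin never understates an
    obstacle, which is the safe choice for collision-free navigation."""
    return np.min(np.vstack(scans), axis=0)

# e.g. fused = fuse_scans(stereo_ranges, synthetic_laser_ranges, lidar_ranges)
```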
Knowledge/geometry-based Mobile Autonomous Robot Simulator (KMARS)
NASA Technical Reports Server (NTRS)
Cheng, Linfu; Mckendrick, John D.; Liu, Jeffrey
1990-01-01
Ongoing applied research is focused on developing guidance systems for robot vehicles. Problems facing the basic research needed to support this development (e.g., scene understanding, real-time vision processing, etc.) are major impediments to progress. Due to the complexity and the unpredictable nature of a vehicle's area of operation, more advanced vehicle control systems must be able to learn about obstacles within the range of their sensors. A better understanding of the basic exploration process is needed to provide critical support to developers of both sensor systems and intelligent control systems which can be used in a wide spectrum of autonomous vehicles. Elcee Computek, Inc. has been working under contract to the Flight Dynamics Laboratory, Wright Research and Development Center, Wright-Patterson AFB, Ohio to develop a Knowledge/Geometry-based Mobile Autonomous Robot Simulator (KMARS). KMARS has two parts: a geometry base and a knowledge base. The knowledge base part of the system employs the expert-system shell CLIPS ('C' Language Integrated Production System) and the rules needed to control both the vehicle's use of an obstacle-detecting sensor and the overall exploration process. The initial project phase has focused on the simulation of a point robot vehicle operating in a 2D environment.
Autonomous unmanned air vehicles (UAV) techniques
NASA Astrophysics Data System (ADS)
Hsu, Ming-Kai; Lee, Ting N.
2007-04-01
UAVs (Unmanned Air Vehicles) have great potential in various civilian applications, such as oil pipeline surveillance, precision farming, forest fire fighting, search and rescue, border patrol, etc. The related UAV industries can generate billions of dollars each year. However, the roadblock to adopting UAVs is that their operation conflicts with FAA (Federal Aviation Administration) and ATC (Air Traffic Control) regulations. In this paper, we review the latest technologies and research on UAV navigation and obstacle avoidance. We propose a system design of Jittering Mosaic Image Processing (JMIP) with stereo vision and optical flow to fulfill the functionalities of autonomous UAVs.
A Future Vision for Remotely Piloted Aircraft: Leveraging Interoperability and Networked Operations
2013-06-21
over the next 25 years Balances the effects envisioned in the USAF UAS Flight Plan with the reality of constrained resources and ambitious...theater-level unmanned systems must detect, avoid, or counter threats – operating from permissive to highly contested access in all weather...Rapid Reaction Group II/III SUAS Unit Light Footprint, Low Cost ISR Option Networked Autonomous C2 System Air-Launched SUAS Common
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.
1991-01-01
Artificial intelligence concepts are applied to robotics. Artificial neural networks, expert systems, and laser imaging techniques for autonomous space robots are being studied. A computer graphics laser range finder simulator developed by Wu has been used by Weiland and Norwood to study the use of artificial neural networks for path planning and obstacle avoidance. Interest is expressed in applications of CLIPS, NETS, and Fuzzy Control, which are applied to robot navigation.
Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio
2016-01-01
Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as those relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) by exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility of exploiting magnetic- and inertial-independent accurate attitude information. PMID:27999318
A design approach for small vision-based autonomous vehicles
NASA Astrophysics Data System (ADS)
Edwards, Barrett B.; Fife, Wade S.; Archibald, James K.; Lee, Dah-Jye; Wilde, Doran K.
2006-10-01
This paper describes the design of a small autonomous vehicle based on the Helios computing platform, a custom FPGA-based board capable of supporting on-board vision. Target applications for the Helios computing platform are those that require lightweight equipment and low power consumption. To demonstrate the capabilities of FPGAs in real-time control of autonomous vehicles, a 16 inch long R/C monster truck was outfitted with a Helios board. The platform provided by such a small vehicle is ideal for testing and development. The proof of concept application for this autonomous vehicle was a timed race through an environment with obstacles. Given the size restrictions of the vehicle and its operating environment, the only feasible on-board sensor is a small CMOS camera. The single video feed is therefore the only source of information from the surrounding environment. The image is then segmented and processed by custom logic in the FPGA that also controls direction and speed of the vehicle based on visual input.
A simple approach to a vision-guided unmanned vehicle
NASA Astrophysics Data System (ADS)
Archibald, Christopher; Millar, Evan; Anderson, Jon D.; Archibald, James K.; Lee, Dah-Jye
2005-10-01
This paper describes the design and implementation of a vision-guided autonomous vehicle that represented BYU in the 2005 Intelligent Ground Vehicle Competition (IGVC), in which autonomous vehicles navigate a course marked with white lines while avoiding obstacles consisting of orange construction barrels, white buckets, and potholes. Our project began in the context of a senior capstone course in which multi-disciplinary teams of five students were responsible for the design, construction, and programming of their own robots. Each team received a computer motherboard, a camera, and a small budget for the purchase of additional hardware, including a chassis and motors. The resource constraints resulted in a simple vision-based design that processes the sequence of images from the single camera to determine motor controls. Color segmentation separates white and orange from each image, and then the segmented image is examined using a 10x10 grid system, effectively creating a low-resolution picture for each of the two colors. Depending on its position, each filled grid square influences the selection of an appropriate turn magnitude. Motor commands determined from the white and orange images are then combined to yield the final motion command for each video frame. We describe the complete algorithm and the robot hardware, and we present results that show the overall effectiveness of our control approach.
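The pipeline described above, color segmentation followed by a coarse 10x10 grid whose filled squares vote on a turn magnitude, can be sketched in a few lines. The thresholds and weighting below are illustrative guesses, not the team's tuned values:

```python
import cv2
import numpy as np

def turn_command(bgr, lower, upper, grid=10):
    """Segment one color band, reduce it to a grid x grid occupancy picture,
    and let filled squares vote on a turn magnitude."""
    mask = cv2.inRange(bgr, lower, upper)                  # binary color mask
    cells = cv2.resize(mask, (grid, grid),
                       interpolation=cv2.INTER_AREA) > 0   # coarse grid
    cols = np.arange(grid) - (grid - 1) / 2.0              # left < 0 < right
    rows = np.linspace(0.0, 1.0, grid)                     # nearer rows weigh more
    votes = (cells * rows[:, None] * cols[None, :]).sum()
    return -float(votes)                                   # steer away from filled side

# White and orange commands would then be combined into the final motor command.
```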
Libration Point Navigation Concepts Supporting the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Folta, David C.; Moreau, Michael C.; Quinn, David A.
2004-01-01
This work examines the autonomous navigation accuracy achievable for a lunar exploration trajectory from a translunar libration point lunar navigation relay satellite, augmented by signals from the Global Positioning System (GPS). We also provide a brief analysis comparing the libration point relay to lunar orbit relay architectures, and discuss some issues of GPS usage for cis-lunar trajectories.
Toward detection of marine vehicles on horizon from buoy camera
NASA Astrophysics Data System (ADS)
Fefilatyev, Sergiy; Goldgof, Dmitry B.; Langebrake, Lawrence
2007-10-01
This paper presents a new technique for automatic detection of marine vehicles in the open sea from a buoy camera system using a computer vision approach. Users of such a system include border guards, the military, port safety and flow management, and sanctuary protection personnel. The system is intended to work autonomously, taking images of the surrounding ocean surface and analyzing them for the presence of marine vehicles. The goal of the system is to detect an approximate window around a ship and prepare the small image for transmission and human evaluation. The proposed computer vision-based algorithm combines a horizon detection method with edge detection and post-processing. A dataset of 100 images is used to evaluate the performance of the proposed technique. We discuss promising results of ship detection and suggest necessary improvements for achieving better performance.
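A hedged sketch of the horizon-detection front end is given below; the paper does not specify its algorithm beyond combining horizon detection with edge detection and post-processing, so the Canny/Hough parameters here are assumptions:

```python
import cv2
import numpy as np

def find_horizon(gray):
    """Return (rho, theta) of the strongest near-horizontal Hough line,
    or None if no plausible horizon is found."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    if lines is None:
        return None
    for rho, theta in lines[:, 0]:          # strongest lines come first
        if abs(theta - np.pi / 2) < np.radians(10):   # near-horizontal only
            return float(rho), float(theta)
    return None
```

Candidate ship windows would then be searched along and just above the detected line.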
A survey of autonomous vision-based See and Avoid for Unmanned Aircraft Systems
NASA Astrophysics Data System (ADS)
Mcfadyen, Aaron; Mejias, Luis
2016-01-01
This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.
Homeostatic study of the effects of sportswear color on the contest outcome
NASA Astrophysics Data System (ADS)
Yuan, Jian-Qin; Liu, Timon Cheng-Yi; Wu, Ren-Le; Ruan, Chang-Xiong; He, Li-Mei; Liu, Song-Hao
2008-12-01
Sportswear color has effects on contest outcomes, which have previously been explained from psychological and perceptual viewpoints. In this paper, the question is studied by integrating the homeostatic theory of exercise training with an autonomic nervous model of color vision. It was found that the effects of sportswear color on the contest outcome depend on autonomic nervous homeostasis (ANH). Colors can be classified into hot colors such as red, orange and yellow, and cold colors such as green, blue and violet. If the athletes are in ANH, sportswear color has no effect on the contest outcome. If the autonomic nervous system is far from ANH due to exercise-induced fatigue, wearing a cold color confers no advantage in cold-hot matches, and wearing white confers no advantage in white-color matches.
Object positioning in storages of robotized workcells using LabVIEW Vision
NASA Astrophysics Data System (ADS)
Hryniewicz, P.; Banaś, W.; Sękala, A.; Gwiazda, A.; Foit, K.; Kost, G.
2015-11-01
During the manufacturing process, each performed task is previously developed and adapted to the conditions and possibilities of the manufacturing plant. The production process is supervised by a team of specialists, because any downtime causes a great loss of time and hence financial loss. Sensors used in industry for tracking and supervising the various stages of a production process make it much easier to keep that process continuous. One group of sensors used in industrial applications is non-contact sensors, which include light barriers, optical sensors, rangefinders, vision systems, and ultrasonic sensors. Owing to the rapid development of electronics, vision systems have become widespread as the most flexible type of non-contact sensor. These systems consist of cameras, devices for data acquisition, devices for data analysis, and specialized software. Vision systems work well both as sensors that control the production process itself and as sensors that control product quality. The LabVIEW program, together with LabVIEW Vision and LabVIEW Builder, provides the environment for programming such process- and quality-control systems. The paper presents an application, developed in this environment, for positioning elements in a robotized workcell. Based on the geometric parameters of a manipulated object, or on a previously developed graphical pattern, it is possible to determine the position of particular manipulated elements. The application can work in automatic mode and in real time, cooperating with the robot control system, which makes the workcell more autonomous.
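The application itself is built in LabVIEW Vision, but as a rough open-source analogue, locating a part from a previously developed graphical pattern can be done with normalized cross-correlation template matching; the function name and score threshold below are ours, not the paper's:

```python
import cv2

def locate_part(scene_gray, pattern_gray, min_score=0.8):
    """Find the pattern in the workcell image by normalized cross-correlation;
    returns the part's center pixel and match score, or None if not found."""
    scores = cv2.matchTemplate(scene_gray, pattern_gray, cv2.TM_CCOEFF_NORMED)
    _, best, _, top_left = cv2.minMaxLoc(scores)
    if best < min_score:
        return None
    h, w = pattern_gray.shape
    return top_left[0] + w // 2, top_left[1] + h // 2, best
```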
Finding Multiple Lanes in Urban Road Networks with Vision and Lidar
2009-03-24
drawbacks. First, the cost of updating and maintaining millions of kilometers of roadway is prohibitive. Second, the danger of autonomous vehicles perceiving... autonomous vehicles. We propose that a data infrastructure is useful for topological information and sparse geometry, but reject relying upon it for dense...Throughout the race, approximately 50 human-driven and autonomous vehicles were simultaneously active, thus providing realistic traffic scenarios. Our most
Vision-based control for flight relative to dynamic environments
NASA Astrophysics Data System (ADS)
Causey, Ryan Scott
The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capability through more advanced systems useful for missions that require autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation address the issues inherent in vision-based state estimation of moving objects for a monocular camera configuration. The process consists of several stages involving image processing, such as detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. An approach to state estimation of moving targets is derived using a homography approach. The algorithm requires knowledge of the camera motion, a reference motion, and additional feature-point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform, for which the system dynamics are derived to relate vehicle states to image-plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation. Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature-point computations and the effects of uncertainty. The third simulation demonstrates open-loop estimation of a target ground vehicle in pursuit, whereas the fourth implements a homing control design for Autonomous Aerial Refueling (AAR) using target estimates as feedback.
Integrating Sensory/Actuation Systems in Agricultural Vehicles
Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo
2014-01-01
In recent years, there have been major advances in the development of new and more powerful perception systems for agriculture, such as computer-vision and global positioning systems. Due to these advances, the automation of agricultural tasks has received an important stimulus, especially in the area of selective weed control where high precision is essential for the proper use of resources and the implementation of more efficient treatments. Such autonomous agricultural systems incorporate and integrate perception systems for acquiring information from the environment, decision-making systems for interpreting and analyzing such information, and actuation systems that are responsible for performing the agricultural operations. These systems consist of different sensors, actuators, and computers that work synchronously in a specific architecture for the intended purpose. The main contribution of this paper is the selection, arrangement, integration, and synchronization of these systems to form a whole autonomous vehicle for agricultural applications. This type of vehicle has attracted growing interest, not only for researchers but also for manufacturers and farmers. The experimental results demonstrate the success and performance of the integrated system in guidance and weed control tasks in a maize field, indicating its utility and efficiency. The whole system is sufficiently flexible for use in other agricultural tasks with little effort and is another important contribution in the field of autonomous agricultural vehicles. PMID:24577525
Integrating sensory/actuation systems in agricultural vehicles.
Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo
2014-02-26
In recent years, there have been major advances in the development of new and more powerful perception systems for agriculture, such as computer-vision and global positioning systems. Due to these advances, the automation of agricultural tasks has received an important stimulus, especially in the area of selective weed control where high precision is essential for the proper use of resources and the implementation of more efficient treatments. Such autonomous agricultural systems incorporate and integrate perception systems for acquiring information from the environment, decision-making systems for interpreting and analyzing such information, and actuation systems that are responsible for performing the agricultural operations. These systems consist of different sensors, actuators, and computers that work synchronously in a specific architecture for the intended purpose. The main contribution of this paper is the selection, arrangement, integration, and synchronization of these systems to form a whole autonomous vehicle for agricultural applications. This type of vehicle has attracted growing interest, not only for researchers but also for manufacturers and farmers. The experimental results demonstrate the success and performance of the integrated system in guidance and weed control tasks in a maize field, indicating its utility and efficiency. The whole system is sufficiently flexible for use in other agricultural tasks with little effort and is another important contribution in the field of autonomous agricultural vehicles.
NASA Astrophysics Data System (ADS)
Black, Randy; Bai, Haowei; Michalicek, Andrew; Shelton, Blaine; Villela, Mark
2008-01-01
Currently, autonomy in space applications is limited by a variety of technology gaps. Innovative application of wireless technology and avionics architectural principles drawn from the Orion crew exploration vehicle provides solutions for several of these gaps. The Vision for Space Exploration envisions extensive use of autonomous systems. Economic realities preclude continuing the level of operator support currently required by autonomous systems in space. In order to decrease the number of operators, more autonomy must be afforded to automated systems. However, certification authorities have been notoriously reluctant to certify autonomous software in the presence of humans or when costly missions may be jeopardized. The Orion avionics architecture, drawn from advanced commercial aircraft avionics, is based upon several architectural principles, including partitioning in software. Robust software partitioning provides "brick wall" separation between software applications executing on a single processor, along with controlled data movement between applications. Taking advantage of these attributes, non-deterministic applications can be placed in one partition and a "Safety" application created in a separate partition. This Safety partition can track the position of astronauts or critical equipment and prevent any unsafe command from executing; only the Safety partition need be certified to a human-rated level. As a proof-of-concept demonstration, Honeywell has teamed with the Ultra WideBand (UWB) Working Group at NASA Johnson Space Center to provide tracking of humans, autonomous systems, and critical equipment. Using UWB, the NASA team can determine positioning to a resolution of less than one inch, allowing a Safety partition to halt operation of autonomous systems in the event that an unplanned collision is imminent. Another challenge facing autonomous systems is the coordination of multiple autonomous agents. Current approaches address the issue as one of networking and coordination of multiple independent units, each with its own mission. As a proof of concept, Honeywell is developing and testing various algorithms that lead to a deterministic, fault-tolerant, reliable wireless backplane. Just as advanced avionics systems control several subsystems, actuators, sensors, displays, etc., a single "master" autonomous agent (or base station computer) could control multiple autonomous systems. The problem is then simplified to controlling a flexible body consisting of several sensors and actuators, rather than coordinating multiple independent units. By filling technology gaps associated with space-based autonomous systems, wireless technology and Orion architectural principles provide the means for decreasing operational costs and simplifying problems associated with collaboration of multiple autonomous systems.
Human-Robot Control Strategies for the NASA/DARPA Robonaut
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.
2003-01-01
The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.
NASA Astrophysics Data System (ADS)
Altug, Erdinc
Our work proposes a vision-based stabilization and output tracking control method for a model helicopter. This is part of our effort to produce a rotorcraft-based autonomous Unmanned Aerial Vehicle (UAV). Due to the desired maneuvering ability, a four-rotor helicopter has been chosen as the testbed. In previous research on flying vehicles, vision is usually used as a secondary sensor. Unlike previous research, our goal is to use visual feedback as the main sensor, which is responsible not only for detecting where the ground objects are but also for helicopter localization. A novel two-camera method has been introduced for estimating the full six-degrees-of-freedom (DOF) pose of the helicopter. This two-camera system consists of a pan-tilt ground camera and an onboard camera. The pose estimation algorithm is compared through simulation to other methods, such as the four-point and stereo methods, and is shown to be less sensitive to feature detection errors. Helicopters are highly unstable flying vehicles; although this is good for agility, it makes control harder. To build an autonomous helicopter, two methods of control are studied: one using a series of mode-based, feedback linearizing controllers and the other using a back-stepping control law. Various simulations with 2D and 3D models demonstrate the implementation of these controllers. We also show global convergence of the 3D quadrotor controller even with large calibration errors or in the presence of large errors on the image plane. Finally, we present initial flight experiments where the proposed pose estimation algorithm and non-linear control techniques have been implemented on a remote-controlled helicopter. The helicopter was restricted by a tether to vertical and yaw motions and limited x and y translations.
Vision-based mapping with cooperative robots
NASA Astrophysics Data System (ADS)
Little, James J.; Jennings, Cullen; Murray, Don
1998-10-01
Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
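Occupancy grid maps of the kind these robots share are conventionally updated in log-odds form, which keeps the conservative fusion of repeated stereo readings numerically simple. A minimal sketch of that update rule (the constants are illustrative, not from the paper):

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4       # log-odds increments for hit / pass-through
L_MIN, L_MAX = -4.0, 4.0         # clamp so cells stay revisable

def update(logodds, hit):
    """One cell update: add evidence, clamp, stay in log-odds form."""
    return float(np.clip(logodds + (L_OCC if hit else L_FREE), L_MIN, L_MAX))

def probability(logodds):
    return 1.0 - 1.0 / (1.0 + np.exp(logodds))   # convert back when queried
```

Staying in log-odds makes each robot's map update a cheap addition, and merging a returning robot's observations into the shared map is again just summing evidence.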
Knowledge-Based Vision Techniques for the Autonomous Land Vehicle Program
1991-10-01
Knowledge System The CKS is an object-oriented knowledge database that was originally designed to serve as the central information manager for a... "Representation Space: An Approach to the Integration of Visual Information," Proc. of DARPA Image Understanding Workshop, Palo Alto, CA, pp. 263-272, May 1989... Strat, "Information Management in a Sensor-Based Autonomous System," Proc. DARPA Image Understanding Workshop, University of Southern CA, Vol. 1, pp
McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.
2014-01-01
To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914
McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E
2014-07-01
To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs.
Autonomous Intersection Management
2009-12-01
is irrelevant. Fortunately, researchers are attacking this problem with many techniques. In 2004, Honda introduced an intelligent night vision system...or less a solved problem. The problem itself is not too difficult: there are no pedestrians or cyclists and vehicles travel in the same direction at...organized according to the following subgoals, each of which is a contribution of the thesis. 1. Problem definition First, this thesis contributes a
Self-localization for an autonomous mobile robot based on an omni-directional vision system
NASA Astrophysics Data System (ADS)
Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin
2013-12-01
In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms featured in the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems based exclusively on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm assesses the corners of field lines by using an omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped image, enhancing feature extraction. The process is described as follows. First, radial scan-lines were used to process the omni-directional images, reducing the computational load and improving system efficiency. The lines were radially arranged around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is distorted, which makes it difficult to recognize the position of the robot, so an image transformation was required to implement self-localization. Second, we transformed the omni-directional images into panoramic images, through which the distortion of the white lines can be fixed. The interest points that form the corners of the landmark were then located using the features from accelerated segment test (FAST) algorithm, a high-speed feature detector for real-time frame-rate applications in which a circle of sixteen pixels surrounding each corner candidate is considered. Finally, the dual-circle, trilateration, and cross-ratio projection algorithms were implemented to choose among the corners obtained from the FAST algorithm and localize the position of the robot. The results demonstrate that the proposed algorithm is accurate, exhibiting a 2-cm position error on a soccer field measuring 600 cm x 400 cm.
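The unwrapping step, sampling the distorted omni-directional image along radial scan-lines about the mirror center to build a panoramic strip, can be sketched with a single remap; the center, radii, and strip width below are assumed parameters, not the paper's values:

```python
import cv2
import numpy as np

def unwrap(omni, cx, cy, r_min, r_max, width=720):
    """Sample the omni image along radial scan-lines about (cx, cy) to build
    a (r_max - r_min) x width panoramic strip."""
    thetas = np.linspace(0.0, 2.0 * np.pi, width, endpoint=False)
    radii = np.arange(r_min, r_max, dtype=np.float32)
    map_x = (cx + np.outer(radii, np.cos(thetas))).astype(np.float32)
    map_y = (cy + np.outer(radii, np.sin(thetas))).astype(np.float32)
    return cv2.remap(omni, map_x, map_y, cv2.INTER_LINEAR)

# Corner extraction on the unwrapped strip, as the abstract describes:
# corners = cv2.FastFeatureDetector_create().detect(unwrapped)
```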
NASA Technical Reports Server (NTRS)
Chen, Alexander Y.
1990-01-01
The Scientific Research Associates advanced robotic system (SRAARS) is an intelligent robotic system with autonomous learning capability in geometric reasoning. The system is equipped with one global intelligence center (GIC) and eight local intelligence centers (LICs). It controls sixteen links with fourteen active joints, which constitute two articulated arms, an extensible lower body, a vision system with two CCD cameras, and a mobile base. The on-board knowledge-based system supports the learning controller with model representations of both the robot and the working environment. By consecutive verifying and planning procedures, hypothesis-and-test routines, and a learning-by-analogy paradigm, the system autonomously builds up its own understanding of the relationship between itself (i.e., the robot) and the focused environment for the purposes of collision avoidance, motion analysis, and object manipulation. The intelligence of SRAARS presents a valuable technical advantage for implementing robotic systems for space exploration and space station operations.
Probabilistic Tracking and Trajectory Planning for Autonomous Ground Vehicles in Urban Environments
2016-03-05
The aim of this research is to develop a unified theory for perception and planning in autonomous ground vehicles, with a specific focus on...a combination of experimentally collected vision data and Monte-Carlo simulations. Smoothing for improved perception and robustness in planning
Research on detection method of UAV obstruction based on binocular vision
NASA Astrophysics Data System (ADS)
Zhu, Xiongwei; Lei, Xusheng; Sui, Zhehao
2018-04-01
For autonomous obstacle positioning and ranging during UAV (unmanned aerial vehicle) flight, a system based on binocular vision is constructed. A three-stage image preprocessing method is proposed to address the noise and brightness differences in the captured images. The distance to the nearest obstacle is calculated using the disparity map generated by binocular vision. The contour of the obstacle is then extracted by post-processing the disparity map, and a color-based adaptive parameter adjustment algorithm is designed to extract obstacle contours automatically. Finally, safety-distance measurement and obstacle positioning during UAV flight are achieved. In a series of tests, the distance measurement error stayed within 2.24% over the measuring range from 5 m to 20 m.
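Nearest-obstacle ranging from a disparity map follows directly from Z = fB/d: the largest valid disparity corresponds to the closest point. A hedged sketch using a stock block matcher (the paper's three-stage preprocessing and contour extraction are not reproduced here; the matcher settings are assumptions):

```python
import cv2
import numpy as np

def nearest_obstacle(rect_left, rect_right, f, B):
    """Distance (meters) to the closest point in a rectified stereo pair:
    largest disparity -> smallest Z = f * B / d."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disp = sgbm.compute(rect_left, rect_right).astype(np.float32) / 16.0
    valid = disp > 1.0                     # drop invalid / far-field matches
    if not np.any(valid):
        return None
    return f * B / float(disp[valid].max())
```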
A computer architecture for intelligent machines
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, G. N.
1991-01-01
The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
Autonomous Navigation Error Propagation Assessment for Lunar Surface Mobility Applications
NASA Technical Reports Server (NTRS)
Welch, Bryan W.; Connolly, Joseph W.
2006-01-01
The NASA Vision for Space Exploration is focused on the return of astronauts to the Moon. While navigation systems were already proven in the Apollo missions to the Moon, the current exploration campaign will involve more extensive and extended missions requiring new concepts for lunar navigation. In this document, the results of an autonomous navigation error propagation assessment are provided. The analysis is intended to serve as the baseline error propagation analysis, to which Earth-based and Lunar-based radiometric data can be added in order to compare the different architecture schemes and quantify the benefits of an integrated approach for handling lunar surface mobility applications near the Lunar South Pole or on the Lunar Farside.
The influence of active vision on the exoskeleton of intelligent agents
NASA Astrophysics Data System (ADS)
Smith, Patrice; Terry, Theodore B.
2016-04-01
Chameleonization occurs when a self-learning autonomous mobile system's (SLAMR) active vision scans the surface on which it is perched, causing the exoskeleton to change colors and exhibit a chameleon effect. Intelligent agents able to adapt to their environments and exhibit key survivability characteristics would owe that ability largely to the use of active vision. Active vision would allow the intelligent agent to scan its environment and adapt as needed in order to avoid detection. The SLAMR system would have an exoskeleton that changes based on the surface on which it is perched; this is known as the "chameleon effect," not in the common sense of the term, but in the techno-bio-inspired meaning addressed in our previous paper. Active vision, utilizing stereoscopic color sensing functionality, would enable the intelligent agent to scan an object within close proximity, determine the color scheme, and match it, allowing the agent to blend with its environment. Through the use of its optical capabilities, the SLAMR system would be able to further determine its position, taking into account spatial and temporal correlation and the spatial frequency content of neighboring structures, further ensuring successful background blending. The complex visual tasks of identifying objects, using edge detection, image filtering, and feature extraction are essential for an intelligent agent to gain additional knowledge about its environmental surroundings.
Research and applications: Artificial intelligence
NASA Technical Reports Server (NTRS)
Raphael, B.; Duda, R. O.; Fikes, R. E.; Hart, P. E.; Nilsson, N. J.; Thorndyke, P. W.; Wilber, B. M.
1971-01-01
Research in the field of artificial intelligence is discussed. The focus of recent work has been the design, implementation, and integration of a completely new system for the control of a robot that plans, learns, and carries out tasks autonomously in a real laboratory environment. The computer implementation of low-level and intermediate-level actions; routines for automated vision; and the planning, generalization, and execution mechanisms are reported. A scenario that demonstrates the approximate capabilities of the current version of the entire robot system is presented.
2013-04-22
Following for Unmanned Aerial Vehicles Using L1 Adaptive Augmentation of Commercial Autopilots, Journal of Guidance, Control, and Dynamics, (3 2010): 0 ... Naira Hovakimyan. L1 Adaptive Controller for MIMO System with Unmatched Uncertainties Using Modified Piecewise Constant Adaptation Law, IEEE 51st ... This L1 adaptive control architecture uses data from the reference model
2013-10-29
... based on contextual information, 3) develop vision-based techniques for learning of contextual information, and detection and identification of ... that takes into account many possible contexts. The probability distributions of these contexts will be learned from existing databases on common sense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raj, Sunny; Jha, Sumit Kumar; Pullum, Laura L.
Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design test cases for such systems. In this report, we present our tool MAYA that uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.
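The MAYA tool itself is not reproduced here, but the kind of robustness probe it performs can be illustrated with OpenCV's stock HOG people detector: apply small perturbations to a frame and check whether the detection outcome changes. The image file, noise scale, and trial count below are assumptions for illustration.

    # Illustrative robustness probe in the spirit of perturbation testing.
    import cv2
    import numpy as np

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    image = cv2.imread("pedestrian.png")
    baseline, _ = hog.detectMultiScale(image, winStride=(8, 8))

    rng = np.random.default_rng(0)
    failures = 0
    for trial in range(100):
        # Small additive noise, visually near-identical to the original frame.
        noise = rng.normal(0.0, 2.0, image.shape)
        perturbed = np.clip(image.astype(np.float64) + noise, 0, 255).astype(np.uint8)
        rects, _ = hog.detectMultiScale(perturbed, winStride=(8, 8))
        if len(rects) != len(baseline):
            failures += 1

    print("detection count changed in %d/100 perturbed frames" % failures)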
Incidental Transient Cortical Blindness after Lung Resection.
Oncel, Murat; Sunam, Guven Sadi; Varoglu, Asuman Orhan; Karabagli, Hakan; Yildiran, Huseyin
2016-03-01
Transient vision loss after major surgical procedures is a rare clinical complication. The most common etiologies are cardiac, spinal, head, and neck surgeries. There has been no report of vision loss after lung resection. A 65-year-old man was admitted to our clinic with lung cancer. Resection was performed by right upper lobectomy with no complications. Cortical blindness developed 12 hours later in the postoperative period. Results from magnetic resonance imaging and diffusion-weighted investigations were normal. The neurologic examination was normal. The blood glucose level was 92 mg/dL and blood gas analysis showed a PO2 of 82 mm Hg. After 24 hours, the patient began to see and could count fingers, and his vision was fully restored within the following 72 hours. Autonomic dysfunction due to impaired microvascular structures in diabetes mellitus may induce posterior circulation dysfunction, even when the hemodynamic state is normal in the perioperative period. The physician must keep in mind that vision loss may occur after lung resection due to autonomic dysfunction, especially in older patients with diabetes mellitus.
Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation
2010-01-01
Background Dexterous prosthetic hands that were developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and a thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands. Methods The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: 1) the user triggers the system and controls the orientation of the hand; 2) a high-level controller automatically selects the grasp type and size; and 3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used the CyberHand, attached to the forearm, to grasp and transport 18 objects placed at two different distances. Results The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size were different from the optimal ones, but they were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only). Conclusions The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and size). The automatic control eases the burden on the user and, as a result, the user can concentrate on what he/she does, not on how he/she should do it. The tests showed that the performance of the controller was satisfactory and that the users were able to operate the system with minimal prior training.
Developing operation algorithms for vision subsystems in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Shikhman, M. V.; Shidlovskiy, S. V.
2018-05-01
The paper analyzes algorithms for selecting keypoints in the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector machine (SVM) method. The combination of these methods allows successful selection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
Working and Learning with Knowledge in the Lobes of a Humanoid's Mind
NASA Technical Reports Server (NTRS)
Ambrose, Robert; Savely, Robert; Bluethmann, William; Kortenkamp, David
2003-01-01
Humanoid class robots must have sufficient dexterity to assist people and work in an environment designed for human comfort and productivity. This dexterity, in particular the ability to use tools, requires a cognitive understanding of self and the world that exceeds contemporary robotics. Our hypothesis is that the sense-think-act paradigm that has proven so successful for autonomous robots is missing one or more key elements that will be needed for humanoids to meet their full potential as autonomous human assistants; this key ingredient is knowledge. The presented work includes experiments conducted on the Robonaut system, a NASA and Defense Advanced Research Projects Agency (DARPA) joint project, and includes collaborative efforts with a DARPA Mobile Autonomous Robot Software technical program team of researchers at NASA, MIT, USC, NRL, UMass, and Vanderbilt. The paper reports on results in the areas of human-robot interaction (human tracking, gesture recognition, natural language, supervised control), perception (stereo vision, object identification, object pose estimation), autonomous grasping (tactile sensing, grasp reflex, grasp stability), and learning (human instruction, task-level sequences, and sensorimotor association).
Strauven, Wanda
2018-01-01
This commentary on Edwin Carels' essay "Revisiting Tom Tom: Performative anamnesis and autonomous vision in Ken Jacobs' appropriations of Tom Tom the Piper's Son" broadens the media-archaeological framework in which Carels places his text. Notions such as Huhtamo's topos and Zielinski's "deep time" are brought into the discussion in order to point out the difficulty of seeing what there is to see and to question the position of the viewer in front of experimental films like Tom Tom the Piper's Son and its remakes.
Compact Autonomous Hemispheric Vision System
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.
2012-01-01
Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
Open-Loop Flight Testing of COBALT GN&C Technologies for Precise Soft Landing
NASA Technical Reports Server (NTRS)
Carson, John M., III; Amzajerdian, Farzin; Seubert, Carl R.; Restrepo, Carolina I.
2017-01-01
A terrestrial, open-loop (OL) flight test campaign of the NASA COBALT (CoOperative Blending of Autonomous Landing Technologies) platform was conducted onboard the Masten Xodiac suborbital rocket testbed, with support through the NASA Advanced Exploration Systems (AES), Game Changing Development (GCD), and Flight Opportunities (FO) Programs. The COBALT platform integrates NASA Guidance, Navigation and Control (GN&C) sensing technologies for autonomous, precise soft landing, including the Navigation Doppler Lidar (NDL) velocity and range sensor and the Lander Vision System (LVS) Terrain Relative Navigation (TRN) system. A specialized navigation filter running onboard COBALT fuses the NDL and LVS data in real time to produce a precise navigation solution that is independent of the Global Positioning System (GPS) and suitable for future, autonomous planetary landing systems. The OL campaign tested COBALT as a passive payload, with COBALT data collection and filter execution, but with the Xodiac vehicle Guidance and Control (G&C) loops closed on a Masten GPS-based navigation solution. The OL test was performed as a risk reduction activity in preparation for an upcoming 2017 closed-loop (CL) flight campaign in which Xodiac G&C will act on the COBALT navigation solution and the GPS-based navigation will serve only as a backup monitor.
NASA Astrophysics Data System (ADS)
Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.
2012-10-01
We present an autonomous system capable of performing security check routines. The surveillance machine, the Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation, and 3D reconstruction of its environment. Combining the computer vision algorithms with a robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of the RAIDER's side-mounted cameras to perform a 3D-reconstruction-from-monocular-vision technique that updates a 3D model to the most current state of the indoor environment. Using frames from the front-mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm will also be implemented, in which each detected person is assigned to a set of action classes chosen to classify ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment as well as to familiarize itself with regular faces and actions in order to distinguish potentially dangerous behavior. In this paper, we present the various algorithms, and their modifications, which when implemented on the RAIDER serve the purpose of indoor surveillance.
RoBlock: a prototype autonomous manufacturing cell
NASA Astrophysics Data System (ADS)
Baekdal, Lars K.; Balslev, Ivar; Eriksen, Rene D.; Jensen, Soren P.; Jorgensen, Bo N.; Kirstein, Brian; Kristensen, Bent B.; Olsen, Martin M.; Perram, John W.; Petersen, Henrik G.; Petersen, Morten L.; Ruhoff, Peter T.; Skjolstrup, Carl E.; Sorensen, Anders S.; Wagenaar, Jeroen M.
2000-10-01
RoBlock is the first phase of an internally financed project at the Institute aimed at building a system in which two industrial robots suspended from a gantry cooperate to perform a task specified by an external user; in this case, assembling an unstructured collection of colored wooden blocks into a specified 3D pattern. The blocks are identified and localized using computer vision and grasped with a suction cup mechanism. Future phases of the project will involve other processes, such as grasping and lifting, as well as other types of robot, such as autonomous vehicles or variable geometry trusses. Innovative features of the control software system include: the use of an advanced trajectory planning system that ensures collision avoidance based on a generalization of the method of artificial potential fields; the use of a generic model-based controller that learns the values of parameters, including static and kinetic friction, of a detailed mechanical model of itself by comparing actual with planned movements; the use of fast, flexible, and robust pattern recognition and 3D-interpretation strategies; and the integration of trajectory planning and control with the sensor systems in a distributed Java application running on a network of PCs attached to the individual physical components. In designing this first stage, the aim was to build in the minimum complexity necessary to make the system non-trivially autonomous and to minimize the technological risks. The aims of this project, which is planned to be operational during 2000, are as follows: to provide a platform for carrying out experimental research in multi-agent systems and autonomous manufacturing systems; to test the interdisciplinary cooperation architecture of the Maersk Institute, in which researchers in the fields of applied mathematics (modeling the physical world), software engineering (modeling the system), and sensor/actuator technology (relating the virtual and real worlds) collaborate with systems integrators to construct intelligent, autonomous systems; and to provide a showpiece demonstrator in the entrance hall of the Institute's new building.
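The trajectory planner above generalizes the method of artificial potential fields; the basic (non-generalized) method can be sketched in a few lines, with an attractive pull toward the goal and short-range repulsion from obstacles. The gains, radii, and geometry below are illustrative, not the Institute's parameters.

    # Minimal sketch of trajectory planning with artificial potential fields:
    # 2D point robot, one goal, circular obstacles; all values illustrative.
    import numpy as np

    GOAL = np.array([9.0, 9.0])
    OBSTACLES = [np.array([4.0, 4.5]), np.array([6.0, 7.0])]
    K_ATT, K_REP, INFLUENCE = 1.0, 2.0, 2.0  # gains and obstacle influence radius

    def force(p):
        f = K_ATT * (GOAL - p)                      # attractive pull toward goal
        for obs in OBSTACLES:
            d = np.linalg.norm(p - obs)
            if d < INFLUENCE:                        # repulsion only when close
                f += K_REP * (1.0/d - 1.0/INFLUENCE) / d**2 * (p - obs) / d
        return f

    p = np.array([0.0, 0.0])
    path = [p.copy()]
    for _ in range(500):                             # small gradient-descent steps
        p = p + 0.05 * force(p)
        path.append(p.copy())
        if np.linalg.norm(p - GOAL) < 0.1:
            break
    print("reached", path[-1], "in", len(path), "steps")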
Machine intelligence and autonomy for aerospace systems
NASA Technical Reports Server (NTRS)
Heer, Ewald (Editor); Lum, Henry (Editor)
1988-01-01
The present volume discusses progress toward intelligent robot systems in aerospace applications, NASA Space Program automation and robotics efforts, the supervisory control of telerobotics in space, machine intelligence and crew/vehicle interfaces, expert-system terms and building tools, and knowledge-acquisition for autonomous systems. Also discussed are methods for validation of knowledge-based systems, a design methodology for knowledge-based management systems, knowledge-based simulation for aerospace systems, knowledge-based diagnosis, planning and scheduling methods in AI, the treatment of uncertainty in AI, vision-sensing techniques in aerospace applications, image-understanding techniques, tactile sensing for robots, distributed sensor integration, and the control of articulated and deformable space structures.
NASA Technical Reports Server (NTRS)
Clinton, R. G., Jr.; Szofran, Frank; Bassler, Julie A.; Schlagheck, Ronald A.; Cook, Mary Beth
2005-01-01
The Microgravity Materials Science Program established a strong research capability through partnerships between NASA and the scientific research community. With the announcement of the vision for space exploration, additional emphasis in strategic materials science areas was necessary. The President's Commission recognized that achieving its exploration objectives would require significant technical innovation, research, and development in focal areas defined as "enabling technologies." Among the 17 enabling technologies identified for initial focus were: advanced structures; advanced power and propulsion; closed-loop life support and habitability; extravehicular activity systems; autonomous systems and robotics; scientific data collection and analysis; biomedical risk mitigation; and planetary in situ resource utilization. Mission success may depend upon use of local resources to fabricate a replacement part to repair a critical system. Future propulsion systems will require materials with a wide range of mechanical, thermophysical, and thermochemical properties, many of them well beyond capabilities of today's materials systems. Materials challenges have also been identified by experts working to develop advanced life support systems. In responding to the vision for space exploration, the Microgravity Materials Science Program aggressively transformed its research portfolio and focused materials science areas of emphasis to include space radiation shielding; in situ fabrication and repair for life support systems; in situ resource utilization for life support consumables; and advanced materials for exploration, including materials science for space propulsion systems and for life support systems. The purpose of this paper is to inform the scientific community of these new research directions and opportunities to utilize their materials science expertise and capabilities to support the vision for space exploration.
Fundamentals and advances in the development of remote welding fabrication systems
NASA Technical Reports Server (NTRS)
Agapakis, J. E.; Masubuchi, K.; Von Alt, C.
1986-01-01
Operational and man-machine issues for welding underwater, in outer space, and at other remote sites are investigated, and recent process developments are described. Probable remote welding missions are classified, and the essential characteristics of fundamental remote welding tasks are analyzed. Various possible operational modes for remote welding fabrication are identified, and appropriate roles for humans and machines are suggested. Human operator performance in remote welding fabrication tasks is discussed, and recent advances in the development of remote welding systems are described, including packaged welding systems, stud welding systems, remotely operated welding systems, and vision-aided remote robotic welding and autonomous welding systems.
Immune systems are not just for making you feel better: they are for controlling autonomous robots
NASA Astrophysics Data System (ADS)
Rosenblum, Mark
2005-05-01
The typical algorithm for robot autonomous navigation in off-road complex environments involves building a 3D map of the robot's surrounding environment using a 3D sensing modality such as stereo vision or active laser scanning, and generating an instantaneous plan to navigate around hazards. Although there has been steady progress using these methods, these systems suffer from several limitations that cannot be overcome with 3D sensing and planning alone. Geometric sensing alone has no ability to distinguish between compressible and non-compressible materials. As a result, these systems have difficulty in heavily vegetated environments and require sensitivity adjustments across different terrain types. On the planning side, these systems have no ability to learn from their mistakes and avoid problematic environmental situations on subsequent encounters. We have implemented an adaptive terrain classification system based on the Artificial Immune System (AIS) computational model, loosely inspired by the biological immune system, that combines various forms of imaging sensor inputs to produce a "feature-labeled" image of the scene categorizing areas as benign or detrimental for autonomous robot navigation. Because of the qualities of the AIS computational model, the resulting system will be able to learn and adapt on its own through interaction with the environment by modifying its interpretation of the sensor data. The feature-labeled results from the AIS analysis are inserted into a map and can then be used by a planner to generate a safe route to a goal point. The coupling of diverse visual cues with the malleable AIS computational model will lead to autonomous robotic ground vehicles that require less human intervention for deployment in novel environments and offer more robust operation as a result of the system's ability to improve its performance through interaction with the environment.
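The report does not spell out which AIS mechanism is used; one standard choice in the AIS literature is negative selection, sketched below on toy terrain feature vectors. The feature ranges, detector count, and match radius are assumptions purely for illustration.

    # Toy sketch of negative selection, one standard AIS algorithm, used here
    # to label terrain feature vectors; the actual system's features and
    # matching rule are not public.
    import numpy as np

    rng = np.random.default_rng(1)
    SELF = rng.uniform(0.0, 0.4, size=(200, 3))   # "benign" training features (assumed)
    RADIUS = 0.15                                  # detector match radius (assumed)

    # Generate detectors that do NOT match any self (benign) sample.
    detectors = []
    while len(detectors) < 50:
        cand = rng.uniform(0.0, 1.0, size=3)
        if np.linalg.norm(SELF - cand, axis=1).min() > RADIUS:
            detectors.append(cand)
    detectors = np.array(detectors)

    def label(feature):
        # Detrimental if any detector fires on the feature vector.
        hit = np.linalg.norm(detectors - feature, axis=1).min() < RADIUS
        return "detrimental" if hit else "benign"

    print(label(np.array([0.2, 0.2, 0.2])))  # inside the self region: benign
    print(label(np.array([0.9, 0.8, 0.9])))  # far from self: likely detrimental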
A Space Station robot walker and its shared control software
NASA Technical Reports Server (NTRS)
Xu, Yangsheng; Brown, Ben; Aoki, Shigeru; Yoshida, Tetsuji
1994-01-01
In this paper, we first briefly overview the update of the self-mobile space manipulator (SMSM) configuration and testbed. The new robot is capable of projecting cameras anywhere interior or exterior of the Space Station Freedom (SSF), and will be an ideal tool for inspecting connectors, structures, and other facilities on SSF. Experiments have been performed under two gravity compensation systems and a full-scale model of a segment of SSF. This paper presents a real-time shared control architecture that enables the robot to coordinate autonomous locomotion and teleoperation input for reliable walking on SSF. Autonomous locomotion can be executed based on a CAD model and off-line trajectory planning, or can be guided by a vision system with neural network identification. Teleoperation control can be specified by a real-time graphical interface and a free-flying hand controller. SMSM will be a valuable assistant for astronauts in inspection and other EVA missions.
NASA Technical Reports Server (NTRS)
Allen, B. Danette; Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Crisp, Vicki K.
2015-01-01
NASA aeronautics research has made decades of contributions to aviation. Both aircraft and air traffic management (ATM) systems in use today contain NASA-developed and NASA sponsored technologies that improve safety and efficiency. Recent innovations in robotics and autonomy for automobiles and unmanned systems point to a future with increased personal mobility and access to transportation, including aviation. Automation and autonomous operations will transform the way we move people and goods. Achieving this mobility will require safe, robust, reliable operations for both the vehicle and the airspace and challenges to this inevitable future are being addressed now in government labs, universities, and industry. These challenges are the focus of NASA Langley Research Center's Autonomy Incubator whose R&D portfolio includes mission planning, trajectory and path planning, object detection and avoidance, object classification, sensor fusion, controls, machine learning, computer vision, human-machine teaming, geo-containment, open architecture design and development, as well as the test and evaluation environment that will be critical to prove system reliability and support certification. Safe autonomous operations will be enabled via onboard sensing and perception systems in both data-rich and data-deprived environments. Applied autonomy will enable safety, efficiency and unprecedented mobility as people and goods take to the skies tomorrow just as we do on the road today.
Multiple-modality program for standoff detection of roadside hazards
NASA Astrophysics Data System (ADS)
Williams, Kathryn; Middleton, Seth; Close, Ryan; Luke, Robert H.; Suri, Rajiv
2016-05-01
The U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is executing a program to assess the performance of a variety of sensor modalities for standoff detection of roadside explosive hazards. The program objective is to identify an optimal sensor or combination of fused sensors to incorporate with autonomous detection algorithms into a system of systems for use in future route clearance operations. This paper provides an overview of the program, including a description of the sensors under consideration, sensor test events, and ongoing data analysis.
NASA Technical Reports Server (NTRS)
Schoeberl, Mark R.
2004-01-01
The Sensor Web concept emerged as the number of Earth Science Satellites began to increase in the recent years. The idea, part of a vision for the future of earth science, was that the sensor systems would be linked in an active way to provide improved forecast capability. This means that a system that is nearly autonomous would need to be developed to allow the satellites to re-target and deploy assets for particular phenomena or provide on board processing for real time data. This talk will describe several elements of the sensor web.
Autonomous Assembly of Modular Structures in Space and on Extraterrestrial Locations
NASA Astrophysics Data System (ADS)
Alhorn, Dean C.
2005-02-01
The new U.S. National Vision for Space Exploration requires many new enabling technologies to accomplish the goals of space commercialization and returning humans to the Moon and extraterrestrial environments. Traditionally, flight elements are complete sub-systems requiring humans to complete the integration and assembly. These bulky structures also require the use of heavy launch vehicles to send the units to a desired location. This philosophy necessitates a high degree of safety and numerous space walks at significant cost. Future space mission costs must be reduced and safety increased to reasonably achieve exploration goals. One proposed concept is the autonomous assembly of space structures. This concept is an affordable, reliable solution to in-space and extraterrestrial assembly. Assembly is autonomously performed when two components join after determining that specifications are correct. Local sensors continue to monitor joint integrity after assembly, which is critical for safety and structural reliability. Achieving this concept requires a change in space structure design philosophy and the development of innovative technologies to perform autonomous assembly. Assembly of large space structures will require significant numbers of integrity sensors. Thus simple, low-cost sensors are integral to the success of this concept. This paper addresses these issues and proposes a novel concept for assembling space structures autonomously. Core technologies required to achieve in-space assembly are presented. These core technologies are critical to the goal of utilizing space in a cost-efficient and safe manner. Additionally, these novel technologies can be applied to other systems both on Earth and in extraterrestrial environments.
Sabel, Bernhard A; Wang, Jiaqi; Cárdenas-Morales, Lizbeth; Faiq, Muneeb; Heim, Christine
2018-06-01
The loss of vision after damage to the retina, optic nerve, or brain often has grave consequences in everyday life, such as problems with recognizing faces, reading, or mobility. Because vision loss is considered to be irreversible and often progressive, patients experience continuous mental stress due to worries, anxiety, or fear, with secondary consequences such as depression and social isolation. While prolonged mental stress is clearly a consequence of vision loss, it may also aggravate the situation. In fact, continuous stress and elevated cortisol levels negatively impact the eye and brain due to autonomic nervous system (sympathetic) imbalance and vascular dysregulation; hence stress may also be one of the major causes of visual system diseases such as glaucoma and optic neuropathy. Although stress is a known risk factor, its causal role in the development or progression of certain visual system disorders is not widely appreciated. This review of the literature discusses the relationship between stress and ophthalmological diseases. We conclude that stress is both a consequence and a cause of vision loss. This creates a vicious cycle of a downward spiral, in which initial vision loss creates stress, which further accelerates vision loss, creating even more stress, and so forth. This new psychosomatic perspective has several implications for clinical practice. Firstly, stress reduction and relaxation techniques (e.g., meditation, autogenic training, stress management training, and psychotherapy to learn to cope) should be recommended not only as complementary to traditional treatments of vision loss but possibly as a preventive means to reduce progression of vision loss. Secondly, doctors should try their best to inculcate positivity and optimism in their patients while giving them the information the patients are entitled to, especially regarding the important value of stress reduction. In this way, the vicious cycle could be interrupted. More clinical studies are now needed to confirm the causal role of stress in different low vision diseases, to evaluate the efficacy of different anti-stress therapies for preventing progression, and to improve vision recovery and restoration in randomized trials as a foundation of psychosomatic ophthalmology.
Vision based speed breaker detection for autonomous vehicle
NASA Astrophysics Data System (ADS)
C. S., Arvind; Mishra, Ritesh; Vishal, Kumar; Gundimeda, Venugopal
2018-04-01
In this paper, we present a robust, real-time, vision-based approach to detecting speed breakers in urban environments for autonomous vehicles. Our method is designed to detect the speed breaker using visual inputs obtained from a camera mounted on top of a vehicle. The method performs inverse perspective mapping to generate a top view of the road and segments out the region of interest based on difference-of-Gaussian and median-filtered images. Furthermore, the algorithm performs RANSAC line fitting to identify the possible speed breaker candidate region. This initial candidate region from RANSAC is then validated using a support vector machine. Our algorithm can detect different categories of speed breakers on cement, asphalt, and interlock roads under various conditions, and achieves a recall of 0.98.
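As a reference point for the first two stages described above, here is a minimal sketch of inverse perspective mapping followed by a difference-of-Gaussian filter in OpenCV; the four point correspondences would come from camera calibration in practice, and all coordinates and file names are illustrative.

    # Sketch: top view of the road via inverse perspective mapping, then DoG.
    import cv2
    import numpy as np

    frame = cv2.imread("road.png")

    # Trapezoid on the road in the camera image -> rectangle in the top view.
    src = np.float32([[420, 400], [860, 400], [1180, 720], [100, 720]])  # assumed
    dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])

    H = cv2.getPerspectiveTransform(src, dst)
    top_view = cv2.warpPerspective(frame, H, (1280, 720))

    # Difference-of-Gaussian emphasizes band-like markings such as speed breakers.
    g1 = cv2.GaussianBlur(top_view, (5, 5), 1.0)
    g2 = cv2.GaussianBlur(top_view, (9, 9), 2.0)
    dog = cv2.subtract(g1, g2)
    cv2.imwrite("top_view_dog.png", dog)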
NASA Astrophysics Data System (ADS)
Terzopoulos, Demetri; Qureshi, Faisal Z.
Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.
Autonomous proximity operations using machine vision for trajectory control and pose estimation
NASA Technical Reports Server (NTRS)
Cleghorn, Timothy F.; Sternberg, Stanley R.
1991-01-01
A machine vision algorithm was developed which permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation, running on an 80386-based personal computer, using a ModelMATE CAD package to render the target vehicle. However, the algorithm is sufficiently simple that, following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single-camera images of the target vehicle, upon which radial transforms are performed. Selected points of the resulting radial signatures are fed through a decision tree to determine whether the signature matches the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicle can be calculated, and adjustments made in the former's trajectory. In addition, the pose and spin rate of the target satellite can be estimated using this method.
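The paper's exact radial transform and decision tree are not reproduced here, but the general idea of a radial signature (distance from the silhouette centroid to its outermost point, sampled at fixed angles) can be sketched as follows; the bin count and normalization are assumptions.

    # Sketch: radial signature of a target silhouette for view matching.
    import numpy as np

    def radial_signature(mask, n_angles=64):
        ys, xs = np.nonzero(mask)                 # mask: binary silhouette image
        cy, cx = ys.mean(), xs.mean()             # centroid of the silhouette
        ang = np.arctan2(ys - cy, xs - cx)        # angle of each silhouette pixel
        rad = np.hypot(ys - cy, xs - cx)
        sig = np.zeros(n_angles)
        bins = ((ang + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
        for b in range(n_angles):
            r = rad[bins == b]
            sig[b] = r.max() if r.size else 0.0   # outermost radius per angle bin
        return sig / max(sig.max(), 1e-9)          # scale-normalize the signature

    # Matching: compare against stored reference signatures, e.g. by L2 distance.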
NASA Astrophysics Data System (ADS)
Fornas, D.; Sales, J.; Peñalver, A.; Pérez, J.; Fernández, J. J.; Marín, R.; Sanz, P. J.
2016-03-01
This article presents research on the subject of autonomous underwater robot manipulation. Ongoing research in underwater robotics intends to increase the autonomy of intervention operations that require physical interaction, in order to achieve social benefits in fields such as archaeology or biology that cannot afford the expenses of costly underwater operations using remotely operated vehicles. Autonomous grasping is still a very challenging skill, especially in underwater environments, with highly unstructured scenarios, limited availability of sensors, and adverse conditions that affect the robot's perception and control systems. To tackle these issues, we propose the use of vision and segmentation techniques that aim to improve the specification of grasping operations on underwater primitive-shaped objects. Several sources of stereo information are used to gather 3D information in order to obtain a model of the object. Using a RANSAC segmentation algorithm, the model parameters are estimated and a set of feasible grasps is computed. This approach is validated in both simulated and real underwater scenarios.
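The abstract names RANSAC segmentation over stereo-derived 3D points; a generic RANSAC plane fit, the simplest primitive case, can be sketched as below. The iteration count and inlier tolerance are illustrative, and the grasp-synthesis step is omitted.

    # Minimal RANSAC plane segmentation on a 3D point cloud (generic sketch).
    import numpy as np

    def ransac_plane(points, iters=500, tol=0.01, rng=np.random.default_rng(0)):
        best_inliers, best_model = None, None
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            n = np.linalg.norm(normal)
            if n < 1e-9:
                continue                            # degenerate (collinear) sample
            normal /= n
            dist = np.abs((points - p0) @ normal)   # point-to-plane distances
            inliers = dist < tol
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers, best_model = inliers, (p0, normal)
        return best_model, best_inliers

    # cloud = np.loadtxt("stereo_points.xyz")       # Nx3 points from stereo (assumed)
    # (p0, n), mask = ransac_plane(cloud)           # plane model + inlier mask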
Collaborating with Autonomous Agents
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Cross, Charles D.; Fan, Henry; Hempley, Lucas E.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc D.; Allen, B. Danette
2015-01-01
With the anticipated increase of small unmanned aircraft systems (sUAS) entering the National Airspace System, it is highly likely that vehicle operators will be teaming with fleets of small autonomous vehicles. The small vehicles may consist of sUAS, which weigh 55 pounds or less and typically fly at altitudes of 400 feet and below, and small ground vehicles typically operating in buildings or defined small campuses. Typically, the vehicle operators are not concerned with manual control of the vehicle; instead they are concerned with the overall mission. In order for this vision of high-level mission operators working with fleets of vehicles to come to fruition, many human factors related challenges must be investigated and solved. First, the interface between the human operator and the autonomous agent must be at a level that the operator needs and the agents can understand. This paper details the natural language human factors efforts that NASA Langley's Autonomy Incubator is focusing on. In particular, these efforts focus on allowing the operator to interact with the system using speech and gestures rather than a mouse and keyboard. With this ability of the system to understand both speech and gestures, operators not familiar with the vehicle dynamics will be able to easily plan, initiate, and change missions using a language familiar to them rather than having to learn and converse in the vehicle's language. This will foster better teaming between the operator and the autonomous agent, which will help lower workload, increase situation awareness, and improve performance of the system as a whole.
NASA Astrophysics Data System (ADS)
Maguen, Ezra I.; Salz, James J.; McDonald, Marguerite B.; Pettit, George H.; Papaioannou, Thanassis; Grundfest, Warren S.
2002-06-01
A study was undertaken to assess whether results of laser vision correction with the LADARVISION 193-nm excimer laser (Alcon Autonomous Technologies) can be improved with the use of wavefront analysis generated by a proprietary system including a Hartmann-Shack sensor and expressed using Zernike polynomials. A total of 82 eyes underwent LASIK in several centers with an improved algorithm, using the CustomCornea system. A subgroup of 48 eyes of 24 patients was randomized so that one eye underwent conventional treatment and the other underwent treatment based on wavefront analysis. Treatment parameters were equal for each type of refractive error. 83% of all eyes had uncorrected vision of 20/20 or better and 95% were 20/25 or better. In all groups, uncorrected visual acuities did not improve significantly in eyes treated with wavefront analysis compared to conventional treatments. Higher-order aberrations were consistently better corrected in eyes undergoing treatment based on wavefront analysis for LASIK at 6 months postop. In addition, the number of eyes with reduced RMS was significantly higher in the subset of eyes treated with a wavefront algorithm (38% vs. 5%). Wavefront technology may improve the outcomes of laser vision correction with the LADARVISION excimer laser. Further refinements of the technology and clinical trials will contribute to this goal.
Efficient Multi-Concept Visual Classifier Adaptation in Changing Environments
2016-09-01
yet to be discussed in existing supervised multi-concept visual perception systems used in robotics applications [1,5-7]. Annotation of images is ... Autonomous robot navigation in highly populated pedestrian zones. J Field Robotics. 2015;32(4):565-589. 3. Milella A, Reina G, Underwood J. A self-learning framework for statistical ground classification using RADAR and monocular vision. J Field Robotics. 2015;32(1):20-41. 4. Manjanna S, Dudek G
Driving into the future: how imaging technology is shaping the future of cars
NASA Astrophysics Data System (ADS)
Zhang, Buyue
2015-03-01
Fueled by the development of advanced driver assistance system (ADAS), autonomous vehicles, and the proliferation of cameras and sensors, automotive is becoming a rich new domain for innovations in imaging technology. This paper presents an overview of ADAS, the important imaging and computer vision problems to solve for automotive, and examples of how some of these problems are solved, through which we highlight the challenges and opportunities in the automotive imaging space.
Bringing UAVs to the fight: recent army autonomy research and a vision for the future
NASA Astrophysics Data System (ADS)
Moorthy, Jay; Higgins, Raymond; Arthur, Keith
2008-04-01
The Unmanned Autonomous Collaborative Operations (UACO) program was initiated in recognition of the high operational burden associated with utilizing unmanned systems by both mounted and dismounted, ground and airborne warfighters. The program was previously introduced at the 62nd Annual Forum of the American Helicopter Society in May of 2006 [1]. This paper presents the three technical approaches taken and results obtained in UACO. All three approaches were validated extensively in contractor simulations, two were validated in government simulation, one was flight tested outside the UACO program, and one was flight tested in Part 2 of UACO. Results and recommendations are discussed regarding diverse areas such as user training and human-machine interface, workload distribution, UAV flight safety, data link bandwidth, user interface constructs, adaptive algorithms, air vehicle system integration, and target recognition. Finally, a vision for UAV As A Wingman is presented.
Development of dog-like retrieving capability in a ground robot
NASA Astrophysics Data System (ADS)
MacKenzie, Douglas C.; Ashok, Rahul; Rehg, James M.; Witus, Gary
2013-01-01
This paper presents the Mobile Intelligence Team's approach to addressing the CANINE outdoor ground robot competition. The competition required developing a robot that provided retrieving capabilities similar to a dog, while operating fully autonomously in unstructured environments. The vision team consisted of Mobile Intelligence, the Georgia Institute of Technology, and Wayne State University. Important computer vision aspects of the project were the ability to quickly learn the distinguishing characteristics of novel objects, searching images for the object as the robot drove a search pattern, identifying people near the robot for safe operations, correctly identifying the object among distractors, and localizing the object for retrieval. The classifier used to identify the objects is discussed, including an analysis of its performance, and an overview of the entire system architecture is presented. A discussion of the robot's performance in the competition demonstrates the system's successes in real-world testing.
Reconfigurable assembly work station
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Yhu-Tin; Abell, Jeffrey A.; Spicer, John Patrick
A reconfigurable autonomous workstation includes a multi-faced superstructure including a horizontally-arranged frame section supported on a plurality of posts. The posts form a plurality of vertical faces arranged between adjacent pairs of the posts, the faces including first and second faces and a power distribution and position reference face. A controllable robotic arm suspends from the rectangular frame section, and a work table fixedly couples to the power distribution and position reference face. A plurality of conveyor tables are fixedly coupled to the work table, including a first conveyor table through the first face and a second conveyor table through the second face. A vision system monitors the work table and each of the conveyor tables. A programmable controller monitors signal inputs from the vision system to identify and determine orientation of the component on the first conveyor table and control the robotic arm to execute an assembly task.
NASA Astrophysics Data System (ADS)
Shao, Yanhua; Mei, Yanying; Chu, Hongyu; Chang, Zhiyuan; He, Yuxuan; Zhan, Huayi
2018-04-01
Pedestrian detection (PD) is an important application domain in computer vision and pattern recognition. Unmanned Aerial Vehicles (UAVs) have become a major field of research in recent years. In this paper, an algorithm for robust pedestrian detection based on the combination of the infrared HOG (IR-HOG) feature and SVM is proposed for highly complex outdoor scenarios, on the basis of airborne IR image sequences from a UAV. The basic flow of our application is as follows. First, the thermal infrared imager (TAU2-336), installed on our Outdoor Autonomous Searching (OAS) UAV, is used to capture images of the designated outdoor area. Second, image sequence collection and processing are accomplished using a high-performance embedded system with a Samsung ODROID-XU4 as the core and Ubuntu as the operating system, and IR-HOG features are extracted. Finally, the SVM is used to train the pedestrian classifier. Experiments show that our method achieves promising results under complex conditions, including strong noise corruption and partial occlusion.
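A conventional HOG-plus-SVM training pipeline of the kind the paper builds on can be sketched with scikit-image and scikit-learn; the IR-specific feature variant and the authors' training data are not reproduced, and the window size and SVM regularization below are assumptions.

    # Sketch of a HOG + linear SVM pedestrian classifier (generic, not IR-HOG).
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    def hog_features(crops):
        # crops: list of same-size grayscale patches (e.g., 128x64 windows)
        return np.array([hog(c, orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2)) for c in crops])

    # pos/neg: pedestrian and background crops gathered from the IR sequences.
    # X = np.vstack([hog_features(pos), hog_features(neg)])
    # y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
    # clf = LinearSVC(C=0.01).fit(X, y)
    # score = clf.decision_function(hog_features([window]))  # slide over frame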
Rendezvous and Docking for Space Exploration
NASA Technical Reports Server (NTRS)
Machula, M. F.; Crain, T.; Sandhoo, G. S.
2005-01-01
To achieve the exploration goals, new approaches to exploration are being envisioned that include robotic networks, modular systems, pre-positioned propellants, and in-space assembly in Earth orbit, lunar orbit, and other locations around the cosmos. A fundamental requirement for rendezvous and docking to accomplish in-space assembly exists at each of these locations. While existing systems and technologies can accomplish rendezvous and docking in low Earth orbit, and rendezvous and docking with crewed systems has been successfully accomplished in low lunar orbit, our capability must extend toward autonomous rendezvous and docking. To meet the needs of the exploration vision, in-space assembly requiring both crewed and uncrewed vehicles will be an integral part of the exploration architecture. This paper focuses on the intelligent application of autonomous rendezvous and docking technologies to meet the needs of that architecture. It also describes key technology investments that will increase the exploration program's ability to ensure mission success, regardless of whether the rendezvous are fully automated or have humans in the loop.
Autonomous Vision Navigation for Spacecraft in Lunar Orbit
NASA Astrophysics Data System (ADS)
Bader, Nolan A.
NASA aims to achieve unprecedented navigational reliability for the first manned lunar mission of the Orion spacecraft in 2023. A technique for accomplishing this is to integrate autonomous feature tracking as an added means of improving position and velocity estimation. In this thesis, a template matching algorithm and optical sensor are tested on three simulated lunar trajectories using linear covariance techniques under various conditions. A preliminary characterization of the camera gives insight into its ability to determine azimuth and elevation angles to points on the surface of the Moon. A navigation performance analysis shows that an optical camera sensor can aid in decreasing position and velocity errors, particularly in a loss-of-communication scenario. Furthermore, it is found that camera quality and computational capability are the driving factors affecting the performance of such a system.
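The thesis pairs an optical sensor with template matching; a minimal sketch of that matching step using OpenCV's normalized cross-correlation follows. The file names and the 0.8 confidence gate are assumptions, and the linear covariance analysis is out of scope here.

    # Sketch: locate a known surface feature in a frame by template matching.
    import cv2

    frame = cv2.imread("lunar_frame.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("crater_template.png", cv2.IMREAD_GRAYSCALE)

    # Normalized cross-correlation response; the peak gives the best match.
    response = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)

    if max_val > 0.8:                      # confidence gate (assumed threshold)
        x, y = max_loc                     # top-left corner of matched window
        print("feature at pixel (%d, %d), score %.2f" % (x, y, max_val))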
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
The flight telerobotic servicer and technology transfer
NASA Technical Reports Server (NTRS)
Andary, James F.; Bradford, Kayland Z.
1991-01-01
The Flight Telerobotic Servicer (FTS) project at the Goddard Space Flight Center is developing an advanced telerobotic system to assist in and reduce crew extravehicular activity (EVA) for Space Station Freedom (SSF). The FTS will provide a telerobotic capability in the early phases of the SSF program and will be employed for assembly, maintenance, and inspection applications. The current state of space technology and the general nature of the FTS tasks dictate that the FTS be designed with sophisticated teleoperational capabilities for its internal primary operating mode. However, technologies such as advanced computer vision and autonomous planning techniques would greatly enhance the FTS capabilities to perform autonomously in less structured work environments. Another objective of the FTS program is to accelerate technology transfer from research to U.S. industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind
Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing to ... We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
ARK: Autonomous mobile robot in an industrial environment
NASA Technical Reports Server (NTRS)
Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.
1994-01-01
This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons, and the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks, and objects. In this paper we describe the robot's industrial environment, its architecture, a novel combined range and vision sensor, and our recent results in controlling the robot for real-time detection of objects using their color and in processing the robot's range and vision sensor data for navigation.
Position estimation and driving of an autonomous vehicle by monocular vision
NASA Astrophysics Data System (ADS)
Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.
2007-04-01
Automatic adaptive tracking in real time for target recognition provided autonomous control of a scale-model electric truck. The two-wheel-drive truck was modified as an autonomous rover test-bed for vision-based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent, relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests in the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.
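One way a single calibrated camera yields range estimates to a recognized target of known physical size is the pinhole relation Z = f·W/w; a minimal sketch follows. The focal length, target width, and pixel measurements are illustrative assumptions, not values from the paper.

    # Sketch: monocular range and bearing to a target of known physical size.
    FOCAL_PX = 800.0        # calibrated focal length in pixels (assumed)
    TARGET_WIDTH_M = 0.30   # known physical width of the science target (assumed)

    def range_to_target(pixel_width):
        # Pinhole model: Z = f * W / w
        return FOCAL_PX * TARGET_WIDTH_M / pixel_width

    def bearing(pixel_x, image_width=1280):
        # Horizontal offset from the optical axis, in normalized camera units.
        return (pixel_x - image_width / 2.0) / FOCAL_PX

    print(range_to_target(pixel_width=48))  # ~5 m for these assumed values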
The Role of X-Rays in Future Space Navigation and Communication
NASA Technical Reports Server (NTRS)
Winternitz, Luke M. B.; Gendreau, Keith C.; Hasouneh, Monther A.; Mitchell, Jason W.; Fong, Wai H.; Lee, Wing-Tsz; Gavriil, Fotis; Arzoumanian, Zaven
2013-01-01
In the near future, applications using X-rays will enable autonomous navigation and time distribution throughout the solar system, high capacity and low-power space data links, highly accurate attitude sensing, and extremely high-precision formation flying capabilities. Each of these applications alone has the potential to revolutionize mission capabilities, particularly beyond Earth orbit. This paper will outline the NASA Goddard Space Flight Center vision and efforts toward realizing the full potential of X-ray navigation and communications.
The 4-D approach to visual control of autonomous systems
NASA Technical Reports Server (NTRS)
Dickmanns, Ernst D.
1994-01-01
Development of a 4-D approach to dynamic machine vision is described. Core elements of this method are spatio-temporal models oriented towards objects and the laws of perspective projection in a forward mode. Integration of multi-sensory measurement data was achieved through spatio-temporal models serving as invariants for object recognition. Maintenance of a symbolic 4-D image of processes involving objects allowed situation assessment and long-term predictions. Behavioral capabilities were easily realized by state feedback and feed-forward control.
Advanced control architecture for autonomous vehicles
NASA Astrophysics Data System (ADS)
Maurer, Markus; Dickmanns, Ernst D.
1997-06-01
An advanced control architecture for autonomous vehicles is presented. The hierarchical architecture consists of four levels: a vehicle level, a control level, a rule-based level and a knowledge-based level. A special focus is on forms of internal representation, which have to be chosen adequately for each level. The control scheme is applied to VaMP, a Mercedes passenger car which autonomously performs missions on German freeways. VaMP perceives the environment with its sense of vision and conventional sensors. It controls its actuators for locomotion and attention focusing. Modules for perception, cognition and action are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisbin, C.R.
1987-03-01
This document reviews research accomplishments achieved by the staff of the Center for Engineering Systems Advanced Research (CESAR) during the fiscal years 1984 through 1987. The manuscript also describes future CESAR objectives for the 1988-1991 planning horizon, and beyond. As much as possible, the basic research goals are derived from perceived Department of Energy (DOE) needs for increased safety, productivity, and competitiveness in the United States energy producing and consuming facilities. Research areas covered include the HERMIES-II Robot, autonomous robot navigation, hypercube computers, machine vision, and manipulators.
A low-cost test-bed for real-time landmark tracking
NASA Astrophysics Data System (ADS)
Csaszar, Ambrus; Hanan, Jay C.; Moreels, Pierre; Assad, Christopher
2007-04-01
A low-cost vehicle test-bed system was developed to iteratively test, refine and demonstrate navigation algorithms before attempting to transfer the algorithms to more advanced rover prototypes. The platform used here was a modified radio-controlled (RC) car. A microcontroller board and an onboard laptop computer allow for either autonomous or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, a streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars yard. The algorithms tracked rocks as waypoints; the resulting coordinates were used to calculate relative motion and to visually servo to science targets. A limitation of the current system is serial computing (each additional landmark is tracked in order), but since each landmark is tracked independently, transferring the system to appropriate parallel hardware would allow targets to be added without significantly diminishing system speed.
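For reference, dead reckoning with the sensor suite listed above (wheel encoders for distance traveled, a single-axis gyroscope for heading rate) reduces to standard planar odometry. The sketch below is a generic integrator under those assumptions, not code from the JPL test-bed.

    # Planar dead reckoning: integrate gyro heading, then advance position
    # along the mean heading over each interval (midpoint rule).
    import math

    class DeadReckoner:
        def __init__(self, x=0.0, y=0.0, heading=0.0):
            self.x, self.y, self.heading = x, y, heading

        def update(self, encoder_dist_m, gyro_rate_rad_s, dt_s):
            new_heading = self.heading + gyro_rate_rad_s * dt_s
            mid = 0.5 * (self.heading + new_heading)
            self.x += encoder_dist_m * math.cos(mid)
            self.y += encoder_dist_m * math.sin(mid)
            self.heading = new_heading
            return self.x, self.y, self.heading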
NASA Technical Reports Server (NTRS)
Milenkovic, Zoran; DSouza, Christopher; Huish, David; Bendle, John; Kibler, Angela
2012-01-01
The exploration goals of the Orion/MPCV Project will require a mature Rendezvous, Proximity Operations and Docking (RPOD) capability. Ground testing autonomous docking with a next-generation sensor such as the Vision Navigation Sensor (VNS) is a critical step along the path of ensuring successful execution of autonomous RPOD for Orion. This paper discusses the testing rationale, the test configuration, the test limitations and the results obtained from tests performed at the Lockheed Martin Space Operations Simulation Center (SOSC) to evaluate and mature the Orion RPOD system. We show that these tests have greatly increased confidence in the maturity of the Orion RPOD design, reduced some of the latent risks, and in doing so validated the design philosophy of the Orion RPOD system. This paper is organized as follows: first, the objectives of the test are given. Descriptions of the SOSC facility and the Orion RPOD system and associated components follow. The details of the test configuration of the components in question are presented prior to discussing preliminary results of the tests. The paper concludes with closing comments.
Systems and Methods for Automated Water Detection Using Visible Sensors
NASA Technical Reports Server (NTRS)
Rankin, Arturo L. (Inventor); Matthies, Larry H. (Inventor); Bellutta, Paolo (Inventor)
2016-01-01
Systems and methods are disclosed that include automated machine vision that can utilize images of scenes captured by a 3D imaging system configured to image light within the visible light spectrum to detect water. One embodiment includes autonomously detecting water bodies within a scene including capturing at least one 3D image of a scene using a sensor system configured to detect visible light and to measure distance from points within the scene to the sensor system, and detecting water within the scene using a processor configured to detect regions within each of the at least one 3D images that possess at least one characteristic indicative of the presence of water.
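One characteristic indicative of water in visible-light imagery is an unusually smooth (low-texture) surface, since still water mirrors the sky. The sketch below tests that single cue via local intensity variance; the window size and threshold are illustrative assumptions, and the patented system combines appearance cues with the 3D range measurements.

    # Flag low-texture pixels via local variance computed with box filters:
    # Var = E[x^2] - (E[x])^2 over a sliding window.
    import cv2
    import numpy as np

    def low_texture_mask(bgr_image, win=15, var_thresh=20.0):
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
        mean = cv2.blur(gray, (win, win))
        mean_sq = cv2.blur(gray * gray, (win, win))
        variance = mean_sq - mean * mean
        return variance < var_thresh  # True where the surface is mirror-smooth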
New Concepts and Perspectives on Micro-Rotorcraft and Small Autonomous Rotary-Wing Vehicles
NASA Technical Reports Server (NTRS)
Young, Larry A.; Aiken, E. W.; Johnson, J. L.; Demblewski, R.; Andrews, J.; Aiken, Irwin W. (Technical Monitor)
2001-01-01
A key part of the strategic vision for rotorcraft research as identified by senior technologists within the Army/NASA Rotorcraft Division at NASA Ames Research Center is the development and use of small autonomous rotorcraft. Small autonomous rotorcraft are defined for the purposes of this paper to be a class of vehicles that range in size from rotary-wing micro air vehicles (MAVs) to larger, more conventionally sized, rotorcraft uninhabited aerial vehicles (UAVs) - i.e. vehicle gross weights ranging from hundreds of grams to thousands of kilograms. The development of small autonomous rotorcraft represents both a technology challenge and a potential new vehicle class that will have substantial societal impact for: national security, personal transport, planetary science, and public service.
Detecting personnel around UGVs using stereo vision
NASA Astrophysics Data System (ADS)
Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.
2008-04-01
Detecting people around unmanned ground vehicles (UGVs) to facilitate safe operation of UGVs is one of the highest priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high resolution imagery may enable higher performance than range data alone because image appearance can complement shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.
Hybrid vision activities at NASA Johnson Space Center
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1990-01-01
NASA's Johnson Space Center in Houston, Texas, is active in several aspects of hybrid image processing. (The term hybrid image processing refers to a system that combines digital and photonic processing). The major thrusts are autonomous space operations such as planetary landing, servicing, and rendezvous and docking. By processing images in non-Cartesian geometries to achieve shift invariance to canonical distortions, researchers use certain aspects of the human visual system for machine vision. That technology flow is bidirectional; researchers are investigating the possible utility of video-rate coordinate transformations for human low-vision patients. Man-in-the-loop teleoperations are also supported by the use of video-rate image-coordinate transformations, as researchers plan to use bandwidth compression tailored to the varying spatial acuity of the human operator. Technological elements being developed in the program include upgraded spatial light modulators, real-time coordinate transformations in video imagery, synthetic filters that robustly allow estimation of object pose parameters, convolutionally blurred filters that have continuously selectable invariance to such image changes as magnification and rotation, and optimization of optical correlation done with spatial light modulators that have limited range and couple both phase and amplitude in their response.
Clayton, Byron C.
2015-01-01
Successful corporate acquisitions require their managers to achieve substantial performance improvements in order to sufficiently cover acquisition premiums, the expected return of debt and equity investors, and the additional resources needed to capture synergies and accelerate growth. Acquirers understand that achieving the performance improvements necessary to cover these costs and create value for investors will most likely require a significant effort from mergers and acquisitions (M&A) management teams. This understanding drives the common and longstanding practice of offering hefty performance incentive packages to key managers, assuming that financial incentives will induce in-role and extra-role behaviors that drive organizational change and growth. The present study debunks the assumptions of this common M&A practice, providing quantitative evidence that shared vision and autonomous motivation are far more effective drivers of managerial performance than financial incentives. PMID: 25610406
Distributed Scene Analysis For Autonomous Road Vehicle Guidance
NASA Astrophysics Data System (ADS)
Mysliwetz, Birger D.; Dickmanns, E. D.
1987-01-01
An efficient distributed processing scheme has been developed for visual road boundary tracking by 'VaMoRs', a testbed vehicle for autonomous mobility and computer vision. Ongoing work described here is directed at improving the robustness of the road boundary detection process in the presence of shadows, ill-defined edges and other disturbing real-world effects. The system structure and the techniques applied for real-time scene analysis are presented along with experimental results. All subfunctions of road boundary detection for vehicle guidance, such as edge extraction, feature aggregation and camera pointing control, are executed in parallel by an onboard multiprocessor system. On the image-processing level, local oriented edge extraction is performed in multiple 'windows', tightly controlled from a hierarchically higher, model-based level. The interpretation process, involving a geometric road model and the observer's relative position to the road boundaries, is capable of coping with ambiguity in measurement data. By using only selected measurements to update the model parameters, even high noise levels can be dealt with and misleading edges rejected.
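The windowed oriented-edge measurement described above can be sketched as follows: each window reports a magnitude-weighted dominant edge direction, which the higher model-based level can then accept or reject. The Sobel-based estimator and window size are our assumptions, not the VaMoRs implementation.

    # Dominant edge direction (mod pi) inside a square window. The
    # double-angle trick handles the pi-periodicity of line direction.
    import cv2
    import numpy as np

    def dominant_edge_orientation(gray, x, y, half=16):
        win = gray[y - half:y + half, x - half:x + half].astype(np.float32)
        gx = cv2.Sobel(win, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(win, cv2.CV_32F, 0, 1, ksize=3)
        mag = np.hypot(gx, gy).ravel()
        if mag.sum() == 0:
            return None                       # featureless window
        ang = np.arctan2(gy, gx).ravel()
        c = np.average(np.cos(2 * ang), weights=mag)
        s = np.average(np.sin(2 * ang), weights=mag)
        return 0.5 * float(np.arctan2(s, c))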
Computer vision for driver assistance systems
NASA Astrophysics Data System (ADS)
Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner
1998-07-01
Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, scientific progress has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach combines sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main benefit of this approach is the integrative coupling of different algorithms providing partly redundant information.
Autonomous Robotic Inspection in Tunnels
NASA Astrophysics Data System (ADS)
Protopapadakis, E.; Stentoumis, C.; Doulamis, N.; Doulamis, A.; Loupos, K.; Makantasis, K.; Kopsiaftis, G.; Amditis, A.
2016-06-01
In this paper, an automatic robotic inspector for tunnel assessment is presented. The proposed platform is able to autonomously navigate within civil infrastructures, grab stereo images and process/analyse them in order to identify defect types. First, cracks are detected via deep-learning approaches. Then, a detailed 3D model of the cracked area is created using photogrammetric methods. Finally, laser profiling of the tunnel's lining is performed for a narrow region close to the detected crack, allowing potential deformations to be deduced. The robotic platform consists of an autonomous mobile vehicle and a crane arm, guided by the computer-vision-based crack detector, carrying ultrasound sensors, the stereo cameras and the laser scanner. Visual inspection is based on convolutional neural networks, which support the creation of high-level discriminative features for complex non-linear pattern classification. Real-time 3D information is then accurately calculated, and the crack position and orientation are passed to the robotic platform. The entire system has been evaluated in railway and road tunnels, i.e., in the Egnatia Highway and London Underground infrastructures.
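As an illustration of the CNN-based visual inspection step, the sketch below defines a small convolutional patch classifier that labels 32x32 grayscale patches as crack or no-crack. The architecture, patch size, and framework (PyTorch) are illustrative assumptions, not the authors' network.

    # Minimal crack/no-crack patch classifier.
    import torch
    import torch.nn as nn

    class CrackPatchNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 8 * 8, 2)  # for 32x32 inputs

        def forward(self, x):          # x: (N, 1, 32, 32) grayscale patches
            return self.classifier(self.features(x).flatten(1))

    logits = CrackPatchNet()(torch.randn(4, 1, 32, 32))  # -> shape (4, 2)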
Low Gravity Materials Science Research for Space Exploration
NASA Technical Reports Server (NTRS)
Clinton, R. G., Jr.; Semmes, Edmund B.; Schlagheck, Ronald A.; Bassler, Julie A.; Cook, Mary Beth; Wargo, Michael J.; Sanders, Gerald B.; Marzwell, Neville I.
2004-01-01
On January 14, 2004, the President of the United States announced a new vision for the United States civil space program. The Administrator of the National Aeronautics and Space Administration (NASA) has the responsibility to implement this new vision. The President also created a Presidential Commission 'to obtain recommendations concerning implementation of the new vision for space exploration.' The President's Commission recognized that achieving the exploration objectives would require significant technical innovation, research, and development in focal areas defined as 'enabling technologies.' Among the 17 enabling technologies identified for initial focus were advanced structures; advanced power and propulsion; closed-loop life support and habitability; extravehicular activity system; autonomous systems and robotics; scientific data collection and analysis; biomedical risk mitigation; and planetary in situ resource utilization. The Commission also recommended realignment of NASA Headquarters organizations to support the vision for space exploration. NASA has aggressively responded in its planning to support the vision for space exploration and with the current considerations of the findings and recommendations from the Presidential Commission. This presentation will examine the transformation and realignment activities to support the vision for space exploration that are underway in the microgravity materials science program. The heritage of the microgravity materials science program, in the context of residence within the organizational structure of the Office of Biological and Physical Research, and thematic and sub-discipline based research content areas, will be briefly examined as the starting point for the ongoing transformation. Overviews of future research directions will be presented and the status of organizational restructuring at NASA Headquarters, with respect to influences on the microgravity materials science program, will be discussed. Additional information is included in the original extended abstract.
Computer vision techniques for rotorcraft low-altitude flight
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Cheng, Victor H. L.
1988-01-01
A description is given of research that applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and how research in this area relates to the guidance of other autonomous vehicles.
Simultaneous Deep-Ocean Operations With Autonomous and Remotely Operated Vehicles
NASA Astrophysics Data System (ADS)
Yoerger, D. R.; Bowen, A. D.; Bradley, A. M.
2005-12-01
The complementary capabilities of autonomous and remotely operated vehicles can be obtained more efficiently if two or more vehicles can be deployed simultaneously from a single vessel. Simultaneous operations make better use of ship time and personnel. However, such operations require specific technical capabilities and careful scheduling. We recently demonstrated several key capabilities on the VISIONS05 cruise to the Juan de Fuca Ridge, where the Autonomous Benthic Explorer (ABE) and the ROV Jason 2 were operated simultaneously. The cruise featured complex ROV operations ranging from servicing seismic instruments, water sampling, and drilling to installation of in-situ experiments. The AUV provided detailed near-bottom bathymetry of the Endeavour segment while concurrently providing a cable-route survey for a primary Canadian Neptune node. To meet these goals, we had to operate both vehicles at the same time. In previous efforts, we have operated ABE in a coordinated fashion with either the submersible Alvin or Jason 2, but the vehicles were either deployed sequentially or operated in separate acoustic transponder nets, with the restriction that the vessel recover the AUV within a reasonable period after it reached the surface to avoid loss of the AUV. During the VISIONS05 cruise, we operated both vehicles at the same time and demonstrated several key capabilities that make simultaneous operations more efficient. These include the ability of the AUV to anchor to the seafloor after its batteries were expended or if a fault occurred, allowing complex ROV operations to run to completion without the constraint of retrieving the AUV at a specific time. The anchoring system allowed the vehicle to rest near the seafloor on a short mooring in a low-power state. The AUV returned to the surface either through an acoustic command from the vessel or when a preassigned time was reached. We also tested an experimental acoustic beacon system that can allow multiple vehicles to determine their position without interfering with each other.
Autonomous vision-based navigation for proximity operations around binary asteroids
NASA Astrophysics Data System (ADS)
Gil-Fernandez, Jesus; Ortega-Hernando, Guillermo
2018-02-01
Future missions to small bodies demand a higher level of autonomy in the Guidance, Navigation and Control system for higher scientific return and lower operational costs. Different navigation strategies have been assessed for ESA's Asteroid Impact Mission (AIM). The main objective of AIM is the detailed characterization of the binary asteroid Didymos. The trajectories for the proximity operations shall be intrinsically safe, i.e., no collision in the presence of failures (e.g., spacecraft entering safe mode), perturbations (e.g., non-spherical gravity field), and errors (e.g., maneuver execution error). Hyperbolic arcs with sufficient hyperbolic excess velocity are designed to fulfil the safety, scientific, and operational requirements. The trajectory relative to the asteroid is determined using visual camera images. The ground-based trajectory prediction error at some points is comparable to the camera field of view (FOV); therefore, some images do not contain the entire asteroid. Autonomous navigation can update the state of the spacecraft relative to the asteroid at higher frequency. The objective of the autonomous navigation is to improve the on-board knowledge compared to the ground prediction. The algorithms shall fit in off-the-shelf, space-qualified avionics. This note presents suitable image-processing and relative-state filter algorithms for autonomous navigation in proximity operations around binary asteroids.
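A simple image-processing measurement of the kind such a navigation filter could consume is the brightness centroid of the asteroid against dark sky, converted to a unit line-of-sight vector. This sketch is illustrative only (pinhole intrinsics assumed); as the abstract notes, flight algorithms must also cope with frames that contain only part of the body.

    # Threshold the bright body, take its intensity-weighted centroid,
    # and back-project the pixel into a unit line-of-sight vector.
    import numpy as np

    def centroid_los(image, fx, fy, cx, cy, thresh=30):
        ys, xs = np.nonzero(image > thresh)
        if xs.size == 0:
            return None                      # body not visible in the frame
        w = image[ys, xs].astype(np.float64)
        u, v = np.average(xs, weights=w), np.average(ys, weights=w)
        los = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        return los / np.linalg.norm(los)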
Toward increased autonomy in the surgical OR: needs, requests, and expectations.
Kranzfelder, Michael; Staub, Christoph; Fiolka, Adam; Schneider, Armin; Gillen, Sonja; Wilhelm, Dirk; Friess, Helmut; Knoll, Alois; Feussner, Hubertus
2013-05-01
The current trend in surgery toward further trauma reduction inevitably leads to increased technological complexity. It must be assumed that this situation will not stay under the sole control of surgeons; mechanical systems will assist them. Certain segments of the workflow will likely have to be taken over by a machine in an automated or autonomous mode. In addition to an analysis of our own surgical practice, a literature search of the Medline database was performed to identify important aspects, methods, and technologies for increased operating room (OR) autonomy. Robotic surgical systems can help to increase OR autonomy by camera control, application of intelligent instruments, and even accomplishment of automated surgical procedures. However, the important step from simple task execution to autonomous decision making is difficult to realize. Another important aspect is the adaptation of the general technical OR environment. This includes an adaptive OR setting and context-adaptive interfaces, automated tool arrangement, and optimal visualization. Finally, integration of peri- and intraoperative data consisting of the electronic patient record, OR documentation and logistics, medical imaging, and patient surveillance data could increase autonomy. To gain autonomy in the OR, a variety of assistance systems and methodologies need to be incorporated that support the surgeon autonomously as a first step toward the vision of cognitive surgery. Thus, we require the establishment of model-based surgery and the integration of procedural tasks.
NASA Technical Reports Server (NTRS)
Goforth, Andre
1987-01-01
The use of computers in autonomous telerobots is reaching the point where advanced distributed processing concepts and techniques are needed to support the functioning of Space Station era telerobotic systems. Three major issues that have an impact on the design of data management functions in a telerobot are covered. The paper also presents a design concept that incorporates an intelligent systems manager (ISM) running on a spaceborne symbolic processor (SSP) to address these issues. The first issue is the support of a system-wide control architecture or control philosophy. Salient features of two candidates are presented that impose constraints on data management design. The second issue is the role of data management in terms of system integration. This refers to providing shared or coordinated data processing and storage resources to a variety of telerobotic components such as vision, mechanical sensing, real-time coordinated multiple limb and end effector control, and planning and reasoning. The third issue is hardware that supports symbolic processing in conjunction with standard data I/O and numeric processing. An SSP that is currently seen to be technologically feasible and is being developed is described and used as a baseline in the design concept.
Autonomous Vision-Based Tethered-Assisted Rover Docking
NASA Technical Reports Server (NTRS)
Tsai, Dorian; Nesnas, Issa A.D.; Zarzhitsky, Dimitri
2013-01-01
Many intriguing science discoveries on planetary surfaces, such as the seasonal flows on crater walls and skylight entrances to lava tubes, are at sites that are currently inaccessible to state-of-the-art rovers. The in situ exploration of such sites is likely to require a tethered platform both for mechanical support and for providing power and communication. Mother/daughter architectures have been investigated where a mother deploys a tethered daughter into extreme terrains. Deploying and retracting a tethered daughter requires undocking and re-docking of the daughter to the mother, with the latter being the challenging part. In this paper, we describe a vision-based, tether-assisted algorithm for the autonomous re-docking of a daughter to its mother following an extreme-terrain excursion. The algorithm uses fiducials mounted on the mother to improve the reliability and accuracy of estimating the pose of the mother relative to the daughter. The tether that is anchored by the mother helps the docking process and increases the system's tolerance to pose uncertainties by mechanically aligning the mating parts in the final docking phase. A preliminary version of the algorithm was developed and field-tested on the Axel rover in the JPL Mars Yard. The algorithm achieved an 80% success rate in 40 experiments in both firm and loose soils, starting from up to 6 m away at up to a 40 deg radial angle and 20 deg relative heading. The algorithm does not rely on an initial estimate of the relative pose. The preliminary results are promising and help retire the risk associated with the autonomous docking process, enabling its consideration in future Martian and lunar missions.
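Fiducial-based relative pose estimation of the kind described above is commonly posed as a Perspective-n-Point problem: given known 3D fiducial positions on the mother and their detected image locations, solve for the camera-to-mother transform. A minimal sketch with an assumed square fiducial layout and intrinsics (not the Axel flight code):

    # PnP solve for the relative pose of the mother from four fiducials.
    import cv2
    import numpy as np

    object_pts = np.array([[0, 0, 0], [0.2, 0, 0],
                           [0.2, 0.2, 0], [0, 0.2, 0]], dtype=np.float64)
    image_pts = np.array([[312, 248], [401, 251],
                          [398, 340], [309, 336]], dtype=np.float64)
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    if ok:
        R, _ = cv2.Rodrigues(rvec)           # rotation matrix, if needed
        print("range to mother [m]:", float(np.linalg.norm(tvec)))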
MOBLAB: a mobile laboratory for testing real-time vision-based systems in path monitoring
NASA Astrophysics Data System (ADS)
Cumani, Aldo; Denasi, Sandra; Grattoni, Paolo; Guiducci, Antonio; Pettiti, Giuseppe; Quaglia, Giorgio
1995-01-01
In the framework of the EUREKA PROMETHEUS European Project, a Mobile Laboratory (MOBLAB) has been equipped for studying, implementing and testing real-time algorithms which monitor the path of a vehicle moving on roads. Its goal is the evaluation of systems suitable to map the position of the vehicle within the environment where it moves, to detect obstacles, to estimate motion, to plan the path and to warn the driver about unsafe conditions. MOBLAB has been built with the financial support of the National Research Council and will be shared with teams working in the PROMETHEUS Project. It consists of a van equipped with an autonomous power supply, a real-time image processing system, workstations and PCs, B/W and color TV cameras, and TV equipment. This paper describes the laboratory outline and presents the computer vision system and the strategies that have been studied and are being developed at I.E.N. 'Galileo Ferraris'. The system is based on several tasks that cooperate to integrate information gathered from different processes and sources of knowledge. Some preliminary results are presented showing the performance of the system.
Intelligent Vision Systems Independent Research and Development (IR&D) 2006
NASA Technical Reports Server (NTRS)
Patrick, Clinton; Chavis, Katherine
2006-01-01
This report summarizes results in conduct of research sponsored by the 2006 Independent Research and Development (IR&D) program at Marshall Space Flight Center (MSFC) at Redstone Arsenal, Alabama. The focus of this IR&D is neural network (NN) technology provided by Imagination Engines, Incorporated (IEI) of St. Louis, Missouri. The technology already has many commercial, military, and governmental applications, and a rapidly growing list of other potential spin-offs. The goal for this IR&D is implementation and demonstration of the technology for autonomous robotic operations, first in software and ultimately in one or more hardware realizations. Testing is targeted specifically to the MSFC Flat Floor, but may also include other robotic platforms at MSFC, as time and funds permit. For the purpose of this report, the NN technology will be referred to by IEI's designation for a subset configuration of its patented technology suite: Self-Training Autonomous Neural Network Object (STANNO).
Mars rover sample return: An exobiology science scenario
NASA Technical Reports Server (NTRS)
Rosenthal, D. A.; Sims, M. H.; Schwartz, Deborah E.; Nedell, S. S.; Mckay, Christopher P.; Mancinelli, Rocco L.
1988-01-01
A mission designed to collect and return samples from Mars will provide information regarding its composition, history, and evolution. At the same time, a sample return mission poses a technical challenge: sophisticated, semi-autonomous, robotic spacecraft systems must be developed in order to carry out complex operations at the surface of a very distant planet. An interdisciplinary effort was conducted to consider how a Mars mission can be realistically structured to maximize the planetary science return. The focus was to concentrate on a particular set of scientific objectives (exobiology), to determine the instrumentation and analyses required to search for biological signatures, and to evaluate what analyses and decision making can be effectively performed by the rover in order to minimize the overhead of constant communication between Mars and the Earth. Investigations were also begun in the area of machine vision to determine whether layered sedimentary structures can be recognized autonomously, and preliminary results are encouraging.
Real-time adaptive off-road vehicle navigation and terrain classification
NASA Astrophysics Data System (ADS)
Muller, Urs A.; Jackel, Lawrence D.; LeCun, Yann; Flepp, Beat
2013-05-01
We are developing a complete, self-contained autonomous navigation system for mobile robots that learns quickly, uses commodity components, and has the added benefit of emitting no radiation signature. It builds on the autonomous navigation technology developed by Net-Scale and New York University during the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) program and takes advantage of recent scientific advancements achieved during the DARPA Deep Learning program. In this paper we will present our approach and algorithms, show results from our vision system, discuss lessons learned from the past, and present our plans for further advancing vehicle autonomy.
Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor
NASA Astrophysics Data System (ADS)
Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick
This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called the "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of the object, we set up a long, straight line of very fine string inside the robot workspace and then allow the sensor, mounted on the robot, to measure the intersection point of the string and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate and is also suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.
NASA Astrophysics Data System (ADS)
Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki
We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor, and navigation control based on binocular images. As an application of these techniques, we aim to develop a guide robot that can play the role of a guide dog as an aid to people such as the visually impaired or the aged. This paper presents a recognition algorithm which finds the line of a series of Braille blocks and the boundary line between a sidewalk and a roadway where a difference in level exists, using binocular images obtained from a pair of parallel-arrayed CCD cameras. This paper also presents a tracking algorithm with which the guide robot traces along a series of Braille blocks and avoids obstacles and unsafe areas in the path of a person accompanied by the guide robot.
Vision-Based Real-Time Traversable Region Detection for Mobile Robot in the Outdoors.
Deng, Fucheng; Zhu, Xiaorui; He, Chao
2017-09-13
Environment perception is essential for autonomous mobile robots in human-robot coexisting outdoor environments. One of the important tasks for such intelligent robots is to autonomously detect the traversable region in an unstructured 3D real world. The main drawback of most existing methods is their high computational complexity. Hence, this paper proposes a binocular-vision-based, real-time solution for detecting the traversable region outdoors. In the proposed method, an appearance model based on a multivariate Gaussian is quickly constructed from a sample region in the left image, adaptively determined by the vanishing point and dominant borders. Then, a fast, self-supervised segmentation scheme is proposed to classify the traversable and non-traversable regions. The proposed method is evaluated on public datasets as well as a real mobile robot. Implementation on the mobile robot has shown its suitability for real-time navigation applications.
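The appearance-model step lends itself to a compact sketch: fit a multivariate Gaussian to RGB samples from the presumed-traversable sample region, then label pixels by squared Mahalanobis distance. The threshold below is an illustrative assumption, and the paper determines the sample region adaptively rather than taking it as given.

    # Fit a 3-D Gaussian appearance model and classify pixels by
    # squared Mahalanobis distance to it.
    import numpy as np

    def fit_gaussian(samples):               # samples: (N, 3) float RGB
        mu = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(3)
        return mu, np.linalg.inv(cov)

    def traversable_mask(image, mu, cov_inv, thresh=9.0):
        diff = image.reshape(-1, 3).astype(np.float64) - mu
        d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
        return (d2 < thresh).reshape(image.shape[:2])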
Experience of the ARGO autonomous vehicle
NASA Astrophysics Data System (ADS)
Bertozzi, Massimo; Broggi, Alberto; Conte, Gianni; Fascioli, Alessandra
1998-07-01
This paper presents and discusses the first results obtained by the GOLD (Generic Obstacle and Lane Detection) system as an automatic driver of ARGO. ARGO is a Lancia Thema passenger car equipped with a vision-based system that extracts road and environmental information from the acquired scene. By means of stereo vision, obstacles on the road are detected and localized, while the processing of a single monocular image allows the road geometry in front of the vehicle to be extracted. The generality of the underlying approach allows generic obstacles to be detected (without constraints on shape, color, or symmetry) and lane markings to be detected even in the dark and in strong shadow conditions. The hardware system consists of a 200 MHz Pentium PC with MMX technology and a frame-grabber board able to acquire three b/w images simultaneously; the result of the processing (position of obstacles and geometry of the road) is used to drive an actuator on the steering wheel, while debug information is presented to the user on an on-board monitor and a LED-based control panel.
Autonomous Assembly of Modular Structures in Space and on Extraterrestrial Locations
NASA Technical Reports Server (NTRS)
Alhorn, Dean C.
2005-01-01
The new U.S. National Vision for Space Exploration requires many new enabling technologies to accomplish the goals of space commercialization and returning humans to the Moon and extraterrestrial environments. Traditionally, flight elements are complete subsystems requiring humans to complete the integration and assembly. These bulky structures also require the use of heavy launch vehicles to send the units to a desired location. This philosophy necessitates a high degree of safety and numerous spacewalks at significant cost. Future space mission costs must be reduced and safety increased to reasonably achieve exploration goals. One proposed concept is the autonomous assembly of space structures, an affordable, reliable solution to in-space and extraterrestrial assembly. Assembly is performed autonomously when two components join after verifying that specifications are correct. Local sensors continue to monitor joint integrity after assembly, which is critical for safety and structural reliability. Achieving this concept requires a change in space structure design philosophy and the development of innovative technologies to perform autonomous assembly. Assembly of large space structures will require significant numbers of integrity sensors; thus simple, low-cost sensors are integral to the success of this concept. This paper addresses these issues and proposes a novel concept for assembling space structures autonomously. Core technologies required to achieve in-space assembly are presented. These core technologies are critical to the goal of utilizing space in a cost-efficient and safe manner. Additionally, these novel technologies can be applied to other systems both on Earth and in extraterrestrial environments.
[Quality of life of visually impaired adults after low-vision intervention: a pilot study].
Fintz, A-C; Gottenkiene, S; Speeg-Schatz, C
2011-10-01
To demonstrate the benefits of a low-vision intervention upon the quality of life of visually disabled adults, a survey was proposed to patients who sought a low-vision intervention at the Colmar and Strasbourg hospital centres over a period of 9 months. Patients in agreement with the survey were asked to complete the 25-item National Eye Institute Visual Function Questionnaire (NEI VFQ25) in interview format by telephone, once they had attended the first meeting and again 2 months after the end of the low-vision intervention. The low-vision intervention led to overall improvement as judged by the 25 items of the questionnaire. Some items involving visual function and psychological issues showed significant benefits: the patients reported a more optimistic score concerning their general vision, reported improved performance in near-vision activities, and felt somewhat more autonomous. More than mainstream psychological counselling, low-vision services help patients cope with visual disabilities in their daily life. The low-vision intervention improves the physical and technical issues necessary to retaining autonomy in daily life.
Vision-based sensing for autonomous in-flight refueling
NASA Astrophysics Data System (ADS)
Scott, D.; Toal, M.; Dale, J.
2007-04-01
A significant capability of unmanned airborne vehicles (UAVs) is that they can operate tirelessly and at maximum efficiency in comparison to their human pilot counterparts. However, a major limiting factor preventing ultra-long-endurance missions is that they must land to refuel. Development effort has been directed at allowing UAVs to refuel automatically in the air using current refueling systems and procedures. The 'hose & drogue' refueling system was targeted as it is considered the more difficult case. Recent flight trials resulted in the first-ever fully autonomous airborne refueling operation. Development has gone into precision GPS-based navigation sensors to maneuver the aircraft into the station-keeping position and onwards to dock with the refueling drogue. However, in the terminal phases of docking, the GPS is operating at its performance limit, and disturbances on the flexible hose and basket are not predictable using an open-loop model. Hence there is significant uncertainty in the position of the refueling drogue relative to the aircraft, which in practical operation is insufficient to achieve a successful and safe docking. A solution is to augment the GPS-based system with a vision-based sensor component through the terminal phase to visually acquire and track the drogue in 3D space. The higher bandwidth and resolution of camera sensors give significantly better estimates of the drogue position. Disturbances in the actual drogue position caused by subtle aircraft maneuvers and wind gusting can be visually tracked and compensated for, providing an accurate estimate. This paper discusses the issues involved in visually detecting a refueling drogue, selecting an optimum camera viewpoint, and acquiring and tracking the drogue throughout widely varying operating ranges and conditions.
Attenuating Stereo Pixel-Locking via Affine Window Adaptation
NASA Technical Reports Server (NTRS)
Stein, Andrew N.; Huertas, Andres; Matthies, Larry H.
2006-01-01
For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking', which produces artificially peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it can correct much larger initial disparity errors than previous approaches and is more general, as it applies beyond the ground plane.
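For context, the standard estimator whose bias the paper attacks fits a parabola through the matching cost at the best integer disparity and its two neighbors and takes the vertex; the systematic bias of this vertex toward integer positions is what produces pixel-locking. A sketch of that baseline (assuming the best disparity is not at the array boundary):

    # Baseline sub-pixel refinement: parabola fit through three costs.
    def subpixel_disparity(cost, d):
        """cost: 1-D matching-cost array; d: best integer disparity (interior)."""
        c_minus, c0, c_plus = cost[d - 1], cost[d], cost[d + 1]
        denom = c_minus - 2.0 * c0 + c_plus
        if denom <= 0:
            return float(d)                  # degenerate: no curvature minimum
        return d + 0.5 * (c_minus - c_plus) / denom  # offset in (-0.5, 0.5)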
Human Detection from a Mobile Robot Using Fusion of Laser and Vision Information
Fotiadis, Efstathios P.; Garzón, Mario; Barrientos, Antonio
2013-01-01
This paper presents a human detection system that can be employed on board a mobile platform for use in autonomous surveillance of large outdoor infrastructures. The prediction is based on the fusion of two detection modules, one for the laser and another for the vision data. In the laser module, a novel feature set that better encapsulates variations due to noise, distance and human pose is proposed. This enhances the generalization of the system, while at the same time, increasing the outdoor performance in comparison with current methods. The vision module uses the combination of the histogram of oriented gradients descriptor and the linear support vector machine classifier. Current approaches use a fixed-size projection to define regions of interest on the image data using the range information from the laser range finder. When applied to small size unmanned ground vehicles, these techniques suffer from misalignment, due to platform vibrations and terrain irregularities. This is effectively addressed in this work by using a novel adaptive projection technique, which is based on a probabilistic formulation of the classifier performance. Finally, a probability calibration step is introduced in order to optimally fuse the information from both modules. Experiments in real world environments demonstrate the robustness of the proposed method. PMID:24008280
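The calibration-and-fusion step can be sketched as follows: map each module's raw score to a probability (logistic/Platt scaling is one plausible choice) and combine the two under an independence assumption. The functional form and constants below are placeholders, not the authors' fitted calibration.

    # Calibrate two detector scores and fuse them as a product of odds.
    import math

    def platt(score, a, b):
        return 1.0 / (1.0 + math.exp(a * score + b))

    def fuse(laser_score, vision_score,
             laser_ab=(-2.0, 0.0), vision_ab=(-1.5, 0.2)):
        p1 = platt(laser_score, *laser_ab)
        p2 = platt(vision_score, *vision_ab)
        odds = (p1 / (1.0 - p1)) * (p2 / (1.0 - p2))
        return odds / (1.0 + odds)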
Reinforcement Learning with Autonomous Small Unmanned Aerial Vehicles in Cluttered Environments
NASA Technical Reports Server (NTRS)
Tran, Loc; Cross, Charles; Montague, Gilbert; Motter, Mark; Neilan, James; Qualls, Garry; Rothhaar, Paul; Trujillo, Anna; Allen, B. Danette
2015-01-01
We present ongoing work in the Autonomy Incubator at NASA Langley Research Center (LaRC) exploring the efficacy of a data-set aggregation approach to reinforcement learning for small unmanned aerial vehicle (sUAV) flight in dense and cluttered environments with reactive obstacle avoidance. The goal is to learn an autonomous flight model using training experiences from a human piloting a sUAV around static obstacles. The training approach uses video data from a forward-facing camera that records the human pilot's flight. Various computer-vision-based features relating to edge and gradient information are extracted from the video. The recorded human control inputs are used to train an autonomous control model that correlates the extracted feature vector to a yaw command. As part of the reinforcement learning approach, the autonomous control model is iteratively updated with feedback from a human agent who corrects undesired model output. This data-driven approach to autonomous obstacle avoidance is explored for simulated forest environments, furthering research on autonomous flight under the tree canopy. This enables flight in previously inaccessible environments which are of interest to NASA researchers in Earth and atmospheric sciences.
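The data-set aggregation loop described above follows the familiar DAgger pattern: roll out the current model, let the human supervisor relabel the frames where its yaw command was wrong, add those pairs to the data set, and retrain on the union. A schematic sketch in which the rollout, correction, and training routines are stand-ins:

    # DAgger-style aggregation loop (all callables are placeholders).
    def aggregate_and_retrain(model, fly_episode, human_correct, train,
                              dataset, n_iters=10):
        for _ in range(n_iters):
            frames, model_yaws = fly_episode(model)      # on-policy rollout
            labels = [human_correct(f, y) for f, y in zip(frames, model_yaws)]
            dataset.extend(zip(frames, labels))          # aggregate data set
            model = train(dataset)                       # retrain on the union
        return model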
Semi-autonomous parking for enhanced safety and efficiency.
DOT National Transportation Integrated Search
2017-06-01
This project focuses on the use of tools from a combination of computer vision and localization based navigation schemes to aid the process of efficient and safe parking of vehicles in high density parking spaces. The principles of collision avoidanc...
The head and neck discomfort of autonomic failure: an unrecognized aetiology of headache
NASA Technical Reports Server (NTRS)
Robertson, D.; Kincaid, D. W.; Haile, V.; Robertson, R. M.
1994-01-01
Information concerning the frequency, severity, character, location, duration, and diurnal pattern of headache and ancillary symptoms was obtained in 25 patients with autonomic failure and 44 control subjects. Precipitating and ameliorating factors were identified. Autonomic failure patients had more head and neck discomfort than controls. Their discomfort was much more likely to localize in the occiput, nape of the neck and shoulder, compared with controls. There was a greater tendency for the discomfort to occur in the morning and after meals. It was sometimes less than 5 min in duration and was often associated with dimming, blurring, or tunnelling of vision. It was provoked by upright posture and relieved by lying down. Patients with severe autonomic failure and orthostatic hypotension often present with a posture-dependent headache or neck pain. Because the relationship of these symptoms to posture is often not recognized, the fact that these findings may signal an underlying autonomic disorder is underappreciated, and the opportunity to consider this aetiology for the headache may be missed.
Parametric dense stereovision implementation on a system-on chip (SoC).
Gardel, Alfredo; Montejo, Pablo; García, Jorge; Bravo, Ignacio; Lázaro, José L
2012-01-01
This paper proposes a novel hardware implementation of dense recovery of stereovision 3D measurements. Traditionally, 3D stereo systems have imposed a maximum number of stereo correspondences, introducing a large restriction on artificial vision algorithms. The proposed system-on-chip (SoC) provides high performance and efficiency, with a scalable architecture available for many different situations, addressing real-time processing of the stereo image flow. Using double-buffering techniques properly combined with pipelined processing, the use of reconfigurable hardware achieves a parametrisable SoC which gives the designer the opportunity to choose its appropriate dimensions and features. The proposed architecture does not need any external memory because the processing is done as the image flow arrives. Our SoC provides 3D data directly, without storage of the whole stereo images. Our goal is to obtain high processing speed while maintaining the accuracy of the 3D data using minimum resources. Configurable parameters may be controlled by later or parallel stages of the vision algorithm executed on an embedded processor. With an FPGA clock of 100 MHz, image flows of up to 50 frames per second (fps) of dense stereo maps with more than 30,000 depth points could be obtained for 2 Mpix images, with a minimal initial latency; 2 Mpix at 50 fps corresponds to 100 Mpix/s, i.e., one pixel per clock cycle at 100 MHz. The implementation of computer vision algorithms on reconfigurable hardware, particularly low-level processing, opens up the prospect of their use in autonomous systems, where they can act as a coprocessor to reconstruct 3D images with high-density information in real time.
The IBM HeadTracking Pointer: improvements in vision-based pointer control.
Kjeldsen, Rick
2008-07-01
Vision-based head trackers have been around for some years and are even beginning to be commercialized, but problems remain with respect to usability. Users without the ability to use traditional pointing devices (the intended audience of such systems) have no alternative if the automatic bootstrapping process fails. There is room for improvement in face tracking, and the pointer movement dynamics do not support accurate and efficient pointing. This paper describes the IBM HeadTracking Pointer, a system which attempts to directly address some of these issues. Head gestures are used to give the end user a greater level of autonomous control over the system. A novel face-tracking algorithm reduces drift under variable lighting conditions, allowing the use of absolute, rather than relative, pointer positioning. Most importantly, the pointer dynamics have been designed to take into account the constraints of head-based pointing, with a non-linear gain which allows stability in fine pointer movement, high speed on long transitions, and adjustability to support users with different movement dynamics. User studies have identified some difficulties with training the system and some characteristics of the pointer motion that take time to get used to, but also good user feedback and very promising performance results.
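A non-linear pointer gain of the kind described can be sketched as a power law: gain stays low for small head velocities (stability for fine positioning) and grows super-linearly for fast motions (speed on long transitions). The form and constants below are illustrative, not IBM's published curve.

    import math

    def pointer_step(head_velocity, k=0.6, gamma=1.8, v_max=50.0):
        """Power-law gain: small motions damped, large motions amplified."""
        v = max(-v_max, min(v_max, head_velocity))
        return math.copysign(k * abs(v) ** gamma, v)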
NASA Astrophysics Data System (ADS)
Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.
1987-01-01
Vision systems for mobile robots or autonomous vehicles navigating in an unknown terrain environment must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and background is the three-dimensional texture of these two regions. Typical methods of textural image segmentation are computationally intensive, often lack the required robustness, and are incapable of sensing the three-dimensional texture of various regions of the scene. A method is presented in which scanned, laser-projected lines of structured light, viewed by a stereoscopically located single video camera, yield an image in which the three-dimensional characteristics of the scene are represented by the discontinuity of the projected lines. This image is conducive to processing with simple regional operators to classify regions as pathway or background. The design of such operators and application methods is presented, with demonstrations on sample images. This method provides a rapid and robust scene segmentation capability that has been implemented on a microcomputer in near real time, and should result in higher-speed and more reliable robotic or autonomous navigation in unstructured environments.
Instrumentation and Control Needs for Reliable Operation of Lunar Base Surface Nuclear Power Systems
NASA Technical Reports Server (NTRS)
Turso, James; Chicatelli, Amy; Bajwa, Anupa
2005-01-01
As one of the near-term goals of the President's Vision for Space Exploration, establishment of a multi-person lunar base will require high-endurance power systems which are independent of the Sun and can operate without replenishment for several years. These requirements may be met using nuclear power systems specifically designed for use on the lunar surface. While it is envisioned that such a system will generally be supervised by humans, some of the required evolutions may be semi- or fully autonomous. The entire base complement for near-term missions may be fewer than 10 individuals, most or all of whom may not be qualified nuclear plant operators and may be off-base for extended periods; hence the need for autonomous power system operation. Startup, shutdown, and load-following operations will require the application of advanced control and health management strategies, with an emphasis on robust, supervisory, coordinated control of, for example, the nuclear heat source, the energy conversion plant (e.g., Brayton Energy Conversion units), and the power management system. Autonomous operation implies that, in addition to being capable of automatic response to disturbance inputs or load changes, the system is also capable of assessing the status of the integrated plant, determining the risk associated with possible actions, and making a decision as to the action that optimizes system performance while minimizing risk to the mission. Adapting the control to deviations from design conditions and to degradation due to component failures will be essential to ensure base inhabitant safety and mission success. Intelligent decisions will have to be made to choose the right set of sensors to provide the data needed for condition monitoring and fault detection and isolation; because of liftoff weight and space limitations, it will not be possible to carry an extensive set of instruments as used for Earth-based systems. Advanced instrumentation and control technologies will be needed to enable this critical functionality of autonomous operation. It will be imperative to consider instrumentation and control requirements in parallel with system configuration development so as to identify control-related, as well as integrated system-related, problem areas early and avoid potentially expensive work-arounds. This paper presents an overview of the enabling technologies necessary for the development of reliable, autonomous lunar base nuclear power systems, with an emphasis on system architectures and off-the-shelf algorithms rather than hardware. Autonomy needs are presented in the context of a hypothetical lunar base nuclear power system. The scenarios and applications presented are hypothetical in nature, based on information from open-literature sources, and are only intended to provoke thought and provide motivation for the use of autonomous, intelligent control and diagnostics.
Open-Loop Flight Testing of COBALT Navigation and Sensor Technologies for Precise Soft Landing
NASA Technical Reports Server (NTRS)
Carson, John M., III; Restrepo, Caroline I.; Seubert, Carl R.; Amzajerdian, Farzin; Pierrottet, Diego F.; Collins, Steven M.; O'Neal, Travis V.; Stelling, Richard
2017-01-01
An open-loop flight test campaign of the NASA COBALT (CoOperative Blending of Autonomous Landing Technologies) payload was conducted onboard the Masten Xodiac suborbital rocket testbed. The payload integrates two complementary sensor technologies that together provide a spacecraft with the knowledge during planetary descent and landing to precisely navigate and softly touch down in close proximity to targeted surface locations. The two technologies are the Navigation Doppler Lidar (NDL), for high-precision velocity and range measurements, and the Lander Vision System (LVS), for map-relative state estimates. A specialized navigation filter running onboard COBALT fuses the NDL and LVS data in real time to produce a very precise Terrain Relative Navigation (TRN) solution that is suitable for future, autonomous planetary landing systems that require precise and soft landing capabilities. During the open-loop flight campaign, the COBALT payload acquired measurements and generated a precise navigation solution, but the Xodiac vehicle planned and executed its maneuvers based on an independent, GPS-based navigation solution. This minimized the risk to the vehicle during the integration and testing of the new navigation sensing technologies within the COBALT payload.
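The abstract does not publish the filter design; the sketch below shows one conventional way such a blend can work, a toy 1-D Kalman filter that takes NDL velocity every cycle and an LVS map-relative position fix whenever one arrives. All gains and noise values are assumptions.

import numpy as np

def fuse_step(x, P, dt, v_ndl, r_v, p_lvs=None, r_p=1.0):
    """One predict/update cycle of a toy 1-D landing-navigation filter.
    State x = [position, velocity]; P is its 2x2 covariance. v_ndl is the
    NDL velocity measurement (every cycle); p_lvs is an LVS map-relative
    position fix (None between fixes)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = np.diag([1e-4, 1e-3])                      # assumed process noise
    x, P = F @ x, F @ P @ F.T + Q                  # predict

    def update(x, P, z, H, r):                     # scalar measurement update
        S = float(H @ P @ H.T) + r
        K = (P @ H.T).ravel() / S                  # Kalman gain
        x = x + K * (z - float(H @ x))
        P = (np.eye(2) - np.outer(K, H)) @ P
        return x, P

    x, P = update(x, P, v_ndl, np.array([[0.0, 1.0]]), r_v)     # NDL velocity
    if p_lvs is not None:                                       # LVS position
        x, P = update(x, P, p_lvs, np.array([[1.0, 0.0]]), r_p)
    return x, P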
Techniques and potential capabilities of multi-resolutional information (knowledge) processing
NASA Technical Reports Server (NTRS)
Meystel, A.
1989-01-01
A concept of nested hierarchical (multi-resolutional, pyramidal) information (knowledge) processing is introduced for a variety of systems including data and/or knowledge bases, vision, control, and manufacturing systems, industrial automated robots, and (self-programmed) autonomous intelligent machines. A set of practical recommendations is presented using a case study of a multiresolutional object representation. It is demonstrated here that any intelligent module transforms (sometimes irreversibly) the knowledge it deals with, and this transformation affects the subsequent computation processes, e.g., those of decision and control. Several types of knowledge transformation are reviewed. Definite conditions are analyzed whose satisfaction is required for the organization and processing of redundant information (knowledge) in multi-resolutional systems. Providing a definite degree of redundancy is one of these conditions.
A Petri-net coordination model for an intelligent mobile robot
NASA Technical Reports Server (NTRS)
Wang, F.-Y.; Kyriakopoulos, K. J.; Tsolkas, A.; Saridis, G. N.
1990-01-01
The authors present a Petri net model of the coordination level of an intelligent mobile robot system (IMRS). The purpose of this model is to specify the integration of the individual efforts on path planning, supervisory motion control, and vision systems that are necessary for the autonomous operation of the mobile robot in a structured dynamic environment. This is achieved by analytically modeling the various units of the system as Petri net transducers and explicitly representing the task precedence and information dependence among them. The model can also be used to simulate the task processing and to evaluate the efficiency of operations and the responsibility of decisions in the coordination level of the IMRS. Some simulation results on the task processing and learning are presented.
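To make the Petri net idea concrete, here is a toy net with the standard enabling and firing rules; the places, transitions, and task names are hypothetical, not the IMRS model.

class PetriNet:
    """Tiny Petri net: pre/post map each transition to its input/output
    places with arc weights; the marking maps places to token counts."""
    def __init__(self, pre, post, marking):
        self.pre, self.post, self.m = pre, post, dict(marking)

    def enabled(self, t):
        # A transition is enabled when every input place holds enough tokens.
        return all(self.m.get(p, 0) >= w for p, w in self.pre[t].items())

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t} not enabled")
        for p, w in self.pre[t].items():
            self.m[p] -= w
        for p, w in self.post[t].items():
            self.m[p] = self.m.get(p, 0) + w

# Toy coordination fragment: a 'plan_ready' token enables motion control,
# whose completion enables a vision check (hypothetical task names).
net = PetriNet(
    pre={"start_motion": {"plan_ready": 1}, "check_vision": {"moving": 1}},
    post={"start_motion": {"moving": 1}, "check_vision": {"at_goal": 1}},
    marking={"plan_ready": 1})
net.fire("start_motion")
net.fire("check_vision")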
DOT National Transportation Integrated Search
2017-12-01
Visions of self-driving vehicles abound in popular science and entertainment. Many programs are at work to turn this vision into reality. Vehicle automation has progressed rapidly in recent years, from simple driver assistance technologies...
JPL Robotics Technology Applicable to Agriculture
NASA Technical Reports Server (NTRS)
Udomkesmalee, Suraphol Gabriel; Kyte, L.
2008-01-01
This slide presentation describes several technologies developed for robotics that are applicable to agriculture. The technologies discussed are detection of humans to allow safe operation of autonomous vehicles, and vision-guided robotic techniques for shoot selection, separation, and transfer to growth media.
Cataract surgical coverage and outcome in the Tibet Autonomous Region of China
Bassett, K L; Noertjojo, K; Liu, L; Wang, F S; Tenzing, C; Wilkie, A; Santangelo, M; Courtright, P
2005-01-01
Background: A recently published, population based survey of the Tibet Autonomous Region (TAR) of China reported on low vision, blindness, and blinding conditions. This paper presents detailed findings from that survey regarding cataract, including prevalence, cataract surgical coverage, surgical outcome, and barriers to use of services. Methods: The Tibet Eye Care Assessment (TECA) was a prevalence survey of people from randomly selected households from three of the seven provinces of the TAR (Lhoka, Nakchu, and Lingzhr), representing its three main environmental regions. The survey, conducted in 1999 and 2000, assessed visual acuity, cause of vision loss, and eye care services. Results: Among the 15 900 people enumerated, 12 644 were examined (79.6%). Cataract prevalence was 5.2% and 13.8% for the total population and those over age 50, respectively. Cataract surgical coverage (vision <6/60) for people age 50 and older (85–90% of the cataract blind) was 56% overall, 70% for men and 47% for women. The most common barriers to use of cataract surgical services were distance and cost. Of the 216 eyes with cataract surgery, 60% were aphakic and 40% were pseudophakic. Pseudophakic surgery left 19% of eyes blind (<6/60) and an additional 20% of eyes with poor vision (6/24–6/60). Aphakic surgery left 24% of eyes blind and an additional 21% of eyes with poor vision. Even though more women remained blind than men (28% versus 18%, respectively), the difference was not statistically significant (p = 0.25). Conclusions: Cataract surgical coverage was remarkably high despite the difficulty of providing services to such an isolated and sparse population. Cataract surgical outcome was poor for both aphakic and pseudophakic surgery. The two main priorities are improving cataract surgical quality and cataract surgical coverage, particularly for women. PMID:15615736
Optical information processing at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Reid, Max B.; Bualat, Maria G.; Cho, Young C.; Downie, John D.; Gary, Charles K.; Ma, Paul W.; Ozcan, Meric; Pryor, Anna H.; Spirkovska, Lilly
1993-01-01
The combination of analog optical processors with digital electronic systems offers the potential of tera-OPS computational performance, while often requiring less power and weight relative to all-digital systems. NASA is working to develop and demonstrate optical processing techniques for on-board, real time science and mission applications. Current research areas and applications under investigation include optical matrix processing for space structure vibration control and the analysis of Space Shuttle Main Engine plume spectra, optical correlation-based autonomous vision for robotic vehicles, analog computation for robotic path planning, free-space optical interconnections for information transfer within digital electronic computers, and multiplexed arrays of fiber optic interferometric sensors for acoustic and vibration measurements.
Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G
2009-03-01
Language understanding is a long-standing problem in computer science. However, the human brain is capable of processing complex languages with seemingly no difficulty. This paper presents a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities at the single-word and grammatical levels. The language system is embedded into a robot in order to demonstrate correct semantic understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.
A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles
NASA Technical Reports Server (NTRS)
Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.
2009-01-01
The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black and white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured to mount multiple vehicles, and act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black and white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black and white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.
Lenard, James; Welsh, Ruth; Danton, Russell
2018-06-01
The aim of this study was to describe the position of pedestrians and pedal cyclists relative to the striking vehicle in the 3 s before impact. This information is essential for the development of effective autonomous emergency braking (AEB) systems and relevant test conditions for consumer ratings. The UK RAIDS-OTS study provided 175 pedestrian and 127 pedal-cycle cases based on in-depth, at-scene investigations of a representative sample of accidents in 2000-2010. Pedal cyclists were scattered laterally more widely than pedestrians (90% of cyclists within around ±80° compared to ±20° for pedestrians); however, their distance from the striking vehicle in the seconds before impact was no greater (90% of cyclists within 42 m at 3 s compared to 50 m for pedestrians). These data are consistent with a greater involvement of slow-moving vehicles in cycle accidents. The implication of the results is that AEB systems for cyclists require almost complete 180° side-to-side vision but do not need a longer distance range than for pedestrians.
Current state of the art of vision based SLAM
NASA Astrophysics Data System (ADS)
Muhammad, Naveed; Fofi, David; Ainouz, Samia
2009-02-01
The ability of a robot to localise itself and simultaneously build a map of its environment (Simultaneous Localisation and Mapping, or SLAM) is a fundamental characteristic required for autonomous operation of the robot. Vision sensors are very attractive for application in SLAM because of their rich sensory output and cost effectiveness. Different issues are involved in the problem of vision based SLAM, and many different approaches exist to solve them. This paper gives a classification of state-of-the-art vision based SLAM techniques in terms of (i) imaging systems used for performing SLAM, which include single cameras, stereo pairs, multiple camera rigs and catadioptric sensors; (ii) features extracted from the environment in order to perform SLAM, which include point features and line/edge features; (iii) initialisation of landmarks, which can either be delayed or undelayed; (iv) SLAM techniques used, which include Extended Kalman Filtering, Particle Filtering, biologically inspired techniques like RatSLAM, and other techniques like Local Bundle Adjustment; and (v) use of wheel odometry information. The paper also presents the implementation and analysis of stereo pair based EKF SLAM on synthetic data. Results show the technique to work successfully in the presence of considerable amounts of sensor noise. We believe that the state of the art presented in this paper can serve as a basis for future research in the area of vision based SLAM, permitting further work in the area to be carried out in an efficient and application-specific way.
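As a concrete illustration of the EKF machinery the survey covers, here is a minimal range-bearing landmark update for EKF SLAM; the state layout and noise matrix are assumptions for the sketch, not taken from the paper.

import numpy as np

def ekf_landmark_update(x, P, z, lm_idx, R):
    """EKF-SLAM measurement update for one already-initialised landmark.
    x: state [rx, ry, rtheta, l1x, l1y, ...]; z: [range, bearing] to
    landmark lm_idx; R: 2x2 measurement noise covariance."""
    x = np.asarray(x, dtype=float)
    i = 3 + 2 * lm_idx
    dx, dy = x[i] - x[0], x[i + 1] - x[1]
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - x[2]])   # predicted measurement
    H = np.zeros((2, len(x)))                          # sparse Jacobian
    H[:, 0:3] = np.array([[-dx / r, -dy / r, 0.0],
                          [dy / q, -dx / q, -1.0]])
    H[:, i:i + 2] = np.array([[dx / r, dy / r],
                              [-dy / q, dx / q]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi        # wrap bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P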
NASA Astrophysics Data System (ADS)
Kozłowski, S. K.; Sybilski, P. W.; Konacki, M.; Pawłaszek, R. K.; Ratajczak, M.; Hełminiak, K. G.; Litwicki, M.
2017-10-01
We present the design and commissioning of Project Solaris, a global network of autonomous observatories. Solaris is a Polish scientific undertaking aimed at the detection and characterization of circumbinary exoplanets and eclipsing binary stars. To accomplish this, a network of four fully autonomous observatories has been deployed in the Southern Hemisphere: Solaris-1 and Solaris-2 at the South African Astronomical Observatory in South Africa; Solaris-3 at Siding Spring Observatory in Australia; and Solaris-4 at Complejo Astronomico El Leoncito in Argentina. The four stations are nearly identical and are equipped with 0.5-m Ritchey-Chrétien (f/15) or Cassegrain (f/9, Solaris-3) optics and high-grade 2 K × 2 K CCD cameras with Johnson and Sloan filter sets. We present the design and implementation of low-level security; data logging and notification systems; weather monitoring components; an all-sky vision system; a surveillance system; and distributed temperature and humidity sensors. We describe the dedicated grounding and lightning protection system design and robust fiber data transfer interfaces in electrically demanding conditions. We discuss the outcomes of our design, as well as the resulting software engineering requirements. We describe our system's engineering approach to achieving the required level of autonomy and the architecture of the custom high-level industry-grade software that has been designed and implemented specifically for the use of the network. We present the actual status of the project and first photometric results; these include data and models of already studied systems for benchmarking purposes (Wasp-4b, Wasp-64b, and Wasp-98b transits; PG 1663-018, an eclipsing binary with a pulsator) as well as J024946-3825.6, an interesting low-mass binary system for which a complete model is provided for the first time.
Street Viewer: An Autonomous Vision Based Traffic Tracking System.
Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano
2016-06-03
The development of intelligent transportation systems requires the availability of both accurate real-time traffic information and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows intensive exploitation of multi-threading and, on the other, improves the overall accuracy and robustness of the system, since each layer refines the information it receives as input before passing it to the following layers. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode, where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and by running the system for long periods of time.
An automated miniature robotic vehicle inspection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobie, Gordon; Summan, Rahul; MacLeod, Charles
2014-02-18
A novel, autonomous reconfigurable robotic inspection system for quantitative NDE mapping is presented. The system consists of a fleet of wireless (802.11g) miniature robotic vehicles, each approximately 175 × 125 × 85 mm with magnetic wheels that enable them to inspect industrial structures such as storage tanks, chimneys and large diameter pipe work. The robots carry one of a number of payloads including a two channel MFL sensor, a 5 MHz dry coupled UT thickness wheel probe and a machine vision camera that images the surface. The system creates an NDE map of the structure, overlaying results onto a 3D model in real time. The authors provide an overview of the robot design, data fusion algorithms (positioning and NDE) and visualization software.
Higher-order neural network software for distortion invariant object recognition
NASA Technical Reports Server (NTRS)
Reid, Max B.; Spirkovska, Lilly
1991-01-01
The state of the art in pattern recognition for such applications as automatic target recognition and industrial robotic vision relies on digital image processing. We present a higher-order neural network model and software which performs the complete feature extraction-pattern classification paradigm required for automatic pattern recognition. Using a third-order neural network, we demonstrate complete, 100 percent accurate invariance to distortions of scale, position, and in-plane rotation. In a higher-order neural network, feature extraction is built into the network and does not have to be learned; only the relatively simple classification step must be learned. This is key to achieving very rapid training. The training set is much smaller than with standard neural network software because the higher-order network only has to be shown one view of each object to be learned, not every possible view. The software and graphical user interface run on any Sun workstation. Results of the use of the neural software in autonomous robotic vision systems are presented. Such a system could have extensive application in robotic manufacturing.
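A toy illustration of why third-order networks get invariance for free: for every triple of active pixels, the interior angles of the triangle they form are unchanged by translation, scale, and in-plane rotation, so features built from them need no learned invariance. The binning scheme below is an assumption, not the reported software.

import numpy as np
from itertools import combinations

def triangle_angle_features(points, bins=8):
    """Histogram the interior angles over all pixel triples; the result is
    invariant to translation, scaling, and in-plane rotation of the points."""
    hist = np.zeros((bins, bins))
    for a, b, c in combinations(points, 3):
        a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))

        def ang(p, q, r):                 # interior angle at vertex p
            u, v = q - p, r - p
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)
                                     + 1e-12)
            return np.arccos(np.clip(cosang, -1.0, 1.0))

        angs = sorted([ang(a, b, c), ang(b, a, c), ang(c, a, b)])
        # The two smallest angles determine the triangle shape; bin them.
        i = min(int(angs[0] / np.pi * bins), bins - 1)
        j = min(int(angs[1] / np.pi * bins), bins - 1)
        hist[i, j] += 1
    return hist / max(hist.sum(), 1)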
NASA Astrophysics Data System (ADS)
Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques
2005-06-01
The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.
Autonomous execution of the Precision Immobilization Technique
NASA Astrophysics Data System (ADS)
Mascareñas, David D. L.; Stull, Christopher J.; Farrar, Charles R.
2017-03-01
Over the course of the last decade great advances have been made in autonomously driving cars. The technology has advanced to the point that driverless car technology is currently being tested on publicly accessed roadways. The introduction of these technologies onto publicly accessed roadways raises questions not only of safety but also of security. Autonomously driving cars are inherently cyber-physical systems, and as such will have novel security vulnerabilities that couple the cyber aspects of the vehicle (including the on-board computing and any network data it makes use of) with the physical nature of the vehicle (including its sensors, actuators, and the vehicle chassis). Widespread implementation of driverless car technology will require that both the cyber and physical security concerns surrounding these vehicles be addressed. In this work, we developed a control policy to autonomously execute the Precision Immobilization Technique, a.k.a. the PIT maneuver. The PIT maneuver was originally developed by law enforcement to end high-speed vehicular pursuits in a quasi-safe manner. However, there is still a risk of damage/roll-over both to the vehicle executing the PIT maneuver and to the vehicle subject to it. In law enforcement applications, it would be preferable to execute the PIT maneuver using an autonomous vehicle, thus removing the danger to law-enforcement officers. Furthermore, it is entirely possible that unscrupulous individuals could inject code into an autonomously driving car to use the PIT maneuver to immobilize other vehicles while maintaining anonymity. For these reasons it is useful to know how the PIT maneuver can be implemented on an autonomous car. In this work a simple control policy based on velocity pursuit was developed to autonomously execute the PIT maneuver using only vision and range measurements, both of which are commonly collected by contemporary driverless cars. The ability of this control policy to execute the PIT maneuver was demonstrated both in simulation and experimentally. The results of this work can help inform the design of autonomous cars with regard to ensuring their cyber-physical security.
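A toy velocity-pursuit steering law in the spirit of the abstract: steer toward an aim point beside the target's rear quarter, using only target position and velocity estimates of the kind obtainable from vision and range sensing. The gains and offsets are invented for the sketch and are not the authors' policy.

import numpy as np

def pit_steering(own_pose, target_pos, target_vel,
                 lateral=1.5, behind=2.0, k=1.2):
    """Proportional pursuit of an aim point at the target's rear quarter.
    own_pose = (x, y, heading_rad); target_pos, target_vel: 2-D vectors.
    Returns a heading-rate command (rad/s). Gains are hypothetical."""
    x, y, th = own_pose
    v = np.asarray(target_vel, dtype=float)
    v_hat = v / (np.linalg.norm(v) + 1e-9)         # target travel direction
    n_hat = np.array([-v_hat[1], v_hat[0]])        # left normal of travel
    aim = np.asarray(target_pos, dtype=float) - behind * v_hat \
          + lateral * n_hat                        # rear-quarter aim point
    desired = np.arctan2(aim[1] - y, aim[0] - x)   # bearing to aim point
    err = (desired - th + np.pi) % (2 * np.pi) - np.pi
    return k * err                                 # steer toward aim point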
Autonomous stair-climbing with miniature jumping robots.
Stoeter, Sascha A; Papanikolopoulos, Nikolaos
2005-04-01
The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.
Navarro, Pedro J.; Fernández, Carlos; Borraz, Raúl; Alonso, Diego
2016-01-01
This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians, by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in the cube. The work presents an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), Naïve Bayes classifier (NBC), and Support Vector Machine (SVM). These algorithms have been trained with 1931 samples. The final performance of the method, measured in a real traffic scenario containing 16 pedestrians and 469 samples of non-pedestrians, shows sensitivity (81.2%), accuracy (96.2%) and specificity (96.8%). PMID:28025565
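A sketch of the projection idea: rasterize the XY, XZ, and YZ projections of the points inside a candidate cube and feed the concatenated features to one of the three classifiers named above. The descriptor and classifier settings are illustrative, not the paper's.

import numpy as np

def projection_features(cube_points, res=16):
    """Rasterize the XY, XZ and YZ projections of the points inside a
    candidate cube into res x res occupancy images and concatenate them.
    cube_points: Nx3 array of LIDAR points inside the cube."""
    feats = []
    for a, b in [(0, 1), (0, 2), (1, 2)]:          # XY, XZ, YZ planes
        img, _, _ = np.histogram2d(cube_points[:, a], cube_points[:, b],
                                   bins=res)
        feats.append((img > 0).astype(float).ravel())
    return np.concatenate(feats)

# Any of the paper's three classifiers can then be trained on these
# features, e.g. (hypothetical arrays X_train, y_train):
#   from sklearn.svm import SVC
#   clf = SVC(kernel="rbf").fit(X_train, y_train)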
NASA Technical Reports Server (NTRS)
Boyer, K. L.; Wuescher, D. M.; Sarkar, S.
1991-01-01
Dynamic edge warping (DEW), a technique for recovering reasonably accurate disparity maps from uncalibrated stereo image pairs, is presented. No precise knowledge of the epipolar camera geometry is assumed. The technique is embedded in a system that includes structural stereopsis on the front end and robust estimation from digital photogrammetry on the back end, for the purpose of self-calibrating stereo image pairs. Once the relative camera orientation is known, the epipolar geometry is computed and the system can use this information to refine its representation of the object space. Such a system will find application in the autonomous extraction of terrain maps from stereo aerial photographs, for which camera position and orientation are unknown a priori, and in online autonomous calibration maintenance for robotic vision applications, in which the cameras are subject to vibration and other physical disturbances after calibration. This work thus forms a component of an intelligent system that begins with a pair of images and, having only vague knowledge of the conditions under which they were acquired, produces an accurate, dense, relative depth map. The resulting disparity map can also be used directly in some high-level applications involving qualitative scene analysis, spatial reasoning, and perceptual organization of the object space. The system as a whole substitutes high-level information and constraints for precise geometric knowledge in driving and constraining the early correspondence process.
Natural Models for Autonomous Control of Spatial Navigation, Sensing, and Guidance
2013-06-26
[Abstract not recovered; surviving fragments reference mantis shrimp polarisation vision (How and Marshall, 2012) and popular outreach material at http://theoatmeal.com/comics/mantis_shrimp.]
ERIC Educational Resources Information Center
Doty, Keith L.
1999-01-01
Research on neural networks and hippocampal function demonstrating how mammals construct mental maps and develop navigation strategies is being used to create Intelligent Autonomous Mobile Robots (IAMRs). Such robots are able to recognize landmarks and navigate without "vision." (SK)
Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles
Olivares-Mendez, Miguel Angel; Sanchez-Lopez, Jose Luis; Jimenez, Felipe; Campoy, Pascual; Sajadi-Alamdari, Seyed Amin; Voos, Holger
2016-01-01
Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption. PMID:26978365
Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis
NASA Astrophysics Data System (ADS)
Markovic, Marko; Dosen, Strahinja; Popovic, Dejan; Graimann, Bernhard; Farina, Dario
2015-12-01
Objective. Myoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control. Approach. We developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against the major commercial state-of-the-art myoelectric control system in ten able-bodied subjects and one amputee subject. All subjects used a transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living. Main results. The CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOFs. However, when tested with a less complex prosthetic system (smaller number of DOFs), the CASP was slower but resulted in reaching motions that contained fewer compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training. Significance. The CASP constitutes a substantial improvement for the control of multi-DOF prostheses. The application of the CASP will have a significant impact when translated to real-life scenarios, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees.
NASA Technical Reports Server (NTRS)
Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.
2012-01-01
A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package, "Argon", is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, mounted on a Rotopod R-2000 platform, and relay the solution to a simulation of the servicer spacecraft running in "Freespace", which performs guidance, navigation and control functions, integrates dynamics, and issues motion commands to a Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results will be reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.
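The abstract does not detail Argon's algorithms; as a generic illustration of machine-vision pose estimation for a known target, the sketch below solves a PnP problem with OpenCV from known 3-D fiducial points and their detected image locations.

import numpy as np
import cv2

def estimate_target_pose(model_pts, image_pts, K, dist=None):
    """Estimate the pose of a target from known 3-D points on it and their
    2-D detections (a generic PnP sketch, not the Argon flight algorithms).
    model_pts: Nx3 target-frame points; image_pts: Nx2 pixel locations;
    K: 3x3 camera intrinsic matrix. Returns rotation matrix and translation
    of the target in the camera frame."""
    ok, rvec, tvec = cv2.solvePnP(
        model_pts.astype(np.float32), image_pts.astype(np.float32),
        K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)     # rotation vector -> rotation matrix
    return R, tvec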
The Rise of Networks: How Decentralized Management Is Improving Schools
ERIC Educational Resources Information Center
Kelleher, Maureen
2014-01-01
School districts across the country are shifting away from their traditional management paradigm--a central office that directs its schools through uniform mandates and policies--toward a new vision where district leaders support autonomous schools while holding them accountable for student performance. The advent of new governance mechanisms…
Constructing Game Agents from Video of Human Behavior
2009-01-01
Constructing autonomous agents is an important task in video game development. Games such as Quake, Warcraft III, and Halo 2 (Damian 2005) ...
NASA Technical Reports Server (NTRS)
Downward, James G.
1992-01-01
This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.
COBALT CoOperative Blending of Autonomous Landing Technology
NASA Technical Reports Server (NTRS)
Carson, John M. III; Restrepo, Carolina I.; Robertson, Edward A.; Seubert, Carl R.; Amzajerdian, Farzin
2016-01-01
COBALT is a terrestrial test platform for development and maturation of GN&C (Guidance, Navigation and Control) technologies for PL&HA (Precision Landing and Hazard Avoidance). The project is developing a third generation, Langley Navigation Doppler Lidar (NDL) for ultra-precise velocity and range measurements, which will be integrated and tested with the JPL Lander Vision System (LVS) for Terrain Relative Navigation (TRN) position estimates. These technologies together provide navigation that enables controlled precision landing. The COBALT hardware will be integrated in 2017 into the GN&C subsystem of the Xodiac rocket-propulsive Vertical Test Bed (VTB) developed by Masten Space Systems (MSS), and two terrestrial flight campaigns will be conducted: one open-loop (i.e., passive) and one closed-loop (i.e., active).
Neuro-inspired smart image sensor: analog Hmax implementation
NASA Astrophysics Data System (ADS)
Paindavoine, Michel; Dubois, Jérôme; Musa, Purnawarman
2015-03-01
The neuro-inspired vision approach, based on models from biology, allows the computational complexity to be reduced. One of these models, the Hmax model, shows that the recognition of an object in the visual cortex mobilizes the V1, V2 and V4 areas. From the computational point of view, V1 corresponds to the area of directional filters (for example Sobel filters, Gabor filters or wavelet filters). This information is then processed in area V2 in order to obtain local maxima. This new information is then sent to an artificial neural network. This neural processing module corresponds to area V4 of the visual cortex and is intended to categorize objects present in the scene. In order to realize autonomous vision systems (consuming a few milliwatts) that embed such processing, we designed and fabricated, in 0.35 μm CMOS technology, prototypes of two image sensors that implement the V1 and V2 stages of the Hmax model.
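A software analog of the V1/V2 stages described above, using Gabor filtering followed by local max pooling; the filter sizes and pooling windows are assumptions for the sketch, not the chip's parameters.

import numpy as np
from scipy.signal import convolve2d

def gabor(theta, size=9, sigma=2.5, lam=5.0):
    """Oriented Gabor kernel (a V1-like directional filter)."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()

def v1_v2_maps(img, n_orient=4, pool=8):
    """Directional filtering (V1-like) then local max pooling (V2-like),
    producing maps that a V4-like classifier stage would consume."""
    maps = []
    for k in range(n_orient):
        s1 = np.abs(convolve2d(img, gabor(k * np.pi / n_orient), mode="same"))
        h, w = s1.shape
        c1 = s1[:h - h % pool, :w - w % pool] \
            .reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
        maps.append(c1)
    return np.stack(maps)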
Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M
2010-01-01
This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance.
Submillimeter bolt location in car bodywork for production line quality inspection
NASA Astrophysics Data System (ADS)
Altamirano-Robles, Leopoldo; Arias-Estrada, Miguel; Alviso-Quibrera, Samuel; Lopez-Lopez, Aurelio
2000-03-01
In the automotive industry, a vehicle begins with the construction of the vehicle floor. Later on, several robots weld a series of bolts to this floor, which are used to fix other parts. Due to several problems, such as welding tool wear, robot miscalibration, or momentary low power supply, among others, some bolts are incorrectly positioned or are not present at all, causing problems and delays in the next work cells. Therefore, it is important to verify the quality of welded parts before the following assembly steps. A computer vision system is proposed to determine autonomously the presence and quality of the bolts. The system should carry out the inspection in real time at the car assembly line under the following conditions: without touching the bodywork, with precision in the submillimeter range, and in a few seconds. In this paper we present a basic computer vision system for bolt location in the submillimeter range. We analyze three arrangements of the system components (camera and illumination sources) that produce different results in the localization. Results obtained under laboratory conditions are presented and compared for the three approaches. The algorithms were tested on the assembly line. Variations of up to one millimeter in the welded position of the bolts were observed.
Improved obstacle avoidance and navigation for an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Giri, Binod; Cho, Hyunsu; Williams, Benjamin C.; Tann, Hokchhay; Shakya, Bicky; Bharam, Vishal; Ahlgren, David J.
2015-01-01
This paper presents improvements made to the intelligence algorithms employed on Q, an autonomous ground vehicle, for the 2014 Intelligent Ground Vehicle Competition (IGVC). In 2012, the IGVC committee combined the formerly separate autonomous and navigation challenges into a single AUT-NAV challenge. In this new challenge, the vehicle is required to navigate through a grassy obstacle course and stay within the course boundaries (a lane of two white painted lines) that guide it toward a given GPS waypoint. Once the vehicle reaches this waypoint, it enters an open course where it is required to navigate to another GPS waypoint while avoiding obstacles. After reaching the final waypoint, the vehicle is required to traverse another obstacle course before completing the run. Q uses a modular parallel software architecture in which image processing, navigation, and sensor control algorithms run concurrently. A tuned navigation algorithm allows Q to smoothly maneuver through obstacle fields. For the 2014 competition, most revisions occurred in the vision system, which detects white lines and informs the navigation component. Barrel obstacles of various colors presented a new challenge for image processing: the previous color plane extraction algorithm would not suffice. To overcome this difficulty, laser range sensor data were overlaid on visual data. Q also participates in the Joint Architecture for Unmanned Systems (JAUS) challenge at IGVC. For 2014, significant updates were implemented: the JAUS component accepted a greater variety of messages and showed better compliance with the JAUS technical standard. With these improvements, Q secured second place in the JAUS competition.
Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance
NASA Technical Reports Server (NTRS)
Jones, Brandon M.
2005-01-01
Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to induce an autonomous and intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.
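One common way to implement the frame-to-frame coordinate matching the project describes is feature matching plus a RANSAC-estimated projective transform; the sketch below uses OpenCV and is a generic illustration, not the project's algorithm.

import numpy as np
import cv2

def register_frames(prev_img, cur_img, min_matches=12):
    """Match features between consecutive descent images and estimate the
    homography aligning their coordinate systems."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_img, None)
    k2, d2 = orb.detectAndCompute(cur_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise RuntimeError("too few matches for registration")
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects outlier correspondences while fitting the transform.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H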
Intelligent Surveillance Robot with Obstacle Avoidance Capabilities Using Neural Network
2015-01-01
For specific purposes, a vision-based surveillance robot that can run autonomously and acquire images from its dynamic environment is very important, for example, in rescuing disaster victims in Indonesia. In this paper, we propose an architecture for an intelligent surveillance robot that is able to avoid obstacles using 3 ultrasonic distance sensors based on a backpropagation neural network, and a camera for face recognition. A 2.4 GHz transmitter for transmitting video is used by the operator/user to direct the robot to the desired area. Results show the effectiveness of our method, and we evaluate the performance of the system. PMID:26089863
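A minimal sketch of the sensing-to-steering mapping described above: a small feed-forward network takes the three ultrasonic ranges and outputs a steering class. The weights here are random placeholders (the paper's network is trained by backpropagation); the shape is illustrative only.

import numpy as np

# Illustrative, untrained network: 3 ultrasonic ranges -> 3 steering classes.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)) * 0.5, np.zeros(3)

def steer(distances_cm):
    """Map left/center/right ultrasonic ranges (cm) to a steering command."""
    x = np.asarray(distances_cm, dtype=float) / 100.0   # normalize to ~[0, 1]
    h = np.tanh(x @ W1 + b1)                            # hidden layer
    logits = h @ W2 + b2
    return ["left", "straight", "right"][int(np.argmax(logits))]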
COBALT: A GN&C Payload for Testing ALHAT Capabilities in Closed-Loop Terrestrial Rocket Flights
NASA Technical Reports Server (NTRS)
Carson, John M., III; Amzajerdian, Farzin; Hines, Glenn D.; O'Neal, Travis V.; Robertson, Edward A.; Seubert, Carl; Trawny, Nikolas
2016-01-01
The COBALT (CoOperative Blending of Autonomous Landing Technology) payload is being developed within NASA as a risk reduction activity to mature, integrate and test ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) systems targeted for infusion into near-term robotic and future human space flight missions. The initial COBALT payload instantiation is integrating the third-generation ALHAT Navigation Doppler Lidar (NDL) sensor, for ultra high-precision velocity plus range measurements, with the passive-optical Lander Vision System (LVS) that provides Terrain Relative Navigation (TRN) global-position estimates. The COBALT payload will be integrated onboard a rocket-propulsive terrestrial testbed and will provide precise navigation estimates and guidance planning during two flight test campaigns in 2017 (one open-loop and one closed-loop). The NDL is targeting performance capabilities desired for future Mars and Moon Entry, Descent and Landing (EDL). The LVS is already baselined for TRN on the Mars 2020 robotic lander mission. The COBALT platform will provide NASA with a new risk-reduction capability to test integrated EDL Guidance, Navigation and Control (GN&C) components in closed-loop flight demonstrations prior to the actual mission EDL.
A rotorcraft flight database for validation of vision-based ranging algorithms
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1992-01-01
A helicopter flight test experiment was conducted at the NASA Ames Research Center to obtain a database consisting of video imagery and accurate measurements of camera motion, camera calibration parameters, and true range information. The database was developed to allow verification of monocular passive range estimation algorithms for use in the autonomous navigation of rotorcraft during low altitude flight. The helicopter flight experiment is briefly described. Four data sets representative of the different helicopter maneuvers and the visual scenery encountered during the flight test are presented. These data sets will be made available to researchers in the computer vision community.
Egelhaaf, Martin; Kern, Roland
2002-12-01
Vision guides flight behaviour in numerous insects. Despite their small brain, insects easily outperform current man-made autonomous vehicles in many respects. Examples are the virtuosic chasing manoeuvres male flies perform as part of their mating behaviour and the ability of bees to assess, on the basis of visual motion cues, the distance travelled in a novel environment. Analyses at both the behavioural and neuronal levels are beginning to unveil reasons for such extraordinary capabilities of insects. One recipe for their success is the adaptation of visual information processing to the specific requirements of the behavioural tasks and to the specific spatiotemporal properties of the natural input.
A Reusable Design for Precision Lunar Landing Systems
NASA Technical Reports Server (NTRS)
Fuhrman, Linda; Brand, Timothy; Fill, Tom; Norris, Lee; Paschall, Steve
2005-01-01
The top-level architecture to accomplish NASA's Vision for Space Exploration is to use Lunar missions and systems not just as an end in themselves, but also as testbeds for the more ambitious goals of Human Mars Exploration (HME). This approach means that Lunar missions and systems are most likely going to be targeted for (Lunar) polar missions, and also for long-duration (months) surface stays. This overarching theme creates basic top-level requirements for any next-generation lander system: 1) Long duration stays: a) multiple landers in close proximity; b) pinpoint landings for "surface rendezvous"; c) autonomous landing of pre-positioned assets; and d) autonomous hazard detection and avoidance. 2) Polar and deep-crater (dark) landings. 3) Common/extensible systems for Moon and Mars, crew and cargo. These requirements pose challenging technology and capability needs. Compare and contrast: Apollo: a) 1 km landing accuracy; b) Lunar near side (well imaged, and direct-to-Earth communication possible); c) Lunar equatorial (landing trajectories offer the best navigation support from Earth); d) limited lighting conditions; e) significant ground-in-the-loop operations. Lunar Access: a) 10-100 m landing precision; b) "anywhere" access, including polar (potentially poor navigation support from Earth) and far side (poor gravity and imaging; no direct-to-Earth communication); c) "anytime" access, including any lighting condition (including dark); d) full autonomous landing capability; e) extensible design for tele-operation or operator-in-the-loop; and f) minimal ground support to reduce operations costs. The Lunar Access program objectives, therefore, are to develop a baseline Lunar Precision Landing System (PLS) design that enables pinpoint "anywhere, anytime" landings: landing precision of 10-100 m, at any latitude and longitude, in any lighting condition. This paper will characterize basic features of the next-generation Lunar landing system, including trajectory types, sensor suite options, and a reference system architecture.
Large Scale Structure From Motion for Autonomous Underwater Vehicle Surveys
2004-09-01
ERIC Educational Resources Information Center
Taylor, Affrica; Blaise, Mindy
2014-01-01
This paper sets out to queer education's normative human-centric assumptions and to de-centre the straight and narrow vision of the child as only ever becoming an autonomous individual learner. It re-focuses upon the more-than-human learning that takes place when we pay attention to queerer aspects of children's, as well as our own, entangled…
Information-Driven Autonomous Exploration for a Vision-Based Mav
NASA Astrophysics Data System (ADS)
Palazzolo, E.; Stachniss, C.
2017-08-01
Most micro aerial vehicles (MAVs) are flown manually by a pilot. When it comes to autonomous exploration for MAVs equipped with cameras, we need a good exploration strategy for covering an unknown 3D environment in order to build an accurate map of the scene. In particular, the robot must select appropriate viewpoints to acquire informative measurements. In this paper, we present an approach that computes, in real time, a smooth flight path for the exploration of a 3D environment using a vision-based MAV. We assume a known bounding box of the object or building to explore, and our approach iteratively computes the next best viewpoints using a utility function that considers the expected information gain of new measurements, the distance between viewpoints, and the smoothness of the flight trajectories. In addition, the algorithm takes into account the elapsed time of the exploration run so as to safely land the MAV at its starting point after a user-specified time. We implemented our algorithm, and our experiments suggest that it allows for a precise reconstruction of the 3D environment while guiding the robot smoothly through the scene.
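A sketch of a utility of the kind described above, assuming candidate viewpoints come annotated with the needed terms; the weighting form and values are assumptions, not the paper's exact function.

def viewpoint_utility(info_gain, dist, smoothness, t_elapsed, t_budget,
                      w_g=1.0, w_d=0.3, w_s=0.5):
    """Score one candidate viewpoint: reward expected information gain,
    penalize travel distance, reward trajectory smoothness, and discount
    everything as the user-specified time budget runs out (so the planner
    heads back to the start for landing)."""
    time_left = max(0.0, 1.0 - t_elapsed / t_budget)   # 0 forces the return
    return time_left * (w_g * info_gain - w_d * dist + w_s * smoothness)

# candidates: iterable of dicts with keys info_gain, dist, smoothness,
# t_elapsed, t_budget; the next best view is then:
#   best = max(candidates, key=lambda c: viewpoint_utility(**c))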
NASA Astrophysics Data System (ADS)
Maguen, Ezra I.; Salz, James J.; Nesburn, Anthony B.
1997-05-01
Preliminary results of the correction of myopia of up to -7.00 D by tracked photorefractive keratectomy (T-PRK) with a scanning and tracking excimer laser by Autonomous Technologies are discussed. 41 eyes participated (20 males); 28 eyes were evaluated one month postop. At epithelization day, mean uncorrected vision was 20/45.3. At one month postop, 92.8% of eyes were 20/40 or better and 46.4% were 20/20. No eye was worse than 20/50. 75% of eyes were within +/- 0.5 D of emmetropia and 82% were within +/- 1.00 D of emmetropia. Eyes corrected for monovision were included. One eye lost 3 lines of best corrected vision and had more than 1.00 D of induced astigmatism due to a central corneal ulcer. Additional complications included symptomatic recurrent corneal erosions, which were controlled with topical hypertonic saline. T-PRK appears to allow effective correction of low to moderate myopia. Further study will establish the safety and efficacy of the procedure.
Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.
Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun
2016-08-16
Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, and a Gaussian weight function is adopted as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.
NASA Astrophysics Data System (ADS)
Barnawi, Abdulwasa Bakr
Hybrid power generation systems and distributed generation technology are attracting more investment due to the growing demand for energy and the increasing awareness of emissions and their environmental impacts, such as global warming and pollution. The price fluctuation of crude oil is an additional reason for the leading oil-producing countries to consider renewable resources as an alternative. Saudi Arabia, the top oil-exporting country in the world, announced the "Saudi Arabia Vision 2030", which targets generating 9.5 GW of electricity from renewable resources. Two of the most promising renewable technologies are wind turbines (WT) and photovoltaic cells (PV). The integration or hybridization of photovoltaics and wind turbines with battery storage leads to higher adequacy and redundancy for both autonomous and grid-connected systems. This study presents a method for optimal generation unit planning by installing a proper number of solar cells, wind turbines, and batteries in such a way that the net present value (NPV) is minimized while the overall system redundancy and adequacy are maximized. A new renewable fraction technique (RFT) is used to perform the generation unit planning. RFT was tested and validated against particle swarm optimization and HOMER Pro under the same conditions and environment. Randomness and uncertainty in renewable resources and load are considered. Both autonomous and grid-connected system designs were adopted in the optimal generation unit planning process, and an uncertainty factor was designed and incorporated in both. In the autonomous hybrid system design model, a strategy that includes an additional operating reserve, specified as a percentage of the hourly load, was adopted to deal with resource uncertainty, since the battery storage system is the only backup. In the grid-connected hybrid system design model, demand response was incorporated, in addition to the designed uncertainty factor, to overcome the impact of uncertainty and to perform energy trading between the hybrid grid utility and the main grid utility. After the generation unit planning was carried out and component sizing was determined, an adequacy evaluation was conducted by calculating the loss-of-load-expectation adequacy index for different contingency criteria, considering the probability of equipment failure. Finally, microgrid planning was conducted by finding the proper size and location to install distributed generation units in a radial distribution network.
Cooperative crossing of traffic intersections in a distributed robot system
NASA Astrophysics Data System (ADS)
Rausch, Alexander; Oswald, Norbert; Levi, Paul
1995-09-01
In traffic scenarios a distributed robot system has to cope with problems like resource sharing, distributed planning, distributed job scheduling, etc. While travelling along a street segment can be done autonomously by each robot, crossing an intersection, a shared resource, forces the robot to coordinate its actions with those of other robots, e.g. by means of negotiation. We discuss the influence of cooperation on the design of a robot control architecture. Task- and sensor-specific cooperation between robots requires the robots' architectures to be interlinked at different hierarchical levels. Inside each level, control cycles run in parallel and provide fast reaction to events. Internal cooperation may occur between cycles of the same level. Altogether, the architecture is matrix-shaped and contains abstract control cycles with a certain degree of autonomy. Based upon the internal structure of a cycle, we consider the horizontal and vertical interconnection of cycles to form an individual architecture. Thereafter we examine the linkage of several agents and its influence on an interacting architecture. A prototypical implementation of a scenario, which combines aspects of active vision and cooperation, illustrates our approach. Two vision-guided vehicles are faced with line following, intersection recognition and negotiation.
Examples of design and achievement of vision systems for mobile robotics applications
NASA Astrophysics Data System (ADS)
Bonnin, Patrick J.; Cabaret, Laurent; Raulet, Ludovic; Hugel, Vincent; Blazevic, Pierre; M'Sirdi, Nacer K.; Coiffet, Philippe
2000-10-01
Our goal is to design and build a multiple-purpose vision system for various robotics applications: wheeled robots (like cars for autonomous driving), legged robots (six- and four-legged robots such as SONY's AIBO, and humanoids), and flying robots (to inspect bridges, for example), in various conditions: indoor or outdoor. Considering that the constraints depend on the application, we propose an edge segmentation implemented either in software or in hardware using CPLDs (ASICs or FPGAs could be used too). After discussing the criteria of our choice, we propose a chain of image processing operators constituting an edge segmentation. Although this chain is quite simple and very fast to execute, the results appear satisfactory. We propose a software implementation of it. Its temporal optimization is based on: its implementation under the pixel data-flow programming model, the gathering of local processing where possible, the simplification of computations, and the use of fast-access data structures. Then, we describe a first dedicated hardware implementation of the first part, which requires 9 CPLDs in this low-cost version. It is technically possible, but more expensive, to implement these algorithms using only a single FPGA.
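A software version of such an edge-segmentation chain can be sketched in a few lines of OpenCV; the chain below (smoothing, gradient computation, non-maximum suppression, and hysteresis, as bundled in the Canny detector) is an illustrative stand-in, not the exact operator chain proposed in the paper.

import cv2

def edge_segmentation(gray, low=50, high=150):
    # Smooth first, then let Canny perform gradient computation,
    # non-maximum suppression, and hysteresis thresholding in one call.
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.0)
    return cv2.Canny(blurred, low, high)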
Micro air vehicle autonomous obstacle avoidance from stereo-vision
NASA Astrophysics Data System (ADS)
Brockers, Roland; Kuwata, Yoshiaki; Weiss, Stephan; Matthies, Lawrence
2014-06-01
We introduce a new approach for on-board autonomous obstacle avoidance for micro air vehicles flying outdoors in close proximity to structure. Our approach uses inverse-range, polar-perspective stereo-disparity maps for obstacle detection and representation, and deploys a closed-loop RRT planner that considers flight dynamics for trajectory generation. While motion planning is executed in 3D space, we reduce collision checking to a fast z-buffer-like operation in disparity space, which allows for a significant speed-up compared to full 3D methods. Evaluations in simulation illustrate the robustness of our approach, while real-world flights under tree canopy demonstrate its potential.
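The z-buffer-like collision check in disparity space reduces to a projection and a comparison per trajectory point. The sketch below assumes a rectified pinhole stereo model (focal lengths fx, fy, principal point (cx, cy), baseline in metres); the margin and all names are illustrative, not the authors' code.

import numpy as np

def point_in_collision(p_cam, disparity, fx, fy, cx, cy, baseline, margin=1.0):
    # Project a trajectory point (camera frame, Z forward) into the image and
    # compare its expected disparity with the measured one: a larger measured
    # disparity means something sits in front of the point.
    X, Y, Z = p_cam
    if Z <= 0.0:
        return True                     # behind the camera: treat as invalid
    u = int(round(fx * X / Z + cx))
    v = int(round(fy * Y / Z + cy))
    h, w = disparity.shape
    if not (0 <= u < w and 0 <= v < h):
        return False                    # outside the field of view
    d_expected = fx * baseline / Z
    return disparity[v, u] > d_expected + margin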
Robonaut Mobile Autonomy: Initial Experiments
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Goza, S. M.; Tyree, K. S.; Huber, E. L.
2006-01-01
A mobile version of the NASA/DARPA Robonaut humanoid recently completed initial autonomy trials working directly with humans in cluttered environments. This compact robot combines the upper body of the Robonaut system with a Segway Robotic Mobility Platform yielding a dexterous, maneuverable humanoid ideal for interacting with human co-workers in a range of environments. This system uses stereovision to locate human teammates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form complex behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
Close-Range Tracking of Underwater Vehicles Using Light Beacons
Bosch, Josep; Gracias, Nuno; Ridao, Pere; Istenič, Klemen; Ribas, David
2016-01-01
This paper presents a new tracking system for autonomous underwater vehicles (AUVs) navigating in a close formation, based on computer vision and the use of active light markers. While acoustic localization can be very effective from medium to long distances, it is not so advantageous in short distances when the safety of the vehicles requires higher accuracy and update rates. The proposed system allows the estimation of the pose of a target vehicle at short ranges, with high accuracy and execution speed. To extend the field of view, an omnidirectional camera is used. This camera provides a full coverage of the lower hemisphere and enables the concurrent tracking of multiple vehicles in different positions. The system was evaluated in real sea conditions by tracking vehicles in mapping missions, where it demonstrated robust operation during extended periods of time. PMID:27023547
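For a perspective (or undistorted) view, estimating the target pose from detected light markers reduces to a perspective-n-point problem. The OpenCV sketch below uses invented marker coordinates and intrinsics purely to show the principle; the system's actual omnidirectional geometry, marker layout, and calibration differ.

import cv2
import numpy as np

# Known marker layout on the target AUV (body frame, metres) and the
# corresponding detected image positions -- all values are made up here.
marker_body = np.array([[0.3, 0.0, 0.0], [-0.3, 0.0, 0.0],
                        [0.0, 0.2, 0.0], [0.0, -0.2, 0.0]], dtype=np.float64)
marker_px = np.array([[412.0, 300.5], [350.2, 298.9],
                      [381.7, 260.1], [380.9, 331.4]], dtype=np.float64)
K = np.array([[800.0, 0.0, 512.0],
              [0.0, 800.0, 384.0],
              [0.0, 0.0, 1.0]])         # assumed pinhole intrinsics

# An omnidirectional image would first be reprojected to a perspective model.
ok, rvec, tvec = cv2.solvePnP(marker_body, marker_px, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)          # target-to-camera rotation; tvec = position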
Stereo-vision-based terrain mapping for off-road autonomous navigation
NASA Astrophysics Data System (ADS)
Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.
2009-05-01
Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single-frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
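A toy version of the single-frame mapping step clarifies the data structure involved. The sketch below bins stereo-range points into ground-plane cells, keeps per-cell elevation bounds, and flags cells with a large internal height spread as no-go; cell size, extent, and the criterion are assumptions, not JPL's implementation.

import numpy as np

def build_terrain_map(points, cell=0.2, extent=20.0, step_limit=0.5):
    # points: Nx3 array of (x, y, z) in a local level frame.
    n = int(2 * extent / cell)
    zmin = np.full((n, n), np.inf)
    zmax = np.full((n, n), -np.inf)
    for x, y, z in points:
        i = int((x + extent) / cell)
        j = int((y + extent) / cell)
        if 0 <= i < n and 0 <= j < n:
            zmin[i, j] = min(zmin[i, j], z)
            zmax[i, j] = max(zmax[i, j], z)
    nogo = (zmax - zmin) > step_limit   # crude per-cell obstacle/step criterion
    return zmin, zmax, nogo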
Dental Occlusion and Ophthalmology: A Literature Review
Marchili, Nicola; Ortu, Eleonora; Pietropaoli, Davide; Cattaneo, Ruggero; Monaco, Annalisa
2016-01-01
The stomatognathic system is closely connected to other anatomical regions; many studies have investigated the relationship between the temporomandibular joint and posture, and several articles describe cranio-facial pain of dental origin, such as trigger points. Until now, less attention has been given to the connections between dental occlusion and ophthalmology, even though they are important and far-reaching. Clinical experience in dental practice suggests that mandibular latero-deviation is connected both to eye dominance and to defects of ocular convergence. The trigeminal nerve is the largest and most complex of the twelve cranial nerves. The trigeminal system represents the connection between somitic structures and those derived from the branchial arches, collecting proprioception from both somitic structures and oculomotor muscles. The intermedius nucleus of the medulla is a small perihypoglossal brainstem nucleus, which acts to integrate information from the head and neck and relays it on to the nucleus of the solitary tract, where autonomic responses are generated. This intriguing neurophysiological web led our research group to investigate anatomical and functional associations between dental occlusion and vision. In conclusion, the nervous system and its functional pathways strictly connect vision and dental occlusion, and in the future both dentists and ophthalmologists should be increasingly aware of this correlation for better diagnosis and therapy. PMID:27733873
Comprehensive and Practical Vision System for Self-Driving Vehicle Lane-Level Localization.
Du, Xinxin; Tan, Kok Kiong
2016-05-01
Vehicle lane-level localization is a fundamental technology in autonomous driving. To achieve accurate and consistent performance, a common approach is to use LIDAR technology. However, it is expensive and computationally demanding, and thus not a practical solution in many situations. This paper proposes a stereovision system which is of low cost, yet also able to achieve high accuracy and consistency. It integrates a new lane line detection algorithm with other lane marking detectors to effectively identify the correct lane line markings. It also fits multiple road models to improve accuracy. An effective stereo 3D reconstruction method is proposed to estimate vehicle localization. The estimation consistency is further guaranteed by a new particle filter framework, which takes vehicle dynamics into account. Experimental results based on image sequences taken under different visual conditions showed that the proposed system can identify the lane line markings with 98.6% accuracy. The maximum estimation error of the vehicle distance to lane lines is 16 cm in daytime and 26 cm at night, and the maximum estimation error of its moving direction with respect to the road tangent is 0.06 rad in daytime and 0.12 rad at night. Due to its high accuracy and consistency, the proposed system can be implemented in autonomous driving vehicles as a practical solution to vehicle lane-level localization.
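The role of the particle filter can be illustrated with its measurement update. The sketch below assumes the first state component of each particle is the lateral offset to the lane line and uses a Gaussian likelihood; the state layout, noise level, and resampling scheme are assumptions rather than the paper's exact framework.

import numpy as np

def pf_update(particles, weights, measured_offset, sigma=0.1):
    # Weight each particle by how well its predicted lateral offset matches
    # the stereo measurement, then resample to concentrate on likely states.
    residual = particles[:, 0] - measured_offset
    weights = weights * np.exp(-0.5 * (residual / sigma) ** 2)
    weights += 1e-300                   # guard against an all-zero weight vector
    weights /= weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

A prediction step driven by the vehicle dynamics (speed, yaw rate) would precede this update on every frame.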
Flight Testing of Terrain-Relative Navigation and Large-Divert Guidance on a VTVL Rocket
NASA Technical Reports Server (NTRS)
Trawny, Nikolas; Benito, Joel; Tweddle, Brent; Bergh, Charles F.; Khanoyan, Garen; Vaughan, Geoffrey M.; Zheng, Jason X.; Villalpando, Carlos Y.; Cheng, Yang; Scharf, Daniel P.;
2015-01-01
Since 2011, the Autonomous Descent and Ascent Powered-Flight Testbed (ADAPT) has been used to demonstrate advanced descent and landing technologies onboard the Masten Space Systems (MSS) Xombie vertical-takeoff, vertical-landing suborbital rocket. The current instantiation of ADAPT is a stand-alone payload comprising sensing and avionics for terrain-relative navigation and fuel-optimal onboard planning of large divert trajectories, thus providing complete pin-point landing capabilities needed for planetary landers. To this end, ADAPT combines two technologies developed at JPL, the Lander Vision System (LVS), and the Guidance for Fuel Optimal Large Diverts (G-FOLD) software. This paper describes the integration and testing of LVS and G-FOLD in the ADAPT payload, culminating in two successful free flight demonstrations on the Xombie vehicle conducted in December 2014.
NASA Astrophysics Data System (ADS)
McCain, Harry G.; Andary, James F.; Hewitt, Dennis R.; Haley, Dennis C.
The Flight Telerobotic Servicer (FTS) Project at the Goddard Space Flight Center is developing an advanced telerobotic system to assist in and reduce crew extravehicular activity (EVA) for Space Station Freedom (SSF). The FTS will provide a telerobotic capability to the Freedom Station in the early assembly phases of the program and will be employed for assembly, maintenance, and inspection applications throughout the lifetime of the space station. Appropriately configured elements of the FTS will also be employed for robotic manipulation in remote satellite servicing applications and possibly the Lunar/Mars Program. In mid-1989, the FTS entered the flight system design and implementation phase (Phase C/D) of development with the signing of the FTS prime contract with Martin Marietta Astronautics Group in Denver, Colorado. The basic FTS design is now established and can be reported on in some detail. This paper will describe the FTS flight system design and the rationale for the specific design approaches and component selections. The current state of space technology and the general nature of the FTS task dictate that the FTS be designed with sophisticated teleoperation capabilities for its initial primary operating mode. However, there are technologies, such as advanced computer vision and autonomous planning techniques currently in research and advanced development phases which would greatly enhance the FTS capabilities to perform autonomously in less structured work environments. Therefore, a specific requirement on the initial FTS design is that it has the capability to evolve as new technology becomes available. This paper will describe the FTS design approach for evolution to more autonomous capabilities. Some specific task applications of the FTS and partial automation approaches of these tasks will also be discussed in this paper.
Development of a GPS/INS/MAG navigation system and waypoint navigator for a VTOL UAV
NASA Astrophysics Data System (ADS)
Meister, Oliver; Mönikes, Ralf; Wendel, Jan; Frietsch, Natalie; Schlaile, Christian; Trommer, Gert F.
2007-04-01
Unmanned aerial vehicles (UAVs) can be used for versatile surveillance and reconnaissance missions. If a UAV is capable of flying automatically on a predefined path, the range of possible applications is widened significantly. This paper addresses the development of an integrated GPS/INS/MAG navigation system and a waypoint navigator for a small vertical take-off and landing (VTOL) unmanned four-rotor helicopter with a take-off weight below 1 kg. The core of the navigation system consists of low-cost inertial sensors which are continuously aided with GPS, magnetometer compass, and barometric height information. Because the yaw angle becomes unobservable during hovering flight, integration with a magnetic compass is mandatory. This integration must be robust with respect to errors caused by local deviation of the terrestrial magnetic field and interference from surrounding electronic devices as well as ferrous metals. The described integration concept with a Kalman filter overcomes the problem that erroneous magnetic measurements lead to attitude errors in the roll and pitch axes. The algorithm provides long-term stable navigation information even during GPS outages, which is mandatory for the flight control of the UAV. In the second part of the paper the guidance algorithms are discussed in detail. These algorithms allow the UAV to operate in a semi-autonomous position-hold mode as well as in a completely autonomous waypoint mode. In the position-hold mode the helicopter maintains its position regardless of wind disturbances, which eases the pilot's job during hold-and-stare missions. The autonomous waypoint navigator enables flight beyond visual range and beyond the range of the radio link. Flight test results of the implemented modes of operation are shown.
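The central idea of keeping magnetometer errors out of roll and pitch can be shown with a one-dimensional, Kalman-style heading update; this scalar sketch is a simplification of the paper's full integration filter, and the measurement noise value is an assumption.

import numpy as np

def yaw_update(yaw_est, P, yaw_mag, R_mag=np.deg2rad(5.0) ** 2):
    # Correct only the heading state, so a disturbed magnetic field cannot
    # leak into roll and pitch. The innovation is wrapped to [-pi, pi].
    innov = np.arctan2(np.sin(yaw_mag - yaw_est), np.cos(yaw_mag - yaw_est))
    K = P / (P + R_mag)                 # Kalman gain for the 1-D yaw state
    return yaw_est + K * innov, (1.0 - K) * P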
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kring, C.T.; Varma, V.K.; Jatko, W.B.
The US Army and Team Crusader (United Defense, Lockheed Martin Armament Systems, etc.) are developing the next-generation howitzer, the Crusader. The development program includes an advanced, self-propelled liquid propellant howitzer and a companion resupply vehicle. The resupply vehicle is intended to rendezvous with the howitzer near the battlefront and replenish ammunition, fuel, and other material. The Army has recommended that Crusader incorporate new and innovative technologies to improve performance and safety. One conceptual design proposes a robotic resupply boom on the resupply vehicle to upload supplies to the howitzer. The resupply boom would normally be retracted inside the resupply vehicle during transit. When the two vehicles are within range of the resupply boom, the boom would be extended to a receiving port on the howitzer. In order to reduce exposure to small-arms fire or nuclear, biological, and chemical hazards, the crew would remain inside the resupply vehicle during the resupply operation. The process of extending the boom and linking with the receiving port is called docking. A boom operator would be designated to maneuver the boom into contact with the receiving port using a mechanical joystick. The docking operation depends greatly upon the skill of the boom operator to manipulate the boom into docking position. Computer simulations at the National Aeronautics and Space Administration have shown that computer-assisted or autonomous docking can improve the ability of the operator to dock safely and quickly. This document describes the present status of the Crusader Autonomous Docking System (CADS) implemented at Oak Ridge National Laboratory (ORNL). The purpose of the CADS project is to determine the feasibility and performance limitations of vision systems to satisfy the autonomous docking requirements for Crusader, and to conduct a demonstration under controlled conditions.
Newton, Jenny; Barrett, Steven F; Wilcox, Michael J; Popp, Stephanie
2002-01-01
Machine vision for navigational purposes is a rapidly growing field. Many abilities such as object recognition and target tracking rely on vision. Autonomous vehicles must be able to navigate in dynamic environments and simultaneously locate a target position. Traditional machine vision often fails to react in real time because of large computational requirements, whereas the fly achieves complex orientation and navigation with a relatively small and simple brain. Understanding how the fly extracts visual information and how its neurons encode and process information could lead us to a new approach for machine vision applications. Photoreceptors in the Musca domestica eye that share the same spatial information converge into a structure called the cartridge. The cartridge consists of the photoreceptor axon terminals and monopolar cells L1, L2, and L4. It is thought that L1 and L2 cells encode edge-related information relative to a single cartridge. These cells are thought to be equivalent to vertebrate bipolar cells, producing contrast enhancement and reduction of the information sent to L4. Monopolar cell L4 is thought to perform image segmentation on the information input from L1 and L2 and also to enhance edge detection. A mesh of interconnected L4's would correlate the output from L1 and L2 cells of adjacent cartridges and provide a parallel network for segmenting an object's edges. The focus of this research is to excite photoreceptors of the common housefly, Musca domestica, with different visual patterns. The electrical response of monopolar cells L1, L2, and L4 will be recorded using intracellular recording techniques. Signal analysis will determine the neurocircuitry to detect and segment images.
Hubble Space Telescope: cost reduction by re-engineering telemetry processing and archiving
NASA Astrophysics Data System (ADS)
Miebach, Manfred P.
1998-05-01
The Hubble Space Telescope (HST), the first of NASA's Great Observatories, was launched on April 24, 1990. The HST was designed for a minimum fifteen-year mission, with on-orbit servicing by the Space Shuttle System planned at approximately three-year intervals. Major changes to the HST ground system are planned to be in place for the third servicing mission in December 1999. The primary objectives of the ground system re-engineering effort, a project called 'Vision 2000 Control Center Systems (CCS)', are to reduce both development and operating costs significantly for the remaining years of HST's lifetime. Development costs will be reduced by providing a modern hardware and software architecture and utilizing commercial off-the-shelf (COTS) products wherever possible. Operating costs will be reduced by eliminating redundant legacy systems and processes and by providing an integrated ground system geared toward autonomous operation. Part of CCS is a Space Telescope Engineering Data Store, the design of which is based on current data warehouse technology. The purpose of this data store is to provide a common source of telemetry data for all HST subsystems. This data store will become the engineering data archive and will include a queryable database for the user to analyze HST telemetry. Access to the engineering data in the data warehouse is platform-independent from an office environment using commercial standards. The latest internet technology is used to reach the HST engineering community, and a Web-based user interface allows easy access to the data archives. This paper will provide a high-level overview of the CCS system and will illustrate some of the CCS telemetry capabilities. Samples of CCS user interface pages will be given. Vision 2000 is an ambitious project, but one that is well under way. It will allow the HST program to realize reduced operations costs for the Third Servicing Mission and beyond.
Autophagy induction for the treatment of cancer.
Pietrocola, Federico; Pol, Jonathan; Vacchelli, Erika; Baracco, Elisa E; Levesque, Sarah; Castoldi, Francesca; Maiuri, Maria Chiara; Madeo, Frank; Kroemer, Guido
2016-10-02
Cancer can be viewed in 2 rather distinct ways, namely (i) as a cell-autonomous disease in which malignant cells have escaped control from cell-intrinsic barriers against proliferation and dissemination or (ii) as a systemic disease that involves failing immune control of aberrant cells. Since macroautophagy/autophagy generally increases the fitness of cells as well as their resistance against endogenous or iatrogenic (i.e., relating to illness due to medical intervention) stress, it has been widely proposed that inhibition of autophagy would constitute a valid strategy for sensitizing cancer cells to chemotherapy or radiotherapy. Colliding with this cell-autonomous vision, however, we found that immunosurveillance against transplantable, carcinogen-induced or genetically engineered cancers can be improved by pharmacologically inducing autophagy with caloric restriction mimetics. This positive effect depends on autophagy induction in cancer cells and is mediated by alterations in extracellular ATP metabolism, namely increased release of immunostimulatory ATP and reduced adenosine-dependent recruitment of immunosuppressive regulatory T cells into the tumor bed. The combination of autophagy inducers and chemotherapeutic agents is particularly efficient in reducing cancer growth through the stimulation of CD8 + T lymphocyte-dependent anticancer immune responses.
From wheels to wings with evolutionary spiking circuits.
Floreano, Dario; Zufferey, Jean-Christophe; Nicoud, Jean-Daniel
2005-01-01
We give an overview of the EPFL indoor flying project, whose goal is to evolve neural controllers for autonomous, adaptive, indoor micro-flyers. Indoor flight is still a challenge because it requires miniaturization, energy efficiency, and control of nonlinear flight dynamics. This ongoing project consists of developing a flying, vision-based micro-robot, a bio-inspired controller composed of adaptive spiking neurons directly mapped into digital microcontrollers, and a method to evolve such a neural controller without human intervention. This article describes the motivation and methodology used to reach our goal as well as the results of a number of preliminary experiments on vision-based wheeled and flying robots.
An integrated dexterous robotic testbed for space applications
NASA Technical Reports Server (NTRS)
Li, Larry C.; Nguyen, Hai; Sauer, Edward
1992-01-01
An integrated dexterous robotic system was developed as a testbed to evaluate various robotics technologies for advanced space applications. The system configuration consisted of a Utah/MIT Dexterous Hand, a PUMA 562 arm, a stereo vision system, and a multiprocessing computer control system. In addition to these major subsystems, a proximity sensing system was integrated with the Utah/MIT Hand to provide a capability for non-contact sensing of nearby objects. A high-speed fiber-optic link was used to transmit digitized proximity sensor signals back to the multiprocessing control system. The hardware system was designed to satisfy the requirements for both teleoperated and autonomous operations. The software system was designed to exploit parallel processing capability, pursue functional modularity, incorporate artificial intelligence for robot control, allow high-level symbolic robot commands, maximize reusable code, minimize compilation requirements, and provide an interactive application development and debugging environment for the end users. An overview of the system hardware and software configurations is presented, and the implementation of subsystem functions is discussed.
The Flight Telerobotic Servicer (FTS) - A focus for automation and robotics on the Space Station
NASA Technical Reports Server (NTRS)
Hinkal, Sanford W.; Andary, James F.; Watzin, James G.; Provost, David E.
1987-01-01
The concept, fundamental design principles, and capabilities of the FTS, a multipurpose telerobotic system for use on the Space Station and Space Shuttle, are discussed. The FTS is intended to assist the crew in the performance of extravehicular tasks; the telerobot will also be used on the Orbital Maneuvering Vehicle to service free-flyer spacecraft. The FTS will be capable of both teleoperation and autonomous operation; eventually it may also utilize ground control. By careful selection of the functional architecture and a modular approach to the hardware and software design, the FTS can accept developments in artificial intelligence and newer, more advanced sensors, such as machine vision and collision avoidance.
Strategic Computing Computer Vision: Taking Image Understanding To The Next Plateau
NASA Astrophysics Data System (ADS)
Simpson, R. L., Jr.
1987-06-01
The overall objective of the Strategic Computing (SC) Program of the Defense Advanced Research Projects Agency (DARPA) is to develop and demonstrate a new generation of machine intelligence technology which can form the basis for more capable military systems in the future and also maintain a position of world leadership for the US in computer technology. Begun in 1983, SC represents a focused research strategy for accelerating the evolution of new technology and its rapid prototyping in realistic military contexts. Among the very ambitious demonstration prototypes being developed within the SC Program are: 1) the Pilot's Associate, which will aid the pilot in route planning, aerial target prioritization, evasion of missile threats, and aircraft emergency safety procedures during flight; 2) two battle management projects: one for the Army, which is just getting started, called the AirLand Battle Management (ALBM) program, which will use knowledge-based systems technology to assist in the generation and evaluation of tactical options and plans at the Corps level; 3) the other, more established, program for the Navy, the Fleet Command Center Battle Management Program (FCCBMP) at Pearl Harbor. The FCCBMP is employing knowledge-based systems and natural language technology in an evolutionary testbed situated in an operational command center to demonstrate and evaluate intelligent decision aids which can assist in the evaluation of fleet readiness and explore alternatives during contingencies; and 4) the Autonomous Land Vehicle (ALV), which integrates in a major robotic testbed the technologies for dynamic image understanding and knowledge-based route planning with replanning during execution, hosted on new advanced parallel architectures. The goal of the Strategic Computing computer vision technology base (SCVision) is to develop generic technology that will enable the construction of complete, robust, high-performance image understanding systems to support a wide range of DoD applications. Possible applications include autonomous vehicle navigation, photointerpretation, smart weapons, and robotic manipulation. This paper provides an overview of the technical and program management plans being used in evolving this critical national technology.
Looking inside the Ocean: Toward an Autonomous Imaging System for Monitoring Gelatinous Zooplankton
Corgnati, Lorenzo; Marini, Simone; Mazzei, Luca; Ottaviani, Ennio; Aliani, Stefano; Conversi, Alessandra; Griffa, Annalisa
2016-01-01
Marine plankton abundance and dynamics in the open and interior ocean is still an unknown field. Knowledge of gelatinous zooplankton distribution is especially challenging to acquire, because this type of plankton has a very fragile structure and cannot be directly sampled using traditional net-based techniques. To overcome this shortcoming, computer vision techniques can be successfully used for the automatic monitoring of this group. This paper presents the GUARD1 imaging system, a low-cost stand-alone instrument for underwater image acquisition and recognition of gelatinous zooplankton, and discusses the performance of three different methodologies, Tikhonov Regularization, Support Vector Machines, and Genetic Programming, that have been compared in order to select the one to be run onboard the system for the automatic recognition of gelatinous zooplankton. The performance comparison highlights the high accuracy of the three methods in gelatinous zooplankton identification, showing their good capability at robustly selecting relevant features. In particular, the Genetic Programming technique achieves the same performance as the other two methods while using a smaller set of features, thus being the most efficient in avoiding computationally consuming preprocessing stages, which is a crucial requirement for running on an autonomous imaging system designed for long-lasting deployments, like the GUARD1. The Genetic Programming algorithm has been installed onboard the system, which has been operationally tested in a two-month survey in the Ligurian Sea, providing satisfactory results in terms of monitoring and recognition performance. PMID:27983638
Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1993-01-01
Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a Hamming neural network, an edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, an automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, an optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, the wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on the WT, wavelet analysis of global warming, use of the WT for signal detection, perfect-reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.
Autonomous spacecraft landing through human pre-attentive vision.
Schiavone, Giuseppina; Izzo, Dario; Simões, Luís F; de Croon, Guido C H E
2012-06-01
In this work, we exploit a computational model of human pre-attentive vision to guide the descent of a spacecraft onto extraterrestrial bodies. Providing spacecraft with high degrees of autonomy is a challenge for future space missions. To date, the major effort in this research field has been concentrated on hazard avoidance algorithms and landmark detection, often by reference to a priori maps ranked by scientists according to specific scientific criteria. Here, we present a bio-inspired approach based on the human ability to quickly select intrinsically salient targets in the visual scene; this ability is fundamental for fast decision-making in unpredictable and unknown circumstances. The proposed system integrates a simple model of the spacecraft with optimality principles which guarantee minimum fuel consumption during the landing procedure; detected salient sites are used for retargeting the spacecraft trajectory, subject to safety and reachability conditions. We compare the decisions taken by the proposed algorithm with those of a number of human subjects tested under the same conditions. Our results show that the developed algorithm is indistinguishable from the human subjects with respect to the areas, occurrence, and timing of the retargeting.
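The pre-attentive principle of flagging regions that stand out from their surroundings can be approximated with a center-surround (difference-of-Gaussians) map. The sketch below is far simpler than the model used in the paper, and the two scales are arbitrary choices.

import cv2
import numpy as np

def saliency_map(gray):
    # Center-surround difference: regions that differ from their local
    # neighbourhood light up; smooth maxima suggest candidate retarget sites.
    g = gray.astype(np.float32)
    center = cv2.GaussianBlur(g, (0, 0), 2.0)
    surround = cv2.GaussianBlur(g, (0, 0), 16.0)
    s = np.abs(center - surround)
    return cv2.normalize(s, None, 0.0, 1.0, cv2.NORM_MINMAX)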
Review of somatic symptoms in post-traumatic stress disorder.
Gupta, Madhulika A
2013-02-01
Post-traumatic stress disorder (PTSD) is associated with both (1) 'ill-defined' or 'medically unexplained' somatic syndromes, e.g. unexplained dizziness, tinnitus and blurry vision, and syndromes that can be classified as somatoform disorders (DSM-IV-TR); and (2) a range of medical conditions, with a preponderance of cardiovascular, respiratory, musculoskeletal, neurological, and gastrointestinal disorders, diabetes, chronic pain, sleep disorders and other immune-mediated disorders in various studies. Frequently reported medical co-morbidities with PTSD across various studies include cardiovascular disease, especially hypertension, and immune-mediated disorders. PTSD is associated with limbic instability and alterations in both the hypothalamic- pituitary-adrenal and sympatho-adrenal medullary axes, which affect neuroendocrine and immune functions, have central nervous system effects resulting in pseudo-neurological symptoms and disorders of sleep-wake regulation, and result in autonomic nervous system dysregulation. Hypervigilance, a central feature of PTSD, can lead to 'local sleep' or regional arousal states, when the patient is partially asleep and partially awake, and manifests as complex motor and/or verbal behaviours in a partially conscious state. The few studies of the effects of standard PTSD treatments (medications, CBT) on PTSD-associated somatic syndromes report a reduction in the severity of ill-defined and autonomically mediated somatic symptoms, self-reported physical health problems, and some chronic pain syndromes.
Biologically inspired collision avoidance system for unmanned vehicles
NASA Astrophysics Data System (ADS)
Ortiz, Fernando E.; Graham, Brett; Spagnoli, Kyle; Kelmelis, Eric J.
2009-05-01
In this project, we collaborate with researchers in the neuroscience department at the University of Delaware to develop a Field Programmable Gate Array (FPGA)-based embedded computer inspired by the brains of small vertebrates (fish). The mechanisms of object detection and avoidance in fish have been extensively studied by our Delaware collaborators. The midbrain optic tectum is a biological multimodal navigation controller capable of processing input from all senses that convey spatial information, including vision, audition, touch, and the lateral line (water-current sensing in fish). Unfortunately, computational complexity makes these models too slow for use in real-time applications: the simulations are run offline on state-of-the-art desktop computers, presenting a gap between the application and the target platform, a low-power embedded device. EM Photonics has expertise in developing high-performance computers based on commodity platforms such as graphics cards (GPUs) and FPGAs. FPGAs offer (1) high computational power, low power consumption, and a small footprint (in line with typical autonomous-vehicle constraints), and (2) the ability to implement massively parallel computational architectures, which can be leveraged to closely emulate biological systems. Combining UD's brain-modeling algorithms and the power of FPGAs, this computer enables autonomous navigation in complex environments, and further types of onboard neural processing in future applications.
NASA Astrophysics Data System (ADS)
Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua
2014-11-01
Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after the cameras are calibrated, but the positional relationship between the cameras can change because of vibration, knocks, and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes disabled. A technology for both real-time examination and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix via the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in actual scene images and introduce a regionally weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data show that the method improves the robustness and accuracy of the fundamental matrix estimation. Finally, we perform an experiment computing the relationship of a pair of stereo cameras to demonstrate the accuracy of the algorithm.
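The three-step recovery of the external parameters maps directly onto standard OpenCV calls. In the sketch below, a plain RANSAC fundamental-matrix estimate stands in for the paper's regionally weighted normalization; matched point arrays pts1/pts2 (Nx2) and the intrinsic matrix K are assumed given, and the translation is recovered only up to scale.

import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    # (i) robustly estimate the fundamental matrix from correspondences,
    # (ii) lift it to the essential matrix with the known intrinsics,
    # (iii) decompose into rotation R and unit-scale translation t.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    E = K.T @ F @ K
    inliers = mask.ravel() == 1
    _, R, t, _ = cv2.recoverPose(E, pts1[inliers], pts2[inliers], K)
    return R, t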
Overview of the Autonomic Nervous System
... be reversible or progressive. Anatomy of the autonomic nervous system The autonomic nervous system is the part of ... organs they connect with. Function of the autonomic nervous system The autonomic nervous system controls internal body processes ...
A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge
2016-07-29
Science Foundation (NSF), Department of Defense (DOD), National Institute of Standards and Technology (NIST), Intelligence Community (IC) Introduction...multiple Federal agencies: • Intelligent big data sensors that act autonomously and are programmable via the network for increased flexibility, and... intelligence for scientific discovery enabled by rapid extreme-scale data analysis, capable of understanding and making sense of results and thereby
System-level challenges in pressure-operated soft robotics
NASA Astrophysics Data System (ADS)
Onal, Cagdas D.
2016-05-01
The last decade witnessed the revival of fluidic soft actuation. As pressure-operated soft robotics becomes more popular, with promising recent results, system integration remains an outstanding challenge. Inspired greatly by biology, we envision future robotic systems that embrace mechanical compliance, with bodies composed of soft and hard components as well as electronic and sensing sub-systems, such that robot maintenance starts to resemble surgery. In this vision, portable energy sources and driving infrastructure play a key role in offering autonomous many-DoF soft actuation. On the other hand, while offering many advantages in safety and adaptability for interaction with unstructured environments, objects, and human bodies, mechanical compliance also violates many inherent assumptions of traditional rigid-body robotics. Thus, a complete soft robotic system requires new approaches to proprioception, providing rich sensory information while remaining flexible, and to motion control under significant time delay. This paper discusses our proposed solutions for each of these system-level challenges in soft robotics research.
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan
2016-03-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
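The IPM step itself is a single homography warp. In the sketch below the ground-plane homography is a placeholder; in practice it would be derived from the camera's height and tilt during calibration.

import cv2
import numpy as np

# Placeholder homography mapping image pixels to bird's-eye ground
# coordinates; real values come from the camera mounting calibration.
H_ground = np.array([[1.2, 0.1, -300.0],
                     [0.0, 2.0, -500.0],
                     [0.0, 0.003, 1.0]])

def inverse_perspective_map(frame, out_size=(400, 400)):
    # After the warp, floor texture keeps a consistent scale while anything
    # with height is stretched, the cue exploited by the floor/obstacle labeling.
    return cv2.warpPerspective(frame, H_ground, out_size)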
Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks
NASA Astrophysics Data System (ADS)
Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min
2015-10-01
Vehicle positioning has been the subject of extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error-prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the position of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information, such as speed, lane changes, and driver's condition, through optical wireless links with neighboring vehicles. Thus, the position of a target vehicle that is too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation for CVP. The proposed neural-network-based technique can estimate the target vehicle position from only two image points of the target vehicle using stereo vision. For this, we use rear LEDs on target vehicles as image points. We show from simulation results that our neural-network-based method achieves better accuracy than the computer-vision method.
Lightweight Active Object Retrieval with Weak Classifiers.
Czúni, László; Rashad, Metwally
2018-03-07
In the last few years, there has been a steadily growing interest in autonomous vehicles and robotic systems. While many of these agents are expected to have limited resources, these systems should be able to dynamically interact with other objects in their environment. We present an approach where lightweight sensory and processing techniques, requiring very limited memory and processing power, can be successfully applied to the task of object retrieval using sensors of different modalities. We use the Hough framework to fuse optical and orientation information of the different views of the objects. In the presented spatio-temporal perception technique, we apply active vision, where, based on the analysis of initial measurements, the direction of the next view is determined to increase the hit-rate of retrieval. The performance of the proposed methods is shown on three datasets loaded with heavy noise.
Cobalt: Development and Maturation of GN&C Technologies for Precision Landing
NASA Technical Reports Server (NTRS)
Carson, John M.; Restrepo, Carolina; Seubert, Carl; Amzajerdian, Farzin
2016-01-01
The CoOperative Blending of Autonomous Landing Technologies (COBALT) instrument is a terrestrial test platform for development and maturation of guidance, navigation and control (GN&C) technologies for precision landing. The project is developing a third-generation Langley Research Center (LaRC) Navigation Doppler Lidar (NDL) for ultra-precise velocity and range measurements, which will be integrated and tested with the Jet Propulsion Laboratory (JPL) Lander Vision System (LVS) for terrain relative navigation (TRN) position estimates. These technologies together provide precise navigation knowledge that is critical for a controlled and precise touchdown. The COBALT hardware will be integrated in 2017 into the GN&C subsystem of the Xodiac rocket-propulsive vertical test bed (VTB) developed by Masten Space Systems, and two terrestrial flight campaigns will be conducted: one open-loop (i.e., passive) and one closed-loop (i.e., active).
Digital tripwire: a small automated human detection system
NASA Astrophysics Data System (ADS)
Fischer, Amber D.; Redd, Emmett; Younger, A. Steven
2009-05-01
A low-cost, lightweight, easily deployable imaging sensor that can dependably discriminate threats from other activities within its field of view and, only then, alert the distant duty officer by transmitting a visual confirmation of the threat would provide a valuable asset to modern defense. At present, current solutions suffer from a multitude of deficiencies: size, cost, power endurance, and, most notably, an inability to assess an image and conclude that it contains a threat. The human attention span cannot maintain critical surveillance over banks of displays constantly conveying such images from the field. DigitalTripwire is a small, self-contained, automated human-detection system capable of running for 1-5 days on two AA batteries. To achieve such long endurance, the DigitalTripwire system utilizes an FPGA designed with sleep functionality. The system uses robust vision algorithms, such as an innovative, partially unsupervised background-modeling algorithm, which employ several data-reduction strategies to operate in real time and achieve high detection rates. When it detects human activity, either mounted or dismounted, it sends an alert, including images, to notify the command center. In this paper, we describe the hardware and software design of the DigitalTripwire system and provide detection and false-alarm rates across several challenging data sets, demonstrating the performance of the vision algorithms in autonomously analyzing the video stream and classifying moving objects into four primary categories: dismounted human, vehicle, non-human, or unknown.
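As a rough illustration of the background-modeling building block described above (not the system's actual algorithm), a running-average model with a per-pixel change threshold looks like the following sketch; the learning rate and threshold are assumed values.

```python
# Minimal sketch of a running-average background model with thresholded
# change detection, a common low-power building block of the kind the
# DigitalTripwire description alludes to. Parameters are illustrative.
import numpy as np

class RunningBackground:
    def __init__(self, first_frame, alpha=0.02, thresh=25):
        self.bg = first_frame.astype(np.float32)
        self.alpha = alpha          # adaptation rate of the background
        self.thresh = thresh        # per-pixel change threshold

    def detect(self, frame):
        f = frame.astype(np.float32)
        mask = np.abs(f - self.bg) > self.thresh        # True = changed pixel
        # Update the model only where the scene looks static, so slowly
        # moving humans are not absorbed into the background.
        self.bg[~mask] += self.alpha * (f[~mask] - self.bg[~mask])
        return mask.astype(np.uint8)
```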
Anticipation as a Strategy: A Design Paradigm for Robotics
NASA Astrophysics Data System (ADS)
Williams, Mary-Anne; Gärdenfors, Peter; Johnston, Benjamin; Wightwick, Glenn
Anticipation plays a crucial role during any action, particularly for agents operating in open, complex and dynamic environments. In this paper we consider the role of anticipation as a strategy from a design perspective. Anticipation is a crucial skill in sporting games like soccer, tennis and cricket. We explore the role of anticipation in robot soccer matches in the context of the RoboCup vision of developing a robot soccer team capable of defeating the FIFA World Champions by 2050. Anticipation in soccer can be planned or emergent, but whether planned or emergent, anticipation can be designed. Two key obstacles stand in the way of developing more anticipatory robot systems: an impoverished understanding of the "anticipation" process/capability, and a lack of know-how in the design of anticipatory systems. Several teams at RoboCup have developed remarkable preemptive behaviors; the CMU Dive and UTS Dodge are two compelling examples. In this paper we take steps towards designing robots that can adopt anticipatory behaviors by proposing an innovative model of anticipation as a strategy, which specifies the key characteristics of the anticipation behaviors to be developed. The model can drive the design of autonomous systems by providing a means to explore and represent anticipation requirements. Our approach is to analyze anticipation as a strategy and then use the insights obtained to design a reference model that can be used to specify a set of anticipatory requirements for guiding an autonomous robot soccer system.
Concepts and Benefits of Lunar Core Drilling
NASA Technical Reports Server (NTRS)
McNamara, K. M.; Bogard, D. D.; Derkowski, B. J.; George, J. A.; Askew, R. S.; Lindsay, J. F.
2007-01-01
Understanding lunar material at depth is critical to nearly every aspect of NASA's Vision and Strategic Plan. As we consider sending humans back to the Moon for brief and extended periods, we will need to utilize lunar materials in construction, for resource extraction, and for radiation shielding and protection. In each case, we will be working with materials at some depth beneath the surface. Understanding the properties of that material is critical, hence the need for a lunar core drilling capability. Of course, the science benefit from returning core samples and operating down-hole autonomous experiments is a key element of lunar missions as defined by NASA's Exploration Systems Architecture Study. Lunar missions will be targeted to answer specific questions concerning lunar science and resources.
2011-03-01
…$w^b_a$ and $w^b_g$ are additive accelerometer and gyro noises, and $w^b_{a,\mathrm{bias}}$ and $w^b_{b,\mathrm{bias}}$ are the accelerometer bias and gyro bias noises. These will be described in further… …order accelerometer bias time constant, $w^b_{a,\mathrm{bias}}$ is the additive accelerometer bias noise, and
$$\dot{b}^b = -\frac{1}{\tau_b}\, b^b + w^b_{b,\mathrm{bias}} \qquad (2.43)$$
where $\tau_b$ is the first…
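To make the reconstructed bias model concrete, here is a hedged numeric sketch: the exact discrete-time propagation of a first-order Gauss-Markov bias like Eq. (2.43), with example time constant and noise values rather than the thesis's.

```python
# Discrete-time simulation of a first-order Gauss-Markov bias,
# b_dot = -(1/tau) b + w, as in Eq. (2.43) above. The time constant and
# noise standard deviation are example values only.
import numpy as np

rng = np.random.default_rng(0)

def propagate_bias(b, dt, tau=300.0, sigma_w=1e-4):
    """One discrete step of the first-order Gauss-Markov bias model."""
    phi = np.exp(-dt / tau)                       # state transition over dt
    q = 0.5 * sigma_w**2 * tau * (1.0 - phi**2)   # exact discrete noise variance
    return phi * b + rng.normal(0.0, np.sqrt(q))

b = 0.0
for _ in range(1000):                             # 10 s of bias drift at 100 Hz
    b = propagate_bias(b, dt=0.01)
```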
Vision-Based Autonomous Sensor-Tasking in Uncertain Adversarial Environments
2015-01-02
…motion segmentation and change detection in crowd behavior. In particular, we investigated finite-time Lyapunov exponents, the Perron-Frobenius operator, and… …deformation tensor [11]. On the other hand, eigenfunctions of the Perron-Frobenius operator can be used to detect almost-invariant sets (AIS), which are… …Perron-Frobenius operator. Finally, Figure 1.12d shows the ergodic partitions (EP) obtained from the eigenfunctions of the Koopman operator.
Autonomous Learning in Mobile Cognitive Machines
2017-11-25
[Fragmentary citations: Liu, Wei, et al., "SSD: Single Shot MultiBox Detector," European Conference on Computer Vision, Springer, Cham, 2016; Vinyals, Oriol, …] …the brain being evolved to support its mobility has been raised. In fact, as the project progressed, the researchers discovered that if one of the… The former is deductive, relies on rule-based programming, and can solve complex problems, but faces difficulties in learning and adaptability. The latter…
Automatic rule generation for high-level vision
NASA Technical Reports Server (NTRS)
Rhee, Frank Chung-Hoon; Krishnapuram, Raghu
1992-01-01
A new fuzzy-set-based technique developed for decision making is discussed: a method to generate fuzzy decision rules automatically for image analysis. The paper proposes generating rule-based approaches to problems such as autonomous navigation and image understanding automatically from training data. The proposed method is also capable of filtering out irrelevant features and criteria from the rules.
A development of intelligent entertainment robot for home life
NASA Astrophysics Data System (ADS)
Kim, Cheoltaek; Lee, Ju-Jang
2005-12-01
The purpose of this paper is to present the study and design of an entertainment robot with an educational purpose (IRFEE). The robot has been designed for home life with dependability and interaction in mind. The work has three objectives: 1) develop an autonomous robot; 2) design the robot for mobility and robustness; and 3) develop the robot interface and software for entertainment and education functionalities. Autonomous navigation was implemented by active-vision-based SLAM and a modified EPF algorithm. The two differential wheels and the pan-tilt unit were designed for mobility and robustness, and the exterior was designed with esthetic elements in mind while minimizing interference. The speech and tracking algorithms provide a good interface with humans. Image transfer and Internet site connection are provided for remote-connection service and educational purposes.
A method of real-time detection for distant moving obstacles by monocular vision
NASA Astrophysics Data System (ADS)
Jia, Bao-zhi; Zhu, Ming
2013-12-01
In this paper, we propose an approach for detecting distant moving obstacles, such as cars and bicycles, with a monocular camera that cooperates with ultrasonic sensors in a low-cost setup. We aim to detect distant obstacles that move toward our autonomous navigation car in order to raise an alarm and keep away from them. Frame differencing is applied to find obstacles after compensating for the camera's ego-motion. Meanwhile, each obstacle is separated from the others into an independent area and given a confidence level that indicates whether it is coming closer. Results on an open dataset and on our own autonomous navigation car show that the method is effective for real-time detection of distant moving obstacles.
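A minimal sketch of the two-step scheme described above, with assumed parameter values: estimate a global homography from tracked features as an approximation of the camera's ego-motion, warp the previous frame, and difference.

```python
# Hedged sketch of frame differencing after ego-motion compensation:
# track features between frames, fit a global homography with RANSAC
# (a reasonable stand-in for ego-motion on distant scenes), warp the
# previous frame, then threshold the residual. Parameters are illustrative.
import cv2
import numpy as np

def moving_obstacle_mask(prev_gray, curr_gray, thresh=25):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_prev = pts[status.ravel() == 1]
    good_next = nxt[status.ravel() == 1]
    H, _ = cv2.findHomography(good_prev, good_next, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    stabilized = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(curr_gray, stabilized)
    return (diff > thresh).astype(np.uint8)   # residual motion = candidates
```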
Autonomous Infrastructure for Observatory Operations
NASA Astrophysics Data System (ADS)
Seaman, R.
This is an era of rapid change from ancient human-mediated modes of astronomical practice to a vision of ever larger time domain surveys, ever bigger "big data", to increasing numbers of robotic telescopes and astronomical automation on every mountaintop. Over the past decades, facets of a new autonomous astronomical toolkit have been prototyped and deployed in support of numerous space missions. Remote and queue observing modes have gained significant market share on the ground. Archives and data-mining are becoming ubiquitous; astroinformatic techniques and virtual observatory standards and protocols are areas of active development. Astronomers and engineers, planetary and solar scientists, and researchers from communities as diverse as particle physics and exobiology are collaborating on a vast range of "multi-messenger" science. What then is missing?
Terrain discovery and navigation of a multi-articulated linear robot using map-seeking circuits
NASA Astrophysics Data System (ADS)
Snider, Ross K.; Arathorn, David W.
2006-05-01
A significant challenge in robotics is providing a robot with the ability to sense its environment and then autonomously move while accommodating obstacles. The DARPA Grand Challenge, one of the most visible examples, set the goal of driving a vehicle autonomously for over a hundred miles avoiding obstacles along a predetermined path. Map-Seeking Circuits have shown their biomimetic capability in both vision and inverse kinematics and here we demonstrate their potential usefulness for intelligent exploration of unknown terrain using a multi-articulated linear robot. A robot that could handle any degree of terrain complexity would be useful for exploring inaccessible crowded spaces such as rubble piles in emergency situations, patrolling/intelligence gathering in tough terrain, tunnel exploration, and possibly even planetary exploration. Here we simulate autonomous exploratory navigation by an interaction of terrain discovery using the multi-articulated linear robot to build a local terrain map and exploitation of that growing terrain map to solve the propulsion problem of the robot.
A Robust Compositional Architecture for Autonomous Systems
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Deney, Ewen; Farrell, Kimberley; Giannakopoulos, Dimitra; Jonsson, Ari; Frank, Jeremy; Bobby, Mark; Carpenter, Todd; Estlin, Tara
2006-01-01
Space exploration applications can benefit greatly from autonomous systems. Great distances, limited communications and high costs make direct operations impossible while mandating operations reliability and efficiency beyond what traditional commanding can provide. Autonomous systems can improve reliability and enhance spacecraft capability significantly. However, there is reluctance to utilize autonomous systems. In part this is due to general hesitation about new technologies, but a more tangible concern is the reliability and predictability of autonomous software. In this paper, we describe ongoing work aimed at increasing the robustness and predictability of autonomous software, with the ultimate goal of building trust in such systems. The work combines state-of-the-art technologies and capabilities in autonomous systems with advanced validation and synthesis techniques. The focus of this paper is on the autonomous system architecture that has been defined, and on how it enables the application of validation techniques to the resulting autonomous systems.
Use of Open Architecture Middleware for Autonomous Platforms
NASA Astrophysics Data System (ADS)
Naranjo, Hector; Diez, Sergio; Ferrero, Francisco
2011-08-01
Network Enabled Capabilities (NEC) is the vision for next-generation systems in the defence domain formulated by governments, the European Defence Agency (EDA) and the North Atlantic Treaty Organization (NATO). It involves the federation of military information systems, rather than just a simple interconnection, to provide each user with the "right information, right place, right time - and not too much". It defines openness, standardization and flexibility principles for military systems that are likewise applicable to civilian space applications. This paper provides the conclusions drawn from the "Architecture for Embarked Middleware" (EMWARE) study, funded by the European Defence Agency (EDA). The aim of the EMWARE project was to provide the information and understanding needed to facilitate informed decisions regarding the specification and implementation of Open Architecture Middleware in future distributed systems, linking it with the NEC goal. The EMWARE project included the definition of four business cases, each devoted to a different field of application (Unmanned Aerial Vehicles, Helicopters, Unmanned Ground Vehicles and the Satellite Ground Segment).
Optimizing a mobile robot control system using GPU acceleration
NASA Astrophysics Data System (ADS)
Tuck, Nat; McGuinness, Michael; Martin, Fred
2012-01-01
This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.
Biologically-Inspired Concepts for Autonomic Self-Protection in Multiagent Systems
NASA Technical Reports Server (NTRS)
Sterritt, Roy; Hinchey, Mike
2006-01-01
Biologically-inspired autonomous and autonomic systems (AAS) are essentially concerned with creating self-directed and self-managing systems based on metaphors from nature and the human body, such as the autonomic nervous system. Agent technologies have been identified as a key enabler for engineering autonomy and autonomicity in systems, both in terms of retrofitting into legacy systems and in designing new systems. Handing over responsibility to systems themselves raises concerns for humans with regard to safety and security. This paper reports on the continued investigation into a strand of research on how to engineer self-protection mechanisms into systems to assist in encouraging confidence regarding security when utilizing autonomy and autonomicity. This includes utilizing the apoptosis and quiescence metaphors to potentially provide a self-destruct or self-sleep signal between autonomic agents when needed, and an ALice signal to facilitate self-identification and self-certification between anonymous autonomous agents and systems.
Experiences with a Barista Robot, FusionBot
NASA Astrophysics Data System (ADS)
Limbu, Dilip Kumar; Tan, Yeow Kee; Wong, Chern Yuen; Jiang, Ridong; Wu, Hengxin; Li, Liyuan; Kah, Eng Hoe; Yu, Xinguo; Li, Dong; Li, Haizhou
In this paper, we describe the implemented service robot, called FusionBot. The goal of this research is to explore and demonstrate the utility of an interactive service robot in a smart home environment, thereby improving the quality of human life. The robot has four main features: 1) speech recognition, 2) object recognition, 3) object grabbing and fetching and 4) communication with a smart coffee machine. Its software architecture employs a multimodal dialogue system that integrates different components, including a spoken dialog system, vision understanding, navigation and a smart device gateway. In the experiments conducted during the TechFest 2008 event, the FusionBot successfully demonstrated that it could autonomously serve coffee to visitors on request. Preliminary survey results indicate that the robot has the potential not only to aid in general robotics but also to contribute towards the long-term goal of intelligent service robotics in smart home environments.
Object apprehension using vision and touch
NASA Technical Reports Server (NTRS)
Bajcsy, R.; Stansfield, S. A.
1987-01-01
Researchers define object apprehension as the determination of the properties of an object and the relationships among these properties. They contrast this with recognition, which goes a step further to attach a label to the object as a whole. Apprehension is fundamental to manipulation. This is true whether the manipulation is being carried out by an autonomous robot or is the result of teleoperation involving sensory feedback. Researchers present an apprehension paradigm using both vision and touch. In this model, they define a representation for object apprehension in terms of a set of primitives and features, along with their relationships. This representation is the mechanism by which the data from the two modalities are combined. It is also the mechanism which drives the apprehension process.
Autonomous Assembly of Modular Structures in Space and on Extraterrestrial Locations
NASA Technical Reports Server (NTRS)
Alhorn, Dean C.
2005-01-01
The fulfillment of the new U.S. National Vision for Space Exploration requires many new enabling technologies to accomplish the goal of utilizing space for commercial activities and of returning humans to the Moon and extraterrestrial environments. Traditionally, flight structures are manufactured as complete systems and require humans to complete the integration and assembly in orbit. These structures are bulky and require the use of heavy launch vehicles to send the units to the desired location, e.g., the International Space Station (ISS). This method requires a high degree of safety, numerous space walks, and significant cost for the humans to perform the assembly in orbit. For example, for assembly and maintenance of the ISS, 52 extravehicular activities (EVAs) have been performed so far, with a total EVA time of approximately 322 hours. Sixteen (16) shuttle flights have flown to the ISS to perform these activities, at an approximate cost of $450M per mission. For future space missions, costs have to be reduced to reasonably achieve the exploration goals. One concept that has been proposed is the autonomous assembly of space structures. This concept is an affordable, reliable solution to in-space and extraterrestrial assembly operations. Assembly is autonomously performed when two components containing onboard electronics join after recognizing that the joint is appropriate and in the precise position and orientation required for assembly. The mechanism only activates when the specifications are correct and in a nominal range. After assembly, local sensors and electronics monitor the integrity of the joint for feedback to a master controller. Achieving this concept will require a shift in the methods for designing space structures. In addition, innovative techniques will be required to perform the assembly autonomously. Monitoring of the assembled joint will be necessary for safety and structural integrity. If a very large structure is to be assembled in orbit, then the number of integrity sensors will be significant; thus simple, low-cost sensors are integral to the success of this concept. This paper will address these issues and propose a novel concept for assembling space structures autonomously. The paper will present several autonomous assembly methods. Core technologies required to achieve in-space assembly will be discussed, and novel techniques for communicating, sensing, docking, and assembly will be detailed. These core technologies are critical to the goal of utilizing space in a cost-efficient and safe manner. Finally, these technologies can also be applied to other systems, both on Earth and in extraterrestrial environments.
Apoptosis and Self-Destruct: A Contribution to Autonomic Agents?
NASA Technical Reports Server (NTRS)
Sterritt, Roy; Hinchey, Mike
2004-01-01
Autonomic Computing (AC), a self-managing systems initiative based on the biological metaphor of the autonomic nervous system, is increasingly gaining momentum as the way forward in designing reliable systems. Agent technologies have been identified as a key enabler for engineering autonomicity in systems, both in terms of retrofitting autonomicity into legacy systems and designing new systems. The AC initiative provides an opportunity to consider other biological systems and principles in seeking new design strategies. This paper reports on one such investigation; utilizing the apoptosis metaphor of biological systems to provide a dynamic health indicator signal between autonomic agents.
Gutiérrez, Marco A; Manso, Luis J; Pandya, Harit; Núñez, Pedro
2017-02-11
Object detection and classification have countless applications in human-robot interacting systems. It is a necessary skill for autonomous robots that perform tasks in household scenarios. Despite the great advances in deep learning and computer vision, social robots performing non-trivial tasks usually spend most of their time finding and modeling objects. Working in real scenarios means dealing with constant environment changes and relatively low-quality sensor data due to the distance at which objects are often found. Ambient intelligence systems equipped with different sensors can also benefit from the ability to find objects, enabling them to inform humans about their location. For these applications to succeed, systems need to detect the objects that may potentially contain other objects, working with relatively low-resolution sensor data. A passive learning architecture for sensors has been designed in order to take advantage of multimodal information, obtained using an RGB-D camera and trained semantic language models. The main contribution of the architecture lies in the improvement of the performance of the sensor under conditions of low resolution and high light variations using a combination of image labeling and word semantics. The tests performed on each of the stages of the architecture compare this solution with current research labeling techniques for the application of an autonomous social robot working in an apartment. The results obtained demonstrate that the proposed sensor architecture outperforms state-of-the-art approaches.
UAV Research at NASA Langley: Towards Safe, Reliable, and Autonomous Operations
NASA Technical Reports Server (NTRS)
Davila, Carlos G.
2016-01-01
Unmanned Aerial Vehicles (UAVs) are fundamental components in several aspects of research at NASA Langley, such as flight dynamics, mission-driven airframe design, airspace integration demonstrations, atmospheric science projects, and more. In particular, NASA Langley Research Center (Langley) is using UAVs to develop and demonstrate innovative capabilities that meet the autonomy and robotics challenges that are anticipated in science, space exploration, and aeronautics. These capabilities will enable new NASA missions such as asteroid rendezvous and retrieval (ARRM), Mars exploration, in-situ resource utilization (ISRU), pollution measurements in historically inaccessible areas, and the integration of UAVs into our everyday lives: all missions of increasing complexity, distance, pace, and/or accessibility. Building on decades of NASA experience and success in the design, fabrication, and integration of robust and reliable automated systems for space and aeronautics, the Langley Autonomy Incubator seeks to bridge the gap between automation and autonomy by enabling safe autonomous operations via onboard sensing and perception systems in both data-rich and data-deprived environments. The Autonomy Incubator is focused on the challenge of mobility and manipulation in dynamic and unstructured environments and integrates technologies such as computer vision, visual odometry, real-time mapping, path planning, object detection and avoidance, object classification, adaptive control, sensor fusion, machine learning, and natural human-machine teaming. These technologies are implemented in an architectural framework developed in-house for easy integration and interoperability of cutting-edge hardware and software.
Robust perception algorithms for road and track autonomous following
NASA Astrophysics Data System (ADS)
Marion, Vincent; Lecointe, Olivier; Lewandowski, Cecile; Morillon, Joel G.; Aufrere, Romuald; Marcotegui, Beatrix; Chapuis, Roland; Beucher, Serge
2004-09-01
The French Military Robotic Study Program (introduced at Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales Airborne Systems as the prime contractor, focuses on about 15 robotic themes that can provide an immediate "operational add-on value." The paper details the "road and track following" theme (named AUT2), whose main purpose was to develop a vision-based subsystem to automatically detect the roadsides of an extended range of roads and tracks suitable for military missions. To achieve this goal, efforts focused on three main areas: (1) improvement of image quality at the algorithms' inputs, thanks to the selection of adapted video cameras and the development of a THALES-patented algorithm that removes in real time most of the disturbing shadows in images taken in natural environments, enhances contrast, and lowers reflection effects due to films of water; (2) selection and improvement of two complementary algorithms (one segment-oriented, the other region-based); (3) development of a fusion process between the two algorithms, which feeds a road model in real time with the best available data. Each step was developed so that the global perception process is reliable and safe: for example, the process continuously evaluates itself and outputs confidence criteria qualifying the roadside detection. The paper presents the processes in detail and the results obtained from the military acceptance tests that were passed, which triggered the next step: autonomous track following (named AUT3).
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-22
Notice of a meeting of Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). Agency: Federal Aviation Administration (FAA). Dates: the meeting will be held April 17-19.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-25
Notice of a meeting of Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). Agency: Federal Aviation Administration (FAA). Dates: the meeting will be held […].
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-12
Notice of a meeting of Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). Agency: Federal Aviation Administration (FAA). Dates: the meeting will be held October 2-4.
Stabilization and control of quad-rotor helicopter using a smartphone device
NASA Astrophysics Data System (ADS)
Desai, Alok; Lee, Dah-Jye; Moore, Jason; Chang, Yung-Ping
2013-01-01
In recent years, autonomous micro-unmanned aerial vehicles (micro-UAVs), or more specifically hovering micro-UAVs, have proven suitable for many promising applications such as unknown environment exploration and search and rescue operations. The early versions of UAVs had no on-board control capabilities and were difficult to control manually from a ground station. Many UAVs are now equipped with on-board control systems that reduce the amount of control required from the ground-station operator. However, the limitations on payload, power consumption and control without human interference remain the biggest challenges. This paper proposes to use a smartphone as the sole computational device to stabilize and control a quad-rotor. The goal is to use the readily available sensors in a smartphone such as the GPS, the accelerometer, the rate-gyros, and the camera to support vision-related tasks such as flight stabilization, estimation of the height above ground, target tracking, obstacle detection, and surveillance. We use a quad-rotor platform that has been built in the Robotic Vision Lab at Brigham Young University for our development and experiments. An Android smartphone is connected through the USB port to external hardware that has a microprocessor and circuitry to generate pulse-width modulation signals to control the brushless servomotors on the quad-rotor. The high-resolution camera on the smartphone is used to detect and track features to maintain a desired altitude level. The vision algorithms implemented include template matching, Harris feature detection, RANSAC similarity-constrained homography, and color segmentation. Other sensors are used to control the yaw, pitch, and roll of the quad-rotor. This smartphone-based system is able to stabilize and control micro-UAVs and is ideal for micro-UAVs that have size, weight, and power limitations.
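Two of the named components, template matching and color segmentation, can be sketched with standard OpenCV calls; the thresholds and color bounds below are example values, not those used on the quad-rotor.

```python
# Illustrative sketch of two vision components named above: template matching
# for target tracking and HSV color segmentation. Values are examples only.
import cv2

def track_template(frame_gray, template_gray):
    res = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)   # best match and its confidence
    return top_left, score

def segment_color(frame_bgr, lo=(35, 80, 80), hi=(85, 255, 255)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, lo, hi)              # binary mask of target color
```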
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1990-01-01
All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
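As a loose illustration of this style of pyramid coding, the sketch below projects non-overlapping neighborhoods onto a small orthonormal kernel set and recurses on the low-pass channel. It deliberately substitutes a 2x2 Haar basis on a square lattice for the seven hexagonal HOP kernels, so it is a stand-in for the idea, not the HOP transform itself (image dimensions are assumed divisible by 2^levels).

```python
# Toy stand-in for oriented pyramid image coding: project non-overlapping
# neighborhoods onto orthonormal kernels, keep the oriented channels at each
# layer, and recurse on the low-pass channel. Haar on a square lattice is used
# purely for illustration; HOP uses seven kernels on a hexagonal lattice.
import numpy as np

def haar_pyramid(img, levels=3):
    layers = []
    x = img.astype(np.float64)
    for _ in range(levels):
        a = x[0::2, 0::2]; b = x[0::2, 1::2]
        c = x[1::2, 0::2]; d = x[1::2, 1::2]
        layers.append({"horiz": (a + b - c - d) / 2,   # oriented channels
                       "vert":  (a - b + c - d) / 2,
                       "diag":  (a - b - c + d) / 2})
        x = (a + b + c + d) / 2                        # low-pass, recurse
    layers.append({"lowpass": x})
    return layers
```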
Viagra, surgery and anesthesia: a dangerous cocktail with a risk of blindness.
Fodale, V; Di Pietro, R; Santamaria, S
2007-01-01
Since the launch in 1998 of the anti-impotence drug sildenafil (Viagra), the American Food and Drug Administration has identified 50 cases of drug-related blindness, the so-called nonarteritic anterior ischemic optic neuropathy. This very serious side effect frequently leads to sudden, mostly irreversible, loss of vision, and there is no proven effective treatment to cure patients or to prevent recurrence. The mechanism of ischemic optic neuropathy is not clear, but it could be related to the fact that the ophthalmic and central retinal arteries have an autoregulation of their own blood flow without any autonomic nerve supply; vasoreactivity could be lower albeit efficient, and therefore more vulnerable to systemic modifications of the circulation. Decreased visual acuity and loss of visual ability are also, although uncommon, anesthesiological and surgical complications. These data are consistent with the hypothesis that sildenafil, surgery and anesthesia, taken together, could be a potentially dangerous cocktail of risk factors for sudden irreversible loss of vision. To reduce the risk, sildenafil use should be avoided for at least one week before surgical operations, since the reported cases of blindness developed 36 h after drug ingestion.
INL Autonomous Navigation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The INL Autonomous Navigation System provides instructions for autonomously navigating a robot. The system permits high-speed autonomous navigation, including obstacle avoidance, waypoint navigation and path planning, in both indoor and outdoor environments.
NASA Astrophysics Data System (ADS)
Fuchs, Thomas J.; Thompson, David R.; Bue, Brian D.; Castillo-Rogez, Julie; Chien, Steve A.; Gharibian, Dero; Wagstaff, Kiri L.
2015-10-01
Spacecraft autonomy is crucial to increase the science return of optical remote sensing observations at distant primitive bodies. To date, most small bodies exploration has involved short timescale flybys that execute prescripted data collection sequences. Light time delay means that the spacecraft must operate completely autonomously without direct control from the ground, but in most cases the physical properties and morphologies of prospective targets are unknown before the flyby. Surface features of interest are highly localized, and successful observations must account for geometry and illumination constraints. Under these circumstances onboard computer vision can improve science yield by responding immediately to collected imagery. It can reacquire bad data or identify features of opportunity for additional targeted measurements. We present a comprehensive framework for onboard computer vision for flyby missions at small bodies. We introduce novel algorithms for target tracking, target segmentation, surface feature detection, and anomaly detection. The performance and generalization power are evaluated in detail using expert annotations on data sets from previous encounters with primitive bodies.
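As a toy illustration of the kind of onboard step such a framework requires, the sketch below segments a bright body against dark sky and returns a centroid for pointing updates; the threshold and the simple reacquire signal are assumptions, not the paper's algorithms.

```python
# Minimal sketch of one onboard computer-vision step for a flyby: segment a
# small body against dark space by thresholding and track its centroid so
# follow-up observations can be pointed. The threshold is an assumed value.
import numpy as np

def segment_and_center(image, thresh=40):
    mask = image > thresh                      # body pixels vs. dark space
    if not mask.any():
        return None, mask                      # target lost: trigger reacquire
    rows, cols = np.nonzero(mask)
    return (rows.mean(), cols.mean()), mask    # centroid for pointing updates
```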
InPRO: Automated Indoor Construction Progress Monitoring Using Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Hamledari, Hesam
In this research, an envisioned intelligent robotic solution for automated indoor data collection and inspection that employs a series of unmanned aerial vehicles (UAVs), entitled "InPRO", is presented. InPRO consists of four stages, namely: 1) automated path planning; 2) autonomous UAV-based indoor inspection; 3) automated computer-vision-based assessment of progress; and 4) automated updating of 4D building information models (BIM). The work presented in this thesis addresses the third stage of InPRO. A series of computer-vision-based methods that automate the assessment of construction progress using images captured at indoor sites are introduced. The proposed methods employ computer vision and machine learning techniques to detect the components of under-construction indoor partitions. In particular, framing (studs), insulation, electrical outlets, and different states of drywall sheets (installing, plastering, and painting) are automatically detected in digital images. High accuracy rates, real-time performance, and operation without a priori information are indicators of the methods' promising performance.
Social Robotics in Therapy of Apraxia of Speech
Alonso-Martín, Fernando
2018-01-01
Apraxia of speech is a motor speech disorder in which messages from the brain to the mouth are disrupted, resulting in an inability to move the lips or tongue to the right place to pronounce sounds correctly. Current therapies for this condition involve a therapist who conducts the exercises in one-on-one sessions. Our aim is to work in the line of robotic therapies, in which a robot is able to perform a therapy session partially or fully autonomously, endowing a social robot with the ability to assist therapists in apraxia-of-speech rehabilitation exercises. Therefore, we integrate computer vision and machine learning techniques to detect the mouth pose of the user and, on top of that, our social robot performs the different steps of the therapy autonomously using multimodal interaction. PMID:29713440
Autonomous Mobile Platform for Research in Cooperative Robotics
NASA Technical Reports Server (NTRS)
Daemi, Ali; Pena, Edward; Ferguson, Paul
1998-01-01
This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.
Knowledge-Based Motion Control of AN Intelligent Mobile Autonomous System
NASA Astrophysics Data System (ADS)
Isik, Can
An Intelligent Mobile Autonomous System (IMAS), which is equipped with vision and low level sensors to cope with unknown obstacles, is modeled as a hierarchy of path planning and motion control. This dissertation concentrates on the lower level of this hierarchy (Pilot) with a knowledge-based controller. The basis of a theory of knowledge-based controllers is established, using the example of the Pilot level motion control of IMAS. In this context, the knowledge-based controller with a linguistic world concept is shown to be adequate for the minimum time control of an autonomous mobile robot motion. The Pilot level motion control of IMAS is approached in the framework of production systems. The three major components of the knowledge-based control that are included here are the hierarchies of the database, the rule base and the rule evaluator. The database, which is the representation of the state of the world, is organized as a semantic network, using a concept of minimal admissible vocabulary. The hierarchy of rule base is derived from the analytical formulation of minimum-time control of IMAS motion. The procedure introduced for rule derivation, which is called analytical model verbalization, utilizes the concept of causalities to describe the system behavior. A realistic analytical system model is developed and the minimum-time motion control in an obstacle strewn environment is decomposed to a hierarchy of motion planning and control. The conditions for the validity of the hierarchical problem decomposition are established, and the consistency of operation is maintained by detecting the long term conflicting decisions of the levels of the hierarchy. The imprecision in the world description is modeled using the theory of fuzzy sets. The method developed for the choice of the rule that prescribes the minimum-time motion control among the redundant set of applicable rules is explained and the usage of fuzzy set operators is justified. Also included in the dissertation are the description of the computer simulation of Pilot within the hierarchy of IMAS control and the simulated experiments that demonstrate the theoretical work.
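To make the rule-evaluation idea concrete, here is a hedged toy sketch of min-max fuzzy rule evaluation of the sort such a Pilot-level controller could use; the membership functions and rules are invented placeholders, not those of IMAS.

```python
# Toy min-max fuzzy rule evaluation: fuzzify the inputs, compute each rule's
# firing strength with min (AND), and select the strongest rule's action.
# All memberships and rules below are invented placeholders.
def trap(x, a, b, c, d):
    """Trapezoidal membership function on [a, d] with plateau [b, c]."""
    if b <= x <= c:
        return 1.0
    if x <= a or x >= d:
        return 0.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def pilot_speed_rule(dist_to_obstacle, heading_err):
    near = trap(dist_to_obstacle, 0.0, 0.0, 1.0, 3.0)     # meters
    aligned = trap(abs(heading_err), 0.0, 0.0, 0.1, 0.4)  # radians
    rules = {
        "brake":      near,                               # IF near THEN brake
        "full_speed": min(1.0 - near, aligned),           # far AND aligned
        "slow_turn":  min(1.0 - near, 1.0 - aligned),     # far AND misaligned
    }
    return max(rules, key=rules.get)                      # strongest rule wins

print(pilot_speed_rule(2.5, 0.05))   # example query
```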
Mocanu, Bogdan; Tapu, Ruxandra; Zaharia, Titus
2016-01-01
In the most recent report published by the World Health Organization concerning people with visual disabilities it is highlighted that by the year 2020, worldwide, the number of completely blind people will reach 75 million, while the number of visually impaired (VI) people will rise to 250 million. Within this context, the development of dedicated electronic travel aid (ETA) systems, able to increase the safe displacement of VI people in indoor/outdoor spaces while providing additional cognition of the environment, becomes of utmost importance. This paper introduces a novel wearable assistive device designed to facilitate the autonomous navigation of blind and VI people in highly dynamic urban scenes. The system exploits two independent sources of information: ultrasonic sensors and the video camera embedded in a regular smartphone. The underlying methodology exploits computer vision and machine learning techniques and makes it possible to accurately identify both static and highly dynamic objects present in a scene, regardless of their location, size or shape. In addition, the proposed system is able to acquire information about the environment, semantically interpret it and alert users about possible dangerous situations through acoustic feedback. To determine the performance of the proposed methodology we have performed an extensive objective and subjective experimental evaluation with the help of 21 VI subjects from two blind associations. The users pointed out that our prototype is highly helpful in increasing their mobility, while being friendly and easy to learn. PMID:27801834
Verification Test of Automated Robotic Assembly of Space Truss Structures
NASA Technical Reports Server (NTRS)
Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.
1995-01-01
A multidisciplinary program has been conducted at the Langley Research Center to develop operational procedures for supervised autonomous assembly of truss structures suitable for large-aperture antennas. The hardware and operations required to assemble a 102-member tetrahedral truss and attach 12 hexagonal panels were developed and evaluated. A brute-force automation approach was used to develop baseline assembly hardware and software techniques. However, as the system matured and operations were proven, upgrades were incorporated and assessed against the baseline test results. These upgrades included the use of distributed microprocessors to control dedicated end-effector operations, machine vision guidance for strut installation, and the use of an expert-system-based executive control program. This paper summarizes the developmental phases of the program, the results of several assembly tests, and a series of proposed enhancements. No problems that would preclude the automated in-space assembly of truss structures have been encountered. The test system was developed at a breadboard level, and continued development at an enhanced level is warranted.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-18
Notice of a meeting of Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). Agency: Federal Aviation Administration (FAA). Dates: the meeting will be held April […].
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-11
Notice of a meeting of Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). Agency: Federal Aviation Administration (FAA). Dates: the meeting will be held October […].
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-05
Notice of a meeting of Special Committee 213 / EUROCAE WG-79, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). Agency: Federal Aviation Administration (FAA). Dates: the meeting will be held April […].
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-28
Notice of a meeting of Special Committee 213 / EUROCAE WG-79, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). Agency: Federal Aviation Administration (FAA). Dates: […].
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-22
Notice of a meeting of Special Committee 213 / EUROCAE WG-79, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). Agency: Federal Aviation Administration (FAA). Dates: the meeting will […].
Asteroid Exploration with Autonomic Systems
NASA Technical Reports Server (NTRS)
Truszkowski, Walt; Rash, James; Rouff, Christopher; Hinchey, Mike
2004-01-01
NASA is studying advanced technologies for a future robotic exploration mission to the asteroid belt. The prospective ANTS (Autonomous Nano Technology Swarm) mission comprises autonomous agents, including worker agents (small spacecraft) designed to cooperate in asteroid exploration under the overall authority of at least one ruler agent (a larger spacecraft) whose goal is to cause science data to be returned to Earth. The ANTS team (ruler plus workers and messenger agents), but not necessarily any individual on the team, will exhibit behaviors that qualify it as an autonomic system, where an autonomic system is defined as a system that self-reconfigures, self-optimizes, self-heals, and self-protects. Autonomic system concepts lead naturally to realistic, scalable architectures rich in capabilities and behaviors. In-depth consideration of a major mission like ANTS in terms of autonomic systems brings new insights into alternative definitions of autonomic behavior. This paper gives an overview of the ANTS mission and discusses the autonomic properties of the mission.
Deniz, Oscar; Vallez, Noelia; Espinosa-Aranda, Jose L; Rico-Saavedra, Jose M; Parra-Patino, Javier; Bueno, Gloria; Moloney, David; Dehghani, Alireza; Dunne, Aubrey; Pagani, Alain; Krauss, Stephan; Reiser, Ruben; Waeny, Martin; Sorci, Matteo; Llewellynn, Tim; Fedorczak, Christian; Larmoire, Thierry; Herbst, Marco; Seirafi, Andre; Seirafi, Kasra
2017-05-21
Embedded systems control and monitor a great deal of our reality. While some "classic" features are intrinsically necessary, such as low power consumption, rugged operating ranges, fast response and low cost, these systems have evolved in the last few years to emphasize connectivity functions, thus contributing to the Internet of Things paradigm. A myriad of sensing/computing devices are being attached to everyday objects, each able to send and receive data and to act as a unique node in the Internet. Apart from the obvious necessity to process at least some data at the edge (to increase security and reduce power consumption and latency), a major breakthrough will arguably come when such devices are endowed with some level of autonomous "intelligence". Intelligent computing aims to solve problems for which no efficient exact algorithm can exist or for which we cannot conceive an exact algorithm. Central to such intelligence is Computer Vision (CV), i.e., extracting meaning from images and video. While not everything needs CV, visual information is the richest source of information about the real world: people, places and things. The possibilities of embedded CV are endless if we consider new applications and technologies, such as deep learning, drones, home robotics, intelligent surveillance, intelligent toys, wearable cameras, etc. This paper describes the Eyes of Things (EoT) platform, a versatile computer vision platform tackling those challenges and opportunities.
NASA Technical Reports Server (NTRS)
Parish, David W.; Grabbe, Robert D.; Marzwell, Neville I.
1994-01-01
A Modular Autonomous Robotic System (MARS), consisting of a modular autonomous vehicle control system that can be retrofitted onto any vehicle to convert it to autonomous control and that supports a modular payload for multiple applications, is being developed. The MARS design is scalable, reconfigurable, and cost-effective due to the use of modern open-system architecture design methodologies, including serial control bus technology to simplify system wiring and enhance scalability. The design is augmented with modular, object-oriented (C++) software implementing a hierarchy of five levels of control: teleoperated, continuous guidepath following, periodic guidepath following, absolute-position autonomous navigation, and relative-position autonomous navigation. The present effort is focused on producing a system that is commercially viable for routine autonomous patrolling of known, semistructured environments, such as environmental monitoring of chemical and petroleum refineries, exterior physical security and surveillance, perimeter patrolling, and intrafacility transport applications.
A Vision-Based Relative Navigation Approach for Autonomous Multirotor Aircraft
NASA Astrophysics Data System (ADS)
Leishman, Robert C.
Autonomous flight in unstructured, confined, and unknown GPS-denied environments is a challenging problem. Solutions could be tremendously beneficial for scenarios that require information about areas that are difficult to access and that present a great amount of risk. The goal of this research is to develop a new framework that enables improved solutions to this problem and to validate the approach with experiments using a hardware prototype. In Chapter 2 we examine the consequences and practical aspects of using an improved dynamic model for multirotor state estimation, using only IMU measurements. The improved model correctly explains the measurements available from the accelerometers on a multirotor. We provide hardware results demonstrating the improved attitude, velocity and even position estimates that can be achieved through the use of this model. We propose a new architecture to simplify some of the challenges that constrain GPS-denied aerial flight in Chapter 3. At the core, the approach combines visual graph-SLAM with a multiplicative extended Kalman filter (MEKF). More importantly, we depart from the common practice of estimating global states and instead keep the position and yaw states of the MEKF relative to the current node in the map. This relative navigation approach provides a tremendous benefit compared to maintaining estimates with respect to a single global coordinate frame. We discuss the architecture of this new system and provide important details for each component. We verify the approach with goal-directed autonomous flight-test results. The MEKF is the basis of the new relative navigation approach and is detailed in Chapter 4. We derive the relative filter and show how the states must be augmented and marginalized each time a new node is declared. The relative estimation approach is verified using hardware flight test results accompanied by comparisons to motion capture truth. Additionally, flight results with estimates in the control loop are provided. We believe that the relative, vision-based framework described in this work is an important step in furthering the capabilities of indoor aerial navigation in confined, unknown environments. Current approaches incur challenging problems by requiring globally referenced states. Utilizing a relative approach allows more flexibility as the critical, real-time processes of localization and control do not depend on computationally-demanding optimization and loop-closure processes.
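The node-switch step that keeps states relative can be sketched abstractly. The snippet below is a conceptual illustration under an assumed state layout: the current relative transform and its covariance are handed to the map as an edge, and the filter's relative states restart at zero. It is not the MEKF augmentation/marginalization derived in the thesis.

```python
# Conceptual sketch of declaring a new map node in a relative-navigation
# filter: export the node-relative states as a graph edge, then reset those
# states and their covariance so estimation continues relative to the new
# node. State indexing and covariance handling are illustrative assumptions.
import numpy as np

def declare_new_node(x, P, rel_idx, eps=1e-9):
    """x: state vector; P: covariance; rel_idx: indices of relative pos/yaw."""
    edge = x[rel_idx].copy()                       # relative transform -> edge
    edge_cov = P[np.ix_(rel_idx, rel_idx)].copy()  # its uncertainty
    x[rel_idx] = 0.0                               # restart at the new node
    P[rel_idx, :] = 0.0                            # drop old-node correlations
    P[:, rel_idx] = 0.0
    P[rel_idx, rel_idx] = eps                      # tiny prior on reset states
    return edge, edge_cov

# Example with a 6-state vector where indices 0, 1, 2 are relative x, y, yaw:
x = np.random.randn(6); P = np.eye(6)
edge, edge_cov = declare_new_node(x, P, rel_idx=np.array([0, 1, 2]))
```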
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-03
Notice of a meeting of joint RTCA Special Committee 213 / EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS).
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-12
Notice of a meeting of joint RTCA Special Committee 213 / EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS).
Why Computer-Based Systems Should be Autonomic
NASA Technical Reports Server (NTRS)
Sterritt, Roy; Hinchey, Mike
2005-01-01
The objective of this paper is to discuss why computer-based systems should be autonomic, where autonomicity implies self-managing, often conceptualized in terms of being self-configuring, self-healing, self-optimizing, self-protecting and self-aware. We look at motivations for autonomicity, examine how more and more systems are exhibiting autonomic behavior, and finally look at future directions.
Military Androids: A Vision for Human Replacement in 2035
2010-01-01
but has the potential to accelerate exponentially over the next twenty-five years due to reasons such as advancements in technology and aversion to...because of the enabling technologies associated with it. Asimo is capable of semi-autonomous operations and continued development will expand those...such as Moore’s Law or the Law of Accelerating Returns that attempt to account for the pace of technological improvements. Moore’s Law, developed by
2007-01-22
requirements for the degree of Master of Science, Plan II. Approval for the Report and Comprehensive Examination: Committee: Professor S. Shankar Sastry... Plans for the high-level planner... Idealized flight for purposes of analyzing... Motion Stamping: In order to use the RMFPP algorithm, we must first motion stamp each image, i.e. determine the orientation and position of the camera when
Range Image Processing for Local Navigation of an Autonomous Land Vehicle.
1986-09-01
such as doing long-term exploration missions on the surface of the planets which mankind may wish to investigate. Certainly, mankind will soon return... intelligence programming, walking technology, and vision sensors, to name but a few. The purpose of this thesis will be to investigate, by simulation... bitmap graphics, both of which are important to this simulation. Finally, the methodology for displaying the symbolic information generated by the
Challenges in verification and validation of autonomous systems for space exploration
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Jonsson, Ari
2005-01-01
Space exploration applications offer a unique opportunity for the development and deployment of autonomous systems, due to limited communications, large distances, and great expense of direct operation. At the same time, the risk and cost of space missions leads to reluctance to taking on new, complex and difficult-to-understand technology. A key issue in addressing these concerns is the validation of autonomous systems. In recent years, higher-level autonomous systems have been applied in space applications. In this presentation, we will highlight those autonomous systems, and discuss issues in validating these systems. We will then look to future demands on validating autonomous systems for space, identify promising technologies and open issues.
Takotsubo cardiomyopathy associated with Miller-Fisher syndrome.
Gill, Dalvir; Liu, Kan
2017-07-01
A 51-year-old female presented with progressive paresthesia, numbness of the lower extremities, double vision, and trouble walking. Physical exam was remarkable for areflexia and ptosis. Her initial EKG showed nonspecific ST-segment changes, and her Troponin T was elevated to 0.41 ng/mL, peaking at 0.66 ng/mL. Echocardiogram showed a left ventricular ejection fraction depressed to 35%, with severe hypokinesis of the anterior wall and left ventricular apex. An EMG nerve conduction study showed severely decreased conduction velocity and prolonged distal latency in all nerves, consistent with demyelinating disease. She was treated with 5 days of intravenous immunoglobulin therapy, after which she showed significant improvement in lower extremity strength. Echocardiogram repeated 4 days later showed an improved left ventricular ejection fraction of 55% and no left ventricular wall motion abnormalities. Takotsubo cardiomyopathy is a rare complication of Miller-Fisher syndrome, and literature review did not reveal any previously reported cases. Miller-Fisher syndrome is an autoimmune process that affects the peripheral nervous system, causing autonomic dysfunction that may involve the heart. Because the pronounced autonomic dysfunction of Miller-Fisher syndrome can itself lead to arrhythmias, blood pressure changes, acute coronary syndrome, and myocarditis, Takotsubo cardiomyopathy can be difficult to distinguish. Treatment of Takotsubo cardiomyopathy is supportive; beta-blockers and angiotensin-converting enzyme inhibitors are recommended until the left ventricular ejection fraction improves. Takotsubo cardiomyopathy is a rare complication of the acute phase of Miller-Fisher syndrome and must be distinguished from autonomic dysfunction, as the two diagnoses call for different treatment approaches. Published by Elsevier Inc.
Isono, O; Kituda, A; Fujii, M; Yoshinaka, T; Nakagawa, G; Suzuki, Y
2018-09-01
In August 2003, 44 victims were poisoned by chemical warfare agents (CWAs) that leaked from five drums excavated at a construction site in Qiqihar, Northeast China. The drums had been abandoned by the former Japanese imperial army during World War II and contained a mixture of sulfur mustard (SM) and Lewisite. We carried out a total of six regular check-ups between 2006 and 2014, and from 2008 we added neurological evaluations, including neuropsychological and autonomic nervous function tests, in parallel with medical follow-up as far as possible. Severe autonomic failure, such as hyperhidrosis, pollakiuria, diarrhoea, diminished libido, and asthenia, appeared in almost all victims. Polyneuropathy occurred in 35% of the victims and constricted vision in 20%. The rates of abnormal response on the cold pressor test (CPT), active standing test (AST), and heart rate variability (CV R-R), performed in 2014, were 63.1%, 31.6%, and 15.9%, respectively. On neuropsychological testing evaluated in 2010, a generalized cognitive decline was observed in 42% of the victims; memory and visuospatial abilities were affected in the remaining victims. Finally, a 17-item PTSD questionnaire and the Beck Depression Inventory administered in 2014 revealed long-lasting severe PTSD symptoms and depression among the victims. Our findings suggest that an SM/Lewisite compound has significant adverse consequences for cognitive and emotional networks and the autonomic nervous system. Copyright © 2018 Elsevier B.V. All rights reserved.
Bio-inspired Autonomic Structures: a middleware for Telecommunications Ecosystems
NASA Astrophysics Data System (ADS)
Manzalini, Antonio; Minerva, Roberto; Moiso, Corrado
Today, people use many devices for communication, for accessing multimedia content services, for data and information retrieval, for processing, computing, and so on: examples are laptops, PDAs, mobile phones, digital cameras, mp3 players, smart cards, and smart appliances. One of the most attractive service scenarios for the future of telecommunications and the Internet is one where people will be able to browse any object in the environment they live in: communication, sensing, and the processing of data and services will be highly pervasive. In this vision, people, machines, artifacts, and the surrounding space will create a kind of computational environment and, at the same time, the interfaces to the network resources. A challenging technological issue will be the interconnection and management of heterogeneous systems and a huge number of small devices tied together in networks of networks. Moreover, future network and service infrastructures should be able to provide users and application developers (at different levels, e.g., residential users but also SMEs, LEs, ASPs/Web2.0 Service Providers, ISPs, Content Providers, etc.) with the most appropriate "environment" according to their context and specific needs. Operators must be ready to manage this level of complexity, equipping their platforms with technological advances that enable self-supervision and self-adaptation of networks and services. Autonomic software solutions, enhanced with innovative bio-inspired mechanisms and algorithms, are promising areas of long-term research to face such challenges. This chapter proposes a bio-inspired autonomic middleware capable of leveraging the assets of the underlying network infrastructure while, at the same time, supporting the development of future telecommunications and Internet ecosystems.
Localization and recognition of traffic signs for automated vehicle control systems
NASA Astrophysics Data System (ADS)
Zadeh, Mahmoud M.; Kasvand, T.; Suen, Ching Y.
1998-01-01
We present a computer vision system for the detection and recognition of traffic signs. Such systems are required to assist drivers and to guide and control autonomous vehicles on roads and city streets. For experiments we use sequences of digitized photographs and off-line analysis. The system contains four stages. First, region segmentation based on color pixel classification, called SRSM, limits the search to regions of interest in the scene. Second, edge tracing finds the parts of the outer edges of signs that are circular or straight, corresponding to the geometrical shapes of traffic signs. The third step is geometrical analysis of the outer edge and preliminary recognition of each candidate region that may be a potential traffic sign. The final recognition step uses color combinations within each region and model matching. This system may be used to recognize other types of objects, provided that the geometrical shape and color content remain reasonably constant. The method is reliable, easy to implement, and fast, and it differs from the road-sign recognition method in the PROMETHEUS project. The overall structure of the approach is sketched.
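The four-stage structure lends itself to a compact sketch. The snippet below is a loose stand-in, since SRSM itself is not reproduced here: plain HSV thresholding plays the role of the color-pixel classification, followed by an outer-edge circularity test for the geometrical analysis; model matching (the fourth stage) is left as a hook. OpenCV 4.x is assumed, and all thresholds are invented.

```python
# Rough stand-in for the paper's pipeline: (1) color-based region of
# interest, (2)-(3) outer-edge extraction and a circularity test,
# (4) model matching omitted. Assumes OpenCV 4.x.
import cv2
import numpy as np

def find_circular_red_signs(bgr, min_area=200):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # red wraps around hue 0, so combine two hue ranges
    mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        (x, y), r = cv2.minEnclosingCircle(c)
        circularity = area / (np.pi * r * r + 1e-9)
        if circularity > 0.7:           # roughly circular outer edge
            candidates.append((int(x), int(y), int(r)))
    return candidates                   # stage 4 (model matching) not shown
```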
Intelligent excavator control system for lunar mining system
NASA Astrophysics Data System (ADS)
Lever, Paul J. A.; Wang, Fei-Yue
1995-01-01
A major benefit of utilizing local planetary resources is that it reduces the need and cost of lifting materials from the Earth's surface into Earth orbit. The location of the moon makes it an ideal site for harvesting the materials needed to assist space activities. Here, lunar excavation will take place in the dynamic unstructured lunar environment, in which conditions are highly variable and unpredictable. Autonomous mining (excavation) machines are necessary to remove human operators from this hazardous environment. This machine must use a control system structure that can identify, plan, sense, and control real-time dynamic machine movements in the lunar environment. The solution is a vision-based hierarchical control structure. However, excavation tasks require force/torque sensor feedback to control the excavation tool after it has penetrated the surface. A fuzzy logic controller (FLC) is used to interpret the forces and torques gathered from a bucket mounted force/torque sensor during excavation. Experimental results from several excavation tests using the FLC are presented here. These results represent the first step toward an integrated sensing and control system for a lunar mining system.
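As a rough illustration of how such a fuzzy logic controller can map sensed forces to digging commands, the sketch below fuzzifies a normalized bucket-force error into three sets and defuzzifies a speed adjustment. The paper's actual rule base and membership functions are not given here, so these are invented.

```python
# Generic FLC sketch: triangular membership functions, a three-rule
# base, and centroid-of-singletons defuzzification. Illustrative only.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def flc_dig_speed_adjust(force_error):
    """force_error: normalized bucket-force error in roughly [-1, 1]."""
    low  = tri(force_error, -1.5, -1.0, 0.0)
    ok   = tri(force_error, -1.0,  0.0, 1.0)
    high = tri(force_error,  0.0,  1.0, 1.5)
    # rules: too little force -> speed up; too much force -> slow down
    rules = [(low, +0.5), (ok, 0.0), (high, -0.5)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den            # centroid of the output singletons
```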
Conditions for Fully Autonomous Anticipation
NASA Astrophysics Data System (ADS)
Collier, John
2006-06-01
Anticipation allows a system to adapt to conditions that have not yet come to be, either externally to the system or internally. Autonomous systems actively control the conditions of their own existence so as to increase their overall viability. This paper will first give minimal necessary and sufficient conditions for autonomous anticipation, followed by a taxonomy of autonomous anticipation. In more complex systems, there can be semi-autonomous subsystems that can anticipate and adapt on their own. Such subsystems can be integrated into a system's overall autonomy, typically with greater efficiency due to modularity and specialization of function. However, it is also possible that semi-autonomous subsystems can act against the viability of the overall system, and have their own functions that conflict with overall system functions.
The Joint Tactical Aerial Resupply Vehicle Impact on Sustainment Operations
2017-06-09
Artificial Intelligence, Sustainment Operations, Rifle Company, Autonomous Aerial Resupply, Joint Tactical Autonomous Aerial Resupply System... Integrations and Development System; AI: Artificial Intelligence; ARCIC: Army Capabilities Integration Center; ARDEC: Armament Research, Development and... semi-autonomous systems, and fully autonomous systems. Autonomy of machines depends on sophisticated software, including Artificial Intelligence
Autonomic Function in Infancy.
ERIC Educational Resources Information Center
Fox, Nathan A.; Fitzgerald, Hiram E.
1990-01-01
Reviews research that uses autonomic responses of human infants as dependent measures. Focuses on the history of research on the autonomic nervous system, measurement issues, and autonomic correlates of infant behavior and systems. (RJC)
Methods of determining complete sensor requirements for autonomous mobility
NASA Technical Reports Server (NTRS)
Curtis, Steven A. (Inventor)
2012-01-01
A method of determining complete sensor requirements for autonomous mobility of an autonomous system includes computing a time variation of each behavior of a set of behaviors of the autonomous system, determining mobility sensitivity to each behavior of the autonomous system, and computing a change in mobility based upon the mobility sensitivity to each behavior and the time variation of each behavior. The method further includes determining the complete sensor requirements of the autonomous system through analysis of the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior, wherein the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior are characteristic of the stability of the autonomous system.
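Read as arithmetic, the claimed computation is essentially a first-order sensitivity sum, delta-M approximately equal to the sum over behaviors of (dM/dB_i) times (dB_i/dt) times dt. A hedged one-function reading, with illustrative names not taken from the patent:

```python
# Hedged reading of the patent's bookkeeping: change in mobility as a
# sensitivity-weighted sum of behavior time variations. Names invented.
import numpy as np

def mobility_change(sensitivities, behavior_rates, dt):
    """sensitivities: dM/dB_i per behavior; behavior_rates: dB_i/dt."""
    return float(np.dot(sensitivities, behavior_rates) * dt)
```

Comparing the relative magnitudes of these terms is then what drives the sensor-requirements analysis the abstract describes.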
Wei, Wenhui; Gao, Zhaohui; Gao, Shesheng; Jia, Ke
2018-04-09
To meet the autonomy and reliability requirements of a navigation system, and building on a method of measuring velocity from the spectral redshift information of natural celestial bodies, a new scheme consisting of a Strapdown Inertial Navigation System (SINS), Spectral Redshift (SRS) measurements, and a Geomagnetic Navigation System (GNS) is designed for autonomous integrated navigation. The principle of this SINS/SRS/GNS autonomous integrated navigation system is explored, and the corresponding mathematical model is established. Furthermore, a robust adaptive central difference particle filtering algorithm is proposed for the integrated system. Simulation experiments show that the designed SINS/SRS/GNS autonomous integrated navigation system possesses good autonomy, strong robustness, and high reliability, providing a new solution for autonomous navigation technology.
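The filtering step can be pictured with a generic bootstrap particle filter skeleton; the paper's robust adaptive central-difference variant is more sophisticated, and the models below (a SINS-style propagation hook and a scalar spectral-redshift velocity measurement) are placeholders.

```python
# Bootstrap particle filter skeleton, not the paper's algorithm.
# propagate() stands in for SINS propagation; h() maps particles to a
# predicted scalar measurement (e.g., SRS-derived velocity).
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, propagate, h, z, meas_std):
    particles = propagate(particles)                  # predict (SINS-style)
    innov = z - h(particles)                          # per-particle innovation
    weights = weights * np.exp(-0.5 * (innov / meas_std) ** 2)
    weights = weights / (weights.sum() + 1e-300)
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```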
Skinner-Rusk unified formalism for higher-order systems
NASA Astrophysics Data System (ADS)
Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso
2012-07-01
The Lagrangian-Hamiltonian unified formalism of R. Skinner and R. Rusk was originally stated for autonomous dynamical systems in classical mechanics. It has been generalized for non-autonomous first-order mechanical systems, first-order and higher-order field theories, and higher-order autonomous systems. In this work we present a generalization of this formalism for higher-order non-autonomous mechanical systems.
Assured Human-Autonomy Interaction through Machine Self-Confidence
NASA Astrophysics Data System (ADS)
Aitken, Matthew
Autonomous systems employ many layers of approximation in order to operate in increasingly uncertain and unstructured environments. The complexity of these systems makes it hard for a user to understand a system's capabilities, especially if the user is not an expert. However, if autonomous systems are to be used efficiently, their users must trust them appropriately. The purpose of this work is to implement and assess an 'assurance' that an autonomous system can provide to the user to elicit appropriate trust. Specifically, the autonomous system's perception of its own capabilities is reported to the user as the self-confidence assurance. This assurance should allow the user to assess the autonomous system's capabilities more quickly and accurately, generating appropriate trust in the system. First, this research defines self-confidence and discusses what the self-confidence assurance attempts to communicate to the user. It then provides a framework for computing the autonomous system's self-confidence as a function of self-confidence factors that correspond to individual elements of the autonomous system's process. To explore this idea, self-confidence is implemented on an autonomous system that uses a mixed-observability Markov decision process model to solve a pursuit-evasion problem on a road network. Particular focus is given to implementing a factor that assesses the goodness of the autonomy's expected performance. This work highlights some of the issues and considerations in designing appropriate metrics for the self-confidence factors, and provides the basis for future research on computing self-confidence in autonomous systems.
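One plausible way to aggregate per-factor scores into a single reported self-confidence, not necessarily the thesis's formula, is a weighted geometric mean, which lets any single near-zero factor pull the overall score down sharply.

```python
# Illustrative aggregation of self-confidence factors into one score.
# The weighting scheme and factor names are assumptions, not the thesis's.
import math

def overall_self_confidence(factors, weights):
    """factors: dict name -> score in (0, 1]; weights: dict name -> weight."""
    total_w = sum(weights.values())
    log_sum = sum(weights[k] * math.log(factors[k]) for k in factors)
    return math.exp(log_sum / total_w)

# overall_self_confidence({"perception": 0.9, "planner": 0.4},
#                         {"perception": 1.0, "planner": 1.0}) -> ~0.6
```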
Intelligent autonomy for unmanned naval systems
NASA Astrophysics Data System (ADS)
Steinberg, Marc
2006-05-01
This paper provides an overview of the development and demonstration of intelligent autonomy technologies for control of heterogeneous unmanned naval air and sea vehicles and describes some of the current limitations of such technologies. The focus is on modular technologies that support highly automated retasking and fully autonomous dynamic replanning for up to ten heterogeneous unmanned systems based on high-level mission objectives, priorities, constraints, and Rules-of-Engagement. A key aspect of the demonstrations is incorporating frequent naval operator evaluations in order to gain better understanding of the integrated man/machine system and its tactical utility. These evaluations help ensure that the automation can provide information to the user in a meaningful way and that the user has a sufficient level of control and situation awareness to task the system as needed to complete complex mission tasks. Another important aspect of the program is examination of the interactions of higher-level autonomy algorithms with other relevant components that would be needed within the decision-making and control loops. Examples of these are vision and other sensor processing algorithms, sensor fusion, obstacle avoidance, and other lower level vehicle autonomous navigation, guidance, and control functions. Initial experiments have been completed using medium and high-fidelity vehicle simulations in a virtual warfare environment and inexpensive surrogate vehicles in flight and in-water demonstrations. Simulation experiments included integration of multi-vehicle task allocation, dynamic replanning under constraints, lower level autonomous vehicle control, automatic assessment of the impact of contingencies on plans, management of situation awareness data, operator alert management, and a mixed-initiative operator interface. In-water demonstrations of a maritime situation awareness capability were completed in both a river and a harbor environment using unmanned surface vehicles and a buoy as surrogate platforms. In addition, a multiple heterogeneous vehicle demonstration was performed using five different types of small unmanned air and ground vehicles. This provided some initial experimentation with specifying tasking for high-level mission objectives and then mapping those objectives onto heterogeneous unmanned vehicles that each have different lower-level autonomy software. Finally, this paper will discuss lessons learned.
Systems, methods and apparatus for quiescence of autonomic systems with self-action
NASA Technical Reports Server (NTRS)
Hinchey, Michael G. (Inventor); Sterritt, Roy (Inventor)
2011-01-01
Systems, methods and apparatus are provided in which an autonomic unit or element is quiesced. A quiesce component of an autonomic unit can cause the autonomic unit to self-destruct if a stay-alive reprieve signal is not received after a predetermined time.
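The stay-alive/quiesce pattern is simple enough to sketch directly. The following is an illustrative reading of the claim, with invented threading details: the unit arms a timer at construction and quiesces unless a reprieve signal arrives before the deadline.

```python
# Illustrative stay-alive watchdog in the spirit of the patent claim.
# "Self-destruct" is modeled as quiescing; details are assumptions.
import threading

class AutonomicUnit:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self._timer = None
        self.alive = True
        self.reprieve()                   # arm the apoptosis timer

    def reprieve(self):                   # stay-alive signal received
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout_s, self._self_destruct)
        self._timer.daemon = True
        self._timer.start()

    def _self_destruct(self):             # no reprieve arrived in time
        self.alive = False                # quiesce with self-action
```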
IPS - a vision aided navigation system
NASA Astrophysics Data System (ADS)
Börner, Anko; Baumbach, Dirk; Buder, Maximilian; Choinowski, Andre; Ernst, Ines; Funk, Eugen; Grießbach, Denis; Schischmanow, Adrian; Wohlfeil, Jürgen; Zuev, Sergey
2017-04-01
Ego localization is an important prerequisite for several scientific, commercial, and statutory tasks. Only by knowing one's own position can guidance be provided, inspections be executed, and autonomous vehicles be operated. Localization becomes challenging if satellite-based navigation systems are not available or data quality is insufficient. To overcome this problem, a team at the German Aerospace Center (DLR) developed a multi-sensor system modeled on the human head and its navigation sensors - the eyes and the vestibular system. This system, called the integrated positioning system (IPS), contains a stereo camera and an inertial measurement unit for determining an ego pose in six degrees of freedom in a local coordinate system. IPS operates in real time and can be applied to indoor and outdoor scenarios without any external reference or prior knowledge. In this paper, the system and its key hardware and software components are introduced. The main issues in the development of such complex multi-sensor measurement systems are identified and discussed, and the performance of the technology is demonstrated. The developer team started from scratch and is now transferring this technology into a commercial product. The paper finishes with an outlook.
Autonomic Computing for Spacecraft Ground Systems
NASA Technical Reports Server (NTRS)
Li, Zhenping; Savkli, Cetin; Jones, Lori
2007-01-01
Autonomic computing for spacecraft ground systems increases system reliability and reduces the cost of spacecraft operations and software maintenance. In this paper, we present an autonomic computing solution for spacecraft ground systems at NASA Goddard Space Flight Center (GSFC), which consists of an open standard for a message-oriented architecture referred to as the GMSEC architecture (Goddard Mission Services Evolution Center), and an autonomic computing tool, the Criteria Action Table (CAT). This solution has been used in many upgraded ground systems for NASA's missions, and provides a framework for developing solutions with higher autonomic maturity.
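The Criteria Action Table can be pictured as a list of (criteria, action) pairs evaluated against telemetry. The miniature below only shows the shape of the idea; the real CAT is configuration-driven, and the telemetry field names here are invented.

```python
# Miniature criteria-action table: each entry pairs a predicate over a
# telemetry snapshot with an action name. Field names are illustrative.
CAT = [
    (lambda tlm: tlm["battery_v"] < 24.0, "page_operator"),
    (lambda tlm: tlm["wheel_temp"] > 70.0, "safe_mode"),
]

def evaluate(tlm):
    """Return the actions whose criteria fire for this telemetry frame."""
    return [action for criteria, action in CAT if criteria(tlm)]

# evaluate({"battery_v": 23.5, "wheel_temp": 40.0}) -> ["page_operator"]
```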
From Here to Autonomicity: Self-Managing Agents and the Biological Metaphors that Inspire Them
NASA Technical Reports Server (NTRS)
Sterritt, Roy; Hinchey, Mike
2005-01-01
We seek inspiration for self-managing systems from (obviously, pre-existing) biological mechanisms. Autonomic Computing (AC), a self-managing systems initiative based on the biological metaphor of the autonomic nervous system, is increasingly gaining momentum as the way forward for integrating and designing reliable systems, while agent technologies have been identified as a key enabler for engineering autonomicity in systems. This paper looks at other biological metaphors such as reflex and healing, heartbeat monitors, pulse monitors, and apoptosis for assisting in the realization of autonomicity.
Sensing, Control, and System Integration for Autonomous Vehicles: A Series of Challenges
NASA Astrophysics Data System (ADS)
Özgüner, Ümit; Redmill, Keith
One of the important examples of mechatronic systems can be found in autonomous ground vehicles. Autonomous ground vehicles provide a series of challenges in sensing, control and system integration. In this paper we consider off-road autonomous vehicles, automated highway systems and urban autonomous driving and indicate the unifying aspects. We specifically consider our own experience during the last twelve years in various demonstrations and challenges in attempting to identify unifying themes. Such unifying themes can be observed in basic hierarchies, hybrid system control approaches and sensor fusion techniques.
Bioinspired engineering of exploration systems for NASA and DoD
NASA Technical Reports Server (NTRS)
Thakoor, Sarita; Chahl, Javaan; Srinivasan, M. V.; Young, L.; Werblin, Frank; Hine, Butler; Zornetzer, Steven
2002-01-01
A new approach called bioinspired engineering of exploration systems (BEES) and its value for solving pressing NASA and DoD needs are described. Insects (for example honeybees and dragonflies) cope remarkably well with their world, despite possessing a brain containing less than 0.01% as many neurons as the human brain. Although most insects have immobile eyes with fixed focus optics and lack stereo vision, they use a number of ingenious, computationally simple strategies for perceiving their world in three dimensions and navigating successfully within it. We are distilling selected insect-inspired strategies to obtain novel solutions for navigation, hazard avoidance, altitude hold, stable flight, terrain following, and gentle deployment of payload. Such functionality provides potential solutions for future autonomous robotic space and planetary explorers. A BEES approach to developing lightweight low-power autonomous flight systems should be useful for flight control of such biomorphic flyers for both NASA and DoD needs. Recent biological studies of mammalian retinas confirm that representations of multiple features of the visual world are systematically parsed and processed in parallel. Features are mapped to a stack of cellular strata within the retina. Each of these representations can be efficiently modeled in semiconductor cellular nonlinear network (CNN) chips. We describe recent breakthroughs in exploring the feasibility of the unique blending of insect strategies of navigation with mammalian visual search, pattern recognition, and image understanding into hybrid biomorphic flyers for future planetary and terrestrial applications. We describe a few future mission scenarios for Mars exploration, uniquely enabled by these newly developed biomorphic flyers.
NASA Technical Reports Server (NTRS)
Dorsey, John T.; Jones, Thomas C.; Doggett, W. R.; Brady, Jeffrey S.; Berry, Felecia C.; Ganoe, George G.; Anderson, Eric; King, Bruce D.; Mercer, David C.
2011-01-01
The first generation of a versatile, high-performance device for performing payload handling and assembly operations on planetary surfaces, the Lightweight Surface Manipulation System (LSMS), has been designed and built. Over the course of its development, conventional crane-type payload handling configurations and operations have been successfully demonstrated, and the range of motion, types of operations, and versatility have been greatly expanded. This enhanced set of first-generation LSMS hardware is now serving as a laboratory test-bed allowing the continuing development of end effectors, operational techniques, and remotely controlled and automated operations. This paper describes the most recent LSMS and test-bed development activities, which have focused on two major efforts. The first effort was to complete a preliminary design of the second-generation LSMS, which has the capability for limited mobility and can reposition itself between lander decks, mobility chassis, and fixed base locations. A major portion of this effort involved conducting a study to establish the feasibility of, and define the specifications for, a lightweight cable-drive waist joint. The second effort was to continue expanding the versatility and autonomy of large planetary surface manipulators using the first-generation LSMS as a test-bed. This has been accomplished by increasing manipulator capabilities and efficiencies through both design changes and tool and end effector development. A software development effort has expanded the operational capabilities of the LSMS test-bed to include autonomous operations based on stored paths, use of a vision system for target acquisition and tracking, and remote command and control over a communications bridge.
Color indirect effects on melatonin regulation
NASA Astrophysics Data System (ADS)
Mian, Tian; Liu, Timon C.; Li, Yan
2002-04-01
The color indirect effect (CIE) refers to the physiological and psychological effects of color that result from color vision. In previous papers, we studied CIE from the viewpoint of integrated Western and traditional Chinese medicine, put forward the color-autonomic-nervous-subsystem model (CAM), and provided its time-theory foundation. In this paper, we apply the model to light effects on melatonin regulation in humans, and suggest that it is CIE that mediates light effects on melatonin suppression.
Theory, Design, and Algorithms for Optimal Control of wireless Networks
2010-06-09
The implementation of network-centric warfare technologies is an abiding, critical interest of Air Force Science and Technology efforts for the warfighter. Wireless communications and strategic signaling are areas of critical Air Force mission need. Autonomous networks of multiple, heterogeneous... Throughput enhancement and robust connectivity in communications and sensor networks are critical factors in net-centric USAF operations. This research directly supports the Air Force vision of information dominance and the development of anywhere, anytime operational readiness.
2006-06-01
conventional camera vs. thermal imager vs. night vision; camera field of view (narrow, wide, panoramic); keyboard + mouse vs. joystick control vs... motorised platform which could scan the immediate area, producing a 360° panorama of "stitched-together" digital pictures. The picture file, together with... VBS was used to automate the process of creating a QuickTime panorama (.mov or .qt), which includes the initial retrieval of the images, the
Thermal and range fusion for a planetary rover
NASA Technical Reports Server (NTRS)
Caillas, Claude
1992-01-01
This paper describes how fusion between thermal and range imaging allows us to discriminate different types of materials in outdoor scenes. First, we analyze how pure vision segmentation algorithms applied to thermal images allow discriminating materials such as rock and sand. Second, we show how combining thermal and range information allows us to better discriminate rocks from sand. Third, as an application, we examine how an autonomous legged robot can use these techniques to explore other planets.
Strategy in the Robotic Age: A Case for Autonomous Warfare
2014-09-01
6. Robots and Robotics: The term robot is a loaded word. For many people it conjures a vision of fictional characters from movies like The... released in the early 1930s to review the experiences of WWI, it was censored, and a version modified to maintain the institutional legacies was... apprehensive, and doctrine was non-existent. Today, America is emerging from two wars and subsequently facing a war-weary public. The United States is a
1998-10-01
of motivation, acculturation, education, training and potentially action. This is a process of years. The timelines associated with each level of... payload and approximately $5K cost). This backpack robot clearly is most suited to visual scouting of threatening environments, including inside... Organic Man Portable, Ground Vehicle Backpack, Reusable Launch Techniques; Command, Telemetry, and Image Return; Deployment: VTOL, Ship-Capable, Autonomous
Autonomous Power System intelligent diagnosis and control
NASA Technical Reports Server (NTRS)
Ringer, Mark J.; Quinn, Todd M.; Merolla, Anthony
1991-01-01
The Autonomous Power System (APS) project at NASA Lewis Research Center is designed to demonstrate the abilities of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution hardware. Knowledge-based software provides a robust method of control for highly complex space-based power systems that conventional methods do not allow. The project consists of three elements: the Autonomous Power Expert System (APEX) for fault diagnosis and control, the Autonomous Intelligent Power Scheduler (AIPS) to determine system configuration, and power hardware (Brassboard) to simulate a space based power system. The operation of the Autonomous Power System as a whole is described and the responsibilities of the three elements - APEX, AIPS, and Brassboard - are characterized. A discussion of the methodologies used in each element is provided. Future plans are discussed for the growth of the Autonomous Power System.
Using multiple sensors for printed circuit board insertion
NASA Technical Reports Server (NTRS)
Sood, Deepak; Repko, Michael C.; Kelley, Robert B.
1989-01-01
As more and more activities are performed in space, there will be a greater demand placed on the information-handling capacity of the people who direct and accomplish these tasks. A promising alternative to full-time human involvement is the use of semi-autonomous, intelligent robot systems. To automate tasks such as assembly, disassembly, repair, and maintenance, the issues presented by environmental uncertainties need to be addressed. These uncertainties are introduced by variations in the computed position of the robot at different locations in its work envelope, variations in part positioning, and tolerances in part dimensions. As a result, the robot system may not be able to accomplish the desired task without the help of sensor feedback. Measurements of the environment allow real-time corrections to be made to the process. A design and implementation of an intelligent robot system that inserts printed circuit boards into a card cage is presented. Intelligent behavior is accomplished by coupling the task execution sequence with information derived from three different sensors: an overhead three-dimensional vision system, a fingertip infrared sensor, and a six-degree-of-freedom wrist-mounted force/torque sensor.
Localization from Visual Landmarks on a Free-Flying Robot
NASA Technical Reports Server (NTRS)
Coltin, Brian; Fusco, Jesse; Moratto, Zack; Alexandrov, Oleg; Nakamura, Robert
2016-01-01
We present the localization approach for Astrobee, a new free-flying robot designed to navigate autonomously on board the International Space Station (ISS). Astrobee will conduct experiments in microgravity, as well as assist astronauts and ground controllers. Astrobee replaces the SPHERES robots currently operating on the ISS, which were limited to a small cube because their localization system relied on triangulation from ultrasonic transmitters. Astrobee localizes with only monocular vision and an IMU, enabling it to traverse the entire US segment of the station. Features detected on a previously built map, optical flow information, and IMU readings are all integrated into an extended Kalman filter (EKF) to estimate the robot pose. We introduce several modifications to the filter to make it more robust to noise. Finally, we extensively evaluate the behavior of the filter on a two-dimensional testing surface.
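All three measurement sources reduce to the standard EKF update; a bare sketch follows, with placeholder models and noise values rather than Astrobee's.

```python
# Standard EKF measurement update. The measurement function h, its
# Jacobian H, and the noise R stand in for the map-feature, optical-flow,
# and IMU terms the abstract describes; values here are placeholders.
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """x: state; P: covariance; z: measurement; h: predicted-measurement
    function; H: Jacobian of h at x; R: measurement noise covariance."""
    y = z - h(x)                             # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```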
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision, data fusion architectures, robust estimation of shapes and features, and application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, choice of coordinate systems for multiple-sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.
Axelrod, Felicia B
2013-03-01
Genetic disorders affecting the autonomic nervous system can result in abnormal development of the nervous system or they can be caused by neurotransmitter imbalance, an ion-channel disturbance or by storage of deleterious material. The symptoms indicating autonomic dysfunction, however, will depend upon whether the genetic lesion has disrupted peripheral or central autonomic centers or both. Because the autonomic nervous system is pervasive and affects every organ system in the body, autonomic dysfunction will result in impaired homeostasis and symptoms will vary. The possibility of genetic confirmation by molecular testing for specific diagnosis is increasing but treatments tend to remain only supportive and directed toward particular symptoms. Copyright © 2013 Elsevier Inc. All rights reserved.
Implication of altered autonomic control for orthostatic tolerance in SCI.
Wecht, Jill Maria; Bauman, William A
2018-01-01
Neural output from the sympathetic and parasympathetic branches of the autonomic nervous system (ANS) is integrated to appropriately control cardiovascular responses during routine activities of daily living, including orthostatic positioning. Sympathetic control of the upper extremity vasculature and the heart arises from the thoracic cord between T1 and T5, whereas the splanchnic bed and lower extremity vasculature receive sympathetic neural input from the lower cord between segments T5 and L2. Although the vasculature is not directly innervated by the parasympathetic nervous system, the SA node is innervated by post-ganglionic vagal nerve fibers via cranial nerve X. Segmental differences in sympathetic cardiovascular innervation highlight the effect of lesion level on orthostatic cardiovascular control following spinal cord injury (SCI). Due to impaired sympathetic cardiovascular control, many individuals with SCI, particularly those with lesions above T6, are prone to orthostatic hypotension (OH) and orthostatic intolerance (OI). Symptomatic OH, which may result in OI, is a consequence of episodic reductions in cerebral perfusion pressure; the symptoms may include dizziness, lightheadedness, nausea, blurred vision, ringing in the ears, headache, and syncope. However, many, if not most, individuals with SCI who experience persistent and episodic hypotension and OH do not report symptoms of cerebral hypoperfusion and therefore do not raise clinical concern. This review discusses the mechanisms underlying OH and OI following SCI, and reviews our knowledge to date regarding the prevalence, consequences, and possible treatment options for these conditions in the SCI population. Published by Elsevier B.V.
Exploration Medical Capability System Engineering Introduction and Vision
NASA Technical Reports Server (NTRS)
Mindock, J.; Reilly, J.
2017-01-01
Human exploration missions to beyond low Earth orbit destinations such as Mars will require more autonomous capability compared to current low Earth orbit operations. For the medical system, lack of consumable resupply, evacuation opportunities, and real-time ground support are key drivers toward greater autonomy. Recognition of the limited mission and vehicle resources available to carry out exploration missions motivates the Exploration Medical Capability (ExMC) Element's approach to enabling the necessary autonomy. The Element's work must integrate with the overall exploration mission and vehicle design efforts to successfully provide exploration medical capabilities. ExMC is applying systems engineering principles and practices to accomplish its integrative goals. This talk will briefly introduce the discipline of systems engineering and key points in its application to exploration medical capability development. It will elucidate technical medical system needs to be met by the systems engineering work, and the structured and integrative science and engineering approach to satisfying those needs, including the development of shared mental and qualitative models within and external to the human health and performance community. These efforts are underway to ensure relevancy to exploration system maturation and to establish medical system development that is collaborative with vehicle and mission design and engineering efforts.
Mission Control Technologies: A New Way of Designing and Evolving Mission Systems
NASA Technical Reports Server (NTRS)
Trimble, Jay; Walton, Joan; Saddler, Harry
2006-01-01
Current mission operations systems are built as a collection of monolithic software applications. Each application serves the needs of a specific user base associated with a discipline or functional role. Built to accomplish specific tasks, each application embodies specialized functional knowledge and has its own data storage, data models, programmatic interfaces, user interfaces, and customized business logic. In effect, each application creates its own walled-off environment. While individual applications are sometimes reused across multiple missions, it is expensive and time consuming to maintain these systems, and both costly and risky to upgrade them in the light of new requirements or modify them for new purposes. It is even more expensive to achieve new integrated activities across a set of monolithic applications. These problems impact the lifecycle cost (especially design, development, testing, training, maintenance, and integration) of each new mission operations system. They also inhibit system innovation and evolution. This in turn hinders NASA's ability to adopt new operations paradigms, including increasingly automated space systems, such as autonomous rovers, autonomous onboard crew systems, and integrated control of human and robotic missions. Hence, in order to achieve NASA's vision affordably and reliably, we need to consider and mature new ways to build mission control systems that overcome the problems inherent in systems of monolithic applications. The keys to the solution are modularity and interoperability. Modularity will increase extensibility (evolution), reusability, and maintainability. Interoperability will enable composition of larger systems out of smaller parts, and enable the construction of new integrated activities that tie together, at a deep level, the capabilities of many of the components. Modularity and interoperability together contribute to flexibility. The Mission Control Technologies (MCT) Project, a collaboration of multiple NASA Centers, led by NASA Ames Research Center, is building a framework to enable software to be assembled from flexible collections of components and services.
NASA Technical Reports Server (NTRS)
Badger, Julia M.; Claunch, Charles; Mathis, Frank
2017-01-01
The Modular Autonomous Systems Technology (MAST) framework is a tool for building distributed, hierarchical autonomous systems. Originally intended for the autonomous monitoring and control of spacecraft, this framework concept provides support for variable autonomy, assume-guarantee contracts, and efficient communication between subsystems and a centralized systems manager. MAST was developed at NASA's Johnson Space Center (JSC) and has been applied to an integrated spacecraft example scenario.
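One plausible encoding of an assume-guarantee contract between a subsystem and the systems manager, not MAST's actual API, is a pair of predicates where the guarantee is owed only while the assumption holds.

```python
# Illustrative assume-guarantee contract: the subsystem promises its
# guarantee only while its assumptions on the environment hold.
# Field names and the example values are invented, not from MAST.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    name: str
    assume: Callable[[dict], bool]      # over environment/input state
    guarantee: Callable[[dict], bool]   # over the subsystem's outputs

    def satisfied(self, state: dict) -> bool:
        # vacuously satisfied when the assumption does not hold
        return (not self.assume(state)) or self.guarantee(state)

power = Contract("power_bus",
                 assume=lambda s: s["load_w"] <= 100.0,
                 guarantee=lambda s: 26.0 <= s["bus_v"] <= 30.0)
```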
Autonomous and Autonomic Swarms
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Truszkowski, Walter F.; Rouff, Christopher A.; Sterritt, Roy
2005-01-01
A watershed in systems engineering is represented by the advent of swarm-based systems that accomplish missions through cooperative action by a (large) group of autonomous individuals each having simple capabilities and no global knowledge of the group s objective. Such systems, with individuals capable of surviving in hostile environments, pose unprecedented challenges to system developers. Design and testing and verification at much higher levels will be required, together with the corresponding tools, to bring such systems to fruition. Concepts for possible future NASA space exploration missions include autonomous, autonomic swarms. Engineering swarm-based missions begins with understanding autonomy and autonomicity and how to design, test, and verify systems that have those properties and, simultaneously, the capability to accomplish prescribed mission goals. Formal methods-based technologies, both projected and in development, are described in terms of their potential utility to swarm-based system developers.
NASA Astrophysics Data System (ADS)
Nguyen, Lam; Wong, David; Ressler, Marc; Koenig, Francois; Stanton, Brian; Smith, Gregory; Sichina, Jeffrey; Kappra, Karl
2007-04-01
The U.S. Army Research Laboratory (ARL), as part of a mission and customer funded exploratory program, has developed a new low-frequency, ultra-wideband (UWB) synthetic aperture radar (SAR) for forward imaging to support the Army's vision of an autonomous navigation system for robotic ground vehicles. These unmanned vehicles, equipped with an array of imaging sensors, will be tasked to help detect man-made obstacles such as concealed targets, enemy minefields, and booby traps, as well as other natural obstacles such as ditches, and bodies of water. The ability of UWB radar technology to help detect concealed objects has been documented in the past and could provide an important obstacle avoidance capability for autonomous navigation systems, which would improve the speed and maneuverability of these vehicles and consequently increase the survivability of the U. S. forces on the battlefield. One of the primary features of the radar is the ability to collect and process data at combat pace in an affordable, compact, and lightweight package. To achieve this, the radar is based on the synchronous impulse reconstruction (SIRE) technique where several relatively slow and inexpensive analog-to-digital (A/D) converters are used to sample the wide bandwidth of the radar signals. We conducted an experiment this winter at Aberdeen Proving Ground (APG) to support the phenomenological studies of the backscatter from positive and negative obstacles for autonomous robotic vehicle navigation, as well as the detection of concealed targets of interest to the Army. In this paper, we briefly describe the UWB SIRE radar and the test setup in the experiment. We will also describe the signal processing and the forward imaging techniques used in the experiment. Finally, we will present imagery of man-made obstacles such as barriers, concertina wires, and mines.
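The synchronous impulse reconstruction idea can be illustrated in a few lines: several slow A/D converters sample the same repetitive waveform at staggered delays, and interleaving their streams yields a higher effective sampling rate. The sketch below is a toy illustration of equivalent-time sampling, not the SIRE radar's actual processing.

```python
# Toy equivalent-time sampling: K slow ADC streams, stream k delayed by
# k/(K*fs_slow), interleave to an effective rate of K*fs_slow.
import numpy as np

def interleave(adc_streams):
    """adc_streams: list of K equal-length sample arrays from K slow ADCs
    with staggered trigger delays; returns the reconstructed fast stream."""
    K = len(adc_streams)
    n = len(adc_streams[0])
    out = np.empty(K * n)
    for k, stream in enumerate(adc_streams):
        out[k::K] = stream          # slot stream k into every K-th sample
    return out
```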
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-17
... Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS). AGENCY: Federal Aviation Administration (FAA). SUMMARY: The FAA is issuing this notice to advise the public of the seventeenth meeting of RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS).
Autonomic Nervous System Disorders
Your autonomic nervous system is the part of your nervous system that controls involuntary actions, such as the beating of your heart... breathing and swallowing... erectile dysfunction in men... Autonomic nervous system disorders can occur alone or as the result...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-06
... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). AGENCY: Federal Aviation Administration (FAA). SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS).
Autonomous Operations System: Development and Application
NASA Technical Reports Server (NTRS)
Toro Medina, Jaime A.; Wilkins, Kim N.; Walker, Mark; Stahl, Gerald M.
2016-01-01
Autonomous control systems provide the ability to self-govern beyond what conventional control systems offer. As the complexity of mechanical and electrical systems increases, there is a natural drive to develop robust control systems that can manage complicated operations. By bridging conventional automated systems and knowledge-based, self-aware systems, nominal control of operations can evolve to rely on safety-critical mitigation processes to handle off-nominal behavior. Current research and development efforts led by the Autonomous Propellant Loading (APL) group at NASA Kennedy Space Center aim to improve cryogenic propellant transfer operations by developing an automated control and health monitoring system. The center aims to produce an integrated Autonomous Operations System (AOS) that combines health management operations with automated control to yield a fully autonomous system.
Control of a free-flying robot manipulator system
NASA Technical Reports Server (NTRS)
Alexander, H.; Cannon, R. H., Jr.
1985-01-01
The goal of the research is to develop and test control strategies for a self-contained, free-flying space robot. Such a robot would perform operations in space similar to those currently handled by astronauts during extravehicular activity (EVA). The focus of the work is to develop and carry out a program of research with a series of physical Satellite Robot Simulator Vehicles (SRSVs), two-dimensionally freely mobile laboratory models of autonomous free-flying space robots such as might perform extravehicular functions associated with the operation of a space station or the repair of orbiting satellites. The development of the SRSV and of some of the controller subsystems is described. The two-link arm was fitted to the SRSV base, and researchers explored the open-loop characteristics of the arm and thruster actuators. Work began on building the software foundation necessary for use of the on-board computer, as well as hardware and software for a local vision system for target identification and tracking.
Nonlinearity analysis of measurement model for vision-based optical navigation system
NASA Astrophysics Data System (ADS)
Li, Jianguo; Cui, Hutao; Tian, Yang
2015-02-01
In an autonomous optical navigation system based on line-of-sight vector observations, the nonlinearity of the measurement model is highly correlated with navigation performance. By quantitatively calculating the degree of nonlinearity of the focal-plane model and the unit-vector model, this paper determines which optical measurement model performs better. First, measurement equations and measurement noise statistics for the two line-of-sight measurement models are established based on the perspective-projection collinearity equation. Then the nonlinear effects of the measurement model on filter performance are analyzed within the framework of the extended Kalman filter, and the degrees of nonlinearity of the two measurement models are compared using the curvature measure from differential geometry. Finally, a simulation of star-tracker-based attitude determination confirms the superiority of the unit-vector measurement model. Simulation results show that the magnitude of the curvature nonlinearity measure is consistent with filter performance, and that the unit-vector measurement model yields higher estimation precision and faster convergence.
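Written out for a camera-frame line-of-sight direction d = (dx, dy, dz), the two models under comparison are the focal-plane (perspective-division) model and the unit-vector (normalization) model. A minimal sketch, with noise handling and Jacobians omitted:

```python
# The two line-of-sight measurement models compared in the paper, for a
# camera-frame direction d = (dx, dy, dz). Focal length f is illustrative.
import numpy as np

def focal_plane_measurement(d, f=1.0):
    """Perspective division onto the image plane: returns (u, v)."""
    dx, dy, dz = d
    return np.array([f * dx / dz, f * dy / dz])

def unit_vector_measurement(d):
    """Normalization to a unit LOS vector (the better-behaved model)."""
    d = np.asarray(d, dtype=float)
    return d / np.linalg.norm(d)
```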
COBALT: Development of a Platform to Flight Test Lander GN&C Technologies on Suborbital Rockets
NASA Technical Reports Server (NTRS)
Carson, John M., III; Seubert, Carl R.; Amzajerdian, Farzin; Bergh, Chuck; Kourchians, Ara; Restrepo, Carolina I.; Villapando, Carlos Y.; O'Neal, Travis V.; Robertson, Edward A.; Pierrottet, Diego;
2017-01-01
The NASA COBALT Project (CoOperative Blending of Autonomous Landing Technologies) is developing and integrating new precision-landing Guidance, Navigation and Control (GN&C) technologies, along with a terrestrial flight-test platform for Technology Readiness Level (TRL) maturation. The current technologies include a third-generation Navigation Doppler Lidar (NDL) sensor for ultra-precise velocity and line-of-sight (LOS) range measurements, and the Lander Vision System (LVS), which provides passive-optical Terrain Relative Navigation (TRN) estimates of map-relative position. The COBALT platform is self-contained and includes the NDL and LVS sensors, a blending filter, a custom compute element, a power unit, and a communication system. The platform incorporates a structural frame designed to integrate with the payload frame onboard the new Masten Xodiac vertical take-off, vertical landing (VTVL) terrestrial rocket vehicle. Ground integration and testing is underway, and terrestrial flight testing onboard Xodiac is planned for 2017 with two flight campaigns: one open-loop and one closed-loop.
The Visual Representation and Acquisition of Driving Knowledge for Autonomous Vehicle
NASA Astrophysics Data System (ADS)
Zhang, Zhaoxia; Jiang, Qing; Li, Ping; Song, LiangTu; Wang, Rujing; Yu, Biao; Mei, Tao
2017-09-01
In this paper, the driving knowledge base of an autonomous vehicle is designed. Based on the driving knowledge modeling system, the driving knowledge of the autonomous vehicle is visually acquired, managed, stored, and maintained, which is vital to creating a development platform for the intelligent decision-making systems of automated-driving expert systems.
Knowledge-based and integrated monitoring and diagnosis in autonomous power systems
NASA Technical Reports Server (NTRS)
Momoh, J. A.; Zhang, Z. Z.
1990-01-01
A new technique of knowledge-based and integrated monitoring and diagnosis (KBIMD) to deal with abnormalities and incipient or potential failures in autonomous power systems is presented. The KBIMD concept is discussed as a new function of autonomous power system automation. Available diagnostic models, system structures, principles, and strategies are suggested. To verify the feasibility of the KBIMD, a preliminary prototype expert system is designed to simulate the KBIMD function in a main electric network of an autonomous power system.
Mobile robot exploration and navigation of indoor spaces using sonar and vision
NASA Technical Reports Server (NTRS)
Kortenkamp, David; Huber, Marcus; Koss, Frank; Belding, William; Lee, Jaeho; Wu, Annie; Bidlack, Clint; Rodgers, Seth
1994-01-01
The integration of skills into an autonomous robot that performs a complex task is described. Time constraints prevented complete integration of all the described skills. The biggest problem was tuning the sensor-based region-finding algorithm to the environment involved. Since localization depended on matching detected regions against the a priori map, the robot became lost very quickly. If low-level sensing of the world does not work, then high-level reasoning and map making will be unsuccessful.
The Evolving Terrorist Threat to Southeast Asia. A Net Assessment
2009-01-01
President Fidel Ramos. Informally referred to as the Davao Consensus, this provided for the creation of a limited Autonomous Region of Muslim Mindanao... Ramos Vision of Summit Glory,” 1996; and “The Man Who Wasn’t There,” 1995. A number of these kidnappings proved to be highly profitable...at the Islamic University of Medina in Saudi Arabia that makes him one of the most qualified religious authority figures in Sulu” (ICG, 2008b, p
Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.
Automatic detection and classification of obstacles with applications in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Ponomaryov, Volodymyr I.; Rosas-Miranda, Dario I.
2016-04-01
A hardware implementation of automatic detection and classification of objects that can represent obstacles for an autonomous mobile robot, using stereo vision algorithms, is presented. We propose and evaluate a new method to detect and classify objects for a mobile robot in outdoor conditions. The method is divided into two parts: the first is an object detection step based on the distance from the objects to the camera and a BLOB analysis; the second is a classification step based on visual primitives and an SVM classifier. The proposed method runs on a GPU to reduce processing time, using hardware based on multi-core processors and a GPU platform (an NVIDIA GeForce GT640 graphics card and Matlab on a PC running Windows 10).
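As a rough illustration of this two-stage pipeline, the following sketch uses OpenCV primitives for distance-gated BLOB detection followed by SVM classification; the range threshold, minimum blob area, and feature vectors are assumptions, not the authors' settings.

```python
# Illustrative sketch of the two-stage pipeline: distance-gated BLOB detection
# followed by SVM classification. Thresholds, minimum blob area and the
# feature vectors are assumptions, not the authors' settings.
import cv2
import numpy as np

def detect_obstacle_blobs(depth_m, max_range=10.0, min_area=200):
    """Stage 1: keep pixels closer than max_range, then extract BLOBs."""
    mask = ((depth_m > 0) & (depth_m < max_range)).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Label 0 is background; keep blobs above a minimum pixel area.
    return [stats[i] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

def classify_blobs(features, train_X, train_y):
    """Stage 2: an RBF-kernel SVM over simple visual primitives."""
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_RBF)
    svm.train(train_X.astype(np.float32), cv2.ml.ROW_SAMPLE,
              np.asarray(train_y, dtype=np.int32))
    _, pred = svm.predict(features.astype(np.float32))
    return pred.ravel()
```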
NASA Astrophysics Data System (ADS)
Wojtczyk, Martin; Panin, Giorgio; Röder, Thorsten; Lenz, Claus; Nair, Suraj; Heidemann, Rüdiger; Goudar, Chetan; Knoll, Alois
2010-01-01
After utilizing robots for more than 30 years in classic industrial automation applications, service robots form a constantly increasing market, although the big breakthrough is still awaited. Our approach to service robots was driven by the idea of supporting lab personnel in a biotechnology laboratory. After initial development in Germany, a mobile robot platform, extended with an industrial manipulator and the sensors necessary for indoor localization and object manipulation, was shipped to Bayer HealthCare in Berkeley, CA, USA, a global player in the sector of biopharmaceutical products located in the San Francisco Bay Area. The goal of the mobile manipulator is to support off-shift staff by carrying out fully autonomous or guided, remote-controlled lab walkthroughs, which we implement using a recent development of our computer vision group: OpenTL, an integrated framework for model-based visual tracking.
A Face Attention Technique for a Robot Able to Interpret Facial Expressions
NASA Astrophysics Data System (ADS)
Simplício, Carlos; Prado, José; Dias, Jorge
Automatic facial expression recognition using vision is an important subject in human-robot interaction. Here we propose a human-face focus-of-attention technique and a facial expression classifier (a Dynamic Bayesian Network) for incorporation in an autonomous mobile agent whose hardware comprises a robotic platform and a robotic head. The focus-of-attention technique is based on the symmetry exhibited by human faces. Using the output of this module, the autonomous agent keeps the human face targeted frontally; to accomplish this, the robot platform performs an arc centered on the human while the robotic head, when necessary, moves in synchrony. In the proposed probabilistic classifier, information is propagated from the previous instant, in a lower level of the network, to the current instant. Moreover, facial expressions are recognized using not only positive but also negative evidence.
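The symmetry cue lends itself to a very small sketch: score a face crop against its mirror image and steer to maximize the score. The function below is purely illustrative and is not drawn from the paper.

```python
# Minimal sketch of a symmetry-based frontalness score; purely illustrative.
import numpy as np

def frontalness(gray_face):
    """Higher when the face is viewed frontally (left ~ mirror of right)."""
    g = gray_face.astype(np.float64)
    mirrored = g[:, ::-1]
    return -float(np.mean(np.abs(g - mirrored)))

# A perfectly symmetric crop scores 0; any pose deviation scores below 0.
face = np.tile(np.array([1.0, 2.0, 2.0, 1.0]), (4, 1))
print(frontalness(face))
```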
Line grouping using perceptual saliency and structure prediction for car detection in traffic scenes
NASA Astrophysics Data System (ADS)
Denasi, Sandra; Quaglia, Giorgio
1993-08-01
Autonomous and guide-assisted vehicles make heavy use of computer vision techniques to perceive the environment in which they move. In this context, the European PROMETHEUS program is carrying out activities to develop autonomous vehicle monitoring that helps people drive more safely. Car detection is one of the topics addressed by the program. Our contribution develops this task in two stages: the localization of areas of interest and the formulation of object hypotheses. In particular, the present paper proposes a new approach that builds structural descriptions of objects from edge segmentations using geometrical organization. The approach has been applied to the detection of cars in traffic scenes. We have analyzed images taken from a moving vehicle in order to formulate obstacle hypotheses; preliminary results confirm the efficiency of the method.
Right Orbitofrontal Cortex Mediates Conscious Olfactory Perception
Li, Wen; Lopez, Leonardo; Osher, Jason; Howard, James D.; Parrish, Todd B.; Gottfried, Jay A.
2013-01-01
Understanding how the human brain translates sensory impressions into conscious percepts is a key challenge of neuroscience research. Work in this area has overwhelmingly centered on the conscious experience of vision, to the exclusion of the other senses—in particular, smell. We hypothesized that the orbitofrontal cortex (OFC) is a central substrate for olfactory conscious experience because of its privileged physiological role in odor processing. Combining functional magnetic resonance imaging, peripheral autonomic recordings, and olfactory psychophysics, we studied a case of complete anosmia (smell loss) in a patient with circumscribed traumatic brain injury to the right OFC. Despite a complete absence of conscious olfaction, the patient exhibited robust “blind smell,” as indexed by reliable odor-evoked neural activity in the left OFC and normal autonomic responses to odor hedonics during presentation of stimuli to the left nostril. These data highlight the right OFC’s critical role in subserving human olfactory consciousness. PMID:20817780
Autonomous control systems - Architecture and fundamental issues
NASA Technical Reports Server (NTRS)
Antsaklis, P. J.; Passino, K. M.; Wang, S. J.
1988-01-01
A hierarchical functional autonomous controller architecture is introduced. In particular, the architecture for the control of future space vehicles is described in detail; it is designed to ensure autonomous operation of the control system, and it allows interaction with the pilot, the crew/ground station, and the systems on board the autonomous vehicle. The fundamental issues in autonomous control system modeling and analysis are discussed. A hybrid approach to modeling and analysis of autonomous systems is proposed, incorporating conventional control methods based on differential equations together with techniques for the analysis of systems described in a symbolic formalism. In this way, the theory of conventional control can be fully utilized. It is stressed that autonomy is the design requirement, and intelligent control methods appear, at present, to offer some of the necessary tools to achieve it; a conventional approach may evolve and replace some or all of the 'intelligent' functions. It is shown that, in addition to conventional controllers, the autonomous control system incorporates planning, learning, and fault detection and identification (FDI).
Sparse distributed memory overview
NASA Technical Reports Server (NTRS)
Raugh, Mike
1990-01-01
The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project center on studies of the memory itself and on using the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are the Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. It is a massively parallel architecture motivated by efforts to understand how the human brain works, and an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It can store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
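A Kanerva-style sparse distributed memory can be sketched in a few lines; the dimensions, number of hard locations, and access radius below are illustrative choices, not project parameters.

```python
# Compact sketch of a Kanerva-style sparse distributed memory; dimensions,
# number of hard locations and the Hamming access radius are assumptions.
import numpy as np

class SDM:
    def __init__(self, n_bits=256, n_locations=2000, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, (n_locations, n_bits))
        self.counters = np.zeros((n_locations, n_bits), dtype=np.int32)
        self.radius = radius

    def _active(self, address):
        # Activate every hard location within the Hamming radius of the cue.
        return np.sum(self.addresses != address, axis=1) <= self.radius

    def write(self, address, data):
        bipolar = 2 * data - 1            # map {0,1} -> {-1,+1}
        self.counters[self._active(address)] += bipolar

    def read(self, address):
        summed = self.counters[self._active(address)].sum(axis=0)
        return (summed > 0).astype(int)   # majority vote per bit

rng = np.random.default_rng(1)
pattern = rng.integers(0, 2, 256)
mem = SDM()
mem.write(pattern, pattern)               # autoassociative storage
noisy = pattern.copy()
noisy[:20] ^= 1
print(np.mean(mem.read(noisy) == pattern))  # recall from a partial cue
```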
Propulsion Controls and Diagnostics Research at NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
Garg, Sanjay
2007-01-01
With the increased emphasis on aircraft safety, enhanced performance and affordability, and the need to reduce the environmental impact of aircraft, there are many new challenges being faced by the designers of aircraft propulsion systems. Also the propulsion systems required to enable the National Aeronautics and Space Administration (NASA) Vision for Space Exploration in an affordable manner will need to have high reliability, safety and autonomous operation capability. The Controls and Dynamics Branch (CDB) at NASA Glenn Research Center (GRC) in Cleveland, Ohio, is leading and participating in various projects in partnership with other organizations within GRC and across NASA, the U.S. aerospace industry, and academia to develop advanced controls and health management technologies that will help meet these challenges through the concept of Intelligent Propulsion Systems. This paper describes the current activities of the CDB under the NASA Aeronautics Research and Exploration Systems Missions. The programmatic structure of the CDB activities is described along with a brief overview of each of the CDB tasks including research objectives, technical challenges, and recent accomplishments. These tasks include active control of propulsion system components, intelligent propulsion diagnostics and control for reliable fault identification and accommodation, distributed engine control, and investigations into unsteady propulsion systems.
A Concept of Operations for an Integrated Vehicle Health Assurance System
NASA Technical Reports Server (NTRS)
Hunter, Gary W.; Ross, Richard W.; Berger, David E.; Lekki, John D.; Mah, Robert W.; Perey, Daniel F.; Schuet, Stefan R.; Simon, Donald L.; Smith, Stephen W.
2013-01-01
This document describes a Concept of Operations (ConOps) for an Integrated Vehicle Health Assurance System (IVHAS). This ConOps is associated with the Maintain Vehicle Safety (MVS) between Major Inspections Technical Challenge in the Vehicle Systems Safety Technologies (VSST) Project within NASA's Aviation Safety Program. In particular, this document describes an integrated system concept for vehicle health assurance that combines ground-based inspection and repair information with in-flight measurement data for airframe, propulsion, and avionics subsystems. The MVS Technical Challenge intends to maintain vehicle safety between major inspections by developing and demonstrating new integrated health management and failure prevention technologies that assure the integrity of vehicle systems between major inspection intervals and maintain vehicle state awareness during flight. The approach provided by this ConOps is intended to help optimize technology selection and development, allow the initial integration and demonstration of these subsystem technologies over the 5-year span of the VSST program, and serve as a guideline for developing IVHAS technologies under the Aviation Safety Program within the next 5 to 15 years. A long-term vision of IVHAS is provided as a basic roadmap for more intelligent and autonomous vehicle systems.
Infrared sensors and systems for enhanced vision/autonomous landing applications
NASA Technical Reports Server (NTRS)
Kerr, J. Richard
1993-01-01
There exists a large body of data spanning more than two decades, regarding the ability of infrared imagers to 'see' through fog, i.e., in Category III weather conditions. Much of this data is anecdotal, highly specialized, and/or proprietary. In order to determine the efficacy and cost effectiveness of these sensors under a variety of climatic/weather conditions, there is a need for systematic data spanning a significant range of slant-path scenarios. These data should include simultaneous video recordings at visible, midwave (3-5 microns), and longwave (8-12 microns) wavelengths, with airborne weather pods that include the capability of determining the fog droplet size distributions. Existing data tend to show that infrared is more effective than would be expected from analysis and modeling. It is particularly more effective for inland (radiation) fog as compared to coastal (advection) fog, although both of these archetypes are oversimplifications. In addition, as would be expected from droplet size vs wavelength considerations, longwave outperforms midwave, in many cases by very substantial margins. Longwave also benefits from the higher level of available thermal energy at ambient temperatures. The principal attraction of midwave sensors is that staring focal plane technology is available at attractive cost-performance levels. However, longwave technology such as that developed at FLIR Systems, Inc. (FSI), has achieved high performance in small, economical, reliable imagers utilizing serial-parallel scanning techniques. In addition, FSI has developed dual-waveband systems particularly suited for enhanced vision flight testing. These systems include a substantial, embedded processing capability which can perform video-rate image enhancement and multisensor fusion. This is achieved with proprietary algorithms and includes such operations as real-time histograms, convolutions, and fast Fourier transforms.
Agent Technology, Complex Adaptive Systems, and Autonomic Systems: Their Relationships
NASA Technical Reports Server (NTRS)
Truszkowski, Walt; Rash, James; Rouff, Christopher; Hinchey, Mike
2004-01-01
To reduce the cost of future spaceflight missions and to perform new science, NASA has been investigating autonomous ground and space flight systems. These cost-reduction goals are further complicated by nanosatellites for future science data-gathering, which will have large communications delays and will at times be out of contact with ground control for extended periods. This paper describes two prototype agent-based systems, the Lights-Out Ground Operations System (LOGOS) and the Agent Concept Testbed (ACT), developed at NASA Goddard Space Flight Center (GSFC) to demonstrate autonomous operations of future space flight missions. The paper discusses the architecture of the two agent-based systems, operational scenarios for both, and the two systems' autonomic properties.
KAM tori and whiskered invariant tori for non-autonomous systems
NASA Astrophysics Data System (ADS)
Canadell, Marta; de la Llave, Rafael
2015-08-01
We consider non-autonomous dynamical systems which converge to autonomous (or periodic) systems exponentially fast in time. Such systems appear naturally as models of many physical processes affected by external pulses. We introduce definitions of non-autonomous invariant tori and non-autonomous whiskered tori and their invariant manifolds, and we prove their persistence under small perturbations, smooth dependence on parameters, and several geometric properties (if the systems are Hamiltonian, the tori are Lagrangian manifolds). We note that such definitions are problematic for general time-dependent systems, but we show that they are unambiguous for systems converging exponentially fast to autonomous ones. The proof of persistence relies only on a standard implicit function theorem in Banach spaces; it does not require that the rotations in the tori be Diophantine, nor that the systems we consider preserve any geometric structure. We only require that the autonomous system preserve these objects. In particular, when the autonomous system is integrable, we obtain the persistence of tori with rational rotation. We also discuss fast and efficient algorithms for their computation. The method also applies to infinite-dimensional systems which define a good evolution, e.g., PDEs. When the systems considered are Hamiltonian, we show that the time-dependent invariant tori are isotropic; hence the invariant tori of maximal dimension are Lagrangian manifolds. We also obtain that the (un)stable manifolds of whiskered tori are Lagrangian manifolds. We include a comparison with the more global theory developed in Blazevski and de la Llave (2011).
Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue
NASA Technical Reports Server (NTRS)
Zornetzer, Steve; Gage, Douglas
2005-01-01
Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.
Human physiological responses to wooden indoor environment.
Zhang, Xi; Lian, Zhiwei; Wu, Yong
2017-05-15
Previous studies have mainly focused on non-wooden environments, whereas few are concerned with wooden ones. How wooden indoor environments affect the physiology of occupants is still unclear. The purpose of this study was to explore the distinct physiological responses to wooden and non-wooden indoor environments, assessed by physiological tests including blood pressure, electrocardiogram measurements, electro-dermal activity, oxyhemoglobin saturation, skin temperature, and near-distance vision. Twenty healthy adults participated in the experiment, and their physiological responses were evaluated in a 90-minute investigation. The results showed that less tension and fatigue were generated in the wooden rooms than in the non-wooden rooms while the participants did their work. The study also found that wooden environments benefit the autonomic nervous system, respiratory system, and visual system, and that wooden rooms play a valuable role in physiological regulation and easing function, especially after a consecutive period of work. These results provide an experimental basis for the claim that a wooden indoor environment is more beneficial to occupants than a non-wooden one. Copyright © 2017 Elsevier Inc. All rights reserved.
Cybersecurity for aerospace autonomous systems
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2015-05-01
High-profile breaches have occurred across numerous information systems. One area where attacks are particularly problematic is autonomous control systems. This paper considers the aerospace information system, focusing on elements that interact with autonomous control systems (e.g., onboard UAVs). It discusses the trust placed in autonomous systems and their supporting systems (e.g., navigational aids) and how this trust can be validated. Approaches for remotely detecting UAV compromise without relying on the onboard software (on a potentially compromised system) as part of the process are discussed, along with how different levels of autonomy (task-based, goal-based, mission-based) affect this remote characterization.
Insights into the background of autonomic medicine.
Laranjo, Sérgio; Geraldes, Vera; Oliveira, Mário; Rocha, Isabel
2017-10-01
Knowledge of the physiology underlying the autonomic nervous system is pivotal for understanding autonomic dysfunction in clinical practice. Autonomic dysfunction may result from primary modifications of the autonomic nervous system or be secondary to a wide range of diseases that cause severe morbidity and mortality. Together with a detailed history and physical examination, laboratory assessment of autonomic function is essential for the analysis of various clinical conditions and the establishment of effective, personalized and precise therapeutic schemes. This review summarizes the main aspects of autonomic medicine that constitute the background of cardiovascular autonomic dysfunction. Copyright © 2017 Sociedade Portuguesa de Cardiologia. Published by Elsevier España, S.L.U. All rights reserved.
Towards autonomous fuzzy control
NASA Technical Reports Server (NTRS)
Shenoi, Sujeet; Ramer, Arthur
1993-01-01
The efficient implementation of on-line adaptation in real time is an important research problem in fuzzy control. The goal is to develop autonomous self-organizing controllers employing system-independent control meta-knowledge which enables them to adjust their control policies depending on the systems they control and the environments in which they operate. An autonomous fuzzy controller would continuously observe system behavior while implementing its control actions and would use the outcomes of these actions to refine its control policy. It could be designed to lie dormant when its control actions give rise to adequate performance characteristics but could rapidly and autonomously initiate real-time adaptation whenever its performance degrades. Such an autonomous fuzzy controller would have immense practical value. It could accommodate individual variations in system characteristics and also compensate for degradations in system characteristics caused by wear and tear. It could also potentially deal with black-box systems and control scenarios. On-going research in autonomous fuzzy control is reported. The ultimate research objective is to develop robust and relatively inexpensive autonomous fuzzy control hardware suitable for use in real time environments.
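One minimal reading of such a self-organizing controller is a fuzzy rule table whose consequents are nudged online by the remaining error (a Procyk-Mamdani-style credit assignment). The sketch below assumes a positive-gain plant and illustrative membership shapes; it is not the authors' design.

```python
# Sketch of a self-organizing fuzzy controller, assuming a positive-gain
# plant; membership shapes, learning rate and the toy plant are illustrative.
import numpy as np

centers = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])      # fuzzy sets over error
consequents = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # initial rule actions

def memberships(e, width=0.5):
    """Triangular membership of error e in each fuzzy set."""
    return np.clip(1.0 - np.abs(e - centers) / width, 0.0, 1.0)

def fuzzy_control(e):
    """Weighted-average defuzzification over the active rules."""
    mu = memberships(np.clip(e, -1.0, 1.0))
    return float(mu @ consequents / (mu.sum() + 1e-9))

def adapt(e, e_next, lr=0.1):
    """Self-organizing step: rules that fired are credited with the error
    remaining after their action took effect (positive plant gain assumed)."""
    mu = memberships(np.clip(e, -1.0, 1.0))
    consequents[:] += lr * mu * e_next

# Toy closed loop: first-order plant x' = x + 0.1*u, setpoint 0.8.
x, target = 0.0, 0.8
for _ in range(200):
    e = target - x
    x += 0.1 * fuzzy_control(e)
    adapt(e, target - x)
print(round(x, 3))   # settles near the setpoint
```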
Autonomous and Autonomic Systems: A Paradigm for Future Space Exploration Missions
NASA Technical Reports Server (NTRS)
Truszkowski, Walter F.; Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2004-01-01
NASA increasingly will rely on autonomous systems concepts, not only in mission control centers on the ground but also on spacecraft and on rovers and other assets on extraterrestrial bodies. Autonomy enables not only reduced operations costs but also adaptable, goal-driven functionality of mission systems. Space missions lacking autonomy will be unable to achieve the full range of advanced mission objectives, given that human control under dynamic environmental conditions will not be feasible due, in part, to the unavoidably high signal propagation latency and constrained data rates of mission communications links. While autonomy cost-effectively supports accomplishment of mission goals, autonomicity supports survivability of remote mission assets, especially when human tending is not feasible. Autonomic system properties (which ensure self-configuring, self-optimizing, self-healing, and self-protecting behavior) may conceptually enable space missions of a higher order than any previously flown. Analysis of two NASA agent-based systems previously prototyped, and of a proposed future mission involving numerous cooperating spacecraft, illustrates how autonomous and autonomic system concepts may be brought to bear on future space missions.
Towards Autonomous Operations of the Robonaut 2 Humanoid Robotic Testbed
NASA Technical Reports Server (NTRS)
Badger, Julia; Nguyen, Vienny; Mehling, Joshua; Hambuchen, Kimberly; Diftler, Myron; Luna, Ryan; Baker, William; Joyce, Charles
2016-01-01
The Robonaut project has been conducting research in robotics technology on board the International Space Station (ISS) since 2012. Recently, the original upper-body humanoid robot was upgraded by the addition of two climbing manipulators ("legs"), more capable processors, and new sensors, as shown in Figure 1. While Robonaut 2 (R2) has been working through checkout exercises on orbit following the upgrade, technology development on the ground has continued to advance. Through the Active Reduced Gravity Offload System (ARGOS), the Robonaut team has been able to develop technologies that will enable full operation of the robotic testbed on orbit, using similar robots located at the Johnson Space Center. Once these technologies have been vetted in this way, they will be implemented and tested on the R2 unit on board the ISS. The goal of this work is to create a fully featured robotics research platform on board the ISS to increase the technology readiness level of technologies that will aid in future exploration missions. Technology development has thus far followed two main paths: autonomous climbing and efficient tool manipulation. Central to both has been the incorporation of a human-robot interaction paradigm that involves the visualization of sensory and pre-planned command data together with models of the robot and its environment. Figure 2 shows screenshots of these interactive tools, built in rviz, that are used to develop and implement these technologies on R2. Robonaut 2 is designed to move along the handrails and seat track around the US lab inside the ISS. This is difficult for many reasons: the environment is cluttered and constrained, the robot has many degrees of freedom (DOF) it can utilize for climbing, and remote commanding for precision tasks such as grasping handrails is time-consuming and difficult. It is therefore important to develop the technologies needed to allow the robot to reach operator-specified positions as autonomously as possible. The most important progress in this area has been the work towards efficient path planning for high-DOF, highly constrained systems. Other advances include machine vision algorithms for localizing and automatically docking with handrails, the ability of the operator to place obstacles in the robot's virtual environment, autonomous obstacle avoidance techniques, and constraint management.
Information for Successful Interaction with Autonomous Systems
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Johnson, Kathy A.
2003-01-01
Interaction in heterogeneous mission operations teams is not well matched to classical models of coordination with autonomous systems. We describe methods of loose coordination and information management in mission operations. We describe an information agent and information management tool suite for managing information from many sources, including autonomous agents. We present an integrated model of levels of complexity of agent and human behavior, which shows types of information processing and points of potential error in agent activities. We discuss the types of information needed for diagnosing problems and planning interactions with an autonomous system. We discuss types of coordination for which designs are needed for autonomous system functions.
Bayes filter modification for drivability map estimation with observations from stereo vision
NASA Astrophysics Data System (ADS)
Panchenko, Aleksei; Prun, Viktor; Turchenkov, Dmitri
2017-02-01
Reconstruction of a drivability map for a moving vehicle is a well-known research topic in applied robotics. Here, the creation of such a map for an autonomous truck on a generally planar surface containing separate obstacles is considered. The source of measurements for the truck is a calibrated pair of cameras. The stereo system detects and reconstructs several types of objects, such as road borders, other vehicles, pedestrians, and general tall or highly saturated objects (e.g., road cones). To create a robust mapping module we use a modification of Bayes filtering that introduces novel techniques for the occupancy-map update step. Specifically, our modified version remains applicable in the presence of false-positive measurement errors, stereo shading, and obstacle occlusion. We implemented the technique and achieved real-time 15 FPS computation on an industrial shake-proof PC. Our real-world experiments show the positive effect of the filtering step.
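The underlying Bayes-filter machinery is commonly implemented as a log-odds occupancy-grid update; the sketch below shows that baseline with a clamping bound of the kind that guards against false-positive stereo detections. The increments and bound are assumed values, not the paper's.

```python
# Minimal log-odds occupancy-grid update; increments and clamp are assumptions.
import numpy as np

L_OCC, L_FREE, L_CLAMP = 0.85, -0.4, 5.0   # log-odds increments / bound

def update_cell(l_prev, hit):
    """Standard Bayes-filter update in log-odds form, with clamping so a
    single spurious stereo detection cannot saturate a cell."""
    l_new = l_prev + (L_OCC if hit else L_FREE)
    return np.clip(l_new, -L_CLAMP, L_CLAMP)

def occupancy_prob(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))

# Example: a cell observed occupied twice, then free once.
l = 0.0
for hit in (True, True, False):
    l = update_cell(l, hit)
print(round(occupancy_prob(l), 3))
```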
Gaze-contingent soft tissue deformation tracking for minimally invasive robotic surgery.
Mylonas, George P; Stoyanov, Danail; Deligianni, Fani; Darzi, Ara; Yang, Guang-Zhong
2005-01-01
The introduction of surgical robots in Minimally Invasive Surgery (MIS) has allowed enhanced manual dexterity through the use of microprocessor-controlled mechanical wrists. Although fully autonomous robots are attractive, both ethical and legal barriers can prohibit their practical use in surgery. The purpose of this paper is to demonstrate that it is possible to use real-time binocular eye tracking to empower robots with human vision, using knowledge acquired in situ. By exploiting the close relationship between horizontal disparity and depth perception at varying viewing distances, ocular vergence can be used to recover 3D motion and deformation of soft tissue during MIS procedures. Both phantom and in vivo experiments were carried out to assess the potential frequency limit of the system and its intrinsic depth recovery accuracy. Potential applications of the technique include motion stabilization and intra-operative planning in the presence of large tissue deformation.
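The disparity/vergence geometry the system exploits reduces to classical triangulation; the toy functions below use an illustrative interocular baseline and focal length, not the calibrated values from the study.

```python
# Back-of-envelope sketch of depth from disparity and from vergence angle;
# the baseline and focal length are illustrative numbers.
import math

def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.065):
    """Classic stereo triangulation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_from_vergence(vergence_rad, baseline_m=0.065):
    """Fixation depth from the binocular vergence angle:
    Z ~ (B / 2) / tan(theta / 2)."""
    return (baseline_m / 2.0) / math.tan(vergence_rad / 2.0)

print(round(depth_from_disparity(20.0), 3))            # ~2.6 m
print(round(depth_from_vergence(math.radians(3)), 3))  # ~1.24 m
```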
Autonomous characterization of plastic-bonded explosives
NASA Astrophysics Data System (ADS)
Linder, Kim Dalton; DeRego, Paul; Gomez, Antonio; Baumgart, Chris
2006-08-01
Plastic-Bonded Explosives (PBXs) are a newer generation of explosive compositions developed at Los Alamos National Laboratory (LANL). Understanding the micromechanical behavior of these materials is critical: the size of the crystal particles and the porosity within the PBX influence its shock sensitivity. Current methods to characterize the prominent structural features include manual examination by scientists and attempts to use commercially available image processing packages; both are time-consuming and tedious. LANL personnel, recognizing this as a manually intensive process, have worked with the Kansas City Plant / Kirtland Operations to develop a system that uses image processing and pattern recognition techniques to characterize PBX material. System hardware consists of a CCD camera, a zoom lens, a two-dimensional motorized stage, and coaxial, cross-polarized light. Integration of this hardware with custom software is at the core of the machine vision system. Fundamental processing steps involve capturing images of the PBX specimen and extracting void, crystal, and binder regions. For crystal extraction, a quadtree-decomposition segmentation technique is employed. Benefits of this system include: (1) reduction of the overall characterization time; (2) a process that is quantifiable and repeatable; (3) utilization of personnel for intelligent review rather than manual processing; and (4) significantly enhanced characterization accuracy.
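Quadtree decomposition itself is compact enough to sketch: a block is split while it fails a homogeneity test. The threshold and minimum block size below are placeholders, not the system's calibrated parameters.

```python
# Minimal quadtree-decomposition segmentation sketch; the homogeneity
# threshold and minimum block size are placeholder assumptions.
import numpy as np

def quadtree(img, x, y, size, thresh=20, min_size=4, out=None):
    """Recursively split inhomogeneous blocks; collect homogeneous leaves."""
    if out is None:
        out = []
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.max() - block.min() <= thresh:
        out.append((x, y, size, float(block.mean())))
        return out
    half = size // 2
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        quadtree(img, x + dx, y + dy, half, thresh, min_size, out)
    return out

img = np.zeros((64, 64), dtype=np.uint8)
img[16:48, 16:48] = 200                  # a bright "crystal" on dark binder
leaves = quadtree(img, 0, 0, 64)
print(len(leaves), "homogeneous blocks")
```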
Autonomous Navigation Using Celestial Objects
NASA Technical Reports Server (NTRS)
Folta, David; Gramling, Cheryl; Leung, Dominic; Belur, Sheela; Long, Anne
1999-01-01
In the twenty-first century, National Aeronautics and Space Administration (NASA) Enterprises envision frequent low-cost missions to explore the solar system, observe the universe, and study our planet. Satellite autonomy is a key technology required to reduce satellite operating costs. The Guidance, Navigation, and Control Center (GNCC) at the Goddard Space Flight Center (GSFC) currently sponsors several initiatives associated with the development of advanced spacecraft systems to provide autonomous navigation and control. Autonomous navigation has the potential both to increase spacecraft navigation system performance and to reduce total mission cost. By eliminating the need for routine ground-based orbit determination and special tracking services, autonomous navigation can streamline spacecraft ground systems. Autonomous navigation products can be included in the science telemetry and forwarded directly to the scientific investigators. In addition, autonomous navigation products are available onboard to enable other autonomous capabilities, such as attitude control, maneuver planning and orbit control, and communications signal acquisition. Autonomous navigation is required to support advanced mission concepts such as satellite formation flying. GNCC has successfully developed high-accuracy autonomous navigation systems for near-Earth spacecraft using NASA's space and ground communications systems and the Global Positioning System (GPS). Recently, GNCC has expanded its autonomous navigation initiative to include satellite orbits that are beyond the regime in which use of GPS is possible. Currently, GNCC is assessing the feasibility of using standard spacecraft attitude sensors and communication components to provide autonomous navigation for missions including: libration point, gravity assist, high-Earth, and interplanetary orbits. The concept being evaluated uses a combination of star, Sun, and Earth sensor measurements along with forward-link Doppler measurements from the command link carrier to autonomously estimate the spacecraft's orbit and reference oscillator's frequency. To support autonomous attitude determination and control and maneuver planning and control, the orbit determination accuracy should be on the order of kilometers in position and centimeters per second in velocity. A less accurate solution (one hundred kilometers in position) could be used for acquisition purposes for command and science downloads. This paper provides performance results for both libration point orbiting and high Earth orbiting satellites as a function of sensor measurement accuracy, measurement types, measurement frequency, initial state errors, and dynamic modeling errors.
Unified formalism for higher order non-autonomous dynamical systems
NASA Astrophysics Data System (ADS)
Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso
2012-03-01
This work is devoted to giving a geometric framework for describing higher order non-autonomous mechanical systems. The starting point is to extend the Lagrangian-Hamiltonian unified formalism of Skinner and Rusk for these kinds of systems, generalizing previous developments for higher order autonomous mechanical systems and first-order non-autonomous mechanical systems. Then, we use this unified formulation to derive the standard Lagrangian and Hamiltonian formalisms, including the Legendre-Ostrogradsky map and the Euler-Lagrange and the Hamilton equations, both for regular and singular systems. As applications of our model, two examples of regular and singular physical systems are studied.
Autonomous planetary rover at Carnegie Mellon
NASA Technical Reports Server (NTRS)
Whittaker, William; Kanade, Takeo; Mitchell, Tom
1990-01-01
This report describes progress in research on an autonomous robot for planetary exploration. In 1989, the year covered by this report, a six-legged walking robot, the Ambler, was configured, designed, and constructed. This configuration was used to overcome shortcomings exhibited by existing wheeled and walking robot mechanisms. The fundamental advantage of the Ambler is that the actuators for body support are independent of those for propulsion; a subset of the planar joints propel the body, and the vertical actuators support and level the body over terrain. Models of the Ambler's dynamics were developed and the leveling control was studied. An integrated system capable of walking with a single leg over rugged terrain was implemented and tested. A prototype of an Ambler leg is suspended below a carriage that slides along rails. To walk, the system uses a laser scanner to find a clear, flat foothold, positions the leg above the foothold, contacts the terrain with the foot, and applies force enough to advance the carriage along the rails. Walking both forward and backward, the system has traversed hundreds of meters of rugged terrain including obstacles too tall to step over, trenches too deep to step in, closely spaced rocks, and sand hills. In addition, preliminary experiments were conducted with concurrent planning and execution, and a leg recovery planner that generates time and power efficient 3D trajectories using 2D search was developed. A Hero robot was used to demonstrate mobile manipulation. Indoor tasks include collecting cups from the lab floor, retrieving printer output, and recharging when its battery gets low. The robot monitors its environment, and handles exceptional conditions in a robust fashion, using vision to track the appearance and disappearance of cups, onboard sonars to detect imminent collisions, and monitors to detect the battery level.
Engineering Review of ANCAUS/AVATAR: An Enabling Technology for the Autonomous Land Systems Program?
2003-12-01
technology for future Autonomous Land System (ALS) autonomous vehicles. Since 1989, forward-thinking engineering has characterized the history of ANC/EUS and...technology for future autonomous vehicles and that; (2) ALS should adopt commercial/open-source technology to support a new ALS architecture and (3) ALS
Autonomous Agents: The Origins and Co-Evolution of Reproducing Molecular Systems
NASA Technical Reports Server (NTRS)
Kauffman, Stuart
1999-01-01
The central aim of this award concerned an investigation into, and an adequate formulation of, the concept of an "autonomous agent." If we consider a bacterium swimming upstream in a glucose gradient, we are willing to say of the bacterium that it is going to get food; that is, we describe the bacterium as acting on its own behalf in an environment. All free-living cells are, in this sense, autonomous agents. But the bacterium is "just" a set of molecules. We define an autonomous agent as a physical system able to act on its own behalf in an environment, and then ask: what must a physical system be to be an autonomous agent? The tentative definition of a molecular autonomous agent is that it must be self-reproducing and carry out at least one thermodynamic work cycle. The work carried out under this grant involved, among other things, the development of a detailed model of a molecular autonomous agent and a study of the kinetics of this system. In particular, a molecular autonomous agent must, by the above tentative definition, not only reproduce but also carry out at least one work cycle. As a simple example of a self-reproducing molecular system, I took the single-stranded DNA hexamer 3'CCGCGG5', which can line up and ligate its two complementary trimers, 5'CCG3' and 5'CGG3'. The two ligated trimers constitute the same molecular sequence in the 3' to 5' direction as the initial hexamer; hence this system is autocatalytic. On the other hand, the above system is not yet an autonomous agent. At a minimum, autonomous agents, as I have defined them, are a new class of chemical reaction network. At a maximum, they may constitute a proper definition of life itself.
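A toy mass-action model conveys the autocatalytic kinetics: the hexamer templates the ligation of its two trimers into a new copy of itself. The rate constant, concentrations, and simple Euler integration below are illustrative assumptions, not values from the grant work.

```python
# Toy rate sketch of template-directed autocatalytic ligation: hexamer H
# templates T1 + T2 -> H. Rate constant and concentrations are assumptions.
def simulate(h0=1e-6, t0=1e-3, k=1e3, dt=1.0, steps=5000):
    """Euler integration of dH/dt = k * H * T1 * T2 (mass action), with the
    trimers T1, T2 consumed stoichiometrically."""
    H, T1, T2 = h0, t0, t0
    for _ in range(steps):
        rate = k * H * T1 * T2
        H += rate * dt
        T1 -= rate * dt
        T2 -= rate * dt
    return H, T1

H, T1 = simulate()
print(f"hexamer: {H:.2e} M, remaining trimer: {T1:.2e} M")
```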
Swarm autonomic agents with self-destruct capability
NASA Technical Reports Server (NTRS)
Hinchey, Michael G. (Inventor); Sterritt, Roy (Inventor)
2009-01-01
Systems, methods, and apparatus are provided through which, in some embodiments, an autonomic entity manages a system by generating one or more stay-alive signals based on the functioning status and operating state of the system. In some embodiments, an evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy. The evolvable neural interface receives and generates heartbeat monitor signals and pulse monitor signals that are used to generate a stay-alive signal that manages the operations of the synthetic neural system. In another embodiment, an asynchronous Alice signal (Autonomic License) requiring valid credentials of an anonymous autonomous agent is initiated; an unsatisfactory Alice exchange may lead to self-destruction of the anonymous autonomous agent for self-protection.
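In schematic form (not the patented design), the stay-alive and Alice-exchange logic might look like the following; the timeout and the credential check are placeholders.

```python
# Schematic sketch of stay-alive monitoring with self-destruct on a lost
# heartbeat or a failed Alice exchange; timeout and token are placeholders.
import time

class AutonomicAgent:
    def __init__(self, heartbeat_timeout=5.0):
        self.timeout = heartbeat_timeout
        self.last_beat = time.monotonic()
        self.alive = True

    def heartbeat(self):
        """Called when a stay-alive signal arrives from the hierarchy."""
        self.last_beat = time.monotonic()

    def alice_exchange(self, credentials, expected="trusted-token"):
        """Autonomic license check; failure is treated as compromise."""
        if credentials != expected:
            self.self_destruct("failed Alice exchange")

    def check(self):
        """Periodic watchdog: no recent heartbeat means self-destruction."""
        if time.monotonic() - self.last_beat > self.timeout:
            self.self_destruct("stay-alive signal lost")

    def self_destruct(self, reason):
        # Self-protection: the agent withdraws itself from the swarm.
        self.alive = False
        print(f"agent terminating: {reason}")

agent = AutonomicAgent(heartbeat_timeout=0.1)
time.sleep(0.2)
agent.check()   # no heartbeat arrived -> self-destruct
```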