An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database
Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang
2016-01-01
Vision navigation, which determines position and attitude by processing images from imaging sensors in real time, offers an advantage in that it does not require a high-performance global positioning system (GPS) or inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, deep-space navigation, and multi-sensor-integrated mobile mapping. This paper proposes a novel imaging sensor-aided vision navigation approach that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multi-sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established, based on the linear index of a road segment, for fast image search and retrieval. Third, a robust image matching algorithm is presented to search the GRID and match its images against a real-time image. The image matched to the real-time scene is then used to calculate the 3D navigation parameters of the multi-sensor platform. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in the horizontal plane and 1.8 m in height during GPS outages of up to 5 min and over distances within 1500 m. PMID:26828496
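As a rough sketch of the retrieval-and-matching idea described in this abstract (not the authors' GRID algorithm), the following Python snippet scores a live image against candidate geo-referenced images with ORB features and a ratio test; the function names, thresholds and the (image, geo_pose) candidate structure are assumptions for illustration.

```python
# Illustrative only: feature-based scoring of a query image against candidate
# geo-referenced images, as a stand-in for a GRID search-and-match step.
import cv2
import numpy as np

def match_score(query_img, candidate_img, ratio=0.75):
    """Count ORB matches passing Lowe's ratio test between two grayscale images."""
    orb = cv2.ORB_create(nfeatures=1000)
    _, des1 = orb.detectAndCompute(query_img, None)
    _, des2 = orb.detectAndCompute(candidate_img, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)

def best_candidate(query_img, candidates):
    """candidates: list of (image, geo_pose) tuples; returns the best geo_pose."""
    scores = [match_score(query_img, img) for img, _ in candidates]
    return candidates[int(np.argmax(scores))][1]
```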
Application of aircraft navigation sensors to enhanced vision systems
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.
1993-01-01
In this presentation, the applicability of various aircraft navigation sensors to enhanced vision system design is discussed. First, the accuracy requirements of the FAA for precision landing systems are presented, followed by the current navigation systems and their characteristics. These systems include Instrument Landing System (ILS), Microwave Landing System (MLS), Inertial Navigation, Altimetry, and Global Positioning System (GPS). Finally, the use of navigation system data to improve enhanced vision systems is discussed. These applications include radar image rectification, motion compensation, and image registration.
Navigation integrity monitoring and obstacle detection for enhanced-vision systems
NASA Astrophysics Data System (ADS)
Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter
2001-08-01
Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision depends on both the accuracy of the database used and the integrity of the navigation data. Especially in GPS-based systems, however, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot; unexpected obstacles are therefore invisible, which can cause severe problems. Additional information thus has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper is the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification of radar image objects into known and unknown and, consequently, in a validation of the integrity of the database and navigation. Furthermore, specific runway structures are searched for in the radar image where they should appear; the outcome of this runway check also contributes to the integrity analysis. In parallel with this investigation, radar image-based navigation is performed, using neither precision navigation nor detailed database information, to determine the aircraft's position relative to the runway. The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.
Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio
2016-01-01
Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as those relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) by exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments, and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the flight tests, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information. PMID:27999318
NASA Technical Reports Server (NTRS)
Christian, John A.; Patangan, Mogi; Hinkel, Heather; Chevray, Keiko; Brazzel, Jack
2012-01-01
The Orion Multi-Purpose Crew Vehicle is a new spacecraft being designed by NASA and Lockheed Martin for future crewed exploration missions. The Vision Navigation Sensor is a Flash LIDAR that will be the primary relative navigation sensor for this vehicle. To obtain a better understanding of this sensor's performance, the Orion relative navigation team has performed both flight tests and ground tests. This paper summarizes and compares the performance results from the STS-134 flight test, called the Sensor Test for Orion RelNav Risk Mitigation (STORRM) Development Test Objective, and the ground tests at the Space Operations Simulation Center.
NASA Astrophysics Data System (ADS)
Jeong, Junho; Kim, Seungkeun; Suk, Jinyoung
2017-12-01
In order to overcome the limited range of GPS-based techniques, vision-based relative navigation methods have recently emerged as alternative approaches for high Earth orbit (HEO) or deep space missions. Various vision-based relative navigation systems are therefore used for proximity operations between two spacecraft. When implementing these systems, a sensor placement problem can arise on the exterior of the spacecraft because of its limited surface area. To deal with the sensor placement, this paper proposes a novel methodology for vision-based relative navigation based on multiple position sensitive diode (PSD) sensors and multiple infrared beacon modules. The proposed method uses an iterated parametric study based on farthest point optimization (FPO) and a constrained extended Kalman filter (CEKF). These algorithms are applied, respectively, to set the locations of the sensors and to estimate the relative position and attitude for each combination of PSDs and beacons. Scores for each sensor placement are then calculated with respect to three parameters: the number of PSDs, the number of beacons, and the accuracy of the relative estimates. The best-scoring candidate is selected as the sensor placement. Moreover, the results of the iterated estimation show that the accuracy improves dramatically as the number of PSDs increases from one to three.
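The abstract mentions farthest point optimization (FPO) for placing the PSD sensors. The authors' formulation is not reproduced here; the snippet below is only a generic greedy farthest-point-sampling sketch over candidate mounting positions, with all names and array shapes assumed.

```python
# Generic farthest-point sampling over candidate mounting positions;
# a stand-in for the FPO step, not the paper's exact formulation.
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedily pick k well-spread locations from an (N, 3) array of candidates."""
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(points)))]
    dists = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))          # point farthest from the chosen set
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]
```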
NASA Astrophysics Data System (ADS)
Tramutola, A.; Paltro, D.; Cabalo Perucha, M. P.; Paar, G.; Steiner, J.; Barrio, A. M.
2015-09-01
Vision Based Navigation (VBNAV) has been identified as a valid technology to support space exploration because it can improve the autonomy and safety of space missions. Several mission scenarios can benefit from VBNAV: Rendezvous & Docking, Fly-Bys, Interplanetary Cruise, Entry Descent and Landing (EDL) and Planetary Surface exploration. For some of them, VBNAV can improve the accuracy of state estimation as an additional relative navigation sensor or as an absolute navigation sensor. For others, like surface mobility and terrain exploration for path identification and planning, VBNAV is mandatory. This paper presents the general avionic architecture of a Vision Based System as defined in the frame of the ESA R&T study “Multi-purpose Vision-based Navigation System Engineering Model - part 1 (VisNav-EM-1)”, with special focus on the surface mobility application.
Improving Car Navigation with a Vision-Based System
NASA Astrophysics Data System (ADS)
Kim, H.; Choi, K.; Lee, I.
2015-08-01
The real-time acquisition of accurate positions is essential for the proper operation of driver assistance systems and autonomous vehicles. Because current systems mostly depend on GPS and map-matching techniques, they perform poorly and unreliably in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with other sensory data in a sensor fusion framework, using an extended Kalman filter, for more accurate position estimation. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
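The pose-from-image step described here (single photo resection) can be computed with a standard Perspective-n-Point solver; the sketch below uses OpenCV's solvePnP as a stand-in for the paper's resection step, with placeholder intrinsics and assumed function names.

```python
# Single photo resection sketch via PnP; intrinsics below are placeholders.
import cv2
import numpy as np

def resect_camera(object_pts, image_pts, K, dist=None):
    """Estimate camera pose from known 3D ground points and their image observations.
    object_pts: (N, 3) world coordinates, image_pts: (N, 2) pixel coordinates."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_pts, dtype=np.float32),
        np.asarray(image_pts, dtype=np.float32),
        K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("resection failed")
    R, _ = cv2.Rodrigues(rvec)
    cam_position = -R.T @ tvec          # camera centre in world coordinates
    return R, cam_position

# Placeholder pinhole intrinsics for illustration only.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
```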
ARK: Autonomous mobile robot in an industrial environment
NASA Technical Reports Server (NTRS)
Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.
1994-01-01
This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons; the robot relies on naturally occurring objects as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, a novel combined range and vision sensor, and our recent results in controlling the robot, in the real-time detection of objects using their color, and in the processing of the robot's range and vision sensor data for navigation.
Vision Based Navigation for Autonomous Cooperative Docking of CubeSats
NASA Astrophysics Data System (ADS)
Pirat, Camille; Ankersen, Finn; Walker, Roger; Gass, Volker
2018-05-01
A realistic rendezvous and docking navigation solution applicable to CubeSats is investigated. A scalability analysis of the ESA Automated Transfer Vehicle Guidance, Navigation & Control (GNC) performance and of the Russian docking system shows that docking two CubeSats would require a lateral control performance of the order of 1 cm. Line-of-sight constraints and multipath effects affecting Global Navigation Satellite System (GNSS) measurements in close proximity prevent the use of this sensor for the final approach. This consideration and the high control accuracy requirement led to the use of vision sensors for the final 10 m of the rendezvous and docking sequence. A single monocular camera on the chaser satellite and various sets of Light-Emitting Diodes (LEDs) on the target vehicle ensure the observability of the system throughout the approach trajectory. A simple and novel formulation of the measurement equations allows rotations to be unambiguously differentiated from translations between the target and chaser docking ports and enables a navigation performance better than 1 mm at docking. Furthermore, the non-linear measurement equations can be solved to provide an analytic navigation solution. This solution can be used to monitor the navigation filter solution and ensure its stability, adding an extra layer of robustness for autonomous rendezvous and docking. The navigation filter initialization is addressed in detail; the proposed method is able to differentiate LED signals from Sun reflections, as demonstrated by experimental data. The navigation filter uses comprehensive linearised coupled rotation/translation dynamics describing the chaser-to-target docking port motion. The handover between GNSS and vision sensor measurements is assessed, and the performance of the navigation function along the approach trajectory is discussed.
Calibration Of An Omnidirectional Vision Navigation System Using An Industrial Robot
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1989-09-01
The characteristics of an omnidirectional vision navigation system were studied to determine position accuracy for the navigation and path control of a mobile robot. Experiments for calibration and other parameters were performed using an industrial robot to conduct repetitive motions. The accuracy and repeatability of the experimental setup and of the alignment between the robot and the sensor yielded errors of less than 1 pixel on each axis. Linearity between zenith angle and image location was tested at four different locations. An angular error of less than 1° and a radial error of less than 1 pixel were observed at moderate speed variations. The experimental information and the test of coordinated operation of the equipment provide an understanding of the characteristics of the dynamic omnivision system, as well as insight into the evaluation and improvement of the prototype. The calibration of the sensor is important, since the accuracy of navigation influences the accuracy of robot motion. This sensor system is currently being developed for a robot lawn mower; however, wider applications are obvious. The significance of this work is that it adds to the knowledge of the omnivision sensor.
Vision Sensor-Based Road Detection for Field Robot Navigation
Lu, Keyu; Li, Jian; An, Xiangjing; He, Hangen
2015-01-01
Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art. PMID:26610514
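The paper's vanishing-point step uses a multiple population genetic algorithm, which is not reproduced here; as a much simpler illustration of the same sub-task, the sketch below intersects Hough line segments and takes the median intersection as a crude vanishing-point estimate. Thresholds and parameter values are assumptions.

```python
# Naive vanishing-point estimate from intersections of Hough line segments;
# a simplified stand-in for the MPGA-based detection described in the paper.
import cv2
import numpy as np

def vanishing_point(gray):
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                           minLineLength=40, maxLineGap=10)
    if segs is None:
        return None
    # Represent each segment as a homogeneous line (cross product of endpoints).
    lines = [np.cross([x1, y1, 1.0], [x2, y2, 1.0]) for x1, y1, x2, y2 in segs[:, 0]]
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = np.cross(lines[i], lines[j])     # intersection of two lines
            if abs(p[2]) > 1e-6:
                pts.append(p[:2] / p[2])
    return np.median(np.array(pts), axis=0) if pts else None
```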
Perception for mobile robot navigation: A survey of the state of the art
NASA Technical Reports Server (NTRS)
Kortenkamp, David
1994-01-01
In order for mobile robots to navigate safely in unmapped and dynamic environments, they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state of the art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety; this paper summarizes and compares several competing sonar-based obstacle avoidance techniques. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class triangulates using fixed, artificial landmarks. A third class builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.
Landmark-aided localization for air vehicles using learned object detectors
NASA Astrophysics Data System (ADS)
DeAngelo, Mark Patrick
This research presents two methods to localize an aircraft without GPS using fixed landmarks observed from an optical sensor. Onboard absolute localization is useful for vehicle navigation free from an external network. The objective is to achieve practical navigation performance using available autopilot hardware and a downward-pointing camera. The first method uses computer vision cascade object detectors, trained prior to a flight to detect predetermined, distinct landmarks; it also concurrently explores aircraft localization using roads between landmark updates. During a flight, the aircraft navigates with attitude, heading, airspeed, and altitude measurements and obtains measurement updates when landmarks are detected. The sensor measurements and landmark coordinates extracted from the aircraft's camera images are combined in an unscented Kalman filter to obtain an estimate of the aircraft's position and the wind velocities. The second method uses computer vision object detectors to detect abundant generic landmarks, referred to as buildings, fields, trees, and road intersections, from aerial perspectives. Various landmark attributes and spatial relationships to other landmarks are used to help associate observed landmarks with reference landmarks. The computer vision algorithms automatically extract reference landmarks from maps, which are processed offline before a flight. During a flight, the aircraft navigates with attitude, heading, airspeed, and altitude measurements and obtains measurement corrections by processing aerial photos with similar generic landmark detection techniques. The second method likewise combines sensor measurements and landmark coordinates in an unscented Kalman filter to obtain an estimate of the aircraft's position and the wind velocities.
Guidance Of A Mobile Robot Using An Omnidirectional Vision Navigation System
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1987-01-01
Navigation and visual guidance are key topics in the design of a mobile robot. Omnidirectional vision using a very wide angle or fisheye lens provides a hemispherical view at a single instant that permits target location without mechanical scanning. The inherent image distortion with this view and the numerical errors accumulated from vision components can be corrected to provide accurate position determination for navigation and path control. The purpose of this paper is to present the experimental results and analyses of the imaging characteristics of the omnivision system including the design of robot-oriented experiments and the calibration of raw results. Errors less than one picture element on each axis were observed by testing the accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor. Similar results were obtained for four different locations using corrected results of the linearity test between zenith angle and image location. Angular error of less than one degree and radial error of less than one Y picture element were observed at moderate relative speed. The significance of this work is that the experimental information and the test of coordinated operation of the equipment provide a greater understanding of the dynamic omnivision system characteristics, as well as insight into the evaluation and improvement of the prototype sensor for a mobile robot. Also, the calibration of the sensor is important, since the results provide a cornerstone for future developments. This sensor system is currently being developed for a robot lawn mower.
Seamless Positioning and Navigation by Using Geo-Referenced Images and Multi-Sensor Data
Li, Xun; Wang, Jinling; Li, Tao
2013-01-01
Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas has remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, among which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and are insufficient in many aspects. Therefore, in this paper, we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly matches visual sensor input against geo-referenced images to obtain an image-based position solution, and it also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve position accuracy in areas where the GPS signal is negatively affected (such as urban canyons), and that it also provides excellent position accuracy in indoor environments. PMID:23857267
A Bionic Polarization Navigation Sensor and Its Calibration Method
Zhao, Huijie; Xu, Wujian
2016-01-01
The polarization patterns of skylight which arise due to the scattering of sunlight in the atmosphere can be used by many insects for deriving compass information. Inspired by insects’ polarized light compass, scientists have developed a new kind of navigation method. One of the key techniques in this method is the polarimetric sensor which is used to acquire direction information from skylight. In this paper, a polarization navigation sensor is proposed which imitates the working principles of the polarization vision systems of insects. We introduce the optical design and mathematical model of the sensor. In addition, a calibration method based on variable substitution and non-linear curve fitting is proposed. The results obtained from the outdoor experiments provide support for the feasibility and precision of the sensor. The sensor’s signal processing can be well described using our mathematical model. A relatively high degree of accuracy in polarization measurement can be obtained without any error compensation. PMID:27527171
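As an illustration of the kind of non-linear curve fitting mentioned in the abstract (not the authors' variable-substitution scheme), the sketch below fits an idealised raised-cosine response of one polarization analyser channel to rotation-stage readings using SciPy; the model form and parameter names are assumptions.

```python
# Illustrative calibration of one polarization channel by non-linear curve fitting.
import numpy as np
from scipy.optimize import curve_fit

def channel_response(angle_rad, gain, phase, offset):
    """Idealised analyser response: a raised cosine in twice the polarization angle."""
    return gain * np.cos(2.0 * (angle_rad - phase)) + offset

def calibrate_channel(angles_deg, readings):
    """Fit gain/phase/offset of one channel from rotation-stage measurements."""
    angles = np.deg2rad(np.asarray(angles_deg, dtype=float))
    readings = np.asarray(readings, dtype=float)
    p0 = [np.ptp(readings) / 2.0, 0.0, float(np.mean(readings))]
    params, _ = curve_fit(channel_response, angles, readings, p0=p0)
    return params  # (gain, phase, offset)
```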
Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation
2015-06-01
Multi-Purpose Crew Vehicle (MPCV), which will be provided with a LIDAR sensor as the primary relative navigation system [26, 33, 34]. A drawback of LIDAR...
Draper Laboratory small autonomous aerial vehicle
NASA Astrophysics Data System (ADS)
DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.
1997-06-01
The Charles Stark Draper Laboratory, Inc. and students from the Massachusetts Institute of Technology and Boston University have cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture and subsystem designs for the entry. The entry represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, an inertial measurement unit, a sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground, where a ground-based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.
Multi-Sensor Person Following in Low-Visibility Scenarios
Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier
2010-01-01
Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform a person following in low visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment. PMID:22163506
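The abstract describes switching sensor combinations according to a vision-based smoke estimate. The authors' smoke detection algorithm is not given here; the sketch below uses image contrast as a crude visibility proxy, which is purely an assumption for illustration, as is the threshold value.

```python
# Illustrative sensor-selection logic driven by a crude visibility proxy.
import numpy as np

def visibility_score(gray):
    """Crude visibility proxy: standard deviation of pixel intensities.
    Dense smoke tends to wash out contrast; this is an assumption, not the
    smoke detector described in the paper."""
    return float(np.std(gray))

def select_sensors(gray, clear_thresh=30.0):
    """Pick a sensor combination for person following based on estimated visibility."""
    if visibility_score(gray) >= clear_thresh:
        return ["camera", "laser"]      # good visibility: vision-led tracking
    return ["laser", "sonar"]           # smoky scene: fall back to range sensors
```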
Open-Loop Performance of COBALT Precision Landing Payload on a Commercial Sub-Orbital Rocket
NASA Technical Reports Server (NTRS)
Restrepo, Carolina I.; Carson, John M., III; Amzajerdian, Farzin; Seubert, Carl R.; Lovelace, Ronney S.; McCarthy, Megan M.; Tse, Teming; Stelling, Richard; Collins, Steven M.
2018-01-01
An open-loop flight test campaign of the NASA COBALT (CoOperative Blending of Autonomous Landing Technologies) platform was conducted onboard the Masten Xodiac suborbital rocket testbed. The COBALT platform integrates NASA Guidance, Navigation and Control (GN&C) sensing technologies for autonomous, precise soft landing, including the Navigation Doppler Lidar (NDL) velocity and range sensor and the Lander Vision System (LVS) Terrain Relative Navigation (TRN) system. A specialized navigation filter running onboard COBALT fuses the NDL and LVS data in real time to produce a navigation solution that is independent of GPS and suitable for future, autonomous, planetary, landing systems. COBALT was a passive payload during the open loop tests. COBALT's sensors were actively taking data and processing it in real time, but the Xodiac rocket flew with its own GPS-navigation system as a risk reduction activity in the maturation of the technologies towards space flight. A future closed-loop test campaign is planned where the COBALT navigation solution will be used to fly its host vehicle.
Computer-aided system for detecting runway incursions
NASA Astrophysics Data System (ADS)
Sridhar, Banavar; Chatterji, Gano B.
1994-07-01
A synthetic vision system for enhancing the pilot's ability to navigate and control the aircraft on the ground is described. The system uses the onboard airport database and images acquired by external sensors. Additional navigation information needed by the system is provided by the Inertial Navigation System and the Global Positioning System. The various functions of the system, such as image enhancement, map generation, obstacle detection, collision avoidance, guidance, etc., are identified. The available technologies, some of which were developed at NASA, that are applicable to the aircraft ground navigation problem are noted. Example images of a truck crossing the runway while the aircraft flies close to the runway centerline are described. These images are from a sequence of images acquired during one of the several flight experiments conducted by NASA to acquire data to be used for the development and verification of the synthetic vision concepts. These experiments provide a realistic database including video and infrared images, motion states from the Inertial Navigation System and the Global Positioning System, and camera parameters.
Integrated navigation, flight guidance, and synthetic vision system for low-level flight
NASA Astrophysics Data System (ADS)
Mehler, Felix E.
2000-06-01
Future military transport aircraft will require a new approach to the avionics suite in order to fulfill an ever-changing variety of missions. The most demanding phases of these missions are typically the low-level flight segments, including tactical terrain following/avoidance, payload drop and/or on-board autonomous landing at forward operating strips without ground-based infrastructure. As a consequence, individual components and systems must become more integrated to offer a higher degree of reliability, integrity, flexibility and autonomy than existing systems while reducing crew workload. The integration of digital terrain data not only introduces synthetic vision into the cockpit, but also enhances navigation and guidance capabilities. At DaimlerChrysler Aerospace AG Military Aircraft Division (Dasa-M), an integrated navigation, flight guidance and synthetic vision system based on digital terrain data has been developed to fulfill the requirements of the Future Transport Aircraft (FTA). The fusion of three independent navigation sensors provides a more reliable and precise solution to both the 4D flight guidance and the display component, which comprises a Head-up Display and a Head-down Display with synthetic vision. This paper presents the system, its integration into the DLR's VFW 614 Advanced Technology Testing Aircraft System (ATTAS), and the results of the flight-test campaign.
Open-Loop Flight Testing of COBALT Navigation and Sensor Technologies for Precise Soft Landing
NASA Technical Reports Server (NTRS)
Carson, John M., III; Restrepo, Caroline I.; Seubert, Carl R.; Amzajerdian, Farzin; Pierrottet, Diego F.; Collins, Steven M.; O'Neal, Travis V.; Stelling, Richard
2017-01-01
An open-loop flight test campaign of the NASA COBALT (CoOperative Blending of Autonomous Landing Technologies) payload was conducted onboard the Masten Xodiac suborbital rocket testbed. The payload integrates two complementary sensor technologies that together provide a spacecraft with the knowledge during planetary descent and landing to navigate precisely and touch down softly in close proximity to targeted surface locations. The two technologies are the Navigation Doppler Lidar (NDL), for high-precision velocity and range measurements, and the Lander Vision System (LVS), for map-relative state estimates. A specialized navigation filter running onboard COBALT fuses the NDL and LVS data in real time to produce a very precise Terrain Relative Navigation (TRN) solution that is suitable for future, autonomous planetary landing systems that require precise and soft landing capabilities. During the open-loop flight campaign, the COBALT payload acquired measurements and generated a precise navigation solution, but the Xodiac vehicle planned and executed its maneuvers based on an independent, GPS-based navigation solution. This minimized the risk to the vehicle during the integration and testing of the new navigation sensing technologies within the COBALT payload.
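To illustrate how position fixes and velocity measurements can be blended in a single filter (this is a toy constant-velocity Kalman filter, not the COBALT navigation filter), consider the sketch below; the state layout, noise levels and method names are assumptions.

```python
# Toy linear Kalman filter fusing two measurement types:
# map-relative position fixes (LVS-like) and velocity measurements (NDL-like).
import numpy as np

class SimpleFusionKF:
    def __init__(self, q=0.5, r_pos=4.0, r_vel=0.04):
        self.x = np.zeros(6)                      # [px, py, pz, vx, vy, vz]
        self.P = np.eye(6) * 100.0
        self.q, self.r_pos, self.r_vel = q, r_pos, r_vel

    def predict(self, dt):
        F = np.eye(6)
        F[:3, 3:] = np.eye(3) * dt                # constant-velocity propagation
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + np.eye(6) * self.q * dt

    def _update(self, z, H, r):
        R = np.eye(len(z)) * r
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(6) - K @ H) @ self.P

    def update_position(self, pos):               # e.g. a map-relative position fix
        self._update(np.asarray(pos, float),
                     np.hstack([np.eye(3), np.zeros((3, 3))]), self.r_pos)

    def update_velocity(self, vel):               # e.g. a Doppler velocity measurement
        self._update(np.asarray(vel, float),
                     np.hstack([np.zeros((3, 3)), np.eye(3)]), self.r_vel)
```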
Advanced integrated enhanced vision systems
NASA Astrophysics Data System (ADS)
Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha
2003-09-01
In anticipation of its ultimate role in transport, business and rotary-wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of an integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.
Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles
NASA Technical Reports Server (NTRS)
Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick
2012-01-01
Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.
A Navigation System for the Visually Impaired: A Fusion of Vision and Depth Sensor
Kanwal, Nadia; Bostanci, Erkan; Currie, Keith; Clark, Adrian F.
2015-01-01
For a number of years, scientists have been trying to develop aids that can make visually impaired people more independent and aware of their surroundings. Computer-based automatic navigation tools are one example of this, motivated by the increasing miniaturization of electronics and the improvement in processing power and sensing capabilities. This paper presents a complete navigation system based on low-cost, physically unobtrusive sensors such as a camera and an infrared sensor. The system is built around corner features and depth values from the Kinect's infrared sensor. Obstacles are found in images from a camera using corner detection, while input from the depth sensor provides the corresponding distance. The combination is both efficient and robust. The system not only identifies hurdles but also suggests a safe path (if available) to the left or right side and tells the user to stop, move left, or move right. The system has been tested in real time by both blindfolded and blind people at different indoor and outdoor locations, demonstrating that it operates adequately. PMID:27057135
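A minimal sketch of the corner-plus-depth idea described above, assuming a grayscale image and an aligned per-pixel depth map in metres; the thresholds and the exact decision rule are illustrative, not the authors' implementation.

```python
# Combine image corners with per-pixel depth to suggest stop / move left / move right.
import cv2
import numpy as np

def suggest_direction(gray, depth_m, stop_dist=1.0):
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return "move forward"
    h, w = gray.shape
    near_left = near_right = 0
    for x, y in corners.reshape(-1, 2):
        d = depth_m[int(y), int(x)]
        if 0 < d < stop_dist:                    # obstacle corner close by
            if x < w / 2:
                near_left += 1
            else:
                near_right += 1
    if near_left == 0 and near_right == 0:
        return "move forward"
    if near_left and near_right:
        return "stop"
    return "move right" if near_left else "move left"
```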
Data Analysis Techniques for a Lunar Surface Navigation System Testbed
NASA Technical Reports Server (NTRS)
Chelmins, David; Sands, O. Scott; Swank, Aaron
2011-01-01
NASA is interested in finding new methods of surface navigation to allow astronauts to navigate on the lunar surface. In support of the Vision for Space Exploration, the NASA Glenn Research Center developed the Lunar Extra-Vehicular Activity Crewmember Location Determination System and performed testing at the Desert Research and Technology Studies event in 2009. A significant amount of sensor data was recorded during nine tests performed with six test subjects. This paper provides the procedure, formulas, and techniques for data analysis, as well as commentary on applications.
Autonomous Vision Navigation for Spacecraft in Lunar Orbit
NASA Astrophysics Data System (ADS)
Bader, Nolan A.
NASA aims to achieve unprecedented navigational reliability for the first manned lunar mission of the Orion spacecraft in 2023. A technique for accomplishing this is to integrate autonomous feature tracking as an added means of improving position and velocity estimation. In this thesis, a template matching algorithm and optical sensor are tested onboard three simulated lunar trajectories using linear covariance techniques under various conditions. A preliminary characterization of the camera gives insight into its ability to determine azimuth and elevation angles to points on the surface of the Moon. A navigation performance analysis shows that an optical camera sensor can aid in decreasing position and velocity errors, particularly in a loss of communication scenario. Furthermore, it is found that camera quality and computational capability are driving factors affecting the performance of such a system.
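As a sketch of the template-matching measurement described in this thesis abstract (the actual algorithm and camera model are not reproduced), the snippet below locates a surface template with normalized cross-correlation and converts the match centre into approximate azimuth/elevation angles under an assumed small-angle pinhole model.

```python
# Illustrative template-matching bearing measurement for optical navigation.
import cv2

def surface_feature_bearing(image, template, fov_x_deg, fov_y_deg):
    """Locate a surface template in a camera image; return (azimuth, elevation, score)."""
    res = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)   # best-match top-left corner
    th, tw = template.shape[:2]
    u = top_left[0] + tw / 2.0                   # match centre in pixels
    v = top_left[1] + th / 2.0
    h, w = image.shape[:2]
    az = (u - w / 2.0) / w * fov_x_deg           # small-angle approximation
    el = -(v - h / 2.0) / h * fov_y_deg
    return az, el, score
```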
Evaluation of novel technologies for the miniaturization of flash imaging lidar
NASA Astrophysics Data System (ADS)
Mitev, V.; Pollini, A.; Haesler, J.; Perenzoni, D.; Stoppa, D.; Kolleck, Christian; Chapuy, M.; Kervendal, E.; Pereira do Carmo, João.
2017-11-01
Planetary exploration constitutes one of the main components of European space activities. Missions to Mars, the Moon and asteroids are foreseen, and it is assumed that human missions will be preceded by robotic exploration flights. 3D vision is recognised as a key enabling technology for the relative proximity navigation of spacecraft, and imaging LiDAR is one of the best candidates for such a 3D vision sensor.
Insect-Inspired Optical-Flow Navigation Sensors
NASA Technical Reports Server (NTRS)
Thakoor, Sarita; Morookian, John M.; Chahl, Javan; Soccol, Dean; Hines, Butler; Zornetzer, Steven
2005-01-01
Integrated circuits that exploit optical flow to sense motions of computer mice on or near surfaces ("optical mouse chips") are used as navigation sensors in a class of small flying robots now undergoing development for potential use in such applications as exploration, search, and surveillance. The basic principles of these robots were described briefly in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate from the cited prior article: The concept of optical flow can be defined, loosely, as the use of texture in images as a source of motion cues. The flight-control and navigation systems of these robots are inspired largely by the designs and functions of the vision systems and brains of insects, which have been demonstrated to utilize optical flow (as detected by their eyes and brains) resulting from their own motions in the environment. Optical flow has been shown to be very effective as a means of avoiding obstacles and controlling speeds and altitudes in robotic navigation. Prior systems used in experiments on navigating by means of optical flow have involved the use of panoramic optics, high-resolution image sensors, and programmable image-data-processing computers.
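A minimal sketch of one optical-flow behavior of the kind described above: balancing left and right flow magnitudes to produce a steering command. It uses sparse Lucas-Kanade flow from OpenCV and only illustrates the principle; it is not the optical-mouse-chip processing used on the flight hardware, and all parameter values are assumptions.

```python
# Steer away from the side with stronger optical flow (the closer side).
import cv2
import numpy as np

def steering_from_flow(prev_gray, gray):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return 0.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    ok = status.ravel() == 1
    flow = (nxt - pts).reshape(-1, 2)[ok]
    xs = pts.reshape(-1, 2)[ok][:, 0]
    mid = prev_gray.shape[1] / 2.0
    left = np.linalg.norm(flow[xs < mid], axis=1).mean() if np.any(xs < mid) else 0.0
    right = np.linalg.norm(flow[xs >= mid], axis=1).mean() if np.any(xs >= mid) else 0.0
    # Positive output -> steer left (right side closer); negative -> steer right.
    return right - left
```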
A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system
NASA Astrophysics Data System (ADS)
Ge, Zhuo; Zhu, Ying; Liang, Guanhao
2017-01-01
To provide 3D environment information for a quadruped robot's autonomous navigation system while it walks through rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the stereo-matched edge pixel pairs, 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs 3D scenes quickly and efficiently.
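The final coordinate estimation relies on the standard binocular imaging model; a minimal sketch of that triangulation step is shown below, assuming a rectified stereo pair with focal lengths fx, fy, principal point (cx, cy) and baseline B.

```python
# Standard rectified-stereo triangulation: Z = fx * B / d, X = (u - cx) * Z / fx,
# Y = (v - cy) * Z / fy. Illustrative helper, not the paper's code.
import numpy as np

def triangulate(u, v, disparity, fx, fy, cx, cy, baseline):
    """Recover a 3D point in the camera frame from a matched pixel and its disparity."""
    Z = fx * baseline / disparity
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])
```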
High resolution hybrid optical and acoustic sea floor maps (Invited)
NASA Astrophysics Data System (ADS)
Roman, C.; Inglis, G.
2013-12-01
This abstract presents a method for creating hybrid optical and acoustic sea floor reconstructions at centimeter-scale grid resolutions with robotic vehicles. Multibeam sonar and stereo vision are two common sensing modalities with complementary strengths that are well suited for data fusion. We have recently developed an automated two-stage pipeline to create such maps; the steps can be broken down into navigation refinement and map construction. During navigation refinement, a graph-based optimization algorithm is used to align 3D point clouds created with both the multibeam sonar and the stereo cameras. The process combats the typical growth in navigation error that has a detrimental effect on map fidelity and typically introduces artifacts at small grid sizes. During this process we are able to automatically register local point clouds created by each sensor to themselves and to each other where they overlap in a survey pattern. The process also estimates the sensor offsets, such as heading, pitch and roll, that describe how each sensor is mounted to the vehicle. The end result of the navigation step is a refined vehicle trajectory that ensures the point clouds from each sensor are consistently aligned, together with the individual sensor offsets. In the mapping step, grid cells in the map are selectively populated by choosing data points from each sensor in an automated manner. The selection process is designed to pick points that preserve the best characteristics of each sensor and honor specific map quality criteria to reduce outliers and ghosting. In general, the algorithm selects dense 3D stereo points in areas of high texture and point density. In areas where the stereo vision is poor, such as in a scene with low contrast or texture, multibeam sonar points are inserted in the map. This process is automated and results in a hybrid map populated with data from both sensors. Additional cross-modality checks are made to reject outliers in a robust manner. The final hybrid map retains the strengths of both sensors and shows improvement over the single-modality maps and over a naively assembled multi-modal map in which all the data points are included and averaged. Results will be presented from marine geological and archaeological applications using a 1350 kHz BlueView multibeam sonar and 1.3 megapixel digital still cameras.
The Sensor Test for Orion RelNav Risk Mitigation Development Test Objective
NASA Technical Reports Server (NTRS)
Christian, John A.; Hinkel, Heather; Maguire, Sean
2011-01-01
The Sensor Test for Orion Relative-Navigation Risk Mitigation (STORRM) Development Test Objective (DTO) flew aboard the Space Shuttle Endeavour on STS-134, and was designed to characterize the performance of the flash LIDAR being developed for the Orion. This flash LIDAR, called the Vision Navigation Sensor (VNS), will be the primary navigation instrument used by the Orion vehicle during rendezvous, proximity operations, and docking. This paper provides an overview of the STORRM test objectives and the concept of operations. It continues with a description of the STORRM's major hardware components, which include the VNS and the docking camera. Next, an overview of crew and analyst training activities will describe how the STORRM team prepared for flight. Then an overview of how insight data collection and analysis actually went is presented. Key findings and results from this project are summarized, including a description of "truth" data. Finally, the paper concludes with lessons learned from the STORRM DTO.
Bioinspired optical sensors for unmanned aerial systems
NASA Astrophysics Data System (ADS)
Chahl, Javaan; Rosser, Kent; Mizutani, Akiko
2011-04-01
Insects are dependent on the spatial, spectral and temporal distributions of light in the environment for flight control and navigation. This paper reports on flight trials of implementations of insect-inspired behaviors on unmanned aerial vehicles. Optical flow methods for maintaining a constant height above ground and a constant course have been demonstrated to provide navigation capabilities that are impossible using conventional avionics sensors. Precision control of height above ground and of ground course was achieved over long distances. Other vision-based techniques demonstrated include a biomimetic stabilization sensor that uses the ultraviolet and green bands of the spectrum, and a sky polarization compass. Both of these sensors were tested over long trajectories in different directions, in each case showing performance similar to low-cost inertial heading and attitude systems. The behaviors demonstrate some of the core functionality found in the lower levels of the sensorimotor system of flying insects and show promise for more integrated solutions in the future.
Vision-Based SLAM System for Unmanned Aerial Vehicles
Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni
2016-01-01
The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the locations of the landmarks observed by the camera. The position sensor is used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks is used to perform fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of including camera measurements in the system. In this sense, the estimation of the trajectory of the vehicle is considerably improved compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rate and highly noisy. PMID:26999131
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, N.S.V.; Kareti, S.; Shi, Weimin
A formal framework for navigating a robot in a geometric terrain populated by an unknown set of obstacles is considered. Here the terrain model is not known a priori, but the robot is equipped with a sensor system (vision or touch) employed for the purpose of navigation. The focus is restricted to non-heuristic algorithms that can be theoretically shown to be correct within a given framework of models for the robot, terrain and sensor system. These formulations, although abstract and simplified compared to real-life scenarios, provide foundations for practical systems by highlighting the underlying critical issues. First, the authors consider algorithms that are shown to navigate correctly, without much consideration given to performance parameters such as the distance traversed. Second, they consider non-heuristic algorithms that guarantee bounds on the distance traversed or on the ratio of the distance traversed to the shortest path length (computed as if the terrain model were known). Then they consider the navigation of robots with very limited computational capabilities, such as finite automata.
Assistive obstacle detection and navigation devices for vision-impaired users.
Ong, S K; Zhang, J; Nee, A Y C
2013-09-01
Quality of life for the visually impaired is an urgent worldwide issue that needs to be addressed. Obstacle detection is one of the most important navigation tasks for the visually impaired. In this research, a novel range sensor placement scheme is proposed for the development of obstacle detection devices. Based on this scheme, two prototypes have been developed targeting different user groups. This paper discusses the design issues, functional modules and the evaluation tests carried out for both prototypes. Implications for Rehabilitation: The problem of visual impairment is becoming more severe owing to the worldwide ageing population. Individuals with visual impairment require assistance from assistive devices in daily navigation tasks. Traditional assistive devices that assist navigation may have certain drawbacks, such as the limited sensing range of a white cane. Obstacle detection devices applying range sensor technology can identify road conditions at a greater sensing range to notify users of potential dangers in advance.
Application of parallelized software architecture to an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Shakya, Rahul; Wright, Adam; Shin, Young Ho; Momin, Orko; Petkovsek, Steven; Wortman, Paul; Gautam, Prasanna; Norton, Adam
2011-01-01
This paper presents improvements made to Q, an autonomous ground vehicle designed to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2010 IGVC, Q was upgraded with a new parallelized software architecture and a new vision processor. Improvements were made to the power system, reducing the number of batteries required for operation from six to one. In previous years, a single state machine was used to execute the bulk of processing activities, including sensor interfacing, data processing, path planning, navigation algorithms and motor control. This inefficient approach led to poor software performance and made it difficult to maintain or modify. For IGVC 2010, the team implemented a modular parallel architecture using the National Instruments (NI) LabVIEW programming language. The new architecture divides all the necessary tasks (motor control, navigation, sensor data collection, etc.) into well-organized components that execute in parallel, providing considerable flexibility and facilitating efficient use of processing power. Computer vision is used to detect white lines on the ground and determine their location relative to the robot. With the new vision processor and some optimization of the image processing algorithm used last year, two frames can be acquired and processed in 70 ms. With all these improvements, Q placed 2nd in the autonomous challenge.
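The modular parallel decomposition described above was implemented by the team in LabVIEW; purely as an illustration, the threaded sketch below shows sensor acquisition, navigation and motor control running concurrently with a queue-based hand-off. The module names and the toy steering law are hypothetical, not the team's design.

```python
# Illustrative sketch of a modular parallel architecture: sensor acquisition,
# navigation, and motor control run as independent threads exchanging data
# through queues (names and steering law are hypothetical placeholders).
import queue
import threading
import time

sensor_q, command_q = queue.Queue(), queue.Queue()
stop = threading.Event()

def sensor_task():
    while not stop.is_set():
        sensor_q.put({"line_offset_px": 12})            # placeholder measurement
        time.sleep(0.07)                                 # ~70 ms per processed frame

def navigation_task():
    while not stop.is_set():
        try:
            meas = sensor_q.get(timeout=0.1)
        except queue.Empty:
            continue
        command_q.put(-0.01 * meas["line_offset_px"])    # toy steering law

def motor_task():
    while not stop.is_set():
        try:
            steer = command_q.get(timeout=0.1)
        except queue.Empty:
            continue
        print(f"steering command: {steer:+.2f}")

threads = [threading.Thread(target=t) for t in (sensor_task, navigation_task, motor_task)]
for t in threads:
    t.start()
time.sleep(0.5)
stop.set()
for t in threads:
    t.join()
```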
Máthé, Koppány; Buşoniu, Lucian
2015-01-01
Unmanned aerial vehicles (UAVs) have gained significant attention in recent years. Low-cost platforms using inexpensive sensor payloads have been shown to provide satisfactory flight and navigation capabilities. In this report, we survey vision and control methods that can be applied to low-cost UAVs, and we list some popular inexpensive platforms and application fields where they are useful. We also highlight the sensor suites used where this information is available. We overview, among others, feature detection and tracking, optical flow and visual servoing, low-level stabilization and high-level planning methods. We then list popular low-cost UAVs, selecting mainly quadrotors. We discuss applications, restricting our focus to the field of infrastructure inspection. Finally, as an example, we formulate two use-cases for railway inspection, a less explored application field, and illustrate the usage of the vision and control techniques reviewed by selecting appropriate ones to tackle these use-cases. To select vision methods, we run a thorough set of experimental evaluations. PMID:26121608
Landmark navigation and autonomous landing approach with obstacle detection for aircraft
NASA Astrophysics Data System (ADS)
Fuerst, Simon; Werner, Stefan; Dickmanns, Dirk; Dickmanns, Ernst D.
1997-06-01
A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensors such as gyros, accelerometers, artificial horizon, aerodynamic measuring devices and GPS with vision data taken by conventional CCD cameras mounted on a pan-and-tilt platform, the position of the craft can be determined, as well as its relative position to runways and natural landmarks. The vision data of natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark should be focused on by the vision system, depending on the distance to the landmark and the aspect conditions. More complex landmarks like runways are modeled with different levels of detail that are activated depending on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system, e.g., due to poor visibility, and tries to reinitialize the vision system or to set focus on another available landmark. During the landing approach, obstacles like trucks and airplanes can be detected on the runway. The system has been tested in real time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors have been fed into the machine perception system; the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.
Precise visual navigation using multi-stereo vision and landmark matching
NASA Astrophysics Data System (ADS)
Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh
2007-04-01
Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques which greatly reduce the long-term drift and also improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene. This helps to increase the pose estimation accuracy as well as reduce failure situations. Secondly, a global landmark matching technique is used to recognize previously visited locations during navigation. Using the matched landmarks, a pose correction technique is used to eliminate the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (which report 1-5% localization error) over long-distance navigation both indoors and outdoors. Real-world experiments on a human-worn system show that the location can be estimated within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
The Sensor Test for Orion RelNav Risk Mitigation (STORRM) Development Test Objective
NASA Technical Reports Server (NTRS)
Christian, John A.; Hinkel, Heather; D'Souza, Christopher N.; Maguire, Sean; Patangan, Mogi
2011-01-01
The Sensor Test for Orion Relative-Navigation Risk Mitigation (STORRM) Development Test Objective (DTO) flew aboard the Space Shuttle Endeavour on STS-134 in May-June 2011, and was designed to characterize the performance of the flash LIDAR and docking camera being developed for the Orion Multi-Purpose Crew Vehicle. The flash LIDAR, called the Vision Navigation Sensor (VNS), will be the primary navigation instrument used by the Orion vehicle during rendezvous, proximity operations, and docking. The docking camera (DC) will be used by the Orion crew for piloting cues during docking. This paper provides an overview of the STORRM test objectives and the concept of operations. It continues with a description of STORRM's major hardware components, which include the VNS, docking camera, and supporting avionics. Next, an overview of crew and analyst training activities describes how the STORRM team prepared for flight. Then an overview of in-flight data collection and analysis is presented. Key findings and results from this project are summarized. Finally, the paper concludes with lessons learned from the STORRM DTO.
PRoViScout: a planetary scouting rover demonstrator
NASA Astrophysics Data System (ADS)
Paar, Gerhard; Woods, Mark; Gimkiewicz, Christiane; Labrosse, Frédéric; Medina, Alberto; Tyler, Laurence; Barnes, David P.; Fritz, Gerald; Kapellos, Konstantinos
2012-01-01
Mobile systems exploring planetary surfaces will in future require more autonomy than they have today. The EU FP7-SPACE project PRoViScout (2010-2012) establishes the building blocks of such autonomous exploration systems in terms of robotic vision through a decision-based combination of navigation and scientific target selection, and integrates them into a framework ready for, and exposed to, field demonstration. The PRoViScout on-board system consists of mission management components such as an Executive, a Mars Mission On-Board Planner and Scheduler, a Science Assessment Module, and Navigation & Vision Processing modules. The platform hardware consists of the rover with the sensors and pointing devices. We report on the major building blocks and their functions and interfaces, with emphasis on the computer vision parts such as image acquisition (using a novel zoomed 3D Time-of-Flight & RGB camera), mapping from 3D-TOF data, panoramic image and stereo reconstruction, hazard and slope maps, visual odometry, and the recognition of potentially scientifically interesting targets.
FLASH LIDAR Based Relative Navigation
NASA Technical Reports Server (NTRS)
Brazzel, Jack; Clark, Fred; Milenkovic, Zoran
2014-01-01
Relative navigation remains the most challenging part of spacecraft rendezvous and docking. In recent years, flash LIDARs have been increasingly selected as the go-to sensors for proximity operations and docking. Flash LIDARs are generally lighter and require less power than scanning LIDARs. Flash LIDARs do not have moving parts, and they are capable of tracking multiple targets as well as generating a 3D map of a given target. However, there are some significant drawbacks of flash LIDARs that must be resolved if their use is to be of long-term significance. Overcoming the challenges of flash LIDARs for navigation, namely low technology readiness level, lack of historical performance data, target identification, existence of false positives, and performance of vision processing algorithms as intermediaries between the raw sensor data and the Kalman filter, requires a world-class testing facility, such as the Lockheed Martin Space Operations Simulation Center (SOSC). Ground-based testing is a critical step for maturing next-generation flash LIDAR-based spacecraft relative navigation. This paper will focus on the tests of an integrated relative navigation system conducted at the SOSC in January 2014. The intent of the tests was to characterize and then improve the performance of relative navigation, while addressing many of the flash LIDAR challenges mentioned above. A section on navigation performance and future recommendations completes the discussion.
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1990-01-01
The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.
Progress in Insect-Inspired Optical Navigation Sensors
NASA Technical Reports Server (NTRS)
Thakoor, Sarita; Chahl, Javaan; Zometzer, Steve
2005-01-01
Progress has been made in continuing efforts to develop optical flight-control and navigation sensors for miniature robotic aircraft. The designs of these sensors are inspired by the designs and functions of the vision systems and brains of insects. Two types of sensors of particular interest are polarization compasses and ocellar horizon sensors. The basic principle of polarization compasses was described (but without using the term "polarization compass") in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate: Bees use sky polarization patterns in ultraviolet (UV) light, caused by Rayleigh scattering of sunlight by atmospheric gas molecules, as direction references relative to the apparent position of the Sun. A robotic direction-finding technique based on this concept would be more robust in comparison with a technique based on the direction to the visible Sun because the UV polarization pattern is distributed across the entire sky and, hence, is redundant and can be extrapolated from a small region of clear sky in an elsewhere cloudy sky that hides the Sun.
NASA Astrophysics Data System (ADS)
Uijt de Haag, Maarten; Campbell, Jacob; van Graas, Frank
2005-05-01
Synthetic Vision Systems (SVS) provide pilots with a virtual visual depiction of the external environment. When SVS are used for aircraft precision approach guidance, accurate positioning relative to the runway with a high level of integrity is required. Precision approach guidance systems in use today require ground-based electronic navigation components with at least one installation at each airport, and in many cases multiple installations to service approaches to all qualifying runways. A terrain-referenced approach guidance system is envisioned to provide precision guidance to an aircraft without the use of ground-based electronic navigation components installed at the airport. This autonomy makes it a good candidate for integration with an SVS. At the Ohio University Avionics Engineering Center (AEC), work has been underway on the development of such a terrain-referenced navigation system. When used in conjunction with an Inertial Measurement Unit (IMU) and a high-accuracy, high-resolution terrain database, this terrain-referenced navigation system can provide navigation and guidance information to the pilot on an SVS or on conventional instruments. The terrain-referenced navigation system under development at AEC operates on similar principles as other terrain navigation systems: a ground-sensing sensor (in this case an airborne laser scanner) gathers range measurements to the terrain; these data are then matched with an onboard terrain database to find the most likely position solution, which is used to update an inertial sensor-based navigator. AEC's system design differs from today's common terrain navigators in its use of a high-resolution terrain database (~1 meter post spacing) in conjunction with an airborne laser scanner that is capable of providing tens of thousands of independent terrain elevation measurements per second with centimeter-level accuracies. When combined with data from an inertial navigator, the high-resolution terrain database and laser scanner system is capable of providing near meter-level horizontal and vertical position estimates. Furthermore, the system under development capitalizes on (1) the position and integrity benefits provided by the Wide Area Augmentation System (WAAS) to reduce the initial search-space size and (2) the availability of high-accuracy, high-resolution databases. This paper presents results from flight tests where the terrain-referenced navigator is used to provide guidance cues for a precision approach.
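A toy sketch of the terrain-matching step described above: slide a measured elevation profile along a one-dimensional terrain grid and keep the offset with the smallest residual. The synthetic terrain, the profile length and the search window (standing in for the WAAS-bounded initial search space) are illustrative assumptions, not the AEC implementation.

```python
# Toy terrain matching: find the horizontal offset at which a measured elevation
# profile best fits a 1-D terrain database (synthetic data; illustrative only).
import numpy as np

rng = np.random.default_rng(0)
terrain = np.cumsum(rng.normal(0, 0.2, 2000))        # synthetic DEM, 1 m post spacing
true_pos = 700
profile = terrain[true_pos:true_pos + 50] + rng.normal(0, 0.05, 50)  # laser samples

search = range(600, 800)                              # prior search window (e.g., from WAAS)
cost = [np.sum((terrain[s:s + 50] - profile) ** 2) for s in search]
best = list(search)[int(np.argmin(cost))]
print("estimated position index:", best, "true:", true_pos)
```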
Vision based techniques for rotorcraft low altitude flight
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Suorsa, Ray; Smith, Philip
1991-01-01
An overview of research in obstacle detection at NASA Ames Research Center is presented. The research applies techniques from computer vision to the automation of rotorcraft navigation. The development of a methodology for detecting the range to obstacles based on the maximum utilization of passive sensors is emphasized. The development of a flight and image data base for verification of vision-based algorithms, and a passive ranging methodology tailored to the needs of helicopter flight, are discussed. Preliminary results indicate that it is possible to obtain adequate range estimates except in regions close to the focus of expansion (FOE). Closer to the FOE, the error in range increases because the magnitude of the disparity becomes smaller, resulting in a low SNR.
The Effects of Synthetic and Enhanced Vision Technologies for Lunar Landings
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Norman, Robert M.; Prinzel, Lawrence J., III; Bailey, Randall E.; Arthur, Jarvis J., III; Shelton, Kevin J.; Williams, Steven P.
2009-01-01
Eight pilots participated as test subjects in a fixed-base simulation experiment to evaluate advanced vision display technologies such as Enhanced Vision (EV) and Synthetic Vision (SV) for providing terrain imagery on flight displays in a Lunar Lander Vehicle. Subjects were asked to fly 20 approaches to the Apollo 15 lunar landing site with four different display concepts: Baseline (symbology only, with no terrain imagery), EV only (terrain imagery from Forward-Looking Infrared, or FLIR, and Light Detection and Ranging, or LIDAR, sensors), SV only (terrain imagery from an onboard database), and fused EV and SV concepts. As expected, manual landing performance was excellent (within a meter of the landing site center) and not affected by the inclusion of EV or SV terrain imagery on the Lunar Lander flight displays. Subjective ratings revealed significant situation awareness improvements with the concepts employing EV and/or SV terrain imagery compared to the Baseline condition that had no terrain imagery. In addition, display concepts employing EV imagery (compared to the SV and Baseline concepts, which had none) were significantly better for pilot detection of intentional but unannounced navigation failures, since this imagery provided an intuitive and obvious visual means of monitoring the validity of the navigation solution.
COBALT: Development of a Platform to Flight Test Lander GN&C Technologies on Suborbital Rockets
NASA Technical Reports Server (NTRS)
Carson, John M., III; Seubert, Carl R.; Amzajerdian, Farzin; Bergh, Chuck; Kourchians, Ara; Restrepo, Carolina I.; Villapando, Carlos Y.; O'Neal, Travis V.; Robertson, Edward A.; Pierrottet, Diego;
2017-01-01
The NASA COBALT Project (CoOperative Blending of Autonomous Landing Technologies) is developing and integrating new precision-landing Guidance, Navigation and Control (GN&C) technologies, along with a terrestrial flight-test platform for Technology Readiness Level (TRL) maturation. The current technologies include a third-generation Navigation Doppler Lidar (NDL) sensor for ultra-precise velocity and line-of-sight (LOS) range measurements, and the Lander Vision System (LVS), which provides passive-optical Terrain Relative Navigation (TRN) estimates of map-relative position. The COBALT platform is self-contained and includes the NDL and LVS sensors, a blending filter, a custom compute element, a power unit, and a communication system. The platform incorporates a structural frame that has been designed to integrate with the payload frame onboard the new Masten Xodiac vertical take-off, vertical landing (VTVL) terrestrial rocket vehicle. Ground integration and testing are underway, and terrestrial flight testing onboard Xodiac is planned for 2017 with two flight campaigns: one open-loop and one closed-loop.
Real-time Implementation of Vision, Inertial, and GPS Sensors to Navigate in an Urban Environment
2015-03-01
Fragments of the report define the meridian radius of curvature RN and the transverse radius of curvature RE of the reference ellipsoid (with major eccentricity e), and cite work on visual odometry for on-road vehicles with 1-point RANSAC [17], in which Scaramuzza et al. discuss the use of the nonholonomic constraints of a wheeled vehicle.
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.; Nold, Dean E.
2003-01-01
A simulation study was conducted in 1994 at Langley Research Center that used 12 commercial airline pilots repeatedly flying complex Microwave Landing System (MLS)-type approaches to parallel runways under Category IIIc weather conditions. Two sensor insert concepts of 'Synthetic Vision Systems' (SVS) were used in the simulated flights, with a more conventional electro-optical display (similar to a Head-Up Display with raster capability for sensor imagery), flown under less restrictive visibility conditions, used as a control condition. The SVS concepts combined the sensor imagery with a computer-generated image (CGI) of an out-the-window scene based on an onboard airport database. Various scenarios involving runway traffic incursions (taxiing aircraft and parked fuel trucks) and navigational system position errors (both static and dynamic) were used to assess the pilots' ability to manage the approach task with the display concepts. The two SVS sensor insert concepts contrasted the simple overlay of sensor imagery on the CGI scene without additional image processing (the SV display) with the complex integration (the AV display) of the CGI scene with pilot decision aiding, using both object and edge detection techniques for detection of obstacle conflicts and runway alignment errors.
Autonomous navigation and control of a Mars rover
NASA Technical Reports Server (NTRS)
Miller, D. P.; Atkinson, D. J.; Wilcox, B. H.; Mishkin, A. H.
1990-01-01
A Mars rover will need to be able to navigate autonomously for kilometers at a time. This paper outlines the sensing, perception, planning, and execution monitoring systems that are currently being designed for the rover. The sensing is based around stereo vision. The interpretation of the images uses a registration of the depth map with a global height map provided by an orbiting spacecraft. Safe, low-energy paths are then planned through the map, and expectations of what the rover's articulation sensors should sense are generated. These expectations are then used to ensure that the planned path is being executed correctly.
COBALT Flight Demonstrations Fuse Technologies
2017-06-07
This 5-minute, 50-second video shows how the CoOperative Blending of Autonomous Landing Technologies (COBALT) system pairs new landing sensor technologies that promise to yield the highest precision navigation solution ever tested for NASA space landing applications. The technologies included a navigation doppler lidar (NDL), which provides ultra-precise velocity and line-of-sight range measurements, and the Lander Vision System (LVS), which provides terrain-relative navigation. Through flight campaigns conducted in March and April 2017 aboard Masten Space Systems' Xodiac, a rocket-powered vertical takeoff, vertical landing (VTVL) platform, the COBALT system was flight tested to collect sensor performance data for NDL and LVS and to check the integration and communication between COBALT and the rocket. The flight tests provided excellent performance data for both sensors, as well as valuable information on the integrated performance with the rocket that will be used for subsequent COBALT modifications prior to follow-on flight tests. Based at NASA’s Armstrong Flight Research Center in Edwards, CA, the Flight Opportunities program funds technology development flight tests on commercial suborbital space providers of which Masten is a vendor. The program has previously tested the LVS on the Masten rocket and validated the technology for the Mars 2020 rover.
IPS - a vision aided navigation system
NASA Astrophysics Data System (ADS)
Börner, Anko; Baumbach, Dirk; Buder, Maximilian; Choinowski, Andre; Ernst, Ines; Funk, Eugen; Grießbach, Denis; Schischmanow, Adrian; Wohlfeil, Jürgen; Zuev, Sergey
2017-04-01
Ego localization is an important prerequisite for several scientific, commercial, and statutory tasks. Only by knowing one's own position can guidance be provided, inspections be executed, and autonomous vehicles be operated. Localization becomes challenging if satellite-based navigation systems are not available or data quality is not sufficient. To overcome this problem, a team at the German Aerospace Center (DLR) developed a multi-sensor system based on the human head and its navigation sensors - the eyes and the vestibular system. This system is called the integrated positioning system (IPS) and contains a stereo camera and an inertial measurement unit for determining an ego pose in six degrees of freedom in a local coordinate system. IPS is able to operate in real time and can be applied in indoor and outdoor scenarios without any external reference or prior knowledge. In this paper, the system and its key hardware and software components are introduced. The main issues in the development of such complex multi-sensor measurement systems are identified and discussed, and the performance of this technology is demonstrated. The developer team started from scratch and is now transferring this technology into a commercial product. The paper finishes with an outlook.
Improved obstacle avoidance and navigation for an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Giri, Binod; Cho, Hyunsu; Williams, Benjamin C.; Tann, Hokchhay; Shakya, Bicky; Bharam, Vishal; Ahlgren, David J.
2015-01-01
This paper presents improvements made to the intelligence algorithms employed on Q, an autonomous ground vehicle, for the 2014 Intelligent Ground Vehicle Competition (IGVC). In 2012, the IGVC committee combined the formerly separate autonomous and navigation challenges into a single AUT-NAV challenge. In this new challenge, the vehicle is required to navigate through a grassy obstacle course and stay within the course boundaries (a lane of two white painted lines) that guide it toward a given GPS waypoint. Once the vehicle reaches this waypoint, it enters an open course where it is required to navigate to another GPS waypoint while avoiding obstacles. After reaching the final waypoint, the vehicle is required to traverse another obstacle course before completing the run. Q uses a modular parallel software architecture in which image processing, navigation, and sensor control algorithms run concurrently. A tuned navigation algorithm allows Q to smoothly maneuver through obstacle fields. For the 2014 competition, most revisions occurred in the vision system, which detects white lines and informs the navigation component. Barrel obstacles of various colors presented a new challenge for image processing: the previous color plane extraction algorithm would not suffice. To overcome this difficulty, laser range sensor data were overlaid on visual data. Q also participates in the Joint Architecture for Unmanned Systems (JAUS) challenge at IGVC. For 2014, significant updates were implemented: the JAUS component accepted a greater variety of messages and showed better compliance with the JAUS technical standard. With these improvements, Q secured second place in the JAUS competition.
Definition of display/control requirements for assault transport night/adverse weather capability
NASA Technical Reports Server (NTRS)
Milelli, R. J.; Mowery, G. W.; Pontelandolfo, C.
1982-01-01
A Helicopter Night Vision System was developed to improve low-altitude night and/or adverse weather assault transport capabilities. Man-in-the-loop simulation experiments were performed to define the minimum display and control requirements for the assault transport mission and to investigate forward-looking infrared sensor requirements, along with alternative displays such as panel-mounted displays (PMD), helmet-mounted displays (HMD), and integrated control display units. Also explored were navigation requirements, pilot/copilot interaction, and overall cockpit arrangement. Pilot use of an HMD and copilot use of a PMD appear to be both the preferred and the most effective night navigation combination.
Navigation studies based on the ubiquitous positioning technologies
NASA Astrophysics Data System (ADS)
Ye, Lei; Mi, Weijie; Wang, Defeng
2007-11-01
This paper summarizes present-day positioning technologies, such as absolute and relative positioning methods, indoor and outdoor positioning, and active and passive positioning. Global Navigation Satellite System (GNSS) technologies are introduced as the omnipresent outdoor positioning technologies, including GPS, GLONASS, Galileo and BD-1/2. After an analysis of the shortcomings of GNSS, indoor positioning technologies are discussed and compared, including A-GPS, cellular networks, infrared, electromagnetism, computer vision cognition, embedded pressure sensors, ultrasonic, RFID (Radio Frequency IDentification), Bluetooth, WLAN, etc. Then the concept and characteristics of ubiquitous positioning are proposed. After contrasting and selecting ubiquitous positioning technologies following a systems engineering methodology, a navigation system model based on an integrated indoor-outdoor positioning solution is proposed. This model was simulated in the Galileo Demonstration for World Expo Shanghai project. In conclusion, the prospects of ubiquitous-positioning-based navigation are shown, especially with regard to satisfying the public requirement for acquiring location information.
NASA Technical Reports Server (NTRS)
Brockers, Roland; Susca, Sara; Zhu, David; Matthies, Larry
2012-01-01
Direct-lift micro air vehicles have important applications in reconnaissance. In order to conduct persistent surveillance in urban environments, it is essential that these systems can perform autonomous landing maneuvers on elevated surfaces that provide high vantage points, without the help of any external sensor and with a fully contained on-board software solution. In this paper, we present a micro air vehicle that uses vision feedback from a single down-looking camera to navigate autonomously and detect an elevated landing platform as a surrogate for a rooftop. Our method requires no special preparation (labels or markers) of the landing location. Rather, leveraging the planar character of urban structure, the landing platform detection system uses a planar homography decomposition to detect landing targets and produce approach waypoints for autonomous landing. The vehicle control algorithm uses a Kalman-filter-based approach for pose estimation to fuse visual SLAM (PTAM) position estimates with IMU data, correcting for high-latency SLAM inputs and increasing the position estimate update rate in order to improve control stability. Scale recovery is achieved using inputs from a sonar altimeter. In experimental runs, we demonstrate a real-time implementation running on board a micro aerial vehicle that is fully self-contained and independent of any external sensor information. With this method, the vehicle is able to search autonomously for a landing location and perform precision landing maneuvers on the detected targets.
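The planar homography decomposition used for landing-target detection can be sketched with standard OpenCV calls: estimate a homography from matched points on the planar surface in two views and decompose it into candidate rotation/translation/plane-normal solutions. The intrinsics and point correspondences below are synthetic placeholders, not data from the paper.

```python
# Sketch of planar-homography landing-target geometry with OpenCV: estimate a
# homography between matched points on the planar surface in two frames, then
# decompose it into candidate (R, t, n) solutions. Values are placeholders.
import cv2
import numpy as np

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])   # assumed intrinsics

# Matched image points of the planar landing surface in two frames (placeholder data).
pts1 = np.float32([[100, 100], [400, 110], [390, 300], [110, 310]])
pts2 = np.float32([[120, 120], [410, 115], [400, 320], [130, 330]])

H, _ = cv2.findHomography(pts1, pts2)
n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
# The physically valid (R, t, n) candidate is chosen onboard with visibility checks.
print(n_solutions, "candidate decompositions")
```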
NASA Technical Reports Server (NTRS)
Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.
2012-01-01
A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package, "Argon," is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, mounted on a Rotopod R-2000 platform, and relay the solution to a simulation of the servicer spacecraft running in "Freespace," which performs guidance, navigation and control functions, integrates dynamics, and issues motion commands to a Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results will be reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.
Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor.
Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung
2017-08-30
Unmanned aerial vehicles (UAVs), which are commonly known as drones, have proved to be useful not only on battlefields where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers in the navigation and control loop, which allows for smart GPS features of drone navigation. However, there are problems if the drones operate in heterogeneous areas with no GPS signal, so it is important to perform research into the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments.
NASA Astrophysics Data System (ADS)
Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki
We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor and navigation control based on binocular images. As an application of these techniques, we consider the development of a guide robot that can play the role of a guide dog as an aid to people such as the visually impaired or the aged. This paper presents a recognition algorithm that finds the line of a series of Braille blocks and the boundary line between a sidewalk and a roadway where a difference in level exists, using binocular images obtained from a pair of parallel-arrayed CCD cameras. This paper also presents a tracking algorithm with which the guide robot traces along a series of Braille blocks and avoids obstacles and unsafe areas that lie in the path of a person accompanied by the guide robot.
Development of a Night Vision Goggle Heads-Up Display for Paratrooper Guidance
2008-06-01
Fragments of the thesis describe the MIDG sensor, which includes an internal GPS receiver and generates the INS data (altitude, position, velocity, acceleration, and angular rates) and GPS data [MIC07] required for navigation and control, and note that several programming languages, such as Java and C#, provide the operating capabilities needed for the guidance program.
Integrated Multi-Aperture Sensor and Navigation Fusion
2010-02-01
Fragments of the report describe multi-aperture/INS data fusion formulated in the feature domain using the complementary Kalman filter methodology of Brown and Hwang [3], which allows Kalman filter vision/inertial measurement observables to be formulated for other images without the need to know (or measure) their feature ranges.
Multi-spectrum-based enhanced synthetic vision system for aircraft DVE operations
NASA Astrophysics Data System (ADS)
Kashyap, Sudesh K.; Naidu, V. P. S.; Shanthakumar, N.
2016-04-01
This paper focuses on R&D being carried out at CSIR-NAL on an Enhanced Synthetic Vision System (ESVS) for the Indian regional transport aircraft to enhance all-weather operational capabilities with safety and pilot Situation Awareness (SA) improvements. A flight simulator has been developed to study ESVS-related technologies, to develop ESVS operational concepts for all-weather approach and landing, and to provide quantitative and qualitative information that could be used to develop criteria for all-weather approach and landing at regional airports in India. An Enhanced Vision System (EVS) hardware prototype with a long-wave infrared sensor and a low-light CMOS camera was used to carry out a few field trials on a ground vehicle on an airport runway under different visibility conditions. A data acquisition and playback system has been developed to capture EVS sensor data (images) in time sync with the test vehicle's inertial navigation data during EVS field experiments and to play back the experimental data on the ESVS flight simulator for ESVS research and concept studies. Efforts are underway to conduct EVS flight experiments on the CSIR-NAL research aircraft HANSA in a Degraded Visual Environment (DVE).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.; de Saussure, G.; Spelt, P.F.
1988-01-01
This paper describes recent research activities at the Center for Engineering Systems Advanced Research (CESAR) in the area of sensor-based reasoning, with emphasis given to their application and implementation on our HERMIES-IIB autonomous mobile vehicle. These activities, including navigation and exploration in a-priori unknown and dynamic environments, goal recognition, vision-guided manipulation and sensor-driven machine learning, are discussed within the framework of a scenario in which an autonomous robot is asked to navigate through an unknown dynamic environment, explore, find and dock at the panel, read and understand the status of the panel's meters and dials, learn the functioning of a process control panel, and successfully manipulate the control devices of the panel to solve a maintenance emergency problem. A demonstration of the successful implementation of the algorithms on our HERMIES-IIB autonomous robot for resolution of this scenario is presented. Conclusions are drawn concerning the applicability of the methodologies to more general classes of problems, and implications for future work on sensor-driven reasoning for autonomous robots are discussed. 8 refs., 3 figs.
3-D Imaging Systems for Agricultural Applications—A Review
Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.
2016-01-01
An increase in resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560
Autonomous landing and ingress of micro-air-vehicles in urban environments based on monocular vision
NASA Astrophysics Data System (ADS)
Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire
2011-06-01
Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision
NASA Technical Reports Server (NTRS)
Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire
2011-01-01
Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
Range Image Processing for Local Navigation of an Autonomous Land Vehicle.
1986-09-01
Fragments of the thesis motivate the work by long-term exploration missions on the surfaces of planets that mankind may wish to investigate, and by enabling technologies such as artificial intelligence programming, walking technology, and vision sensors; the stated purpose of the thesis is to investigate, by simulation, range image processing for local navigation of an autonomous land vehicle, including the methodology for displaying the symbolic information generated.
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
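A minimal sketch of the monocular, optical-flow-style odometry step discussed above: track features between two ground-pointing frames with pyramidal Lucas-Kanade flow and take the median pixel displacement as the odometry increment. The synthetic frames and the median-flow summary are illustrative assumptions; a real system would scale the displacement by height above ground and fuse it with inertial measurements.

```python
# Minimal monocular visual-odometry step (illustrative): track features between two
# frames with pyramidal Lucas-Kanade optical flow and summarize the image motion.
import cv2
import numpy as np

rng = np.random.default_rng(1)
prev = (rng.random((240, 320)) * 255).astype(np.uint8)
curr = np.roll(prev, shift=(3, 5), axis=(0, 1))        # simulate camera motion

p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

good = status.ravel() == 1
flow = (p1[good] - p0[good]).reshape(-1, 2)
dx, dy = np.median(flow, axis=0)                       # robust displacement estimate
print(f"estimated image motion: dx={dx:.1f} px, dy={dy:.1f} px")
```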
Vision-based semi-autonomous outdoor robot system to reduce soldier workload
NASA Astrophysics Data System (ADS)
Richardson, Al; Rodgers, Michael H.
2001-09-01
Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions. This would greatly reduce the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.
3D environment modeling and location tracking using off-the-shelf components
NASA Astrophysics Data System (ADS)
Luke, Robert H.
2016-05-01
The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low-power and high-performance. These sensors include accelerometers, gyroscopes, barometers and, most importantly, cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high-performance platforms at low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on tracking using stereo vision and inertial navigation to determine movement of the system as well as to create a model of the environment sensed by the system.
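As a rough illustration of the stereo-vision component of such an off-the-shelf system, the following sketch computes a disparity map with OpenCV block matching and converts it to depth using an assumed focal length and baseline; the image pair and calibration values are synthetic, not from the paper.

```python
# Minimal stereo-depth sketch with OpenCV block matching: compute a disparity map
# from a rectified left/right pair, then convert disparity to depth (Z = f * B / d).
import cv2
import numpy as np

rng = np.random.default_rng(4)
left = (rng.random((240, 320)) * 255).astype(np.uint8)
right = np.roll(left, -8, axis=1)                      # simulate ~8 px disparity

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0   # fixed-point to px

fx, baseline = 400.0, 0.12                             # assumed focal length (px), baseline (m)
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = fx * baseline / disparity[valid]
if valid.any():
    print(f"median estimated depth: {np.median(depth[valid]):.2f} m")
```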
NASA Astrophysics Data System (ADS)
Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling
2017-09-01
In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to directions in a supervised mode. The images in the data sets are collected under a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted in order to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual tests, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements in the network structure and training parameters.
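The paper's 15-layer network is not specified in this abstract, but the end-to-end idea of mapping raw images to direction commands can be sketched with a small convolutional classifier; the architecture, layer sizes and training data below are assumptions for illustration only, not the authors' network.

```python
# Minimal sketch (not the paper's 15-layer network): a small CNN that maps raw RGB
# frames to discrete direction commands (left / straight / right), trained with a
# supervised cross-entropy loss. Sizes and data are placeholders.
import torch
import torch.nn as nn

class DirectionNet(nn.Module):
    def __init__(self, num_directions: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_directions)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = DirectionNet()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(8, 3, 120, 160)          # placeholder batch of camera frames
labels = torch.randint(0, 3, (8,))            # placeholder direction labels
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```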
Searching Lost People with Uavs: the System and Results of the Close-Search Project
NASA Astrophysics Data System (ADS)
Molina, P.; Colomina, I.; Vitoria, T.; Silva, P. F.; Skaloud, J.; Kornus, W.; Prades, R.; Aguilera, C.
2012-07-01
This paper will introduce the goals, concept and results of the project named CLOSE-SEARCH, which stands for 'Accurate and safe EGNOS-SoL Navigation for UAV-based low-cost Search-And-Rescue (SAR) operations'. The main goal is to integrate a medium-size, helicopter-type Unmanned Aerial Vehicle (UAV), a thermal imaging sensor and an EGNOS-based multi-sensor navigation system, including an Autonomous Integrity Monitoring (AIM) capability, to support search operations in difficult-to-access areas and/or night operations. The focus of the paper is threefold. Firstly, the operational and technical challenges of the proposed approach are discussed, such as an ultra-safe multi-sensor navigation system, the use of combined thermal and optical vision (infrared plus visible) for person recognition, and Beyond-Line-Of-Sight communications, among others. Secondly, the implementation of the integrity concept for UAV platforms is discussed through the AIM approach. Based on the potential of geodetic quality analysis and on the use of the European EGNOS system as a navigation performance starting point, AIM approaches integrity from the precision standpoint; that is, Horizontal and Vertical Protection Levels (HPLs, VPLs) are derived from a realistic precision estimation of the position parameters and compared with predefined Alert Limits (ALs). Finally, some results from the project test campaigns are described to report on particular project achievements. Together with actual Search-and-Rescue teams, the system was operated in realistic, user-chosen test scenarios. In this context, and especially focusing on the EGNOS-based UAV navigation, the AIM capability and the RGB/thermal imaging subsystem, a summary of the results is presented.
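The protection-level check at the heart of such an integrity approach can be illustrated with a simple computation: derive horizontal and vertical protection levels from an estimated position covariance and compare them with predefined alert limits. The inflation factor, the covariance and the alert limits below are placeholders, not the project's actual AIM parameters.

```python
# Illustrative protection-level check: derive HPL/VPL from an estimated ENU
# position covariance and compare with alert limits (all values assumed).
import numpy as np

P_enu = np.diag([1.2**2, 0.9**2, 2.0**2])          # assumed ENU position covariance (m^2)
k = 5.33                                            # assumed scale factor for the target risk

horiz_cov = P_enu[:2, :2]
sigma_major = np.sqrt(np.max(np.linalg.eigvalsh(horiz_cov)))  # semi-major axis std (m)
HPL = k * sigma_major
VPL = k * np.sqrt(P_enu[2, 2])

HAL, VAL = 10.0, 15.0                               # assumed alert limits (m)
print(f"HPL={HPL:.1f} m (HAL={HAL}),  VPL={VPL:.1f} m (VAL={VAL})")
print("integrity available:", HPL <= HAL and VPL <= VAL)
```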
Experimental Semiautonomous Vehicle
NASA Technical Reports Server (NTRS)
Wilcox, Brian H.; Mishkin, Andrew H.; Litwin, Todd E.; Matthies, Larry H.; Cooper, Brian K.; Nguyen, Tam T.; Gat, Erann; Gennery, Donald B.; Firby, Robert J.; Miller, David P.;
1993-01-01
Semiautonomous rover vehicle serves as testbed for evaluation of navigation and obstacle-avoidance techniques. Designed to traverse variety of terrains. Concepts developed applicable to robots for service in dangerous environments as well as to robots for exploration of remote planets. Called Robby, vehicle 4 m long and 2 m wide, with six 1-m-diameter wheels. Mass of 1,200 kg and surmounts obstacles as large as 1 1/2 m. Optimized for development of machine-vision-based strategies and equipped with complement of vision and direction sensors and image-processing computers. Front and rear cabs steer and roll with respect to centerline of vehicle. Vehicle also pivots about central axle, so wheels comply with almost any terrain.
Neural correlates of virtual route recognition in congenital blindness.
Kupers, Ron; Chebat, Daniel R; Madsen, Kristoffer H; Paulson, Olaf B; Ptito, Maurice
2010-07-13
Despite the importance of vision for spatial navigation, blind subjects retain the ability to represent spatial information and to move independently in space to localize and reach targets. However, the neural correlates of navigation in subjects lacking vision remain elusive. We therefore used functional MRI (fMRI) to explore the cortical network underlying successful navigation in blind subjects. We first trained congenitally blind and blindfolded sighted control subjects to perform a virtual navigation task with the tongue display unit (TDU), a tactile-to-vision sensory substitution device that translates a visual image into electrotactile stimulation applied to the tongue. After training, participants repeated the navigation task during fMRI. Although both groups successfully learned to use the TDU in the virtual navigation task, the brain activation patterns showed substantial differences. Blind but not blindfolded sighted control subjects activated the parahippocampus and visual cortex during navigation, areas that are recruited during topographical learning and spatial representation in sighted subjects. When the navigation task was performed under full vision in a second group of sighted participants, the activation pattern strongly resembled the one obtained in the blind when using the TDU. This suggests that in the absence of vision, cross-modal plasticity permits the recruitment of the same cortical network used for spatial navigation tasks in sighted subjects.
Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor
Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung
2017-01-01
Unmanned aerial vehicles (UAVs), which are commonly known as drones, have proved to be useful not only on battlefields where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers in the navigation and control loop, which allows for smart GPS features of drone navigation. However, there are problems if the drones operate in heterogeneous areas with no GPS signal, so it is important to perform research into the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments. PMID:28867775
NASA Astrophysics Data System (ADS)
Chow, J. C. K.
2017-09-01
In the absence of external reference position information (e.g. surveyed targets or Global Navigation Satellite Systems) Simultaneous Localization and Mapping (SLAM) has proven to be an effective method for indoor navigation. The positioning drift can be reduced with regular loop-closures and global relaxation as the backend, thus achieving a good balance between exploration and exploitation. Although vision-based systems like laser scanners are typically deployed for SLAM, these sensors are heavy, energy inefficient, and expensive, making them unattractive for wearables or smartphone applications. However, the concept of SLAM can be extended to non-optical systems such as magnetometers. Instead of matching features such as walls and furniture using some variation of the Iterative Closest Point algorithm, the local magnetic field can be matched to provide loop-closure and global trajectory updates in a Gaussian Process (GP) SLAM framework. With a MEMS-based inertial measurement unit providing a continuous trajectory, and the matching of locally distinct magnetic field maps, experimental results in this paper show that a drift-free navigation solution in an indoor environment with millimetre-level accuracy can be achieved. The GP-SLAM approach presented can be formulated as a maximum a posteriori estimation problem and it can naturally perform loop-detection, feature-to-feature distance minimization, global trajectory optimization, and magnetic field map estimation simultaneously. Spatially continuous features (i.e. smooth magnetic field signatures) are used instead of discrete feature correspondences (e.g. point-to-point) as in conventional vision-based SLAM. These position updates from the ambient magnetic field also provide enough information for calibrating the accelerometer bias and gyroscope bias in-use. The only restriction for this method is the need for magnetic disturbances (which is typically not an issue for indoor environments); however, no assumptions are required for the general motion of the sensor (e.g. static periods).
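A toy illustration of the magnetic-field matching idea underlying the GP-SLAM framework described above: fit a Gaussian Process to field magnitude over 2-D position and score how well a new measurement agrees with the map prediction, the kind of evidence a back-end could use for loop closure. The data, kernel and noise levels are synthetic assumptions, and the sketch ignores the full three-axis field and the trajectory optimization.

```python
# Toy Gaussian-Process magnetic-field map: fit field magnitude over 2-D position,
# then check how well a new measurement matches the map prediction (loop-closure cue).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X_train = rng.uniform(0, 10, size=(200, 2))                  # visited positions (m)
field = lambda X: 50 + 3 * np.sin(X[:, 0]) + 2 * np.cos(0.7 * X[:, 1])
y_train = field(X_train) + rng.normal(0, 0.2, 200)           # magnitude (uT) + noise

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(0.04))
gp.fit(X_train, y_train)

x_query = np.array([[4.0, 6.0]])                             # candidate revisit position
mu, sigma = gp.predict(x_query, return_std=True)
z = field(x_query)[0]                                        # new measurement at that spot
print(f"normalized innovation: {abs(z - mu[0]) / sigma[0]:.2f}")
```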
Real-time Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn D.; Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.
2005-01-01
Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.
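The LaRC Retinex implementation described above is patented and hardware-oriented, so it is not reproduced here; as a rough illustration of the underlying idea only, the following is a minimal single-scale Retinex sketch in Python (assuming NumPy and SciPy are available), in which the illumination estimate is a Gaussian-blurred copy of the image and the output is the log-domain difference rescaled for display. Function and parameter names are illustrative, not from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0, eps=1.0):
    """Minimal single-scale Retinex: log(image) - log(Gaussian surround).

    image : 2-D array of non-negative intensities (a single channel).
    sigma : scale of the Gaussian surround in pixels.
    eps   : offset that keeps the logarithm finite for zero-valued pixels.
    """
    img = image.astype(np.float64) + eps
    surround = gaussian_filter(img, sigma=sigma)
    retinex = np.log(img) - np.log(surround)
    # Rescale the result to the 0..255 range for display.
    retinex -= retinex.min()
    if retinex.max() > 0:
        retinex *= 255.0 / retinex.max()
    return retinex.astype(np.uint8)
```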
Real-time enhanced vision system
NASA Astrophysics Data System (ADS)
Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.
2005-05-01
Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.
Meta-image navigation augmenters for unmanned aircraft systems (MINA for UAS)
NASA Astrophysics Data System (ADS)
Çelik, Koray; Somani, Arun K.; Schnaufer, Bernard; Hwang, Patrick Y.; McGraw, Gary A.; Nadke, Jeremy
2013-05-01
GPS is a critical sensor for Unmanned Aircraft Systems (UASs) due to its accuracy, global coverage and small hardware footprint, but is subject to denial due to signal blockage or RF interference. When GPS is unavailable, position, velocity and attitude (PVA) performance from other inertial and air data sensors is not sufficient, especially for small UASs. Recently, image-based navigation algorithms have been developed to address GPS outages for UASs, since most of these platforms already include a camera as standard equipage. Performing absolute navigation with real-time aerial images requires georeferenced data, either images or landmarks, as a reference. Georeferenced imagery is readily available today, but requires a large amount of storage, whereas collections of discrete landmarks are compact but must be generated by pre-processing. An alternative, compact source of georeferenced data having large coverage area is open source vector maps from which meta-objects can be extracted for matching against real-time acquired imagery. We have developed a novel, automated approach called MINA (Meta Image Navigation Augmenters), which is a synergy of machine-vision and machine-learning algorithms for map aided navigation. As opposed to existing image map matching algorithms, MINA utilizes publicly available open-source geo-referenced vector map data, such as OpenStreetMap, in conjunction with real-time optical imagery from an on-board, monocular camera to augment the UAS navigation computer when GPS is not available. The MINA approach has been experimentally validated with both actual flight data and flight simulation data and results are presented in the paper.
Machine Vision Applied to Navigation of Confined Spaces
NASA Technical Reports Server (NTRS)
Briscoe, Jeri M.; Broderick, David J.; Howard, Ricky; Corder, Eric L.
2004-01-01
The reliability of space related assets has been emphasized after the second loss of a Space Shuttle. The intricate nature of the hardware being inspected often requires a complete disassembly to perform a thorough inspection which can be difficult as well as costly. Furthermore, it is imperative that the hardware under inspection not be altered in any other manner than that which is intended. In these cases the use of machine vision can allow for inspection with greater frequency using less intrusive methods. Such systems can provide feedback to guide, not only manually controlled instrumentation, but autonomous robotic platforms as well. This paper serves to detail a method using machine vision to provide such sensing capabilities in a compact package. A single camera is used in conjunction with a projected reference grid to ascertain precise distance measurements. The design of the sensor focuses on the use of conventional components in an unconventional manner with the goal of providing a solution for systems that do not require or cannot accommodate more complex vision systems.
Open-Loop Flight Testing of COBALT GN&C Technologies for Precise Soft Landing
NASA Technical Reports Server (NTRS)
Carson, John M., III; Amzajerdian, Farzin; Seubert, Carl R.; Restrepo, Carolina I.
2017-01-01
A terrestrial, open-loop (OL) flight test campaign of the NASA COBALT (CoOperative Blending of Autonomous Landing Technologies) platform was conducted onboard the Masten Xodiac suborbital rocket testbed, with support through the NASA Advanced Exploration Systems (AES), Game Changing Development (GCD), and Flight Opportunities (FO) Programs. The COBALT platform integrates NASA Guidance, Navigation and Control (GN&C) sensing technologies for autonomous, precise soft landing, including the Navigation Doppler Lidar (NDL) velocity and range sensor and the Lander Vision System (LVS) Terrain Relative Navigation (TRN) system. A specialized navigation filter running onboard COBALT fuses the NDL and LVS data in real time to produce a precise navigation solution that is independent of the Global Positioning System (GPS) and suitable for future, autonomous planetary landing systems. The OL campaign tested COBALT as a passive payload, with COBALT data collection and filter execution, but with the Xodiac vehicle Guidance and Control (G&C) loops closed on a Masten GPS-based navigation solution. The OL test was performed as a risk reduction activity in preparation for an upcoming 2017 closed-loop (CL) flight campaign in which Xodiac G&C will act on the COBALT navigation solution and the GPS-based navigation will serve only as a backup monitor.
[Personnel with poor vision at fighter pilot school].
Corbé, C; Menu, J P
1997-10-01
The piloting of fighter aircraft, the navigation of the space shuttle, and the piloting of a helicopter in tactical flight at an altitude of 50 metres require the use of all sensory channels: ocular, vestibular, proprioceptive, and others. The selection and follow-up of the pilots of these aerial vehicles therefore require a very complete study of medical parameters, in particular the sensory and notably the visual system. The doctors and expert researchers in aeronautical and space medicine of the Army Health Department, who are in charge of the medical supervision of flight crews, study, create, and improve tests of visual sensory exploration developed from fundamental and applied research. These tests, validated with military pilots, were applied in ophthalmology for the assessment of normal and deficient vision. A proposal to change the World Health Organisation norms applied to vision, following the application of these tests to persons with low vision, was also introduced.
Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation
NASA Technical Reports Server (NTRS)
Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri
2002-01-01
The advent of spacecraft mobile robots (free-flying sensor platforms and communications devices intended to accompany astronauts or to operate remotely on space missions, both inside and outside a spacecraft) has demanded the development of a simple and effective navigation schema. One such system under exploration involves the use of a laser-camera arrangement to predict relative positioning of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. Thus, as the robot shifts in position, the position reference frame produced by the laser images is correspondingly altered. Using normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference frame transformations.
A method of real-time detection for distant moving obstacles by monocular vision
NASA Astrophysics Data System (ADS)
Jia, Bao-zhi; Zhu, Ming
2013-12-01
In this paper, we propose an approach for detection of distant moving obstacles like cars and bicycles by a monocular camera to cooperate with ultrasonic sensors in low-cost condition. We are aiming at detecting distant obstacles that move toward our autonomous navigation car in order to give alarm and keep away from them. Method of frame differencing is applied to find obstacles after compensation of camera's ego-motion. Meanwhile, each obstacle is separated from others in an independent area and given a confidence level to indicate whether it is coming closer. The results on an open dataset and our own autonomous navigation car have proved that the method is effective for detection of distant moving obstacles in real-time.
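The abstract does not give the pipeline in detail; as a rough sketch of the general idea (compensate the camera's ego-motion with a homography estimated from matched background features, then difference consecutive frames), something like the following could be used, assuming OpenCV and NumPy. All names, thresholds, and the choice of ORB features are illustrative assumptions, not the paper's method.

```python
import cv2
import numpy as np

def moving_obstacle_mask(prev_gray, curr_gray, diff_thresh=30):
    """Ego-motion-compensated frame differencing (illustrative sketch).

    prev_gray, curr_gray : consecutive grayscale frames (uint8 arrays).
    Returns a binary mask where residual motion (potential moving obstacles)
    remains after the background motion has been compensated.
    """
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.zeros_like(curr_gray)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 4:
        return np.zeros_like(curr_gray)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # A homography dominated by the static background approximates the ego-motion.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return np.zeros_like(curr_gray)

    h, w = curr_gray.shape
    warped_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(curr_gray, warped_prev)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```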
Learning for autonomous navigation : extrapolating from underfoot to the far field
NASA Technical Reports Server (NTRS)
Matthies, Larry; Turmon, Michael; Howard, Andrew; Angelova, Anelia; Tang, Benyang; Mjolsness, Eric
2005-01-01
Autonomous off-road navigation of robotic ground vehicles has important applications on Earth and in space exploration. Progress in this domain has been retarded by the limited lookahead range of 3-D sensors and by the difficulty of preprogramming systems to understand the traversability of the wide variety of terrain they can encounter. Enabling robots to learn from experience may alleviate both of these problems. We define two paradigms for this, learning from 3-D geometry and learning from proprioception, and describe initial instantiations of them we have developed under DARPA and NASA programs. Field test results show promise for learning traversability of vegetated terrain, learning to extend the lookahead range of the vision system, and learning how slip varies with slope.
Mobile robot exploration and navigation of indoor spaces using sonar and vision
NASA Technical Reports Server (NTRS)
Kortenkamp, David; Huber, Marcus; Koss, Frank; Belding, William; Lee, Jaeho; Wu, Annie; Bidlack, Clint; Rodgers, Seth
1994-01-01
Integration of skills into an autonomous robot that performs a complex task is described. Time constraints prevented complete integration of all the described skills. The biggest problem was tuning the sensor-based region-finding algorithm to the environment involved. Since localization depended on matching regions found with the a priori map, the robot became lost very quickly. If the low level sensing of the world is not working, then high level reasoning or map making will be unsuccessful.
A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application.
Vivacqua, Rafael; Vassallo, Raquel; Martins, Felipe
2017-10-16
Autonomous driving on public roads requires precise localization within the range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost architecture of sensors and a data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.
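The dead reckoning system mentioned above is not specified in the abstract; as a minimal sketch under a simple planar unicycle model, assuming wheel-speed and yaw-rate measurements, pose propagation could look like the following. The function and parameter names are illustrative.

```python
import math

def dead_reckoning(pose, speed, yaw_rate, dt):
    """Propagate a planar pose (x, y, heading) with a unicycle model.

    pose     : (x [m], y [m], heading [rad])
    speed    : forward speed from wheel odometry [m/s]
    yaw_rate : heading rate from a gyroscope [rad/s]
    dt       : time step [s]
    """
    x, y, heading = pose
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += yaw_rate * dt
    return (x, y, heading)

# Example: integrate a short trajectory at 1 m/s with a gentle left turn.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = dead_reckoning(pose, speed=1.0, yaw_rate=0.05, dt=0.1)
```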
A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application
Vassallo, Raquel
2017-01-01
Autonomous driving on public roads requires precise localization within the range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost architecture of sensors and a data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation. PMID:29035334
Low computation vision-based navigation for a Martian rover
NASA Technical Reports Server (NTRS)
Gavin, Andrew S.; Brooks, Rodney A.
1994-01-01
Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.
The role of vision for navigation in the crown-of-thorns seastar, Acanthaster planci
Sigl, Robert; Steibl, Sebastian; Laforsch, Christian
2016-01-01
Coral reefs all over the Indo-Pacific suffer from substantial damage caused by the crown-of-thorns seastar Acanthaster planci, a voracious predator that moves on and between reefs to seek out its coral prey. Chemoreception is thought to guide A. planci. As vision was recently introduced as another sense involved in seastar navigation, we investigated the potential role of vision for navigation in A. planci. We estimated the spatial resolution and visual field of the compound eye using histological sections and morphometric measurements. Field experiments in a semi-controlled environment revealed that vision in A. planci aids in finding reef structures at a distance of at least 5 m, whereas chemoreception seems to be effective only at very short distances. Hence, vision outweighs chemoreception at intermediate distances. A. planci might use vision to navigate between reef structures and to locate coral prey, therefore improving foraging efficiency, especially when multidirectional currents and omnipresent chemical cues on the reef hamper chemoreception. PMID:27476750
Rand, Kristina M.; Creem-Regehr, Sarah H.; Thompson, William B.
2015-01-01
The ability to navigate without getting lost is an important aspect of quality of life. In five studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low-vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2 participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared to degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared to degraded vision. With degraded vision, auditory task performance was better when guided compared to unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded viewing. PMID:25706766
Adaptive multisensor fusion for planetary exploration rovers
NASA Technical Reports Server (NTRS)
Collin, Marie-France; Kumar, Krishen; Pampagnin, Luc-Henri
1992-01-01
The purpose of the adaptive multisensor fusion system currently being designed at NASA/Johnson Space Center is to provide a robotic rover with assured vision and safe navigation capabilities during robotic missions on planetary surfaces. Our approach consists of using multispectral sensing devices ranging from visible to microwave wavelengths to fulfill the needs of perception for space robotics. Based on knowledge of the illumination conditions and the sensors' capabilities, the designed perception system should automatically select the best subset of sensors and the sensing modalities that will allow the perception and interpretation of the environment. Then, based on theoretical reflectance and emittance models, the sensor data are fused to extract the physical and geometrical surface properties of the environment: surface slope, dielectric constant, temperature, and roughness. The theoretical concepts, the design, and first results of the multisensor perception system are presented.
SPARTAN: A High-Fidelity Simulation for Automated Rendezvous and Docking Applications
NASA Technical Reports Server (NTRS)
Turbe, Michael A.; McDuffie, James H.; DeKock, Brandon K.; Betts, Kevin M.; Carrington, Connie K.
2007-01-01
bd Systems (a subsidiary of SAIC) has developed the Simulation Package for Autonomous Rendezvous Test and ANalysis (SPARTAN), a high-fidelity on-orbit simulation featuring multiple six-degree-of-freedom (6DOF) vehicles. SPARTAN has been developed in a modular fashion in Matlab/Simulink to test next-generation automated rendezvous and docking guidance, navigation, and control algorithms for NASA's new Vision for Space Exploration. SPARTAN includes autonomous state-based mission manager algorithms responsible for sequencing the vehicle through various flight phases based on on-board sensor inputs and closed-loop guidance algorithms, including Lambert transfers, Clohessy-Wiltshire maneuvers, and glideslope approaches. The guidance commands are implemented using an integrated translation and attitude control system to provide 6DOF control of each vehicle in the simulation. SPARTAN also includes high-fidelity representations of a variety of absolute and relative navigation sensors that may be used for NASA missions, including radio frequency, lidar, and video-based rendezvous sensors. Proprietary navigation sensor fusion algorithms have been developed that allow the integration of these sensor measurements through an extended Kalman filter framework to create a single optimal estimate of the relative state of the vehicles. SPARTAN provides capability for Monte Carlo dispersion analysis, allowing for rigorous evaluation of the performance of the complete proposed AR&D system, including software, sensors, and mechanisms. SPARTAN also supports hardware-in-the-loop testing through conversion of the algorithms to C code using Real-Time Workshop in order to be hosted in a mission computer engineering development unit running an embedded real-time operating system. SPARTAN also contains both a runtime TCP/IP socket interface and post-processing compatibility with bdStudio, a visualization tool developed by bd Systems, allowing for intuitive evaluation of simulation results. A description of the SPARTAN architecture and capabilities is provided, along with details on the models and algorithms utilized and results from representative missions.
Guidance, Navigation, and Control Technology Assessment for Future Planetary Science Missions
NASA Technical Reports Server (NTRS)
Beauchamp, Pat; Cutts, James; Quadrelli, Marco B.; Wood, Lincoln J.; Riedel, Joseph E.; McHenry, Mike; Aung, MiMi; Cangahuala, Laureano A.; Volpe, Rich
2013-01-01
Future planetary explorations envisioned by the National Research Council's (NRC's) report titled Vision and Voyages for Planetary Science in the Decade 2013-2022, developed for NASA Science Mission Directorate (SMD) Planetary Science Division (PSD), seek to reach targets of broad scientific interest across the solar system. This goal requires new capabilities such as innovative interplanetary trajectories, precision landing, operation in close proximity to targets, precision pointing, multiple collaborating spacecraft, multiple target tours, and advanced robotic surface exploration. Advancements in Guidance, Navigation, and Control (GN&C) and Mission Design in the areas of software, algorithm development and sensors will be necessary to accomplish these future missions. This paper summarizes the key GN&C and mission design capabilities and technologies needed for future missions pursuing SMD PSD's scientific goals.
New vision system and navigation algorithm for an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Tann, Hokchhay; Shakya, Bicky; Merchen, Alex C.; Williams, Benjamin C.; Khanal, Abhishek; Zhao, Jiajia; Ahlgren, David J.
2013-12-01
Improvements were made to the intelligence algorithms of an autonomously operating ground vehicle, Q, which competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle to first navigate between two white lines on a grassy obstacle course, then pass through eight GPS waypoints, and pass through a final obstacle field. Modifications to Q included a new vision system with a more effective image processing algorithm for white line extraction. The path-planning algorithm adopted the vision system, creating smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of over 50 teams.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.
2005-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results evinced the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.
Obstacle Detection using Binocular Stereo Vision in Trajectory Planning for Quadcopter Navigation
NASA Astrophysics Data System (ADS)
Bugayong, Albert; Ramos, Manuel, Jr.
2018-02-01
Quadcopters are one of the most versatile unmanned aerial vehicles due to their vertical take-off and landing as well as hovering capabilities. This research uses the Sum of Absolute Differences (SAD) block matching algorithm for stereo vision. A complementary filter was used in sensor fusion to combine quadcopter orientation data obtained from the accelerometer and the gyroscope. PID control was implemented for the motor control, and the VFH+ algorithm was implemented for trajectory planning. Results show that the quadcopter was able to consistently actuate itself in the roll, yaw and z axes during obstacle avoidance, but was found to be inconsistent in the pitch axis during forward and backward maneuvers due to the significant noise present in the pitch-axis angle outputs compared to the roll and yaw axes.
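The filter equations are not given in the abstract; a minimal complementary-filter sketch for fusing accelerometer and gyroscope data into a roll/pitch estimate (a common textbook formulation, not necessarily the one used in the paper) is shown below. The gain value and function names are illustrative.

```python
import math

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the integrated gyro rate with an accelerometer-derived angle.

    angle       : previous angle estimate [rad]
    gyro_rate   : angular rate about the same axis [rad/s]
    accel_angle : angle inferred from gravity via the accelerometer [rad]
    dt          : sample period [s]
    alpha       : weight on the gyro path (close to 1 trusts the gyro short-term)
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

def accel_roll_pitch(ax, ay, az):
    """Roll and pitch from a (quasi-static) accelerometer reading."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch
```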
46 CFR 72.04-1 - Navigation bridge visibility.
Code of Federal Regulations, 2011 CFR
2011-10-01
... meet the following requirements: (a) The field of vision from the navigation bridge, whether the vessel... degrees. (2) From the conning position, the horizontal field of vision extends over an arc from at least...) From each bridge wing, the field of vision extends over an arc from at least 45 degrees on the opposite...
46 CFR 190.02-1 - Navigation bridge visibility.
Code of Federal Regulations, 2010 CFR
2010-10-01
... September 7, 1990, must meet the following requirements: (a) The field of vision from the navigation bridge... not exceed 5 degrees. (2) From the conning position, the horizontal field of vision extends over an...)(1) of this section. (3) From each bridge wing, the field of vision extends over an arc from at least...
46 CFR 72.04-1 - Navigation bridge visibility.
Code of Federal Regulations, 2010 CFR
2010-10-01
... meet the following requirements: (a) The field of vision from the navigation bridge, whether the vessel... degrees. (2) From the conning position, the horizontal field of vision extends over an arc from at least...) From each bridge wing, the field of vision extends over an arc from at least 45 degrees on the opposite...
46 CFR 108.801 - Navigation bridge visibility.
Code of Federal Regulations, 2010 CFR
2010-10-01
... September 7, 1990, must meet the following requirements: (a) The field of vision from the navigation bridge... not exceed 5 degrees. (2) From the conning position, the horizontal field of vision extends over an...)(1) of this section. (3) From each bridge wing, the field of vision extends over an arc from at least...
Virtual wayfinding using simulated prosthetic vision in gaze-locked viewing.
Wang, Lin; Yang, Liancheng; Dagnelie, Gislin
2008-11-01
To assess virtual maze navigation performance with simulated prosthetic vision in gaze-locked viewing, under the conditions of varying luminance contrast, background noise, and phosphene dropout. Four normally sighted subjects performed virtual maze navigation using simulated prosthetic vision in gaze-locked viewing, under five conditions of luminance contrast, background noise, and phosphene dropout. Navigation performance was measured as the time required to traverse a 10-room maze using a game controller, and the number of errors made during the trip. Navigation performance time (1) became stable after 6 to 10 trials, (2) remained similar on average at luminance contrast of 68% and 16% but had greater variation at 16%, (3) was not significantly affected by background noise, and (4) increased by 40% when 30% of phosphenes were removed. Navigation performance time and number of errors were significantly and positively correlated. Assuming that the simulated gaze-locked viewing conditions are extended to implant wearers, such prosthetic vision can be helpful for wayfinding in simple mobility tasks, though phosphene dropout may interfere with performance.
NASA Astrophysics Data System (ADS)
Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun
2012-10-01
Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captured color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier, and this paper proposes an improved feature matching between successive video frames with the use of neural network methodology in order to reduce the computation time of feature matching. The features extracted are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned distance based on the Kinect technology that can be used by the robot in order to determine the path of navigation, along with obstacle detection applications.
Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.
Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing
2017-08-23
Object tracking is a crucial research subfield in computer vision with wide applications in navigation, robotics, military systems, and other fields. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. Building on preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, View Feature Histogram (VFH) feature extraction, object model construction, and search and matching of a moving spherical target, the Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results show that the Kalman filter has the advantage of high efficiency, while the adaptive particle filter has the advantages of high robustness and high precision, when tested and validated on three kinds of scenes under conditions of partial target occlusion and interference, different moving speeds, and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.
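The paper's filter design is not reproduced in the abstract; as a minimal sketch of the kind of Kalman filter commonly used for such a task, a linear constant-velocity tracker over the 3D centroid of the clustered target could look like the following (class and parameter names, and the noise values, are illustrative assumptions).

```python
import numpy as np

class ConstantVelocityKF:
    """Linear Kalman filter with a 3-D constant-velocity motion model."""

    def __init__(self, dt, process_var=1.0, meas_var=0.05):
        self.x = np.zeros(6)                       # [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)            # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = process_var * np.eye(6)
        self.R = meas_var * np.eye(3)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]

    def update(self, z):
        """z : measured 3-D centroid of the tracked sphere."""
        y = np.asarray(z) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]
```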
Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor
Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing
2017-01-01
Object tracking is a crucial research subfield in computer vision with wide applications in navigation, robotics, military systems, and other fields. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. Building on preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, View Feature Histogram (VFH) feature extraction, object model construction, and search and matching of a moving spherical target, the Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results show that the Kalman filter has the advantage of high efficiency, while the adaptive particle filter has the advantages of high robustness and high precision, when tested and validated on three kinds of scenes under conditions of partial target occlusion and interference, different moving speeds, and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields. PMID:28832520
Guidance, Navigation and Control Innovations at the NASA Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Ericsson, Aprille Joy
2002-01-01
A viewgraph presentation on guidance navigation and control innovations at the NASA Goddard Space Flight Center is presented. The topics include: 1) NASA's vision; 2) NASA's Mission; 3) Earth Science Enterprise (ESE); 4) Guidance, Navigation and Control Division (GN&C); 5) Landsat-7 Earth Observer-1 Co-observing Program; and 6) NASA ESE Vision.
46 CFR 32.16-1 - Navigation bridge visibility-T/ALL.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., must meet the following requirements: (a) The field of vision from the navigation bridge, whether the... degrees. (2) From the conning position, the horizontal field of vision extends over an arc from at least...) From each bridge wing, the field of vision extends over an arc from at least 45 degrees on the opposite...
33 CFR 164.15 - Navigation bridge visibility.
Code of Federal Regulations, 2011 CFR
2011-07-01
.... ports must be such that the field of vision from the navigation bridge conforms as closely as possible... horizontal field of vision must extend over an arc from at least 22.5 degrees abaft the beam on one side of... of vision must extend over an arc from at least 45 degrees on the opposite bow, through dead ahead...
33 CFR 164.15 - Navigation bridge visibility.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... ports must be such that the field of vision from the navigation bridge conforms as closely as possible... horizontal field of vision must extend over an arc from at least 22.5 degrees abaft the beam on one side of... of vision must extend over an arc from at least 45 degrees on the opposite bow, through dead ahead...
Survey of computer vision technology for UAV navigation
NASA Astrophysics Data System (ADS)
Xie, Bo; Fan, Xiang; Li, Sijian
2017-11-01
Navigation based on computer vision technology, which has the characteristics of strong independence and high precision and is not susceptible to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep space probes, and underwater robots, which further stimulates research on integrated navigation algorithms based on computer vision. In China, with the development of many types of UAVs and the third phase of the lunar exploration project underway, there has been significant progress in the study of visual navigation. The paper surveys the development of computer-vision-based navigation in the field of UAV research and concludes that visual navigation is mainly applied to three aspects. (1) Acquisition of UAV navigation parameters: the parameters, including UAV attitude, position, and velocity, can be obtained from the relationship between sensor images and the carrier's attitude, the relationship between instantly matched images and reference images, and the relationship between the carrier's velocity and characteristics of sequential images. (2) Autonomous obstacle avoidance: there are many ways to achieve obstacle avoidance in UAV navigation; methods based on computer vision, including feature matching, template matching, and image-frame analysis, are mainly introduced. (3) Target tracking and positioning: using the obtained images, the UAV position is calculated with the optical flow method, the MeanShift algorithm, the CamShift algorithm, Kalman filtering, and particle filter algorithms. The paper also describes three kinds of mainstream visual systems. (1) High-speed visual systems, which use a parallel structure so that image detection and processing are carried out at high speed; such systems are applied to rapid-response applications. (2) Distributed-network visual systems, in which several discrete image acquisition sensors in different locations transmit image data to a node processor to increase the sampling rate. (3) Visual systems combined with observers, which combine image sensors with external observers to make up for the limitations of the visual equipment. To some degree, these systems overcome the shortcomings of early visual systems, including low frequency, low processing efficiency, and strong noise. Finally, the difficulties of computer-vision-based navigation in practical application are briefly discussed: (1) due to the huge workload of image operations, the real-time performance of such systems is poor; (2) due to large environmental impacts, their anti-interference ability is poor; (3) because they work only in particular environments, their adaptability is poor.
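As a small illustration of one of the tracking techniques listed above (pyramidal Lucas-Kanade optical flow), the following OpenCV sketch propagates tracked points between consecutive frames. The OpenCV calls are standard API; the surrounding wrapper and parameter choices are illustrative assumptions, not drawn from the survey.

```python
import cv2
import numpy as np

def track_points(prev_gray, curr_gray, prev_pts):
    """Propagate feature points from prev_gray to curr_gray with LK optical flow.

    prev_pts : array of shape (N, 1, 2), e.g. from cv2.goodFeaturesToTrack.
    Returns the point pairs that were tracked successfully.
    """
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return prev_pts[good], curr_pts[good]

# Typical use: seed points on the target region in the first frame, then
# re-track them frame by frame and update the target's image position.
# prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
#                                    qualityLevel=0.01, minDistance=7)
```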
Harland, S; Legge, G E; Luebker, A
1998-03-01
Most people with low vision need magnification to read. Page navigation is the process of moving a magnifier during reading. Modern electronic technology can provide many alternatives for navigating through text. This study compared reading speeds for four methods of displaying text. The four methods varied in their page-navigation demands. The closed-circuit television (CCTV) and MOUSE methods involved manual navigation. The DRIFT method (horizontally drifting text) involved no manual navigation, but did involve both smooth-pursuit and saccadic eye movements. The rapid serial visual presentation (RSVP) method involved no manual navigation, and relatively few eye movements. There were 7 normal subjects and 12 low-vision subjects (7 with central-field loss, CFL group, and 5 with central fields intact, CFI group). The subjects read 70-word passages at speeds that yielded good comprehension. Taking the CCTV reading speed as a benchmark, neither the normal nor low-vision subjects had significantly different speeds with the MOUSE method. As expected from the reduced navigational demands, normal subjects read faster with the DRIFT method (85% faster) and the RSVP method (169%). The CFI group read significantly faster with DRIFT (43%) and RSVP (38%). The CFL group showed no significant differences in reading speed for the four methods.
NASA Astrophysics Data System (ADS)
Lebedev, M. A.; Stepaniants, D. G.; Komarov, D. V.; Vygolov, O. V.; Vizilter, Yu. V.; Zheltov, S. Yu.
2014-08-01
The paper addresses a promising visualization concept related to the combination of sensor and synthetic images in order to enhance the situation awareness of a pilot during aircraft landing. A real-time algorithm is proposed for the fusion of a sensor image, acquired by an onboard camera, and a synthetic 3D image of the external view, generated in an onboard computer. The pixel correspondence between the sensor and the synthetic images is obtained by exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, the idea of which is to project the edge map onto a horizontal plane in the object space (the runway plane) and then to calculate intensity projections of the edge pixels along different directions of the intensity gradient. Experiments on simulated images show that on the base glide path the algorithm provides image fusion with pixel accuracy, even in the case of significant navigation errors.
Using neuromorphic optical sensors for spacecraft absolute and relative navigation
NASA Astrophysics Data System (ADS)
Shake, Christopher M.
We develop a novel attitude determination system (ADS) for use on nano spacecraft using neuromorphic optical sensors. The ADS intends to support nano-satellite operations by providing low-cost, low-mass, low-volume, low-power, and redundant attitude determination capabilities with quick and straightforward onboard programmability for real-time spacecraft operations. The ADS is experimentally validated with commercial-off-the-shelf optical devices that perform sensing and image processing on the same circuit board and are biologically inspired by insects' vision systems, which measure optical flow while navigating in the environment. The firmware on the devices is modified to perform the additional biologically inspired task of tracking objects and to communicate with a PC/104 form-factor embedded computer running Real Time Application Interface Linux used on a spacecraft simulator. Algorithms are developed for operations using optical flow, point tracking, and hybrid modes with the sensors, and the performance of the system in all three modes is assessed using a spacecraft simulator in the Advanced Autonomous Multiple Spacecraft (ADAMUS) laboratory at Rensselaer. An existing relative state determination method is identified to be combined with the novel ADS to create a self-contained navigation system for nano spacecraft. The performance of the method is assessed in simulation and found not to match the results of its authors when using only the conditions and equations already published. An improved target inertia tensor method is proposed as an update to the existing relative state method; it did not perform as expected, but it is presented for others to build upon.
Computer vision techniques for rotorcraft low-altitude flight
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Cheng, Victor H. L.
1988-01-01
A description is given of research that applies techniques from computer vision to automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and how research in this area relates to the guidance of other autonomous vehicles.
Detection of Obstacles in Monocular Image Sequences
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia
1997-01-01
The ability to detect and locate runways/taxiways and obstacles in images captured using on-board sensors is an essential first step in the automation of low-altitude flight, landing, takeoff, and taxiing phase of aircraft navigation. Automation of these functions under different weather and lighting situations, can be facilitated by using sensors of different modalities. An aircraft-based Synthetic Vision System (SVS), with sensors of different modalities mounted on-board, complements the current ground-based systems in functions such as detection and prevention of potential runway collisions, airport surface navigation, and landing and takeoff in all weather conditions. In this report, we address the problem of detection of objects in monocular image sequences obtained from two types of sensors, a Passive Millimeter Wave (PMMW) sensor and a video camera mounted on-board a landing aircraft. Since the sensors differ in their spatial resolution, and the quality of the images obtained using these sensors is not the same, different approaches are used for detecting obstacles depending on the sensor type. These approaches are described separately in two parts of this report. The goal of the first part of the report is to develop a method for detecting runways/taxiways and objects on the runway in a sequence of images obtained from a moving PMMW sensor. Since the sensor resolution is low and the image quality is very poor, we propose a model-based approach for detecting runways/taxiways. We use the approximate runway model and the position information of the camera provided by the Global Positioning System (GPS) to define regions of interest in the image plane to search for the image features corresponding to the runway markers. Once the runway region is identified, we use histogram-based thresholding to detect obstacles on the runway and regions outside the runway. This algorithm is tested using image sequences simulated from a single real PMMW image.
Passive Sensor Integration for Vehicle Self-Localization in Urban Traffic Environment †
Gu, Yanlei; Hsu, Li-Ta; Kamijo, Shunsuke
2015-01-01
This research proposes an accurate vehicular positioning system which can achieve lane-level performance in urban canyons. Multiple passive sensors, which include Global Navigation Satellite System (GNSS) receivers, onboard cameras and inertial sensors, are integrated in the proposed system. As the main source for the localization, the GNSS technique suffers from Non-Line-Of-Sight (NLOS) propagation and multipath effects in urban canyons. This paper proposes to employ a novel GNSS positioning technique in the integration. The employed GNSS technique reduces the multipath and NLOS effects by using the 3D building map. In addition, the inertial sensor can describe the vehicle motion, but has a drift problem as time increases. This paper develops vision-based lane detection, which is firstly used for controlling the drift of the inertial sensor. Moreover, the lane keeping and changing behaviors are extracted from the lane detection function, and further reduce the lateral positioning error in the proposed localization system. We evaluate the integrated localization system in the challenging city urban scenario. The experiments demonstrate the proposed method has sub-meter accuracy with respect to mean positioning error. PMID:26633420
Integrated polarization-dependent sensor for autonomous navigation
NASA Astrophysics Data System (ADS)
Liu, Ze; Zhang, Ran; Wang, Zhiwen; Guan, Le; Li, Bin; Chu, Jinkui
2015-01-01
Based on the navigation strategy of insects utilizing the polarized skylight, an integrated polarization-dependent sensor for autonomous navigation is presented. The navigation sensor has the features of compact structure, high precision, strong robustness, and a simple manufacture technique. The sensor is composed by integrating a complementary-metal-oxide-semiconductor sensor with a multiorientation nanowire grid polarizer. By nanoimprint lithography, the multiorientation nanowire polarizer is fabricated in one step and the alignment error is eliminated. The statistical theory is added to the interval-division algorithm to calculate the polarization angle of the incident light. The laboratory and outdoor tests for the navigation sensor are implemented and the errors of the measured angle are ±0.02 deg and ±1.3 deg, respectively. The results show that the proposed sensor has potential for application in autonomous navigation.
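The paper's interval-division and statistical algorithm is not reproduced in the abstract. A common, generic way to estimate the angle of polarization from pixel intensities measured behind nanowire polarizers oriented at 0°, 45°, 90°, and 135° uses the linear Stokes parameters, as in the sketch below; this is a standard formulation offered for illustration only, not the sensor's own algorithm.

```python
import numpy as np

def polarization_angle(i0, i45, i90, i135):
    """Angle of polarization (radians) from four analyzer orientations.

    i0, i45, i90, i135 : intensities behind polarizers at 0, 45, 90 and 135
    degrees (scalars or NumPy arrays of equal shape).
    """
    s1 = i0 - i90            # linear Stokes parameter S1
    s2 = i45 - i135          # linear Stokes parameter S2
    return 0.5 * np.arctan2(s2, s1)

def degree_of_linear_polarization(i0, i45, i90, i135):
    """Degree of linear polarization from the same four measurements."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
```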
An Effective Terrain Aided Navigation for Low-Cost Autonomous Underwater Vehicles.
Zhou, Ling; Cheng, Xianghong; Zhu, Yixian; Dai, Chenxi; Fu, Jinbo
2017-03-25
Terrain-aided navigation is a potentially powerful solution for obtaining submerged position fixes for autonomous underwater vehicles. The application of terrain-aided navigation with high-accuracy inertial navigation systems has demonstrated meter-level navigation accuracy in sea trials. However, available sensors may be limited depending on the type of the mission. Such limitations, especially for low-grade navigation sensors, not only degrade the accuracy of traditional navigation systems, but further impact the ability to successfully employ terrain-aided navigation. To address this problem, a tightly-coupled navigation is presented to successfully estimate the critical sensor errors by incorporating raw sensor data directly into an augmented navigation system. Furthermore, three-dimensional distance errors are calculated, providing measurement updates through the particle filter for absolute and bounded position error. The development of the terrain aided navigation system is elaborated for a vehicle equipped with a non-inertial-grade strapdown inertial navigation system, a 4-beam Doppler Velocity Log range sensor and a sonar altimeter. Using experimental data for navigation performance evaluation in areas with different terrain characteristics, the experiment results further show that the proposed method can be successfully applied to the low-cost AUVs and significantly improves navigation performance.
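The measurement update implied above (weighting particles by how well the measured terrain clearance agrees with a bathymetric map) can be sketched roughly as follows, assuming a `terrain_depth(x, y)` lookup into a gridded map. All names, the Gaussian likelihood, and the noise value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def terrain_update(particles, weights, measured_clearance, vehicle_depth,
                   terrain_depth, sigma=2.0):
    """Weight particles by agreement between predicted and measured clearance.

    particles          : (N, 2) array of candidate horizontal positions [m]
    weights            : (N,) particle weights (renormalized on return)
    measured_clearance : altimeter range from vehicle to seafloor [m]
    vehicle_depth      : vehicle depth below the surface [m]
    terrain_depth      : callable terrain_depth(x, y) -> seafloor depth [m]
    """
    predicted = np.array([terrain_depth(x, y) - vehicle_depth
                          for x, y in particles])
    residual = measured_clearance - predicted
    likelihood = np.exp(-0.5 * (residual / sigma) ** 2)
    weights = weights * likelihood
    weights /= np.sum(weights) + 1e-300
    return weights
```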
An Effective Terrain Aided Navigation for Low-Cost Autonomous Underwater Vehicles
Zhou, Ling; Cheng, Xianghong; Zhu, Yixian; Dai, Chenxi; Fu, Jinbo
2017-01-01
Terrain-aided navigation is a potentially powerful solution for obtaining submerged position fixes for autonomous underwater vehicles. The application of terrain-aided navigation with high-accuracy inertial navigation systems has demonstrated meter-level navigation accuracy in sea trials. However, available sensors may be limited depending on the type of the mission. Such limitations, especially for low-grade navigation sensors, not only degrade the accuracy of traditional navigation systems, but further impact the ability to successfully employ terrain-aided navigation. To address this problem, a tightly-coupled navigation is presented to successfully estimate the critical sensor errors by incorporating raw sensor data directly into an augmented navigation system. Furthermore, three-dimensional distance errors are calculated, providing measurement updates through the particle filter for absolute and bounded position error. The development of the terrain aided navigation system is elaborated for a vehicle equipped with a non-inertial-grade strapdown inertial navigation system, a 4-beam Doppler Velocity Log range sensor and a sonar altimeter. Using experimental data for navigation performance evaluation in areas with different terrain characteristics, the experiment results further show that the proposed method can be successfully applied to the low-cost AUVs and significantly improves navigation performance. PMID:28346346
Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul
2016-02-01
The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
COBALT: A GN&C Payload for Testing ALHAT Capabilities in Closed-Loop Terrestrial Rocket Flights
NASA Technical Reports Server (NTRS)
Carson, John M., III; Amzajerdian, Farzin; Hines, Glenn D.; O'Neal, Travis V.; Robertson, Edward A.; Seubert, Carl; Trawny, Nikolas
2016-01-01
The COBALT (CoOperative Blending of Autonomous Landing Technology) payload is being developed within NASA as a risk reduction activity to mature, integrate and test ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) systems targeted for infusion into near-term robotic and future human space flight missions. The initial COBALT payload instantiation is integrating the third-generation ALHAT Navigation Doppler Lidar (NDL) sensor, for ultra high-precision velocity plus range measurements, with the passive-optical Lander Vision System (LVS) that provides Terrain Relative Navigation (TRN) global-position estimates. The COBALT payload will be integrated onboard a rocket-propulsive terrestrial testbed and will provide precise navigation estimates and guidance planning during two flight test campaigns in 2017 (one open-loop and closed- loop). The NDL is targeting performance capabilities desired for future Mars and Moon Entry, Descent and Landing (EDL). The LVS is already baselined for TRN on the Mars 2020 robotic lander mission. The COBALT platform will provide NASA with a new risk-reduction capability to test integrated EDL Guidance, Navigation and Control (GN&C) components in closed-loop flight demonstrations prior to the actual mission EDL.
NASA Technical Reports Server (NTRS)
1970-01-01
The guidance and navigation requirements for unmanned missions to the outer planets, assuming constant, low thrust, ion propulsion are discussed. The navigational capability of the ground based Deep Space Network is compared to the improvements in navigational capability brought about by the addition of guidance and navigation related onboard sensors. Relevant onboard sensors include: (1) the optical onboard navigation sensor, (2) the attitude reference sensors, and (3) highly sensitive accelerometers. The totally ground based, and the combination ground based and onboard sensor systems are compared by means of the estimated errors in target planet ephemeris, and the spacecraft position with respect to the planet.
Unmanned Ground Vehicle Navigation and Coverage Hole Patching in Wireless Sensor Networks
ERIC Educational Resources Information Center
Zhang, Guyu
2013-01-01
This dissertation presents a study of an Unmanned Ground Vehicle (UGV) navigation and coverage hole patching in coordinate-free and localization-free Wireless Sensor Networks (WSNs). Navigation and coverage maintenance are related problems since coverage hole patching requires effective navigation in the sensor network environment. A…
Illumination-based synchronization of high-speed vision sensors.
Hou, Lei; Kagami, Shingo; Hashimoto, Koichi
2010-01-01
To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition time of vision sensors should be synchronized. This paper describes an illumination-based synchronization method derived from the phase-locked loop (PLL) algorithm. Incident light to a vision sensor from an intensity-modulated illumination source serves as the reference signal for synchronization. Analog and digital computation within the vision sensor forms a PLL to regulate the output signal, which corresponds to the vision frame timing, to be synchronized with the reference. Simulated and experimental results show that a 1,000 Hz frame rate vision sensor was successfully synchronized with 32 μs jitter.
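The sensor's mixed analog/digital PLL is not detailed in the abstract; as a rough software illustration of the same idea, the sketch below tracks the phase of a sinusoidally modulated reference brightness with a mixer-type phase detector, a proportional-integral loop filter, and a numerically controlled oscillator. Gains and names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def software_pll(reference, fs, f0, kp=0.2, ki=0.01):
    """Track the phase of a sinusoidally modulated reference (illustrative PLL).

    reference : sampled reference signal (e.g. measured scene brightness)
    fs        : sampling rate [Hz]
    f0        : nominal modulation frequency [Hz]
    Returns the locked phase at every sample; frame exposure would be
    triggered at a fixed point of this phase in a real sensor.
    """
    phase = 0.0
    freq = 2.0 * np.pi * f0 / fs      # nominal phase increment per sample
    integrator = 0.0
    locked_phase = np.zeros(len(reference))
    ref = reference - np.mean(reference)          # remove the DC offset
    for n, r in enumerate(ref):
        # Phase detector: mix the reference with the local oscillator.
        error = r * -np.sin(phase)
        # Proportional-integral loop filter.
        integrator += ki * error
        control = kp * error + integrator
        # Numerically controlled oscillator.
        phase = (phase + freq + control) % (2.0 * np.pi)
        locked_phase[n] = phase
    return locked_phase
```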
Hand-writing motion tracking with vision-inertial sensor fusion: calibration and error correction.
Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J
2014-08-25
The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow updating rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of the developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, the stochastic noises associated with the inertial sensor are identified using Allan Variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model.
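To make the fusion idea concrete, here is a deliberately simplified one-dimensional Kalman-filter sketch (not the paper's extended Kalman filter or its Allan-variance-derived noise model): high-rate inertial data propagate the state, and low-rate vision position fixes correct the accumulated drift. All noise values and the placeholder measurement are assumptions.

```python
import numpy as np

dt = 0.01                         # inertial sample period, 100 Hz (assumed)
x = np.zeros(2)                   # state: [position, velocity]
P = np.eye(2)
F = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])
Q = np.diag([1e-6, 1e-4])         # process noise from accelerometer errors (assumed)
H = np.array([[1.0, 0.0]])        # vision measures position only
R = np.array([[0.01]])            # vision measurement variance in m^2 (assumed)

def predict(x, P, accel):
    """Propagate the state with one inertial (accelerometer) sample."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z_vision):
    """Correct the state with one vision-derived position fix."""
    y = z_vision - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# 100 Hz inertial prediction with a 10 Hz vision correction (placeholder values).
for k in range(100):
    x, P = predict(x, P, accel=0.1)
    if k % 10 == 0:
        x, P = update(x, P, z_vision=np.array([0.0]))
```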
Vision Based Obstacle Detection in Uav Imaging
NASA Astrophysics Data System (ADS)
Badrloo, S.; Varshosaz, M.
2017-08-01
Detecting and avoiding collisions with obstacles is crucial in UAV navigation and control. Most common obstacle detection techniques are currently sensor-based. Small UAVs are not able to carry obstacle detection sensors such as radar; therefore, vision-based methods are considered, which can be divided into stereo-based and mono-based techniques. Mono-based methods are classified into two groups: foreground-background separation, and brain-inspired methods. Brain-inspired methods are highly efficient in obstacle detection; hence, this research aims to detect obstacles using brain-inspired techniques, which exploit the apparent enlargement of an obstacle as it is approached. A recent study in this field concentrated on matching SIFT points, along with a SIFT size-ratio factor and the area ratio of convex hulls in two consecutive frames, to detect obstacles. This method is not able to distinguish between near and far obstacles or obstacles in complex environments, and it is sensitive to wrongly matched points. To solve these problems, this research calculates the dist-ratio of matched points, and each point is then examined to distinguish between far and close obstacles. The results demonstrated the high efficiency of the proposed method in complex environments.
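The expansion cue behind the dist-ratio idea can be sketched as follows (an illustration of the general principle, not the authors' exact algorithm): if pairwise distances between matched feature points grow from one frame to the next, the imaged object is probably being approached. The coordinates and the 1.1 threshold are invented for the example.

```python
import itertools
import numpy as np

def dist_ratio(pts_prev, pts_curr):
    """Median ratio of pairwise distances between matched keypoints in two frames."""
    ratios = []
    for i, j in itertools.combinations(range(len(pts_prev)), 2):
        d_prev = np.linalg.norm(pts_prev[i] - pts_prev[j])
        d_curr = np.linalg.norm(pts_curr[i] - pts_curr[j])
        if d_prev > 1e-6:
            ratios.append(d_curr / d_prev)
    return float(np.median(ratios))   # the median limits the effect of wrong matches

prev = np.array([[100.0, 100.0], [140.0, 100.0], [120.0, 150.0]])
curr = prev * 1.15                     # simulated expansion as an obstacle is approached
if dist_ratio(prev, curr) > 1.1:       # threshold is an assumed tuning parameter
    print("possible approaching obstacle")
```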
Road following for blindBike: an assistive bike navigation system for low vision persons
NASA Astrophysics Data System (ADS)
Grewe, Lynne; Overell, William
2017-05-01
Road Following is a critical component of blindBike, our assistive biking application for the visually impaired. This paper describes the overall blindBike system and its goals, prominently featuring Road Following, the task of directing the user to follow the right side of the road. Unlike approaches commonly used for self-driving cars, this work does not depend on lane line markings. 2D computer vision techniques are explored to solve the Road Following problem, and statistical techniques including Gaussian Mixture Models are employed. blindBike is developed as an Android application running on a smartphone, and other sensors, including the gyroscope and GPS, are utilized. Both urban and suburban scenarios are tested and results are given. The successes and challenges faced by blindBike's Road Following module are presented along with future avenues of work.
Mobile Autonomous Humanoid Assistant
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.
2004-01-01
A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway (trademark) Robotic Mobility Platform, yielding a dexterous, maneuverable humanoid well suited for aiding human co-workers in a range of environments. This system uses stereo vision to locate human teammates and tools, and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
Vision based object pose estimation for mobile robots
NASA Technical Reports Server (NTRS)
Wu, Annie; Bidlack, Clint; Katkere, Arun; Feague, Roy; Weymouth, Terry
1994-01-01
Mobile robot navigation using visual sensors requires that a robot be able to detect landmarks and obtain pose information from a camera image. This paper presents a vision system for finding man-made markers of known size and calculating the pose of these markers. The algorithm detects and identifies the markers using a weighted pattern matching template. Geometric constraints are then used to calculate the position of the markers relative to the robot. The geometric constraints derive from the typical pose of most man-made signs, such as the sign standing vertically and having known dimensions. This system has been tested successfully on a wide range of real images. Marker detection is reliable, even in cluttered environments; under certain marker orientations, orientation estimation has proven accurate to within 2 degrees and distance estimation to within 0.3 meters.
NASA Astrophysics Data System (ADS)
Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.
2018-04-01
Vision-based navigation has become an attractive solution for autonomous navigation in planetary exploration. This paper presents our work of designing and building an autonomous, vision-based, GPS-denied unmanned vehicle and developing ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine, and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has potential for future planetary exploration.
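For readers unfamiliar with visual odometry, the sketch below shows one generic monocular frame-to-frame step using OpenCV; it is not the paper's stereo ARFM pipeline, and the intrinsic matrix and image file names are placeholders.

```python
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],     # assumed camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# RANSAC-based essential-matrix estimation rejects outlier matches, then the
# relative rotation and (unit-scale) translation between the frames are recovered.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("rotation:\n", R, "\nunit translation:", t.ravel())
```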
Baohua, Li; Wenjie, Lai; Yun, Chen; Zongming, Liu
2013-01-01
An autonomous navigation algorithm using a sensor that integrates a star sensor (FOV1) and an ultraviolet earth sensor (FOV2) is presented. Star images are sampled by FOV1, and ultraviolet earth images are sampled by FOV2. The star identification and star tracking algorithms are executed for FOV1, and the optical axis direction of FOV1 in the J2000.0 coordinate system is then calculated. The ultraviolet image of the earth is sampled by FOV2, and the center vector of the earth in the FOV2 coordinate system is calculated from the coordinates of the ultraviolet earth. The autonomous navigation data of the satellite are calculated by the integrated sensor from the optical axis direction of FOV1 and the earth center vector from FOV2. The position accuracy of the autonomous satellite navigation is improved from 1000 meters to 300 meters, and the velocity accuracy is improved from 100 m/s to 20 m/s. At the same time, the periodic sinusoidal errors of the autonomous satellite navigation are eliminated. The autonomous satellite navigation with a sensor that integrates an ultraviolet earth sensor and a star sensor is highly robust. PMID:24250261
NASA Technical Reports Server (NTRS)
Beyer, J.; Jacobus, C.; Mitchell, B.
1987-01-01
Range imagery from a laser scanner can be used to provide sufficient information for docking and obstacle avoidance procedures to be performed automatically. Three dimensional model-based computer vision algorithms in development can perform these tasks even with targets which may not be cooperative (that is, objects without special targets or markers to provide unambiguous location points). Roll, pitch and yaw of the vehicle can be taken into account as image scanning takes place, so that these can be corrected when the image is converted from egocentric to world coordinates. Other attributes of the sensor, such as the registered reflectance and texture channels, provide additional data sources for algorithm robustness. Temporal fusion of sensor images can take place in the world coordinate domain, allowing for the building of complex maps in three dimensional space.
Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors
Everding, Lukas; Conradt, Jörg
2018-01-01
In order to safely navigate and orient in their local surroundings, autonomous systems need to rapidly extract and persistently track visual features from the environment. While there are many algorithms tackling those tasks for traditional frame-based cameras, these have to deal with the fact that conventional cameras sample their environment with a fixed frequency. Most prominently, the same features have to be found in consecutive frames and corresponding features then need to be matched using elaborate techniques as any information between the two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasicontinuous stream of vision information analogous to the information stream created by the ganglion cells in mammalian retinae. All pixels of DVS operate asynchronously without a periodic sampling rate and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution achieved by the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment which is observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges, which in man-made environments often form lines, lines were chosen as the feature to track. Our method is based on detecting planes of DVS address events in x-y-t-space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer, hence it is suitable for low latency robotics. The efficacy and performance are evaluated on real-world data sets which show artificial structures in an office building, using event data for tracking and frame data for ground-truth estimation from a DAVIS240C sensor. PMID:29515386
Navigating the Rural Terrain: Educators' Visions to Promote Change
ERIC Educational Resources Information Center
Vaughn, Margaret; Saul, Melissa
2013-01-01
Advocates of rural education emphasize the need to examine supports which may promote rural educators given the challenging contexts they face. Teacher visioning has been conceptualized as a navigational tool to help sustain and promote teachers in highly challenging contexts. The current study explored 10 public school teachers from…
NASA Astrophysics Data System (ADS)
Turner, D.; Lucieer, A.; McCabe, M.; Parkes, S.; Clarke, I.
2017-08-01
In this study, we assess two push broom hyperspectral sensors as carried by small (10-15 kg) multi-rotor Unmanned Aircraft Systems (UAS). We used a Headwall Photonics micro-Hyperspec push broom sensor with 324 spectral bands (4-5 nm FWHM) and a Headwall Photonics nano-Hyperspec sensor with 270 spectral bands (6 nm FWHM), both in the VNIR spectral range (400-1000 nm). A gimbal was used to stabilise the sensors in relation to the aircraft flight dynamics, and for the micro-Hyperspec a tightly coupled dual frequency Global Navigation Satellite System (GNSS) receiver, an Inertial Measurement Unit (IMU), and a Machine Vision Camera (MVC) were used for attitude and position determination. For the nano-Hyperspec, a navigation grade GNSS system and IMU provided position and attitude data. This study presents the geometric results of one flight over a grass oval on which a dense Ground Control Point (GCP) network was deployed, the aim being to ascertain the geometric accuracy achievable with the system. Using the PARGE software package (ReSe - Remote Sensing Applications) we ortho-rectify the push broom hyperspectral image strips and then quantify the accuracy of the ortho-rectification by using the GCPs as check points. The orientation (roll, pitch, and yaw) of the sensor is measured by the IMU. Alternatively, imagery from an MVC running at 15 Hz, together with accurate camera position data, can be processed with Structure from Motion (SfM) software to obtain an estimated camera orientation. In this study, we look at which of these data sources will yield a flight strip with the highest geometric accuracy.
NASA Technical Reports Server (NTRS)
Strube, Matthew; Henry, Ross; Skeleton, Eugene; Eepoel, John Van; Gill, Nat; McKenna, Reed
2015-01-01
Since the last Hubble Servicing Mission five years ago, the Satellite Servicing Capabilities Office (SSCO) at the NASA Goddard Space Flight Center (GSFC) has been focusing on maturing the technologies necessary to robotically service orbiting legacy assets-spacecraft not necessarily designed for in-flight service. Raven, SSCO's next orbital experiment to the International Space Station (ISS), is a real-time autonomous non-cooperative relative navigation system that will mature the estimation algorithms required for rendezvous and proximity operations for a satellite-servicing mission. Raven will fly as a hosted payload as part of the Space Test Program's STP-H5 mission, which will be mounted on an external ExPRESS Logistics Carrier (ELC) and will image the many visiting vehicles arriving and departing from the ISS as targets for observation. Raven will host multiple sensors: a visible camera with a variable field of view lens, a long-wave infrared camera, and a short-wave flash lidar. This sensor suite can be pointed via a two-axis gimbal to provide a wide field of regard to track the visiting vehicles as they make their approach. Various real-time vision processing algorithms will produce range, bearing, and six degree of freedom pose measurements that will be processed in a relative navigation filter to produce an optimal relative state estimate. In this overview paper, we will cover top-level requirements, experimental concept of operations, system design, and the status of Raven integration and test activities.
Machine-Vision Aids for Improved Flight Operations
NASA Technical Reports Server (NTRS)
Menon, P. K.; Chatterji, Gano B.
1996-01-01
The development of machine vision based pilot aids to help reduce night approach and landing accidents is explored. The techniques developed are motivated by the desire to use the available information sources for navigation such as the airport lighting layout, attitude sensors and Global Positioning System to derive more precise aircraft position and orientation information. The fact that airport lighting geometry is known and that images of airport lighting can be acquired by the camera has led to the synthesis of machine vision based algorithms for runway relative aircraft position and orientation estimation. The main contribution of this research is the synthesis of seven navigation algorithms based on two broad families of solutions. The first family of solution methods consists of techniques that reconstruct the airport lighting layout from the camera image and then estimate the aircraft position components by comparing the reconstructed lighting layout geometry with the known model of the airport lighting layout geometry. The second family of methods comprises techniques that synthesize the image of the airport lighting layout using a camera model and estimate the aircraft position and orientation by comparing this image with the actual image of the airport lighting acquired by the camera. Algorithms 1 through 4 belong to the first family of solutions while Algorithms 5 through 7 belong to the second family of solutions. Algorithms 1 and 2 are parameter optimization methods, Algorithms 3 and 4 are feature correspondence methods and Algorithms 5 through 7 are Kalman filter centered algorithms. Results of computer simulation are presented to demonstrate the performance of all the seven algorithms developed.
Xian, Zhiwen; Hu, Xiaoping; Lian, Junxiang; Zhang, Lilian; Cao, Juliang; Wang, Yujie; Ma, Tao
2014-09-15
Navigation plays a vital role in our daily life. As traditional and commonly used navigation technologies, Inertial Navigation System (INS) and Global Navigation Satellite System (GNSS) can provide accurate location information, but suffer from the accumulative error of inertial sensors and cannot be used in a satellite-denied environment. The remarkable navigation ability of animals shows that the pattern of the polarization sky can be used for navigation. A bio-inspired POLarization Navigation Sensor (POLNS) is constructed to detect the polarization of skylight. Contrary to the previous approach, we utilize all the outputs of POLNS to compute the input polarization angle, based on Least Squares, which provides optimal angle estimation. In addition, a new sensor calibration algorithm is presented, in which the installation angle errors and sensor biases are taken into consideration. Derivation and implementation of our calibration algorithm are discussed in detail. To evaluate the performance of our algorithms, simulation and real data tests are done to compare our algorithms with several existing algorithms. Comparison results indicate that our algorithms are superior to the others and are more feasible and effective in practice.
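The least-squares angle recovery can be illustrated with a simple channel model, which is an assumption for this sketch rather than the POLNS hardware description: each analyzer channel output obeys I_i = a + b·cos(2α_i) + c·sin(2α_i), so the polarization angle is recovered as 0.5·atan2(c, b).

```python
import numpy as np

alpha = np.deg2rad([0.0, 45.0, 90.0, 135.0])      # analyzer orientations (assumed)
phi_true = np.deg2rad(30.0)                       # simulated incident polarization angle
I = 1.0 + 0.6 * np.cos(2.0 * (phi_true - alpha))  # simulated channel intensities

# Linear least squares for the coefficients a, b, c of the channel model.
A = np.column_stack([np.ones_like(alpha), np.cos(2 * alpha), np.sin(2 * alpha)])
a, b, c = np.linalg.lstsq(A, I, rcond=None)[0]

phi_est = 0.5 * np.arctan2(c, b)
print(np.rad2deg(phi_est))                        # close to 30 degrees
```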
Acoustic Sensors for Air and Surface Navigation Applications
Kapoor, Rohan; Ramasamy, Subramanian; Schyndel, Ron Van
2018-01-01
This paper presents the state-of-the-art and reviews the state of research on acoustic sensors used for a variety of navigation and guidance applications on air and surface vehicles. In particular, this paper focuses on echolocation, which is widely utilized in nature by certain mammals (e.g., cetaceans and bats). Although acoustic sensors have been extensively adopted in various engineering applications, their use in navigation and guidance systems is yet to be fully exploited. This technology has clear potential for applications in air and surface navigation/guidance for intelligent transport systems (ITS), especially considering air and surface operations indoors and in other environments where satellite positioning is not available. Propagation of sound in the atmosphere is discussed in detail, with all potential attenuation sources taken into account. The errors introduced in echolocation measurements due to Doppler, multipath and atmospheric effects are discussed, and an uncertainty analysis method is presented for ranging error budget prediction in acoustic navigation applications. Considering the design challenges associated with monostatic and multi-static sensor implementations and looking at the performance predictions for different possible configurations, acoustic sensors show clear promise in navigation, proximity sensing, as well as obstacle detection and tracking. The integration of acoustic sensors in multi-sensor navigation systems is also considered towards the end of the paper and a low Size, Weight and Power, and Cost (SWaP-C) sensor integration architecture is presented for possible introduction in air and surface navigation systems. PMID:29414894
Indoor integrated navigation and synchronous data acquisition method for Android smartphone
NASA Astrophysics Data System (ADS)
Hu, Chunsheng; Wei, Wenjian; Qin, Shiqiao; Wang, Xingshu; Habib, Ayman; Wang, Ruisheng
2015-08-01
Smartphones are widely used at present. Most smartphones have cameras and various sensors, such as a gyroscope, an accelerometer and a magnetometer. Indoor navigation based on the smartphone is very important and valuable. According to the features of the smartphone and of indoor navigation, a new indoor integrated navigation method is proposed, which uses the MEMS (Micro-Electro-Mechanical Systems) IMU (Inertial Measurement Unit), camera and magnetometer of the smartphone. The proposed navigation method mainly involves data acquisition, camera calibration, image measurement, IMU calibration, initial alignment, strapdown integration, zero velocity update and integrated navigation. Synchronous data acquisition of the sensors (gyroscope, accelerometer and magnetometer) and the camera is the basis of indoor navigation on the smartphone. A camera data acquisition method is introduced, which uses the camera class of Android to record images and the time of the smartphone camera. Two kinds of sensor data acquisition methods are introduced and compared. The first method records sensor data and time with the SensorManager of Android. The second method implements the open, close, data receiving and saving functions in C, and calls the sensor functions from Java through the JNI interface. A data acquisition software is developed with the JDK (Java Development Kit), Android ADT (Android Development Tools) and NDK (Native Development Kit). The software can record camera data, sensor data and time at the same time. Data acquisition experiments have been done with the developed software and a Samsung Note 2 smartphone. The experimental results show that the first method of sensor data acquisition is convenient but sometimes loses sensor data, while the second method has much better real-time performance and much less data loss. A checkerboard image is recorded, and the corner points of the checkerboard are detected with the Harris method. The sensor data of the gyroscope, accelerometer and magnetometer have been recorded for about 30 minutes. The bias stability and noise characteristics of the sensors have been analyzed. Besides indoor integrated navigation, the integrated navigation and synchronous data acquisition method can be applied to outdoor navigation.
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase
Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling
2015-01-01
In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where low GPS visibility conditions often occur, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. Nevertheless, the challenging issue is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate. PMID:26378533
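The idea of letting vision landmarks contribute extra rows to a residual-based consistency test can be sketched as follows; the geometry matrix, residual vector, noise level, and false-alarm probability are placeholders for illustration and are not the VA-RAIM design.

```python
import numpy as np
from scipy.stats import chi2

# Linearized measurement model dz = G @ dx + noise. Each row corresponds to one
# GPS satellite (unit line-of-sight components plus a clock term) or one
# calibrated vision landmark (no receiver-clock dependence, hence 0 in column 4).
G = np.array([
    [ 0.3,  0.5, 0.81, 1.0],
    [-0.6,  0.2, 0.77, 1.0],
    [ 0.1, -0.7, 0.70, 1.0],
    [ 0.5,  0.5, 0.70, 1.0],
    [-0.2, -0.4, 0.89, 0.0],   # vision landmark row
    [ 0.7, -0.1, 0.70, 0.0],   # vision landmark row
])
dz = np.array([1.2, -0.4, 0.8, 0.3, -0.5, 0.9])   # residuals in meters (invented)

dx, *_ = np.linalg.lstsq(G, dz, rcond=None)
r = dz - G @ dx                                   # post-fit residuals
test_stat = float(r @ r) / 1.0**2                 # normalized with an assumed 1 m sigma
dof = G.shape[0] - G.shape[1]
threshold = chi2.ppf(1.0 - 1e-3, dof)             # assumed false-alarm probability
print("fault detected" if test_stat > threshold else "measurements consistent")
```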
Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase.
Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling
2015-09-10
In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where there is often low GPS visibility conditions, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. Nevertheless, the challenging issue is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate.
Vision-based navigation in a dynamic environment for virtual human
NASA Astrophysics Data System (ADS)
Liu, Yan; Sun, Ji-Zhou; Zhang, Jia-Wan; Li, Ming-Chu
2004-06-01
Intelligent virtual humans are widely required in computer games, ergonomics software, virtual environments and so on. We present a vision-based behavior modeling method to realize smart navigation in a dynamic environment. This behavior model can be divided into three modules: vision, global planning and local planning. Vision is the only channel through which the smart virtual actor gets information from the outside world. The global and local planning modules then use the A* and D* algorithms to find a path for the virtual human in a dynamic environment. Finally, experiments on our test platform (Smart Human System) verify the feasibility of this behavior model.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.
2006-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results showed the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.
On Navigation Sensor Error Correction
NASA Astrophysics Data System (ADS)
Larin, V. B.
2016-01-01
The navigation problem for the simplest wheeled robotic vehicle is solved by just measuring kinematic parameters, doing without accelerometers and angular-rate sensors. It is supposed that the steerable-wheel angle sensor has a bias that must be corrected. The navigation parameters are corrected using GPS. The approach proposed regards the wheeled robot as a system with nonholonomic constraints. The performance of such a navigation system is demonstrated by way of an example.
Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok
2016-01-01
This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293
46 CFR 32.16-1 - Navigation bridge visibility-T/ALL.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 1 2010-10-01 2010-10-01 false Navigation bridge visibility-T/ALL. 32.16-1 Section 32..., AND HULL REQUIREMENTS Navigation Bridge Visibility § 32.16-1 Navigation bridge visibility-T/ALL. Each..., must meet the following requirements: (a) The field of vision from the navigation bridge, whether the...
A Bionic Camera-Based Polarization Navigation Sensor
Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai
2014-01-01
Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. This sensor has two work modes: one is a single-point measurement mode and the other is a multi-point measurement mode. An indoor calibration experiment of the sensor has been done under a beam of standard polarized light. The experiment results show that after noise reduction the accuracy of the sensor can reach up to 0.3256°. It is also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor also can measure the polarization distribution pattern when it works in multi-point measurement mode. PMID:25051029
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.
Outdoor sensor-based operation of autonomous robots has proven to be an extremely challenging problem, mainly because of the difficulties encountered when attempting to represent the many uncertainties which are always present in the real world. These uncertainties are primarily due to sensor imprecisions and unpredictability of the environment, i.e., lack of full knowledge of the environment characteristics and dynamics. Two basic principles, or philosophies, and their associated methodologies are proposed in an attempt to remedy some of these difficulties. The first principle is based on the concept of a "minimal model" for accomplishing given tasks and proposes to utilize only the minimum level of information and precision necessary to accomplish elemental functions of complex tasks. This approach diverges completely from the direction taken by most artificial vision studies, which conventionally call for crisp and detailed analysis of every available component in the perception data. The paper first reviews the basic concepts of this approach and discusses its pragmatic feasibility when embodied in a behaviorist framework. The second principle deals with implicit representation of uncertainties using Fuzzy Set Theory-based approximations and approximate reasoning, rather than explicit (crisp) representation through calculation and conventional propagation techniques. A framework which merges these principles and approaches is presented, and its application to the problem of sensor-based outdoor navigation of a mobile robot is discussed. Results of navigation experiments with a real car in actual outdoor environments are also discussed to illustrate the feasibility of the overall concept.
POSE Algorithms for Automated Docking
NASA Technical Reports Server (NTRS)
Heaton, Andrew F.; Howard, Richard T.
2011-01-01
POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
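A perspective-n-point solution is one common way to turn bearings to known target features into a relative position and attitude; the minimal sketch below uses invented target coordinates, camera intrinsics, and pixel detections, not Orbital Express or VNS data, and it is not one of the specific POSE algorithms evaluated in the paper.

```python
import cv2
import numpy as np

# Known 3-D coordinates of four target features in the target body frame (meters).
object_pts = np.array([[0.0, 0.0, 0.0],
                       [0.2, 0.0, 0.0],
                       [0.2, 0.2, 0.0],
                       [0.0, 0.2, 0.0]], dtype=np.float64)

# Corresponding detected spot centroids in the sensor image (pixels, invented).
image_pts = np.array([[320.0, 240.0],
                      [400.0, 238.0],
                      [402.0, 318.0],
                      [322.0, 320.0]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])          # assumed camera intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)               # relative attitude as a rotation matrix
print("relative position (m):", tvec.ravel())
```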
Biomimetic machine vision system.
Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael
2005-01-01
Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or resolution of captured images. Development of a real-time machine analog vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating resolution issues found in digital vision systems. In this paper, we will discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We will also discuss the process of developing an analog based sensor that mimics the characteristics of interest in the biological vision system. This paper will conclude with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.
Autonomous Aerial Refueling Ground Test Demonstration—A Sensor-in-the-Loop, Non-Tracking Method
Chen, Chao-I; Koseluk, Robert; Buchanan, Chase; Duerner, Andrew; Jeppesen, Brian; Laux, Hunter
2015-01-01
An essential capability that allows an unmanned aerial vehicle (UAV) to extend its airborne duration without increasing the size of the aircraft is autonomous aerial refueling (AAR). This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue style autonomous aerial refueling tasks by combining sensitivity adjustments of a 3D Flash LIDAR camera with computer vision based image-processing techniques. The method overcomes the inherent ambiguity issues when reconstructing 3D information from traditional 2D images by taking advantage of ready-to-use 3D point cloud data from the camera, followed by well-established computer vision techniques. These techniques include curve fitting algorithms and outlier removal with the random sample consensus (RANSAC) algorithm to reliably estimate the drogue center in 3D space, as well as to establish the relative position between the probe and the drogue. To demonstrate the feasibility of the proposed method on a real system, a ground navigation robot was designed and fabricated. Results presented in the paper show that using images acquired from a 3D Flash LIDAR camera as real-time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between the robot and the target autonomously. PMID:25970254
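In the spirit of the robust estimation step described above (though not the paper's exact curve-fitting and outlier-removal pipeline), the sketch below fits a plane to simulated flash-LIDAR returns with RANSAC and takes the inlier centroid as a rough target-center estimate; the point cloud, iteration count, and tolerance are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_plane(points, n_iter=200, tol=0.02):
    """Return a boolean inlier mask for the dominant plane in a 3-D point cloud."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Simulated returns: a planar drogue face 5 m ahead plus scattered outliers.
face = np.column_stack([rng.uniform(-0.3, 0.3, 300),
                        rng.uniform(-0.3, 0.3, 300),
                        np.full(300, 5.0)])
outliers = rng.uniform([-1.0, -1.0, 3.0], [1.0, 1.0, 7.0], size=(40, 3))
cloud = np.vstack([face, outliers])

inliers = ransac_plane(cloud)
center_estimate = cloud[inliers].mean(axis=0)   # rough drogue-center estimate (m)
print(center_estimate)
```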
Proulx, Michael J.; Gwinnutt, James; Dell’Erba, Sara; Levy-Tzedek, Shelly; de Sousa, Alexandra A.; Brown, David J.
2015-01-01
Vision is the dominant sense for perception-for-action in humans and other higher primates. Advances in sight restoration now utilize the other intact senses to provide information that is normally sensed visually through sensory substitution to replace missing visual information. Sensory substitution devices translate visual information from a sensor, such as a camera or ultrasound device, into a format that the auditory or tactile systems can detect and process, so the visually impaired can see through hearing or touch. Online control of action is essential for many daily tasks such as pointing, grasping and navigating, and adapting to a sensory substitution device successfully requires extensive learning. Here we review the research on sensory substitution for vision restoration in the context of providing the means of online control for action in the blind or blindfolded. It appears that the use of sensory substitution devices utilizes the neural visual system; this suggests the hypothesis that sensory substitution draws on the same underlying mechanisms as unimpaired visual control of action. Here we review the current state of the art for sensory substitution approaches to object recognition, localization, and navigation, and the potential these approaches have for revealing a metamodal behavioral and neural basis for the online control of action. PMID:26599473
Real-time Terrain Relative Navigation Test Results from a Relevant Environment for Mars Landing
NASA Technical Reports Server (NTRS)
Johnson, Andrew E.; Cheng, Yang; Montgomery, James; Trawny, Nikolas; Tweddle, Brent; Zheng, Jason
2015-01-01
Terrain Relative Navigation (TRN) is an on-board GN&C function that generates a position estimate of a spacecraft relative to a map of a planetary surface. When coupled with a divert, the position estimate enables access to more challenging landing sites through pin-point landing or large hazard avoidance. The Lander Vision System (LVS) is a smart sensor system that performs terrain relative navigation by matching descent camera imagery to a map of the landing site and then fusing this with inertial measurements to obtain high rate map relative position, velocity and attitude estimates. A prototype of the LVS was recently tested in a helicopter field test over Mars analog terrain at altitudes representative of Mars Entry Descent and Landing conditions. TRN ran in real-time on the LVS during the flights without human intervention or tuning. The system was able to compute estimates accurate to 40m (3 sigma) in 10 seconds on a flight like processing system. This paper describes the Mars operational test space definition, how the field test was designed to cover that operational envelope, the resulting TRN performance across the envelope and an assessment of test space coverage.
Collaborative WiFi Fingerprinting Using Sensor-Based Navigation on Smartphones.
Zhang, Peng; Zhao, Qile; Li, You; Niu, Xiaoji; Zhuang, Yuan; Liu, Jingnan
2015-07-20
This paper presents a method that trains the WiFi fingerprint database using sensor-based navigation solutions. Since micro-electromechanical systems (MEMS) sensors provide only a short-term accuracy but suffer from the accuracy degradation with time, we restrict the time length of available indoor navigation trajectories, and conduct post-processing to improve the sensor-based navigation solution. Different middle-term navigation trajectories that move in and out of an indoor area are combined to make up the database. Furthermore, we evaluate the effect of WiFi database shifts on WiFi fingerprinting using the database generated by the proposed method. Results show that the fingerprinting errors will not increase linearly according to database (DB) errors in smartphone-based WiFi fingerprinting applications.
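Once such a database exists, positioning typically reduces to nearest-neighbour matching in signal space. The sketch below is a generic weighted k-nearest-neighbour fingerprint query with invented access-point RSS values and reference positions; it is not the paper's database or matching method.

```python
import numpy as np

# Fingerprint database: reference positions (from the sensor-derived trajectories)
# and the corresponding RSS vectors for a fixed list of access points, in dBm.
db_positions = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
db_rss = np.array([[-40.0, -70.0, -80.0],
                   [-70.0, -45.0, -75.0],
                   [-80.0, -70.0, -42.0],
                   [-65.0, -80.0, -60.0]])

def knn_position(rss_query, k=2):
    """Weighted k-nearest-neighbour position estimate in signal space."""
    d = np.linalg.norm(db_rss - rss_query, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)            # inverse-distance weights
    return (w[:, None] * db_positions[nearest]).sum(axis=0) / w.sum()

print(knn_position(np.array([-42.0, -68.0, -78.0])))   # estimate near (0, 0)
```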
2012-08-01
Active Safety Technology – Environmental Understanding and Navigation with Use of Low Cost Sensors. David Simon; Bernard … (Lockheed Martin MFC, Grand Prairie, TX).
Design and testing of a multi-sensor pedestrian location and navigation platform.
Morrison, Aiden; Renaudin, Valérie; Bancroft, Jared B; Lachapelle, Gérard
2012-01-01
Navigation and location technologies are continually advancing, allowing ever higher accuracies and operation under ever more challenging conditions. The development of such technologies requires the rapid evaluation of a large number of sensors and related utilization strategies. The integration of Global Navigation Satellite Systems (GNSSs) such as the Global Positioning System (GPS) with accelerometers, gyros, barometers, magnetometers and other sensors is allowing for novel applications, but is hindered by the difficulty of testing and comparing integrated solutions using multiple sensor sets. In order to achieve compatibility and flexibility in terms of multiple sensors, an advanced adaptable platform is required. This paper describes the design and testing of the NavCube, a multi-sensor navigation, location and timing platform. The system provides a research tool for pedestrian navigation, location and body motion analysis in an unobtrusive form factor that enables in situ data collections with minimal gait and posture impact. Testing and examples of applications of the NavCube are provided.
2006-07-27
The goal of this project was to develop analytical and computational tools to make vision a viable sensor for … We have proposed the framework of stereoscopic segmentation, where multiple images of the same objects were jointly processed to extract geometry. (vision.ucla.edu, July 27, 2006)
Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Laboratory
NASA Technical Reports Server (NTRS)
Mitchell, J.; Johnston, A.; Howard, R.; Williamson, M.; Brewster, L.; Strack, D.; Cryan, S.
2007-01-01
The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as Automated Rendezvous and Docking, AR&D). The crewed versions may also perform AR&D, possibly with a different level of automation and/or autonomy, and must also provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the CEV requirements. The relatively low technology readiness of relative navigation sensors for AR&D has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce this risk by increasing technology maturation of selected relative navigation sensor technologies through testing and simulation, and to allow the CEV Project to assess the relative navigation sensors.
Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Laboratory
NASA Technical Reports Server (NTRS)
Howard, Richard T.; Williamson, Marlin L.; Johnston, Albert S.; Brewster, Linda L.; Mitchell, Jennifer D.; Cryan, Scott P.; Strack, David; Key, Kevin
2007-01-01
The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as Automated Rendezvous and Docking, AR&D). The crewed versions of the spacecraft may also perform AR&D, possibly with a different level of automation and/or autonomy, and must also provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the CEV requirements. The relatively low technology readiness of relative navigation sensors for AR&D has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce this risk by increasing technology maturation of selected relative navigation sensor technologies through testing and simulation, and to allow the CEV Project to assess the relative navigation sensors.
Detecting personnel around UGVs using stereo vision
NASA Astrophysics Data System (ADS)
Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.
2008-04-01
Detecting people around unmanned ground vehicles (UGVs) to facilitate safe operation of UGVs is one of the highest priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high resolution imagery may enable higher performance than range data alone because image appearance can complement shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.
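The link between image resolution and achievable detection range follows from the standard rectified-stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity; the values in the sketch below are assumptions, not the parameters of the paper's camera system.

```python
def stereo_depth(disparity_px, focal_px=1200.0, baseline_m=0.3):
    """Depth in meters from disparity in pixels for a rectified stereo pair."""
    return focal_px * baseline_m / disparity_px

# A one-pixel disparity error matters more at long range, which is why higher
# angular resolution extends the usable pedestrian-detection range.
for d in (36.0, 12.0, 6.0):
    print(f"disparity {d:5.1f} px -> depth {stereo_depth(d):6.2f} m")
```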
Testing and evaluation of a wearable augmented reality system for natural outdoor environments
NASA Astrophysics Data System (ADS)
Roberts, David; Menozzi, Alberico; Cook, James; Sherrill, Todd; Snarski, Stephen; Russler, Pat; Clipp, Brian; Karl, Robert; Wenger, Eric; Bennett, Matthew; Mauger, Jennifer; Church, William; Towles, Herman; MacCabe, Stephen; Webb, Jeffrey; Lupo, Jasper; Frahm, Jan-Michael; Dunn, Enrique; Leslie, Christopher; Welch, Greg
2013-05-01
This paper describes performance evaluation of a wearable augmented reality system for natural outdoor environments. Applied Research Associates (ARA), as prime integrator on the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program, is developing a soldier-worn system to provide intuitive "heads-up" visualization of tactically-relevant geo-registered icons. Our system combines a novel pose estimation capability, a helmet-mounted see-through display, and a wearable processing unit to accurately overlay geo-registered iconography (e.g., navigation waypoints, sensor points of interest, blue forces, aircraft) on the soldier's view of reality. We achieve accurate pose estimation through fusion of inertial, magnetic, GPS, terrain data, and computer-vision inputs. We leverage a helmet-mounted camera and custom computer vision algorithms to provide terrain-based measurements of absolute orientation (i.e., orientation of the helmet with respect to the earth). These orientation measurements, which leverage mountainous terrain horizon geometry and mission planning landmarks, enable our system to operate robustly in the presence of external and body-worn magnetic disturbances. Current field testing activities across a variety of mountainous environments indicate that we can achieve high icon geo-registration accuracy (<10 mrad) using these vision-based methods.
Vision-Based Georeferencing of GPR in Urban Areas
Barzaghi, Riccardo; Cazzaniga, Noemi Emanuela; Pagliari, Diana; Pinto, Livio
2016-01-01
Ground Penetrating Radar (GPR) surveying is widely used to gather accurate knowledge about the geometry and position of underground utilities. The sensor arrays need to be coupled to an accurate positioning system, like a geodetic-grade Global Navigation Satellite System (GNSS) device. However, in urban areas this approach is not always feasible because GNSS accuracy can be substantially degraded due to the presence of buildings, trees, tunnels, etc. In this work, a photogrammetric (vision-based) method for GPR georeferencing is presented. The method can be summarized in three main steps: tie point extraction from the images acquired during the survey, computation of approximate camera extrinsic parameters and finally a refinement of the parameter estimation using a rigorous implementation of the collinearity equations. A test under operational conditions is described, where accuracy of a few centimeters has been achieved. The results demonstrate that the solution was robust enough for recovering vehicle trajectories even in critical situations, such as poorly textured framed surfaces, short baselines, and low intersection angles. PMID:26805842
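The rigorous refinement step mentioned in this abstract rests on the standard photogrammetric collinearity equations. As a reference, in one common textbook convention (not necessarily the notation used by the authors), with rotation matrix M from object to image space, perspective centre (X_L, Y_L, Z_L), principal point (x_0, y_0) and focal length f, they read:

```latex
\begin{aligned}
x_a &= x_0 - f\,\frac{m_{11}(X_A - X_L) + m_{12}(Y_A - Y_L) + m_{13}(Z_A - Z_L)}
                     {m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)},\\[4pt]
y_a &= y_0 - f\,\frac{m_{21}(X_A - X_L) + m_{22}(Y_A - Y_L) + m_{23}(Z_A - Z_L)}
                     {m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)}.
\end{aligned}
```

Here (x_a, y_a) is the image observation of object point (X_A, Y_A, Z_A); the camera extrinsic parameters estimated in the second step of the method enter through M and the perspective centre.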
Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.
Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe
2017-09-01
Visual neuroprostheses are still limited and simulated prosthetic vision (SPV) is used to evaluate potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirements on visual neuroprosthetic characteristics to restore various functions such as reading, object and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current electrode arrays is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low-resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, the viewing distance was limited to 3, 6, or 9 m. In the second strategy, the rendering was not based on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environment were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. These results show that low-resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information about the environment. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
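As an illustration of the control rendering described above (not the authors' code), the frame-to-array mapping can be sketched as a block average of pixel brightness onto the electrode grid; the 15 × 18 array size comes from the abstract, everything else is an assumption:

```python
import numpy as np

def control_rendering(frame: np.ndarray, rows: int = 15, cols: int = 18) -> np.ndarray:
    """Map a grayscale image (H x W) onto a rows x cols electrode array."""
    h, w = frame.shape
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = frame[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            out[r, c] = block.mean()   # average brightness drives the electrode level
    return out
```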
Development of a GPS/INS/MAG navigation system and waypoint navigator for a VTOL UAV
NASA Astrophysics Data System (ADS)
Meister, Oliver; Mönikes, Ralf; Wendel, Jan; Frietsch, Natalie; Schlaile, Christian; Trommer, Gert F.
2007-04-01
Unmanned aerial vehicles (UAVs) can be used for versatile surveillance and reconnaissance missions. If a UAV is capable of flying automatically on a predefined path, the range of possible applications is widened significantly. This paper addresses the development of an integrated GPS/INS/MAG navigation system and a waypoint navigator for a small vertical take-off and landing (VTOL) unmanned four-rotor helicopter with a take-off weight below 1 kg. The core of the navigation system consists of low-cost inertial sensors which are continuously aided with GPS, magnetometer compass, and barometric height information. Because the yaw angle becomes unobservable during hovering flight, the integration with a magnetic compass is mandatory. This integration must be robust with respect to errors caused by the terrestrial magnetic field deviation and interference from surrounding electronic devices as well as ferrous metals. The described integration concept with a Kalman filter overcomes the problem that erroneous magnetic measurements lead to attitude errors in the roll and pitch axes. The algorithm provides long-term stable navigation information even during GPS outages, which is mandatory for the flight control of the UAV. In the second part of the paper the guidance algorithms are discussed in detail. These algorithms allow the UAV to operate in a semi-autonomous position hold mode as well as a fully autonomous waypoint mode. In the position hold mode the helicopter maintains its position regardless of wind disturbances, which eases the pilot's job during hold-and-stare missions. The autonomous waypoint navigator enables flight beyond visual range and beyond the range of the radio link. Flight test results of the implemented modes of operation are shown.
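The observability issue noted above (yaw drifting during hover without a compass) can be illustrated with a minimal scalar Kalman filter that propagates yaw with the gyro and corrects it with a magnetometer heading. This is only a sketch under assumed noise values, not the authors' filter:

```python
import math

def predict_yaw(yaw, var, gyro_z, dt, q=1e-4):
    """Propagate yaw with the rate gyro; uncertainty grows without aiding."""
    return (yaw + gyro_z * dt) % (2 * math.pi), var + q * dt

def update_yaw_with_compass(yaw, var, yaw_mag, r=math.radians(3.0) ** 2):
    """Correct yaw with a magnetometer heading measurement (wrapped innovation)."""
    innov = math.atan2(math.sin(yaw_mag - yaw), math.cos(yaw_mag - yaw))
    k = var / (var + r)                      # scalar Kalman gain
    return (yaw + k * innov) % (2 * math.pi), (1.0 - k) * var
```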
Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan
2016-04-22
The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measurement and does not add any mass to the object being measured, in contrast with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
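For readers unfamiliar with sub-pixel motion extraction, the sketch below shows the general idea using plain cross-correlation with a three-point parabolic peak fit. It stands in for, but is not, the paper's Taylor-approximation and localization refinement algorithms:

```python
import numpy as np

def subpixel_shift(ref: np.ndarray, cur: np.ndarray) -> float:
    """Estimate the 1D shift of `cur` relative to `ref` (equal-length signals)."""
    corr = np.correlate(cur - cur.mean(), ref - ref.mean(), mode="full")
    k = int(np.argmax(corr))
    peak = float(k)
    if 0 < k < len(corr) - 1:                 # refine the integer peak with a parabola
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        peak += 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return peak - (len(ref) - 1)              # shift in pixels
```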
Robust human machine interface based on head movements applied to assistive robotics.
Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano
2013-01-01
This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for an assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising since most users could perform the proposed tasks with the robotic wheelchair.
Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics
Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano
2013-01-01
This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for an assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877
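The minimum-variance fusion mentioned in both records can be illustrated with the standard inverse-variance weighting of two independent estimates of the head angle; the variances below are placeholders rather than the authors' calibrated values, and angles are assumed small enough that wrap-around can be ignored:

```python
def fuse_head_angle(theta_imu, var_imu, theta_cam, var_cam):
    """Inverse-variance (minimum-variance) fusion of two angle estimates."""
    w_cam = var_imu / (var_imu + var_cam)
    theta = (1.0 - w_cam) * theta_imu + w_cam * theta_cam
    var = var_imu * var_cam / (var_imu + var_cam)   # fused variance never exceeds either input
    return theta, var
```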
Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle
Chen, Long; Li, Qingquan; Li, Ming; Zhang, Liang; Mao, Qingzhou
2012-01-01
This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. This system uses the cooperation of multiple lasers and cameras to realize several functions necessary for autonomous navigation: road curb detection, lane detection and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on a Z-variance method. Vision-based lane detection is realized by a two-scan method combined with an image model. A Haar-like feature-based method is applied for traffic sign detection, and a SURF matching method is used for sign classification. The results of the experiments validate the effectiveness of the proposed algorithms and the whole system.
NASA Astrophysics Data System (ADS)
Uijt de Haag, Maarten; Venable, Kyle; Bezawada, Rajesh; Adami, Tony; Vadlamani, Ananth K.
2009-05-01
This paper discusses a sensor simulator/synthesizer framework that can be used to test and evaluate various sensor integration strategies for the implementation of an External Hazard Monitor (EHM) and Integrated Alerting and Notification (IAN) function as part of NASA's Integrated Intelligent Flight Deck (IIFD) project. The IIFD project, under NASA's Aviation Safety program, "pursues technologies related to the flight deck that ensure crew workload and situational awareness are both safely optimized and adapted to the future operational environment as envisioned by NextGen." Within the simulation framework, various inputs to the IIFD and its subsystems, the EHM and IAN, are simulated, synthesized from actual collected data, or played back from actual flight test sensor data. Sensors and avionics included in this framework are TCAS, ADS-B, forward-looking infrared, vision cameras, GPS, inertial navigators, EGPWS, laser detection and ranging sensors, altimeters, communication links with ATC, and weather radar. The framework is implemented in Simulink, a modeling language developed by The MathWorks. This modeling language allows for test and evaluation of various sensor and communication link configurations as well as the inclusion of feedback from the pilot on the performance of the aircraft. Specifically, this paper addresses the architecture of the simulator, the sensor model interfaces, the timing and database (environment) aspects of the sensor models, the user interface of the modeling environment, and the various avionics implementations.
UGV navigation in wireless sensor and actuator network environments
NASA Astrophysics Data System (ADS)
Zhang, Guyu; Li, Jianfeng; Duncan, Christian A.; Kanno, Jinko; Selmic, Rastko R.
2012-06-01
We consider a navigation problem in a distributed, self-organized and coordinate-free Wireless Sensor and Actuator Network (WSAN). We first present navigation algorithms that are verified using simulation results. Considering more than one destination and multiple mobile Unmanned Ground Vehicles (UGVs), we introduce a distributed solution to the Multi-UGV, Multi-Destination navigation problem. The objective of the solution to this problem is to efficiently allocate UGVs to different destinations and carry out navigation in the network environment that minimizes total travel distance. The main contribution of this paper is to develop a solution that does not attempt to localize either the UGVs or the sensor and actuator nodes. Other than some connectivity assumptions about the communication graph, we consider that no prior information about the WSAN is available. The solution presented here is distributed, and the UGV navigation is solely based on feedback from neighboring sensor and actuator nodes. One special case discussed in the paper, the Single-UGV, Multi-Destination navigation problem, is essentially equivalent to the well-known and difficult Traveling Salesman Problem (TSP). Simulation results are presented that illustrate the navigation distance traveled through the network. We also introduce an experimental testbed for the realization of coordinate-free and localization-free UGV navigation. We use the Cricket platform as the sensor and actuator network and a Pioneer 3-DX robot as the UGV. The experiments illustrate the UGV navigation in a coordinate-free WSAN environment where the UGV successfully arrives at the assigned destinations.
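The Single-UGV, Multi-Destination special case is noted to be essentially a Traveling Salesman Problem. Purely as an illustration of that connection (the paper's actual solution is distributed, coordinate-free and feedback-driven, unlike this centralized sketch), a nearest-neighbour tour heuristic looks like:

```python
def nearest_neighbour_tour(start, destinations, dist):
    """Greedy TSP-style visiting order; `dist(a, b)` is any travel-cost estimate."""
    tour, remaining, current = [start], set(destinations), start
    while remaining:
        nxt = min(remaining, key=lambda d: dist(current, d))
        tour.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return tour
```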
NASA Astrophysics Data System (ADS)
Katake, Anup; Choi, Heeyoul
2010-01-01
To enable autonomous air-to-air refueling of manned and unmanned vehicles, a robust high-speed relative navigation sensor capable of providing high-accuracy 3DOF information in diverse operating conditions is required. To help address this problem, StarVision Technologies Inc. has been developing a compact, high update rate (100 Hz), wide field-of-view (90 deg) direction and range estimation imaging sensor called VisNAV 100. The sensor is fully autonomous, requiring no communication from the tanker aircraft, and contains high-reliability embedded avionics to provide range, azimuth, elevation (a 3 degrees-of-freedom solution, 3DOF) and closing speed relative to the tanker aircraft. The sensor is capable of providing 3DOF with an error of 1% in range and 0.1 deg in azimuth/elevation up to a range of 30 m, and a 1 deg error in direction for ranges up to 200 m, at 100 Hz update rates. In this paper we discuss the algorithms that were developed in-house to enable robust beacon pattern detection, outlier rejection and 3DOF estimation in adverse conditions, and present the results of several outdoor tests. Results from the long-range single-beacon detection tests are also discussed.
Relative navigation requirements for automatic rendezvous and capture systems
NASA Technical Reports Server (NTRS)
Kachmar, Peter M.; Polutchko, Robert J.; Chu, William; Montez, Moises
1991-01-01
This paper will discuss in detail the relative navigation system requirements and sensor trade-offs for Automatic Rendezvous and Capture. Rendezvous navigation filter development will be discussed in the context of navigation performance requirements for a 'Phase One' AR&C system capability. Navigation system architectures and the resulting relative navigation performance for both cooperative and uncooperative target vehicles will be assessed. Relative navigation performance using rendezvous radar, star tracker, radiometric, laser and GPS navigation sensors during appropriate phases of the trajectory will be presented. The effect of relative navigation performance on the Integrated AR&C system performance will be addressed. Linear covariance and deterministic simulation results will be used. Evaluation of relative navigation and IGN&C system performance for several representative relative approach profiles will be presented in order to demonstrate the full range of system capabilities. A summary of the sensor requirements and recommendations for AR&C system capabilities for several programs requiring AR&C will be presented.
Manifold learning in machine vision and robotics
NASA Astrophysics Data System (ADS)
Bernstein, Alexander
2017-02-01
Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automatically extracting patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks such as understanding and classifying image content, navigating mobile autonomous robots in uncertain environments, and robot manipulation in medical robotics and computer-assisted surgery. Such data usually have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of the high-dimensional "observation space" and have a smaller intrinsic dimensionality. The generally accepted model of such data is the manifold model, according to which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; real-world high-dimensional data obtained from "natural" sources meet this model as a rule. The use of manifold learning techniques in machine vision and robotics, which discover the low-dimensional structure of high-dimensional data and result in effective algorithms for solving a large number of subject-oriented tasks, is the content of the conference plenary speech, some topics of which are covered in this paper.
Cloud Absorption Radiometer Autonomous Navigation System - CANS
NASA Technical Reports Server (NTRS)
Kahle, Duncan; Gatebe, Charles; McCune, Bill; Hellwig, Dustan
2013-01-01
CAR (cloud absorption radiometer) acquires spatial reference data from host aircraft navigation systems. This poses various problems during CAR data reduction, including navigation data format, accuracy of position data, accuracy of airframe inertial data, and navigation data rate. Incorporating its own navigation system, which included GPS (Global Positioning System), roll axis inertia and rates, and three axis acceleration, CANS expedites data reduction and increases the accuracy of the CAR end data product. CANS provides a self-contained navigation system for the CAR, using inertial reference and GPS positional information. The intent of the software application was to correct the sensor with respect to aircraft roll in real time based upon inputs from a precision navigation sensor. In addition, the navigation information (including GPS position), attitude data, and sensor position details are all streamed to a remote system for recording and later analysis. CANS comprises a commercially available inertial navigation system with integral GPS capability (Attitude Heading Reference System AHRS) integrated into the CAR support structure and data system. The unit is attached to the bottom of the tripod support structure. The related GPS antenna is located on the P-3 radome immediately above the CAR. The AHRS unit provides a RS-232 data stream containing global position and inertial attitude and velocity data to the CAR, which is recorded concurrently with the CAR data. This independence from aircraft navigation input provides for position and inertial state data that accounts for very small changes in aircraft attitude and position, sensed at the CAR location as opposed to aircraft state sensors typically installed close to the aircraft center of gravity. More accurate positional data enables quicker CAR data reduction with better resolution. The CANS software operates in two modes: initialization/calibration and operational. In the initialization/calibration mode, the software aligns the precision navigation sensors and initializes the communications interfaces with the sensor and the remote computing system. It also monitors the navigation data state for quality and ensures that the system maintains the required fidelity for attitude and positional information. In the operational mode, the software runs at 12.5 Hz and gathers the required navigation/attitude data, computes the required sensor correction values, and then commands the sensor to the required roll correction. In this manner, the sensor will stay very near to vertical at all times, greatly improving the resulting collected data and imagery. CANS greatly improves quality of resulting imagery and data collected. In addition, the software component of the system outputs a concisely formatted, high-speed data stream that can be used for further science data processing. This precision, time-stamped data also can benefit other instruments on the same aircraft platform by providing extra information from the mission flight.
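The operational mode described above amounts to a fixed-rate loop that counter-rotates the sensor mount against measured aircraft roll. A minimal sketch is given below; the reader and command functions are hypothetical placeholders, not the CANS software interface:

```python
import time

RATE_HZ = 12.5   # operational-mode rate quoted in the description

def run_roll_correction(read_ahrs_roll_deg, command_mount_roll_deg, keep_running):
    """Each cycle: read aircraft roll from the AHRS, command the opposite roll."""
    period = 1.0 / RATE_HZ
    while keep_running():
        roll = read_ahrs_roll_deg()        # aircraft roll angle, degrees
        command_mount_roll_deg(-roll)      # counter-rotate so the sensor stays near vertical
        time.sleep(period)
```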
Combining path integration and remembered landmarks when navigating without vision.
Kalia, Amy A; Schrater, Paul R; Legge, Gordon E
2013-01-01
This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information.
Combining Path Integration and Remembered Landmarks When Navigating without Vision
Kalia, Amy A.; Schrater, Paul R.; Legge, Gordon E.
2013-01-01
This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information. PMID:24039742
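The gated averaging reported in these records can be sketched with a standard maximum-likelihood cue-combination model: cues are merged by inverse-variance weighting when congruent, and the path-integration cue is kept alone when the conflict is too large. The gate and variances below are illustrative assumptions, not values fitted to the study's data:

```python
def combine_location(x_landmark, var_landmark, x_path, var_path, gate=2.0):
    """Gated inverse-variance combination of two location estimates (1D sketch)."""
    conflict = abs(x_landmark - x_path)
    if conflict > gate * (var_landmark + var_path) ** 0.5:
        return x_path, var_path               # cues judged incongruent: rely on path integration
    w = var_path / (var_landmark + var_path)  # weight on the remembered landmark
    x = w * x_landmark + (1.0 - w) * x_path
    var = var_landmark * var_path / (var_landmark + var_path)
    return x, var
```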
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Brian E.; Oppel III, Fred J.
2017-01-25
This package contains modules that model a visual sensor in Umbra. It is typically used to represent eyesight of characters in Umbra. This library also includes the sensor property, seeable, and an Active Denial sensor.
The study of stereo vision technique for the autonomous vehicle
NASA Astrophysics Data System (ADS)
Li, Pei; Wang, Xi; Wang, Jiang-feng
2015-08-01
Stereo vision technology, using two or more cameras, can recover 3D information within the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle judge pavement conditions within the field of view and measure obstacles on the road. In this paper, stereo vision techniques for obstacle avoidance measurement on an autonomous vehicle are studied, and the key techniques are analyzed and discussed. The system hardware is built and the software is debugged, and the measurement performance is then illustrated with measured data. Experiments show that a 3D reconstruction within the field of view can be obtained effectively with stereo vision, providing a basis for pavement condition judgment. Compared with the navigation radar used in unmanned vehicle measuring systems, the stereo vision system has advantages such as low cost and measurement range, and it has good application prospects.
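The range measurement underpinning such a system is the standard rectified-stereo relation Z = f·B/d (focal length f in pixels, baseline B in metres, disparity d in pixels). A minimal sketch with assumed camera parameters:

```python
def stereo_depth(disparity_px: float, focal_px: float = 1200.0, baseline_m: float = 0.30) -> float:
    """Depth of a point from its disparity in a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: with this (hypothetical) camera pair, a 24-pixel disparity is about 15 m away.
print(round(stereo_depth(24.0), 1))
```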
Method of mobile robot indoor navigation by artificial landmarks with use of computer vision
NASA Astrophysics Data System (ADS)
Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.
2018-05-01
The article describes an algorithm for mobile robot indoor navigation based on the use of visual odometry. The results of an experiment identifying errors in the calculated distance traveled caused by wheel slip are presented. It is shown that the use of computer vision allows one to correct erroneous robot coordinates with the help of artificial landmarks. The control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board computer Raspberry Pi 3. The results of the experiment on mobile robot navigation with the use of this control system are presented.
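The correction idea, dead-reckoned coordinates drifting on wheel slip and being pulled back whenever a surveyed artificial landmark is recognised, can be sketched as follows. The landmark table, blend weight and offset convention are illustrative assumptions, not the article's implementation:

```python
LANDMARKS = {17: (2.0, 5.0)}   # hypothetical landmark id -> surveyed (x, y) in metres

def correct_pose(pose_xy, landmark_id, offset_xy, weight=0.8):
    """Blend the odometry pose toward the position implied by a recognised landmark."""
    if landmark_id not in LANDMARKS:
        return pose_xy                          # no known landmark: keep dead reckoning
    lx, ly = LANDMARKS[landmark_id]
    obs_x, obs_y = lx - offset_xy[0], ly - offset_xy[1]   # camera-measured offset to the landmark
    return (weight * obs_x + (1.0 - weight) * pose_xy[0],
            weight * obs_y + (1.0 - weight) * pose_xy[1])
```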
An Agent-Based Model for Navigation Simulation in a Heterogeneous Environment
ERIC Educational Resources Information Center
Shanklin, Teresa A.
2012-01-01
Complex navigation (e.g. indoor and outdoor environments) can be studied as a system-of-systems problem. The model is made up of disparate systems that can aid a user in navigating from one location to another, utilizing whatever sensor system or information is available. By using intelligent navigation sensors and techniques (e.g. RFID, Wifi,…
NASA Astrophysics Data System (ADS)
Vinande, Eric T.
This research proposes several means to overcome challenges in the urban environment to ground vehicle global positioning system (GPS) receiver navigation performance through the integration of external sensor information. The effects of narrowband radio frequency interference and signal attenuation, both common in the urban environment, are examined with respect to receiver signal tracking processes. Low-cost microelectromechanical systems (MEMS) inertial sensors, suitable for the consumer market, are the focus of receiver augmentation as they provide an independent measure of motion and are independent of vehicle systems. A method for estimating the mounting angles of an inertial sensor cluster utilizing typical urban driving maneuvers is developed and is able to provide angular measurements within two degrees of truth. The integration of GPS and MEMS inertial sensors is developed utilizing a full state navigation filter. Appropriate statistical methods are developed to evaluate the urban environment navigation improvement due to the addition of MEMS inertial sensors. A receiver evaluation metric that combines accuracy, availability, and maximum error measurements is presented and evaluated over several drive tests. Following a description of proper drive test techniques, record and playback systems are evaluated as the optimal way of testing multiple receivers and/or integrated navigation systems in the urban environment as they simplify vehicle testing requirements.
An Automated Method for Navigation Assessment for Earth Survey Sensors Using Island Targets
NASA Technical Reports Server (NTRS)
Patt, F. S.; Woodward, R. H.; Gregg, W. W.
1997-01-01
An automated method has been developed for performing navigation assessment on satellite-based Earth sensor data. The method utilizes islands as targets which can be readily located in the sensor data and identified with reference locations. The essential elements are an algorithm for classifying the sensor data according to source, a reference catalogue of island locations, and a robust pattern-matching algorithm for island identification. The algorithms were developed and tested for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), an ocean colour sensor. This method will allow navigation error statistics to be automatically generated for large numbers of points, supporting analysis over large spatial and temporal ranges.
Automated navigation assessment for earth survey sensors using island targets
NASA Technical Reports Server (NTRS)
Patt, Frederick S.; Woodward, Robert H.; Gregg, Watson W.
1997-01-01
An automated method has been developed for performing navigation assessment on satellite-based Earth sensor data. The method utilizes islands as targets which can be readily located in the sensor data and identified with reference locations. The essential elements are an algorithm for classifying the sensor data according to source, a reference catalog of island locations, and a robust pattern-matching algorithm for island identification. The algorithms were developed and tested for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), an ocean color sensor. This method will allow navigation error statistics to be automatically generated for large numbers of points, supporting analysis over large spatial and temporal ranges.
ERIC Educational Resources Information Center
Doty, Keith L.
1999-01-01
Research on neural networks and hippocampal function demonstrating how mammals construct mental maps and develop navigation strategies is being used to create Intelligent Autonomous Mobile Robots (IAMRs). Such robots are able to recognize landmarks and navigate without "vision." (SK)
Inertial navigation sensor integrated obstacle detection system
NASA Technical Reports Server (NTRS)
Bhanu, Bir (Inventor); Roberts, Barry A. (Inventor)
1992-01-01
A system that incorporates inertial sensor information into optical flow computations to detect obstacles and to provide alternative navigational paths free from obstacles. The system is a maximally passive obstacle detection system that makes selective use of an active sensor. The active detection typically utilizes a laser. The passive sensor suite includes binocular stereo, motion stereo and variable fields-of-view. Optical flow computations involve extraction, derotation and matching of interest points from sequential frames of imagery for range interpolation of the sensed scene, which in turn provides obstacle information for purposes of safe navigation.
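Behind the derotation step is the classical decomposition of image motion into a rotation-induced part (independent of range) and a translational part that scales with inverse range. A textbook pinhole-camera sketch of that principle in normalized image coordinates is shown below; it illustrates the idea only and is not the patented system's implementation:

```python
import numpy as np

def rotational_flow(x, y, omega):
    """Flow at normalized image point (x, y) induced purely by body rotation (rad/s)."""
    wx, wy, wz = omega
    u = x * y * wx - (1.0 + x * x) * wy + y * wz
    v = (1.0 + y * y) * wx - x * y * wy - x * wz
    return np.array([u, v])

def range_from_derotated_flow(x, y, flow, omega, t):
    """Least-squares range from the translational flow residual (t is camera velocity)."""
    residual = np.asarray(flow, dtype=float) - rotational_flow(x, y, omega)
    trans = np.array([x * t[2] - t[0], y * t[2] - t[1]])   # equals Z * residual in the noise-free case
    denom = float(trans @ residual)
    return float(trans @ trans) / denom if abs(denom) > 1e-12 else float("inf")
```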
Effects of Optical Artifacts in a Laser-Based Spacecraft Navigation Sensor
NASA Technical Reports Server (NTRS)
LeCroy, Jerry E.; Howard, Richard T.; Hallmark, Dean S.
2007-01-01
Testing of the Advanced Video Guidance Sensor (AVGS) used for proximity operations navigation on the Orbital Express ASTRO spacecraft exposed several unanticipated imaging system artifacts and aberrations that required correction to meet critical navigation performance requirements. Mitigation actions are described for a number of system error sources, including lens aberration, optical train misalignment, laser speckle, target image defects, and detector nonlinearity/noise characteristics. Sensor test requirements and protocols are described, along with a summary of test results from sensor confidence tests and system performance testing.
Effects of Optical Artifacts in a Laser-Based Spacecraft Navigation Sensor
NASA Technical Reports Server (NTRS)
LeCroy, Jerry E.; Hallmark, Dean S.; Howard, Richard T.
2007-01-01
Testing of the Advanced Video Guidance Sensor (AVGS) used for proximity operations navigation on the Orbital Express ASTRO spacecraft exposed several unanticipated imaging system artifacts and aberrations that required correction to meet critical navigation performance requirements. Mitigation actions are described for a number of system error sources, including lens aberration, optical train misalignment, laser speckle, target image defects, and detector nonlinearity/noise characteristics. Sensor test requirements and protocols are described, along with a summary of test results from sensor confidence tests and system performance testing.
1999-08-01
Electro-Optic Sensor Integration Technology (NEOSIT) software application. The design is highly modular and based on COTS tools to facilitate integration with sensors, navigation and digital data sources already installed on different host
Wang, Hao; Jiang, Jie; Zhang, Guangjun
2017-04-21
The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.
Wang, Hao; Jiang, Jie; Zhang, Guangjun
2017-01-01
The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters. PMID:28430132
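One of the quantities whose accuracy is analysed above, the star centroid, is conventionally computed as an intensity-weighted centre of mass over a small window around the spot. The sketch below shows that conventional step only; the background handling is an assumption, not the paper's exact algorithm:

```python
import numpy as np

def star_centroid(window: np.ndarray):
    """Sub-pixel (x, y) centroid of a small image window containing one star spot."""
    spot = window.astype(float) - np.median(window)   # crude background removal
    spot[spot < 0] = 0.0
    total = spot.sum()
    if total == 0:
        raise ValueError("window contains no signal above background")
    ys, xs = np.mgrid[0:spot.shape[0], 0:spot.shape[1]]
    return float((xs * spot).sum() / total), float((ys * spot).sum() / total)
```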
Covariance Analysis of Vision Aided Navigation by Bootstrapping
2012-03-22
vision aided navigation. The aircraft uses its INS estimate to geolocate ground features, track those features to aid the INS, and using that aided...development of the 2-D case, including the dynamics and measurement model development, the state space representation and the use of the Kalman filter ...reference frame. This reference frame has its origin located somewhere on an A/C. Normally the origin is set at the A/C center of gravity to allow the use
Acoustic Communications and Navigation for Mobile Under-Ice Sensors
2017-02-04
Final Report, 04/02/2017: Acoustic Communications and Navigation for Mobile Under-Ice Sensors ... development and fielding of a new acoustic communications and navigation system for use on autonomous platforms (gliders and profiling floats) under the ... contact below the ice. Subject terms: Arctic Ocean, Undersea Workstations & Vehicles, Signal Processing, Navigation, Underwater Acoustics.
NA-241_Quarterly Report_SBLibby - 12.31.2017_v2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Libby, Stephen B.
This is an evaluation of candidate navigation solutions for GPS-free inspection tools that can be used in tours of large building interiors. In principle, COTS portable inertial measurement unit (IMU) sensors with satisfactory accuracy, size, weight, and power (SWaP), low error, and low bias drift can provide sufficiently accurate dead-reckoning navigation in a large building in the absence of GPS. To explore this assumption, the capabilities of representative IMU navigation sensors to meet these requirements will be evaluated, starting with a market survey and then carrying out a basic analysis of these sensors using LLNL's navigation codes.
Synthetic vision in the cockpit: 3D systems for general aviation
NASA Astrophysics Data System (ADS)
Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth
2001-08-01
Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS-based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are keeping pace with or outpacing Moore's Law with the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services, such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS and LAAS), and commercially viable mobile rendering systems puts synthetic vision well within the technological reach of general aviation. Given the appropriate navigational inputs, low-cost and power-efficient graphics solutions are capable of rendering a pilot's out-the-window view from visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on board and into the cockpit visually via the 3D display for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following the current trends.
Bio-inspired polarized skylight navigation: a review
NASA Astrophysics Data System (ADS)
Zhang, Xi; Wan, Yongqin; Li, Lijing
2015-12-01
The idea of using skylight polarization for navigation is learned from animals such as desert ants and honeybees. Various research groups have been working on the development of novel navigation systems inspired by polarized skylight. The research background of polarized skylight navigation is introduced, and the basic principles of insect navigation are explained. Then, domestic and international research progress on the skylight polarization pattern, three bio-inspired polarized skylight navigation sensors, and polarized skylight navigation itself is reviewed. Finally, the main research topics in the field of polarized skylight navigation are analyzed, and future development trends and prospects are outlined. It is believed that this review will help readers understand polarized skylight navigation and polarized skylight navigation sensors.
2011-11-01
RX-TY-TR-2011-0096-01) develops a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica...01 summarizes the development of a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica
Immune systems are not just for making you feel better: they are for controlling autonomous robots
NASA Astrophysics Data System (ADS)
Rosenblum, Mark
2005-05-01
The typical algorithm for robot autonomous navigation in off-road complex environments involves building a 3D map of the robot's surrounding environment using a 3D sensing modality such as stereo vision or active laser scanning, and generating an instantaneous plan to navigate around hazards. Although there has been steady progress using these methods, these systems suffer from several limitations that cannot be overcome with 3D sensing and planning alone. Geometric sensing alone has no ability to distinguish between compressible and non-compressible materials. As a result, these systems have difficulty in heavily vegetated environments and require sensitivity adjustments across different terrain types. On the planning side, these systems have no ability to learn from their mistakes and avoid problematic environmental situations on subsequent encounters. We have implemented an adaptive terrain classification system based on the Artificial Immune System (AIS) computational model, which is loosely based on the biological immune system, that combines various forms of imaging sensor inputs to produce a "feature labeled" image of the scene categorizing areas as benign or detrimental for autonomous robot navigation. Because of the qualities of the AIS computation model, the resulting system will be able to learn and adapt on its own through interaction with the environment by modifying its interpretation of the sensor data. The feature labeled results from the AIS analysis are inserted into a map and can then be used by a planner to generate a safe route to a goal point. The coupling of diverse visual cues with the malleable AIS computational model will lead to autonomous robotic ground vehicles that require less human intervention for deployment in novel environments and more robust operation as a result of the system's ability to improve its performance through interaction with the environment.
Observability-Based Guidance and Sensor Placement
NASA Astrophysics Data System (ADS)
Hinson, Brian T.
Control system performance is highly dependent on the quality of sensor information available. In a growing number of applications, however, the control task must be accomplished with limited sensing capabilities. This thesis addresses these types of problems from a control-theoretic point-of-view, leveraging system nonlinearities to improve sensing performance. Using measures of observability as an information quality metric, guidance trajectories and sensor distributions are designed to improve the quality of sensor information. An observability-based sensor placement algorithm is developed to compute optimal sensor configurations for a general nonlinear system. The algorithm utilizes a simulation of the nonlinear system as the source of input data, and convex optimization provides a scalable solution method. The sensor placement algorithm is applied to a study of gyroscopic sensing in insect wings. The sensor placement algorithm reveals information-rich areas on flexible insect wings, and a comparison to biological data suggests that insect wings are capable of acting as gyroscopic sensors. An observability-based guidance framework is developed for robotic navigation with limited inertial sensing. Guidance trajectories and algorithms are developed for range-only and bearing-only navigation that improve navigation accuracy. Simulations and experiments with an underwater vehicle demonstrate that the observability measure allows tuning of the navigation uncertainty.
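A common way to turn simulation data into the kind of observability measure described above is the empirical observability Gramian: perturb each initial state, record the output histories, and score sensor configurations by the resulting Gramian's spectrum. The sketch below illustrates that construction in general terms and is not the thesis' exact formulation; the perturbation size is an assumption:

```python
import numpy as np

def empirical_gramian(simulate_outputs, x0, eps=1e-3):
    """simulate_outputs(x) -> (T x m) output history; returns an n x n Gramian."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    cols = []
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        dy = simulate_outputs(x0 + dx) - simulate_outputs(x0 - dx)
        cols.append(dy.ravel() / (2.0 * eps))      # output sensitivity to state i
    Phi = np.column_stack(cols)
    return Phi.T @ Phi      # a larger, well-conditioned spectrum indicates better observability
```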
Development of the HERMIES III mobile robot research testbed at Oak Ridge National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manges, W.W.; Hamel, W.R.; Weisbin, C.R.
1988-01-01
The latest robot in the Hostile Environment Robotic Machine Intelligence Experiment Series (HERMIES) is now under development at the Center for Engineering Systems Advanced Research (CESAR) in the Oak Ridge National Laboratory. The HERMIES III robot incorporates a larger-than-human-size 7-degree-of-freedom manipulator mounted on a 2-degree-of-freedom mobile platform, including a variety of sensors and computers. The deployment of this robot represents a significant increase in research capabilities for the CESAR laboratory. The initial on-board computer capacity of the robot exceeds that of 20 Vax 11/780s. The navigation and vision algorithms under development make extensive use of the on-board NCUBE hypercube computer, while the sensors are interfaced through five VME computers running the OS-9 real-time, multitasking operating system. This paper describes the motivation, key issues, and detailed design trade-offs of implementing the first phase (basic functionality) of the HERMIES III robot. 10 refs., 7 figs.
Autonomous satellite navigation using starlight refraction angle measurements
NASA Astrophysics Data System (ADS)
Ning, Xiaolin; Wang, Longhua; Bai, Xinbei; Fang, Jiancheng
2013-05-01
An on-board autonomous navigation capability is required to reduce the operation costs and enhance the navigation performance of future satellites. Autonomous navigation by stellar refraction is a type of autonomous celestial navigation method that uses high-accuracy star sensors instead of Earth sensors to provide information regarding Earth's horizon. In previous studies, the refraction apparent height has typically been used for such navigation. However, the apparent height cannot be measured directly by a star sensor and can only be calculated by the refraction angle and an atmospheric refraction model. Therefore, additional errors are introduced by the uncertainty and nonlinearity of atmospheric refraction models, which result in reduced navigation accuracy and reliability. A new navigation method based on the direct measurement of the refraction angle is proposed to solve this problem. Techniques for the determination of the refraction angle are introduced, and a measurement model for the refraction angle is established. The method is tested and validated by simulations. When the starlight refraction height ranges from 20 to 50 km, a positioning accuracy of better than 100 m can be achieved for a low-Earth-orbit (LEO) satellite using the refraction angle, while the positioning accuracy of the traditional method using the apparent height is worse than 500 m under the same conditions. Furthermore, an analysis of the factors that affect navigation accuracy, including the measurement accuracy of the refraction angle, the number of visible refracted stars per orbit and the installation azimuth of star sensor, is presented. This method is highly recommended for small satellites in particular, as no additional hardware besides two star sensors is required.
Biomimetic MEMS sensor array for navigation and water detection
NASA Astrophysics Data System (ADS)
Futterknecht, Oliver; Macqueen, Mark O.; Karman, Salmah; Diah, S. Zaleha M.; Gebeshuber, Ille C.
2013-05-01
The focus of this study is biomimetic concept development for a MEMS sensor array for navigation and water detection. The MEMS sensor array is inspired by abstractions of the respective biological functions: polarized skylight-based navigation sensors in honeybees (Apis mellifera) and the ability of African elephants (Loxodonta africana) to detect water. The focus lies on how to navigate to, and how to detect, water sources in desert-like or remote areas. The goal is to develop a sensor that can provide both navigation cues and help in detecting nearby water sources. We basically use the information provided by the natural polarization pattern produced by sunbeams scattered within the atmosphere, combined with the capability of the honeybee's compound eye to extrapolate the navigation information. The detection device uses light-beam-reactive MEMS, which are capable of detecting the skylight polarization based on the Rayleigh sky model. For water detection we present various possible approaches to realize the sensor. In the first approach, polarization is used: moisture-saturated areas near the ground scatter and polarize light in a small but distinctly different way compared with less moist ones. Modified skylight polarization sensors (Karman, Diah and Gebeshuber, 2012) are used to visualize this small change in scattering. The second approach is inspired by the ability of elephants to detect infrasound produced by underground water reservoirs, and shall be used to determine the location of underground rivers and visualize their exact routes.
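The Rayleigh sky model mentioned above predicts, under single scattering, that the degree of linear polarization grows with angular distance from the sun and peaks 90 degrees away from it, which is what a skylight compass exploits. A minimal sketch (the maximum degree of polarization is an assumed value):

```python
import math

def rayleigh_dop(scatter_angle_deg: float, dop_max: float = 0.75) -> float:
    """Degree of linear polarization vs. angular distance from the sun (single scattering)."""
    g = math.radians(scatter_angle_deg)
    return dop_max * math.sin(g) ** 2 / (1.0 + math.cos(g) ** 2)

print(round(rayleigh_dop(90.0), 2))   # polarization is strongest 90 degrees from the sun
```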
Development of Non-contact Respiratory Monitoring System for Newborn Using a FG Vision Sensor
NASA Astrophysics Data System (ADS)
Kurami, Yoshiyuki; Itoh, Yushi; Natori, Michiya; Ohzeki, Kazuo; Aoki, Yoshimitsu
In recent years, advances in neonatal care have been strongly desired as the birth rate of low-birth-weight babies increases. The respiration of low-birth-weight babies in particular is unstable because their central nervous and respiratory functions are immature; as a result, low-birth-weight babies often develop respiratory disorders. In a NICU (Neonatal Intensive Care Unit), neonatal respiration is monitored at all times using a cardio-respiratory monitor and a pulse oximeter. These contact-type sensors can measure respiratory rate and SpO2 (Saturation of Peripheral Oxygen). However, because a contact-type sensor might damage the newborn's skin, monitoring neonatal respiration this way is a real burden. Therefore, we developed a respiratory monitoring system for newborns using a FG (Fiber Grating) vision sensor. The FG vision sensor is an active stereo vision sensor that enables non-contact 3D measurement. A respiratory waveform is calculated by detecting the vertical motion of the thoracic and abdominal region with respiration. We conducted a clinical experiment in the NICU and confirmed that the accuracy of the obtained respiratory waveform was high. Non-contact respiratory monitoring of newborns using a FG vision sensor enables a minimally invasive procedure.
Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation
Yang, Kailun; Wang, Kaiwei; Romera, Eduardo; Hu, Weijian; Sun, Dongming; Sun, Junwei; Cheng, Ruiqi; Chen, Tianxue; López, Elena
2018-01-01
Navigational assistance aims to help visually-impaired people move about the environment safely and independently. This topic is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we propose pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture, aimed at attaining efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments proves the qualified accuracy over state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectivity and versatility of the assistive framework. PMID:29748508
3D Reconfigurable MPSoC for Unmanned Spacecraft Navigation
NASA Astrophysics Data System (ADS)
Dekoulis, George
2016-07-01
This paper describes the design of a new lightweight spacecraft navigation system for unmanned space missions. The system addresses the demands for more efficient autonomous navigation in the near-Earth environment or deep space. The proposed instrumentation is directly suitable for unmanned systems operation and testing of new airborne prototypes for remote sensing applications. The system features a new sensor technology and significant improvements over existing solutions. Fluxgate type sensors have been traditionally used in unmanned defense systems such as target drones, guided missiles, rockets and satellites, however, the guidance sensors' configurations exhibit lower specifications than the presented solution. The current implementation is based on a recently developed material in a reengineered optimum sensor configuration for unprecedented low-power consumption. The new sensor's performance characteristics qualify it for spacecraft navigation applications. A major advantage of the system is the efficiency in redundancy reduction achieved in terms of both hardware and software requirements.
FPGA-based real-time embedded system for RISS/GPS integrated navigation.
Abdelfatah, Walid Farid; Georgy, Jacques; Iqbal, Umar; Noureldin, Aboelmagd
2012-01-01
Navigation algorithms integrating measurements from multi-sensor systems overcome the problems that arise from using GPS navigation systems in standalone mode. Algorithms which integrate data from a 2D low-cost reduced inertial sensor system (RISS), consisting of a gyroscope and an odometer or wheel encoders, along with a GPS receiver via a Kalman filter have proved worthy in providing a consistent and more reliable navigation solution compared to standalone GPS receivers. They have also been shown to be beneficial, especially in GPS-denied environments such as urban canyons and tunnels. The main objective of this paper is to narrow the idea-to-implementation gap that follows algorithm development by realizing a low-cost real-time embedded navigation system capable of computing the data-fused positioning solution. The role of the developed system is to synchronize the measurements from the three sensors, relative to the pulse-per-second signal generated by the GPS, after which the navigation algorithm is applied to the synchronized measurements to compute the navigation solution in real time. Employing a customizable soft-core processor on an FPGA in the kernel of the navigation system provided the flexibility for communicating with the various sensors and the computation capability required by the Kalman filter integration algorithm.
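The 2D RISS mechanization that the Kalman filter corrects with GPS reduces, in its simplest form, to propagating heading with the single gyroscope and position with the odometer speed. A minimal dead-reckoning sketch (time step and units are assumptions, and the real system also handles pitch and sensor errors):

```python
import math

def riss_propagate(x, y, heading, gyro_z, odo_speed, dt):
    """One dead-reckoning step: integrate yaw rate, then advance along the heading."""
    heading += gyro_z * dt                      # rad
    x += odo_speed * dt * math.cos(heading)     # metres
    y += odo_speed * dt * math.sin(heading)
    return x, y, heading
```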
Performance Evaluation and Requirements Assessment for Gravity Gradient Referenced Navigation
Lee, Jisun; Kwon, Jay Hyoun; Yu, Myeongjong
2015-01-01
In this study, simulation tests for gravity gradient referenced navigation (GGRN) are conducted to verify the effects of various factors such as database (DB) and sensor errors, flight altitude, DB resolution, initial errors, and measurement update rates on the navigation performance. Based on the simulation results, requirements for GGRN are established for position determination with certain target accuracies. It is found that DB and sensor errors and flight altitude have strong effects on the navigation performance. In particular, a DB and a sensor with accuracies of 0.1 E and 0.01 E, respectively, are required to determine the position more accurately than, or at a level similar to, the navigation performance of terrain referenced navigation (TRN). In most cases, the horizontal position error of GGRN is less than 100 m. However, the navigation performance of GGRN is similar to or worse than that of a pure inertial navigation system when the DB and sensor errors are 3 E or 5 E each and the flight altitude is 3000 m. Considering that the accuracy of currently available gradiometers is about 3 E or 5 E, GGRN does not show much advantage over TRN at present. However, GGRN is expected to exhibit much better performance in the near future when accurate DBs and gravity gradiometers are available. PMID:26184212
FPGA-Based Real-Time Embedded System for RISS/GPS Integrated Navigation
Abdelfatah, Walid Farid; Georgy, Jacques; Iqbal, Umar; Noureldin, Aboelmagd
2012-01-01
Navigation algorithms integrating measurements from multi-sensor systems overcome the problems that arise from using GPS navigation systems in standalone mode. Algorithms which integrate data from a 2D low-cost reduced inertial sensor system (RISS), consisting of a gyroscope and an odometer or wheel encoders, along with a GPS receiver via a Kalman filter have proved worthy in providing a consistent and more reliable navigation solution compared to standalone GPS receivers. They have also been shown to be beneficial, especially in GPS-denied environments such as urban canyons and tunnels. The main objective of this paper is to narrow the idea-to-implementation gap that follows algorithm development by realizing a low-cost real-time embedded navigation system capable of computing the data-fused positioning solution. The role of the developed system is to synchronize the measurements from the three sensors, relative to the pulse-per-second signal generated by the GPS, after which the navigation algorithm is applied to the synchronized measurements to compute the navigation solution in real time. Employing a customizable soft-core processor on an FPGA in the kernel of the navigation system provided the flexibility for communicating with the various sensors and the computation capability required by the Kalman filter integration algorithm. PMID:22368460
Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.
Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard
2011-01-01
Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused by instrumentation and environmental issues, rendering the unaided navigation solution from these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most research focuses on tackling and reducing the displacement errors, using either Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even when perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in an urban canyon environment.
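A core step in the method above is deciding when the magnetic field is quasi-static enough to trust for attitude and gyroscope-error updates. The sketch below, under an assumed window length and tolerances (not the authors' values), flags such windows from raw magnetometer samples.

import numpy as np

def quasi_static_windows(mag, win=25, mag_tol=0.5, dir_tol_deg=2.0):
    """Return start indices of windows where the magnetic field is quasi-static.

    mag: (N, 3) magnetometer samples in any consistent unit (e.g. uT).
    A window is accepted when the field-magnitude spread and the angular
    spread of the field direction both stay below the given tolerances.
    """
    starts = []
    for i in range(0, len(mag) - win, win):
        chunk = mag[i:i + win]
        norms = np.linalg.norm(chunk, axis=1)
        units = chunk / norms[:, None]
        mean_dir = units.mean(axis=0)
        mean_dir /= np.linalg.norm(mean_dir)
        ang = np.degrees(np.arccos(np.clip(units @ mean_dir, -1.0, 1.0)))
        if np.ptp(norms) < mag_tol and ang.max() < dir_tol_deg:
            starts.append(i)
    return starts

if __name__ == "__main__":
    field = np.tile([20.0, 0.0, 43.0], (500, 1))
    field[200:300] += np.random.randn(100, 3) * 3.0   # simulated magnetic perturbation
    print("quasi-static windows start at samples:", quasi_static_windows(field))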
Use of Earth’s Magnetic Field for Mitigating Gyroscope Errors Regardless of Magnetic Perturbation
Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard
2011-01-01
Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused by instrumentation and environmental issues, rendering the unaided navigation solution from these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most research focuses on tackling and reducing the displacement errors, using either Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth’s magnetic field, even when perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth’s magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in an urban canyon environment. PMID:22247672
NASA Astrophysics Data System (ADS)
Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.
2002-10-01
In recent years, intelligent autonomous mobile robots have drawn tremendous interest as service robots for serving humans or as industrial robots for replacing human workers. To carry out their tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we deal with a 3D sensing system for the environment recognition of mobile robots. For this, structured lighting is utilized for the 3D visual sensor system because of its robustness to the nature of the navigation environment and the easy extraction of the feature information of interest. The proposed sensing system is configured as a trinocular vision system composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting the 3D information is based on the optical triangulation method. By modeling the projector as another camera and using the epipolar constraints among all the cameras, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of this sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency, and accuracy of this sensor system for 3D environment sensing and recognition.
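Since the sensor recovers depth by optical triangulation, the standard baseline/disparity relation gives the essence of the range computation. A minimal sketch, assuming an ideal rectified camera (or camera-projector) pair and illustrative numbers rather than the paper's calibration:

# Depth from disparity for an ideal, rectified camera/projector pair:
#   Z = f * B / d
# where f is the focal length in pixels, B the baseline in metres and d the
# disparity in pixels between corresponding stripe points.

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# Example with assumed values: f = 800 px, B = 0.12 m, d = 16 px  ->  Z = 6.0 m
print(depth_from_disparity(800.0, 0.12, 16.0))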
Precision of computer-assisted core decompression drilling of the knee.
Beckmann, J; Goetz, J; Bäthis, H; Kalteis, T; Grifka, J; Perlick, L
2006-06-01
Core decompression by exact drilling into the ischemic areas is the treatment of choice in early stages of osteonecrosis of the femoral condyle. Computer-aided surgery might enhance the precision of the drilling and lower the radiation exposure time of both staff and patients. The aim of this study was to evaluate the precision of the fluoroscopically based VectorVision-navigation system in an in vitro model. Thirty sawbones were prepared with a defect filled up with a radiopaque gypsum sphere mimicking the osteonecrosis. 20 sawbones were drilled by guidance of an intraoperative navigation system VectorVision (BrainLAB, Munich, Germany). Ten sawbones were drilled by fluoroscopic control only. A statistically significant difference with a mean distance of 0.58 mm in the navigated group and 0.98 mm in the control group regarding the distance to the desired mid-point of the lesion could be stated. Significant difference was further found in the number of drilling corrections as well as radiation time needed. The fluoroscopic-based VectorVision-navigation system shows a high feasibility and precision of computer-guided drilling with simultaneously reduction of radiation time and therefore could be integrated into clinical routine.
GPS free navigation inspired by insects through monocular camera and inertial sensors
NASA Astrophysics Data System (ADS)
Liu, Yi; Liu, J. G.; Cao, H.; Huang, Y.
2015-12-01
Navigation without GPS or other prior knowledge of the environment has been studied for many decades. Advances in technology have made sensors compact and subtle enough to be easily integrated into micro and hand-held devices. Recently, researchers have found that bees and fruit flies navigate effectively and efficiently using optical flow information, processed only with their miniature brains. We present a navigation system inspired by the study of these insects, using a calibrated camera and other inertial sensors. The system utilizes SLAM theory and can work in many GPS-denied environments. Simulation and experimental results are presented for validation and quantification.
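The optical-flow cue that the abstract builds on can be illustrated with a single-patch Lucas-Kanade estimate. This is a generic textbook sketch in NumPy, not the authors' pipeline; the patch size and the synthetic test image are assumptions.

import numpy as np

def lucas_kanade_patch(prev, curr, y, x, half=7):
    """Estimate (dx, dy) flow of one patch between two grayscale frames.

    Classic least-squares Lucas-Kanade on image gradients; prev/curr are
    2-D float arrays, (y, x) is the patch centre, half the patch half-size.
    """
    p0 = prev[y - half:y + half + 1, x - half:x + half + 1]
    p1 = curr[y - half:y + half + 1, x - half:x + half + 1]
    Ix = np.gradient(p0, axis=1).ravel()
    Iy = np.gradient(p0, axis=0).ravel()
    It = (p1 - p0).ravel()
    A = np.stack([Ix, Iy], axis=1)
    flow, *_ = np.linalg.lstsq(A, -It, rcond=None)
    return flow  # [dx, dy] in pixels

if __name__ == "__main__":
    yy, xx = np.mgrid[0:64, 0:64]
    img = np.sin(xx / 6.0) + np.cos(yy / 9.0)     # smooth synthetic texture
    shifted = np.roll(img, shift=2, axis=1)       # scene moved 2 px horizontally
    print(lucas_kanade_patch(img, shifted, 32, 32))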
Vision-Based Target Finding and Inspection of a Ground Target Using a Multirotor UAV System.
Hinas, Ajmal; Roberts, Jonathan M; Gonzalez, Felipe
2017-12-17
In this paper, a system that uses an algorithm for target detection and navigation and a multirotor Unmanned Aerial Vehicle (UAV) for finding a ground target and inspecting it closely is presented. The system can also be used for accurate and safe delivery of payloads or spot spraying applications in site-specific crop management. A downward-looking camera attached to a multirotor is used to find the target on the ground. The UAV descends to the target and hovers above the target for a few seconds to inspect the target. A high-level decision algorithm based on an OODA (observe, orient, decide, and act) loop was developed as a solution to address the problem. Navigation of the UAV was achieved by continuously sending local position messages to the autopilot via Mavros. The proposed system performed hovering above the target in three different stages: locate, descend, and hover. The system was tested in multiple trials, in simulations and outdoor tests, from heights of 10 m to 40 m. Results show that the system is highly reliable and robust to sensor errors, drift, and external disturbance.
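The locate/descend/hover staging described above maps naturally onto a small state machine. The sketch below follows the stage names from the abstract, but the thresholds, the observation structure, and the setpoint logic are illustrative assumptions (the paper itself sends local position setpoints to the autopilot via Mavros).

from dataclasses import dataclass

@dataclass
class TargetObs:
    visible: bool
    offset_x: float   # metres, offset of target from image centre (assumed units)
    offset_y: float
    altitude: float   # metres above ground

class TargetInspector:
    """Minimal locate -> descend -> hover state machine (illustrative only)."""

    def __init__(self, hover_alt=2.0, centre_tol=0.3):
        self.state = "LOCATE"
        self.hover_alt = hover_alt
        self.centre_tol = centre_tol

    def step(self, obs: TargetObs):
        """Return a local-position setpoint offset (dx, dy, dz) for the autopilot."""
        if self.state == "LOCATE":
            if obs.visible:
                self.state = "DESCEND"
            return (0.0, 0.0, 0.0)            # keep searching at current altitude
        if self.state == "DESCEND":
            centred = abs(obs.offset_x) < self.centre_tol and abs(obs.offset_y) < self.centre_tol
            if centred and obs.altitude <= self.hover_alt:
                self.state = "HOVER"
            dz = -0.5 if centred else 0.0     # descend only while centred over the target
            return (obs.offset_x, obs.offset_y, dz)
        # HOVER: hold position above the target for inspection
        return (obs.offset_x, obs.offset_y, 0.0)

inspector = TargetInspector()
print(inspector.step(TargetObs(True, 0.1, -0.2, 10.0)), inspector.state)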
NASA Astrophysics Data System (ADS)
Beaudoin, Yanick; Desbiens, André; Gagnon, Eric; Landry, René
2018-01-01
The navigation system of a satellite launcher is of paramount importance. In order to correct the trajectory of the launcher, the position, velocity and attitude must be known with the best possible precision. In this paper, the observability of four navigation solutions is investigated. The first one is the INS/GPS couple. Then, attitude reference sensors, such as magnetometers, are added to the INS/GPS solution. The authors have already demonstrated that the reference trajectory could be used to improve the navigation performance. This approach is added to the two previously mentioned navigation systems. For each navigation solution, the observability is analyzed with different sensor error models. First, sensor biases are neglected. Then, sensor biases are modelled as random walks and as first order Markov processes. The observability is tested with the rank and condition number of the observability matrix, the time evolution of the covariance matrix and sensitivity to measurement outlier tests. The covariance matrix is exploited to evaluate the correlation between states in order to detect structural unobservability problems. Finally, when an unobservable subspace is detected, the result is verified with theoretical analysis of the navigation equations. The results show that evaluating only the observability of a model does not guarantee the ability of the aiding sensors to correct the INS estimates within the mission time. The analysis of the covariance matrix time evolution could be a powerful tool to detect this situation; however, in some cases, the problem is only revealed with a sensitivity to measurement outlier test. None of the tested solutions provide GPS position bias observability. For the considered mission, the modelling of the sensor biases as random walks or Markov processes gives equivalent results. Relying on the reference trajectory can improve the precision of the roll estimates. But, in the context of a satellite launcher, the roll estimation error and gyroscope bias are only observable if attitude reference sensors are present.
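The rank and condition-number tests mentioned above can be illustrated on a small linear model. The sketch builds the observability matrix O = [C; CA; ...; CA^(n-1)] for an assumed toy position/velocity/bias system, not the launcher's navigation model.

import numpy as np

def observability_matrix(A, C):
    """Stack [C; CA; CA^2; ...; CA^(n-1)] for the pair (A, C)."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Toy example: 1-D position/velocity/accelerometer-bias model, position measured.
dt = 0.1
A = np.array([[1.0, dt, 0.5 * dt**2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
C = np.array([[1.0, 0.0, 0.0]])

O = observability_matrix(A, C)
print("rank:", np.linalg.matrix_rank(O), "of", A.shape[0])
print("condition number:", np.linalg.cond(O))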
Multispectral image-fused head-tracked vision system (HTVS) for driving applications
NASA Astrophysics Data System (ADS)
Reese, Colin E.; Bender, Edward J.
2001-08-01
Current military thermal driver vision systems consist of a single Long Wave Infrared (LWIR) sensor mounted on a manually operated gimbal, which is normally locked forward during driving. The sensor video imagery is presented on a large area flat panel display for direct view. The Night Vision and Electronics Sensors Directorate and Kaiser Electronics are cooperatively working to develop a driver's Head Tracked Vision System (HTVS) which directs dual waveband sensors in a more natural head-slewed imaging mode. The HTVS consists of LWIR and image intensified sensors, a high-speed gimbal, a head mounted display, and a head tracker. The first prototype systems have been delivered and have undergone preliminary field trials to characterize the operational benefits of a head tracked sensor system for tactical military ground applications. This investigation will address the advantages of head tracked vs. fixed sensor systems regarding peripheral sightings of threats, road hazards, and nearby vehicles. An additional thrust will investigate the degree to which additive (A+B) fusion of LWIR and image intensified sensors enhances overall driving performance. Typically, LWIR sensors are better for detecting threats, while image intensified sensors provide more natural scene cues, such as shadows and texture. This investigation will examine the degree to which the fusion of these two sensors enhances the driver's overall situational awareness.
46 CFR 92.03-1 - Navigation bridge visibility.
Code of Federal Regulations, 2010 CFR
2010-10-01
... after September 7, 1990, must meet the following requirements: (a) The field of vision from the... obstruction must not exceed 5 degrees. (2) From the conning position, the horizontal field of vision extends... paragraph (a)(1) of this section. (3) From each bridge wing, the field of vision extends over an arc from at...
Maintaining a Cognitive Map in Darkness: The Need to Fuse Boundary Knowledge with Path Integration
Cheung, Allen; Ball, David; Milford, Michael; Wyeth, Gordon; Wiles, Janet
2012-01-01
Spatial navigation requires the processing of complex, disparate and often ambiguous sensory data. The neurocomputations underpinning this vital ability remain poorly understood. Controversy remains as to whether multimodal sensory information must be combined into a unified representation, consistent with Tolman's “cognitive map”, or whether differential activation of independent navigation modules suffices to explain observed navigation behaviour. Here we demonstrate that key neural correlates of spatial navigation in darkness cannot be explained if the path integration system acted independently of boundary (landmark) information. In vivo recordings demonstrate that the rodent head direction (HD) system becomes unstable within three minutes without vision. In contrast, rodents maintain stable place fields and grid fields for over half an hour without vision. Using a simple HD error model, we show analytically that idiothetic path integration (iPI) alone cannot be used to maintain any stable place representation beyond two to three minutes. We then use a measure of place stability based on information theoretic principles to prove that featureless boundaries alone cannot be used to improve localization above chance level. Having shown that neither iPI nor boundaries alone are sufficient, we then address the question of whether their combination is sufficient and – we conjecture – necessary to maintain place stability for prolonged periods without vision. We addressed this question in simulations and robot experiments using a navigation model comprising a particle filter and a boundary map. The model replicates published experimental results on place field and grid field stability without vision, and makes testable predictions including place field splitting and grid field rescaling if the true arena geometry differs from the acquired boundary map. We discuss our findings in light of current theories of animal navigation and neuronal computation, and elaborate on their implications and significance for the design, analysis and interpretation of experiments. PMID:22916006
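The analytical point that idiothetic path integration alone cannot hold a stable place estimate for long can be illustrated with a crude heading-random-walk simulation. The drift rate, speed, and error model below are arbitrary assumptions for illustration, not the paper's fitted HD error model.

import numpy as np

def position_error_growth(minutes=30, dt=0.1, speed=0.2, heading_rw_deg_s=1.5, trials=200):
    """Monte-Carlo growth of position error when only path integration is used.

    A heading random walk (std heading_rw_deg_s per sqrt(s)) corrupts dead
    reckoning of an agent moving at constant speed; returns mean final error (m).
    """
    steps = int(minutes * 60 / dt)
    rng = np.random.default_rng(0)
    errs = []
    for _ in range(trials):
        heading_err = np.cumsum(rng.normal(0, np.radians(heading_rw_deg_s) * np.sqrt(dt), steps))
        # position error accumulates roughly as speed * dt * (sin e, 1 - cos e) per step
        err_xy = np.cumsum(speed * dt * np.stack([np.sin(heading_err),
                                                  1 - np.cos(heading_err)]), axis=1)
        errs.append(np.hypot(*err_xy[:, -1]))
    return float(np.mean(errs))

print("mean position error after 3 min :", position_error_growth(minutes=3), "m")
print("mean position error after 30 min:", position_error_growth(minutes=30), "m")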
Research on an autonomous vision-guided helicopter
NASA Technical Reports Server (NTRS)
Amidi, Omead; Mesaki, Yuji; Kanade, Takeo
1994-01-01
Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.
A Dynamic Precision Evaluation Method for the Star Sensor in the Stellar-Inertial Navigation System.
Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang
2017-06-28
Integrating the advantages of an INS (inertial navigation system) and the star sensor, the stellar-inertial navigation system has been used for a wide variety of applications. The star sensor is a high-precision attitude measurement instrument; therefore, determining how to validate its accuracy is critical in guaranteeing its practical precision. The dynamic precision evaluation of the star sensor is more difficult than a static precision evaluation because of dynamic reference values and other impacts. This paper proposes a dynamic precision verification method for the star sensor with the aid of an inertial navigation device to realize real-time attitude accuracy measurement. Based on the gold-standard reference generated by the star simulator, the altitude and azimuth angle errors of the star sensor are calculated as evaluation criteria. With the goal of diminishing the impacts of factors such as the sensors' drift and the devices themselves, the innovative aspect of this method is to employ the static accuracy for comparison. If the dynamic results are as good as the static results, which have accuracy comparable to the single star sensor's precision, the practical precision of the star sensor is sufficiently high to meet the requirements of the system specification. The experiments demonstrate the feasibility and effectiveness of the proposed method.
Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles.
Atman, Jamal; Popp, Manuel; Ruppelt, Jan; Trommer, Gert F
2016-09-16
Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder so that the motion of the MAV is estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated in the MAV's navigation system. However, first, the knowledge about the pose between both sensors is obtained by proposing an improved calibration method. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results.
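The motion estimate above comes from 3D-to-2D correspondences solved as a P3P problem. As a stand-in illustration, the sketch below feeds a few assumed correspondences to OpenCV's generic solvePnP; the paper's own P3P solver, calibration, and covariance estimation are not reproduced.

import numpy as np
import cv2

# Assumed pinhole camera (focal 600 px, principal point 320, 240), no distortion.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

# Six assumed 3-D points (e.g. rangefinder returns expressed in a reference frame)
# and their pixel observations in the current image, generated synthetically here.
pts3d = np.array([[0.0, 0.0, 4.0], [1.0, 0.0, 5.0], [0.0, 1.0, 6.0],
                  [-1.0, 0.5, 5.5], [0.5, -1.0, 4.5], [1.0, 1.0, 6.5]])

true_t = np.array([0.2, -0.1, 0.3])          # assumed camera translation (identity rotation)
cam_pts = pts3d - true_t
proj = (K @ cam_pts.T).T
pts2d = proj[:, :2] / proj[:, 2:]

ok, rvec, tvec = cv2.solvePnP(pts3d.astype(np.float32),
                              pts2d.astype(np.float32), K, None)
print("recovered rotation vector:", rvec.ravel())
print("recovered translation   :", tvec.ravel())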
Flash LIDAR Systems for Planetary Exploration
NASA Astrophysics Data System (ADS)
Dissly, Richard; Weinberg, J.; Weimer, C.; Craig, R.; Earhart, P.; Miller, K.
2009-01-01
Ball Aerospace offers a mature, highly capable 3D flash-imaging LIDAR system for planetary exploration. Multi mission applications include orbital, standoff and surface terrain mapping, long distance and rapid close-in ranging, descent and surface navigation and rendezvous and docking. Our flash LIDAR is an optical, time-of-flight, topographic imaging system, leveraging innovations in focal plane arrays, readout integrated circuit real time processing, and compact and efficient pulsed laser sources. Due to its modular design, it can be easily tailored to satisfy a wide range of mission requirements. Flash LIDAR offers several distinct advantages over traditional scanning systems. The entire scene within the sensor's field of view is imaged with a single laser flash. This directly produces an image with each pixel already correlated in time, making the sensor resistant to the relative motion of a target subject. Additionally, images may be produced at rates much faster than are possible with a scanning system. And because the system captures a new complete image with each flash, optical glint and clutter are easily filtered and discarded. This allows for imaging under any lighting condition and makes the system virtually insensitive to stray light. Finally, because there are no moving parts, our flash LIDAR system is highly reliable and has a long life expectancy. As an industry leader in laser active sensor system development, Ball Aerospace has been working for more than four years to mature flash LIDAR systems for space applications, and is now under contract to provide the Vision Navigation System for NASA's Orion spacecraft. Our system uses heritage optics and electronics from our star tracker products, and space qualified lasers similar to those used in our CALIPSO LIDAR, which has been in continuous operation since 2006, providing more than 1.3 billion laser pulses to date.
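Flash LIDAR is a time-of-flight imager, so each pixel's range follows directly from its return time. A one-line relation with illustrative numbers:

C = 299_792_458.0  # speed of light, m/s

def tof_range(round_trip_time_s: float) -> float:
    """Range from a time-of-flight return: R = c * t / 2."""
    return C * round_trip_time_s / 2.0

# A 6.67 microsecond round trip corresponds to roughly 1 km of range.
print(tof_range(6.67e-6))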
Vision-Aided Inertial Navigation
NASA Technical Reports Server (NTRS)
Roumeliotis, Stergios I. (Inventor); Mourikis, Anastasios I. (Inventor)
2017-01-01
This document discloses, among other things, a system and method for implementing an algorithm to determine pose, velocity, acceleration or other navigation information using feature tracking data. The algorithm has computational complexity that is linear with the number of features tracked.
Embedded Relative Navigation Sensor Fusion Algorithms for Autonomous Rendezvous and Docking Missions
NASA Technical Reports Server (NTRS)
DeKock, Brandon K.; Betts, Kevin M.; McDuffie, James H.; Dreas, Christine B.
2008-01-01
bd Systems (a subsidiary of SAIC) has developed a suite of embedded relative navigation sensor fusion algorithms to enable NASA autonomous rendezvous and docking (AR&D) missions. Translational and rotational Extended Kalman Filters (EKFs) were developed for integrating measurements based on the vehicles' orbital mechanics and high-fidelity sensor error models and provide a solution with increased accuracy and robustness relative to any single relative navigation sensor. The filters were tested through stand-alone covariance analysis, closed-loop testing with a high-fidelity multi-body orbital simulation, and hardware-in-the-loop (HWIL) testing in the Marshall Space Flight Center (MSFC) Flight Robotics Laboratory (FRL).
Novel compact panomorph lens based vision system for monitoring around a vehicle
NASA Astrophysics Data System (ADS)
Thibault, Simon
2008-04-01
Automotive applications are one of the largest vision-sensor market segments and one of the fastest growing ones. The trend to use increasingly more sensors in cars is driven both by legislation and consumer demands for higher safety and better driving experiences. Awareness of what directly surrounds a vehicle affects safe driving and manoeuvring of a vehicle. Consequently, panoramic 360° field-of-view imaging can contribute more to the perception of the world around the driver than any other sensor. However, to obtain a complete vision around the car, several sensor systems are necessary. To solve this issue, a customized imaging system based on a panomorph lens will provide the maximum information for the driver with a reduced number of sensors. A panomorph lens is a hemispheric wide-angle anamorphic lens with enhanced resolution in predefined zones of interest. Because panomorph lenses are optimized to a custom angle-to-pixel relationship, vision systems provide ideal image coverage that reduces and optimizes the processing. We present various scenarios which may benefit from the use of a custom panoramic sensor. We also discuss the technical requirements of such a vision system. Finally, we demonstrate how the panomorph-based visual sensor is probably one of the most promising ways to fuse many sensors in one. For example, a single panoramic sensor on the front of a vehicle could provide all necessary information for assistance in crash avoidance, lane tracking, early warning, park aids, road sign detection, and various video monitoring views.
Vision-based sensing for autonomous in-flight refueling
NASA Astrophysics Data System (ADS)
Scott, D.; Toal, M.; Dale, J.
2007-04-01
A significant capability of unmanned airborne vehicles (UAVs) is that they can operate tirelessly and at maximum efficiency in comparison to their human pilot counterparts. However, a major limiting factor preventing ultra-long endurance missions is that they require landing to refuel. Development effort has been directed to allow UAVs to automatically refuel in the air using current refueling systems and procedures. The 'hose & drogue' refueling system was targeted as it is considered the more difficult case. Recent flight trials resulted in the first-ever fully autonomous airborne refueling operation. Development has gone into precision GPS-based navigation sensors to maneuver the aircraft into the station-keeping position and onwards to dock with the refueling drogue. However, in the terminal phases of docking, the GPS is operating at its accuracy limit, and disturbance factors on the flexible hose and basket are not predictable using an open-loop model. Hence there is significant uncertainty in the position of the refueling drogue relative to the aircraft, and the GPS-only solution is insufficient in practical operation to achieve a successful and safe docking. A solution is to augment the GPS-based system with a vision-based sensor component through the terminal phase to visually acquire and track the drogue in 3D space. The higher bandwidth and resolution of camera sensors give significantly better estimates of the state of the drogue position. Disturbances in the actual drogue position caused by subtle aircraft maneuvers and wind gusting can be visually tracked and compensated for, providing an accurate estimate. This paper discusses the issues involved in visually detecting a refueling drogue, selecting an optimum camera viewpoint, and acquiring and tracking the drogue throughout a widely varying operating range and conditions.
Navy Applications of High-Frequency Acoustics
NASA Astrophysics Data System (ADS)
Cox, Henry
2004-11-01
Although the emphasis in underwater acoustics for the last few decades has been in low-frequency acoustics, motivated by long range detection of submarines, there has been a continuing use of high-frequency acoustics in traditional specialized applications such as bottom mapping, mine hunting, torpedo homing and under ice navigation. The attractive characteristics of high-frequency sonar, high spatial resolution, wide bandwidth, small size and relatively low cost must be balanced against the severe range limitation imposed by attenuation that increases approximately as frequency-squared. Many commercial applications of acoustics are ideally served by high-frequency active systems. The small size and low cost, coupled with the revolution in small powerful signal processing hardware has led to the consideration of more sophisticated systems. Driven by commercial applications, there are currently available several commercial-off-the-shelf products including acoustic modems for underwater communication, multi-beam fathometers, side scan sonars for bottom mapping, and even synthetic aperture side scan sonar. Much of the work in high frequency sonar today continues to be focused on specialized applications in which the application is emphasized over the underlying acoustics. Today's vision for the Navy of the future involves Autonomous Undersea Vehicles (AUVs) and off-board ASW sensors. High-frequency acoustics will play a central role in the fulfillment of this vision as a means of communication and as a sensor. The acoustic communication problems for moving AUVs and deep sensors are discussed. Explicit relationships are derived between the communication theoretic description of channel parameters in terms of time and Doppler spreads and ocean acoustic parameters, group velocities, phase velocities and horizontal wavenumbers. Finally the application of synthetic aperture sonar to the mine hunting problems is described.
Computing Optic Flow with ArduEye Vision Sensor
2013-01-01
There is a significant need for small, light, less power-hungry sensors and sensory data processing algorithms in order to control robotic platforms. This report describes computing optic flow with the ArduEye vision sensor, a Stonyman vision chip on a breakout board connected to an Arduino Mega, as a processing algorithm that can be applied to the flight control of robotic platforms.
A remote assessment system with a vision robot and wearable sensors.
Zhang, Tong; Wang, Jue; Ren, Yumiao; Li, Jianjun
2004-01-01
This paper describes a remote rehabilitation assessment system under ongoing research that has a six-degree-of-freedom binocular vision robot to capture visual information and a group of wearable sensors to acquire biomechanical signals. A server computer is fixed on the robot to provide services to the robot's controller and all the sensors. The robot is connected to the Internet by a wireless channel, as are the sensors to the robot. Rehabilitation professionals can semi-automatically conduct an assessment program via the Internet. The preliminary results show that the smart device, including the robot and the sensors, can improve the quality of remote assessment and reduce the complexity of operation at a distance.
Distant touch hydrodynamic imaging with an artificial lateral line.
Yang, Yingchen; Chen, Jack; Engel, Jonathan; Pandya, Saunvit; Chen, Nannan; Tucker, Craig; Coombs, Sheryl; Jones, Douglas L; Liu, Chang
2006-12-12
Nearly all underwater vehicles and surface ships today use sonar and vision for imaging and navigation. However, sonar and vision systems face various limitations, e.g., sonar blind zones, dark or murky environments, etc. Evolved over millions of years, fish use the lateral line, a distributed linear array of flow sensing organs, for underwater hydrodynamic imaging and information extraction. We demonstrate here a proof-of-concept artificial lateral line system. It enables a distant touch hydrodynamic imaging capability to critically augment sonar and vision systems. We show that the artificial lateral line can successfully perform dipole source localization and hydrodynamic wake detection. The development of the artificial lateral line is aimed at fundamentally enhancing human ability to detect, navigate, and survive in the underwater environment.
VLC-based indoor location awareness using LED light and image sensors
NASA Astrophysics Data System (ADS)
Lee, Seok-Ju; Yoo, Jong-Ho; Jung, Sung-Yoon
2012-11-01
Recently, indoor LED lighting has been considered for constructing green infrastructure with energy savings, while additionally providing LED-IT convergence services such as visible light communication (VLC) based location awareness and navigation services. For example, in a large, complex shopping mall, location awareness for navigating to a destination is a very important issue. However, conventional GPS navigation does not work indoors. Alternative location services based on WLAN suffer from low positioning accuracy. For example, it is difficult to estimate the height exactly. If the height error is greater than the height between floors, it may cause a serious problem. Therefore, conventional navigation is inappropriate for indoor use. A possible alternative solution for indoor navigation is a VLC-based location awareness scheme. Because indoor LED infrastructure will certainly be installed to provide lighting, indoor LED lighting combined with VLC technology has the potential to provide relatively high position estimation accuracy. In this paper, we provide a new VLC-based positioning system using visible LED lights and image sensors. Our system uses the location of the image sensor lens and the location of the reception plane. By using more than two image sensors, we can determine the transmitter position with less than 1 m of position error. Through simulation, we verify the validity of the proposed VLC-based positioning system using visible LED light and image sensors.
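With two image sensors, each LED observation defines a ray from the lens centre through the image point, and the transmitter position follows from intersecting the rays. The sketch below does a least-squares ray intersection with assumed geometry, not the paper's setup.

import numpy as np

def triangulate_rays(p1, d1, p2, d2):
    """Least-squares intersection (midpoint) of two 3-D rays p + t*d."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for the ray parameters t1, t2 that minimise the distance between the rays.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Two image-sensor lens positions (metres, assumed) and a true LED position used
# to generate the observation rays for this self-contained example.
p1, p2 = np.array([0.0, 0.0, 1.0]), np.array([0.5, 0.0, 1.0])
led = np.array([2.0, 1.0, 3.0])
est = triangulate_rays(p1, led - p1, p2, led - p2)
print("estimated LED position:", est)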
Hu, Qijun; He, Songsheng; Wang, Shilong; Liu, Yugang; Zhang, Zutao; He, Leping; Wang, Fubin; Cai, Qijie; Shi, Rendan; Yang, Yuan
2017-06-06
Bus Rapid Transit (BRT) has become an increasing source of concern for public transportation of modern cities. Traditional contact sensing techniques during the process of health monitoring of BRT viaducts cannot overcome the deficiency that the normal free-flow of traffic would be blocked. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoints matching algorithm based on consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed together with oriented brief (ORB) keypoints detection algorithm for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structure vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable.
Hu, Qijun; He, Songsheng; Wang, Shilong; Liu, Yugang; Zhang, Zutao; He, Leping; Wang, Fubin; Cai, Qijie; Shi, Rendan; Yang, Yuan
2017-01-01
Bus Rapid Transit (BRT) has become an increasing source of concern for public transportation of modern cities. Traditional contact sensing techniques during the process of health monitoring of BRT viaducts cannot overcome the deficiency that the normal free-flow of traffic would be blocked. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoints matching algorithm based on consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed together with oriented brief (ORB) keypoints detection algorithm for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structure vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable. PMID:28587275
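A stripped-down version of the displacement-measurement idea above: detect ORB keypoints in a reference frame, rematch them in a later frame, and convert the median pixel shift to millimetres with a scaling factor. The CMT-style consensus tracking and the paper's scaling-factor calibration are omitted; the synthetic test frames and the scaling value are assumptions.

import numpy as np
import cv2

def median_displacement_mm(ref_img, cur_img, mm_per_px):
    """Median vertical displacement (mm) of ORB keypoints between two frames.

    mm_per_px is the scaling factor, e.g. a known target size in mm divided by
    its size in pixels; consensus filtering of matches is omitted for brevity.
    """
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(ref_img, None)
    kp2, des2 = orb.detectAndCompute(cur_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    dy = [kp2[m.trainIdx].pt[1] - kp1[m.queryIdx].pt[1] for m in matches]
    return float(np.median(dy)) * mm_per_px

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = (rng.random((240, 320)) * 255).astype(np.uint8)   # synthetic textured frame
    cur = np.roll(ref, 3, axis=0)                           # simulate a 3-pixel vertical shift
    print(median_displacement_mm(ref, cur, mm_per_px=0.8))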
Navigation system for a mobile robot with a visual sensor using a fish-eye lens
NASA Astrophysics Data System (ADS)
Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu
1998-02-01
Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.
Audible vision for the blind and visually impaired in indoor open spaces.
Yu, Xunyi; Ganz, Aura
2012-01-01
In this paper, we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her relative position to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in a real-life, ever-changing environment crowded with people.
Object positioning in storages of robotized workcells using LabVIEW Vision
NASA Astrophysics Data System (ADS)
Hryniewicz, P.; Banaś, W.; Sękala, A.; Gwiazda, A.; Foit, K.; Kost, G.
2015-11-01
During the manufacturing process, each performed task is previously developed and adapted to the conditions and the possibilities of the manufacturing plant. The production process is supervised by a team of specialists because any downtime causes great loss of time and hence financial loss. Sensors used in industry for tracking and supervising the various stages of a production process make it much easier to keep it continuous. One group of sensors used in industrial applications is non-contact sensors. This group includes: light barriers, optical sensors, rangefinders, vision systems, and ultrasonic sensors. Owing to the rapid development of electronics, vision systems have become widespread as the most flexible type of non-contact sensor. These systems consist of cameras, devices for data acquisition, devices for data analysis and specialized software. Vision systems work well as sensors that control the production process itself as well as sensors that control the product quality level. The LabVIEW program, together with LabVIEW Vision and LabVIEW Builder, represents the environment that enables programming of the informatics system intended for process and product quality control. The paper presents an application elaborated for positioning elements in a robotized workcell. Based on the geometric parameters of the manipulated object, or on a previously developed graphical pattern, it is possible to determine the position of particular manipulated elements. This application can work in automatic mode and in real time, cooperating with the robot control system. It allows the workcell to function more autonomously.
Automated site characterization for robotic sample acquisition systems
NASA Astrophysics Data System (ADS)
Scholl, Marija S.; Eberlein, Susan J.
1993-04-01
A mobile, semiautonomous vehicle with multiple sensors and on-board intelligence is proposed for performing preliminary scientific investigations on extraterrestrial bodies prior to human exploration. Two technologies, a hybrid optical-digital computer system based on optical correlator technology and an image and instrument data analysis system, provide complementary capabilities that might be part of an instrument package for an intelligent robotic vehicle. The hybrid digital-optical vision system could perform real-time image classification tasks using an optical correlator with programmable matched filters under control of a digital microcomputer. The data analysis system would analyze visible and multiband imagery to extract mineral composition and textural information for geologic characterization. Together these technologies would support the site characterization needs of a robotic vehicle for both navigational and scientific purposes.
Ground vehicle control at NIST: From teleoperation to autonomy
NASA Technical Reports Server (NTRS)
Murphy, Karl N.; Juberts, Maris; Legowik, Steven A.; Nashman, Marilyn; Schneiderman, Henry; Scott, Harry A.; Szabo, Sandor
1994-01-01
NIST is applying its Real-time Control System (RCS) methodology for control of ground vehicles both for the U.S. Army Research Lab, as part of the DOD's Unmanned Ground Vehicles program, and for the Department of Transportation's Intelligent Vehicle/Highway Systems (IVHS) program. The actuated vehicle, a military HMMWV, has motors for steering, brake, throttle, etc. and sensors for the dashboard gauges. For military operations, the vehicle has two modes of operation: a teleoperation mode--where an operator remotely controls the vehicle over an RF communications network; and a semi-autonomous mode called retro-traverse--where the control system uses an inertial navigation system to steer the vehicle along a prerecorded path. For the IVHS work, intelligent vision processing elements replace the human teleoperator to achieve autonomous, visually guided road following.
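Retro-traverse steers the vehicle along a prerecorded path from inertial position estimates. A common way to turn a recorded waypoint list into a steering command is pure pursuit, sketched below with assumed lookahead and wheelbase values; this is an illustration, not the NIST RCS controller.

import math

def pure_pursuit_steer(pose, path, lookahead=5.0, wheelbase=3.3):
    """Steering angle (rad) that drives the vehicle toward a recorded path.

    pose = (x, y, heading); path is a list of (x, y) waypoints recorded on the
    outbound traverse; lookahead and wheelbase are assumed vehicle parameters.
    """
    x, y, heading = pose
    # pick the first waypoint at least one lookahead distance away
    target = next(((px, py) for px, py in path
                   if math.hypot(px - x, py - y) >= lookahead), path[-1])
    # bearing to the target expressed in the vehicle frame
    dx, dy = target[0] - x, target[1] - y
    alpha = math.atan2(dy, dx) - heading
    # pure-pursuit curvature converted to a front-wheel steering angle
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

recorded_path = [(i * 1.0, 0.1 * i * i) for i in range(50)]   # hypothetical prerecorded path
print(pure_pursuit_steer((0.0, 0.0, 0.0), recorded_path))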
NASA Astrophysics Data System (ADS)
Qiu, Zhi-cheng; Wang, Xian-feng; Zhang, Xian-Min; Liu, Jin-guo
2018-07-01
A novel non-contact vibration measurement method using binocular vision sensors is proposed for a piezoelectric flexible hinged plate. Decoupling methods for measuring and controlling the bending and torsional low-frequency vibration are investigated, using binocular vision sensors and piezoelectric actuators. A radial basis function neural network controller (RBFNNC) is designed to suppress both larger- and smaller-amplitude vibrations. To verify the non-contact measurement method and the designed controller, an experimental setup of the flexible hinged plate with binocular vision is constructed. Experiments on vibration measurement and control are conducted using the binocular vision sensors and the designed RBFNNC, compared with the classical proportional and derivative (PD) control algorithm. The experimental measurement results demonstrate that the binocular vision sensors can detect the low-frequency bending and torsional vibration effectively. Furthermore, the designed RBFNNC can suppress the bending vibration more quickly than the designed PD controller owing to the adjustment of the RBF control, especially for the small-amplitude residual vibrations.
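The control law above maps measured vibration to an actuator command through a radial basis function network. The sketch shows the standard Gaussian-RBF output computation with arbitrary centres and weights; the paper's network structure and adaptation law are not reproduced.

import numpy as np

def rbf_controller_output(x, centres, widths, weights):
    """Output of a Gaussian radial-basis-function network controller.

    x:       input feature vector (e.g. vibration displacement and velocity)
    centres: (M, len(x)) RBF centres, widths: (M,), weights: (M,) output layer.
    """
    r2 = np.sum((centres - x) ** 2, axis=1)
    phi = np.exp(-r2 / (2.0 * widths ** 2))
    return float(weights @ phi)

# Toy example with assumed centres/weights; a real controller would adapt these online.
centres = np.array([[-1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])
widths = np.array([0.5, 0.5, 0.5])
weights = np.array([0.8, 0.0, -0.8])
print(rbf_controller_output(np.array([0.3, -0.1]), centres, widths, weights))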
Enhanced computer vision with Microsoft Kinect sensor: a review.
Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie
2013-10-01
With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.
Stereo-vision-based terrain mapping for off-road autonomous navigation
NASA Astrophysics Data System (ADS)
Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.
2009-05-01
Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as nogo regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation
NASA Technical Reports Server (NTRS)
Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.
2009-01-01
Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
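A minimal sketch of the map-building step: each stereo range point votes into a grid cell that keeps the maximum elevation and a crude cost derived from the elevation spread. The JPL pipeline's obstacle detectors, classification layers, and temporal filtering are far richer than this illustration.

import numpy as np

def build_terrain_map(points, cell_size=0.4, slope_cost_gain=2.0):
    """Accumulate stereo range points (N, 3) into a simple elevation/cost grid.

    Returns dicts keyed by (ix, iy) grid indices: elevation (max z per cell)
    and a crude traversability cost derived from the elevation spread per cell.
    """
    elev, samples, cost = {}, {}, {}
    for x, y, z in points:
        key = (int(np.floor(x / cell_size)), int(np.floor(y / cell_size)))
        samples.setdefault(key, []).append(z)
        elev[key] = max(elev.get(key, -np.inf), z)
    for key, zs in samples.items():
        cost[key] = slope_cost_gain * (max(zs) - min(zs))   # roughness proxy
    return elev, cost

pts = np.array([[1.0, 2.0, 0.1], [1.1, 2.1, 0.4], [3.0, 0.5, 0.0]])
elevation, traversal_cost = build_terrain_map(pts)
print(elevation, traversal_cost)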
Libration Point Navigation Concepts Supporting the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Folta, David C.; Moreau, Michael C.; Quinn, David A.
2004-01-01
This work examines the autonomous navigation accuracy achievable for a lunar exploration trajectory from a translunar libration point lunar navigation relay satellite, augmented by signals from the Global Positioning System (GPS). We also provide a brief analysis comparing the libration point relay to lunar orbit relay architectures, and discuss some issues of GPS usage for cis-lunar trajectories.
Bio-Inspired Polarized Skylight-Based Navigation Sensors: A Review
Karman, Salmah B.; Diah, S. Zaleha M.; Gebeshuber, Ille C.
2012-01-01
Animal senses cover a broad range of signal types and signal bandwidths and have inspired various sensors and bioinstrumentation devices for biological and medical applications. Insects, such as desert ants and honeybees, for example, utilize polarized skylight pattern-based information in their navigation activities. They reliably return to their nests and hives from places many kilometers away. The insect navigation system involves the dorsal rim area in their compound eyes and the corresponding polarization sensitive neurons in the brain. The dorsal rim area is equipped with photoreceptors, which have orthogonally arranged small hair-like structures termed microvilli. These are the specialized sensors for the detection of polarized skylight patterns (e-vector orientation). Various research groups have been working on the development of novel navigation systems inspired by polarized skylight-based navigation in animals. Their major contributions are critically reviewed. One focus of current research activities is on imitating the integration path mechanism in desert ants. The potential for simple, high performance miniaturized bioinstrumentation that can assist people in navigation will be explored. PMID:23202158
Bio-inspired polarized skylight-based navigation sensors: a review.
Karman, Salmah B; Diah, S Zaleha M; Gebeshuber, Ille C
2012-10-24
Animal senses cover a broad range of signal types and signal bandwidths and have inspired various sensors and bioinstrumentation devices for biological and medical applications. Insects, such as desert ants and honeybees, for example, utilize polarized skylight pattern-based information in their navigation activities. They reliably return to their nests and hives from places many kilometers away. The insect navigation system involves the dorsal rim area in their compound eyes and the corresponding polarization sensitive neurons in the brain. The dorsal rim area is equipped with photoreceptors, which have orthogonally arranged small hair-like structures termed microvilli. These are the specialized sensors for the detection of polarized skylight patterns (e-vector orientation). Various research groups have been working on the development of novel navigation systems inspired by polarized skylight-based navigation in animals. Their major contributions are critically reviewed. One focus of current research activities is on imitating the integration path mechanism in desert ants. The potential for simple, high performance miniaturized bioinstrumentation that can assist people in navigation will be explored.
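The e-vector orientation encoded by the dorsal rim photoreceptors can be computed from intensities measured behind polarizers at 0, 45, 90 and 135 degrees via the linear Stokes parameters. The sketch uses that textbook relation with made-up readings; mapping the angle to a heading additionally needs the solar ephemeris.

import math

def e_vector_orientation(i0, i45, i90, i135):
    """Angle of polarization (degrees) from four polarizer-filtered intensities."""
    q = i0 - i90                    # Stokes Q
    u = i45 - i135                  # Stokes U
    aop = 0.5 * math.atan2(u, q)    # angle of polarization, radians
    dolp = math.sqrt(q * q + u * u) / max(i0 + i90, 1e-9)  # degree of linear polarization
    return math.degrees(aop), dolp

# Made-up photodiode readings; a skylight compass would combine this angle
# with the sun's position to produce a heading estimate.
print(e_vector_orientation(0.9, 0.6, 0.3, 0.6))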
Navigation of military and space unmanned ground vehicles in unstructured terrains
NASA Technical Reports Server (NTRS)
Lescoe, Paul; Lavery, David; Bedard, Roger
1991-01-01
Development of unmanned vehicles for local navigation in terrains unstructured by humans is reviewed. Modes of navigation include teleoperation or remote control, computer assisted remote driving (CARD), and semiautonomous navigation (SAN). A first implementation of a CARD system was successfully tested using the Robotic Technology Test Vehicle developed by Jet Propulsion Laboratory. Stereo pictures were transmitted to a remotely located human operator, who performed the sensing, perception, and planning functions of navigation. A computer provided range and angle measurements and the path plan was transmitted to the vehicle which autonomously executed the path. This implementation is to be enhanced by providing passive stereo vision and a reflex control system for autonomously stopping the vehicle if blocked by an obstacle. SAN achievements include implementation of a navigation testbed on a six wheel, three-body articulated rover vehicle, development of SAN algorithms and code, integration of SAN software onto the vehicle, and a successful feasibility demonstration that represents a step forward towards the technology required for long-range exploration of the lunar or Martian surface. The vehicle includes a passive stereo vision system with real-time area-based stereo image correlation, a terrain matcher, a path planner, and a path execution planner.
Mocanu, Bogdan; Tapu, Ruxandra; Zaharia, Titus
2016-01-01
In the most recent report published by the World Health Organization concerning people with visual disabilities, it is highlighted that by the year 2020, worldwide, the number of completely blind people will reach 75 million, while the number of visually impaired (VI) people will rise to 250 million. Within this context, the development of dedicated electronic travel aid (ETA) systems, able to increase the safe displacement of VI people in indoor/outdoor spaces while providing additional cognition of the environment, becomes of utmost importance. This paper introduces a novel wearable assistive device designed to facilitate the autonomous navigation of blind and VI people in highly dynamic urban scenes. The system exploits two independent sources of information: ultrasonic sensors and the video camera embedded in a regular smartphone. The underlying methodology exploits computer vision and machine learning techniques and makes it possible to identify accurately both static and highly dynamic objects present in a scene, regardless of their location, size or shape. In addition, the proposed system is able to acquire information about the environment, semantically interpret it and alert users about possible dangerous situations through acoustic feedback. To determine the performance of the proposed methodology, we have performed an extensive objective and subjective experimental evaluation with the help of 21 VI subjects from two blind associations. The users pointed out that our prototype is highly helpful in increasing the mobility, while being friendly and easy to learn. PMID:27801834
Beyond the cockpit: The visual world as a flight instrument
NASA Technical Reports Server (NTRS)
Johnson, W. W.; Kaiser, M. K.; Foyle, D. C.
1992-01-01
The use of cockpit instruments to guide flight control is not always an option (e.g., low level rotorcraft flight). Under such circumstances the pilot must use out-the-window information for control and navigation. Thus it is important to determine the basis of visually guided flight for several reasons: (1) to guide the design and construction of the visual displays used in training simulators; (2) to allow modeling of visibility restrictions brought about by weather, cockpit constraints, or distortions introduced by sensor systems; and (3) to aid in the development of displays that augment the cockpit window scene and are compatible with the pilot's visual extraction of information from the visual scene. The authors are actively pursuing these questions. We have on-going studies using both low-cost, lower fidelity flight simulators, and state-of-the-art helicopter simulation research facilities. Research results will be presented on: (1) the important visual scene information used in altitude and speed control; (2) the utility of monocular, stereo, and hyperstereo cues for the control of flight; (3) perceptual effects due to the differences between normal unaided daylight vision, and that made available by various night vision devices (e.g., light intensifying goggles and infra-red sensor displays); and (4) the utility of advanced contact displays in which instrument information is made part of the visual scene, as on a 'scene linked' head-up display (e.g., displaying altimeter information on a virtual billboard located on the ground).
Mocanu, Bogdan; Tapu, Ruxandra; Zaharia, Titus
2016-10-28
In the most recent report published by the World Health Organization concerning people with visual disabilities, it is highlighted that by the year 2020, worldwide, the number of completely blind people will reach 75 million, while the number of visually impaired (VI) people will rise to 250 million. Within this context, the development of dedicated electronic travel aid (ETA) systems, able to increase the safe displacement of VI people in indoor/outdoor spaces while providing additional cognition of the environment, becomes of utmost importance. This paper introduces a novel wearable assistive device designed to facilitate the autonomous navigation of blind and VI people in highly dynamic urban scenes. The system exploits two independent sources of information: ultrasonic sensors and the video camera embedded in a regular smartphone. The underlying methodology exploits computer vision and machine learning techniques and makes it possible to identify accurately both static and highly dynamic objects present in a scene, regardless of their location, size or shape. In addition, the proposed system is able to acquire information about the environment, semantically interpret it and alert users about possible dangerous situations through acoustic feedback. To determine the performance of the proposed methodology, we have performed an extensive objective and subjective experimental evaluation with the help of 21 VI subjects from two blind associations. The users pointed out that our prototype is highly helpful in increasing the mobility, while being friendly and easy to learn.
Distributed Ship Navigation Control System Based on Dual Network
NASA Astrophysics Data System (ADS)
Yao, Ying; Lv, Wu
2017-10-01
The navigation system is very important for a ship's normal operation. There are many devices and sensors in the navigation system to guarantee the ship's regular work. In the past, these devices and sensors were usually connected via a CAN bus for high performance and reliability. However, as related devices and sensors have developed, the navigation system also needs high information throughput and remote data sharing. To meet these new requirements, we propose a communication method based on a dual network that combines the CAN bus with industrial Ethernet. We also introduce multiple distributed control terminals with a cooperative strategy, based on synchronizing status by multicasting UDP messages that contain operation timestamps, to make the system more efficient and reliable.
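As a rough illustration of the cooperative synchronization strategy described above, the following sketch multicasts a UDP status message stamped with an operation timestamp and applies a last-writer-wins rule on reception. The group address, port, and message fields are illustrative assumptions, not the protocol defined in the paper.

```python
import json
import socket
import time

# Illustrative values; the actual group/port/fields are not specified by the paper.
MCAST_GROUP = "239.1.2.3"
MCAST_PORT = 5007

def broadcast_status(terminal_id, status):
    """Multicast a status update stamped with the local operation time."""
    msg = json.dumps({
        "terminal": terminal_id,
        "status": status,
        "timestamp": time.time(),   # operation timestamp used to order updates
    }).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(msg, (MCAST_GROUP, MCAST_PORT))
    sock.close()

def apply_update(state, update):
    """Keep only the newest update per terminal (last-writer-wins by timestamp)."""
    last = state.get(update["terminal"])
    if last is None or update["timestamp"] > last["timestamp"]:
        state[update["terminal"]] = update
    return state
```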
Multi-sensor Navigation System Design
DOT National Transportation Integrated Search
1971-03-01
This report treats the design of navigation systems that collect data from two or more on-board measurement subsystems and process these data in an on-board computer. Such systems are called Multi-sensor Navigation Systems. The design begins with t...
Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles
Atman, Jamal; Popp, Manuel; Ruppelt, Jan; Trommer, Gert F.
2016-01-01
Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in the absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder, so that the motion of the MAV can be estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated in the MAV’s navigation system. First, however, the relative pose between the two sensors is obtained with an improved calibration method proposed in the article. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results. PMID:27649203
Flight Test Result for the Ground-Based Radio Navigation System Sensor with an Unmanned Air Vehicle
Jang, Jaegyu; Ahn, Woo-Guen; Seo, Seungwoo; Lee, Jang Yong; Park, Jun-Pyo
2015-01-01
The Ground-based Radio Navigation System (GRNS) is an alternative/backup navigation system based on time synchronized pseudolites. It has been studied for some years due to the potential vulnerability issue of satellite navigation systems (e.g., GPS or Galileo). In the framework of our study, a periodic pulsed sequence was used instead of the randomized pulse sequence recommended as the RTCM (radio technical commission for maritime services) SC (special committee)-104 pseudolite signal, as a randomized pulse sequence with a long dwell time is not suitable for applications requiring high dynamics. This paper introduces a mathematical model of the post-correlation output in a navigation sensor, showing that the aliasing caused by the additional frequency term of a periodic pulsed signal leads to a false lock (i.e., Doppler frequency bias) during the signal acquisition process or in the carrier tracking loop of the navigation sensor. We suggest algorithms to resolve the frequency false lock issue in this paper, relying on the use of a multi-correlator. A flight test with an unmanned helicopter was conducted to verify the implemented navigation sensor. The results of this analysis show that there were no false locks during the flight test and that outliers stem from bad dilution of precision (DOP) or fluctuations in the received signal quality. PMID:26569251
NASA Astrophysics Data System (ADS)
Celik, Koray
This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity sensing, object avoidance, mapping, and path-planning mechanism to fly and navigate small to medium scale unmanned rotary-wing aircraft in an autonomous manner. The range measurement strategy is scalable, self-calibrating, indoor-outdoor capable, and biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), and it is designed to assume operations in previously unknown, GPS-denied environments. The thesis proposes novel electronics, aircraft, aircraft systems, procedures, and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgment. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Although the emphasis is on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.
Vision-Based Traffic Data Collection Sensor for Automotive Applications
Llorca, David F.; Sánchez, Sergio; Ocaña, Manuel; Sotelo, Miguel. A.
2010-01-01
This paper presents a complete vision sensor onboard a moving vehicle which collects the traffic data in its local area in daytime conditions. The sensor comprises a rear looking and a forward looking camera. Thus, a representative description of the traffic conditions in the local area of the host vehicle can be computed. The proposed sensor detects the number of vehicles (traffic load), their relative positions and their relative velocities in a four-stage process: lane detection, candidates selection, vehicles classification and tracking. Absolute velocities (average road speed) and global positioning are obtained after combining the outputs provided by the vision sensor with the data supplied by the CAN Bus and a GPS sensor. The presented experiments are promising in terms of detection performance and accuracy in order to be validated for applications in the context of the automotive industry. PMID:22315572
Vision-based traffic data collection sensor for automotive applications.
Llorca, David F; Sánchez, Sergio; Ocaña, Manuel; Sotelo, Miguel A
2010-01-01
This paper presents a complete vision sensor onboard a moving vehicle which collects the traffic data in its local area in daytime conditions. The sensor comprises a rear looking and a forward looking camera. Thus, a representative description of the traffic conditions in the local area of the host vehicle can be computed. The proposed sensor detects the number of vehicles (traffic load), their relative positions and their relative velocities in a four-stage process: lane detection, candidates selection, vehicles classification and tracking. Absolute velocities (average road speed) and global positioning are obtained after combining the outputs provided by the vision sensor with the data supplied by the CAN Bus and a GPS sensor. The presented experiments are promising in terms of detection performance and accuracy in order to be validated for applications in the context of the automotive industry.
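A minimal sketch of the final fusion step described above: combining the vision sensor's relative positions and velocities with the host speed from the CAN bus and the host position from GPS to obtain absolute road speeds. All field names are illustrative assumptions, and the host heading is assumed aligned with the grid for simplicity.

```python
def absolute_traffic_state(host_speed_mps, host_position_utm, detections):
    """Convert vision-sensor relative measurements into absolute estimates.

    host_speed_mps: host vehicle speed from the CAN bus.
    host_position_utm: (easting, northing) of the host from GPS.
    detections: list of dicts with 'rel_position' (dx, dy) in metres and
                'rel_velocity' in m/s along the road axis (vision sensor output).
    """
    results = []
    for d in detections:
        abs_speed = host_speed_mps + d["rel_velocity"]          # absolute longitudinal speed
        abs_easting = host_position_utm[0] + d["rel_position"][0]
        abs_northing = host_position_utm[1] + d["rel_position"][1]
        results.append({"speed": abs_speed, "position": (abs_easting, abs_northing)})
    # Average road speed in the local area of the host vehicle
    avg_speed = sum(r["speed"] for r in results) / len(results) if results else None
    return results, avg_speed
```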
Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review
Ali, Asraf; Sundaraj, Kenneth; Ahmad, Badlishah; Ahamed, Nizam; Islam, Anamul
2012-01-01
Even though the number of rehabilitation guidelines has never been greater, uncertainty continues to arise regarding the efficiency and effectiveness of the rehabilitation of gait disorders. Progress on this question has been hindered by the lack of accurate measurements of gait disorders. Thus, this article reviews rehabilitation systems for gait disorders that use vision-based and non-vision-based sensor technologies, as well as combinations of the two. All papers published in the English language between 1990 and June 2012 that had the phrases “gait disorder”, “rehabilitation”, “vision sensor”, or “non vision sensor” in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Some synonyms of these phrases and the logical words “and”, “or”, and “not” were also used in the article searching procedure. Out of the 91 published articles found, this review identified 84 articles that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using a markerless vision-based sensor technology. We therefore believe that the information contained in this review paper will assist the progress of the development of rehabilitation systems for human gait disorders. PMID:22938548
Neuromorphic vision sensors and preprocessors in system applications
NASA Astrophysics Data System (ADS)
Kramer, Joerg; Indiveri, Giacomo
1998-09-01
A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high- dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.
A LEO Satellite Navigation Algorithm Based on GPS and Magnetometer Data
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack; Harman, Rick; Bauer, Frank H. (Technical Monitor)
2000-01-01
The Global Positioning System (GPS) has become a standard method for low cost onboard satellite orbit determination. The use of a GPS receiver as an attitude and rate sensor has also been developed in the recent past. Additionally, focus has been given to attitude and orbit estimation using the magnetometer, a low cost, reliable sensor. Combining measurements from both GPS and a magnetometer can provide a robust navigation system that takes advantage of the estimation qualities of both measurements. Ultimately a low cost, accurate navigation system can result, potentially eliminating the need for more costly sensors, including gyroscopes.
INS/GNSS Integration for Aerobatic Flight Applications and Aircraft Motion Surveying.
V Hinüber, Edgar L; Reimer, Christian; Schneider, Tim; Stock, Michael
2017-04-26
This paper presents field tests of challenging flight applications obtained with a new family of lightweight low-power INS/GNSS (inertial navigation system/global navigation satellite system) solutions based on MEMS (micro-electro-mechanical-sensor) machined sensors, being used for UAV (unmanned aerial vehicle) navigation and control as well as for aircraft motion dynamics analysis and trajectory surveying. One key is a 42+ state extended Kalman-filter-based powerful data fusion, which also allows the estimation and correction of parameters that are typically affected by sensor aging, especially when applying MEMS-based inertial sensors, and which is not yet deeply considered in the literature. The paper presents the general system architecture, which allows iMAR Navigation the integration of all classes of inertial sensors and GNSS receivers from very-low-cost MEMS and high-performance MEMS over FOG (fiber optical gyro) and RLG (ring laser gyro) up to HRG (hemispherical resonator gyro) technology, and presents detailed flight test results obtained under extreme flight conditions. As a real-world example, the aerobatic maneuvers of the World Champion 2016 (Red Bull Air Race) are presented. Short consideration is also given to surveying applications, where the ultimate performance of the same data fusion, but applied to gravimetric surveying, is discussed.
Tawk, Youssef; Tomé, Phillip; Botteron, Cyril; Stebler, Yannick; Farine, Pierre-André
2014-01-01
The use of global navigation satellite system receivers for navigation still presents many challenges in urban canyon and indoor environments, where satellite availability is typically reduced and received signals are attenuated. To improve the navigation performance in such environments, several enhancement methods can be implemented. For instance, external aid provided through coupling with other sensors has proven to contribute substantially to enhancing navigation performance and robustness. Within this context, coupling a very simple GPS receiver with an Inertial Navigation System (INS) based on low-cost micro-electro-mechanical systems (MEMS) inertial sensors is considered in this paper. In particular, we propose a GPS/INS Tightly Coupled Assisted PLL (TCAPLL) architecture, and present most of the associated challenges that need to be addressed when dealing with very-low-performance MEMS inertial sensors. In addition, we propose a data monitoring system in charge of checking the quality of the measurement flow in the architecture. The implementation of the TCAPLL is discussed in detail, and its performance under different scenarios is assessed. Finally, the architecture is evaluated through a test campaign using a vehicle that is driven in urban environments, with the purpose of highlighting the pros and cons of combining MEMS inertial sensors with GPS over GPS alone. PMID:24569773
INS/GNSS Integration for Aerobatic Flight Applications and Aircraft Motion Surveying
v. Hinüber, Edgar L.; Reimer, Christian; Schneider, Tim; Stock, Michael
2017-01-01
This paper presents field tests of challenging flight applications obtained with a new family of lightweight low-power INS/GNSS (inertial navigation system/global navigation satellite system) solutions based on MEMS (micro-electro-mechanical-sensor) machined sensors, being used for UAV (unmanned aerial vehicle) navigation and control as well as for aircraft motion dynamics analysis and trajectory surveying. One key is a 42+ state extended Kalman-filter-based powerful data fusion, which also allows the estimation and correction of parameters that are typically affected by sensor aging, especially when applying MEMS-based inertial sensors, and which is not yet deeply considered in the literature. The paper presents the general system architecture, which allows iMAR Navigation the integration of all classes of inertial sensors and GNSS (global navigation satellite system) receivers from very-low-cost MEMS and high-performance MEMS over FOG (fiber optical gyro) and RLG (ring laser gyro) up to HRG (hemispherical resonator gyro) technology, and presents detailed flight test results obtained under extreme flight conditions. As a real-world example, the aerobatic maneuvers of the World Champion 2016 (Red Bull Air Race) are presented. Short consideration is also given to surveying applications, where the ultimate performance of the same data fusion, but applied to gravimetric surveying, is discussed. PMID:28445417
A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults.
Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto
2017-09-29
The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuro-Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to classic UAV sensor fault detection algorithms, which are based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantage of this algorithm is that it combines real-time, model-free residual analysis of Kalman Filter (KF) estimates with the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.
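The residual-analysis step that feeds such a detector can be sketched as below, assuming standard Kalman filter prediction quantities; note that the paper's decision stage is an online-trained ANFIS rather than the fixed threshold shown here.

```python
import numpy as np

def innovation_residual(z, H, x_pred, P_pred, R):
    """Innovation (measurement residual) and its normalized magnitude from KF predictions."""
    y = z - H @ x_pred                       # residual between sensor output and KF prediction
    S = H @ P_pred @ H.T + R                 # innovation covariance
    d2 = float(y.T @ np.linalg.solve(S, y))  # squared Mahalanobis distance
    return y, d2

def flag_sensor_fault(d2, threshold=9.21):
    """Flag a fault when the normalized residual exceeds a threshold.

    threshold=9.21 is roughly the 99% chi-square quantile for 2 degrees of freedom;
    in the paper the decision is made by an online-trained ANFIS instead.
    """
    return d2 > threshold
```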
A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors.
Vanarse, Anup; Osseiran, Adam; Rassau, Alexander
2016-01-01
Conventional vision, auditory, and olfactory sensors generate large volumes of redundant data and as a result tend to consume excessive power. To address these shortcomings, neuromorphic sensors have been developed. These sensors mimic the neuro-biological architecture of sensory organs using aVLSI (analog Very Large Scale Integration) and generate asynchronous spiking output that represents sensing information in ways that are similar to neural signals. This allows for much lower power consumption due to an ability to extract useful sensory information from sparse captured data. The foundation for research in neuromorphic sensors was laid more than two decades ago, but recent developments in the understanding of biological sensing and in advanced electronics have stimulated research on sophisticated neuromorphic sensors that provide numerous advantages over conventional sensors. In this paper, we review the current state-of-the-art in neuromorphic implementation of vision, auditory, and olfactory sensors and identify key contributions across these fields. Bringing together these key contributions, we suggest a future research direction for further development of the neuromorphic sensing field.
NASA Technical Reports Server (NTRS)
Hruby, R. J.; Bjorkman, W. S.
1977-01-01
Flight test results of the strapdown inertial reference unit (SIRU) navigation system are presented. The fault-tolerant SIRU navigation system features a redundant inertial sensor unit and dual computers. System software provides for detection and isolation of inertial sensor failures and continued operation in the event of failures. Flight test results include assessments of the system's navigational performance and fault tolerance.
Millimeter wave sensor requirements for maritime small craft identification
NASA Astrophysics Data System (ADS)
Krapels, Keith; Driggers, Ronald G.; Garcia, Jose; Boettcher, Evelyn; Prather, Dennis; Schuetz, Chrisopher; Samluk, Jesse; Stein, Lee; Kiser, William; Visnansky, Andrew; Grata, Jeremy; Wikner, David; Harris, Russ
2009-09-01
Passive millimeter wave (mmW) imagers have improved in terms of resolution, sensitivity, and frame rate. Currently, the Office of Naval Research (ONR), along with the US Army Research, Development and Engineering Command, Communications Electronics Research Development and Engineering Center (RDECOM CERDEC) Night Vision and Electronic Sensor Directorate (NVESD), is investigating the current state of the art of mmW imaging systems. The focus of this study was the field performance of mmW imaging systems for the task of small watercraft / boat identification. First, mmW signatures were collected from a set of eight small watercraft at five different aspects, during daylight hours over a 48-hour period in the spring of 2008. Target characteristics were measured, and characteristic dimension, signatures, and Root Sum Squared of Target's Temperature (RRSΔT) were tabulated. Then an eight-alternative forced-choice (8AFC) human perception experiment was developed and conducted at NVESD, and the ability of observers to discriminate between small watercraft was quantified. Next, the task difficulty criterion, V50, was quantified by applying these data to NVESD's target acquisition models using the Targeting Task Performance (TTP) metric. These parameters can be used to evaluate sensor field performance for Anti-Terrorism / Force Protection (AT/FP) and navigation tasks for the U.S. Navy, as well as for the design and evaluation of passive mmW imaging sensors for both the U.S. Navy and U.S. Coast Guard.
NASA Technical Reports Server (NTRS)
Milenkovic, Zoran; DSouza, Christopher; Huish, David; Bendle, John; Kibler, Angela
2012-01-01
The exploration goals of Orion / MPCV Project will require a mature Rendezvous, Proximity Operations and Docking (RPOD) capability. Ground testing autonomous docking with a next-generation sensor such as the Vision Navigation Sensor (VNS) is a critical step along the path of ensuring successful execution of autonomous RPOD for Orion. This paper will discuss the testing rationale, the test configuration, the test limitations and the results obtained from tests that have been performed at the Lockheed Martin Space Operations Simulation Center (SOSC) to evaluate and mature the Orion RPOD system. We will show that these tests have greatly increased the confidence in the maturity of the Orion RPOD design, reduced some of the latent risks and in doing so validated the design philosophy of the Orion RPOD system. This paper is organized as follows: first, the objectives of the test are given. Descriptions of the SOSC facility, and the Orion RPOD system and associated components follow. The details of the test configuration of the components in question are presented prior to discussing preliminary results of the tests. The paper concludes with closing comments.
A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection
D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin
1993-01-01
A multiple sensor machine vision prototype is being developed to scan full-size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...
All Source Sensor Integration Using an Extended Kalman Filter
2012-03-22
Front-matter excerpt (table of contents and abbreviation list): GPS (Global Positioning System), ASPN (All Source Positioning Navigation), DARPA (Defense Advanced Research Projects Agency). From the text: navigation equations are developed for sensor-preprocessed measurements, and these navigation equations are not dependent upon the integrating filter.
Practical design and evaluation methods of omnidirectional vision sensors
NASA Astrophysics Data System (ADS)
Ohte, Akira; Tsuzuki, Osamu
2012-01-01
A practical omnidirectional vision sensor, consisting of a curved mirror, a mirror-supporting structure, and a megapixel digital imaging system, can view a field of 360 deg horizontally and 135 deg vertically. The authors theoretically analyzed and evaluated several curved mirrors, namely, a spherical mirror, an equidistant mirror, and a single viewpoint mirror (hyperboloidal mirror). The focus of their study was mainly on the image-forming characteristics, position of the virtual images, and size of blur spot images. The authors propose here a practical design method that satisfies the required characteristics. They developed image-processing software for converting circular images to images of the desired characteristics in real time. They also developed several prototype vision sensors using spherical mirrors. Reports dealing with virtual images and blur-spot size of curved mirrors are few; therefore, this paper will be very useful for the development of omnidirectional vision sensors.
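A minimal sketch of the circular-to-panoramic image conversion mentioned above, assuming an approximately equidistant mirror mapping and nearest-neighbour sampling; the actual software described in the paper supports several mirror models and runs in real time.

```python
import numpy as np

def unwarp_omni(img, center, r_min, r_max, out_w=720, out_h=180):
    """Unwarp a circular omnidirectional image into a panoramic strip.

    Assumes an (approximately) equidistant mirror mapping, i.e. image radius is
    proportional to elevation angle. Nearest-neighbour sampling for brevity.
    """
    cx, cy = center
    pano = np.zeros((out_h, out_w) + img.shape[2:], dtype=img.dtype)
    for v in range(out_h):
        r = r_min + (r_max - r_min) * v / (out_h - 1)      # radius <-> elevation row
        for u in range(out_w):
            theta = 2.0 * np.pi * u / out_w                # azimuth <-> column
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= x < img.shape[1] and 0 <= y < img.shape[0]:
                pano[out_h - 1 - v, u] = img[y, x]
    return pano
```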
Vision-based control for flight relative to dynamic environments
NASA Astrophysics Data System (ADS)
Causey, Ryan Scott
The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capability through more advanced systems that are useful for missions requiring autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation address the issues inherent to vision-based state estimation of moving objects for a monocular camera configuration. The process consists of several stages involving image processing, such as detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. An approach to state estimation of moving targets is derived using a homography approach. The algorithm requires knowledge of the camera motion, a reference motion, and additional feature point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform, for which the system dynamics are derived to relate vehicle states to image plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation. Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature point computations and the effects of uncertainty. The third simulation demonstrates the open-loop estimation of a target ground vehicle in pursuit, whereas the fourth implements a homing control design for Autonomous Aerial Refueling (AAR) using target estimates as feedback.
2D/3D Synthetic Vision Navigation Display
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, jason L.
2008-01-01
Flight-deck display software was designed and developed at NASA Langley Research Center to provide two-dimensional (2D) and three-dimensional (3D) terrain, obstacle, and flight-path perspectives on a single navigation display. The objective was to optimize the presentation of synthetic vision (SV) system technology that permits pilots to view multiple perspectives of flight-deck display symbology and 3D terrain information. Research was conducted to evaluate the efficacy of the concept. The concept has numerous unique implementation features that would permit enhanced operational concepts and efficiencies in both current and future aircraft.
NASA Technical Reports Server (NTRS)
1995-01-01
Intelligent Vision Systems, Inc. (InVision) needed image acquisition technology that was reliable in bad weather for its TDS-200 Traffic Detection System. InVision researchers used information from NASA Tech Briefs and assistance from Johnson Space Center to finish the system. The NASA technology used was developed for Earth-observing imaging satellites: charge coupled devices, in which silicon chips convert light directly into electronic or digital images. The TDS-200 consists of sensors mounted above traffic on poles or span wires, enabling two sensors to view an intersection; a "swing and sway" feature to compensate for movement of the sensors; a combination of electronic shutter and gain control; and sensor output to an image digital signal processor, still frame video and optionally live video.
An Improved Multi-Sensor Fusion Navigation Algorithm Based on the Factor Graph
Zeng, Qinghua; Chen, Weina; Liu, Jianye; Wang, Huizhe
2017-01-01
An integrated navigation system coupled with additional sensors can be used in Micro Unmanned Aerial Vehicle (MUAV) applications because the multi-sensor information is redundant and complementary, which can markedly improve the system accuracy. How to deal with the information gathered from different sensors efficiently is an important problem. The fact that different sensors provide measurements asynchronously may complicate the processing of these measurements. In addition, the output signals of some sensors appear to have a non-linear character. In order to incorporate these measurements and calculate a navigation solution in real time, a multi-sensor fusion algorithm based on the factor graph is proposed. The global optimum solution is factorized according to the chain structure of the factor graph, which allows for a more general form of the conditional probability density. This converts the fusion problem into connecting the factors defined by these measurements to the graph, without considering the relationship between the sensor update frequency and the fusion period. An experimental MUAV system has been built and some experiments have been performed to prove the effectiveness of the proposed method. PMID:28335570
An Improved Multi-Sensor Fusion Navigation Algorithm Based on the Factor Graph.
Zeng, Qinghua; Chen, Weina; Liu, Jianye; Wang, Huizhe
2017-03-21
An integrated navigation system coupled with additional sensors can be used in Micro Unmanned Aerial Vehicle (MUAV) applications because the multi-sensor information is redundant and complementary, which can markedly improve the system accuracy. How to deal with the information gathered from different sensors efficiently is an important problem. The fact that different sensors provide measurements asynchronously may complicate the processing of these measurements. In addition, the output signals of some sensors appear to have a non-linear character. In order to incorporate these measurements and calculate a navigation solution in real time, a multi-sensor fusion algorithm based on the factor graph is proposed. The global optimum solution is factorized according to the chain structure of the factor graph, which allows for a more general form of the conditional probability density. This converts the fusion problem into connecting the factors defined by these measurements to the graph, without considering the relationship between the sensor update frequency and the fusion period. An experimental MUAV system has been built and some experiments have been performed to prove the effectiveness of the proposed method.
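To illustrate the factor-graph idea in its simplest (linear, Gaussian, scalar) form, the toy sketch below treats each asynchronous measurement as a factor attached to whichever state it observed and solves the resulting weighted least-squares problem. It shows only the underlying structure, not the paper's algorithm; all values are made up.

```python
import numpy as np

# Unary factors: asynchronous position measurements, each tied to the state index it
# observed, with its own variance. Binary factors: odometry-like constraints between
# consecutive states. No common update rate is needed; each sensor just adds factors.
unary = [
    {"k": 0, "z": 0.10, "var": 0.50},   # e.g., a GNSS-like fix
    {"k": 1, "z": 1.15, "var": 0.05},   # e.g., a vision-derived position
    {"k": 2, "z": 2.30, "var": 0.50},
]
binary = [
    {"i": 0, "j": 1, "dz": 1.0, "var": 0.02},   # measured displacement between states
    {"i": 1, "j": 2, "dz": 1.0, "var": 0.02},
]
n_states = 3

# Build the whitened linear least-squares problem implied by the graph.
rows = len(unary) + len(binary)
A = np.zeros((rows, n_states))
b = np.zeros(rows)
r = 0
for f in unary:
    w = 1.0 / np.sqrt(f["var"])          # whiten by measurement standard deviation
    A[r, f["k"]] = w
    b[r] = w * f["z"]
    r += 1
for f in binary:
    w = 1.0 / np.sqrt(f["var"])
    A[r, f["i"]], A[r, f["j"]] = -w, w
    b[r] = w * f["dz"]
    r += 1

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("fused states:", x_hat)
```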
Kernelized Locality-Sensitive Hashing for Fast Image Landmark Association
2011-03-24
…based Simultaneous Localization and Mapping (SLAM). The problem, however, is that vision-based navigation techniques can require excessive amounts of … up and optimizing the data association process in vision-based SLAM. Specifically, this work studies the current methods that algorithms use to … required for location identification than that of other methods. This work can then be extended into a vision-SLAM implementation to subsequently …
Evolving EO-1 Sensor Web Testbed Capabilities in Pursuit of GEOSS
NASA Technical Reports Server (NTRS)
Mandi, Dan; Ly, Vuong; Frye, Stuart; Younis, Mohamed
2006-01-01
A viewgraph presentation on evolving sensor web capabilities to support the Global Earth Observing System of Systems (GEOSS) is shown. The topics include: 1) Vision to Enable Sensor Webs with "Hot Spots"; 2) Vision Extended for Communication/Control Architecture for Missions to Mars; 3) Key Capabilities Implemented to Enable EO-1 Sensor Webs; 4) One of Three Experiments Conducted by UMBC Undergraduate Class 12-14-05 (1 - 3); 5) Closer Look at our Mini-Rovers and Simulated Mars Landscape at GSFC; 6) Beginning to Implement Experiments with Standards-Vision for Integrated Sensor Web Environment; 7) Goddard Mission Services Evolution Center (GMSEC); 8) GMSEC Component Catalog; 9) Core Flight System (CFS) and Extension for GMSEC for Flight SW; 10) Sensor Modeling Language; 11) Seamless Ground to Space Integrated Message Bus Demonstration (completed December 2005); 12) Other Experiments in Queue; 13) Acknowledgements; and 14) References.
NASA Technical Reports Server (NTRS)
Bishop, Robert H.; DeMars, Kyle; Trawny, Nikolas; Crain, Tim; Hanak, Chad; Carson, John M.; Christian, John
2016-01-01
The navigation filter architecture successfully deployed on the Morpheus flight vehicle is presented. The filter was developed as a key element of the NASA Autonomous Landing and Hazard Avoidance Technology (ALHAT) project and, over the course of 15 free flights, was integrated into the Morpheus vehicle, operations, and flight control loop. Flight testing was completed by demonstrating autonomous hazard detection and avoidance; integration of altimeter, surface relative velocity (velocimeter), and hazard relative navigation (HRN) measurements into the onboard dual-state inertial estimator Kalman filter software; and landing within 2 meters of the vertical testbed GPS-based navigation solution at the safe landing site target. Morpheus followed a trajectory that included an ascent phase followed by a partial descent-to-landing, although the proposed filter architecture is applicable to more general planetary precision entry, descent, and landings. The main new contribution is the incorporation of a sophisticated hazard relative navigation sensor, originally intended to locate safe landing sites, into the navigation system, where it is employed as a navigation sensor. The formulation of a dual-state inertial extended Kalman filter was designed to address the precision planetary landing problem when viewed as a rendezvous problem with an intended landing site. For the required precision navigation system that is capable of navigating along a descent-to-landing trajectory to a precise landing, the impact of attitude errors on the translational state estimation is included in a fully integrated navigation structure in which translation state estimation is combined with attitude state estimation. The map tie errors are estimated as part of the process, thereby creating a dual-state filter implementation. Also, the filter is implemented using inertial states rather than states relative to the target. External measurements include an altimeter, a velocimeter, a star camera, a terrain relative navigation sensor, and a hazard relative navigation sensor providing information regarding hazards on a map generated on-the-fly.
A new method for determining which stars are near a star sensor field-of-view
NASA Technical Reports Server (NTRS)
Yates, Russell E., Jr.; Vedder, John D.
1991-01-01
A new method is described for determining which stars in a navigation star catalog are near a star sensor field of view (FOV). This method assumes that an estimate of the spacecraft inertial attitude is known. Vector component ranges for the star sensor FOV are computed, so that stars whose vector components lie within these ranges are near the star sensor FOV. This method requires no presorting of the navigation star catalog and is more efficient than traditional methods.
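A hedged sketch of the idea: a cheap per-component ("vector component range") test rejects most catalog stars before an exact angular test against the boresight derived from the estimated attitude. The pad value below follows from the chord-length bound and is an assumption of this illustration, not the paper's exact formulation.

```python
import numpy as np

def stars_near_fov(star_vectors, boresight, half_angle_deg):
    """Return indices of navigation-catalog stars near the star sensor FOV.

    star_vectors: (N, 3) unit vectors of catalog stars in the inertial frame.
    boresight:    sensor boresight unit vector from the estimated spacecraft attitude.
    half_angle_deg: FOV half-angle, padded to cover attitude uncertainty.
    """
    b = boresight / np.linalg.norm(boresight)
    half = np.radians(half_angle_deg)

    # Component-range prefilter: any unit vector within 'half' of the boresight differs
    # from it by at most 2*sin(half/2) in every component (chord-length bound).
    pad = 2.0 * np.sin(half / 2.0)
    in_box = np.all(np.abs(star_vectors - b) <= pad, axis=1)

    # Exact angular test only on the prefiltered candidates.
    idx = np.where(in_box)[0]
    keep = star_vectors[idx] @ b >= np.cos(half)
    return idx[keep]
```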
Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks
NASA Astrophysics Data System (ADS)
Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min
2015-10-01
Vehicle positioning has been the subject of extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the position of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique by using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information such as speed, lane change, driver's condition, etc., through optical wireless links of neighboring vehicles. Thus, the position of a target vehicle that is too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation for CVP. The proposed neural-network-based technique can estimate the target vehicle position from only two image points of the target vehicle using stereo vision. For this, we use the rear LEDs on target vehicles as image points. We show from simulation results that our neural-network-based method achieves better accuracy than that of the computer-vision method.
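The back-propagation regression idea can be sketched with a small one-hidden-layer network trained by plain gradient descent. The synthetic data below stand in for the (image points of the rear LEDs → position) training pairs and are purely illustrative; the paper's network structure and training data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: inputs are two image points (u1, v1, u2, v2); outputs are
# three position components. Real data would come from the stereo camera and ground truth.
X = rng.uniform(-1.0, 1.0, size=(500, 4))
true_W = rng.normal(size=(4, 3))
Y = np.tanh(X @ true_W) + 0.01 * rng.normal(size=(500, 3))

# One-hidden-layer network trained with plain back-propagation (gradient descent).
n_hidden = 16
W1 = 0.1 * rng.normal(size=(4, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.normal(size=(n_hidden, 3)); b2 = np.zeros(3)
lr = 0.05

for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)           # forward pass: hidden activations
    P = H @ W2 + b2                    # predicted positions
    E = P - Y                          # prediction error
    # Backward pass (gradients of mean squared error)
    dW2 = H.T @ E / len(X); db2 = E.mean(axis=0)
    dH = E @ W2.T * (1.0 - H ** 2)     # derivative of tanh
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float((E ** 2).mean()))
```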
Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking
NASA Astrophysics Data System (ADS)
Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas
2018-01-01
The University of Florida is taking a multidisciplinary approach to fuse the data between 3D vision sensors and radiological sensors in hopes of creating a system capable of not only detecting the presence of a radiological threat but also tracking it. The key to developing such a vision-aided radiological detection system lies in the count rate being inversely dependent on the square of the distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to the detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
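A minimal sketch of the inverse-square relation that underpins the fusion, assuming an isotropic point source and a calibrated reference rate at 1 m; the function names and calibration values are illustrative, not the paper's calibration algorithm.

```python
import numpy as np

def expected_count_rate(source_strength_cps_at_1m, distance_m, background_cps=0.0):
    """Detector count rate falls off with the inverse square of source-detector distance."""
    return source_strength_cps_at_1m / distance_m ** 2 + background_cps

def estimate_distance(count_rate_cps, source_strength_cps_at_1m, background_cps=0.0):
    """Invert the 1/d^2 relation to get distance from a background-subtracted rate."""
    net = max(count_rate_cps - background_cps, 1e-9)
    return np.sqrt(source_strength_cps_at_1m / net)
```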
Neurovision processor for designing intelligent sensors
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1992-03-01
A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi- functional vision sensor that performs a variety of information processing operations on time- varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.
Gao, Wei; Zhang, Ya; Wang, Jianguo
2014-01-01
The integrated navigation system with strapdown inertial navigation system (SINS), Beidou (BD) receiver and Doppler velocity log (DVL) can be used in marine applications owing to the fact that the redundant and complementary information from different sensors can markedly improve the system accuracy. However, the existence of multisensor asynchrony will introduce errors into the system. In order to deal with the problem, conventionally the sampling interval is subdivided, which increases the computational complexity. In this paper, an innovative integrated navigation algorithm based on a Cubature Kalman filter (CKF) is proposed correspondingly. A nonlinear system model and observation model for the SINS/BD/DVL integrated system are established to more accurately describe the system. By taking multi-sensor asynchronization into account, a new sampling principle is proposed to make the best use of each sensor's information. Further, CKF is introduced in this new algorithm to enable the improvement of the filtering accuracy. The performance of this new algorithm has been examined through numerical simulations. The results have shown that the positional error can be effectively reduced with the new integrated navigation algorithm. Compared with the traditional algorithm based on EKF, the accuracy of the SINS/BD/DVL integrated navigation system is improved, making the proposed nonlinear integrated navigation algorithm feasible and efficient. PMID:24434842
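For reference, the cubature-point construction at the core of a CKF can be sketched as follows (the spherical-radial rule with 2n equally weighted points). This is a generic sketch, not the paper's full SINS/BD/DVL filter or its asynchronous sampling principle.

```python
import numpy as np

def cubature_points(x_mean, P):
    """Generate the 2n cubature points used by a Cubature Kalman Filter (CKF).

    Points are x_mean +/- sqrt(n) * columns of the Cholesky factor of P,
    each with equal weight 1/(2n).
    """
    n = x_mean.size
    S = np.linalg.cholesky(P)                 # P = S S^T
    scaled = np.sqrt(n) * S
    pts = np.hstack([x_mean[:, None] + scaled, x_mean[:, None] - scaled])
    weights = np.full(2 * n, 1.0 / (2 * n))
    return pts, weights

def propagate_mean(f, pts, weights):
    """Propagate cubature points through a nonlinear function f and recover the predicted mean."""
    transformed = np.column_stack([f(pts[:, i]) for i in range(pts.shape[1])])
    return transformed @ weights
```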
Tele-auscultation support system with mixed reality navigation.
Hori, Kenta; Uchida, Yusuke; Kan, Tsukasa; Minami, Maya; Naito, Chisako; Kuroda, Tomohiro; Takahashi, Hideya; Ando, Masahiko; Kawamura, Takashi; Kume, Naoto; Okamoto, Kazuya; Takemura, Tadamasa; Yoshihara, Hiroyuki
2013-01-01
The aim of this research is to develop an information support system for tele-auscultation. In auscultation, a doctor needs to understand the conditions under which the stethoscope is applied, in addition to hearing the auscultatory sounds. The proposed system includes an intuitive navigation system for stethoscope operation, in addition to a conventional audio streaming system for auscultatory sounds and a conventional video conferencing system for telecommunication. Mixed reality technology is applied for intuitive navigation of the stethoscope. Information such as position, contact condition, and breath is overlaid on a view of the patient's chest. The contact condition of the stethoscope is measured by e-textile contact sensors. The breath is measured by a band-type breath sensor. In a simulated tele-auscultation experiment, the stethoscope with the contact sensors and the breath sensor was evaluated. The results show that the presentation of the contact condition was not clear enough to guide stethoscope handling. The time series of breath phases was usable for the remote doctor to understand the breath condition of the patient.
Flight test results of the strapdown hexad inertial reference unit (SIRU). Volume 2: Test report
NASA Technical Reports Server (NTRS)
Hruby, R. J.; Bjorkman, W. S.
1977-01-01
Results of flight tests of the Strapdown Inertial Reference Unit (SIRU) navigation system are presented. The fault tolerant SIRU navigation system features a redundant inertial sensor unit and dual computers. System software provides for detection and isolation of inertial sensor failures and continued operation in the event of failures. Flight test results include assessments of the system's navigational performance and fault tolerance. Performance shortcomings are analyzed.
Space Shuttle Earth Observation sensors pointing and stabilization requirements study
NASA Technical Reports Server (NTRS)
1976-01-01
The shuttle orbiter inertial measurement unit (IMU), located in the orbiter cabin, is used to supply inertial attitude reference signals; and, in conjunction with the onboard navigation system, can provide a pointing capability of the navigation base accurate to within plus or minus 0.5 deg for earth viewing missions. This pointing accuracy can degrade to approximately plus or minus 2.0 deg for payloads located in the aft bay due to structural flexure of the shuttle vehicle, payload structural and mounting misalignments, and calibration errors with respect to the navigation base. Drawbacks to obtaining pointing accuracy by using the orbiter RCS jets are discussed. Supplemental electromechanical pointing systems are developed to provide independent pointing for individual sensors, or sensor groupings. The missions considered and the sensors required for these missions and the parameters of each sensor are described. Assumptions made to derive pointing and stabilization requirements are delineated.
Soldier-worn augmented reality system for tactical icon visualization
NASA Astrophysics Data System (ADS)
Roberts, David; Menozzi, Alberico; Clipp, Brian; Russler, Patrick; Cook, James; Karl, Robert; Wenger, Eric; Church, William; Mauger, Jennifer; Volpe, Chris; Argenta, Chris; Wille, Mark; Snarski, Stephen; Sherrill, Todd; Lupo, Jasper; Hobson, Ross; Frahm, Jan-Michael; Heinly, Jared
2012-06-01
This paper describes the development and demonstration of a soldier-worn augmented reality system testbed that provides intuitive 'heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a robust soldier pose estimation capability with a helmet mounted see-through display to accurately overlay geo-registered iconography (i.e., navigation waypoints, blue forces, aircraft) on the soldier's view of reality. Applied Research Associates (ARA), in partnership with BAE Systems and the University of North Carolina - Chapel Hill (UNC-CH), has developed this testbed system in Phase 2 of the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program. The ULTRA-Vis testbed system functions in unprepared outdoor environments and is robust to numerous magnetic disturbances. We achieve accurate and robust pose estimation through fusion of inertial, magnetic, GPS, and computer vision data acquired from helmet kit sensors. Icons are rendered on a high-brightness, 40°×30° field of view see-through display. The system incorporates an information management engine to convert CoT (Cursor-on-Target) external data feeds into mil-standard icons for visualization. The user interface provides intuitive information display to support soldier navigation and situational awareness of mission-critical tactical information.
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
Positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error, based on the kinematic model of the robot, have a significant limitation: they are not effective over the whole measuring space. A new compensation method for the positioning error of the robot, based on vision measuring techniques, is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and place two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS spatial positioning error is 3.422 mm with the single-camera method and 0.031 mm with the dual-camera method. It is concluded that the single-camera algorithm needs to be improved for higher accuracy, while the accuracy of the dual-camera method is adequate for the application.
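The transformation from the sensor system to the global coordinate system can be estimated from corresponding control points with a standard SVD (Kabsch) solution, sketched below as one assumed way such a step might be implemented; it is not the paper's specific algorithm.

```python
import numpy as np

def rigid_transform(points_sensor, points_global):
    """Least-squares rotation R and translation t mapping sensor-frame control points
    onto their known global coordinates (Kabsch / SVD method).

    points_sensor, points_global: (N, 3) arrays of corresponding 3D points, N >= 3.
    """
    cs = points_sensor.mean(axis=0)
    cg = points_global.mean(axis=0)
    H = (points_sensor - cs).T @ (points_global - cg)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cg - R @ cs
    return R, t

# A measured point in the sensor frame is then compensated into the global frame as:
# p_global = R @ p_sensor + t
```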
Learning Long-Range Vision for an Offroad Robot
2008-09-01
Teaching a robot to perceive and navigate in an unstructured natural world is a difficult task. Without learning, navigation systems are short-range and extremely … Unsupervised or weakly supervised learning methods are necessary for training general feature representations for natural scenes.
Vision-based mapping with cooperative robots
NASA Astrophysics Data System (ADS)
Little, James J.; Jennings, Cullen; Murray, Don
1998-10-01
Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
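A minimal sketch of a conservative occupancy-grid update in log-odds form, of the kind that stereo range observations could drive; the increment and clamping values are illustrative assumptions, not the parameters used by the robots described above.

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4   # log-odds increments for occupied / free observations
L_MIN, L_MAX = -4.0, 4.0     # clamping keeps the map responsive to change

def update_cell(log_odds, observed_occupied):
    """Bayesian log-odds update of a single grid cell from one stereo range observation."""
    log_odds += L_OCC if observed_occupied else L_FREE
    return float(np.clip(log_odds, L_MIN, L_MAX))

def occupancy_probability(log_odds):
    """Convert log-odds back to an occupancy probability for collision-free path planning."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))
```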
Exploration, anxiety, and spatial memory in transgenic anophthalmic mice.
Buhot, M C; Dubayle, D; Malleret, G; Javerzat, S; Segu, L
2001-04-01
Contradictory results are found in the literature concerning the role of vision in the perception of space or in spatial navigation, in part because of the lack of murine models of total blindness used so far. The authors evaluated the spatial abilities of anophthalmic transgenic mice. These mice did not differ qualitatively from their wild-type littermates in general locomotor activity, spontaneous alternation, object exploration, or anxiety, but their level of exploratory activity was generally lower. In the spatial version of the water maze, they displayed persistent thigmotaxic behavior and showed severe spatial learning impairments. However, their performances improved with training, suggesting that they may have acquired a rough representation of the platform position. These results suggest that modalities other than vision enable some degree of spatial processing in proximal and structured spaces but that vision is critical for accurate spatial navigation.
Polarization Imaging and Insect Vision
ERIC Educational Resources Information Center
Green, Adam S.; Ohmann, Paul R.; Leininger, Nick E.; Kavanaugh, James A.
2010-01-01
For several years we have included discussions about insect vision in the optics units of our introductory physics courses. This topic is a natural extension of demonstrations involving Brewster's reflection and Rayleigh scattering of polarized light because many insects heavily rely on optical polarization for navigation and communication.…
Multi-Sensor Fusion with Interaction Multiple Model and Chi-Square Test Tolerant Filter.
Yang, Chun; Mohammadi, Arash; Chen, Qing-Wei
2016-11-02
Motivated by the key importance of multi-sensor information fusion algorithms in state-of-the-art integrated navigation systems due to recent advancements in sensor technologies, telecommunication, and navigation systems, the paper proposes an improved and innovative fault-tolerant fusion framework. An integrated navigation system is considered consisting of four sensory sub-systems, i.e., Strap-down Inertial Navigation System (SINS), Global Positioning System (GPS), Bei-Dou2 (BD2), and Celestial Navigation System (CNS) navigation sensors. In such multi-sensor applications, on the one hand, the design of an efficient fusion methodology is extremely constrained, especially when no information regarding the system's error characteristics is available. On the other hand, the development of an accurate fault detection and integrity monitoring solution is both challenging and critical. The paper addresses the sensitivity issues of conventional fault detection solutions and the unavailability of a precisely known system model by jointly designing fault detection and information fusion algorithms. In particular, by using ideas from Interacting Multiple Model (IMM) filters, the uncertainty of the system is adjusted adaptively by model probabilities and the proposed fuzzy-based fusion framework. The paper also addresses the problem of using corrupted measurements for fault detection purposes by designing a two-state-propagator chi-square test jointly with the fusion algorithm. Two IMM predictors, running in parallel, are used and alternately reactivated based on the information received from the fusion filter to increase the reliability and accuracy of the proposed detection solution. With the combination of the IMM and the proposed fusion method, we increase the failure sensitivity of the detection system and thereby significantly increase the overall reliability and accuracy of the integrated navigation system. Simulation results indicate that the proposed fault-tolerant fusion framework provides superior performance over its traditional counterparts.
Multi-Sensor Fusion with Interaction Multiple Model and Chi-Square Test Tolerant Filter
Yang, Chun; Mohammadi, Arash; Chen, Qing-Wei
2016-01-01
Motivated by the key importance of multi-sensor information fusion algorithms in state-of-the-art integrated navigation systems due to recent advancements in sensor technologies, telecommunication, and navigation systems, the paper proposes an improved and innovative fault-tolerant fusion framework. An integrated navigation system is considered consisting of four sensory sub-systems, i.e., Strap-down Inertial Navigation System (SINS), Global Positioning System (GPS), Bei-Dou2 (BD2), and Celestial Navigation System (CNS) navigation sensors. In such multi-sensor applications, on the one hand, the design of an efficient fusion methodology is extremely constrained, especially when no information regarding the system's error characteristics is available. On the other hand, the development of an accurate fault detection and integrity monitoring solution is both challenging and critical. The paper addresses the sensitivity issues of conventional fault detection solutions and the unavailability of a precisely known system model by jointly designing fault detection and information fusion algorithms. In particular, by using ideas from Interacting Multiple Model (IMM) filters, the uncertainty of the system is adjusted adaptively by model probabilities and the proposed fuzzy-based fusion framework. The paper also addresses the problem of using corrupted measurements for fault detection purposes by designing a two-state-propagator chi-square test jointly with the fusion algorithm. Two IMM predictors, running in parallel, are used and alternately reactivated based on the information received from the fusion filter to increase the reliability and accuracy of the proposed detection solution. With the combination of the IMM and the proposed fusion method, we increase the failure sensitivity of the detection system and thereby significantly increase the overall reliability and accuracy of the integrated navigation system. Simulation results indicate that the proposed fault-tolerant fusion framework provides superior performance over its traditional counterparts. PMID:27827832
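The chi-square consistency check at the heart of such a detection stage can be sketched as below for a single filter innovation; the paper's two-state-propagator variant and its coupling to the IMM predictors are more elaborate than this generic test.

```python
import numpy as np
from scipy.stats import chi2

def chi_square_test(innovation, S, alpha=0.01):
    """Consistency check of a filter innovation against its covariance S.

    Returns True if the measurement is consistent (no fault indicated) at level alpha.
    """
    d2 = float(innovation.T @ np.linalg.solve(S, innovation))   # squared Mahalanobis distance
    threshold = chi2.ppf(1.0 - alpha, df=innovation.size)       # chi-square acceptance bound
    return d2 <= threshold
```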
Vision systems for manned and robotic ground vehicles
NASA Astrophysics Data System (ADS)
Sanders-Reed, John N.; Koon, Phillip L.
2010-04-01
A Distributed Aperture Vision System for ground vehicles is described. An overview of the hardware including sensor pod, processor, video compression, and displays is provided. This includes a discussion of the choice between an integrated sensor pod and individually mounted sensors, open architecture design, and latency issues as well as flat panel versus head mounted displays. This technology is applied to various ground vehicle scenarios, including closed-hatch operations (operator in the vehicle), remote operator tele-operation, and supervised autonomy for multi-vehicle unmanned convoys. In addition, remote vision for automatic perimeter surveillance using autonomous vehicles and automatic detection algorithms is demonstrated.
Simulation analysis of a microcomputer-based, low-cost Omega navigation system
NASA Technical Reports Server (NTRS)
Lilley, R. W.; Salter, R. J., Jr.
1976-01-01
The current status of research on a proposed micro-computer-based, low-cost Omega Navigation System (ONS) is described. The design approach emphasizes minimum hardware, maximum software, and the use of a low-cost, commercially-available microcomputer. Currently under investigation is the implementation of a low-cost navigation processor and its interface with an omega sensor to complete the hardware-based ONS. Sensor processor functions are simulated to determine how many of the sensor processor functions can be handled by innovative software. An input data base of live Omega ground and flight test data was created. The Omega sensor and microcomputer interface modules used to collect the data are functionally described. Automatic synchronization to the Omega transmission pattern is described as an example of the algorithms developed using this data base.
Integration of a 3D perspective view in the navigation display: featuring pilot's mental model
NASA Astrophysics Data System (ADS)
Ebrecht, L.; Schmerwitz, S.
2015-05-01
Synthetic vision systems (SVS) are a spreading technology in the avionics domain, and several studies demonstrate enhanced situational awareness when synthetic vision is used. Since the introduction of synthetic vision, the primary flight display (PFD) and the navigation display (ND) have undergone steady change and evolution. The main improvements of the ND comprise the representation of colored terrain information from the enhanced ground proximity warning system (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of providing a 3D perspective view in the SVS-PFD while leaving the navigational content and methods of interaction unchanged, the question arises whether, and how, the gap between the two displays might evolve into a serious problem. This issue becomes important in relation to the transition and combination of strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, will be discussed. Further, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, into the synthetic vision ND will be presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD and supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness and might further raise the safety margin when operating in mountainous areas.
Space shuttle onboard navigation console expert/trainer system
NASA Technical Reports Server (NTRS)
Wang, Lui; Bochsler, Dan
1987-01-01
A software system for use in enhancing operational performance as well as training ground controllers in monitoring onboard Space Shuttle navigation sensors is described. The Onboard Navigation (ONAV) development reflects a trend toward following a structured and methodical approach to development. The ONAV system must deal with integrated conventional and expert system software, complex interfaces, and implementation limitations due to the target operational environment. An overview of the onboard navigation sensor monitoring function is presented, along with a description of guidelines driving the development effort, requirements that the system must meet, current progress, and future efforts.
Robonaut Mobile Autonomy: Initial Experiments
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Goza, S. M.; Tyree, K. S.; Huber, E. L.
2006-01-01
A mobile version of the NASA/DARPA Robonaut humanoid recently completed initial autonomy trials working directly with humans in cluttered environments. This compact robot combines the upper body of the Robonaut system with a Segway Robotic Mobility Platform yielding a dexterous, maneuverable humanoid ideal for interacting with human co-workers in a range of environments. This system uses stereovision to locate human teammates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form complex behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
A novel visual-inertial monocular SLAM
NASA Astrophysics Data System (ADS)
Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo
2018-02-01
With the development of sensors and the computer vision research community, cameras, which are accurate, compact, well understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features is a technique that obtains motion information from image acquisition equipment and reconstructs the structure of an unknown environment. We provide an analysis of bio-inspired flight in insects, employing a novel technique based on SLAM, and then combine visual and inertial measurements to obtain high accuracy and robustness. We present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach provides a more accurate quantitative simulation of insect navigation and can reach centimeter-level positioning accuracy.
NASA Technical Reports Server (NTRS)
Mitchell, Jennifer D.; Cryan, Scott P.; Baker, Kenneth; Martin, Toby; Goode, Robert; Key, Kevin W.; Manning, Thomas; Chien, Chiun-Hong
2008-01-01
The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as Automated Rendezvous and Docking, AR&D). The crewed versions may also perform AR&D, possibly with a different level of automation and/or autonomy, and must also provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Constellation Program; this is carried as one of the CEV Project's top risks. The Exploration Technology Development Program (ETDP) AR&D Sensor Technology Project seeks to reduce this risk by increasing technology maturation of selected relative navigation sensor technologies through testing and simulation. One of the project activities is a series of "pathfinder" testing and simulation activities to integrate relative navigation sensors with the Johnson Space Center Six-Degree-of-Freedom Test System (SDTS). The SDTS will be the primary testing location for the Orion spacecraft's Low Impact Docking System (LIDS). Project team members have integrated the Orion simulation with the SDTS computer system so that real-time closed-loop testing can be performed with relative navigation sensors and the docking system in the loop during docking and undocking scenarios. Two relative navigation sensors are being used as part of a "pathfinder" activity in order to pave the way for future testing with the actual Orion sensors. This paper describes the test configuration and test results.
[Impairment of safety in navigation caused by alcohol: impact on visual function].
Grütters, G; Reichelt, J A; Ritz-Timme, S; Thome, M; Kaatsch, H J
2003-05-01
So far in Germany, no legally binding blood alcohol concentration standards exist that establish an impairment of navigability. The aim of our interdisciplinary project was to obtain data in order to identify critical blood alcohol limits. In this context the visual system seems to be of decisive importance. Twenty-one professional skippers performed realistic navigational tasks in a sea traffic simulator, both sober and under the influence of alcohol. The following parameters were considered: visual acuity, stereopsis, color vision, and accommodation. Under the influence of alcohol (average blood alcohol concentration: 1.08 per thousand) each skipper considered himself to be completely capable of navigating. While the simulations were running, all of the skippers made nautical mistakes or underestimated dangerous situations. Severe impairment of visual acuity or binocular function was not observed. Accommodation decreased by an average of 18% (p=0.0001). In the test of color vision, skippers made more mistakes (p=0.017) and the time needed for this test was prolonged (p=0.004). Changes in visual function as well as vegetative and psychological reactions could be the cause of mistakes, and alcohol should therefore be regarded as a severe risk factor for safety in sea navigation.
Vision for navigation: What can we learn from ants?
Graham, Paul; Philippides, Andrew
2017-09-01
The visual systems of all animals are used to provide information that can guide behaviour. In some cases insects demonstrate particularly impressive visually-guided behaviour and then we might reasonably ask how the low-resolution vision and limited neural resources of insects are tuned to particular behavioural strategies. Such questions are of interest to both biologists and to engineers seeking to emulate insect-level performance with lightweight hardware. One behaviour that insects share with many animals is the use of learnt visual information for navigation. Desert ants, in particular, are expert visual navigators. Across their foraging life, ants can learn long idiosyncratic foraging routes. What's more, these routes are learnt quickly and the visual cues that define them can be implemented for guidance independently of other social or personal information. Here we review the style of visual navigation in solitary foraging ants and consider the physiological mechanisms that underpin it. Our perspective is to consider that robust navigation comes from the optimal interaction between behavioural strategy, visual mechanisms and neural hardware. We consider each of these in turn, highlighting the value of ant-like mechanisms in biomimetic endeavours. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Multi-Sensor Testing for Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Lab
NASA Technical Reports Server (NTRS)
Brewster, Linda L.; Howard, Richard T.; Johnston, A. S.; Carrington, Connie; Mitchell, Jennifer D.; Cryan, Scott P.
2008-01-01
The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as AR&D). The crewed missions may also perform rendezvous and docking operations and may require different levels of automation and/or autonomy, and must provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the requirements. The relatively low technology readiness level of AR&D relative navigation sensors has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce the risk by the testing and analysis of selected relative navigation sensor technologies through hardware-in-the-loop testing and simulation. These activities will provide the CEV Project information to assess the relative navigation sensors' maturity as well as demonstrate test methods and capabilities. The first year of this project focused on a series of "pathfinder" testing tasks to develop the test plans, test facility requirements, trajectories, math model architecture, simulation platform, and processes that will be used to evaluate the Contractor-proposed sensors. Four candidate sensors were used in the first phase of the testing. The second phase of testing used four sensors simultaneously: two Marshall Space Flight Center (MSFC) Advanced Video Guidance Sensors (AVGS), a laser-based video sensor that uses retroreflectors attached to the target vehicle, and two commercial laser range finders. The multi-sensor testing was conducted at MSFC's Flight Robotics Laboratory (FRL) using the FRL's 6-DOF gantry system, called the Dynamic Overhead Target System (DOTS). The target vehicle for "docking" in the laboratory was a mockup that was representative of the proposed CEV docking system, with added retroreflectors for the AVGS. The multi-sensor test configuration used 35 open-loop test trajectories covering three major objectives: (1) sensor characterization trajectories designed to test a wide range of performance parameters; (2) CEV-specific trajectories designed to test performance during CEV-like approach and departure profiles; and (3) sensor characterization tests designed for evaluating sensor performance under more extreme conditions as might be induced during a spacecraft failure or during contingency situations. This paper describes the test development, test facility, test preparations, test execution, and test results of the multi-sensor series of trajectories.
A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors
Vanarse, Anup; Osseiran, Adam; Rassau, Alexander
2016-01-01
Conventional vision, auditory, and olfactory sensors generate large volumes of redundant data and as a result tend to consume excessive power. To address these shortcomings, neuromorphic sensors have been developed. These sensors mimic the neuro-biological architecture of sensory organs using aVLSI (analog Very Large Scale Integration) and generate asynchronous spiking output that represents sensing information in ways that are similar to neural signals. This allows for much lower power consumption due to an ability to extract useful sensory information from sparse captured data. The foundation for research in neuromorphic sensors was laid more than two decades ago, but recent developments in the understanding of biological sensing and in advanced electronics have stimulated research on sophisticated neuromorphic sensors that provide numerous advantages over conventional sensors. In this paper, we review the current state-of-the-art in neuromorphic implementation of vision, auditory, and olfactory sensors and identify key contributions across these fields. Bringing together these key contributions, we suggest a future research direction for further development of the neuromorphic sensing field. PMID:27065784
Context-Aware Personal Navigation Using Embedded Sensor Fusion in Smartphones
Saeedi, Sara; Moussa, Adel; El-Sheimy, Naser
2014-01-01
Context-awareness is an interesting topic in mobile navigation scenarios where the context of the application is highly dynamic. Using context-aware computing, navigation services consider the situation of the user, not only in the design process, but in real time while the device is in use. The basic idea is that mobile navigation services can provide different services based on different contexts—where contexts are related to the user's activity and the device placement. Context-aware systems are concerned with the following challenges, which are addressed in this paper: context acquisition, context understanding, and context-aware application adaptation. The approach proposed in this paper uses low-cost sensors in a multi-level fusion scheme to improve the accuracy and robustness of the context-aware navigation system. The experimental results demonstrate the capabilities of the context-aware Personal Navigation Systems (PNS) for outdoor personal navigation using a smartphone. PMID:24670715
Context-aware personal navigation using embedded sensor fusion in smartphones.
Saeedi, Sara; Moussa, Adel; El-Sheimy, Naser
2014-03-25
Context-awareness is an interesting topic in mobile navigation scenarios where the context of the application is highly dynamic. Using context-aware computing, navigation services consider the situation of the user, not only in the design process, but in real time while the device is in use. The basic idea is that mobile navigation services can provide different services based on different contexts, where contexts are related to the user's activity and the device placement. Context-aware systems are concerned with the following challenges, which are addressed in this paper: context acquisition, context understanding, and context-aware application adaptation. The approach proposed in this paper uses low-cost sensors in a multi-level fusion scheme to improve the accuracy and robustness of the context-aware navigation system. The experimental results demonstrate the capabilities of the context-aware Personal Navigation Systems (PNS) for outdoor personal navigation using a smartphone.
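To make the multi-level idea above concrete, here is a minimal sketch of the lowest levels only: simple features extracted from a smartphone accelerometer window and mapped to a coarse context label by fixed rules. The feature set, thresholds and labels are illustrative assumptions and are far simpler than the fusion scheme the records describe.

```python
# Minimal sketch: accelerometer features -> rule-based context label.
# Features, thresholds and labels are illustrative assumptions.
import numpy as np

def accel_features(window):
    """window: (N, 3) accelerometer samples in m/s^2."""
    magnitude = np.linalg.norm(window, axis=1)
    return {"mean": magnitude.mean(), "std": magnitude.std()}

def classify_context(features):
    if features["std"] < 0.3:
        return "static"                      # phone resting, e.g. on a table
    if features["std"] < 2.0:
        return "walking"
    return "running_or_vehicle"

# Toy usage with a synthetic 2-second window at 50 Hz.
rng = np.random.default_rng(3)
walking = np.column_stack([rng.normal(0, 1.0, 100),
                           rng.normal(0, 1.0, 100),
                           9.81 + 1.5 * np.sin(np.linspace(0, 12 * np.pi, 100))])
print(classify_context(accel_features(walking)))     # "walking"
```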
Teaching with Vision: Culturally Responsive Teaching in Standards-Based Classrooms
ERIC Educational Resources Information Center
Sleeter, Christine E., Ed.; Cornbleth, Catherine, Ed.
2011-01-01
In "Teaching with Vision," two respected scholars in teaching for social justice have gathered teachers from across the country to describe rich examples of extraordinary practice. This collection showcases the professional experience and wisdom of classroom teachers who have been navigating standards- and test-driven teaching environments in…
Luo, Xiongbiao
2014-06-01
Various bronchoscopic navigation systems are developed for diagnosis, staging, and treatment of lung and bronchus cancers. To construct electromagnetically navigated bronchoscopy systems, registration of preoperative images and an electromagnetic tracker must be performed. This paper proposes a new marker-free registration method, which uses the centerlines of the bronchial tree and the center of a bronchoscope tip where an electromagnetic sensor is attached, to align preoperative images and electromagnetic tracker systems. The chest computed tomography (CT) volume (preoperative images) was segmented to extract the bronchial centerlines. An electromagnetic sensor was fixed at the bronchoscope tip surface. A model was designed and printed using a 3D printer to calibrate the relationship between the fixed sensor and the bronchoscope tip center. For each sensor measurement that includes sensor position and orientation information, its corresponding bronchoscope tip center position was calculated. By minimizing the distance between each bronchoscope tip center position and the bronchial centerlines, the spatial alignment of the electromagnetic tracker system and the CT volume was determined. After obtaining the spatial alignment, an electromagnetic navigation bronchoscopy system was established to track or locate a bronchoscope in real time inside the bronchial tree during bronchoscopic examinations. The electromagnetic navigation bronchoscopy system was validated on a dynamic bronchial phantom that can simulate respiratory motion with a breath rate range of 0–10 min⁻¹. The fiducial and target registration errors of this navigation system were evaluated. The average fiducial registration error was reduced from 8.7 to 6.6 mm. The average target registration error, which indicates the accuracy of all tracked or navigated bronchoscope positions, was reduced from 6.8 to 4.5 mm compared to previous registration methods. An electromagnetically navigated bronchoscopy system was constructed with accurate registration of an electromagnetic tracker and the CT volume on the basis of an improved marker-free registration approach that uses the bronchial centerlines and bronchoscope tip center information. The fiducial and target registration errors of our electromagnetic navigation system were about 6.6 and 4.5 mm in dynamic bronchial phantom validation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Xiongbiao, E-mail: xiongbiao.luo@gmail.com
2014-06-15
Purpose: Various bronchoscopic navigation systems are developed for diagnosis, staging, and treatment of lung and bronchus cancers. To construct electromagnetically navigated bronchoscopy systems, registration of preoperative images and an electromagnetic tracker must be performed. This paper proposes a new marker-free registration method, which uses the centerlines of the bronchial tree and the center of a bronchoscope tip where an electromagnetic sensor is attached, to align preoperative images and electromagnetic tracker systems. Methods: The chest computed tomography (CT) volume (preoperative images) was segmented to extract the bronchial centerlines. An electromagnetic sensor was fixed at the bronchoscope tip surface. A model was designed and printed using a 3D printer to calibrate the relationship between the fixed sensor and the bronchoscope tip center. For each sensor measurement that includes sensor position and orientation information, its corresponding bronchoscope tip center position was calculated. By minimizing the distance between each bronchoscope tip center position and the bronchial centerlines, the spatial alignment of the electromagnetic tracker system and the CT volume was determined. After obtaining the spatial alignment, an electromagnetic navigation bronchoscopy system was established to track or locate a bronchoscope in real time inside the bronchial tree during bronchoscopic examinations. Results: The electromagnetic navigation bronchoscopy system was validated on a dynamic bronchial phantom that can simulate respiratory motion with a breath rate range of 0–10 min⁻¹. The fiducial and target registration errors of this navigation system were evaluated. The average fiducial registration error was reduced from 8.7 to 6.6 mm. The average target registration error, which indicates the accuracy of all tracked or navigated bronchoscope positions, was reduced from 6.8 to 4.5 mm compared to previous registration methods. Conclusions: An electromagnetically navigated bronchoscopy system was constructed with accurate registration of an electromagnetic tracker and the CT volume on the basis of an improved marker-free registration approach that uses the bronchial centerlines and bronchoscope tip center information. The fiducial and target registration errors of our electromagnetic navigation system were about 6.6 and 4.5 mm in dynamic bronchial phantom validation.
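The registration step described in both records above, aligning tracked bronchoscope tip centers to the CT bronchial centerlines by minimizing point-to-centerline distance, can be sketched as a small rigid-alignment optimization. The Euler-angle parameterization, Nelder-Mead optimizer and synthetic data below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of marker-free registration in the spirit described above:
# find a rigid transform that minimizes the distance from tracked bronchoscope
# tip-center positions (tracker frame) to the bronchial centerline (CT frame).
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def euler_to_R(rx, ry, rz):
    cx, sx, cy, sy, cz, sz = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry), np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def register_to_centerline(tip_positions, centerline_points):
    """Rigidly align tracker-frame tip positions to CT-frame centerline points."""
    tree = cKDTree(centerline_points)

    def cost(params):
        R = euler_to_R(*params[:3])
        t = params[3:]
        transformed = tip_positions @ R.T + t
        d, _ = tree.query(transformed)          # distance to nearest centerline point
        return np.sum(d ** 2)

    x0 = np.zeros(6)                            # identity rotation, zero translation
    res = minimize(cost, x0, method="Nelder-Mead")
    return euler_to_R(*res.x[:3]), res.x[3:], res.fun

# Toy data: a straight "airway" centerline and tip samples offset by a known shift.
centerline = np.column_stack([np.linspace(0, 100, 200), np.zeros(200), np.zeros(200)])
tips = centerline[::10] + np.array([5.0, 2.0, -3.0])     # simulated tracker offset
R, t, residual = register_to_centerline(tips, centerline)
print(t, residual)   # y and z offsets are recovered (about -2 and 3); x is weakly constrained along a straight centerline
```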
Nocturnal visual orientation in flying insects: a benchmark for the design of vision-based sensors in Micro-Aerial Vehicles
2011-03-09
Over the past few years, a remarkable proliferation of designs for micro-aerial vehicles (MAVs) has occurred... possible elevations, it may severely degrade the performance of sensors by local saturation. Therefore it is necessary to find a method whereby the effect...
Vision communications based on LED array and imaging sensor
NASA Astrophysics Data System (ADS)
Yoo, Jong-Ho; Jung, Sung-Yoon
2012-11-01
In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. The system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. To transmit data, the proposed scheme simultaneously uses digital image processing and optical wireless communication. Cognitive communication therefore becomes possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit a multi-spectral optical signal covering visible, infrared, and ultraviolet light, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of sync data and information data. Sync data are used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical switching rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot using image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
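As a rough sketch of the receiver side of such a scheme, the snippet below recovers one bit per LED from a snapshot that has already been rectified to the LED array, by thresholding each cell's mean intensity. The grid size, threshold and synthetic frame are assumptions; sync handling and multi-spectral channels are omitted.

```python
# Minimal sketch of a "vision communication" receiver: one bit per LED cell,
# recovered by thresholding mean intensity. Layout and threshold are assumptions.
import numpy as np

def decode_led_frame(snapshot, rows, cols, threshold=0.5):
    """snapshot: 2-D float array in [0, 1], already cropped/rectified to the LED array."""
    h, w = snapshot.shape
    bits = []
    for r in range(rows):
        for c in range(cols):
            cell = snapshot[r * h // rows:(r + 1) * h // rows,
                            c * w // cols:(c + 1) * w // cols]
            bits.append(1 if cell.mean() > threshold else 0)
    return bits

# Toy usage: an 8x8 LED array with a known pattern, plus a little sensor noise.
rng = np.random.default_rng(0)
pattern = rng.integers(0, 2, size=(8, 8))
frame = np.kron(pattern, np.ones((10, 10))) * 0.8 + rng.normal(0, 0.05, (80, 80))
assert decode_led_frame(np.clip(frame, 0, 1), 8, 8) == pattern.flatten().tolist()
```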
Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission
NASA Technical Reports Server (NTRS)
Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.
2004-01-01
In January, 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a downlooking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.
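The DIMES idea of converting tracked-feature displacement into horizontal velocity reduces, under a pinhole camera and flat-ground assumption, to a simple scale-by-altitude computation. The sketch below illustrates only that geometry; the numbers are made up and are not MER flight values.

```python
# Minimal sketch of the geometry behind descent-image velocity estimation:
# feature pixel displacement + altitude + focal length + frame spacing -> velocity.
# Pinhole/flat-ground assumptions; all numbers are illustrative.
def horizontal_velocity(dx_px, dy_px, altitude_m, focal_px, dt_s):
    """Ground-plane velocity (m/s) from pixel displacement of a tracked feature."""
    metres_per_pixel = altitude_m / focal_px     # ground sample distance at nadir
    vx = dx_px * metres_per_pixel / dt_s
    vy = dy_px * metres_per_pixel / dt_s
    return vx, vy

# Toy usage: a feature moves 12 px between frames 0.25 s apart at 1500 m altitude.
print(horizontal_velocity(dx_px=12, dy_px=-3, altitude_m=1500.0,
                          focal_px=1000.0, dt_s=0.25))   # approx. (72.0, -18.0) m/s
```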
Ganji, Yusof; Janabi-Sharifi, Farrokh; Cheema, Asim N
2011-12-01
Despite the recent advances in catheter design and technology, intra-cardiac navigation during electrophysiology procedures remains challenging. Incorporation of imaging along with magnetic or robotic guidance may improve navigation accuracy and procedural safety. In the present study, the in vivo performance of a novel remote controlled Robot Assisted Cardiac Navigation System (RACN) was evaluated in a porcine model. The navigation catheter and target sensor were advanced to the right atrium using fluoroscopic and intra-cardiac echo guidance. The target sensor was positioned at three target locations in the right atrium (RA) and the navigation task was completed by an experienced physician using both manual and RACN guidance. The navigation time, final distance between the catheter tip and target sensor, and variability in final catheter tip position were determined and compared for manual and RACN-guided navigation. The experiments were completed in three animals and five measurements recorded for each target location. The mean distance (mm) between catheter tip and target sensor at the end of the navigation task was significantly less using RACN guidance compared with manual navigation (5.02 ± 0.31 vs. 9.66 ± 2.88, p = 0.050 for high RA, 9.19 ± 1.13 vs. 13.0 ± 1.00, p = 0.011 for low RA and 6.77 ± 0.59 vs. 15.66 ± 2.51, p = 0.003 for tricuspid valve annulus). The average time (s) needed to complete the navigation task was significantly longer with RACN-guided navigation compared with manual navigation (43.31 ± 18.19 vs. 13.54 ± 1.36, p = 0.047 for high RA, 43.71 ± 11.93 vs. 22.71 ± 3.79, p = 0.043 for low RA and 37.84 ± 3.71 vs. 16.13 ± 4.92, p = 0.003 for tricuspid valve annulus). RACN-guided navigation resulted in greater consistency in performance compared with manual navigation as evidenced by lower variability in final distance measurements (0.41 vs. 0.99 mm, p = 0.04). This study demonstrated the safety and feasibility of the RACN system for cardiac navigation. The results demonstrated that RACN performed comparably with manual navigation, with improved precision and consistency for targets located in and near the right atrial chamber. Copyright © 2011 John Wiley & Sons, Ltd.
Performance Characteristic Mems-Based IMUs for UAVs Navigation
NASA Astrophysics Data System (ADS)
Mohamed, H. A.; Hansen, J. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, A. B.
2015-08-01
Accurate 3D reconstruction has become essential for non-traditional mapping applications such as urban planning, mining industry, environmental monitoring, navigation, surveillance, pipeline inspection, infrastructure monitoring, landslide hazard analysis, indoor localization, and military simulation. The needs of these applications cannot be satisfied by traditional mapping, which is based on dedicated data acquisition systems designed for mapping purposes. Recent advances in hardware and software development have made it possible to conduct accurate 3D mapping without using costly and high-end data acquisition systems. Low-cost digital cameras, laser scanners, and navigation systems can provide accurate mapping if they are properly integrated at the hardware and software levels. Unmanned Aerial Vehicles (UAVs) are emerging as a mobile mapping platform that can provide additional economical and practical advantages. However, such economical and practical requirements need navigation systems that can provide an uninterrupted navigation solution. Hence, testing the performance characteristics of Micro-Electro-Mechanical Systems (MEMS) and other low-cost navigation sensors for various UAV applications is an important research topic. This work focuses on studying the performance characteristics under different manoeuvres using inertial measurements integrated with single point positioning, Real-Time-Kinematic (RTK), and additional navigational aiding sensors. Furthermore, the performance of the inertial sensors is tested during Global Positioning System (GPS) signal outage.
2010-11-01
Report figures include "Multiple Images of an Image Sequence", "A Digital Magnetic Compass from KVH Industries", and "Earth's Magnetic Field". ... experts from government, academia, industry and the military produced an analysis of future navigation sensors and systems whose performance...
Pre-shaping of the Fingertip of Robot Hand Covered with Net Structure Proximity Sensor
NASA Astrophysics Data System (ADS)
Suzuki, Kenji; Suzuki, Yosuke; Hasegawa, Hiroaki; Ming, Aiguo; Ishikawa, Masatoshi; Shimojo, Makoto
To achieve skillful tasks with multi-fingered robot hands, many researchers have been working on their sensor-based control. Vision sensors and tactile sensors are indispensable for such tasks; however, the correctness of the information from the vision sensors decreases as the robot hand approaches the object to be grasped, because of occlusion. This research aims to achieve seamless detection for reliable grasping by the use of proximity sensors: correcting the positional error of the hand during the vision-based approach, and bringing the fingertip into contact in a posture suitable for effective tactile sensing. In this paper, we propose a method for adjusting the posture of the fingertip to the surface of the object. The method applies the "Net-Structure Proximity Sensor" to the fingertip, which can detect the postural error about the roll and pitch axes between the fingertip and the object surface. The experimental result shows that the postural error is corrected in both axes even if the object rotates dynamically.
Evaluation of Candidate Millimeter Wave Sensors for Synthetic Vision
NASA Technical Reports Server (NTRS)
Alexander, Neal T.; Hudson, Brian H.; Echard, Jim D.
1994-01-01
The goal of the Synthetic Vision Technology Demonstration Program was to demonstrate and document the capabilities of current technologies to achieve safe aircraft landing, take off, and ground operation in very low visibility conditions. Two of the major thrusts of the program were (1) sensor evaluation in measured weather conditions on a tower overlooking an unused airfield and (2) flight testing of sensor and pilot performance via a prototype system. The presentation first briefly addresses the overall technology thrusts and goals of the program and provides a summary of MMW sensor tower-test and flight-test data collection efforts. Data analysis and calibration procedures for both the tower tests and flight tests are presented. The remainder of the presentation addresses the MMW sensor flight-test evaluation results, including the processing approach for determination of various performance metrics (e.g., contrast, sharpness, and variability). The variation of the very important contrast metric in adverse weather conditions is described. Design trade-off considerations for Synthetic Vision MMW sensors are presented.
Integrity Determination for Image Rendering Vision Navigation
2016-03-01
identifying an object within a scene, tracking a SIFT feature between frames or matching images and/or features for stereo vision applications. This... object level, either in 2-D or 3-D, versus individual features. There is a breadth of information, largely from the machine vision community...matching or image rendering image correspondence approach is based upon using either 2-D or 3-D object models or templates to perform object detection or
NASA Technical Reports Server (NTRS)
Pines, S.; Hueschen, R. M.
1978-01-01
This paper describes the navigation and guidance system developed for the TCV B-737, a Langley Field NASA research aircraft, and presents the results of an evaluation during final approach, landing, rollout and turnoff obtained through a nonlinear digital simulation. A Kalman filter (implemented in square root form) and a third-order complementary filter were developed and compared for navigation. The Microwave Landing System (MLS) is used for all phases of the flight for navigation and guidance. In addition, for rollout and turnoff, a three-coil sensor that detects the magnetic field induced by a buried wire in the runway (magnetic leader cable) is used. The outputs of the sensor are processed into measurements of position and heading deviation from the wire. The results show the concept to be both feasible and practical for commercial type aircraft terminal area control.
Kotze, Ben; Jordaan, Gerrit
2014-08-25
Automatic Guided Vehicles (AGVs) are navigated utilising multiple types of sensors for detecting the environment. In this investigation such sensors are replaced and/or minimized by the use of a single omnidirectional camera picture stream. An area of interest is extracted, and by using image processing the vehicle is navigated on a set path. Reconfigurability is added to the route layout by signs incorporated in the navigation process. The result is the possible manipulation of a number of AGVs, each on its own designated colour-signed path. This route is reconfigurable by the operator with no programming alteration or intervention. A low resolution camera and a Matlab® software development platform are utilised. The use of Matlab® lends itself to speedy evaluation and implementation of image processing options on the AGV, but its functioning in such an environment needs to be assessed.
Kotze, Ben; Jordaan, Gerrit
2014-01-01
Automatic Guided Vehicles (AGVs) are navigated utilising multiple types of sensors for detecting the environment. In this investigation such sensors are replaced and/or minimized by the use of a single omnidirectional camera picture stream. An area of interest is extracted, and by using image processing the vehicle is navigated on a set path. Reconfigurability is added to the route layout by signs incorporated in the navigation process. The result is the possible manipulation of a number of AGVs, each on its own designated colour-signed path. This route is reconfigurable by the operator with no programming alteration or intervention. A low resolution camera and a Matlab® software development platform are utilised. The use of Matlab® lends itself to speedy evaluation and implementation of image processing options on the AGV, but its functioning in such an environment needs to be assessed. PMID:25157548
A navigation system for the visually impaired an intelligent white cane.
Fukasawa, A Jin; Magatani, Kazusihge
2012-01-01
In this paper, we describe a navigation system we developed that supports independent walking of the visually impaired in indoor spaces. The instrument consists of a navigation system and a map information system, both installed on a white cane. The navigation system can follow a colored navigation line set on the floor: a color sensor installed on the tip of the white cane senses the color of the navigation line, and the system informs the visually impaired user by vibration that he/she is walking along the line. This color recognition system is controlled by a one-chip microprocessor. RFID tags and a receiver for these tags are used in the map information system. The RFID tags are set on the colored navigation line, and an antenna for the tags and a tag receiver are also installed on the white cane. The receiver receives the area information as a tag number and notifies the user of the map information by mp3-formatted pre-recorded voice. We also developed a direction identification technique that detects the user's walking direction using a triaxial acceleration sensor. Three normal subjects, blindfolded with an eye mask, were tested with the developed navigation system, and all of them were able to walk along the navigation line perfectly. We think that the performance of the system is good; therefore, our system will be extremely valuable in supporting the activities of the visually impaired.
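A minimal sketch of the cane's guidance loop is given below. The functions read_color_sensor(), read_rfid(), vibrate() and play_map_audio() are hypothetical stand-ins for the hardware interfaces, and the target colour, tolerance and polling rate are assumptions rather than the authors' parameters.

```python
# Minimal sketch of a colour-line-following + RFID-announcement loop.
# All hardware functions are hypothetical stand-ins injected as callables.
import time

TARGET_RGB = (220, 40, 40)        # colour of the navigation line (assumed red)
TOLERANCE = 60                    # per-channel match tolerance (assumed)

def on_navigation_line(rgb):
    return all(abs(c - t) <= TOLERANCE for c, t in zip(rgb, TARGET_RGB))

def guidance_loop(read_color_sensor, read_rfid, vibrate, play_map_audio):
    while True:
        if on_navigation_line(read_color_sensor()):
            vibrate()                         # confirm the user is on the line
        tag = read_rfid()
        if tag is not None:
            play_map_audio(tag)               # announce pre-recorded area info
        time.sleep(0.05)                      # ~20 Hz polling, an assumed rate
```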
Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang
2018-05-04
The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technique called the Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into the SINS. In practice, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment, and the changing rotary angular rate has an impact on inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and the instability of the angular rate, on the navigation accuracy of the RSSINS is deduced and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high-precision autonomous navigation performance with the MIMU when no external aid is available. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions.
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan
2018-01-01
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan
2018-02-06
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
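One common way to drive an adaptive fading factor from the Mahalanobis distance of the innovation, in the spirit of the methodology above though not necessarily with the paper's exact update law, is sketched below; the gate probability and toy covariance are assumptions.

```python
# Minimal sketch: adaptive fading factor driven by the innovation's Mahalanobis
# distance. The gate probability and example values are illustrative assumptions.
import numpy as np
from scipy.stats import chi2

def fading_factor(innovation, S, alpha=0.05):
    """Return lambda >= 1 that inflates the predicted covariance when the
    innovation is statistically too large for the nominal model."""
    m2 = float(innovation.T @ np.linalg.inv(S) @ innovation)  # squared Mahalanobis distance
    gate = chi2.ppf(1.0 - alpha, df=len(innovation))
    return max(1.0, m2 / gate)        # > 1 only when the gate is exceeded

# Toy usage inside a filter measurement update:
S = np.diag([2.0, 2.0])               # nominal innovation covariance
print(fading_factor(np.array([0.5, -1.0]), S))   # 1.0 (consistent innovation)
print(fading_factor(np.array([6.0, 5.0]), S))    # > 1, predicted covariance gets inflated
```

In a local filter, the returned factor would typically scale the predicted covariance before the gain is computed, which is what makes the estimate robust to process-modeling error.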
Recent progress in millimeter-wave sensor system capabilities for enhanced (synthetic) vision
NASA Astrophysics Data System (ADS)
Hellemann, Karlheinz; Zachai, Reinhard
1999-07-01
Weather- and daylight-independent operation of modern traffic systems is strongly required for optimized and economic availability. Helicopters, small aircraft and military transport aircraft in particular, which frequently operate close to the ground, need effective and cost-effective Enhanced Vision sensors. Technical progress in sensor technology and processing speed today offers the possibility for new concepts to be realized. Against this background, the paper reports on the improvements under development within the HiVision program at DaimlerChrysler Aerospace. A sensor demonstrator based on FMCW radar technology, with a high information update rate and operating in the mm-wave band, has been upgraded to improve performance and fitted for flight on an experimental basis. The results achieved so far demonstrate the capability to produce weather-independent enhanced vision. In addition, the demonstrator has been tested on board a high-speed ferry on the Baltic Sea, because fast vessels have a similar need for weather-independent operation and anti-collision measures. In the future one sensor type may serve both 'worlds' and help make traffic easier and safer. The described demonstrator fills the technology gap between optical sensors (infrared) and standard pulse radars with its specific features, such as high-speed scanning and weather penetration, with the additional benefit of cost-effectiveness.
Huang, Haoqian; Chen, Xiyuan; Zhang, Bo; Wang, Jian
2017-01-01
The underwater navigation system, mainly consisting of MEMS inertial sensors, is a key technology for the wide application of underwater gliders and plays an important role in achieving high-accuracy navigation and positioning over a long period of time. However, the navigation errors accumulate over time because of the inherent errors of inertial sensors, especially for the MEMS-grade IMU (Inertial Measurement Unit) generally used in gliders. A dead reckoning module is added to compensate for these errors. In the complicated underwater environment, the performance of MEMS sensors degrades sharply and the errors become much larger. It is difficult to establish an accurate and fixed error model for the inertial sensor; therefore, it is very hard to improve the accuracy of the navigation information calculated from the sensors. In order to solve this problem, a more suitable filter, which integrates the multi-model method with an EKF approach, can be designed according to different error models to give the optimal estimation of the state. The key parameters of the error models can be used to determine the corresponding filter. The Adams explicit formula, which has the advantage of high-precision prediction, is simultaneously fused into the above filter to further improve attitude estimation accuracy. The proposed algorithm has been proved through theoretical analyses and has been tested by both vehicle experiments and lake trials. Results show that the proposed method has better accuracy and effectiveness in terms of attitude estimation compared with the other methods mentioned in the paper for inertial navigation applied to underwater gliders. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
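The explicit Adams (Adams-Bashforth) predictor mentioned above can be sketched on a scalar toy ODE to show the prediction step that would be fused into such a filter; the 4-step coefficients are standard, while the RK4 start-up and the test problem are illustrative assumptions rather than the glider's attitude dynamics.

```python
# Minimal sketch of a 4-step explicit Adams (Adams-Bashforth) predictor on a
# scalar toy ODE; the test problem and step size are illustrative assumptions.
import numpy as np

def adams_bashforth4(f, y0, t0, h, n_steps):
    """Explicit 4-step Adams-Bashforth; the first 3 steps are bootstrapped with RK4."""
    t, y = [t0], [y0]
    for _ in range(3):                              # RK4 start-up values
        k1 = f(t[-1], y[-1])
        k2 = f(t[-1] + h / 2, y[-1] + h * k1 / 2)
        k3 = f(t[-1] + h / 2, y[-1] + h * k2 / 2)
        k4 = f(t[-1] + h, y[-1] + h * k3)
        y.append(y[-1] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6)
        t.append(t[-1] + h)
    for n in range(3, n_steps):
        fs = [f(t[n - i], y[n - i]) for i in range(4)]
        y.append(y[n] + h * (55 * fs[0] - 59 * fs[1] + 37 * fs[2] - 9 * fs[3]) / 24)
        t.append(t[n] + h)
    return np.array(t), np.array(y)

# Toy usage: dy/dt = -y, whose exact solution is exp(-t).
t, y = adams_bashforth4(lambda t, y: -y, y0=1.0, t0=0.0, h=0.05, n_steps=40)
print(abs(y[-1] - np.exp(-t[-1])))                  # small error vs. the exact solution
```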
Biological Basis For Computer Vision: Some Perspectives
NASA Astrophysics Data System (ADS)
Gupta, Madan M.
1990-03-01
Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.
[Navigated drilling for femoral head necrosis. Experimental and clinical results].
Beckmann, J; Tingart, M; Perlick, L; Lüring, C; Grifka, J; Anders, S
2007-05-01
In the early stages of osteonecrosis of the femoral head, core decompression by exact drilling into the ischemic areas can reduce pain and achieve reperfusion. Using computer-aided surgery, the precision of the drilling can be improved while simultaneously lowering radiation exposure time for both staff and patients. We describe the experimental and clinical results of drilling under the guidance of the fluoroscopically-based VectorVision navigation system (BrainLAB, Munich, Germany). A total of 70 sawbones were prepared mimicking an osteonecrosis of the femoral head. In two experimental models, bone only and obesity, as well as in a clinical setting involving ten patients with osteonecrosis of the femoral head, the precision and the duration of radiation exposure were compared between the VectorVision system and conventional drilling. No target was missed. For both models, there was a statistically significant difference in terms of the precision, the number of drilling corrections as well as the radiation exposure time. The average distance to the desired midpoint of the lesion of both models was 0.48 mm for navigated drilling and 1.06 mm for conventional drilling, the average drilling corrections were 0.175 and 2.1, and the radiation exposure time less than 1 s and 3.6 s, respectively. In the clinical setting, the reduction of radiation exposure (below 1 s for navigation compared to 56 s for the conventional technique) as well as of drilling corrections (0.2 compared to 3.4) was also significant. Computer-guided drilling using the fluoroscopically-based VectorVision navigation system shows clearly improved precision with an enormous simultaneous reduction in radiation exposure. It is therefore recommended for clinical routine.
Precision of computer-assisted core decompression drilling of the femoral head.
Beckmann, J; Goetz, J; Baethis, H; Kalteis, T; Grifka, J; Perlick, L
2006-08-01
Osteonecrosis of the femoral head is a local destructive disease with progression into devastating stages. Left untreated it mostly leads to severe secondary osteoarthrosis and early endoprosthetic joint replacement. Core decompression by exact drilling into the ischemic areas can be performed in early stages according to Ficat or ARCO. Computer-aided surgery might enhance the precision of the drilling and lower the radiation exposure time of both staff and patients. The aim of this study was to evaluate the precision of the fluoroscopically based VectorVision navigation system in an in vitro model. Thirty sawbones were prepared with a defect filled up with a radiopaque gypsum sphere mimicking the osteonecrosis. Twenty sawbones were drilled by guidance of an intraoperative navigation system VectorVision (BrainLAB, Munich, Germany) and 10 sawbones by fluoroscopic control only. No gypsum sphere was missed. There was a statistically significant difference regarding the three-dimensional deviation (Euclidian norm) as well as maximum deviation in x-, y- or z-direction (maximum norm) to the desired mid-point of the lesion, with a mean of 0.51 and 0.4 mm in the navigated group and 1.1 and 0.88 mm in the control group, respectively. Furthermore, significant difference was found in the number of drilling corrections as well as the radiation time needed: no second drilling or correction of drilling direction was necessary in the navigated group compared to 1.4 in the control group. The radiation time needed was less than 1 s compared to 3.1 s, respectively. The fluoroscopy-based VectorVision navigation system shows a high feasibility of computer-guided drilling with a clear reduction of radiation exposure time and can therefore be integrated into clinical routine. The additional time needed is acceptable regarding the simultaneous reduction of radiation time.
On the performances of computer vision algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.
2012-01-01
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, and Samsung Galaxy SII.
Integration of Kinect and Low-Cost Gnss for Outdoor Navigation
NASA Astrophysics Data System (ADS)
Pagliaria, D.; Pinto, L.; Reguzzoni, M.; Rossi, L.
2016-06-01
Since its launch on the market, the Microsoft Kinect sensor has represented a great revolution in the field of low-cost navigation, especially for indoor robotic applications. In fact, this system is endowed with a depth camera, as well as a visual RGB camera, at a cost of about 200. The characteristics and the potentiality of the Kinect sensor have been widely studied for indoor applications. The second generation of this sensor has been announced to be capable of acquiring data even outdoors, under direct sunlight. The task of navigating while passing from an indoor to an outdoor environment (and vice versa) is very demanding because the sensors that work properly in one environment are typically unsuitable in the other one. In this sense the Kinect could represent an interesting device allowing the navigation solution to be bridged between outdoor and indoor environments. In this work the accuracy and the field of application of the new generation of the Kinect sensor have been tested outdoors, considering different lighting conditions and the reflective properties of the emitted ray on different materials. Moreover, an integrated system with a low-cost GNSS receiver has been studied, with the aim of taking advantage of the GNSS positioning when the satellite visibility conditions are good enough. A kinematic test has been performed outdoors by using a Kinect sensor and a GNSS receiver, and it is presented here.
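A minimal sketch of the bridging logic, trust the GNSS fix when enough satellites are visible and otherwise dead-reckon with Kinect-derived relative motion, is shown below. The satellite threshold and the shape of the Kinect odometry input are assumptions; the paper's actual integration may be tighter than this simple switch.

```python
# Minimal sketch of a GNSS/Kinect switching scheme; thresholds and the Kinect
# odometry interface are illustrative assumptions.
import numpy as np

MIN_SATELLITES = 5          # assumed threshold for a usable low-cost GNSS fix

def fuse_position(last_position, gnss_fix, n_satellites, kinect_delta):
    """gnss_fix and last_position are ENU metres; kinect_delta is the relative
    displacement estimated from consecutive Kinect depth/RGB frames."""
    if gnss_fix is not None and n_satellites >= MIN_SATELLITES:
        return np.asarray(gnss_fix, dtype=float)          # GNSS-driven update
    return np.asarray(last_position, dtype=float) + np.asarray(kinect_delta, dtype=float)

# Toy usage: GNSS drops from 7 to 3 satellites while the platform keeps moving.
pos = np.array([0.0, 0.0, 0.0])
pos = fuse_position(pos, gnss_fix=[1.0, 0.5, 0.0], n_satellites=7, kinect_delta=[0.9, 0.4, 0.0])
pos = fuse_position(pos, gnss_fix=None, n_satellites=3, kinect_delta=[1.0, 0.5, 0.0])
print(pos)                   # [2.0, 1.0, 0.0] carried through the GNSS outage
```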
NASA Technical Reports Server (NTRS)
Brewster, L.; Johnston, A.; Howard, R.; Mitchell, J.; Cryan, S.
2007-01-01
The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as AR&D). The crewed missions may also perform rendezvous and docking operations and may require different levels of automation and/or autonomy, and must provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the requirements. The relatively low technology readiness level of AR&D relative navigation sensors has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce the risk by the testing and analysis of selected relative navigation sensor technologies through hardware-in-the-loop testing and simulation. These activities will provide the CEV Project information to assess the relative navigation sensors' maturity as well as demonstrate test methods and capabilities. The first year of this project focused on a series of "pathfinder" testing tasks to develop the test plans, test facility requirements, trajectories, math model architecture, simulation platform, and processes that will be used to evaluate the Contractor-proposed sensors. Four candidate sensors were used in the first phase of the testing. The second phase of testing used four sensors simultaneously: two Marshall Space Flight Center (MSFC) Advanced Video Guidance Sensors (AVGS), a laser-based video sensor that uses retroreflectors attached to the target vehicle, and two commercial laser range finders. The multi-sensor testing was conducted at MSFC's Flight Robotics Laboratory (FRL) using the FRL's 6-DOF gantry system, called the Dynamic Overhead Target System (DOTS). The target vehicle for "docking" in the laboratory was a mockup that was representative of the proposed CEV docking system, with added retroreflectors for the AVGS. The multi-sensor test configuration used 35 open-loop test trajectories covering three major objectives: (1) sensor characterization trajectories designed to test a wide range of performance parameters; (2) CEV-specific trajectories designed to test performance during CEV-like approach and departure profiles; and (3) sensor characterization tests designed for evaluating sensor performance under more extreme conditions as might be induced during a spacecraft failure or during contingency situations. This paper describes the test development, test facility, test preparations, test execution, and test results of the multi-sensor series of trajectories.
Achieving Real-Time Tracking Mobile Wireless Sensors Using SE-KFA
NASA Astrophysics Data System (ADS)
Kadhim Hoomod, Haider, Dr.; Al-Chalabi, Sadeem Marouf M.
2018-05-01
Nowadays, real-time achievement is very important in different fields, such as automatic transport control, some medical applications, celestial body tracking, controlling agent movements, detection and monitoring, etc. This can be tested with different kinds of detection devices, named "sensors", such as infrared sensors, ultrasonic sensors, radars in general, laser light sensors, and the like. The ultrasonic sensor is the most fundamental one, and it presents great impact and challenges compared with the others, especially when navigating (as an agent). In this paper, ultrasonic sensors detect and delimit an area by themselves and then navigate inside this limited area, estimating position in real time using the Speed Equation with the Kalman Filter Algorithm as an intelligent estimation algorithm; the error is then calculated against the factual tracking rate. This paper used the Ultrasonic Sensor HC-SR04 with an Arduino UNO as the microcontroller.
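The exact "Speed Equation with Kalman Filter Algorithm" formulation is not given in the abstract; the following is a generic constant-velocity Kalman filter over HC-SR04 range readings, offered only as an assumed sketch of the kind of real-time distance/speed estimation described.

```python
import numpy as np

def kalman_track(ranges_m, dt=0.1, meas_std=0.01, accel_std=0.5):
    """Constant-velocity Kalman filter over a sequence of ultrasonic range readings.

    State x = [distance, speed]; measurement z = distance from the HC-SR04.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])               # motion (speed-equation) model
    H = np.array([[1.0, 0.0]])                            # only distance is measured
    Q = accel_std**2 * np.array([[dt**4 / 4, dt**3 / 2],  # process noise
                                 [dt**3 / 2, dt**2]])
    R = np.array([[meas_std**2]])                          # measurement noise
    x = np.array([ranges_m[0], 0.0])
    P = np.eye(2)
    track = []
    for z in ranges_m:
        x, P = F @ x, F @ P @ F.T + Q                     # predict
        y = z - H @ x                                      # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P                        # update
        track.append(x.copy())
    return np.array(track)                                 # filtered [distance, speed]
```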
Newton, Jenny; Barrett, Steven F; Wilcox, Michael J; Popp, Stephanie
2002-01-01
Machine vision for navigational purposes is a rapidly growing field. Many abilities such as object recognition and target tracking rely on vision. Autonomous vehicles must be able to navigate in dynamic environments and simultaneously locate a target position. Traditional machine vision often fails to react in real time because of large computational requirements, whereas the fly achieves complex orientation and navigation with a relatively small and simple brain. Understanding how the fly extracts visual information and how neurons encode and process information could lead us to a new approach for machine vision applications. Photoreceptors in the Musca domestica eye that share the same spatial information converge into a structure called the cartridge. The cartridge consists of the photoreceptor axon terminals and monopolar cells L1, L2, and L4. It is thought that L1 and L2 cells encode edge-related information relative to a single cartridge. These cells are thought to be equivalent to vertebrate bipolar cells, producing contrast enhancement and reduction of information sent to L4. Monopolar cell L4 is thought to perform image segmentation on the information input from L1 and L2 and also enhance edge detection. A mesh of interconnected L4's would correlate the output from L1 and L2 cells of adjacent cartridges and provide a parallel network for segmenting an object's edges. The focus of this research is to excite photoreceptors of the common housefly, Musca domestica, with different visual patterns. The electrical response of monopolar cells L1, L2, and L4 will be recorded using intracellular recording techniques. Signal analysis will determine the neurocircuitry to detect and segment images.
Precision Landing and Hazard Avoidance Domain
NASA Technical Reports Server (NTRS)
Robertson, Edward A.; Carson, John M., III
2016-01-01
The Precision Landing and Hazard Avoidance (PL&HA) domain addresses the development, integration, testing, and spaceflight infusion of sensing, processing, and GN&C functions critical to the success and safety of future human and robotic exploration missions. PL&HA sensors also have applications to other mission events, such as rendezvous and docking. Autonomous PL&HA builds upon the core GN&C capabilities developed to enable soft, controlled landings on the Moon, Mars, and other solar system bodies. Through the addition of a Terrain Relative Navigation (TRN) function, precision landing within tens of meters of a map-based target is possible. The addition of a 3-D terrain mapping lidar sensor improves the probability of a safe landing via autonomous, real-time Hazard Detection and Avoidance (HDA). PL&HA significantly improves the probability of mission success and enhances access to sites of scientific interest located in challenging terrain. PL&HA can also utilize external navigation aids, such as navigation satellites and surface beacons. Key capabilities include: advanced lidar sensors for high-precision ranging, velocimetry, and 3-D terrain mapping; Terrain Relative Navigation (TRN), which compares onboard reconnaissance data with real-time terrain imaging data to update the spacecraft position estimate; Hazard Detection and Avoidance (HDA), which generates a high-resolution, 3-D terrain map in real time during the approach trajectory to identify safe landing targets; and inertial navigation during terminal descent, in which high-precision surface-relative sensors enable accurate inertial navigation and a tightly controlled touchdown within meters of the selected safe landing target.
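The PL&HA summary names TRN but gives no algorithm; the sketch below illustrates one common way TRN is realized, correlating a small descent-camera patch against an onboard reference map with normalized cross-correlation to obtain a map-relative position fix. The function and argument names are hypothetical, not part of the PL&HA software.

```python
import numpy as np

def trn_fix(reference_map, descent_patch):
    """Locate a descent-camera patch in an onboard reference map (map-relative fix).

    Exhaustive normalized cross-correlation; returns the (row, col) of the
    best-matching patch origin in map pixel coordinates and the match score.
    """
    H, W = reference_map.shape
    h, w = descent_patch.shape
    p = descent_patch - descent_patch.mean()
    best, best_rc = -np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            win = reference_map[r:r + h, c:c + w]
            win = win - win.mean()
            denom = np.linalg.norm(win) * np.linalg.norm(p)
            score = (win * p).sum() / denom if denom > 0 else -np.inf
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best
```

In practice the resulting pixel offset would be converted to a map-frame position correction and fed to the onboard navigation filter.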
Shape Perception and Navigation in Blind Adults
Gori, Monica; Cappagli, Giulia; Baud-Bovy, Gabriel; Finocchietti, Sara
2017-01-01
Different sensory systems interact to generate a representation of space and to navigate. Vision plays a critical role in the development of spatial representation. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available and navigation therefore lacks this important sensory signal. In blind individuals, compensatory mechanisms can be adopted to improve spatial and navigation skills. On the other hand, the limitations of these compensatory mechanisms are not completely clear. Both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we develop a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. During the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals. The first is the tendency to compress the shapes reproduced during navigation. The second is the difficulty in recognizing complex audio stimuli, and the third is the difficulty in reproducing the desired shape: early blind participants occasionally reported perceiving a square but actually reproduced a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to lack of visual input during the early period of development. PMID:28144226
Hybrid optical acoustic seafloor mapping
NASA Astrophysics Data System (ADS)
Inglis, Gabrielle
The oceanographic research and industrial communities have a persistent demand for detailed three-dimensional seafloor maps which convey both shape and texture. Such data products are used for archeology, geology, ship inspection, biology, and habitat classification. There are a variety of sensing modalities and processing techniques available to produce these maps, and each has its own potential benefits and related challenges. Multibeam sonar and stereo vision are two such sensors with complementary strengths, making them ideally suited for data fusion. Data fusion approaches, however, have seen only limited application to underwater mapping, and there are no established methods for creating hybrid 3D reconstructions from two underwater sensing modalities. This thesis develops a processing pipeline to synthesize hybrid maps from multi-modal survey data. It is helpful to think of this processing pipeline as having two distinct phases: Navigation Refinement and Map Construction. This thesis extends existing work in underwater navigation refinement by incorporating methods which increase measurement consistency between both multibeam and camera. The result is a self-consistent 3D point cloud comprised of camera and multibeam measurements. In the map construction phase, a subset of the multi-modal point cloud retaining the best characteristics of each sensor is selected to be part of the final map. To quantify the desired traits of a map, several characteristics of a useful map are distilled into specific criteria. The different ways that hybrid maps can address these criteria provide justification for producing them as an alternative to current methodologies. The processing pipeline implements multi-modal data fusion and outlier rejection with emphasis on different aspects of map fidelity. The resulting point cloud is evaluated in terms of how well it addresses the map criteria. The final hybrid maps retain the strengths of both sensors and show significant improvement over the single-modality maps and naively assembled multi-modal maps.
Bio-inspired vision based robot control using featureless estimations of time-to-contact.
Zhang, Haijie; Zhao, Jianguo
2017-01-31
Marvelous vision-based dynamic behaviors of insects and birds such as perching, landing, and obstacle avoidance have inspired scientists to propose the idea of time-to-contact, which is defined as the time for a moving observer to contact an object or surface if the current velocity is maintained. Since time-to-contact can be estimated directly from consecutive images with only a vision sensor, it is widely used by a variety of robots to fulfill various tasks such as obstacle avoidance, docking, chasing, perching and landing. However, most existing methods for estimating the time-to-contact need to extract and track features during the control process, which is time-consuming and cannot be applied to robots with limited computation power. In this paper, we adopt a featureless estimation method, extend this method to more general settings with angular velocities, and improve the estimation results using Kalman filtering. Further, we design an error-based controller with a gain-scheduling strategy to control the motion of mobile robots. Experiments for both estimation and control are conducted using a customized mobile robot platform with low-cost embedded systems. Onboard experimental results demonstrate the effectiveness of the proposed approach, with the robot being controlled to successfully dock in front of a vertical wall. The estimation and control methods presented in this paper can be applied to computation-constrained miniature robots for agile locomotion such as landing, docking, or navigation.
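The paper's featureless estimator (extended to angular velocities and refined with Kalman filtering) is not reproduced in the abstract; the sketch below shows only the basic "direct" time-to-contact computation for pure approach toward a fronto-parallel surface, using image gradients of two consecutive frames, as an assumed illustration of featureless TTC.

```python
import numpy as np

def time_to_contact(frame0, frame1, dt=1 / 30):
    """Featureless TTC for pure camera translation toward a fronto-parallel surface.

    Brightness constancy with a purely divergent flow gives, per pixel,
    G * s + E_t = 0 with G = x*E_x + y*E_y and s = 1/TTC; solve s in least squares.
    """
    f0 = frame0.astype(float)
    f1 = frame1.astype(float)
    Ey, Ex = np.gradient((f0 + f1) / 2.0)        # spatial image gradients
    Et = (f1 - f0) / dt                           # temporal derivative
    h, w = f0.shape
    y, x = np.mgrid[0:h, 0:w]
    x = x - w / 2.0                               # image coordinates about the center
    y = y - h / 2.0
    G = x * Ex + y * Ey                           # radial gradient
    s = -np.sum(G * Et) / np.sum(G * G)           # least-squares expansion rate (1/s)
    return 1.0 / s if s > 0 else np.inf           # seconds to contact
```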
User Needs and Advances in Space Wireless Sensing and Communications
NASA Technical Reports Server (NTRS)
Kegege, Obadiah
2017-01-01
Decades of space exploration and technology trends for future missions show the need for new approaches in space/planetary sensor networks, observatories, internetworking, and communications/data delivery to Earth. The user needs discussed in this talk include interviews with several scientists and reviews of mission concepts for the next generation of sensors, observatories, and planetary surface missions. These observatories and sensors are envisioned to operate in extreme environments, with advanced autonomy, where communication to Earth is sometimes intermittent and delayed. These sensor nodes require software-defined networking capabilities in order to learn and adapt to the environment, collect science data, internetwork, and communicate. Also, some use cases require the intelligence to manage network functions (either as a host), mobility, and security, and to interface data to the physical radio/optical layer. For instance, on a planetary surface, autonomous sensor nodes would create their own ad-hoc network, with some nodes handling communication between the wireless sensor networks and orbiting relay satellites. A section of this talk will cover the advances in space communication and internetworking to support future space missions. NASA's Space Communications and Navigation (SCaN) program continues to evolve with the development of optical communication, a new vision of the integrated network architecture with more capabilities, and the adoption of CCSDS space internetworking protocols. Advances in wireless communications hardware and electronics have enabled software-defined networking (DVB-S2, VCM, ACM, DTN, ad hoc, etc.) protocols for improved wireless communication and network management. Developing technologies to fulfill these user needs for wireless communications, and adopting standardized communication/internetworking protocols, will be a huge benefit to future planetary missions, space observatories, and manned missions to other planets.
Bio-inspired multi-mode optic flow sensors for micro air vehicles
NASA Astrophysics Data System (ADS)
Park, Seokjun; Choi, Jaehyuk; Cho, Jihyun; Yoon, Euisik
2013-06-01
Monitoring wide-field surrounding information is essential for vision-based autonomous navigation in micro air vehicles (MAV). Our image-cube (iCube) module, which consists of multiple sensors facing different angles in 3-D space, can be applied to wide-field-of-view optic flow estimation (μ-Compound eyes) and to attitude control (μ-Ocelli) in the Micro Autonomous Systems and Technology (MAST) platforms. In this paper, we report an analog/digital (A/D) mixed-mode optic-flow sensor, which generates both optic flows and normal images in different modes for μ-Compound eyes and μ-Ocelli applications. The sensor employs a time-stamp-based optic flow algorithm which is modified from the conventional EMD (Elementary Motion Detector) algorithm to give an optimum partitioning of hardware blocks in the analog and digital domains as well as adequate allocation of pixel-level, column-parallel, and chip-level signal processing. Temporal filtering, which may require huge hardware resources if implemented in the digital domain, remains in a pixel-level analog processing unit. The rest of the blocks, including feature detection and timestamp latching, are implemented using digital circuits in a column-parallel processing unit. Finally, time-stamp information is decoded into velocity using look-up tables, multiplications, and simple subtraction circuits in a chip-level processing unit, thus significantly reducing core digital processing power consumption. In the normal image mode, the sensor generates 8-b digital images using single-slope ADCs in the column unit. In the optic flow mode, the sensor estimates 8-b 1-D optic flows with the integrated mixed-mode algorithm core and 2-D optic flows with external timestamp processing, respectively.
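As an assumed illustration of the time-stamp principle (not the chip's actual circuit-level implementation), the snippet below decodes a 1-D image velocity from the feature-crossing timestamps latched at two adjacent pixels; the pixel pitch value is hypothetical.

```python
def timestamp_optic_flow(t_prev_pixel, t_this_pixel, pixel_pitch_um=10.0):
    """1-D optic flow from feature-crossing timestamps at two adjacent pixels.

    Each pixel latches the time at which its analog front end detects a feature
    (e.g., an edge); the image velocity is the pixel pitch divided by the
    timestamp difference, so no multiplication-heavy correlation (EMD) stage
    is needed in the decoding path.
    """
    dt = t_this_pixel - t_prev_pixel              # timestamp difference in seconds
    if dt == 0:
        return float("inf")                       # faster than one timestamp tick
    return pixel_pitch_um * 1e-6 / dt             # focal-plane velocity in m/s

# Example: an edge crosses adjacent 10 um pixels 200 us apart -> 0.05 m/s on-chip
v = timestamp_optic_flow(0.0012, 0.0014)
```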
NASA Technical Reports Server (NTRS)
Pastor, P. Rick; Bishop, Robert H.; Striepe, Scott A.
2000-01-01
A first-order simulation analysis of the navigation accuracy expected from various Navigation Quick-Look data sets is performed. Here, quick-look navigation data are observations obtained from hypothetical telemetered data transmitted on the fly during a Mars probe's atmospheric entry. In this simulation study, the navigation data consist of 3-axis accelerometer and attitude information data. Three entry vehicle guidance types are studied: I. a maneuvering entry vehicle (as with Mars 01 guidance, where angle of attack and bank angle are controlled); II. a zero-angle-of-attack controlled entry vehicle (as with Mars 98); and III. a ballistic, or spin-stabilized, entry vehicle (as with Mars Pathfinder). For each type, the sensitivity to progressively under-sampled navigation data and the inclusion of sensor errors are characterized. Attempts to mitigate the reconstructed trajectory errors, including smoothing, interpolation and changing integrator characteristics, are also studied.
Analysis of a novel device-level SINS/ACFSS deeply integrated navigation method
NASA Astrophysics Data System (ADS)
Zhang, Hao; Qin, Shiqiao; Wang, Xingshu; Jiang, Guangwen; Tan, Wenfeng; Wu, Wei
2017-02-01
The combination of the strap-down inertial navigation system (SINS) and the celestial navigation system (CNS) is one of the popular measures to constitute an integrated navigation system. A star sensor (SS) is used as a precise attitude determination device in the CNS. To solve the problem that the star image obtained by the SS is motion-blurred under dynamic conditions, the attitude-correlated frames (ACF) approach is presented, and the star sensor which works based on the ACF approach is named the ACFSS. Depending on the ACF approach, a novel device-level SINS/ACFSS deeply integrated navigation method is proposed in this paper. Feedback to the ACF process from the gyro error is one of the typical characteristics of the SINS/CNS deeply integrated navigation method. Herein, simulation results have verified its validity and efficiency in improving the accuracy of the gyro, and it can be proved that this method is feasible.
Prol, Fabricio dos Santos; El Issaoui, Aimad; Hakala, Teemu
2018-01-01
The use of Personal Mobile Terrestrial System (PMTS) has increased considerably for mobile mapping applications because these systems offer dynamic data acquisition with ground perspective in places where the use of wheeled platforms is unfeasible, such as forests and indoor buildings. PMTS has become more popular with emerging technologies, such as miniaturized navigation sensors and off-the-shelf omnidirectional cameras, which enable low-cost mobile mapping approaches. However, most of these sensors have not been developed for high-accuracy metric purposes and therefore require rigorous methods of data acquisition and data processing to obtain satisfactory results for some mapping applications. To contribute to the development of light, low-cost PMTS and potential applications of these off-the-shelf sensors for forest mapping, this paper presents a low-cost PMTS approach comprising an omnidirectional camera with off-the-shelf navigation systems and its evaluation in a forest environment. Experimental assessments showed that the integrated sensor orientation approach using navigation data as the initial information can increase the trajectory accuracy, especially in covered areas. The point cloud generated with the PMTS data had accuracy consistent with the Ground Sample Distance (GSD) range of omnidirectional images (3.5–7 cm). These results are consistent with those obtained for other PMTS approaches. PMID:29522467
Benchmarking neuromorphic vision: lessons learnt from computer vision
Tan, Cheston; Lallee, Stephane; Orchard, Garrick
2015-01-01
Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision. PMID:26528120
Performance analysis of device-level SINS/ACFSS deeply integrated navigation method
NASA Astrophysics Data System (ADS)
Zhang, Hao; Qin, Shiqiao; Wang, Xingshu; Jiang, Guangwen; Tan, Wenfeng
2016-10-01
The Strap-Down Inertial Navigation System (SINS) is a widely used navigation system. The combination of SINS and the Celestial Navigation System (CNS) is one of the popular measures to constitute an integrated navigation system. A Star Sensor (SS) is used as a precise attitude determination device in the CNS. To solve the problem that the star image obtained by the SS under dynamic conditions is motion-blurred, the Attitude Correlated Frames (ACF) approach is presented, and the star sensor which works based on the ACF approach is named the ACFSS. Depending on the ACF approach, a novel device-level SINS/ACFSS deeply integrated navigation method is proposed in this paper. Feedback to the ACF process from the gyro error is one of the typical characteristics of the SINS/CNS deeply integrated navigation method. Herein, simulation results have verified its validity and efficiency in improving the accuracy of the gyro, and it can be proved that this method is feasible in theory.
Panoramic 3d Vision on the ExoMars Rover
NASA Astrophysics Data System (ADS)
Paar, G.; Griffiths, A. D.; Barnes, D. P.; Coates, A. J.; Jaumann, R.; Oberst, J.; Gao, Y.; Ellery, A.; Li, R.
The Pasteur payload on the ESA ExoMars Rover 2011/2013 is designed to search for evidence of extant or extinct life either on or up to ~2 m below the surface of Mars. The rover will be equipped with a panoramic imaging system to be developed by a UK, German, Austrian, Swiss, Italian and French team for visual characterization of the rover's surroundings and (in conjunction with an infrared imaging spectrometer) remote detection of potential sample sites. The Panoramic Camera system consists of a wide-angle multispectral stereo pair with 65° field-of-view (WAC; 1.1 mrad/pixel) and a high-resolution monoscopic camera (HRC; current design having 59.7 µrad/pixel with 3.5° field-of-view). Its scientific goals and operational requirements can be summarized as follows: • Determination of objects to be investigated in situ by other instruments for operations planning • Backup and support for the rover visual navigation system (path planning, determination of subsequent rover positions and orientation/tilt within the 3d environment), and localization of the landing site (by stellar navigation or by combination of orbiter and ground panoramic images) • Geological characterization (using narrow-band geology filters) and cartography of the local environments (local Digital Terrain Model or DTM) • Study of atmospheric properties and variable phenomena near the Martian surface (e.g. aerosol opacity, water vapour column density, clouds, dust devils, meteors, surface frosts) • Geodetic studies (observations of Sun, bright stars, Phobos/Deimos). The performance of 3d data processing is a key element of mission planning and scientific data analysis. The 3d Vision Team within the Panoramic Camera development Consortium reports on the current status of development, consisting of the following items: • Hardware Layout & Engineering: The geometric setup of the system (location on the mast & viewing angles, mutual mounting between WAC and HRC) needs to be optimized w.r.t. fields of view, ranging capability (distance measurement capability), data rate, necessity of calibration targets, hardware & data interfaces to other subsystems (e.g. navigation), as well as accuracy impacts of sensor design and compression ratio. • Geometric Calibration: The geometric properties of the individual cameras including various spectral filters, their mutual relations, and the dynamic geometrical relation between rover frame and cameras - with the mast in between - are precisely described by a calibration process. During surface operations these relations will be continuously checked and updated by photogrammetric means; environmental influences such as temperature, pressure and the Mars gravity will be taken into account. • Surface Mapping: Stereo imaging using the WAC stereo pair is used for the 3d reconstruction of the rover vicinity to identify, locate and characterize potentially interesting spots (3-10 for an experimental cycle to be performed within approx. 10-30 sols). The HRC is used for high-resolution imagery of these regions of interest to be overlaid on the 3d reconstruction and potentially refined by shape-from-shading techniques. A quick processing result is crucial for time-critical operations planning; therefore, emphasis is laid on automatic behaviour and intrinsic error detection mechanisms. The mapping results will be continuously fused, updated and synchronized with the map used by the navigation system.
The surface representation needs to take into account the different resolutions of HRC and WAC as well as uncommon or even unexpected image acquisition modes such as long-range, wide-baseline stereo from different rover positions or escape strategies in the case of loss of one of the stereo camera heads. • Panorama Mosaicking: The production of a high-resolution stereoscopic panorama is nowadays state-of-the-art in computer vision. However, certain challenges such as the need for access to accurate spherical coordinates, maintenance of radiometric & spectral response in various spectral bands, fusion between HRC and WAC, super resolution, and again the requirement of quick yet robust processing will add some complexity to the ground processing system. • Visualization for Operations Planning: Efficient operations planning is directly related to an ergonomic and well-performing visualization. It is intended to adapt existing tools to an integrated visualization solution for the purpose of scientific site characterization, view planning and reachability mapping/instrument placement of pointing sensors (including the panoramic imaging system itself), and selection of regions of interest. The main interfaces between the individual components, as well as the first version of a user requirement document, are currently under definition. Besides the support for sensor layout and calibration, the 3d vision system will consist of 2-3 main modules to be used during ground processing & utilization of the ExoMars Rover panoramic imaging system.
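The 3D reconstruction pipeline itself is not listed in the abstract; the snippet below is only a back-of-the-envelope illustration of how WAC stereo disparity maps to range, given the quoted 1.1 mrad/pixel angular resolution. The 0.5 m stereo baseline is an assumed placeholder, not the instrument's actual value.

```python
import numpy as np

def wac_depth_from_disparity(disparity_px, baseline_m=0.5, ifov_rad=1.1e-3):
    """Approximate range from stereo disparity for an angular-resolution camera.

    For a rectified stereo pair with per-pixel angular resolution `ifov_rad`
    (1.1 mrad/pixel is quoted for the WAC), a disparity of d pixels subtends
    an angle of roughly d * ifov, and for small angles
    range ~ baseline / (d * ifov). The baseline here is illustrative only.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return baseline_m / (disparity_px * ifov_rad)

# With these numbers, a 10-pixel disparity corresponds to roughly 45 m range
print(wac_depth_from_disparity(10))
```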
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1991-01-01
The volume on data fusion from multiple sources discusses fusing multiple views, temporal analysis and 3D motion interpretation, sensor fusion and eye-to-hand coordination, and integration in human shape perception. Attention is given to surface reconstruction, statistical methods in sensor fusion, fusing sensor data with environmental knowledge, computational models for sensor fusion, and evaluation and selection of sensor fusion techniques. Topics addressed include the structure of a scene from two and three projections, optical flow techniques for moving target detection, tactical sensor-based exploration in a robotic environment, and the fusion of human and machine skills for remote robotic operations. Also discussed are K-nearest-neighbor concepts for sensor fusion, surface reconstruction with discontinuities, a sensor-knowledge-command fusion paradigm for man-machine systems, coordinating sensing and local navigation, and terrain map matching using multisensing techniques for applications to autonomous vehicle navigation.
NASA Astrophysics Data System (ADS)
Belbachir, A. N.; Hofstätter, M.; Litzenberger, M.; Schön, P.
2009-10-01
A synchronous communication interface for neuromorphic temporal contrast vision sensors is described and evaluated in this paper. This interface has been designed for ultra-high-speed synchronous arbitration of a temporal contrast image sensor's pixel data. Enabling high-precision timestamping, this system demonstrates its uniqueness for handling peak data rates and preserving the main advantage of neuromorphic electronic systems, namely high and accurate temporal resolution. Based on a synchronous arbitration concept, the timestamping has a resolution of 100 ns. Both synchronous and (state-of-the-art) asynchronous arbiters have been implemented in a neuromorphic dual-line vision sensor chip in a standard 0.35 µm CMOS process. The performance analysis of both arbiters and the advantages of synchronous arbitration over asynchronous arbitration in capturing high-speed objects are discussed in detail.
Enabling Autonomous Navigation for Affordable Scooters.
Liu, Kaikai; Mulky, Rajathswaroop
2018-06-05
Despite the technical success of existing assistive technologies, for example, electric wheelchairs and scooters, they are still not effective enough in helping those in need navigate to their destinations in a hassle-free manner. In this paper, we propose to improve the safety and autonomy of navigation by designing a cutting-edge autonomous scooter, thus allowing people with mobility challenges to ambulate independently and safely in possibly unfamiliar surroundings. We focus on indoor navigation scenarios for the autonomous scooter where the current location, maps, and nearby obstacles are unknown. To achieve semi-LiDAR functionality, we leverage gyro-based pose data to compensate for the laser motion in real time and create a synthetic mapping of simple environments with regular shapes and deep hallways. Laser range finders are suitable for long ranges but have limited resolution. Stereo vision, on the other hand, provides 3D structural data of nearby complex objects. To achieve simultaneously fine-grained resolution and long-range coverage in the mapping of cluttered and complex environments, we dynamically fuse the measurements from the stereo vision camera system, the synthetic laser scanner, and the LiDAR. We propose solutions to self-correct errors in data fusion and create a hybrid map to assist the scooter in achieving collision-free navigation in an indoor environment.
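The paper's hybrid-map fusion is described only at a high level; the following is an assumed log-odds occupancy-grid sketch in which each sensor (LiDAR, synthetic laser scan, stereo vision) contributes obstacle evidence with its own trust weight. The weights and thresholds are illustrative, not values from the paper.

```python
import numpy as np

def fuse_into_grid(grid_logodds, cell_hits, weight):
    """Accumulate one sensor's obstacle evidence into a shared occupancy grid.

    grid_logodds : 2-D array of per-cell log-odds of being occupied
    cell_hits    : boolean array of the same shape, True where this sensor
                   observed an obstacle during the current cycle
    weight       : per-sensor trust (e.g., higher for LiDAR at long range,
                   higher for stereo vision on nearby complex objects)
    """
    grid_logodds[cell_hits] += weight             # evidence for occupancy
    grid_logodds[~cell_hits] -= 0.2 * weight      # mild evidence for free space
    return np.clip(grid_logodds, -10.0, 10.0)     # keep log-odds bounded

grid = np.zeros((200, 200))
lidar_hits = np.zeros((200, 200), dtype=bool)     # cells hit by the LiDAR this cycle
stereo_hits = np.zeros((200, 200), dtype=bool)    # cells hit by stereo vision
grid = fuse_into_grid(grid, lidar_hits, weight=1.0)
grid = fuse_into_grid(grid, stereo_hits, weight=0.6)
occupied = 1.0 / (1.0 + np.exp(-grid)) > 0.7      # probability threshold for planning
```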
Data Fusion for a Vision-Radiological System for Source Tracking and Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Andreas; Koppal, Sanjeev
2015-07-01
A multidisciplinary approach to allow the tracking of the movement of radioactive sources by fusing data from multiple radiological and visual sensors is under development. The goal is to improve the ability to detect, locate, track and identify nuclear/radiological threats. The key concept is that such widely available visual and depth sensors can impact radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor for the purpose of detection and tracking of radioactive material? Similarly, what are the best radiation detection methods and unfolding algorithms suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on larger numbers of distributed similar or identical radiation sensors coupled with position data, forming a network capable of detecting and locating a radiation source. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays. Similar developments using coded apertures or scatter cameras for neutrons have recently occurred. The main limitation of such systems is not so much in their capability but rather in their complexity and cost, which is prohibitive for large-scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system. The deviation from a simple inverse-square fall-off of radiation intensity is explored and accounted for. In particular, the computer vision system enables a map of the distance dependence of the sources being tracked. Infrared, laser or stereoscopic vision sensors are all options for computer-vision implementation depending on interior vs. exterior deployment, resolution desired and other factors. Similarly, the radiation sensors will be focused on gamma-ray or neutron detection due to the long travel length and ability to penetrate even moderate shielding. There is a significant difference between the vision sensors and radiation sensors in the way the 'source' or signals are generated. A vision sensor needs an external light source to illuminate the object and then detects the re-emitted illumination (or lack thereof). However, for a radiation detector, the radioactive material is the source itself. The only exception to this is the field of active interrogation, where radiation is beamed into a material to entice new/additional radiation emission beyond what the material would emit spontaneously. The fact that the nuclear material is the source itself means that all other objects in the environment are 'illuminated' or irradiated by the source. Most radiation will readily penetrate regular material, scatter in new directions or be absorbed. Thus, if a radiation source is located near a larger object, that object will in turn scatter some radiation that was initially emitted in a direction other than that of the radiation detector, and this can add to the observed count rate.
The effect of this scattering is a deviation from the traditional distance dependence of the radiation signal and is a key challenge that needs a combined system calibration solution and algorithms. Thus both an algebraic and a statistical approach have been developed and independently evaluated to investigate the sensitivity to this deviation from the simplified radiation fall-off as a function of distance. The resulting calibrated system algorithms are used and demonstrated in various laboratory scenarios, and later in realistic tracking scenarios. The selection and testing of radiological and computer-vision sensors for the additional specific scenarios will be the subject of ongoing and future work.
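Neither the algebraic nor the statistical calibration is reproduced in the abstract; as an assumed illustration of accounting for the deviation from pure inverse-square fall-off, the snippet below fits measured count rates to A/d² plus a constant term that absorbs room scatter and background.

```python
import numpy as np

def fit_falloff(distances_m, count_rates):
    """Least-squares fit of count_rate = A / d**2 + B.

    The constant term B absorbs room scatter and background, i.e. the deviation
    from an ideal inverse-square fall-off that the calibration must account for.
    """
    d = np.asarray(distances_m, dtype=float)
    r = np.asarray(count_rates, dtype=float)
    X = np.column_stack([1.0 / d**2, np.ones_like(d)])    # design matrix [1/d^2, 1]
    (A, B), *_ = np.linalg.lstsq(X, r, rcond=None)
    return A, B

# Synthetic example: a point-source term plus a flat scatter/background pedestal
d = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
A, B = fit_falloff(d, 400.0 / d**2 + 25.0)
```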
Line width determination using a biomimetic fly eye vision system.
Benson, John B; Wright, Cameron H G; Barrett, Steven F
2007-01-01
Developing a new vision system based on the vision of the common house fly, Musca domestica, has created many interesting design challenges. One of those problems is line width determination, which is the topic of this paper. It has been discovered that line width can be determined with a single sensor as long as either the sensor, or the object in question, has a constant, known velocity. This is an important first step for determining the width of any arbitrary object, with unknown velocity.
Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems
NASA Technical Reports Server (NTRS)
Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.
1992-01-01
This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.
INS integrated motion analysis for autonomous vehicle navigation
NASA Technical Reports Server (NTRS)
Roberts, Barry; Bazakos, Mike
1991-01-01
The use of inertial navigation system (INS) measurements to enhance the quality and robustness of motion analysis techniques used for obstacle detection is discussed with particular reference to autonomous vehicle navigation. The approach to obstacle detection used here employs motion analysis of imagery generated by a passive sensor. Motion analysis of imagery obtained during vehicle travel is used to generate range measurements to points within the field of view of the sensor, which can then be used to provide obstacle detection. Results obtained with an INS integrated motion analysis approach are reviewed.
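The abstract gives no equations; as an assumed illustration of how INS data makes passive-sensor motion analysis metric, the snippet below recovers range from optical flow in the simplified case of pure lateral camera translation, with the INS supplying the ego-speed. The focal length value is hypothetical.

```python
import numpy as np

def range_from_flow(flow_px_per_s, ins_speed_m_s, focal_px=800.0):
    """Range to scene points from optical flow and INS-measured ego-speed.

    Simplified pinhole case of pure lateral camera translation: a point at
    depth Z produces image motion flow = f * V / Z, so Z = f * V / flow.
    The INS supplies V, which makes the monocular flow metrically meaningful.
    """
    flow = np.asarray(flow_px_per_s, dtype=float)
    with np.errstate(divide="ignore"):
        return focal_px * ins_speed_m_s / flow    # meters; small flow means far away

# 20 px/s of flow at 2 m/s with an 800 px focal length -> about 80 m range
print(range_from_flow(20.0, 2.0))
```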
Autonomous Robot Navigation in Human-Centered Environments Based on 3D Data Fusion
NASA Astrophysics Data System (ADS)
Steinhaus, Peter; Strand, Marcus; Dillmann, Rüdiger
2007-12-01
Efficient navigation of mobile platforms in dynamic human-centered environments is still an open research topic. We have already proposed an architecture (MEPHISTO) for a navigation system that is able to fulfill the main requirements of efficient navigation: fast and reliable sensor processing, extensive global world modeling, and distributed path planning. Our architecture uses a distributed system of sensor processing, world modeling, and path planning units. In this article, we present implemented methods in the context of data fusion algorithms for 3D world modeling and real-time path planning. We also show results of the prototypic application of the system at the museum ZKM (center for art and media) in Karlsruhe.
Improving Real World Performance of Vision Aided Navigation in a Flight Environment
2016-09-15
[Table-of-contents excerpt] Introduction; 4.2 Wide Area Search Extent; 4.3 Large-Scale Image Navigation Histogram Filter (4.3.1 Location Model; 4.3.2 Measurement Model; 4.3.3 Histogram Filter; Iteration of Histogram Filter); 4.4 Implementation and Flight Test Campaign (4.4.1 Software Implementation).
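Only the chapter headings of the histogram-filter material survive in the excerpt above; as an assumed illustration of what a large-scale image-navigation histogram filter does, the snippet below runs one predict/update cycle of a grid-based Bayes filter, with the motion kernel and image-match likelihood standing in for the thesis's location and measurement models.

```python
import numpy as np

def histogram_filter_step(belief, motion_kernel, match_likelihood):
    """One predict/update cycle of a grid-based (histogram) Bayes filter.

    belief           : probability mass over discrete aircraft-location cells
    motion_kernel    : 1-D kernel describing motion uncertainty between images
    match_likelihood : per-cell likelihood of the current image given location
                       (e.g., from correlating the camera image with a map)
    """
    predicted = np.convolve(belief, motion_kernel, mode="same")   # location model
    predicted /= predicted.sum()
    posterior = predicted * match_likelihood                       # measurement model
    return posterior / posterior.sum()

belief = np.full(100, 1.0 / 100)                                   # uniform prior
kernel = np.array([0.1, 0.8, 0.1])                                 # drift of +/- one cell
likelihood = np.ones(100)
likelihood[42] = 50.0                                              # strong image match
belief = histogram_filter_step(belief, kernel, likelihood)
```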
Human Exploration and Avionic Technology Challenges
NASA Technical Reports Server (NTRS)
Benjamin, Andrew L.
2005-01-01
For this workshop, I will identify critical avionic gaps, enabling technologies, high-payoff investment opportunities, promising capabilities, and space applications for human lunar and Mars exploration. Key technology disciplines encompass fault tolerance, miniaturized instrumentation sensors, MEMS-based guidance, navigation, and control, surface communication networks, and rendezvous and docking. Furthermore, I will share bottom-up strategic planning relevant to manned-mission-driven needs. Blending research expertise, facilities, and personnel with internal NASA expertise is vital to stimulating collaborative technology solutions that achieve NASA's grand vision. Retaining JSC expertise in unique and critical areas is paramount to our long-term success. Civil servants will maintain key roles in setting the technology agenda, ensuring quality results, and integrating technologies into avionic systems and manned missions. Finally, I will present to NASA, academia, and the aerospace community some ongoing and future advanced avionic technology programs and activities that are relevant to our mission goals and objectives.
Data fusion for a vision-aided radiological detection system: Calibration algorithm performance
NASA Astrophysics Data System (ADS)
Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas
2018-05-01
In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low-cost data fusion system. The key is to develop an algorithm that fuses the data from multiple radiological and 3D vision sensors into one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments was devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor, and a Velodyne HDL-32E High Definition LiDAR sensor, which is a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is utilized to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human errors. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Using the vision sensor to determine the location of a detector would also constrain the possible locations, but it does not allow for room dependence (facility-dependent deviation) in generating a detector pseudo-location to be used for later data analysis. Using manually measured source location data, our algorithm predicted the offset detector location within an average calibration-difference of 20 cm from its actual location. Calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm, and the HDL-32E produced an average calibration-difference of 22 cm. Using NaI and He-3 detectors in place of the EJ-309, the calibration-difference was 52 cm for NaI and 75 cm for He-3. The algorithm is not detector dependent; however, from these results it was determined that detector-dependent adjustments are required.
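The calibration-difference metric is defined in the abstract as the Euclidean distance between the algorithm-predicted and the measured detector location; the short helper below computes it and averages over a set of runs (the experiments used 27 static source positions). The example coordinates are made up.

```python
import numpy as np

def calibration_difference(predicted_xyz, measured_xyz):
    """Euclidean distance between predicted and measured detector locations.

    Both arguments are arrays of shape (N, 3) in meters; the per-run distances
    and their mean (the reported average calibration-difference) are returned.
    """
    predicted_xyz = np.asarray(predicted_xyz, dtype=float)
    measured_xyz = np.asarray(measured_xyz, dtype=float)
    per_run = np.linalg.norm(predicted_xyz - measured_xyz, axis=1)
    return per_run, per_run.mean()

# Hypothetical single run: prediction off by ~18 cm from the hand-measured location
diffs, mean_diff = calibration_difference([[1.00, 0.50, 0.30]], [[1.15, 0.42, 0.35]])
```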
Enhanced modeling and simulation of EO/IR sensor systems
NASA Astrophysics Data System (ADS)
Hixson, Jonathan G.; Miller, Brian; May, Christopher
2015-05-01
The testing and evaluation process developed by the Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) provides end-to-end systems evaluation, testing, and training of EO/IR sensors. By combining NV-LabCap, the Night Vision Integrated Performance Model (NV-IPM), One Semi-Automated Forces (OneSAF) input sensor file generation, and the Night Vision Image Generator (NVIG) capabilities, NVESD provides confidence to the M&S community that EO/IR sensor developmental and operational testing and evaluation are accurately represented throughout the lifecycle of an EO/IR system. This new process allows for both theoretical and actual sensor testing. A sensor can be theoretically designed and modeled in NV-IPM, and then seamlessly input into the wargames for operational analysis. After theoretical design, prototype sensors can be measured using NV-LabCap, then modeled in NV-IPM and input into wargames for further evaluation. The measurement-to-high-fidelity modeling and simulation process can then be repeated again and again throughout the entire life cycle of an EO/IR sensor as needed, to include LRIP, full-rate production, and even after Depot Level Maintenance. This is a prototypical example of how an engineering-level model and higher-level simulations can share models to mutual benefit.
The Role of X-Rays in Future Space Navigation and Communication
NASA Technical Reports Server (NTRS)
Winternitz, Luke M. B.; Gendreau, Keith C.; Hasouneh, Monther A.; Mitchell, Jason W.; Fong, Wai H.; Lee, Wing-Tsz; Gavriil, Fotis; Arzoumanian, Zaven
2013-01-01
In the near future, applications using X-rays will enable autonomous navigation and time distribution throughout the solar system, high capacity and low-power space data links, highly accurate attitude sensing, and extremely high-precision formation flying capabilities. Each of these applications alone has the potential to revolutionize mission capabilities, particularly beyond Earth orbit. This paper will outline the NASA Goddard Space Flight Center vision and efforts toward realizing the full potential of X-ray navigation and communications.
Navigation-guided optic canal decompression for traumatic optic neuropathy: Two case reports.
Bhattacharjee, Kasturi; Serasiya, Samir; Kapoor, Deepika; Bhattacharjee, Harsha
2018-06-01
Two cases of traumatic optic neuropathy presented with profound loss of vision. Both cases had received a course of intravenous corticosteroids elsewhere but did not improve. They underwent navigation-guided optic canal decompression via an external transcaruncular approach, following which both cases showed visual improvement. Postoperative visual evoked potentials and optical coherence tomography of the retinal nerve fibre layer showed improvement. These case reports emphasize the role of stereotactic navigation technology in optic canal decompression for cases of traumatic optic neuropathy.
77 FR 42704 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-20
... Vision Sensors, 12 AN/APG-78 Fire Control Radars (FCR) with Radar Electronics Unit (LONGBOW component... Target Acquisition and Designation Sight, 27 AN/AAR-11 Modernized Pilot Night Vision Sensors, 12 AN/APG... enhance the protection of key oil and gas infrastructure and platforms which are vital to U.S. and western...
76 FR 8278 - Special Conditions: Gulfstream Model GVI Airplane; Enhanced Flight Vision System
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-14
... detected by infrared sensors can be much different from that detected by natural pilot vision. On a dark... by many imaging infrared systems. On the other hand, contrasting colors in visual wavelengths may be... of the EFVS image and the level of EFVS infrared sensor performance could depend significantly on...
It's not black or white—on the range of vision and echolocation in echolocating bats
Boonman, Arjan; Bar-On, Yinon; Cvikel, Noam; Yovel, Yossi
2013-01-01
Around 1000 species of bats in the world use echolocation to navigate, orient, and detect insect prey. Many of these bats emerge from their roost at dusk and start foraging when there is still light available. It is, however, unclear in what way and to what extent navigation, or even prey detection, in these bats is aided by vision. Here we compare the echolocation and visual detection ranges of two such species of bats which rely on different foraging strategies (Rhinopoma microphyllum and Pipistrellus kuhlii). We find that echolocation is better than vision for detecting small insects even in intermediate light levels (1–10 lux), while vision is advantageous for monitoring far-away landscape elements in both species. We thus hypothesize that bats constantly integrate information acquired by the two sensory modalities. We suggest that during evolution, echolocation was refined to detect increasingly small targets in conjunction with using vision. To do so, the ability to hear ultrasonic sound is a prerequisite, which was readily available in small mammals but absent in many other animal groups. The ability to exploit ultrasound to detect very small targets, such as insects, has opened up a large nocturnal niche to bats and may have spurred diversification in both echolocation and foraging tactics. PMID:24065924
Piao, Jin-Chun; Kim, Shin-Dug
2017-11-07
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
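The selection criterion used by the adaptive execution module is not detailed in the abstract; the snippet below is an assumed heuristic stand-in that falls back to full visual-inertial odometry when feature tracking is weak or rotation is fast, and otherwise uses the cheaper optical-flow-based fast visual odometry. The thresholds and names are illustrative.

```python
def choose_odometry_mode(tracked_feature_count, gyro_rate_rad_s,
                         min_features=60, max_rate=1.5):
    """Pick the pose-estimation path for the next frame (adaptive execution module).

    Heuristic stand-in for the paper's adaptive policy: use the robust but
    expensive visual-inertial odometry when tracking is weak or rotation is
    fast, and the cheaper optical-flow-based fast visual odometry otherwise.
    """
    if tracked_feature_count < min_features or abs(gyro_rate_rad_s) > max_rate:
        return "visual_inertial_odometry"        # robust path
    return "optical_flow_fast_vo"                # fast path for benign motion

# Example: plenty of features and slow rotation -> take the fast path
mode = choose_odometry_mode(tracked_feature_count=120, gyro_rate_rad_s=0.3)
```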
Navigation Algorithms for the SeaWiFS Mission
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; McClain, Charles R. (Technical Monitor)
2002-01-01
The navigation algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) were designed to meet the requirement of 1-pixel accuracy, a standard deviation (sigma) of 2. The objective has been to extract the best possible accuracy from the spacecraft telemetry and avoid the need for costly manual renavigation or geometric rectification. The requirement is addressed by postprocessing of both the Global Positioning System (GPS) receiver and Attitude Control System (ACS) data in the spacecraft telemetry stream. The navigation algorithms described are separated into four areas: orbit processing, attitude sensor processing, attitude determination, and final navigation processing. There has been substantial modification during the mission of the attitude determination and attitude sensor processing algorithms. For the former, the basic approach was completely changed during the first year of the mission, from a single-frame deterministic method to a Kalman smoother. This was done for several reasons: a) to improve the overall accuracy of the attitude determination, particularly near the sub-solar point; b) to reduce discontinuities; c) to support the single-ACS-string spacecraft operation that was started after the first mission year, which causes gaps in attitude sensor coverage; and d) to handle data quality problems (which became evident after launch) in the direct-broadcast data. The changes to the attitude sensor processing algorithms primarily involved the development of a model for the Earth horizon height, also needed for single-string operation; the incorporation of improved sensor calibration data; and improved data quality checking and smoothing to handle the data quality issues. The attitude sensor alignments have also been revised multiple times, generally in conjunction with the other changes. The orbit and final navigation processing algorithms have remained largely unchanged during the mission, aside from refinements to data quality checking. Although further improvements are certainly possible, future evolution of the algorithms is expected to be limited to refinements of the methods presented here, and no substantial changes are anticipated.
Activation of the Hippocampal Complex during Tactile Maze Solving in Congenitally Blind Subjects
ERIC Educational Resources Information Center
Gagnon, Lea; Schneider, Fabien C.; Siebner, Hartwig R.; Paulson, Olaf B.; Kupers, Ron; Ptito, Maurice
2012-01-01
Despite their lack of vision, congenitally blind subjects are able to build and manipulate cognitive maps for spatial navigation. It is assumed that they thereby rely more heavily on echolocation, proprioceptive signals and environmental cues such as ambient temperature and audition to compensate for their lack of vision. Little is known, however,…
On the Design of Attitude-Heading Reference Systems Using the Allan Variance.
Hidalgo-Carrió, Javier; Arnold, Sascha; Poulakis, Pantelis
2016-04-01
The Allan variance is a method to characterize stochastic random processes. The technique was originally developed to characterize the stability of atomic clocks and has also been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results over short time spans, but these results tend to degrade rapidly over longer time intervals. During the last decade, the performance of inertial sensors has significantly improved, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and sensor-information filtering is not a trivial task, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling up to the final integration in the sensor-fusion scheme is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV).
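As a concrete illustration of the characterization step, the snippet below computes the non-overlapping Allan deviation of an inertial-sensor rate record over a set of averaging times; this is the standard textbook definition rather than the paper's full end-to-end methodology.

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Non-overlapping Allan deviation of an inertial-sensor rate record.

    rate : 1-D array of samples (e.g., gyro output in deg/s)
    fs   : sample rate in Hz
    taus : iterable of averaging times in seconds
    """
    rate = np.asarray(rate, dtype=float)
    adev = []
    for tau in taus:
        m = int(round(tau * fs))                             # samples per cluster
        n = rate.size // m
        if n < 2:
            adev.append(np.nan)
            continue
        means = rate[: n * m].reshape(n, m).mean(axis=1)     # cluster averages
        avar = 0.5 * np.mean(np.diff(means) ** 2)            # Allan variance
        adev.append(np.sqrt(avar))
    return np.array(adev)

# Example on simulated white gyro noise, evaluated at 0.1 s, 1 s, and 10 s
sigma = allan_deviation(np.random.normal(0, 0.05, 200_000), fs=100.0, taus=[0.1, 1, 10])
```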
Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang
2018-01-01
The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution for attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technology method called the Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In practice, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment. The changing rotary angular rate has an impact on the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and the instability of the angular rate, on the navigation accuracy of RSSINS is derived, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so that high-precision autonomous navigation can be maintained by the MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions. PMID:29734707
Coordinating sensing and local navigation
NASA Technical Reports Server (NTRS)
Slack, Marc G.
1991-01-01
Based on Navigation Templates (or NaTs), this work presents a new paradigm for local navigation which addresses the noisy and uncertain nature of sensor data. Rather than creating a new navigation plan each time the robot's perception of the world changes, the technique incorporates perceptual changes directly into the existing navigation plan. In this way, the robot's navigation plan is quickly and continuously modified, resulting in actions that remain coordinated with its changing perception of the world.
Using arm and hand gestures to command robots during stealth operations
NASA Astrophysics Data System (ADS)
Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi
2012-06-01
Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.
Using Arm and Hand Gestures to Command Robots during Stealth Operations
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi
2012-01-01
Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.
Integrating Terrain Maps Into a Reactive Navigation Strategy
NASA Technical Reports Server (NTRS)
Howard, Ayanna; Werger, Barry; Seraji, Homayoun
2006-01-01
An improved method of processing information for autonomous navigation of a robotic vehicle across rough terrain involves the integration of terrain maps into a reactive navigation strategy. Somewhat more precisely, the method involves the incorporation, into navigation logic, of data equivalent to regional traversability maps. The terrain characteristic is mapped using a fuzzy-logic representation of the difficulty of traversing the terrain. The method is robust in that it integrates a global path-planning strategy with sensor-based regional and local navigation strategies to ensure a high probability of success in reaching a destination and avoiding obstacles along the way. The sensor-based strategies use cameras aboard the vehicle to observe the regional terrain, defined as the area of the terrain that covers the immediate vicinity near the vehicle to a specified distance a few meters away.
Strapdown cost trend study and forecast
NASA Technical Reports Server (NTRS)
Eberlein, A. J.; Savage, P. G.
1975-01-01
The potential cost advantages offered by advanced strapdown inertial technology in future commercial short-haul aircraft are summarized. The initial procurement cost and six-year cost of ownership, which includes spares and direct maintenance costs, were calculated for kinematic and inertial navigation systems so that traditional and strapdown mechanization costs could be compared. Cost results for the inertial navigation systems showed that the initial cost and cost of ownership for traditional triple-redundant gimbaled inertial navigators are three times those of the equivalent skewed redundant strapdown inertial navigator. The net cost advantage for the strapdown kinematic system is directly attributable to the reduction in sensor count for strapdown. The strapdown kinematic system has the added advantage of providing a fail-operational inertial navigation capability at no additional cost due to the use of inertial-grade sensors and attitude reference computers.
NASA Astrophysics Data System (ADS)
Jain, A. K.; Dorai, C.
Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.
NASA Astrophysics Data System (ADS)
Welch, Sharon S.
Topics discussed in this volume include aircraft guidance and navigation, optics for visual guidance of aircraft, spacecraft and missile guidance and navigation, lidar and ladar systems, microdevices, gyroscopes, cockpit displays, and automotive displays. Papers are presented on optical processing for range and attitude determination, aircraft collision avoidance using a statistical decision theory, a scanning laser aircraft surveillance system for carrier flight operations, star sensor simulation for astroinertial guidance and navigation, autonomous millimeter-wave radar guidance systems, and a 1.32-micron long-range solid state imaging ladar. Attention is also given to a microfabricated magnetometer using Young's modulus changes in magnetoelastic materials, an integrated microgyroscope, a pulsed diode ring laser gyroscope, self-scanned polysilicon active-matrix liquid-crystal displays, the history and development of coated contrast enhancement filters for cockpit displays, and the effect of the display configuration on the attentional sampling performance. (For individual items see A93-28152 to A93-28176, A93-28178 to A93-28180)
Yoo, Jeong-Ki; Kim, Jong-Hwan
2012-02-01
When a humanoid robot moves in a dynamic environment, a simple process of planning and following a path may not guarantee competent performance for dynamic obstacle avoidance because the robot acquires limited information from the environment using a local vision sensor. Thus, it is essential to update its local map as frequently as possible to obtain more information through gaze control while walking. This paper proposes a fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots. To determine the gaze direction, four criteria based on local map confidence, waypoint, self-localization, and obstacles are defined along with their corresponding partial evaluation functions. Using the partial evaluation values and the degree of consideration for each criterion, the fuzzy integral is applied to each candidate gaze direction for global evaluation. For effective dynamic obstacle avoidance, partial evaluation functions for self-localization error and surrounding obstacles are also used to generate a virtual dynamic obstacle for the modified-univector field method, which generates the path and velocity of the robot toward the next waypoint. The proposed architecture is verified through comparison with a conventional weighted sum-based approach in simulations using a simulator developed for HanSaRam-IX (HSR-IX).
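To make the aggregation step concrete, here is a hedged sketch (not the authors' implementation) of fuzzy-integral-based evaluation of candidate gaze directions: four partial scores per direction are combined with a Choquet integral over a simple monotone fuzzy measure. The criterion weights, the measure's exponent, and the candidate scores are all illustrative assumptions.

```python
import numpy as np

CRITERIA = ["map_confidence", "waypoint", "self_localization", "obstacles"]
WEIGHTS  = np.array([0.35, 0.25, 0.20, 0.20])       # illustrative degrees of consideration

def measure(idx_subset):
    """Monotone, normalized fuzzy measure: g(A) = (sum of weights in A)^0.7, normalized."""
    if len(idx_subset) == 0:
        return 0.0
    return (WEIGHTS[list(idx_subset)].sum() ** 0.7) / (WEIGHTS.sum() ** 0.7)

def choquet(scores):
    """Choquet integral of the partial scores with respect to the fuzzy measure above."""
    order = np.argsort(scores)                       # ascending order of scores
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = order[k:]                        # criteria whose score is >= current one
        total += (scores[i] - prev) * measure(coalition)
        prev = scores[i]
    return total

# Evaluate a few candidate gaze directions (scores are illustrative).
candidates = {
    "toward_waypoint": np.array([0.4, 0.9, 0.5, 0.3]),
    "toward_obstacle": np.array([0.6, 0.3, 0.4, 0.9]),
    "toward_landmark": np.array([0.8, 0.4, 0.9, 0.2]),
}
best = max(candidates, key=lambda name: choquet(candidates[name]))
print({n: round(choquet(s), 3) for n, s in candidates.items()}, "->", best)
```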
Wu, Defeng; Chen, Tianfei; Li, Aiguo
2016-08-30
A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed to improve the calibration accuracy. The approach is based on a number of fixed concentric circles manufactured on a calibration target. The concentric circles are employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals after the application of the RAC method. Therefore, the hybrid of the pinhole model and the MLPNN is used to represent the real camera model. A standard ball is used to validate the effectiveness of the presented technique, and the experimental results demonstrate that the proposed novel calibration approach can achieve a highly accurate model of the structured light vision sensor.
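The hybrid "parametric model plus neural-network residual" idea can be sketched as follows; this is an assumed, simplified stand-in (synthetic data, an affine fit in place of the RAC-calibrated pinhole model, scikit-learn's MLPRegressor in place of the paper's MLPNN), not the published calibration procedure.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hedged sketch of a hybrid calibration: a parametric camera model plus a neural-network
# residual correction. The "camera" below is synthetic: a linear projection plus a
# nonlinear distortion term; a linear least-squares fit stands in for the pinhole model,
# and an MLP learns whatever that fit cannot explain.
rng = np.random.default_rng(0)
world = rng.uniform(-1, 1, size=(500, 2))                        # calibration points
A_true = np.array([[620.0, 5.0], [-3.0, 610.0]])
pix = world @ A_true.T + 240.0
pix += 4.0 * np.sin(3.0 * world)                                 # nonlinear "distortion"

X = np.hstack([world, np.ones((len(world), 1))])                 # affine design matrix
coef, *_ = np.linalg.lstsq(X, pix, rcond=None)                   # stand-in parametric model
residual = pix - X @ coef

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
mlp.fit(world, residual)                                         # learn the residual field

hybrid = X @ coef + mlp.predict(world)
print("parametric-only RMSE:", np.sqrt(((pix - X @ coef) ** 2).mean()))
print("hybrid          RMSE:", np.sqrt(((pix - hybrid) ** 2).mean()))
```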
A Leo Satellite Navigation Algorithm Based on GPS and Magnetometer Data
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Harman, Rick; Bar-Itzhack, Itzhack
2001-01-01
The Global Positioning System (GPS) has become a standard method for low cost onboard satellite orbit determination. The use of a GPS receiver as an attitude and rate sensor has also been developed in the recent past. Additionally, focus has been given to attitude and orbit estimation using the magnetometer, a low cost, reliable sensor. Combining measurements from both GPS and a magnetometer can provide a robust navigation system that takes advantage of the estimation qualities of both measurements. Ultimately, a low cost, accurate navigation system can result, potentially eliminating the need for more costly sensors, including gyroscopes. This work presents the development of a technique to eliminate numerical differentiation of the GPS phase measurements and also compares the use of one versus two GPS satellites.
FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision
Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe
2011-01-01
Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammalians, which would demand huge computational resources and therefore are not usually available in real-time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms. PMID:22164069
FPGA-based multimodal embedded sensor system integrating low- and mid-level vision.
Botella, Guillermo; Martín H, José Antonio; Santos, Matilde; Meyer-Baese, Uwe
2011-01-01
Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammalians, which would demand huge computational resources and therefore are not usually available in real-time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms.
Visual Odometry for Autonomous Deep-Space Navigation Project
NASA Technical Reports Server (NTRS)
Robinson, Shane; Pedrotty, Sam
2016-01-01
Autonomous rendezvous and docking (AR&D) is a critical need for manned spaceflight, especially in deep space where communication delays essentially leave crews on their own for critical operations like docking. Previously developed AR&D sensors have been large, heavy, power-hungry, and may still require further development (e.g. Flash LiDAR). Other approaches to vision-based navigation are not computationally efficient enough to operate quickly on slower, flight-like computers. The key technical challenge for visual odometry is to adapt it from the current terrestrial applications it was designed for to function in the harsh lighting conditions of space. This effort leveraged Draper Laboratory’s considerable prior development and expertise, benefitting both parties. The algorithm Draper has created is unique from other pose estimation efforts as it has a comparatively small computational footprint (suitable for use onboard a spacecraft, unlike alternatives) and potentially offers accuracy and precision needed for docking. This presents a solution to the AR&D problem that only requires a camera, which is much smaller, lighter, and requires far less power than competing AR&D sensors. We have demonstrated the algorithm’s performance and ability to process ‘flight-like’ imagery formats with a ‘flight-like’ trajectory, positioning ourselves to easily process flight data from the upcoming ‘ISS Selfie’ activity and then compare the algorithm’s quantified performance to the simulated imagery. This will bring visual odometry beyond TRL 5, proving its readiness to be demonstrated as part of an integrated system. Once beyond TRL 5, visual odometry will be poised to be demonstrated as part of a system in an in-space demo where relative pose is critical, like Orion AR&D, ISS robotic operations, asteroid proximity operations, and more.
Visual Odometry for Autonomous Deep-Space Navigation Project
NASA Technical Reports Server (NTRS)
Robinson, Shane; Pedrotty, Sam
2016-01-01
Autonomous rendezvous and docking (AR&D) is a critical need for manned spaceflight, especially in deep space where communication delays essentially leave crews on their own for critical operations like docking. Previously developed AR&D sensors have been large, heavy, power-hungry, and may still require further development (e.g. Flash LiDAR). Other approaches to vision-based navigation are not computationally efficient enough to operate quickly on slower, flight-like computers. The key technical challenge for visual odometry is to adapt it from the current terrestrial applications it was designed for to function in the harsh lighting conditions of space. This effort leveraged Draper Laboratory's considerable prior development and expertise, benefitting both parties. The algorithm Draper has created is unique from other pose estimation efforts as it has a comparatively small computational footprint (suitable for use onboard a spacecraft, unlike alternatives) and potentially offers accuracy and precision needed for docking. This presents a solution to the AR&D problem that only requires a camera, which is much smaller, lighter, and requires far less power than competing AR&D sensors. We have demonstrated the algorithm's performance and ability to process 'flight-like' imagery formats with a 'flight-like' trajectory, positioning ourselves to easily process flight data from the upcoming 'ISS Selfie' activity and then compare the algorithm's quantified performance to the simulated imagery. This will bring visual odometry beyond TRL 5, proving its readiness to be demonstrated as part of an integrated system. Once beyond TRL 5, visual odometry will be poised to be demonstrated as part of a system in an in-space demo where relative pose is critical, like Orion AR&D, ISS robotic operations, asteroid proximity operations, and more.
Georeferencing in GNSS-Challenged Environment: Integrating UWB and IMU Technologies
NASA Astrophysics Data System (ADS)
Toth, C. K.; Koppanyi, Z.; Navratil, V.; Grejner-Brzezinska, D.
2017-05-01
Acquiring geospatial data in GNSS-compromised environments remains a problem in mapping and positioning in general. Urban canyons, heavily vegetated areas, and indoor environments represent different levels of GNSS signal availability, from weak to no signal reception. Even outdoors, with multiple GNSS systems and an ever-increasing number of satellites, there are many situations with limited or no access to GNSS signals. Independent navigation sensors, such as an IMU, can provide high-data-rate information, but their initial accuracy degrades quickly as the measurement data drift over time unless positioning fixes are provided from another source. At The Ohio State University's Satellite Positioning and Inertial Navigation (SPIN) Laboratory, as one feasible solution, Ultra-Wideband (UWB) radio units are used to aid positioning and navigation in GNSS-compromised environments, including indoor and outdoor scenarios. Here we report on experience gained with georeferencing a pushcart-based sensor system under canopied areas. The positioning system is based on UWB and IMU sensor integration, and provides sensor platform orientation for an electromagnetic induction (EMI) sensor. Performance evaluation results are provided for various test scenarios, confirming acceptable results for applications where high accuracy is not required.
NASA Astrophysics Data System (ADS)
Holasek, Rick; Nakanishi, Keith; Ziph-Schatzberg, Leah; Santman, Jeff; Woodman, Patrick; Zacaroli, Richard; Wiggins, Richard
2017-04-01
Hyperspectral imaging (HSI) has been used for over two decades in laboratory research, academic, environmental and defense applications. In more recent times, HSI has started to be adopted for commercial applications in machine vision, conservation, resource exploration, and precision agriculture, to name just a few of the economically viable uses for the technology. Corning Incorporated (Corning) has been developing and manufacturing HSI sensors, sensor systems, and sensor optical engines, as well as HSI sensor components such as gratings and slits, for over a decade and a half. This depth of experience and technological breadth has allowed Corning to design and develop unique HSI spectrometers with an unprecedented combination of high performance, low cost and low Size, Weight, and Power (SWaP). These sensors and sensor systems are offered with wavelength coverage ranges from the visible to the Long Wave Infrared (LWIR). The extremely low SWaP of Corning's HSI sensors and sensor systems enables their deployment using limited payload platforms such as small unmanned aerial vehicles (UAVs). This paper discusses use of the Corning patented monolithic design Offner spectrometer, the microHSI™, to build a highly compact 400-1000 nm HSI sensor in combination with a small Inertial Navigation System (INS) and micro-computer to make a complete turn-key airborne remote sensing payload. This Selectable Hyperspectral Airborne Remote sensing Kit (SHARK) has industry-leading SWaP (1.5 lbs) at a disruptively low price due, in large part, to Corning's ability to manufacture the monolithic spectrometer out of polymers (i.e. plastic) and therefore reduce manufacturing costs considerably. The other factor in lowering costs is Corning's well-established in-house manufacturing capability in optical components and sensors, which further enables cost-effective fabrication. The competitive SWaP and low cost of the microHSI™ sensor approach, and in some cases fall below, the price point of Multi Spectral Imaging (MSI) sensors. Specific designs of the Corning microHSI™ SHARK visNIR turn-key system are presented along with salient performance characteristics. Initial focus market areas include precision agriculture; historic and recent microHSI™ SHARK prototype test results are presented.
Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip
2015-07-01
Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. There is an apparent need for correcting for a scene deviation from the basic inverse distance-squared law governing the detection rates even when evaluating system calibration algorithms. In particular, the computer vision system enables a map of distance-dependence of the sources being tracked, to which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data.
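A hedged sketch of how a vision-supplied distance track can be combined with the inverse distance-squared detection law: the source term and background are recovered by least squares from count rates and distances. The scenario, numbers, and variable names are illustrative assumptions, not taken from the report.

```python
import numpy as np

# Illustrative sketch (assumed, not the report's algorithm): with object distance d(t)
# supplied by the vision tracker, the radiological count rate is modeled as S/d^2 plus a
# background term, and the two unknowns are recovered by linear least squares.
rng = np.random.default_rng(1)
d = np.linspace(8.0, 2.0, 60)                      # distance from vision tracking (m)
S_true, bkg_true = 400.0, 5.0
counts = rng.poisson(S_true / d**2 + bkg_true)     # detector counts per time bin

A = np.column_stack([1.0 / d**2, np.ones_like(d)]) # design matrix for [S, background]
(S_est, bkg_est), *_ = np.linalg.lstsq(A, counts, rcond=None)
print(f"estimated source term S ~ {S_est:.1f}, background ~ {bkg_est:.1f}")
```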
A stereo-vision hazard-detection algorithm to increase planetary lander autonomy
NASA Astrophysics Data System (ADS)
Woicke, Svenja; Mooij, Erwin
2016-05-01
For future landings on any celestial body, increasing the lander autonomy as well as decreasing risk are primary objectives. Both risk reduction and an increase in autonomy can be achieved by including hazard detection and avoidance in the guidance, navigation, and control loop. One of the main challenges in hazard detection and avoidance is the reconstruction of accurate elevation models, as well as slope and roughness maps. Multiple methods for acquiring the inputs for hazard maps are available. The main distinction can be made between active and passive methods. Passive methods (cameras) have budgetary advantages compared to active sensors (radar, light detection and ranging). However, it is necessary to prove that these methods deliver sufficiently good maps. Therefore, this paper discusses hazard detection using stereo vision. To facilitate a successful landing, no more than 1% wrong detections (hazards that are not identified) are allowed. Based on a sensitivity analysis, it was found that using a stereo set-up with a baseline of ≤ 2 m is feasible at altitudes of ≤ 200 m while keeping false positives below 1%. It was thus shown that stereo-based hazard detection is an effective means to decrease the landing risk and increase the lander autonomy. In conclusion, the proposed algorithm is a promising candidate for future landers.
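The baseline/altitude trade-off mentioned above follows from the usual stereo relation Z = fB/d. The short sketch below (illustrative focal length and pixel geometry, not the paper's camera model) shows how the depth error caused by a one-pixel disparity error grows with altitude and shrinks with baseline.

```python
# Illustrative stereo-geometry sketch: depth Z = f*B/disparity, so the depth error caused
# by a one-pixel disparity error grows roughly as Z^2 / (f*B). The focal length in pixels
# is an assumed value, not taken from the paper.
def depth_error_per_pixel(altitude_m, baseline_m, focal_px=1500.0):
    disparity_px = focal_px * baseline_m / altitude_m
    # finite difference of Z = f*B/d for a one-pixel disparity error
    return focal_px * baseline_m / (disparity_px - 1.0) - altitude_m

for alt in (50.0, 100.0, 200.0):
    for base in (0.5, 1.0, 2.0):
        print(f"alt {alt:5.0f} m, baseline {base:3.1f} m -> "
              f"~{depth_error_per_pixel(alt, base):6.2f} m per pixel of disparity error")
```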
Constrained optimal multi-phase lunar landing trajectory with minimum fuel consumption
NASA Astrophysics Data System (ADS)
Mathavaraj, S.; Pandiyan, R.; Padhi, R.
2017-12-01
A Legendre pseudospectral, multi-phase, constrained fuel-optimal trajectory design approach is presented in this paper. The objective here is to find an optimal approach to successfully guide a lunar lander from the perilune (18 km altitude) of a transfer orbit to a height of 100 m over a specific landing site. After attaining 100 m altitude, there is a mission-critical re-targeting phase, which has a very different objective (but is not critical for fuel optimization) and hence is not considered in this paper. The proposed approach takes into account various mission constraints in different phases from perilune to the landing site. These phases are phase-1 ('braking with rough navigation') from 18 km altitude to 7 km altitude, where navigation accuracy is poor; phase-2 ('attitude hold'), which holds the lander attitude for 35 s for vision camera processing to obtain the navigation error; and phase-3 ('braking with precise navigation') from the end of phase-2 to 100 m altitude over the landing site, where navigation accuracy is good (due to vision camera navigation inputs). At the end of phase-1, there are constraints on position and attitude. In phase-2, the attitude must be held throughout. At the end of phase-3, the constraints include accuracy in position, velocity as well as attitude orientation. The proposed optimal trajectory technique satisfies the mission constraints in each phase and provides an overall fuel-minimizing guidance command history.
LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval
NASA Astrophysics Data System (ADS)
Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan
2013-01-01
As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
Flight Testing ALHAT Precision Landing Technologies Integrated Onboard the Morpheus Rocket Vehicle
NASA Technical Reports Server (NTRS)
Carson, John M. III; Robertson, Edward A.; Trawny, Nikolas; Amzajerdian, Farzin
2015-01-01
A suite of prototype sensors, software, and avionics developed within the NASA Autonomous precision Landing and Hazard Avoidance Technology (ALHAT) project were terrestrially demonstrated onboard the NASA Morpheus rocket-propelled Vertical Testbed (VTB) in 2014. The sensors included a LIDAR-based Hazard Detection System (HDS), a Navigation Doppler LIDAR (NDL) velocimeter, and a long-range Laser Altimeter (LAlt) that enable autonomous and safe precision landing of robotic or human vehicles on solid solar system bodies under varying terrain lighting conditions. The flight test campaign with the Morpheus vehicle involved a detailed integration and functional verification process, followed by tether testing and six successful free flights, including one night flight. The ALHAT sensor measurements were integrated into a common navigation solution through a specialized ALHAT Navigation filter that was employed in closed-loop flight testing within the Morpheus Guidance, Navigation and Control (GN&C) subsystem. Flight testing on Morpheus utilized ALHAT for safe landing site identification and ranking, followed by precise surface-relative navigation to the selected landing site. The successful autonomous, closed-loop flight demonstrations of the prototype ALHAT system have laid the foundation for the infusion of safe, precision landing capabilities into future planetary exploration missions.
Indoor Navigation using Direction Sensor and Beacons
NASA Technical Reports Server (NTRS)
Shields, Joel; Jeganathan, Muthu
2004-01-01
A system for indoor navigation of a mobile robot includes (1) modulated infrared beacons at known positions on the walls and ceiling of a room and (2) a cameralike sensor, comprising a wide-angle lens with a position-sensitive photodetector at the focal plane, mounted in a known position and orientation on the robot. The system also includes a computer running special-purpose software that processes the sensor readings to obtain the position and orientation of the robot in all six degrees of freedom in a coordinate system embedded in the room.
Relative Navigation of Formation Flying Satellites
NASA Technical Reports Server (NTRS)
Long, Anne; Kelbel, David; Lee, Taesul; Leung, Dominic; Carpenter, Russell; Gramling, Cheryl; Bauer, Frank (Technical Monitor)
2002-01-01
The Guidance, Navigation, and Control Center (GNCC) at Goddard Space Flight Center (GSFC) has successfully developed high-accuracy autonomous satellite navigation systems using the National Aeronautics and Space Administration's (NASA's) space and ground communications systems and the Global Positioning System (GPS). In addition, an autonomous navigation system that uses celestial object sensor measurements is currently under development and has been successfully tested using real Sun and Earth horizon measurements. The GNCC has developed advanced spacecraft systems that provide autonomous navigation and control of formation flyers in near-Earth, high-Earth, and libration point orbits. To support this effort, the GNCC is assessing the relative navigation accuracy achievable for proposed formations using GPS, intersatellite crosslink, ground-to-satellite Doppler, and celestial object sensor measurements. This paper evaluates the performance of these relative navigation approaches for three proposed missions with two or more vehicles maintaining relatively tight formations. High-fidelity simulations were performed to quantify the absolute and relative navigation accuracy as a function of navigation algorithm and measurement type. Realistically-simulated measurements were processed using the extended Kalman filter implemented in the GPS Enhanced Onboard Navigation System (GEONS) flight software developed by GSFC GNCC. Solutions obtained by simultaneously estimating all satellites in the formation were compared with the results obtained using a simpler approach based on differencing independently estimated state vectors.
GPS compound eye attitude and navigation sensor and method
NASA Technical Reports Server (NTRS)
Quinn, David A. (Inventor)
2003-01-01
The present invention is a GPS system for navigation and attitude determination, comprising a sensor array including a convex hemispherical mounting structure having a plurality of mounting surfaces, and a plurality of antennas mounted to the mounting surfaces for receiving signals from space vehicles of a GPS constellation. The present invention also includes a receiver for collecting the signals and making navigation and attitude determinations. In an alternate embodiment the present invention may include two opposing convex hemispherical mounting structures, each of the mounting structures having a plurality of mounting surfaces, and a plurality of antennas mounted to the mounting surfaces.
Insect-Based Vision for Autonomous Vehicles: A Feasibility Study
NASA Technical Reports Server (NTRS)
Srinivasan, Mandyam V.
1999-01-01
The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.
Insect-Based Vision for Autonomous Vehicles: A Feasibility Study
NASA Technical Reports Server (NTRS)
Srinivasan, Mandyam V.
1999-01-01
The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.
Automatic rule generation for high-level vision
NASA Technical Reports Server (NTRS)
Rhee, Frank Chung-Hoon; Krishnapuram, Raghu
1992-01-01
Many high-level vision systems use rule-based approaches to solving problems such as autonomous navigation and image understanding. The rules are usually elaborated by experts. However, this procedure may be rather tedious. In this paper, we propose a method to generate such rules automatically from training data. The proposed method is also capable of filtering out irrelevant features and criteria from the rules.
Three spectrally distinct photoreceptors in diurnal and nocturnal Australian ants.
Ogawa, Yuri; Falkowski, Marcin; Narendra, Ajay; Zeil, Jochen; Hemmi, Jan M
2015-06-07
Ants are thought to be special among Hymenopterans in having only dichromatic colour vision based on two spectrally distinct photoreceptors. Many ants are highly visual animals, however, and use vision extensively for navigation. We show here that two congeneric day- and night-active Australian ants have three spectrally distinct photoreceptor types, potentially supporting trichromatic colour vision. Electroretinogram recordings show the presence of three spectral sensitivities with peaks (λmax) at 370, 450 and 550 nm in the night-active Myrmecia vindex and peaks at 370, 470 and 510 nm in the day-active Myrmecia croslandi. Intracellular electrophysiology on individual photoreceptors confirmed that the night-active M. vindex has three spectral sensitivities with peaks (λmax) at 370, 430 and 550 nm. A large number of the intracellular recordings in the night-active M. vindex show unusually broad-band spectral sensitivities, suggesting that photoreceptors may be coupled. Spectral measurements at different temporal frequencies revealed that the ultraviolet receptors are comparatively slow. We discuss the adaptive significance and the probability of trichromacy in Myrmecia ants in the context of dim light vision and visual navigation. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
An egocentric vision based assistive co-robot.
Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang
2013-06-01
We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user is wearing a pair of glasses with a forward looking camera, and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve for two purposes. First, it serves as a source of visual input to request the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who do not have needed hand functionality for interaction and control with other modalities (e.g., joystick). In our co-robot system, when the robot does not fulfill the object finding task in a pre-specified time window, it would actively solicit user controls for guidance. Then the users can use the egocentric vision based gesture interface to orient the robot towards the direction of the object. After that the robot will automatically navigate towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design to engage the human in the loop.
Hohtola, Esa
2016-01-01
Birds utilize several distinct sensory systems in a flexible manner in their navigation. When navigating with the help of landmarks, the location of the sun and stars, or the polarization pattern of the dome of the sky, they resort to vision. The significance of olfaction in long-range navigation has been under debate, even though its significance in local orientation is well documented. Hearing in birds extends to the infrasound region, and it has been assumed that they are able to hear the infrasounds generated in the mountains and at the seaside and navigate by using them. Of the senses of birds, the most exotic is the ability to sense the magnetic field of the Earth.
Unstructured Facility Navigation by Applying the NIST 4D/RCS Architecture
2006-07-01
Fragment of the report's system description: the platform integrates wireless data and emergency-stop radios, a GPS receiver, an inertial navigation unit, dual stereo color cameras, and infrared sensors, with wheel motors and camera controls as actuators; the sensors used in the sensory processing module include the two pairs of stereo color cameras and the physical bumper and infrared bumper sensors.
Luo, Xiongbiao; Jayarathne, Uditha L; McLeod, A Jonathan; Mori, Kensaku
2014-01-01
Endoscopic navigation generally integrates different modalities of sensory information in order to continuously locate an endoscope relative to suspicious tissues in the body during interventions. Current electromagnetic tracking techniques for endoscopic navigation have limited accuracy due to tissue deformation and magnetic field distortion. To avoid these limitations and improve the endoscopic localization accuracy, this paper proposes a new endoscopic navigation framework that uses an optical mouse sensor to measure the endoscope movements along its viewing direction. We then enhance the differential evolution algorithm by modifying its mutation operation. Based on the enhanced differential evolution method, these movement measurements and image structural patches in endoscopic videos are fused to accurately determine the endoscope position. An evaluation on a dynamic phantom demonstrated that our method provides a more accurate navigation framework. Compared to state-of-the-art methods, it improved the navigation accuracy from 2.4 to 1.6 mm and reduced the processing time from 2.8 to 0.9 seconds.
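For orientation, a generic differential-evolution loop of the kind the paper builds on is sketched below; the authors' modified mutation operation and image-patch similarity cost are not reproduced, and the stand-in cost function, bounds, and pose parameters are assumptions.

```python
import numpy as np

# Generic differential evolution (DE/rand/1/bin) sketch. The paper's enhanced mutation and
# its endoscope image-patch cost are not reproduced; the quadratic cost below is a stand-in.
def differential_evolution(cost, bounds, pop_size=30, F=0.7, CR=0.9, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fit = np.array([cost(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)          # rand/1 mutation
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True                # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost(trial)
            if f_trial < fit[i]:                               # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], fit.min()

target_pose = np.array([12.0, -3.0, 45.0])                     # illustrative (x, y, yaw)
cost = lambda p: np.sum((p - target_pose) ** 2)                # stand-in similarity cost
bounds = np.array([[-50, 50], [-50, 50], [0, 360]], float)
best, best_cost = differential_evolution(cost, bounds)
print(best, best_cost)
```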
DOE Office of Scientific and Technical Information (OSTI.GOV)
EISLER, G. RICHARD
This report summarizes the analytical and experimental efforts for the Laboratory Directed Research and Development (LDRD) project entitled ''Robust Planning for Autonomous Navigation of Mobile Robots In Unstructured, Dynamic Environments (AutoNav)''. The project goal was to develop an algorithmic-driven, multi-spectral approach to point-to-point navigation characterized by: segmented on-board trajectory planning, self-contained operation without human support for mission duration, and the development of appropriate sensors and algorithms to navigate unattended. The project was partially successful in achieving gains in sensing, path planning, navigation, and guidance. One of three experimental platforms, the Minimalist Autonomous Testbed, used a repetitive sense-and-re-plan combination to demonstrate the majority of elements necessary for autonomous navigation. However, a critical goal for overall success in arbitrary terrain, that of developing a sensor that is able to distinguish true obstacles that need to be avoided as a function of vehicle scale, still needs substantial research to bring to fruition.
Results from a GPS Shuttle Training Aircraft flight test
NASA Technical Reports Server (NTRS)
Saunders, Penny E.; Montez, Moises N.; Robel, Michael C.; Feuerstein, David N.; Aerni, Mike E.; Sangchat, S.; Rater, Lon M.; Cryan, Scott P.; Salazar, Lydia R.; Leach, Mark P.
1991-01-01
A series of Global Positioning System (GPS) flight tests were performed on a National Aeronautics and Space Administration's (NASA's) Shuttle Training Aircraft (STA). The objective of the tests was to evaluate the performance of GPS-based navigation during simulated Shuttle approach and landings for possible replacement of the current Shuttle landing navigation aid, the Microwave Scanning Beam Landing System (MSBLS). In particular, varying levels of sensor data integration would be evaluated to determine the minimum amount of integration required to meet the navigation accuracy requirements for a Shuttle landing. Four flight tests consisting of 8 to 9 simulation runs per flight test were performed at White Sands Space Harbor in April 1991. Three different GPS receivers were tested. The STA inertial navigation, tactical air navigation, and MSBLS sensor data were also recorded during each run. C-band radar aided laser trackers were utilized to provide the STA 'truth' trajectory.
High-efficient Unmanned Aircraft System Operations for Ecosystem Assessment
NASA Astrophysics Data System (ADS)
Xu, H.; Zhang, H.
2016-02-01
Diverse national and international agencies support the idea that incorporating Unmanned Aircraft Systems (UAS) into ecosystem assessment will improve operational efficiency and accuracy. In this paper, a UAS will be designed to monitor the Gulf of Mexico's coastal area ecosystems intelligently and routinely. UAS onboard sensors will capture information that can be utilized to detect and geo-locate areas affected by invasive grasses. Moreover, the actual state of the ecosystem will be better assessed by analyzing the collected information. Compared with human-based/satellite-based surveillance, the proposed strategy is more efficient and accurate, and eliminates limitations and risks associated with human factors. State-of-the-art UAS onboard sensors (e.g. high-resolution electro-optical camera, night vision camera, thermal sensor, etc.) will be used for monitoring coastal ecosystems. Once a potential risk to the ecosystem is detected, the onboard GPS data will be used to geo-locate and store the exact coordinates of the affected area. Moreover, the UAS sensors will be used to observe and record the daily evolution of coastal ecosystems. Further, benefitting from the data collected by the UAS, an intelligent big data processing scheme will be created to assess the ecosystem evolution effectively. Meanwhile, a cost-efficient intelligent autonomous navigation strategy will be implemented on the UAS in order to guarantee that the UAS can fly over designated areas and collect significant data in a safe and effective way. Furthermore, the proposed UAS-based ecosystem surveillance and assessment methodologies can be utilized for natural resources conservation. A UAS flying with multiple state-of-the-art sensors will monitor and report the actual state of high-importance natural resources frequently. Using the collected data, the ecosystem conservation strategy can be performed effectively and intelligently.
Space Shuttle Navigation in the GPS Era
NASA Technical Reports Server (NTRS)
Goodman, John L.
2001-01-01
The Space Shuttle navigation architecture was originally designed in the 1970s. A variety of on-board and ground-based navigation sensors and computers are used during the ascent, orbit coast, rendezvous (including proximity operations and docking), and entry flight phases. With the advent of GPS navigation and tightly coupled GPS/INS units employing strapdown sensors, opportunities to improve and streamline the Shuttle navigation process are being pursued. These improvements can potentially result in increased safety, reliability, and cost savings in maintenance through the replacement of older technologies and elimination of ground support systems (such as Tactical Air Control and Navigation (TACAN), the Microwave Landing System (MLS) and ground radar). Selection and missionization of "off the shelf" GPS and GPS/INS units pose a unique challenge since the units in question were not originally designed for the Space Shuttle application. Various options for integrating GPS and GPS/INS units with the existing orbiter avionics system were considered in light of budget constraints, software quality concerns, and schedule limitations. An overview of Shuttle navigation methodology from 1981 to the present is given, along with how GPS and GPS/INS technology will change, or not change, the way Space Shuttle navigation is performed in the 21st century.
Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System
García-Garrido, Miguel A.; Ocaña, Manuel; Llorca, David F.; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel
2012-01-01
This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method from the information extracted in contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents plenty of tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms that allows real-time performance. PMID:22438704
Complete vision-based traffic sign recognition supported by an I2V communication system.
García-Garrido, Miguel A; Ocaña, Manuel; Llorca, David F; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel
2012-01-01
This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method from the information extracted in contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents plenty of tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms that allows real-time performance.
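As a hedged illustration of the recognition stage only (the restricted Hough detection and the real descriptors are not reproduced), candidate sign regions can be reduced to feature vectors and classified with a support vector machine; the synthetic features and class count below are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative sketch of SVM-based classification of candidate sign regions. The synthetic
# "features" stand in for whatever descriptor is extracted from the contour image; the
# five classes and noise level are assumed values.
rng = np.random.default_rng(3)
classes, n_per = 5, 80                                      # e.g. five sign types
centres = rng.uniform(-1, 1, size=(classes, 16))            # one prototype per sign class
X = np.vstack([c + 0.15 * rng.standard_normal((n_per, 16)) for c in centres])
y = np.repeat(np.arange(classes), n_per)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
probe = centres[2] + 0.15 * rng.standard_normal(16)         # a new candidate region
print("predicted sign class:", clf.predict(probe.reshape(1, -1))[0])
```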
Square tracking sensor for autonomous helicopter hover stabilization
NASA Astrophysics Data System (ADS)
Oertel, Carl-Henrik
1995-06-01
Sensors for synthetic vision are needed to extend the mission profiles of helicopters. A special task for various applications is the autonomous position hold of a helicopter above a ground-fixed or moving target. As a proof of concept for a general synthetic vision solution, a restricted machine vision system, which is capable of locating and tracking a special target, was developed by the Institute of Flight Mechanics of Deutsche Forschungsanstalt fur Luft- und Raumfahrt e.V. (i.e., the German Aerospace Research Establishment). This sensor, which is specialized to detect and track a square, was integrated into the fly-by-wire helicopter ATTHeS (i.e., Advanced Technology Testing Helicopter System). An existing model-following controller for the forward flight condition was adapted to the hover and low-speed requirements of the flight vehicle. The special target, a black square with a side length of one meter, was mounted on top of a car. Flight tests demonstrated the automatic stabilization of the helicopter above the moving car by synthetic vision.
Near real-time, on-the-move software PED using VPEF
NASA Astrophysics Data System (ADS)
Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane
2015-05-01
The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised device/mine detection using efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are often developed using proprietary, incompatible software. This makes the insertion of new algorithms difficult due to the lack of standardized processing chains. In order to overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to be able to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.
NASA Technical Reports Server (NTRS)
Foyle, David C.
1993-01-01
Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed framework for evaluating the operator's ability to use such systems is a normative approach: the pilot's performance with the sensor fusion image is compared to the models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows for a determination as to whether a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or super-optimal performance, which may occur if the operator is able to use some highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays.
NASA Astrophysics Data System (ADS)
Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo
An automatic welding system using Tungsten Inert Gas (TIG) welding with a vision sensor for welding of aluminum pipe was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in a fixed position with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.
Navigation in Difficult Environments: Multi-Sensor Fusion Techniques
2010-03-01
Fragment of the report: inertial errors are estimated with a Kalman filter driven by the incoming signal and the INS navigation outputs, and estimation of the INS drift terms is performed using the mechanism of a complementary Kalman filter.
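A minimal error-state ("complementary") Kalman filter of the kind referred to above can be sketched in one dimension: the filter estimates the INS position error and drift rate from INS-minus-GPS differences and feeds the estimate back. The noise levels and scenario are illustrative, not taken from the report.

```python
import numpy as np

# Error-state ("complementary") Kalman filter sketch: the filter operates on the difference
# between the INS solution and noisy GPS fixes, estimates [position error, drift rate], and
# the estimate is fed back to correct the INS output. All values are illustrative.
rng = np.random.default_rng(2)
dt, n = 1.0, 120
truth = 5.0 * np.arange(n) * dt                       # true 1-D position
ins = truth + np.cumsum(0.05 * np.ones(n)) * dt       # INS solution with a slow drift
gps = truth + rng.normal(0.0, 3.0, n)                 # noisy GPS fixes

F = np.array([[1.0, dt], [0.0, 1.0]])                 # error-state transition
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-6]); R = np.array([[9.0]])
x = np.zeros(2); P = np.eye(2)

corrected = np.empty(n)
for k in range(n):
    x = F @ x; P = F @ P @ F.T + Q                    # propagate error state
    z = np.array([ins[k] - gps[k]])                   # complementary measurement
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x); P = (np.eye(2) - K @ H) @ P
    corrected[k] = ins[k] - x[0]                      # feed estimated error back

print("raw INS final error  :", abs(ins[-1] - truth[-1]))
print("corrected final error:", abs(corrected[-1] - truth[-1]))
```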
Design and Calibration of a Novel Bio-Inspired Pixelated Polarized Light Compass.
Han, Guoliang; Hu, Xiaoping; Lian, Junxiang; He, Xiaofeng; Zhang, Lilian; Wang, Yujie; Dong, Fengliang
2017-11-14
Animals, such as Savannah sparrows and North American monarch butterflies, are able to obtain compass information from skylight polarization patterns to help them navigate effectively and robustly. Inspired by excellent navigation ability of animals, this paper proposes a novel image-based polarized light compass, which has the advantages of having a small size and being light weight. Firstly, the polarized light compass, which is composed of a Charge Coupled Device (CCD) camera, a pixelated polarizer array and a wide-angle lens, is introduced. Secondly, the measurement method of a skylight polarization pattern and the orientation method based on a single scattering Rayleigh model are presented. Thirdly, the error model of the sensor, mainly including the response error of CCD pixels and the installation error of the pixelated polarizer, is established. A calibration method based on iterative least squares estimation is proposed. In the outdoor environment, the skylight polarization pattern can be measured in real time by our sensor. The orientation accuracy of the sensor increases with the decrease of the solar elevation angle, and the standard deviation of orientation error is 0.15∘ at sunset. Results of outdoor experiments show that the proposed polarization navigation sensor can be used for outdoor autonomous navigation.
Design and Calibration of a Novel Bio-Inspired Pixelated Polarized Light Compass
Hu, Xiaoping; Lian, Junxiang; He, Xiaofeng; Zhang, Lilian; Wang, Yujie; Dong, Fengliang
2017-01-01
Animals, such as Savannah sparrows and North American monarch butterflies, are able to obtain compass information from skylight polarization patterns to help them navigate effectively and robustly. Inspired by excellent navigation ability of animals, this paper proposes a novel image-based polarized light compass, which has the advantages of having a small size and being light weight. Firstly, the polarized light compass, which is composed of a Charge Coupled Device (CCD) camera, a pixelated polarizer array and a wide-angle lens, is introduced. Secondly, the measurement method of a skylight polarization pattern and the orientation method based on a single scattering Rayleigh model are presented. Thirdly, the error model of the sensor, mainly including the response error of CCD pixels and the installation error of the pixelated polarizer, is established. A calibration method based on iterative least squares estimation is proposed. In the outdoor environment, the skylight polarization pattern can be measured in real time by our sensor. The orientation accuracy of the sensor increases with the decrease of the solar elevation angle, and the standard deviation of orientation error is 0.15∘ at sunset. Results of outdoor experiments show that the proposed polarization navigation sensor can be used for outdoor autonomous navigation. PMID:29135927
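A hedged sketch of the per-super-pixel computation such a pixelated compass relies on: Stokes parameters, degree of linear polarization (DoLP), and angle of polarization (AoP) from four polarizer orientations; under the single-scattering Rayleigh model the resulting sky-wide AoP pattern is symmetric about the solar meridian, to which the heading estimate is referenced. The synthetic intensities below are illustrative, not measured data.

```python
import numpy as np

# Stokes parameters from a super-pixel with polarizers at 0/45/90/135 degrees, and the
# derived DoLP and AoP. The single-scattering Rayleigh model then relates the AoP field
# over the sky to the solar meridian (not reproduced here).
def stokes_from_superpixel(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.hypot(s1, s2) / s0
    aop = 0.5 * np.arctan2(s2, s1)          # E-vector angle, radians
    return dolp, aop

# Synthetic super-pixel: true AoP 30 deg, DoLP 0.6, unit intensity.
psi, d = np.deg2rad(30.0), 0.6
I = lambda phi: 0.5 * (1.0 + d * np.cos(2.0 * (phi - psi)))
dolp, aop = stokes_from_superpixel(I(0), I(np.pi/4), I(np.pi/2), I(3*np.pi/4))
print("DoLP ~", round(dolp, 3), " AoP ~", round(np.rad2deg(aop), 2), "deg")
```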
2018-01-01
Advanced driver assistance systems, ADAS, have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set up two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensors for advanced motorcycle safety applications. PMID:29351267
Schwein, Adeline; Kramer, Ben; Chinnadurai, Ponraj; Walker, Sean; O'Malley, Marcia; Lumsden, Alan; Bismuth, Jean
2017-02-01
One limitation of the use of robotic catheters is the lack of real-time three-dimensional (3D) localization and position updating: they are still navigated based on two-dimensional (2D) X-ray fluoroscopic projection images. Our goal was to evaluate whether incorporating an electromagnetic (EM) sensor on a robotic catheter tip could improve endovascular navigation. Six users were tasked to navigate using a robotic catheter with incorporated EM sensors in an aortic aneurysm phantom. All users cannulated two anatomic targets (left renal artery and posterior "gate") using four visualization modes: (1) standard fluoroscopy mode (control), (2) 2D fluoroscopy mode showing real-time virtual catheter orientation from EM tracking, (3) 3D model of the phantom with anteroposterior and endoluminal view, and (4) 3D model with anteroposterior and lateral view. Standard X-ray fluoroscopy was always available. Cannulation and fluoroscopy times were noted for every mode. 3D positions of the EM tip sensor were recorded at 4 Hz to establish kinematic metrics. The EM sensor-incorporated catheter navigated as expected according to all users. The success rate for cannulation was 100%. For the posterior gate target, mean cannulation times in minutes:seconds were 8:12, 4:19, 4:29, and 3:09, respectively, for modes 1, 2, 3 and 4 (P = .013), and mean fluoroscopy times were 274, 20, 29, and 2 seconds, respectively (P = .001). 3D path lengths, spectral arc length, root mean dimensionless jerk, and number of submovements were significantly improved when EM tracking was used (P < .05), showing higher quality of catheter movement with EM navigation. The EM tracked robotic catheter allowed better real-time 3D orientation, facilitating navigation, with a reduction in cannulation and fluoroscopy times and improvement of motion consistency and efficiency. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Multidisciplinary unmanned technology teammate (MUTT)
NASA Astrophysics Data System (ADS)
Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark
2013-01-01
The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated that only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, including clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator, who moved from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close to natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant of and relevant to real-world applications.
Gyroscope-reduced inertial navigation system for flight vehicle motion estimation
NASA Astrophysics Data System (ADS)
Wang, Xin; Xiao, Lu
2017-01-01
In this paper, a novel configuration of strategically distributed accelerometer sensors aided by one gyro to infer a flight vehicle's angular motion is presented. The MEMS accelerometer and gyro sensors are integrated to form a gyroscope-reduced inertial measurement unit (GR-IMU). The motivation for the gyro-aided accelerometer array is to have direct measurements of angular rates, which is an improvement over the traditional gyroscope-free inertial system that employs only direct measurements of specific force. Some technical issues regarding error calibration of the accelerometers and gyro in the GR-IMU are put forward. The GR-IMU based inertial navigation system can be used to find a complete attitude solution for flight vehicle motion estimation. Results of numerical simulation are given to illustrate the effectiveness of the proposed configuration. The gyroscope-reduced inertial navigation system based on distributed accelerometer sensors can be developed into a cost-effective solution for a fast-reaction, MEMS-based motion capture system. Future work will include aiding from external navigation references (e.g. GPS) to improve long-duration mission performance.
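The core idea, recovering angular motion from spatially separated accelerometers, can be sketched with rigid-body kinematics: each accelerometer at lever arm r reads a_c + α×r + ω×(ω×r), which is linear in the linear acceleration a_c and the angular acceleration α once the angular rate ω is supplied (for example, from the single gyro). The least-squares sketch below, with a hypothetical four-accelerometer layout, illustrates this relation; it is an assumption-laden toy, not the paper's GR-IMU design.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(a) @ b == np.cross(a, b)."""
    x, y, z = v
    return np.array([[0.0, -z,  y],
                     [  z, 0.0, -x],
                     [ -y,  x, 0.0]])

def solve_body_motion(readings, positions, omega):
    """Least-squares estimate of body-frame linear and angular acceleration.

    Rigid-body kinematics: a_i = a_c + alpha x r_i + omega x (omega x r_i),
    which is linear in [a_c, alpha] once omega is known (e.g. from the gyro).
    readings, positions: (N, 3) accelerometer outputs and lever arms.
    """
    rows, rhs = [], []
    for a_i, r_i in zip(readings, positions):
        centripetal = np.cross(omega, np.cross(omega, r_i))
        rows.append(np.hstack([np.eye(3), -skew(r_i)]))   # alpha x r = -skew(r) @ alpha
        rhs.append(a_i - centripetal)
    A = np.vstack(rows)
    b = np.hstack(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]          # a_c, alpha

# Hypothetical 4-accelerometer layout on a 0.1 m cube corner
positions = np.array([[0.05, 0, 0], [0, 0.05, 0], [0, 0, 0.05], [0.05, 0.05, 0]])
omega = np.array([0.0, 0.0, 0.2])                 # rad/s, from the single gyro
alpha_true = np.array([0.0, 0.1, 0.0])            # rad/s^2
a_c_true = np.array([0.0, 0.0, -9.81])
readings = np.array([a_c_true + np.cross(alpha_true, r) + np.cross(omega, np.cross(omega, r))
                     for r in positions])
print(solve_body_motion(readings, positions, omega))
```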
Sensor image prediction techniques
NASA Astrophysics Data System (ADS)
Stenger, A. J.; Stone, W. R.; Berry, L.; Murray, T. J.
1981-02-01
The preparation of prediction imagery is a complex, costly, and time consuming process. Image prediction systems which produce a detailed replica of the image area require the extensive Defense Mapping Agency data base. The purpose of this study was to analyze the use of image predictions in order to determine whether a reduced set of more compact image features contains enough information to produce acceptable navigator performance. A job analysis of the navigator's mission tasks was performed. It showed that the cognitive and perceptual tasks he performs during navigation are identical to those performed for the targeting mission function. In addition, the results of the analysis of his performance when using a particular sensor can be extended to the analysis of his mission tasks using any sensor. An experimental approach was used to determine the relationship between navigator performance and the type and amount of information in the prediction image. A number of subjects were given image predictions containing varying levels of scene detail and different image features, and then asked to identify the predicted targets in corresponding dynamic flight sequences over scenes of cultural, terrain, and mixed (both cultural and terrain) content.
Autonomous vision networking: miniature wireless sensor networks with imaging technology
NASA Astrophysics Data System (ADS)
Messinger, Gioia; Goldberg, Giora
2006-09-01
The recent emergence of integrated PicoRadio technology, the rise of low power, low cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), created a unique opportunity to achieve the goal of deploying large-scale, low cost, intelligent, ultra-low power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low power vision networking has been proven and its applications are countless: from security and chemical analysis to industrial monitoring, asset tracking and visual recognition, vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous - specifically PicoRadios, CMOS imagers, imaging DSP, networking and overall wireless sensor network (WSN) system concepts. The paradigm shift, from large, centralized and expensive sensor platforms to small, low cost, distributed sensor networks, is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before these are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes. These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor. Image processing at the sensor node level may also be required for applications in security, asset management and process control. Due to the data bandwidth requirements posed on the network by video sensors, new networking protocols or video extensions to existing standards (e.g. Zigbee) are required. To this end, Avaak has designed and implemented an ultra-low power networking protocol designed to carry large volumes of data through the network. The low power wireless sensor nodes that will be discussed include a chemical sensor integrated with a CMOS digital camera, a controller, a DSP processor and a radio communication transceiver, which enables relaying of an alarm or image message to a central station. In addition to the communications, identification is very desirable; hence location awareness will later be incorporated into the system in the form of Time-Of-Arrival triangulation, via wide band signaling. While the wireless imaging kernel already exists, specific applications for surveillance and chemical detection are under development by Avaak, as part of a co-funded program from ONR and DARPA. Avaak is also designing vision networks for commercial applications - some of which are undergoing initial field tests.
Autonomous Navigation Using Celestial Objects
NASA Technical Reports Server (NTRS)
Folta, David; Gramling, Cheryl; Leung, Dominic; Belur, Sheela; Long, Anne
1999-01-01
In the twenty-first century, National Aeronautics and Space Administration (NASA) Enterprises envision frequent low-cost missions to explore the solar system, observe the universe, and study our planet. Satellite autonomy is a key technology required to reduce satellite operating costs. The Guidance, Navigation, and Control Center (GNCC) at the Goddard Space Flight Center (GSFC) currently sponsors several initiatives associated with the development of advanced spacecraft systems to provide autonomous navigation and control. Autonomous navigation has the potential both to increase spacecraft navigation system performance and to reduce total mission cost. By eliminating the need for routine ground-based orbit determination and special tracking services, autonomous navigation can streamline spacecraft ground systems. Autonomous navigation products can be included in the science telemetry and forwarded directly to the scientific investigators. In addition, autonomous navigation products are available onboard to enable other autonomous capabilities, such as attitude control, maneuver planning and orbit control, and communications signal acquisition. Autonomous navigation is required to support advanced mission concepts such as satellite formation flying. GNCC has successfully developed high-accuracy autonomous navigation systems for near-Earth spacecraft using NASA's space and ground communications systems and the Global Positioning System (GPS). Recently, GNCC has expanded its autonomous navigation initiative to include satellite orbits that are beyond the regime in which use of GPS is possible. Currently, GNCC is assessing the feasibility of using standard spacecraft attitude sensors and communication components to provide autonomous navigation for missions including: libration point, gravity assist, high-Earth, and interplanetary orbits. The concept being evaluated uses a combination of star, Sun, and Earth sensor measurements along with forward-link Doppler measurements from the command link carrier to autonomously estimate the spacecraft's orbit and reference oscillator's frequency. To support autonomous attitude determination and control and maneuver planning and control, the orbit determination accuracy should be on the order of kilometers in position and centimeters per second in velocity. A less accurate solution (one hundred kilometers in position) could be used for acquisition purposes for command and science downloads. This paper provides performance results for both libration point orbiting and high Earth orbiting satellites as a function of sensor measurement accuracy, measurement types, measurement frequency, initial state errors, and dynamic modeling errors.
NASA Astrophysics Data System (ADS)
Zapf, Marc Patrick H.; Boon, Mei-Ying; Matteucci, Paul B.; Lovell, Nigel H.; Suaning, Gregg J.
2015-06-01
Objective. The prospective efficacy of a future peripheral retinal prosthesis complementing residual vision to raise mobility performance in non-end stage retinitis pigmentosa (RP) was evaluated using simulated prosthetic vision (SPV). Approach. Normally sighted volunteers were fitted with a wide-angle head-mounted display and carried out mobility tasks in photorealistic virtual pedestrian scenarios. Circumvention of low-lying obstacles, path following, and navigating around static and moving pedestrians were performed either with central simulated residual vision of 10° alone or enhanced by assistive SPV in the lower and lateral peripheral visual field (VF). Three layouts of assistive vision corresponding to hypothetical electrode array layouts were compared, emphasizing higher visual acuity, a wider visual angle, or eccentricity-dependent acuity across an intermediate angle. Movement speed, task time, distance walked and collisions with the environment were analysed as performance measures. Main results. Circumvention of low-lying obstacles was improved with all tested configurations of assistive SPV. Higher-acuity assistive vision allowed for greatest improvement in walking speeds—14% above that of plain residual vision, while only wide-angle and eccentricity-dependent vision significantly reduced the number of collisions—both by 21%. Navigating around pedestrians, there were significant reductions in collisions with static pedestrians by 33% and task time by 7.7% with the higher-acuity layout. Following a path, higher-acuity assistive vision increased walking speed by 9%, and decreased collisions with stationary cars by 18%. Significance. The ability of assistive peripheral prosthetic vision to improve mobility performance in persons with constricted VFs has been demonstrated. In a prospective peripheral visual prosthesis, electrode array designs need to be carefully tailored to the scope of tasks in which a device aims to assist. We posit that maximum benefit might come from application alongside existing visual aids, to further raise life quality of persons living through the prolonged early stages of RP.
Design and Development of a Mobile Sensor-Based Blind Assistance Wayfinding System
NASA Astrophysics Data System (ADS)
Barati, F.; Delavar, M. R.
2015-12-01
The blind and visually impaired people face a number of challenges in their daily life. One of the major challenges is finding their way, both indoors and outdoors. For this reason, independent routing and navigation, especially in urban areas, are important for the blind. Most of the blind undertake route finding and navigation with the help of a guide. In addition, other tools such as a cane, guide dog or electronic aids are used by the blind. However, in some cases these aids are not efficient enough for wayfinding around obstacles and areas that are dangerous for the blind. As a result, effective non-visual decision-support methods are needed to improve the quality of life of the blind through increased mobility and independence. In this study, we designed and implemented an outdoor mobile sensor-based wayfinding system for the blind. The objectives of this study are obstacle recognition to guide the blind and the design and implementation of a mobile sensor system for their wayfinding and navigation. In this study an ultrasonic sensor is used to detect obstacles and GPS is employed for positioning and navigation in the wayfinding. This type of ultrasonic sensor measures the interval between sending waves and receiving the echo signals, together with the speed of sound in the environment, to estimate the distance to the obstacles. The coordinates and characteristics of all the obstacles in the study area are already stored in a GIS database, and all of these obstacles are labeled on the map. The ultrasonic sensor designed and constructed in this study is able to detect obstacles at distances of 2 cm to 400 cm. The implementation, and interviews with a number of blind persons who used the sensor, verified that the designed mobile wayfinding sensor system was very satisfactory.
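The time-of-flight relation described above is easy to make concrete; the following sketch (assumed parameters, not the authors' firmware) converts an echo interval and air temperature into an obstacle distance.

```python
def ultrasonic_distance_cm(echo_time_s, air_temp_c=20.0):
    """Round-trip time of flight to obstacle distance.

    Speed of sound in air is approximately 331.3 + 0.606 * T (m/s) at
    temperature T in degrees Celsius; the echo travels out and back,
    so the one-way distance is half the time-of-flight times the speed.
    """
    speed_m_s = 331.3 + 0.606 * air_temp_c
    return 100.0 * speed_m_s * echo_time_s / 2.0

# A 5.8 ms echo at 25 C corresponds to roughly 1 m
print(round(ultrasonic_distance_cm(0.0058, 25.0), 1))
```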
Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen
2017-01-16
Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous and high-precision method, which has been widely used to improve the hitting accuracy and quick-reaction capability of near-Earth flight vehicles. The installation errors between the SINS and the star sensors have been one of the main factors that restrict the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on star vector observations is derived that takes the star sensor installation error into account. The star sensor installation error is then accurately estimated by Kalman filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained from the linearized observation equation, and the observability conditions are presented and validated: the number of star vectors should be greater than or equal to 2, and the number of attitude adjustments should also be greater than or equal to 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of the SINS are less than 7 arc-seconds. This analysis method and conclusion are useful in the ballistic trajectory design of near-Earth flight vehicles.
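The role of the star vector observations can be illustrated with a small-angle least-squares sketch: for a small misalignment vector d, a measured star vector differs from its prediction by approximately skew(prediction)·d, and a single star vector leaves the rotation about that vector unobservable, consistent with the requirement above of at least two star vectors. The snippet below is only an illustrative batch estimate under these assumptions, not the paper's Kalman filter.

```python
import numpy as np

def skew(v):
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def estimate_installation_misalignment(predicted, measured):
    """Small-angle misalignment from paired unit star vectors.

    For a small rotation vector d, measured ~ (I - skew(d)) @ predicted,
    so the residual measured - predicted ~ skew(predicted) @ d, which is a
    linear least-squares problem in d. A single star vector leaves the
    rotation about that vector unobservable, hence at least two
    non-parallel star vectors are required.
    """
    H = np.vstack([skew(p) for p in predicted])
    r = np.hstack([m - p for m, p in zip(measured, predicted)])
    d, *_ = np.linalg.lstsq(H, r, rcond=None)
    return d   # rad, expressed in the body frame

# Hypothetical check with a 0.5 mrad misalignment about the y axis
d_true = np.array([0.0, 5e-4, 0.0])
stars = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
meas = np.array([(np.eye(3) - skew(d_true)) @ s for s in stars])
print(estimate_installation_misalignment(stars, meas))
```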
Real-time synthetic vision cockpit display for general aviation
NASA Astrophysics Data System (ADS)
Hansen, Andrew J.; Smith, W. Garth; Rybacki, Richard M.
1999-07-01
Low cost, high performance graphics solutions based on PC hardware platforms are now capable of rendering synthetic vision of a pilot's out-the-window view during all phases of flight. When coupled to a GPS navigation payload the virtual image can be fully correlated to the physical world. In particular, differential GPS services such as the Wide Area Augmentation System (WAAS) will provide all aviation users with highly accurate 3D navigation. As well, short-baseline GPS attitude systems are becoming a viable and inexpensive solution. A glass cockpit display rendering terrain draped with geographically specific imagery in real time can be coupled with high-accuracy (7 m 95% positioning, sub-degree pointing), high-integrity (99.99999% position error bound) differential GPS navigation/attitude solutions to provide both situational awareness and 3D guidance to (auto)pilots throughout the en route, terminal area, and precision approach phases of flight. This paper describes the technical issues addressed when coupling GPS and glass cockpit displays, including the navigation/display interface, real-time 60 Hz rendering of terrain with multiple levels of detail under demand paging, and construction of verified terrain databases draped with geographically specific satellite imagery. Further, on-board recordings of the navigation solution and the cockpit display provide a replay facility for post-flight simulation based on live landings, as well as synchronized multiple display channels with different views from the same flight. PC-based solutions which integrate GPS navigation and attitude determination with 3D visualization provide the aviation community, and general aviation in particular, with low cost, high performance guidance and situational awareness in all phases of flight.
Visual navigation using edge curve matching for pinpoint planetary landing
NASA Astrophysics Data System (ADS)
Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei
2018-05-01
Pinpoint landing is challenging for future Mars and asteroid exploration missions. A vision-based navigation scheme based on feature detection and matching is practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and rely on poor-performance measurements, which poses great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of craters tracked across two sequential images are used to determine the relative attitude and position of the lander through a normalized method. Then, to address the error accumulation of relative navigation, a method is developed that integrates the crater-based relative navigation with a crater-based absolute navigation method, which identifies craters using a georeferenced database, for continuous estimation of absolute states. In addition, expressions for the relative state estimate bias are derived. Novel necessary and sufficient observability criteria based on error analysis are provided to improve navigation performance, and these hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.
A Kalman Approach to Lunar Surface Navigation using Radiometric and Inertial Measurements
NASA Technical Reports Server (NTRS)
Chelmins, David T.; Welch, Bryan W.; Sands, O. Scott; Nguyen, Binh V.
2009-01-01
Future lunar missions supporting the NASA Vision for Space Exploration will rely on a surface navigation system to determine astronaut position, guide exploration, and return safely to the lunar habitat. In this report, we investigate one potential architecture for surface navigation, using an extended Kalman filter to integrate radiometric and inertial measurements. We present a possible infrastructure to support this technique, and we examine an approach to simulating navigational accuracy based on several different system configurations. The results show that position error can be reduced to 1 m after 5 min of processing, given two satellites, one surface communication terminal, and knowledge of the starting position to within 100 m.
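To make the radiometric/inertial fusion concrete, the sketch below shows a single extended Kalman filter update in which a range measurement to a satellite of known position corrects an inertially propagated position/velocity state. The state layout, noise levels, and example geometry are illustrative assumptions rather than the report's actual filter configuration.

```python
import numpy as np

def ekf_range_update(x, P, sat_pos, measured_range, sigma_range=5.0):
    """One extended Kalman filter update with a radiometric range measurement.

    x: state [px, py, pz, vx, vy, vz] propagated from inertial measurements.
    P: 6x6 state covariance.  sat_pos: known satellite position (same frame).
    """
    pos = x[:3]
    diff = pos - sat_pos
    predicted_range = np.linalg.norm(diff)
    H = np.zeros((1, 6))
    H[0, :3] = diff / predicted_range            # d(range)/d(position)
    R = np.array([[sigma_range**2]])
    y = np.array([measured_range - predicted_range])   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x_new = x + (K @ y).ravel()
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new

# Hypothetical lunar-surface example: 100 m initial position uncertainty
x = np.array([1000.0, 2000.0, 0.0, 0.0, 0.0, 0.0])
P = np.diag([100.0**2] * 3 + [1.0] * 3)
sat_pos = np.array([5.0e5, 1.0e6, 8.0e5])
true_pos = np.array([1030.0, 1980.0, 5.0])
rng = np.linalg.norm(true_pos - sat_pos)
x, P = ekf_range_update(x, P, sat_pos, rng)
print(x[:3])
```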
Autonomous Navigation Error Propagation Assessment for Lunar Surface Mobility Applications
NASA Technical Reports Server (NTRS)
Welch, Bryan W.; Connolly, Joseph W.
2006-01-01
The NASA Vision for Space Exploration is focused on the return of astronauts to the Moon. While navigation systems have already been proven in the Apollo missions to the Moon, the current exploration campaign will involve more extensive and extended missions requiring new concepts for lunar navigation. In this document, the results of an autonomous navigation error propagation assessment are provided. The analysis is intended as the baseline error propagation analysis to which Earth-based and lunar-based radiometric data are added, in order to compare the different architecture schemes, quantify the benefits of an integrated approach, and assess how they handle lunar surface mobility applications near the lunar South Pole or on the lunar farside.
NASA Technical Reports Server (NTRS)
Ponchak, Denise (Compiler)
2006-01-01
The Integrated Communications, Navigation and Surveillance (ICNS) Technologies Conference and Workshop provides a forum for government, industry, and academic communities performing research and technology development for advanced digital communications, navigation, and surveillance security systems and associated applications supporting the national and global air transportation systems. The event's goals are to understand current efforts and recent results in near- and far-term research and technology demonstration; identify integrated digital communications, navigation and surveillance research requirements necessary for a safe, high-capacity, advanced air transportation system; foster collaboration and coordination among all stakeholders; and discuss critical issues and develop recommendations to achieve the future integrated CNS vision for the national and global air transportation system.
NASA Technical Reports Server (NTRS)
Fujikawa, Gene (Compiler)
2004-01-01
The Integrated Communications, Navigational and Surveillance (ICNS) Technologies Conference and Workshop provides a forum for Government, industry, and academic communities performing research and technology development for advanced digital communications, navigation, and surveillance security systems and associated applications supporting the national and global air transportation systems. The event's goals are to understand current efforts and recent results in near-and far-term research and technology demonstration; identify integrated digital communications, navigation and surveillance research requirements necessary for a safe, high-capacity, advanced air transportation system; foster collaboration and coordination among all stakeholders; and discuss critical issues and develop recommendations to achieve the future integrated CNS vision for the national and global air transportation system.
An embedded vision system for an unmanned four-rotor helicopter
NASA Astrophysics Data System (ADS)
Lillywhite, Kirt; Lee, Dah-Jye; Tippetts, Beau; Fowers, Spencer; Dennis, Aaron; Nelson, Brent; Archibald, James
2006-10-01
In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling (SAIL) daughter board, attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has proven to give very good results in its performance of a number of real-time robotic vision algorithms.
Fusion of laser and image sensory data for 3-D modeling of the free navigation space
NASA Technical Reports Server (NTRS)
Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.
1994-01-01
A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data is generated by a vision camera and a laser scanner. The problem of the different resolutions of these sensory data was solved by reducing the image resolution, fusing the different data, and using a fuzzy image segmentation technique.
Design of verification platform for wireless vision sensor networks
NASA Astrophysics Data System (ADS)
Ye, Juanjuan; Shang, Fei; Yu, Chuang
2017-08-01
At present, the majority of research on wireless vision sensor networks (WVSNs) still remains at the software simulation stage, and few verification platforms for WVSNs are available for use. This situation seriously restricts the transformation from theoretical research on WVSNs to practical application. Therefore, it is necessary to study the construction of verification platforms for WVSNs. This paper combines a wireless transceiver module, a visual information acquisition module and a power acquisition module, designs a high-performance wireless vision sensor node built around an ARM11 microprocessor, and selects AODV as the routing protocol to set up a verification platform for WVSNs called AdvanWorks. Experiments show that AdvanWorks successfully performs image acquisition, coding and wireless transmission, and obtains effective inter-node distance parameters, which lays a good foundation for the follow-up application of WVSNs.
LIRIS flight database and its use toward noncooperative rendezvous
NASA Astrophysics Data System (ADS)
Mongrard, O.; Ankersen, F.; Casiez, P.; Cavrois, B.; Donnard, A.; Vergnol, A.; Southivong, U.
2018-06-01
ESA's fifth and last Automated Transfer Vehicle, ATV Georges Lemaître, tested new rendezvous technology before docking with the International Space Station (ISS) in August 2014. The technology demonstration called Laser Infrared Imaging Sensors (LIRIS) provides an unseen view of the ISS. During Georges Lemaître's rendezvous, LIRIS sensors, composed of two infrared cameras, one visible camera, and a scanning LIDAR (Light Detection and Ranging), were turned on two and a half hours and 3500 m from the Space Station. All sensors worked as expected and a large amount of data was recorded and stored within ATV-5's cargo hold before being returned to Earth with the Soyuz flight 38S in September 2014. As a part of the LIRIS postflight activities, the information gathered by all sensors is collected inside a flight database together with the reference ATV trajectory and attitude estimated by ATV main navigation sensors. Although decoupled from the ATV main computer, the LIRIS data were carefully synchronized with ATV guidance, navigation, and control (GNC) data. Hence, the LIRIS database can be used to assess the performance of various image processing algorithms to provide range and line-of-sight (LoS) navigation at long/medium range but also 6 degree-of-freedom (DoF) navigation at short range. The database also contains information related to the overall ATV position with respect to Earth and the Sun direction within ATV frame such that the effect of the environment on the sensors can also be investigated. This paper introduces the structure of the LIRIS database and provides some example of applications to increase the technology readiness level of noncooperative rendezvous.
Bioelectronic retinal prosthesis
NASA Astrophysics Data System (ADS)
Weiland, James D.
2016-05-01
Retinal prostheses have been translated to clinical use over the past two decades. Currently, two devices have regulatory approval for the treatment of retinitis pigmentosa and one device is in clinical trials for treatment of age-related macular degeneration. These devices provide partial sight restoration and patients use this improved vision in their everyday lives to navigate and to detect large objects. However, significant vision restoration will require both better technology and improved understanding of the interaction between electrical stimulation and the retina. In particular, current retinal prostheses do not provide peripheral vision due to technical and surgical limitations, thus limiting the effectiveness of the treatment. This paper reviews recent results from human implant patients and presents technical approaches for peripheral vision.
Altair Navigation During Trans-Lunar Cruise, Lunar Orbit, Descent and Landing
NASA Technical Reports Server (NTRS)
Ely, Todd A.; Heyne, Martin; Riedel, Joseph E.
2010-01-01
The Altair lunar lander navigation system is driven by a set of requirements that not only specify a need to land within 100 m of a designated spot on the Moon, but also to be capable of a safe return to an orbiting Orion capsule in the event of loss of Earth ground support. These requirements lead to the need for a robust and capable on-board navigation system that works in conjunction with an Earth ground navigation system that uses primarily ground-based radiometric tracking. The resulting system relies heavily on combining a multiplicity of data types, including navigation state updates from the ground-based navigation system, passive optical imaging from a gimbaled camera, a stable inertial measurement unit, and a capable radar altimeter and velocimeter. The focus of this paper is on navigation performance during the trans-lunar cruise, lunar orbit, and descent/landing mission phases, with the goals of characterizing knowledge and delivery errors at key mission events, bounding the statistical delta-V costs of executing the mission, and determining the landing dispersions due to navigation. This study examines the nominal performance that can be obtained using the current best estimate of the vehicle, sensor, and environment models. Performance of the system under a variety of sensor outages and parametric trades is also examined.
NASA Astrophysics Data System (ADS)
Pierrottet, Diego; Amzajerdian, Farzin; Petway, Larry; Barnes, Bruce; Lockard, George; Hines, Glenn
2011-06-01
An all fiber Navigation Doppler Lidar (NDL) system is under development at NASA Langley Research Center (LaRC) for precision descent and landing applications on planetary bodies. The sensor produces high-resolution line of sight range, altitude above ground, ground relative attitude, and high precision velocity vector measurements. Previous helicopter flight test results demonstrated the NDL measurement concepts, including measurement precision, accuracies, and operational range. This paper discusses the results obtained from a recent campaign to test the improved sensor hardware, and various signal processing algorithms applicable to real-time processing. The NDL was mounted in an instrumentation pod aboard an Erickson Air-Crane helicopter and flown over various terrains. The sensor was one of several sensors tested in this field test by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project.
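The velocity measurement underlying the NDL can be summarized by the coherent-lidar Doppler relation v = λ f_d / 2; the snippet below applies it with an assumed 1.55 µm fiber-laser wavelength purely to illustrate the scaling, not to describe the flight sensor's actual processing.

```python
def los_velocity_m_s(doppler_shift_hz, wavelength_m=1.55e-6):
    """Line-of-sight velocity from the measured Doppler shift of a coherent lidar.

    For a monostatic lidar the round-trip Doppler shift is f_d = 2 v / lambda,
    so v = lambda * f_d / 2.  The 1.55 um fiber-laser wavelength is assumed
    here purely for illustration.
    """
    return wavelength_m * doppler_shift_hz / 2.0

# A 2.58 MHz shift at 1.55 um corresponds to about 2 m/s along the beam
print(los_velocity_m_s(2.58e6))
```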
NASA Technical Reports Server (NTRS)
Pierrottet, Diego F.; Lockhard, George; Amzajerdian, Farzin; Petway, Larry B.; Barnes, Bruce; Hines, Glenn D.
2011-01-01
An all fiber Navigation Doppler Lidar (NDL) system is under development at NASA Langley Research Center (LaRC) for precision descent and landing applications on planetary bodies. The sensor produces high resolution line of sight range, altitude above ground, ground relative attitude, and high precision velocity vector measurements. Previous helicopter flight test results demonstrated the NDL measurement concepts, including measurement precision, accuracies, and operational range. This paper discusses the results obtained from a recent campaign to test the improved sensor hardware, and various signal processing algorithms applicable to real-time processing. The NDL was mounted in an instrumentation pod aboard an Erickson Air-Crane helicopter and flown over vegetation free terrain. The sensor was one of several sensors tested in this field test by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project.
A Solar Position Sensor Based on Image Vision.
Ruelas, Adolfo; Velázquez, Nicolás; Villa-Angulo, Carlos; Acuña, Alexis; Rosales, Pedro; Suastegui, José
2017-07-29
Solar collector technologies perform better when the Sun beam direction is normal to the capturing surface. For that to happen despite the relative movement of the Sun, solar tracking systems are used; therefore, there are rules and standards that require a minimum accuracy from the tracking systems used in the evaluation of solar collectors. Achieving that accuracy is not easy, hence this document presents the design, construction and characterization of a sensor based on a vision system that finds the relative azimuth and elevation error of the solar position of interest. With these characteristics, the sensor can be used as a reference in control systems and in their evaluation. The proposed sensor is based on a microcontroller with a real-time clock, inertial measurement sensors, geolocation and a vision sensor, which obtains the angle of incidence of the Sun rays as well as the tilt and position of the sensor. The sensor's characterization showed how a measurement of focus error or Sun position can be made with an accuracy of 0.0426° and an uncertainty of 0.986%, which can be improved to reach an accuracy under 0.01°. The sensor was validated by measuring the focus error of one of the best commercial solar tracking systems, a Kipp & Zonen SOLYS 2. In conclusion, the solar tracking sensor based on a vision system meets the Sun detection requirements and the accuracy conditions needed for use in solar tracking systems and their evaluation, or as a tracking and orientation tool for photovoltaic installations and solar collectors.
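A reference Sun position is the natural benchmark for such a sensor. The sketch below uses a deliberately simple model (Cooper declination plus hour angle) to compute solar elevation and azimuth; it is only degree-level accurate and is shown as an assumed illustration, not the ephemeris or algorithm used in the paper.

```python
import math

def sun_position(day_of_year, solar_time_h, latitude_deg):
    """Approximate solar elevation and azimuth from a simple declination model.

    Uses the Cooper declination formula and the hour angle; accuracy is on the
    order of a degree, sufficient only as a coarse reference, not as the
    high-accuracy ephemeris a tracking-system evaluation would require.
    """
    lat = math.radians(latitude_deg)
    decl = math.radians(23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0)))
    hour_angle = math.radians(15.0 * (solar_time_h - 12.0))
    sin_el = math.sin(lat) * math.sin(decl) + math.cos(lat) * math.cos(decl) * math.cos(hour_angle)
    elevation = math.asin(sin_el)
    cos_az = (math.sin(decl) - sin_el * math.sin(lat)) / (math.cos(elevation) * math.cos(lat))
    azimuth = math.acos(max(-1.0, min(1.0, cos_az)))       # measured from north
    if hour_angle > 0:                                      # afternoon: sun west of north
        azimuth = 2.0 * math.pi - azimuth
    return math.degrees(elevation), math.degrees(azimuth)

# Solar noon at 32.6 N (around Mexicali) on day 172 (near the June solstice)
print(sun_position(172, 12.0, 32.6))
```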
Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision.
Hagelsteen, Kristine; Langegård, Anders; Lantz, Adam; Ekelund, Mikael; Anderberg, Magnus; Bergenfelz, Anders
2017-10-01
The study investigated whether 3D vision and haptic feedback in combination in a virtual reality environment leads to more efficient learning of laparoscopic skills in novices. Twenty novices were allocated to two groups. All completed a training course in the LapSim® virtual reality trainer consisting of four tasks: 'instrument navigation', 'grasping', 'fine dissection' and 'suturing'. The study group performed with haptic feedback and 3D vision and the control group without. Before and after the LapSim® course, the participants' metrics were recorded when tying a laparoscopic knot in the 2D video box trainer Simball® Box. The study group completed the training course in 146 (100-291) minutes compared to 215 (175-489) minutes in the control group (p = .002). The number of attempts to reach proficiency was significantly lower. The study group had significantly faster learning of skills in three out of four individual tasks: instrument navigation, grasping and suturing. Using the Simball® Box, no difference in laparoscopic knot tying after the LapSim® course was noted when comparing the groups. Laparoscopic training in virtual reality with 3D vision and haptic feedback made training more time efficient and did not negatively affect later video box performance in 2D.
Autonomous Wheeled Robot Platform Testbed for Navigation and Mapping Using Low-Cost Sensors
NASA Astrophysics Data System (ADS)
Calero, D.; Fernandez, E.; Parés, M. E.
2017-11-01
This paper presents the concept of an architecture for a wheeled robot system that helps researchers in the field of geomatics speed up their daily research in the kinematic geodesy, indoor navigation and indoor positioning fields. The presented ideas correspond to an extensible and modular hardware and software system aimed at the development of new low-cost mapping algorithms as well as at the evaluation of sensor performance. The concept, already implemented in CTTC's system ARAS (Autonomous Rover for Automatic Surveying), is generic and extensible. This means that it is possible to incorporate new navigation algorithms or sensors at no maintenance cost; only the development effort required to create such algorithms needs to be taken into account. As a consequence, change poses a much smaller problem for research activities in this specific area. This system includes several standalone sensors that may be combined in different ways to accomplish several goals; that is, the system may be used to perform a variety of tasks, for instance evaluating the performance of positioning or mapping algorithms.
Cognitive load of navigating without vision when guided by virtual sound versus spatial language.
Klatzky, Roberta L; Marston, James R; Giudice, Nicholas A; Golledge, Reginald G; Loomis, Jack M
2006-12-01
A vibrotactile N-back task was used to generate cognitive load while participants were guided along virtual paths without vision. As participants stepped in place, they moved along a virtual path of linear segments. Information was provided en route about the direction of the next turning point, by spatial language ("left," "right," or "straight") or virtual sound (i.e., the perceived azimuth of the sound indicated the target direction). The authors hypothesized that virtual sound, being processed at direct perceptual levels, would have lower load than even simple language commands, which require cognitive mediation. As predicted, whereas the guidance modes did not differ significantly in the no-load condition, participants showed shorter distance traveled and less time to complete a path when performing the N-back task while navigating with virtual sound as guidance. Virtual sound also produced better N-back performance than spatial language. By indicating the superiority of virtual sound for guidance when cognitive load is present, as is characteristic of everyday navigation, these results have implications for guidance systems for the visually impaired and others.
Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation
Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin
2014-01-01
Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
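The scale-ambiguity fix described above can be sketched very compactly: once the depth of the laser-illuminated point is known both in metres (from the laser distance meter) and in the arbitrary units of the monocular reconstruction, their ratio scales the whole trajectory. The snippet below is a toy illustration under that assumption, not the authors' fusion scheme, which also handles matching the invisible laser spot across frames.

```python
import numpy as np

def recover_scale(laser_range_m, laser_point_depth_vo, translation_vo):
    """Fix the global scale of an up-to-scale monocular reconstruction.

    laser_range_m:        absolute distance returned by the laser distance meter
                          to the surface point hit by the laser spot.
    laser_point_depth_vo: depth of that same point in the arbitrary units of the
                          monocular visual-odometry reconstruction.
    translation_vo:       up-to-scale camera translation from visual odometry.
    """
    scale = laser_range_m / laser_point_depth_vo
    return scale, scale * np.asarray(translation_vo)

# Hypothetical frame: the laser returns 4.30 m, the same point is triangulated
# at depth 2.15 (arbitrary VO units), so the metric scale factor is 2.0.
scale, t_metric = recover_scale(4.30, 2.15, [0.12, 0.01, 0.40])
print(scale, t_metric)
```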
ALHAT COBALT: CoOperative Blending of Autonomous Landing Technology
NASA Technical Reports Server (NTRS)
Carson, John M.
2015-01-01
The COBALT project is a flight demonstration of two NASA ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) capabilities that are key for future robotic or human landing GN&C (Guidance, Navigation and Control) systems. The COBALT payload integrates the Navigation Doppler Lidar (NDL) for ultraprecise velocity and range measurements with the Lander Vision System (LVS) for Terrain Relative Navigation (TRN) position estimates. Terrestrial flight tests of the COBALT payload in an open-loop and closed-loop GN&C configuration will be conducted onboard a commercial, rocket-propulsive Vertical Test Bed (VTB) at a test range in Mojave, CA.
The Effects of Restricted Peripheral Field-of-View on Spatial Learning while Navigating.
Barhorst-Cates, Erica M; Rand, Kristina M; Creem-Regehr, Sarah H
2016-01-01
Recent work with simulated reductions in visual acuity and contrast sensitivity has found decrements in survey spatial learning as well as increased attentional demands when navigating, compared to performance with normal vision. Given these findings, and previous work showing that peripheral field loss has been associated with impaired mobility and spatial memory for room-sized spaces, we investigated the role of peripheral vision during navigation using a large-scale spatial learning paradigm. First, we aimed to establish the magnitude of spatial memory errors at different levels of field restriction. Second, we tested the hypothesis that navigation under these different levels of restriction would use additional attentional resources. Normally sighted participants walked on novel real-world paths wearing goggles that restricted the field-of-view (FOV) to severe (15°, 10°, 4°, or 0°) or mild angles (60°) and then pointed to remembered target locations using a verbal reporting measure. They completed a concurrent auditory reaction time task throughout each path to measure cognitive load. Only the most severe restrictions (4° and blindfolded) showed impairment in pointing error compared to the mild restriction (within-subjects). The 10° and 4° conditions also showed an increase in reaction time on the secondary attention task, suggesting that navigating with these extreme peripheral field restrictions demands the use of limited cognitive resources. This comparison of different levels of field restriction suggests that although peripheral field loss requires the actor to use more attentional resources while navigating starting at a less extreme level (10°), spatial memory is not negatively affected until the restriction is very severe (4°). These results have implications for understanding of the mechanisms underlying spatial learning during navigation and the approaches that may be taken to develop assistance for navigation with visual impairment.
Advanced Integration of WiFi and Inertial Navigation Systems for Indoor Mobile Positioning
NASA Astrophysics Data System (ADS)
Evennou, Frédéric; Marx, François
2006-12-01
This paper presents an aided dead-reckoning navigation structure and signal processing algorithms for self-localization of an autonomous mobile device by fusing pedestrian dead reckoning and WiFi signal strength measurements. WiFi and inertial navigation systems (INS) are used for positioning and attitude determination in a wide range of applications. Over the last few years, a number of low-cost inertial sensors have become available. Although they exhibit large errors, WiFi measurements can be used to correct the drift that weakens navigation based on this technology. On the other hand, INS sensors can interact with the WiFi positioning system, as they provide high-accuracy real-time navigation. A structure based on a Kalman filter and a particle filter is proposed. It fuses the heterogeneous information coming from these two independent technologies. Finally, the benefits of the proposed architecture are evaluated and compared with pure WiFi and INS positioning systems.
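As a rough illustration of how such heterogeneous information can be fused, the sketch below runs one predict/update/resample cycle of a particle filter in which a dead-reckoned step propagates the particles and a WiFi received-signal-strength likelihood reweights them. The radio map, noise levels and resampling scheme are assumptions for illustration and differ from the Kalman/particle structure actually proposed in the paper.

```python
import numpy as np

def pf_step(particles, step_len, heading, rss_meas, fingerprint, sigma_rss=4.0):
    """One predict/update/resample cycle of a PDR + WiFi particle filter.

    particles:   (N, 2) x-y positions in metres.
    step_len:    dead-reckoned step length (m), heading: step heading (rad).
    rss_meas:    measured RSS vector (dBm) for the visible access points.
    fingerprint: callable mapping an (x, y) position to the expected RSS vector.
    """
    n = len(particles)
    # Predict: apply the dead-reckoned step with some process noise.
    noise = np.random.normal(0.0, 0.1, size=(n, 2))
    particles = particles + step_len * np.array([np.cos(heading), np.sin(heading)]) + noise
    # Update: Gaussian likelihood of the measured RSS given each particle.
    expected = np.array([fingerprint(p) for p in particles])
    err = expected - rss_meas
    weights = np.exp(-0.5 * np.sum(err**2, axis=1) / sigma_rss**2) + 1e-12
    weights /= weights.sum()
    # Resample (simple multinomial resampling).
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx]

# Hypothetical radio map: RSS of two APs falls off with distance from their positions
aps = np.array([[0.0, 0.0], [20.0, 10.0]])
radio_map = lambda p: -40.0 - 20.0 * np.log10(np.maximum(np.linalg.norm(aps - p, axis=1), 1.0))
particles = np.random.uniform([0, 0], [20, 10], size=(500, 2))
particles = pf_step(particles, 0.7, np.pi / 4, radio_map([5.0, 5.0]), radio_map)
print(particles.mean(axis=0))
```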
Indoor Pedestrian Navigation Using Foot-Mounted IMU and Portable Ultrasound Range Sensors
Girard, Gabriel; Côté, Stéphane; Zlatanova, Sisi; Barette, Yannick; St-Pierre, Johanne; van Oosterom, Peter
2011-01-01
Many solutions have been proposed for indoor pedestrian navigation. Some rely on pre-installed sensor networks, which offer good accuracy but are limited to areas that have been prepared for that purpose, thus requiring an expensive and possibly time-consuming process. Such methods are therefore inappropriate for navigation in emergency situations, since the power supply may be disturbed. Other types of solutions track the user without requiring a prepared environment. However, they may have low accuracy. Offline tracking has been proposed to increase accuracy; however, this prevents users from knowing their position in real time. This paper describes a real-time indoor navigation system that does not require prepared building environments and provides tracking accuracy superior to previously described tracking methods. The system uses a combination of four techniques: a foot-mounted IMU (Inertial Motion Unit), ultrasonic ranging, particle filtering and model-based navigation. The very purpose of the project is to combine these four well-known techniques in a novel way to provide better indoor tracking results for pedestrians. PMID:22164034
A laser-based vision system for weld quality inspection.
Huang, Wei; Kovacevic, Radovan
2011-01-01
Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems are studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence as well as the positions and sizes of weld defects can be accurately identified and, therefore, non-destructive weld quality inspection can be achieved.
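The laser-triangulation principle the sensor relies on reduces to similar triangles; the snippet below assumes the simplest geometry (laser beam parallel to the camera optical axis, offset by a known baseline) so that the spot's image offset gives the range directly. The actual sensor head in the paper may use a tilted laser sheet, which changes the geometry but not the principle.

```python
def triangulated_range(pixel_offset, focal_px, baseline_m):
    """Range to the laser spot by triangulation.

    Assumes the laser beam is parallel to the camera optical axis and offset
    from it by `baseline_m`; the spot then images at x = f * b / z pixels from
    the principal point, so z = f * b / x.  Real sensor heads often tilt the
    laser or camera, which changes the geometry but not the principle.
    """
    return focal_px * baseline_m / pixel_offset

# A spot imaged 400 px off-axis with f = 800 px and a 50 mm baseline is 100 mm away
print(triangulated_range(400.0, 800.0, 0.05))
```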
A Laser-Based Vision System for Weld Quality Inspection
Huang, Wei; Kovacevic, Radovan
2011-01-01
Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems are studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence as well as the positions and sizes of weld defects can be accurately identified and, therefore, non-destructive weld quality inspection can be achieved. PMID:22344308
Piao, Jin-Chun; Kim, Shin-Dug
2017-01-01
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143
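A hypothetical flavor of the adaptive execution module might look like the selector below, which chooses between the full visual-inertial pipeline and the cheaper optical-flow front end from simple tracking and motion statistics; the thresholds and criteria are illustrative guesses, not the policy proposed in the paper.

```python
def choose_odometry(tracked_features, rotation_rate_rad_s,
                    min_features=80, max_rotation=0.5):
    """Pick the odometry mode for the next frame.

    A hypothetical policy in the spirit of the adaptive execution module:
    fall back to full visual-inertial odometry when tracking is weak or the
    motion is aggressive, otherwise run the cheaper optical-flow front end.
    Thresholds here are illustrative only.
    """
    if tracked_features < min_features or rotation_rate_rad_s > max_rotation:
        return "visual_inertial_odometry"
    return "optical_flow_fast_vo"

print(choose_odometry(150, 0.1))   # well-tracked, slow motion -> fast path
print(choose_odometry(40, 0.1))    # few features              -> full VIO
```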
Recent Advances in Bathymetric Surveying of Continental Shelf Regions Using Autonomous Vehicles
NASA Astrophysics Data System (ADS)
Holland, K. T.; Calantoni, J.; Slocum, D.
2016-02-01
Obtaining bathymetric observations on the continental shelf in areas closer to the shore is often time consuming and dangerous, especially when uncharted shoals and rocks present safety concerns to survey ships and launches. However, surveys in these regions are critically important to the numerical simulation of oceanographic processes, as bathymetry serves as the bottom boundary condition in operational forecasting models. We will present recent progress in bathymetric surveying using both traditional vessels retrofitted for autonomous operations and relatively inexpensive, small-team-deployable autonomous underwater vehicles (AUVs). Both systems include either high-resolution multibeam echo sounders or interferometric sidescan sonar sensors with integrated inertial navigation system capabilities consistent with present commercial-grade survey operations. The advantages and limitations of these two configurations, employing both unmanned and autonomous strategies, are compared using results from several recent survey operations. We will demonstrate how sensor data collected from unmanned platforms can augment or even replace traditional data collection technologies. Oceanographic observations (e.g., sound speed, temperature and currents) collected simultaneously with bathymetry using autonomous technologies provide additional opportunities for advanced data assimilation in numerical forecasts. Discussion focuses on our vision for unmanned and autonomous systems working in conjunction with manned or in-situ systems to optimally and simultaneously collect data in environmentally hostile or difficult-to-reach areas.
Improved Object Detection Using a Robotic Sensing Antenna with Vibration Damping Control
Feliu-Batlle, Vicente; Feliu-Talegon, Daniel; Castillo-Berrio, Claudia Fernanda
2017-01-01
Some insects or mammals use antennae or whiskers to detect obstacles or recognize objects by the sense of touch in environments in which other senses, like vision, cannot work. Artificial flexible antennae can be used in robotics to mimic this sense of touch in these recognition tasks. We have designed and built a two-degree-of-freedom (2DOF) flexible antenna sensor device to perform robot navigation tasks. This device is composed of a flexible beam, two servomotors that drive the beam and a load cell sensor that detects the contact of the beam with an object. It is found that the efficiency of such a device strongly depends on the speed and accuracy achieved by the antenna positioning system. These issues are severely impaired by the vibrations that appear in the antenna during its movement. However, these antennae are usually moved without taking care of these undesired vibrations. This article proposes a new closed-loop control scheme that cancels vibrations and improves the free movements of the antenna. Moreover, algorithms to estimate the 3D beam position and the instant and point of contact with an object are proposed. Experiments are reported that illustrate the efficiency of these proposed algorithms and the improvements achieved in object detection tasks using a control system that cancels beam vibrations. PMID:28406449
Improved Object Detection Using a Robotic Sensing Antenna with Vibration Damping Control.
Feliu-Batlle, Vicente; Feliu-Talegon, Daniel; Castillo-Berrio, Claudia Fernanda
2017-04-13
Some insects or mammals use antennae or whiskers to detect obstacles or recognize objects by the sense of touch in environments in which other senses, like vision, cannot work. Artificial flexible antennae can be used in robotics to mimic this sense of touch in these recognition tasks. We have designed and built a two-degree-of-freedom (2DOF) flexible antenna sensor device to perform robot navigation tasks. This device is composed of a flexible beam, two servomotors that drive the beam and a load cell sensor that detects the contact of the beam with an object. It is found that the efficiency of such a device strongly depends on the speed and accuracy achieved by the antenna positioning system. These issues are severely impaired by the vibrations that appear in the antenna during its movement. However, these antennae are usually moved without taking care of these undesired vibrations. This article proposes a new closed-loop control scheme that cancels vibrations and improves the free movements of the antenna. Moreover, algorithms to estimate the 3D beam position and the instant and point of contact with an object are proposed. Experiments are reported that illustrate the efficiency of these proposed algorithms and the improvements achieved in object detection tasks using a control system that cancels beam vibrations.
Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach.
Liu, Mengyun; Chen, Ruizhi; Li, Deren; Chen, Yujin; Guo, Guangyi; Cao, Zhipeng; Pan, Yuanjin
2017-12-08
After decades of research, there is still no solution for indoor localization like the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons for this phenomenon are the complex spatial topology and RF transmission environment. To deal with these problems, an indoor scene constrained method for localization is proposed in this paper, which is inspired by the visual cognition ability of the human brain and the progress in the computer vision field regarding high-level image understanding. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone including cameras, WiFi and inertial sensors. Compared to former research, the camera on a smartphone is used to "see" which scene the user is in. With this information, a particle filter algorithm constrained by scene information is adopted to determine the final location. For indoor scene recognition, we take advantage of deep learning that has been proven to be highly effective in the computer vision community. For particle filter, both WiFi and magnetic field signals are used to update the weights of particles. Similar to other fingerprinting localization methods, there are two stages in the proposed system, offline training and online localization. In the offline stage, an indoor scene model is trained by Caffe (one of the most popular open source frameworks for deep learning) and a fingerprint database is constructed by user trajectories in different scenes. To reduce the volume requirement of training data for deep learning, a fine-tuned method is adopted for model training. In the online stage, a camera in a smartphone is used to recognize the initial scene. Then a particle filter algorithm is used to fuse the sensor data and determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server are implemented. The Android client is used to collect data and locate a user. The web server is developed for indoor scene model training and communication with an Android client. To evaluate the performance, comparison experiments are conducted and the results demonstrate that a positioning accuracy of 1.32 m at 95% is achievable with the proposed solution. Both positioning accuracy and robustness are enhanced compared to approaches without scene constraint including commercial products such as IndoorAtlas.
Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach
Chen, Ruizhi; Li, Deren; Chen, Yujin; Guo, Guangyi; Cao, Zhipeng
2017-01-01
After decades of research, there is still no solution for indoor localization like the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons for this are the complex spatial topology and RF transmission environment. To deal with these problems, an indoor scene constrained method for localization is proposed in this paper, inspired by the visual cognition ability of the human brain and the progress in the computer vision field regarding high-level image understanding. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone including cameras, WiFi and inertial sensors. Compared to former research, the camera on a smartphone is used to "see" which scene the user is in. With this information, a particle filter algorithm constrained by scene information is adopted to determine the final location. For indoor scene recognition, we take advantage of deep learning, which has proven to be highly effective in the computer vision community. For the particle filter, both WiFi and magnetic field signals are used to update the weights of the particles. Similar to other fingerprinting localization methods, there are two stages in the proposed system: offline training and online localization. In the offline stage, an indoor scene model is trained by Caffe (one of the most popular open source frameworks for deep learning) and a fingerprint database is constructed from user trajectories in different scenes. To reduce the volume of training data required for deep learning, a fine-tuning method is adopted for model training. In the online stage, the camera in a smartphone is used to recognize the initial scene. Then a particle filter algorithm is used to fuse the sensor data and determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server were implemented. The Android client is used to collect data and locate the user. The web server is developed for indoor scene model training and communication with the Android client. To evaluate the performance, comparison experiments were conducted, and the results demonstrate that a positioning accuracy of 1.32 m at 95% is achievable with the proposed solution. Both positioning accuracy and robustness are enhanced compared to approaches without the scene constraint, including commercial products such as IndoorAtlas. PMID:29292761
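A minimal sketch of the particle-weight update described above, assuming independent Gaussian likelihoods for the WiFi fingerprint distance and the magnetic-field magnitude residual; the noise scales and array names are hypothetical placeholders, not the paper's implementation.

    import numpy as np

    def update_weights(particles, weights, wifi_dist, mag_residual,
                       sigma_wifi=4.0, sigma_mag=2.0):
        """Re-weight particles with independent Gaussian likelihoods for the
        WiFi fingerprint distance and the magnetic-magnitude residual, then
        normalize. wifi_dist and mag_residual are per-particle arrays."""
        lik = (np.exp(-0.5 * (wifi_dist / sigma_wifi) ** 2) *
               np.exp(-0.5 * (mag_residual / sigma_mag) ** 2))
        weights = weights * lik
        s = weights.sum()
        return weights / s if s > 0 else np.full_like(weights, 1.0 / weights.size)

In such a scheme, the scene constraint can be imposed simply by assigning zero likelihood to particles that fall outside the region of the recognized scene before normalization.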
Present and future of vision systems technologies in commercial flight operations
NASA Astrophysics Data System (ADS)
Ward, Jim
2016-05-01
The development of systems to enable pilots of all types of aircraft to see through fog, clouds, and sandstorms and land in low visibility has been widely discussed and researched across aviation. For military applications, the goal has been to operate in a Degraded Visual Environment (DVE), using sensors to enable flight crews to see and operate without concern for weather that limits human visibility. These military DVE goals are mainly oriented to the off-field landing environment. For commercial aviation, the Federal Aviation Administration (FAA) implemented operational regulations in 2004 that allow the flight crew to see the runway environment using an Enhanced Flight Vision System (EFVS) and continue the approach below the normal landing decision height. The FAA is expanding the current use and economic benefit of EFVS technology and will soon permit landing without any natural vision using real-time weather-penetrating sensors. The operational goals of both of these efforts, DVE and EFVS, have been the stimulus for development of new sensors and vision displays to create the modern flight deck.
An Adaptive Technique for a Redundant-Sensor Navigation System. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chien, T. T.
1972-01-01
An on-line adaptive technique is developed to provide a self-contained redundant-sensor navigation system with the capability to utilize its full potential in reliability and performance. The gyro navigation system is modeled as a Gauss-Markov process, with degradation modes defined as changes in characteristics specified by parameters associated with the model. The adaptive system is formulated as a multistage stochastic process: (1) a detection system, (2) an identification system, and (3) a compensation system. It is shown that the sufficient statistic for the partially observable process in the detection and identification systems is the posterior measure of the state of degradation, conditioned on the measurement history.
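To make the idea of a posterior degradation measure concrete, the sketch below recursively updates the probability of a "degraded" hypothesis from scalar sensor residuals, assuming zero-mean Gaussian residual models for the nominal and degraded modes; the two-mode structure and the variances are illustrative assumptions, not the thesis's full Gauss-Markov formulation.

    import numpy as np
    from scipy.stats import norm

    def update_posterior(p_degraded, residual, sigma_nominal=1.0, sigma_degraded=3.0):
        """One Bayesian update of P(degraded) given a new residual sample,
        assuming zero-mean Gaussian residuals under each hypothesis."""
        l_deg = norm.pdf(residual, scale=sigma_degraded)
        l_nom = norm.pdf(residual, scale=sigma_nominal)
        num = l_deg * p_degraded
        return num / (num + l_nom * (1.0 - p_degraded))

    # Example: feed residuals one by one; declare degradation when P exceeds 0.95.
    p = 0.01
    for r in np.random.normal(0.0, 3.0, size=50):   # simulated degraded gyro
        p = update_posterior(p, r)
    print("posterior probability of degradation:", round(p, 3))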
ACT-Vision: active collaborative tracking for multiple PTZ cameras
NASA Astrophysics Data System (ADS)
Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet
2009-04-01
We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
NASA Astrophysics Data System (ADS)
Moody, Marc; Fisher, Robert; Little, J. Kristin
2014-06-01
Boeing has developed a degraded visual environment navigational aid that is flying on the Boeing AH-6 light attack helicopter. The navigational aid is a two dimensional software digital map underlay generated by the Boeing™ Geospatial Embedded Mapping Software (GEMS) and fully integrated with the operational flight program. The page format on the aircraft's multi function displays (MFD) is termed the Approach page. The existing work utilizes Digital Terrain Elevation Data (DTED) and OpenGL ES 2.0 graphics capabilities to compute the pertinent graphics underlay entirely on the graphics processor unit (GPU) within the AH-6 mission computer. The next release will incorporate cultural databases containing Digital Vertical Obstructions (DVO) to warn the crew of towers, buildings, and power lines when choosing an opportune landing site. Future IRAD will include Light Detection and Ranging (LIDAR) point cloud generating sensors to provide 2D and 3D synthetic vision on the final approach to the landing zone. Collision detection with respect to terrain, cultural, and point cloud datasets may be used to further augment the crew warning system. The techniques for creating the digital map underlay leverage the GPU almost entirely, making this solution viable on most embedded mission computing systems with an OpenGL ES 2.0 capable GPU. This paper focuses on the AH-6 crew interface process for determining a landing zone and flying the aircraft to it.
A low-cost test-bed for real-time landmark tracking
NASA Astrophysics Data System (ADS)
Csaszar, Ambrus; Hanan, Jay C.; Moreels, Pierre; Assad, Christopher
2007-04-01
A low-cost vehicle test-bed system was developed to iteratively test, refine and demonstrate navigation algorithms before attempting to transfer the algorithms to more advanced rover prototypes. The platform used here was a modified radio controlled (RC) car. A microcontroller board and onboard laptop computer allow for either autonomous or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, a streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars yard. The algorithms tracked rocks as waypoints, generating coordinates for calculating relative motion and visually servoing to science targets. A limitation of the current system is serial computing: each additional landmark is tracked in order. However, since each landmark is tracked independently, adding targets would not significantly diminish system speed if the processing were transferred to appropriate parallel hardware.
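A minimal dead-reckoning sketch in the spirit of the sensors listed above (wheel encoders for distance, a single-axis gyro for heading rate); the tick resolution and the simple Euler integration are illustrative assumptions rather than the test-bed's actual implementation.

    import numpy as np

    def dead_reckon(ticks, gyro_z, dt, ticks_per_m=1200.0):
        """Integrate a planar pose (x, y, heading) from per-step encoder tick
        counts and yaw-rate samples. Returns an (N, 3) array of poses."""
        x = y = theta = 0.0
        poses = []
        for dtick, wz in zip(ticks, gyro_z):
            d = dtick / ticks_per_m          # distance travelled this step (m)
            theta += wz * dt                 # heading from gyro yaw rate (rad)
            x += d * np.cos(theta)
            y += d * np.sin(theta)
            poses.append((x, y, theta))
        return np.array(poses)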
NASA Technical Reports Server (NTRS)
Alexander, Amy L.; Prinzel, Lawrence J., III; Wickens, Christopher D.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.
2007-01-01
Synthetic vision systems provide an in-cockpit view of terrain and other hazards via a computer-generated display representation. Two experiments examined several display concepts for synthetic vision and evaluated how such displays modulate pilot performance. Experiment 1 (24 general aviation pilots) compared three navigational display (ND) concepts: 2D coplanar, 3D, and split-screen. Experiment 2 (12 commercial airline pilots) evaluated baseline 'blue sky/brown ground' or synthetic vision-enabled primary flight displays (PFDs) and three ND concepts: 2D coplanar with and without synthetic vision and a dynamic multi-mode rotatable exocentric format. In general, the results pointed to an overall advantage for a split-screen format, whether it be stand-alone (Experiment 1) or available via rotatable viewpoints (Experiment 2). Furthermore, Experiment 2 revealed benefits associated with utilizing synthetic vision in both the PFD and ND representations and the value of combined ego- and exocentric presentations.
Panoramic stereo sphere vision
NASA Astrophysics Data System (ADS)
Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian
2013-01-01
Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system, built from a special combined fish-eye lens module, that is capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). The geometric model, mathematical model and parameter calibration method are proposed in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications that will benefit from PSSV.
Research on three-dimensional reconstruction method based on binocular vision
NASA Astrophysics Data System (ADS)
Li, Jinlin; Wang, Zhihui; Wang, Minjun
2018-03-01
Binocular stereo vision is an important and challenging topic in computer vision, with broad application prospects in many fields such as aerial mapping, vision navigation, motion analysis and industrial inspection. In this paper, research is carried out on binocular stereo camera calibration, image feature extraction and stereo matching. In the binocular stereo camera calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard calibration method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After the feature points are matched, the correspondence between the matched image points and the 3D object points can be established using the calibrated camera parameters, which yields the 3D information.
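For readers who want to reproduce the matching and reconstruction step, a short OpenCV-based sketch is given below; it assumes already-rectified image pairs and a disparity-to-depth matrix Q from calibration, and the SGBM parameter values are illustrative rather than those used in the paper.

    import cv2
    import numpy as np

    # Assumes leftR, rightR are rectified grayscale images and Q is the 4x4
    # disparity-to-depth matrix produced by cv2.stereoRectify during calibration.
    def reconstruct(leftR, rightR, Q):
        sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                     blockSize=5, P1=8 * 5 * 5, P2=32 * 5 * 5)
        disparity = sgbm.compute(leftR, rightR).astype(np.float32) / 16.0
        points_3d = cv2.reprojectImageTo3D(disparity, Q)   # per-pixel (X, Y, Z)
        mask = disparity > 0                               # keep valid matches only
        return points_3d[mask]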
[Interest of non invasive navigation in total knee arthroplasty].
Zorman, D; Leclercq, G; Cabanas, J Juanos; Jennart, H
2015-01-01
During total knee arthroplasty surgery, we use a computerized non-invasive navigation system (BrainLAB VectorVision CT-free) to assess the accuracy of the bone cuts (express navigation). The purpose of this study is to evaluate non-invasive navigation when a total knee arthroplasty is performed with conventional instrumentation. The study is based on forty total knee arthroplasties. The accuracy of the tibial and distal femoral bone cuts, checked by non-invasive navigation, is evaluated prospectively. In our clinical series, with conventional instrumentation we obtained a correction of the mechanical axis in only 90% of cases (N = 36). With non-invasive navigation, we improved the positioning of implants and obtained the desired axiometry in the frontal plane in all cases. Although operative time is increased by about 15 minutes, non-invasive navigation does not induce intraoperative or immediate postoperative complications. Despite the cost of this technology, we believe that the reliability of the procedure is enhanced by a simple and reproducible technique.
ERIC Educational Resources Information Center
Chen, Kan; Stafford, Frank P.
A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…
Real-time MRI-guided needle intervention for cryoablation: a phantom study
NASA Astrophysics Data System (ADS)
Gao, Wenpeng; Jiang, Baichuan; Kacher, Dan F.; Fetics, Barry; Nevo, Erez; Lee, Thomas C.; Jayender, Jagadeesan
2017-03-01
MRI-guided needle intervention for cryoablation is a promising way to relieve pain and treat cancer. However, the limited size of the MRI bore makes it impossible for clinicians to perform the operation inside the bore. Patients have to be moved into the bore for scanning to verify the position of the needle tip and out of the bore to adjust the needle trajectory. Real-time needle tracking, shown in MR images, is important for clinicians to perform the operation more efficiently. In this paper, we have instrumented the cryotherapy needle with an MRI-safe electromagnetic (EM) sensor and an optical sensor to measure the needle's position and orientation. To overcome the line-of-sight limitation of the optical sensor and the poor dynamic performance of the EM sensor, Kalman filter based data fusion is developed. Further, we developed a navigation system in the open-source software 3D Slicer to provide accurate visualization of the needle and the surrounding anatomy. An experiment simulating the needle intervention at the entry point was performed with a realistic spine phantom to quantify the accuracy of the navigation using a retrospective analysis method. Eleven trials of needle insertion were performed independently. The target accuracies of navigation using only the EM sensor, only the optical sensor, and data fusion are 2.27 +/- 1.60 mm, 4.11 +/- 1.77 mm and 1.91 +/- 1.10 mm, respectively.
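The fusion step can be illustrated with a Kalman-style update that blends an EM reading and an optical reading of the needle-tip position according to assumed measurement covariances; the noise values below are placeholders, and the sketch ignores the orientation states and the dynamic model used in the paper.

    import numpy as np

    def fuse_position(x_pred, P_pred, z_em, R_em, z_opt, R_opt):
        """Sequentially apply two position measurements (EM then optical) to a
        predicted 3D tip position x_pred with covariance P_pred."""
        for z, R in ((z_em, R_em), (z_opt, R_opt)):
            S = P_pred + R                          # innovation covariance (H = I)
            K = P_pred @ np.linalg.inv(S)           # Kalman gain
            x_pred = x_pred + K @ (z - x_pred)
            P_pred = (np.eye(3) - K) @ P_pred
        return x_pred, P_pred

    # Example with a noisier optical sample (e.g., a partially occluded marker).
    x, P = fuse_position(np.zeros(3), np.eye(3) * 25.0,
                         np.array([10.2, 0.1, 5.0]), np.eye(3) * 1.0,
                         np.array([11.0, -0.3, 5.4]), np.eye(3) * 4.0)
    print(np.round(x, 2))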
VLSI chips for vision-based vehicle guidance
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1994-02-01
Sensor-based vehicle guidance systems are gathering rapidly increasing interest because of their potential for increasing safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews the research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.
Recursive Gradient Estimation Using Splines for Navigation of Autonomous Vehicles.
1985-07-01
Shen, C. N. US Army Armament Research and Development Center, Large Caliber Weapon Systems Laboratory. Final report, July 1985. Only fragments of the abstract are recoverable: "... which require autonomous vehicles. Essential to these robotic vehicles is an adequate and efficient computer vision system. A potentially more ..."
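Although only fragments of the abstract survive, the title points to estimating terrain gradients by fitting splines to sampled range or elevation data. The toy sketch below fits a cubic spline to an elevation profile and evaluates its derivative, purely to illustrate the spline-gradient idea; it is not the report's recursive formulation, and a smoothing or recursive fit would be needed for noisy data in practice.

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Elevation profile sampled along the vehicle's path (illustrative data).
    x = np.linspace(0.0, 20.0, 41)
    z = 0.5 * np.sin(0.4 * x) + 0.02 * np.random.randn(x.size)

    spline = CubicSpline(x, z)
    grad = spline(x, 1)          # first derivative of the spline = slope estimate
    print("max slope estimate:", float(np.max(np.abs(grad))))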
A Solar Position Sensor Based on Image Vision
Ruelas, Adolfo; Velázquez, Nicolás; Villa-Angulo, Carlos; Rosales, Pedro; Suastegui, José
2017-01-01
Solar collector technologies perform better when the Sun beam direction is normal to the capturing surface; to maintain this condition despite the relative movement of the Sun, solar tracking systems are used. Accordingly, rules and standards require a minimum accuracy for the tracking systems used in solar collector evaluation. Achieving this accuracy is not easy, so this paper presents the design, construction and characterization of a sensor based on a vision system that finds the relative azimuth and elevation errors of the solar surface of interest. With these characteristics, the sensor can be used as a reference in control systems and in their evaluation. The proposed sensor is based on a microcontroller with a real-time clock, inertial measurement sensors, geolocation and a vision sensor, which obtains the angle of incidence of the Sun rays as well as the tilt and position of the sensor. The sensor's characterization showed that a focus error or Sun position can be measured with an accuracy of 0.0426° and an uncertainty of 0.986%, which can be improved to reach an accuracy under 0.01°. The sensor was validated by measuring the focus error of one of the best commercial solar tracking systems, a Kipp & Zonen SOLYS 2. In conclusion, the vision-based solar tracking sensor meets the Sun detection requirements and the accuracy conditions needed for use in solar tracking systems and their evaluation, or as a tracking and orientation tool in photovoltaic installations and solar collectors. PMID:28758935
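One way such a vision sensor can convert the Sun's image position into an angular pointing error is a simple pinhole relation between the sun-spot centroid offset and the focal length; the threshold, focal length, and pixel pitch below are illustrative assumptions, not the published design.

    import numpy as np

    def pointing_error(image, focal_mm=8.0, pixel_mm=0.003, threshold=200):
        """Estimate azimuth/elevation pointing errors (deg) from the sun-spot
        centroid offset relative to the image centre, assuming a pinhole model."""
        ys, xs = np.nonzero(image >= threshold)          # bright sun-spot pixels
        if xs.size == 0:
            return None
        cx, cy = image.shape[1] / 2.0, image.shape[0] / 2.0
        dx_mm = (xs.mean() - cx) * pixel_mm
        dy_mm = (ys.mean() - cy) * pixel_mm
        az_err = np.degrees(np.arctan2(dx_mm, focal_mm))
        el_err = np.degrees(np.arctan2(dy_mm, focal_mm))
        return az_err, el_err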
COBALT CoOperative Blending of Autonomous Landing Technology
NASA Technical Reports Server (NTRS)
Carson, John M. III; Restrepo, Carolina I.; Robertson, Edward A.; Seubert, Carl R.; Amzajerdian, Farzin
2016-01-01
COBALT is a terrestrial test platform for development and maturation of GN&C (Guidance, Navigation and Control) technologies for PL&HA (Precision Landing and Hazard Avoidance). The project is developing a third generation, Langley Navigation Doppler Lidar (NDL) for ultra-precise velocity and range measurements, which will be integrated and tested with the JPL Lander Vision System (LVS) for Terrain Relative Navigation (TRN) position estimates. These technologies together provide navigation that enables controlled precision landing. The COBALT hardware will be integrated in 2017 into the GN&C subsystem of the Xodiac rocket-propulsive Vertical Test Bed (VTB) developed by Masten Space Systems (MSS), and two terrestrial flight campaigns will be conducted: one open-loop (i.e., passive) and one closed-loop (i.e., active).
Overview of Fiber-Optical Sensors
NASA Technical Reports Server (NTRS)
Depaula, Ramon P.; Moore, Emery L.
1987-01-01
Design, development, and sensitivity of sensors using fiber optics reviewed. State-of-the-art and probable future developments of sensors using fiber optics described in report including references to work in field. Serves to update previously published surveys. Systems incorporating fiber-optic sensors used in medical diagnosis, navigation, robotics, sonar, power industry, and industrial controls.
Flight test of a passive millimeter-wave imaging system
NASA Astrophysics Data System (ADS)
Martin, Christopher A.; Manning, Will; Kolinko, Vladimir G.; Hall, Max
2005-05-01
A real-time passive millimeter-wave imaging system with a wide-field of view and 3K temperature sensitivity is described. The system was flown on a UH-1H helicopter in a flight test conducted by the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD). We collected approximately eight hours of data over the course of the two-week flight test. Flight data was collected in horizontal and vertical polarizations at look down angles from 0 to 40 degrees. Speeds varied from 0 to 90 knots and altitudes varied from 0' to 1000'. Targets imaged include roads, freeways, railroads, houses, industrial buildings, power plants, people, streams, rivers, bridges, cars, trucks, trains, boats, planes, runways, treelines, shorelines, and the horizon. The imaging system withstood vibration and temperature variations, but experienced some RF interference. The flight test demonstrated the system's capabilities as an airborne navigation and surveillance aid. It also performed in a personnel recovery scenario.
Pre-Capture Privacy for Small Vision Sensors.
Pittaluga, Francesco; Koppal, Sanjeev Jagannatha
2017-11-01
The next wave of micro and nano devices will create a world with trillions of small networked cameras. This will lead to increased concerns about privacy and security. Most privacy preserving algorithms for computer vision are applied after image/video data has been captured. We propose to use privacy preserving optics that filter or block sensitive information directly from the incident light-field before sensor measurements are made, adding a new layer of privacy. In addition to balancing the privacy and utility of the captured data, we address trade-offs unique to miniature vision sensors, such as achieving high-quality field-of-view and resolution within the constraints of mass and volume. Our privacy preserving optics enable applications such as depth sensing, full-body motion tracking, people counting, blob detection and privacy preserving face recognition. While we demonstrate applications on macro-scale devices (smartphones, webcams, etc.) our theory has impact for smaller devices.
Siegelaar, Sarah E; Barwari, Temo; Hermanides, Jeroen; van der Voort, Peter H J; Hoekstra, Joost B L; DeVries, J Hans
2013-11-01
Continuous glucose monitoring could be helpful for glucose regulation in critically ill patients; however, its accuracy is uncertain and might be influenced by microcirculation. We investigated the microcirculation and its relation to the accuracy of 2 continuous glucose monitoring devices in patients after cardiac surgery. The present prospective, observational study included 60 patients admitted for cardiac surgery. Two continuous glucose monitoring devices (Guardian Real-Time and FreeStyle Navigator) were placed before surgery. The relative absolute deviation between continuous glucose monitoring and the arterial reference glucose was calculated to assess the accuracy. Microcirculation was measured using the microvascular flow index, perfused vessel density, and proportion of perfused vessels using sublingual sidestream dark-field imaging, and tissue oxygenation using near-infrared spectroscopy. The associations were assessed using a linear mixed-effects model for repeated measures. The median relative absolute deviation of the Navigator was 11% (interquartile range, 8%-16%) and of the Guardian was 14% (interquartile range, 11%-18%; P = .05). Tissue oxygenation significantly increased during the intensive care unit admission (maximum 91.2% [3.9] after 6 hours) and decreased thereafter, stabilizing after 20 hours. A decrease in perfused vessel density accompanied the increase in tissue oxygenation. Microcirculatory variables were not associated with sensor accuracy. A lower peripheral temperature (Navigator, b = -0.008, P = .003; Guardian, b = -0.006, P = .048) and, for the Navigator, a higher Acute Physiology and Chronic Health Evaluation IV predicted mortality (b = 0.017, P < .001) and age (b = 0.002, P = .037) were associated with decreased sensor accuracy. The results of the present study have shown acceptable accuracy for both sensors in patients after cardiac surgery. The microcirculation was impaired to a limited extent compared with that in patients with sepsis and healthy controls. This impairment was not related to sensor accuracy; however, peripheral temperature was related to accuracy for both sensors, and patient age and Acute Physiology and Chronic Health Evaluation IV predicted mortality were related to accuracy for the Navigator. Copyright © 2013 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
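The accuracy metric used above, the relative absolute deviation (RAD) between the sensor and the arterial reference glucose, is simple to compute; the sketch below is a generic illustration with made-up sample values, not the study's analysis code.

    import numpy as np

    def relative_absolute_deviation(sensor, reference):
        """RAD (%) per paired sample: |sensor - reference| / reference * 100."""
        sensor = np.asarray(sensor, dtype=float)
        reference = np.asarray(reference, dtype=float)
        return np.abs(sensor - reference) / reference * 100.0

    rad = relative_absolute_deviation([6.2, 8.1, 5.4], [6.8, 7.5, 5.5])
    print("median RAD = %.1f%%" % np.median(rad))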
Systematic methods for knowledge acquisition and expert system development
NASA Technical Reports Server (NTRS)
Belkin, Brenda L.; Stengel, Robert F.
1991-01-01
Nine cooperating rule-based systems, collectively called AUTOCREW, were designed to automate functions and decisions associated with a combat aircraft's subsystem. The organization of tasks within each system is described; performance metrics were developed to evaluate the workload of each rule base, and to assess the cooperation between the rule-bases. Each AUTOCREW subsystem is composed of several expert systems that perform specific tasks. AUTOCREW's NAVIGATOR was analyzed in detail to understand the difficulties involved in designing the system and to identify tools and methodologies that ease development. The NAVIGATOR determines optimal navigation strategies from a set of available sensors. A Navigation Sensor Management (NSM) expert system was systematically designed from Kalman filter covariance data; four ground-based, a satellite-based, and two on-board INS-aiding sensors were modeled and simulated to aid an INS. The NSM Expert was developed using the Analysis of Variance (ANOVA) and the ID3 algorithm. Navigation strategy selection is based on an RSS position error decision metric, which is computed from the covariance data. Results show that the NSM Expert predicts position error correctly between 45 and 100 percent of the time for a specified navaid configuration and aircraft trajectory. The NSM Expert adapts to new situations, and provides reasonable estimates of hybrid performance. The systematic nature of the ANOVA/ID3 method makes it broadly applicable to expert system design when experimental or simulation data is available.
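The decision metric described above, an RSS position error computed from Kalman filter covariance data, can be sketched as follows; the covariance layout (first three states are position) and the candidate-configuration dictionary are illustrative assumptions, not the NSM Expert's actual rule base.

    import numpy as np

    def rss_position_error(P):
        """Root-sum-square 1-sigma position error from a filter covariance whose
        first three diagonal entries are the position variances."""
        return float(np.sqrt(np.trace(P[:3, :3])))

    def select_navaid(configs):
        """Pick the navaid configuration with the smallest RSS position error.
        configs maps a configuration name to its covariance matrix."""
        return min(configs, key=lambda name: rss_position_error(configs[name]))

    # Hypothetical covariances for two candidate navaid configurations.
    configs = {"TACAN+INS": np.diag([900.0, 900.0, 2500.0, 1.0, 1.0, 1.0]),
               "GPS+INS":   np.diag([25.0, 25.0, 64.0, 0.1, 0.1, 0.1])}
    print(select_navaid(configs))   # -> GPS+INS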
Deep hierarchies in the primate visual cortex: what can we learn for computer vision?
Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz
2013-08-01
Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
Integrated INS/GPS Navigation from a Popular Perspective
NASA Technical Reports Server (NTRS)
Omerbashich, Mensur
2002-01-01
Inertial navigation, blended with other navigation aids, the Global Positioning System (GPS) in particular, has gained significance due to enhanced navigation and inertial reference performance and dissimilarity for fault tolerance and anti-jamming. Relatively new concepts based upon using Differential GPS (DGPS) blended with Inertial (and visual) Navigation Sensors (INS) offer the possibility of low-cost, autonomous aircraft landing. The FAA has decided to implement the system in a sophisticated form as a new standard navigation tool during this decade. There have been a number of new inertial sensor concepts in the recent past that emphasize increased accuracy of INS/GPS versus INS and reliability of navigation, as well as lower size and weight, and higher power, fault tolerance, and long life. The principles of GPS are not discussed; rather the attention is directed towards general concepts and comparative advantages. A short introduction to the problems faced in kinematics is presented. The intention is to relate the basic principles of kinematics to probably the most used navigation method of the future: INS/GPS. An example of an airborne INS is presented, with emphasis on how it works. A discussion of the error types and sources in navigation, and of the role of filters in optimal estimation of the errors, then follows. The main question this paper is trying to answer is 'What are the benefits of the integration of INS and GPS, and how is this navigation concept of the future achieved in reality?' The main goal is to communicate the idea of what stands behind a modern navigation method.
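The benefit of blending can be seen even in a one-dimensional toy model in which a position estimate propagated from biased inertial data is corrected by noisy GPS fixes through a scalar Kalman update; all values below are illustrative, and real INS/GPS integration uses a full error-state filter.

    import numpy as np

    def ins_gps_blend(acc_bias=0.05, dt=0.1, steps=600, r_gps=4.0):
        """1D toy model: an INS double-integrates a biased acceleration and
        drifts; a scalar Kalman update with periodic GPS fixes bounds the drift."""
        x_true = v_true = 0.0
        x_ins = v_ins = 0.0
        x_kf, p_kf = 0.0, 1.0
        for k in range(steps):
            a_true = 0.2 * np.sin(0.01 * k)
            v_true += a_true * dt
            x_true += v_true * dt
            v_ins += (a_true + acc_bias) * dt      # biased accelerometer
            x_ins += v_ins * dt                    # unaided INS position drifts
            x_kf += v_ins * dt                     # propagate blended estimate
            p_kf += 0.01
            if k % 10 == 0:                        # periodic GPS position fix
                z = x_true + np.random.normal(0.0, np.sqrt(r_gps))
                gain = p_kf / (p_kf + r_gps)
                x_kf += gain * (z - x_kf)
                p_kf *= (1.0 - gain)
        return abs(x_ins - x_true), abs(x_kf - x_true)

    print("INS-only vs blended position error (m):", ins_gps_blend())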
Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen
2017-01-01
Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous and high-precision method, which has been widely used to improve the hitting accuracy and quick reaction capability of near-Earth flight vehicles. The installation errors between the SINS and the star sensors have been one of the main factors restricting the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on star vector observations is derived that takes the star sensor installation error into account. Then, the star sensor installation error is accurately estimated based on Kalman Filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained from the linearized observation equation, and the observability conditions are presented and validated: the number of star vectors should be greater than or equal to 2, and the number of attitude (posture) adjustments should also be greater than or equal to 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of the SINS are less than 7 arc-seconds. This analysis method and conclusion are useful in the ballistic trajectory design of near-Earth flight vehicles. PMID:28275211
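The observability condition stated above (at least two star vectors and at least two attitude adjustments) is typically checked numerically by stacking the linearized observation matrices and testing the rank; the sketch below does this for a generic stacked matrix and is an illustration of the check only, with random placeholders standing in for the real Jacobians.

    import numpy as np

    def is_locally_observable(H_blocks, n_states):
        """Stack the linearized observation matrices collected over the maneuver
        and compare the numerical rank with the state dimension."""
        H = np.vstack(H_blocks)                 # each block is (m_i, n_states)
        return np.linalg.matrix_rank(H) == n_states

    # Two star-vector observations taken before and after an attitude adjustment.
    rng = np.random.default_rng(0)
    blocks = [rng.standard_normal((3, 6)), rng.standard_normal((3, 6))]
    print(is_locally_observable(blocks, n_states=6))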
A robust vision-based sensor fusion approach for real-time pose estimation.
Assa, Akbar; Janabi-Sharifi, Farrokh
2014-02-01
Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
Radar range data signal enhancement tracker
NASA Technical Reports Server (NTRS)
1975-01-01
The design, fabrication, and performance characteristics of two digital data signal enhancement filters are described; the filters are capable of being inserted between the Space Shuttle Navigation Sensor outputs and the guidance computer. Commonality of interfaces has been stressed so that the filters may be evaluated through operation with simulated sensors or with actual prototype sensor hardware. The filters provide both smoothed range and range rate outputs. Different conceptual approaches are utilized for each filter. The first filter is based on a combination of a low-pass nonrecursive filter and a cascaded simple average smoother for range and range rate, respectively. Filter number two is a tracking filter capable of following transient data of the type encountered during burn periods. A test simulator was also designed which generates typical Shuttle navigation sensor data.
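The first filter's structure, a nonrecursive (FIR) low-pass on range cascaded with a simple average smoother for range rate, can be sketched as follows; the tap count, window length, and sample period are illustrative values, not the report's design parameters.

    import numpy as np

    def smooth_range_and_rate(range_samples, dt=0.1, n_taps=9, avg_window=5):
        """FIR moving-average low-pass on range, then a simple average smoother
        applied to the finite-difference range rate."""
        r = np.asarray(range_samples, dtype=float)
        lp = np.convolve(r, np.ones(n_taps) / n_taps, mode="valid")       # smoothed range
        rate = np.diff(lp) / dt                                           # raw range rate
        rate_s = np.convolve(rate, np.ones(avg_window) / avg_window, mode="valid")
        return lp, rate_s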
Meng, Zhijun; Yang, Jun; Guo, Xiye; Zhou, Yongbin
2017-01-01
Global Navigation Satellite System performance can be significantly enhanced by introducing inter-satellite links (ISLs) in navigation constellation. The improvement in position, velocity, and time accuracy as well as the realization of autonomous functions requires ISL distance measurement data as the original input. To build a high-performance ISL, the ranging consistency among navigation satellites is an urgent problem to be solved. In this study, we focus on the variation in the ranging delay caused by the sensitivity of the ISL payload equipment to the ambient temperature in space and propose a simple and low-power temperature-sensing ranging compensation sensor suitable for onboard equipment. The experimental results show that, after the temperature-sensing ranging compensation of the ISL payload equipment, the ranging consistency becomes less than 0.2 ns when the temperature change is 90 °C. PMID:28608809
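A temperature-sensing ranging compensation of the kind described can be illustrated by calibrating a polynomial delay-versus-temperature curve on the ground and subtracting its prediction in flight; the polynomial order and calibration data below are placeholders, not the sensor's actual calibration.

    import numpy as np

    # Ground calibration: extra ranging delay (ns) vs. equipment temperature (deg C).
    temps_cal = np.array([-40.0, -20.0, 0.0, 20.0, 40.0, 50.0])
    delay_cal = np.array([1.35, 1.02, 0.71, 0.40, 0.12, 0.00])
    coeffs = np.polyfit(temps_cal, delay_cal, deg=2)      # quadratic fit is an assumption

    def compensate(raw_range_ns, temp_c):
        """Subtract the temperature-dependent equipment delay from a raw ISL
        ranging measurement (both expressed in nanoseconds of signal delay)."""
        return raw_range_ns - np.polyval(coeffs, temp_c)

    print(round(compensate(35000.70, -10.0), 3))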
Absolute Navigation Performance of the Orion Exploration Flight Test 1
NASA Technical Reports Server (NTRS)
Zanetti, Renato; Holt, Greg; Gay, Robert; D'Souza, Christopher; Sud, Jastesh
2016-01-01
Launched in December 2014 atop a Delta IV Heavy from the Kennedy Space Center, the Orion vehicle's Exploration Flight Test-1 (EFT-1) successfully completed the objective to stress the system by placing the un-crewed vehicle on a high-energy parabolic trajectory replicating conditions similar to those that would be experienced when returning from an asteroid or a lunar mission. Unique challenges associated with designing the navigation system for EFT-1 are presented with an emphasis on how redundancy and robustness influenced the architecture. Two Inertial Measurement Units (IMUs), one GPS receiver and three barometric altimeters (BALTs) comprise the navigation sensor suite. The sensor data is multiplexed using conventional integration techniques and the state estimate is refined by the GPS pseudorange and deltarange measurements in an Extended Kalman Filter (EKF) that employs UDU factorization. The performance of the navigation system during flight is presented to substantiate the design.
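The UDU factorization mentioned above keeps the filter covariance in the form P = U D U^T, with U unit upper triangular and D diagonal, which improves numerical conditioning compared with propagating P directly. A minimal factorization routine is sketched below as an illustration of the decomposition itself; it is not the flight software.

    import numpy as np

    def ud_factorize(P):
        """Factor a symmetric positive-definite covariance P as U @ D @ U.T with
        U unit upper triangular and D diagonal (the form used by UDU filters)."""
        P = np.array(P, dtype=float)
        n = P.shape[0]
        U, d = np.eye(n), np.zeros(n)
        for j in range(n - 1, 0, -1):
            d[j] = P[j, j]
            alpha = 1.0 / d[j]
            for k in range(j):
                beta = P[k, j]
                U[k, j] = alpha * beta
                for i in range(k + 1):
                    P[i, k] -= beta * U[i, j]
        d[0] = P[0, 0]
        return U, np.diag(d)

    P = np.array([[4.0, 2.0], [2.0, 3.0]])
    U, D = ud_factorize(P)
    print(np.allclose(U @ D @ U.T, P))   # True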