Sample records for vision-based navigation

  1. Multi-Purpose Avionic Architecture for Vision Based Navigation Systems for EDL and Surface Mobility Scenarios

    NASA Astrophysics Data System (ADS)

    Tramutola, A.; Paltro, D.; Cabalo Perucha, M. P.; Paar, G.; Steiner, J.; Barrio, A. M.

    2015-09-01

    Vision Based Navigation (VBNAV) has been identified as a valid technology to support space exploration because it can improve the autonomy and safety of space missions. Several mission scenarios can benefit from VBNAV: Rendezvous & Docking, Fly-Bys, Interplanetary Cruise, Entry Descent and Landing (EDL) and Planetary Surface Exploration. For some of these, VBNAV can improve the accuracy of state estimation as an additional relative navigation sensor or as an absolute navigation sensor. For others, such as surface mobility and terrain exploration for path identification and planning, VBNAV is mandatory. This paper presents the general avionic architecture of a Vision Based System as defined in the frame of the ESA R&T study “Multi-purpose Vision-based Navigation System Engineering Model - part 1 (VisNav-EM-1)”, with special focus on the surface mobility application.

  2. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems.

    PubMed

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-12-17

    Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as those relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) by exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is on outdoor environments, and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit accurate attitude information that is independent of magnetic and inertial sensors.
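
    The sensor fusion step described here, folding the DGPS/Vision-derived measurement into an Extended Kalman Filter alongside the inertial/magnetic/GPS solution, can be illustrated with a generic EKF measurement update. The sketch below is not the paper's filter: the state layout, measurement model, and noise values are placeholder assumptions.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update.

    x : (n,) state estimate          P : (n, n) state covariance
    z : (m,) measurement             h : callable, predicted measurement h(x)
    H : (m, n) measurement Jacobian  R : (m, m) measurement noise covariance
    """
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Illustrative use: fuse a 3-D position-type fix (e.g. derived from the
# DGPS/vision baseline) into a hypothetical [px, py, pz, vx, vy, vz] state.
x = np.zeros(6)
P = np.eye(6)
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # measurement observes position only
R = np.diag([0.5, 0.5, 1.0]) ** 2              # assumed noise std devs (m), squared
z = np.array([1.2, -0.4, 10.0])
x, P = ekf_update(x, P, z, lambda s: H @ s, H, R)
```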

  3. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    PubMed Central

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-01-01

    Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as those relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) by exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is on outdoor environments, and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit accurate attitude information that is independent of magnetic and inertial sensors. PMID:27999318

  4. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.

    PubMed

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-28

    Vision navigation determines position and attitude through real-time processing of data collected from imaging sensors, and it can operate without a high-performance global positioning system (GPS) or inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far-space navigation, and multiple-sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach that is aided by imaging sensors and uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multi-sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image search and retrieval. Third, a robust image matching algorithm is presented to search for and match a real-time image with the GRID. The image matched with the real-time scene is then used to calculate the 3D navigation parameters of the multi-sensor platform. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in the horizontal plane and 1.8 m in height during GPS outages of up to 5 min and 1500 m of travel.
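
    The retrieval-and-matching step, searching the geo-referenced database for the image that best matches the live camera view, can be sketched with standard local-feature matching. The ORB/brute-force pipeline, distance threshold, and file-path interface below are illustrative assumptions, not the matching algorithm from the paper.

```python
import cv2

def best_grid_match(query_path, grid_paths, max_features=1000):
    """Rank geo-referenced images by the number of good ORB matches to the query."""
    orb = cv2.ORB_create(nfeatures=max_features)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, q_desc = orb.detectAndCompute(query, None)

    best_path, best_score = None, -1
    for path in grid_paths:
        ref = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, r_desc = orb.detectAndCompute(ref, None)
        if q_desc is None or r_desc is None:
            continue
        matches = matcher.match(q_desc, r_desc)
        # Count only reasonably close descriptor matches (threshold is arbitrary).
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_path, best_score = path, score
    return best_path, best_score
```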

  5. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database

    PubMed Central

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-01

    Vision navigation determines position and attitude through real-time processing of data collected from imaging sensors, and it can operate without a high-performance global positioning system (GPS) or inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far-space navigation, and multiple-sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach that is aided by imaging sensors and uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multi-sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image search and retrieval. Third, a robust image matching algorithm is presented to search for and match a real-time image with the GRID. The image matched with the real-time scene is then used to calculate the 3D navigation parameters of the multi-sensor platform. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in the horizontal plane and 1.8 m in height during GPS outages of up to 5 min and 1500 m of travel. PMID:26828496

  6. An Autonomous Gps-Denied Unmanned Vehicle Platform Based on Binocular Vision for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.

    2018-04-01

    Vision-based navigation has become an attractive solution for autonomous navigation in planetary exploration. This paper presents our work on designing and building an autonomous, vision-based, GPS-denied unmanned vehicle and on developing an ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software package for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master computer, and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has the potential for future planetary exploration.
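
    At the heart of the 3D reconstruction module of any stereo VO pipeline of this kind is recovering depth from the disparity of matched left/right features. The rectified-stereo sketch below is a generic illustration; the focal length, principal point, baseline, and pixel coordinates are placeholder values, not parameters from this system.

```python
import numpy as np

def stereo_point(u_left, u_right, v, fx, cx, cy, baseline):
    """3-D point in the left-camera frame from a rectified stereo match.

    Assumes rectified images, so the match lies on the same row v and the
    disparity d = u_left - u_right is positive.
    """
    d = float(u_left - u_right)
    if d <= 0:
        raise ValueError("non-positive disparity")
    z = fx * baseline / d                 # depth from disparity
    x = (u_left - cx) * z / fx
    y = (v - cy) * z / fx                 # square pixels assumed (fy == fx)
    return np.array([x, y, z])

# Illustrative values only: 700 px focal length, 12 cm baseline.
print(stereo_point(412.0, 398.0, 240.0, fx=700.0, cx=320.0, cy=240.0, baseline=0.12))
```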

  7. Low computation vision-based navigation for a Martian rover

    NASA Technical Reports Server (NTRS)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  8. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    NASA Astrophysics Data System (ADS)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

    Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision highly depends on both the accuracy of the database used and the integrity of the navigation data. Especially in GPS-based systems, however, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper is the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For the integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed, using neither precision navigation nor detailed database information, to determine the aircraft's position relative to the runway. The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.

  9. Perception for mobile robot navigation: A survey of the state of the art

    NASA Technical Reports Server (NTRS)

    Kortenkamp, David

    1994-01-01

    In order for mobile robots to navigate safely in unmapped and dynamic environments, they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state of the art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes several competing sonar-based obstacle avoidance techniques and compares them. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class of approaches triangulates using fixed, artificial landmarks. A third class of approaches builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.

  10. Parametric study of sensor placement for vision-based relative navigation system of multiple spacecraft

    NASA Astrophysics Data System (ADS)

    Jeong, Junho; Kim, Seungkeun; Suk, Jinyoung

    2017-12-01

    In order to overcome the limited range of GPS-based techniques, vision-based relative navigation methods have recently emerged as alternative approaches for high Earth orbit (HEO) or deep space missions. Various vision-based relative navigation systems are therefore used for proximity operations between two spacecraft. In implementing these systems, a sensor placement problem can arise on the exterior of the spacecraft due to its limited space. To deal with the sensor placement, this paper proposes a novel methodology for vision-based relative navigation based on multiple position sensitive diode (PSD) sensors and multiple infrared beacon modules. The proposed method uses an iterated parametric study based on farthest point optimization (FPO) and a constrained extended Kalman filter (CEKF). These algorithms are applied to set the locations of the sensors and to estimate relative positions and attitudes for each combination of PSDs and beacons. Scores for the sensor placement are then calculated with respect to three parameters: the number of PSDs, the number of beacons, and the accuracy of the relative estimates. The best-scoring candidate is selected as the sensor placement. Moreover, the results of the iterated estimation show that the accuracy improves dramatically as the number of PSDs increases from one to three.
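
    Farthest point optimization over candidate mounting locations is closely related to greedy farthest-point sampling: repeatedly pick the candidate farthest from everything already selected. The sketch below shows only that greedy rule; the candidate points, their number, and the choice of Euclidean distance are illustrative assumptions, not the paper's FPO formulation.

```python
import numpy as np

def farthest_point_sampling(candidates, k, seed=0):
    """Greedily pick k candidate locations that are mutually far apart.

    candidates : (N, 3) array of candidate sensor locations
    Returns the indices of the chosen locations.
    """
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(candidates)))]
    dist = np.linalg.norm(candidates - candidates[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dist))        # farthest from the already-chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(candidates - candidates[nxt], axis=1))
    return chosen

# Fake candidate points on a 1 m cube exterior, purely for illustration.
pts = np.random.default_rng(1).uniform(-0.5, 0.5, size=(50, 3))
print(farthest_point_sampling(pts, k=4))
```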

  11. Vision-based navigation in a dynamic environment for virtual human

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Sun, Ji-Zhou; Zhang, Jia-Wan; Li, Ming-Chu

    2004-06-01

    Intelligent virtual humans are widely required in computer games, ergonomics software, virtual environments, and so on. We present a vision-based behavior modeling method to realize smart navigation in a dynamic environment. This behavior model can be divided into three modules: vision, global planning, and local planning. Vision is the only channel through which the smart virtual actor gets information from the outside world. The global and local planning modules then use the A* and D* algorithms to find a path for the virtual human in a dynamic environment. Finally, experiments on our test platform (Smart Human System) verify the feasibility of this behavior model.
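
    For reference, the global planner named here (A*) can be written compactly for a 4-connected grid. The grid, unit step cost, Manhattan heuristic, and start/goal below are illustrative assumptions; the paper's own planner is described only at the level of the abstract.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid; grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), start)]
    came_from, g_cost, closed = {}, {start: 0}, set()
    while open_set:
        _, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                       # reconstruct the path back to start
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[node] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = node
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```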

  12. Draper Laboratory small autonomous aerial vehicle

    NASA Astrophysics Data System (ADS)

    DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.

    1997-06-01

    The Charles Stark Draper Laboratory, Inc. and students from the Massachusetts Institute of Technology and Boston University have cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture, and subsystem designs for the entry. The entry represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, an inertial measurement unit, a sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground. A ground-based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.

  13. Improving Car Navigation with a Vision-Based System

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems or autonomous vehicles. Since current systems mostly depend on GPS and map-matching techniques, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS receiver and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera and thus those of the car. These image georeferencing results are combined with other sensory data within a sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were not available at all during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
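
    Single photo resection, recovering camera position and attitude from known ground points observed in one image, is essentially the perspective-n-point problem. A minimal OpenCV-based sketch is given below; the control points, pixel measurements, and camera intrinsics are invented placeholders and do not come from the paper.

```python
import numpy as np
import cv2

# Hypothetical ground control points (metres, local map frame) and their
# measured pixel coordinates in one image from the on-board camera.
object_points = np.array([[0, 0, 0], [10, 0, 0], [10, 5, 0], [0, 5, 0],
                          [5, 2, 3]], dtype=np.float64)
image_points = np.array([[320, 400], [620, 410], [610, 250], [330, 240],
                         [470, 300]], dtype=np.float64)

K = np.array([[800, 0, 480],      # assumed pinhole intrinsics (fx, fy, cx, cy)
              [0, 800, 320],
              [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)                # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)                # rotation: map frame -> camera frame
    camera_position = (-R.T @ tvec).ravel()   # camera centre in the map frame
    print("estimated camera position:", camera_position)
```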

  14. Improving Car Navigation with a Vision-Based System

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems or autonomous vehicles. Since current systems mostly depend on GPS and map-matching techniques, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS receiver and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera and thus those of the car. These image georeferencing results are combined with other sensory data within a sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were not available at all during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.

  15. Integrated navigation, flight guidance, and synthetic vision system for low-level flight

    NASA Astrophysics Data System (ADS)

    Mehler, Felix E.

    2000-06-01

    Future military transport aircraft will require a new approach to the avionics suite in order to fulfill an ever-changing variety of missions. The most demanding phases of these missions are typically the low-level flight segments, including tactical terrain following/avoidance, payload drop, and/or on-board autonomous landing at forward operating strips without ground-based infrastructure. As a consequence, individual components and systems must become more integrated to offer a higher degree of reliability, integrity, flexibility, and autonomy over existing systems while reducing crew workload. The integration of digital terrain data not only introduces synthetic vision into the cockpit, but also enhances navigation and guidance capabilities. At DaimlerChrysler Aerospace AG Military Aircraft Division (Dasa-M), an integrated navigation, flight guidance, and synthetic vision system based on digital terrain data has been developed to fulfill the requirements of the Future Transport Aircraft (FTA). The fusion of three independent navigation sensors provides a more reliable and precise solution to both the 4D flight guidance and the display components, which comprise a Head-Up and a Head-Down Display with synthetic vision. This paper presents the system, its integration into the DLR's VFW 614 Advanced Technology Testing Aircraft System (ATTAS), and the results of the flight-test campaign.

  16. Kernelized Locality-Sensitive Hashing for Fast Image Landmark Association

    DTIC Science & Technology

    2011-03-24

    based Simultaneous Localization and Mapping (SLAM). The problem, however, is that vision-based navigation techniques can require excessive amounts of... up and optimizing the data association process in vision-based SLAM. Specifically, this work studies the current methods that algorithms use to... required for location identification than that of other methods. This work can then be extended into a vision-SLAM implementation to subsequently
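
    The record above is a truncated indexing snippet, but the underlying idea, locality-sensitive hashing for fast approximate association of image features, can be illustrated with a plain random-hyperplane LSH (not the kernelized variant the thesis studies). Everything in the sketch, from the descriptor dimensionality to the number of hash bits, is an assumption for illustration.

```python
import numpy as np

def lsh_signatures(descriptors, n_bits=32, seed=0):
    """Random-hyperplane LSH: hash real-valued descriptors to n_bits-bit codes.

    Descriptors whose signatures share many bits are likely (not guaranteed)
    to be close in the original space, so only those need exact comparison.
    """
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((descriptors.shape[1], n_bits))
    bits = (descriptors @ planes) > 0               # sign of each projection
    return [int("".join("1" if b else "0" for b in row), 2) for row in bits]

desc = np.random.default_rng(1).standard_normal((5, 128))   # fake SIFT-like vectors
print(lsh_signatures(desc))
```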

  17. Neural Network-Based Landmark Recognition and Navigation with IAMRs. Understanding the Principles of Thought and Behavior.

    ERIC Educational Resources Information Center

    Doty, Keith L.

    1999-01-01

    Research on neural networks and hippocampal function demonstrating how mammals construct mental maps and develop navigation strategies is being used to create Intelligent Autonomous Mobile Robots (IAMRs). Such robots are able to recognize landmarks and navigate without "vision." (SK)

  18. Application of aircraft navigation sensors to enhanced vision systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.

    1993-01-01

    In this presentation, the applicability of various aircraft navigation sensors to enhanced vision system design is discussed. First, the accuracy requirements of the FAA for precision landing systems are presented, followed by the current navigation systems and their characteristics. These systems include Instrument Landing System (ILS), Microwave Landing System (MLS), Inertial Navigation, Altimetry, and Global Positioning System (GPS). Finally, the use of navigation system data to improve enhanced vision systems is discussed. These applications include radar image rectification, motion compensation, and image registration.

  19. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

    PubMed

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2017-09-01

    Visual neuroprostheses are still limited, and simulated prosthetic vision (SPV) is used to evaluate the potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirements on visual neuroprosthesis characteristics to restore various functions such as reading, object and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance, but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current arrays of electrodes is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low-resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, the viewing distance was limited to 3, 6, or 9 m. In the second strategy, the rendering was based not on the brightness of the image pixels but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environment were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. These results show that low-resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information about the environment. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
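
    The control rendering described here, resizing the camera image to the electrode array according to average pixel brightness, amounts to block-averaging the image down to a 15 × 18 grid. The numpy sketch below illustrates only that resizing step; the input resolution and the exact blocking scheme are assumptions.

```python
import numpy as np

def phosphene_grid(image, rows=15, cols=18):
    """Average-brightness downsampling of a grayscale image to an electrode grid."""
    h, w = image.shape
    grid = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = image[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            grid[r, c] = block.mean()     # one brightness value per electrode
    return grid

frame = np.random.default_rng(0).integers(0, 256, size=(480, 640)).astype(float)
print(phosphene_grid(frame).shape)        # (15, 18)
```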

  20. Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase

    PubMed Central

    Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling

    2015-01-01

    In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where GPS visibility is often low, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. Nevertheless, the challenging issue is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate. PMID:26378533
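
    Conventional snapshot RAIM of the kind being extended here reduces to a consistency test on the least-squares residuals of an over-determined measurement set. The sketch below shows only that generic test; the measurement model, noise level, and false-alarm probability are assumed values, and the VA-RAIM specifics (landmark pseudo-satellites, calibration) are not modeled.

```python
import numpy as np
from scipy.stats import chi2

def raim_test(G, rho, sigma=5.0, p_false_alarm=1e-3):
    """Chi-square consistency test on least-squares navigation residuals.

    G   : (m, n) linearised geometry/measurement matrix (needs m >= n + 1)
    rho : (m,) linearised measurement vector (e.g. pseudorange residuals, metres)
    Returns (passed, test_statistic, threshold).
    """
    x, *_ = np.linalg.lstsq(G, rho, rcond=None)   # least-squares navigation fix
    r = rho - G @ x                               # post-fit residuals
    t = float(r @ r) / sigma**2                   # normalised sum of squares
    dof = G.shape[0] - G.shape[1]                 # measurement redundancy
    threshold = chi2.ppf(1.0 - p_false_alarm, dof)
    return t <= threshold, t, threshold

# Hypothetical example: 4 unknowns (x, y, z, clock), 7 measurements in view.
rng = np.random.default_rng(0)
G = rng.standard_normal((7, 4))
rho = G @ np.array([10.0, -5.0, 2.0, 1.0]) + rng.normal(0, 5.0, size=7)
print(raim_test(G, rho))
```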

  1. Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase.

    PubMed

    Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling

    2015-09-10

    In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where GPS visibility is often low, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. Nevertheless, the challenging issue is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate.

  2. Precision of computer-assisted core decompression drilling of the knee.

    PubMed

    Beckmann, J; Goetz, J; Bäthis, H; Kalteis, T; Grifka, J; Perlick, L

    2006-06-01

    Core decompression by exact drilling into the ischemic areas is the treatment of choice in early stages of osteonecrosis of the femoral condyle. Computer-aided surgery might enhance the precision of the drilling and lower the radiation exposure time for both staff and patients. The aim of this study was to evaluate the precision of the fluoroscopically based VectorVision navigation system in an in vitro model. Thirty sawbones were prepared with a defect filled with a radiopaque gypsum sphere mimicking the osteonecrosis. Twenty sawbones were drilled under guidance of the intraoperative navigation system VectorVision (BrainLAB, Munich, Germany); ten sawbones were drilled under fluoroscopic control only. A statistically significant difference was found in the distance to the desired mid-point of the lesion, with a mean of 0.58 mm in the navigated group versus 0.98 mm in the control group. Significant differences were also found in the number of drilling corrections and in the radiation time needed. The fluoroscopy-based VectorVision navigation system thus shows high feasibility and precision for computer-guided drilling, with a simultaneous reduction in radiation time, and could therefore be integrated into clinical routine.

  3. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    NASA Astrophysics Data System (ADS)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on the use of visual odometry. The results of an experiment identifying errors in the calculated distance traveled caused by wheel slip are presented. It is shown that the use of computer vision allows one to correct the erroneous coordinates of the robot with the help of artificial landmarks. The control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board computer, the Raspberry Pi 3. The results of an experiment on mobile robot navigation with the use of this control system are presented.

  4. Neural correlates of virtual route recognition in congenital blindness.

    PubMed

    Kupers, Ron; Chebat, Daniel R; Madsen, Kristoffer H; Paulson, Olaf B; Ptito, Maurice

    2010-07-13

    Despite the importance of vision for spatial navigation, blind subjects retain the ability to represent spatial information and to move independently in space to localize and reach targets. However, the neural correlates of navigation in subjects lacking vision remain elusive. We therefore used functional MRI (fMRI) to explore the cortical network underlying successful navigation in blind subjects. We first trained congenitally blind and blindfolded sighted control subjects to perform a virtual navigation task with the tongue display unit (TDU), a tactile-to-vision sensory substitution device that translates a visual image into electrotactile stimulation applied to the tongue. After training, participants repeated the navigation task during fMRI. Although both groups successfully learned to use the TDU in the virtual navigation task, the brain activation patterns showed substantial differences. Blind but not blindfolded sighted control subjects activated the parahippocampus and visual cortex during navigation, areas that are recruited during topographical learning and spatial representation in sighted subjects. When the navigation task was performed under full vision in a second group of sighted participants, the activation pattern strongly resembled the one obtained in the blind when using the TDU. This suggests that in the absence of vision, cross-modal plasticity permits the recruitment of the same cortical network used for spatial navigation tasks in sighted subjects.

  5. Vision-based mapping with cooperative robots

    NASA Astrophysics Data System (ADS)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
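
    Occupancy grid maps of the kind these robots build and share are commonly maintained as per-cell log-odds values that are nudged up or down as stereo range returns arrive. The single-cell sketch below is only a generic illustration of that update; the hit/miss probabilities are made-up values, not parameters from this system.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def update_cell(l_prior, hit, p_hit=0.7, p_miss=0.4):
    """Bayesian log-odds update for one occupancy-grid cell."""
    return l_prior + (logit(p_hit) if hit else logit(p_miss))

l = 0.0                                   # log-odds 0 == probability 0.5 (unknown)
for observed_hit in [True, True, False, True]:
    l = update_cell(l, observed_hit)
probability_occupied = 1.0 - 1.0 / (1.0 + np.exp(l))
print(round(probability_occupied, 3))
```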

  6. LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval

    NASA Astrophysics Data System (ADS)

    Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan

    2013-01-01

    As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.

  7. Vision Based Navigation for Autonomous Cooperative Docking of CubeSats

    NASA Astrophysics Data System (ADS)

    Pirat, Camille; Ankersen, Finn; Walker, Roger; Gass, Volker

    2018-05-01

    A realistic rendezvous and docking navigation solution applicable to CubeSats is investigated. A scalability analysis of the ESA Autonomous Transfer Vehicle Guidance, Navigation & Control (GNC) performance and of the Russian docking system shows that the docking of two CubeSats would require a lateral control performance of the order of 1 cm. Line-of-sight constraints and multipath effects affecting Global Navigation Satellite System (GNSS) measurements in close proximity prevent the use of this sensor for the final approach. This consideration and the high control accuracy requirement led to the use of vision sensors for the final 10 m of the rendezvous and docking sequence. A single monocular camera on the chaser satellite and various sets of Light-Emitting Diodes (LEDs) on the target vehicle ensure the observability of the system throughout the approach trajectory. The simple and novel formulation of the measurement equations allows rotations to be unambiguously differentiated from translations between the target and chaser docking ports and allows a navigation performance better than 1 mm at docking. Furthermore, the non-linear measurement equations can be solved to provide an analytic navigation solution. This solution can be used to monitor the navigation filter solution and ensure its stability, adding an extra layer of robustness for autonomous rendezvous and docking. The navigation filter initialization is addressed in detail. The proposed method is able to differentiate LED signals from Sun reflections, as demonstrated by experimental data. The navigation filter uses comprehensive linearised coupled rotation/translation dynamics describing the chaser-to-target docking port motion. The handover between GNSS and vision sensor measurements is assessed. The performance of the navigation function along the approach trajectory is discussed.

  8. Vision Sensor-Based Road Detection for Field Robot Navigation

    PubMed Central

    Lu, Keyu; Li, Jian; An, Xiangjing; He, Hangen

    2015-01-01

    Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art. PMID:26610514

  9. The role of vision for navigation in the crown-of-thorns seastar, Acanthaster planci

    PubMed Central

    Sigl, Robert; Steibl, Sebastian; Laforsch, Christian

    2016-01-01

    Coral reefs all over the Indo-Pacific suffer from substantial damage caused by the crown-of-thorns seastar Acanthaster planci, a voracious predator that moves on and between reefs to seek out its coral prey. Chemoreception is thought to guide A. planci. As vision was recently introduced as another sense involved in seastar navigation, we investigated the potential role of vision for navigation in A. planci. We estimated the spatial resolution and visual field of the compound eye using histological sections and morphometric measurements. Field experiments in a semi-controlled environment revealed that vision in A. planci aids in finding reef structures at a distance of at least 5 m, whereas chemoreception seems to be effective only at very short distances. Hence, vision outweighs chemoreception at intermediate distances. A. planci might use vision to navigate between reef structures and to locate coral prey, therefore improving foraging efficiency, especially when multidirectional currents and omnipresent chemical cues on the reef hamper chemoreception. PMID:27476750

  10. Spatial learning while navigating with severely degraded viewing: The role of attention and mobility monitoring

    PubMed Central

    Rand, Kristina M.; Creem-Regehr, Sarah H.; Thompson, William B.

    2015-01-01

    The ability to navigate without getting lost is an important aspect of quality of life. In five studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low-vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2 participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared to degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared to degraded vision. With degraded vision, auditory task performance was better when guided compared to unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded viewing. PMID:25706766

  11. Seamless positioning and navigation by using geo-referenced images and multi-sensor data.

    PubMed

    Li, Xun; Wang, Jinling; Li, Tao

    2013-07-12

    Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas has remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, within which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and are insufficient in many aspects. Therefore, in this paper, we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly uses visual sensor input to match against geo-referenced images for an image-based positioning solution, and it also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve position accuracy in areas where the GPS signal is negatively affected (such as in urban canyons), and it also provides excellent position accuracy in indoor environments.

  12. Seamless Positioning and Navigation by Using Geo-Referenced Images and Multi-Sensor Data

    PubMed Central

    Li, Xun; Wang, Jinling; Li, Tao

    2013-01-01

    Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas has remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, within which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and are insufficient in many aspects. Therefore, in this paper, we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly uses visual sensor input to match against geo-referenced images for an image-based positioning solution, and it also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve position accuracy in areas where the GPS signal is negatively affected (such as in urban canyons), and it also provides excellent position accuracy in indoor environments. PMID:23857267

  13. Teaching with Vision: Culturally Responsive Teaching in Standards-Based Classrooms

    ERIC Educational Resources Information Center

    Sleeter, Christine E., Ed.; Cornbleth, Catherine, Ed.

    2011-01-01

    In "Teaching with Vision," two respected scholars in teaching for social justice have gathered teachers from across the country to describe rich examples of extraordinary practice. This collection showcases the professional experience and wisdom of classroom teachers who have been navigating standards- and test-driven teaching environments in…

  14. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    NASA Astrophysics Data System (ADS)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for a quadruped robot autonomous navigation system walking through rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To address mismatching, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the stereo-matched edge pixel pairs, 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs the 3D scene quickly and efficiently.

  15. Visual navigation using edge curve matching for pinpoint planetary landing

    NASA Astrophysics Data System (ADS)

    Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei

    2018-05-01

    Pinpoint landing is challenging for future Mars and asteroid exploration missions. Vision-based navigation schemes based on feature detection and matching are practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and utilize poor-performance measurements, which pose great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of craters tracked across two sequential images are utilized to determine the relative attitude and position of the lander through a normalized method. Then, considering the error accumulation of relative navigation, a method is developed that integrates the crater-based relative navigation method with a crater-based absolute navigation method, which identifies craters using a georeferenced database for continuous estimation of the absolute states. In addition, expressions for the relative state estimate bias are derived. Novel necessary and sufficient observability criteria based on error analysis are provided to improve the navigation performance; these hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.

  16. Synthetic vision in the cockpit: 3D systems for general aviation

    NASA Astrophysics Data System (ADS)

    Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth

    2001-08-01

    Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS-based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are keeping pace with or outpacing Moore's Law in the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services, such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS), and commercially viable mobile rendering systems puts synthetic vision well within the technological reach of general aviation. Given the appropriate navigational inputs, low-cost and power-efficient graphics solutions are capable of rendering a pilot's out-the-window view into visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on board and into the cockpit visually via the 3D display, for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following the current trends.

  17. New vision system and navigation algorithm for an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Tann, Hokchhay; Shakya, Bicky; Merchen, Alex C.; Williams, Benjamin C.; Khanal, Abhishek; Zhao, Jiajia; Ahlgren, David J.

    2013-12-01

    Improvements were made to the intelligence algorithms of an autonomously operating ground vehicle, Q, which competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle to first navigate between two white lines on a grassy obstacle course, then pass through eight GPS waypoints, and pass through a final obstacle field. Modifications to Q included a new vision system with a more effective image processing algorithm for white line extraction. The path-planning algorithm adopted the vision system, creating smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of over 50 teams.

  18. Development and Evaluation of 2-D and 3-D Exocentric Synthetic Vision Navigation Display Concepts for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results evinced the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.

  19. 46 CFR 72.04-1 - Navigation bridge visibility.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... meet the following requirements: (a) The field of vision from the navigation bridge, whether the vessel... degrees. (2) From the conning position, the horizontal field of vision extends over an arc from at least...) From each bridge wing, the field of vision extends over an arc from at least 45 degrees on the opposite...

  20. 46 CFR 190.02-1 - Navigation bridge visibility.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... September 7, 1990, must meet the following requirements: (a) The field of vision from the navigation bridge... not exceed 5 degrees. (2) From the conning position, the horizontal field of vision extends over an...)(1) of this section. (3) From each bridge wing, the field of vision extends over an arc from at least...

  1. 46 CFR 72.04-1 - Navigation bridge visibility.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... meet the following requirements: (a) The field of vision from the navigation bridge, whether the vessel... degrees. (2) From the conning position, the horizontal field of vision extends over an arc from at least...) From each bridge wing, the field of vision extends over an arc from at least 45 degrees on the opposite...

  2. 46 CFR 108.801 - Navigation bridge visibility.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... September 7, 1990, must meet the following requirements: (a) The field of vision from the navigation bridge... not exceed 5 degrees. (2) From the conning position, the horizontal field of vision extends over an...)(1) of this section. (3) From each bridge wing, the field of vision extends over an arc from at least...

  3. Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission

    NASA Technical Reports Server (NTRS)

    Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.

    2004-01-01

    In January 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a downlooking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.

  4. Virtual wayfinding using simulated prosthetic vision in gaze-locked viewing.

    PubMed

    Wang, Lin; Yang, Liancheng; Dagnelie, Gislin

    2008-11-01

    To assess virtual maze navigation performance with simulated prosthetic vision in gaze-locked viewing, under the conditions of varying luminance contrast, background noise, and phosphene dropout. Four normally sighted subjects performed virtual maze navigation using simulated prosthetic vision in gaze-locked viewing, under five conditions of luminance contrast, background noise, and phosphene dropout. Navigation performance was measured as the time required to traverse a 10-room maze using a game controller, and the number of errors made during the trip. Navigation performance time (1) became stable after 6 to 10 trials, (2) remained similar on average at luminance contrast of 68% and 16% but had greater variation at 16%, (3) was not significantly affected by background noise, and (4) increased by 40% when 30% of phosphenes were removed. Navigation performance time and number of errors were significantly and positively correlated. Assuming that the simulated gaze-locked viewing conditions are extended to implant wearers, such prosthetic vision can be helpful for wayfinding in simple mobility tasks, though phosphene dropout may interfere with performance.

  5. An egocentric vision based assistive co-robot.

    PubMed

    Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang

    2013-06-01

    We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user wears a pair of glasses with a forward-looking camera and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve two purposes. First, they serve as a source of visual input to request the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who do not have the hand functionality needed for interaction and control with other modalities (e.g., a joystick). In our co-robot system, when the robot does not fulfill the object finding task within a pre-specified time window, it actively solicits user control for guidance. The user can then use the egocentric vision based gesture interface to orient the robot towards the direction of the object. After that, the robot automatically navigates towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design to engage the human in the loop.

  6. Guidance, Navigation and Control Innovations at the NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Ericsson, Aprille Joy

    2002-01-01

    A viewgraph presentation on guidance navigation and control innovations at the NASA Goddard Space Flight Center is presented. The topics include: 1) NASA's vision; 2) NASA's Mission; 3) Earth Science Enterprise (ESE); 4) Guidance, Navigation and Control Division (GN&C); 5) Landsat-7 Earth Observer-1 Co-observing Program; and 6) NASA ESE Vision.

  7. 46 CFR 32.16-1 - Navigation bridge visibility-T/ALL.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., must meet the following requirements: (a) The field of vision from the navigation bridge, whether the... degrees. (2) From the conning position, the horizontal field of vision extends over an arc from at least...) From each bridge wing, the field of vision extends over an arc from at least 45 degrees on the opposite...

  8. 33 CFR 164.15 - Navigation bridge visibility.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... ports must be such that the field of vision from the navigation bridge conforms as closely as possible... horizontal field of vision must extend over an arc from at least 22.5 degrees abaft the beam on one side of... of vision must extend over an arc from at least 45 degrees on the opposite bow, through dead ahead...

  9. 33 CFR 164.15 - Navigation bridge visibility.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... ports must be such that the field of vision from the navigation bridge conforms as closely as possible... horizontal field of vision must extend over an arc from at least 22.5 degrees abaft the beam on one side of... of vision must extend over an arc from at least 45 degrees on the opposite bow, through dead ahead...

  10. Integrity Determination for Image Rendering Vision Navigation

    DTIC Science & Technology

    2016-03-01

    identifying an object within a scene, tracking a SIFT feature between frames or matching images and/or features for stereo vision applications. This... object level, either in 2-D or 3-D, versus individual features. There is a breadth of information, largely from the machine vision community...matching or image rendering image correspondence approach is based upon using either 2-D or 3-D object models or templates to perform object detection or

  11. Psychophysics of reading. XVII. Low-vision performance with four types of electronically magnified text.

    PubMed

    Harland, S; Legge, G E; Luebker, A

    1998-03-01

    Most people with low vision need magnification to read. Page navigation is the process of moving a magnifier during reading. Modern electronic technology can provide many alternatives for navigating through text. This study compared reading speeds for four methods of displaying text. The four methods varied in their page-navigation demands. The closed-circuit television (CCTV) and MOUSE methods involved manual navigation. The DRIFT method (horizontally drifting text) involved no manual navigation, but did involve both smooth-pursuit and saccadic eye movements. The rapid serial visual presentation (RSVP) method involved no manual navigation, and relatively few eye movements. There were 7 normal subjects and 12 low-vision subjects (7 with central-field loss, CFL group, and 5 with central fields intact, CFI group). The subjects read 70-word passages at speeds that yielded good comprehension. Taking the CCTV reading speed as a benchmark, neither the normal nor low-vision subjects had significantly different speeds with the MOUSE method. As expected from the reduced navigational demands, normal subjects read faster with the DRIFT method (85% faster) and the RSVP method (169%). The CFI group read significantly faster with DRIFT (43%) and RSVP (38%). The CFL group showed no significant differences in reading speed for the four methods.

  12. Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation

    DTIC Science & Technology

    2015-06-01

    Multi-Purpose Crew Vehicle (MPCV), which will be provided with a LIDAR sensor as primary relative navigation system [26, 33, 34]. A drawback of LIDAR...

  13. Navigation of military and space unmanned ground vehicles in unstructured terrains

    NASA Technical Reports Server (NTRS)

    Lescoe, Paul; Lavery, David; Bedard, Roger

    1991-01-01

    Development of unmanned vehicles for local navigation in terrains unstructured by humans is reviewed. Modes of navigation include teleoperation or remote control, computer assisted remote driving (CARD), and semiautonomous navigation (SAN). A first implementation of a CARD system was successfully tested using the Robotic Technology Test Vehicle developed by Jet Propulsion Laboratory. Stereo pictures were transmitted to a remotely located human operator, who performed the sensing, perception, and planning functions of navigation. A computer provided range and angle measurements and the path plan was transmitted to the vehicle which autonomously executed the path. This implementation is to be enhanced by providing passive stereo vision and a reflex control system for autonomously stopping the vehicle if blocked by an obstacle. SAN achievements include implementation of a navigation testbed on a six wheel, three-body articulated rover vehicle, development of SAN algorithms and code, integration of SAN software onto the vehicle, and a successful feasibility demonstration that represents a step forward towards the technology required for long-range exploration of the lunar or Martian surface. The vehicle includes a passive stereo vision system with real-time area-based stereo image correlation, a terrain matcher, a path planner, and a path execution planner.

  14. Autonomous landing and ingress of micro-air-vehicles in urban environments based on monocular vision

    NASA Astrophysics Data System (ADS)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-06-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.

  15. Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision

    NASA Technical Reports Server (NTRS)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-01-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
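
    A minimal sketch of the planar-homography step both versions of this abstract describe, assuming OpenCV (the point correspondences and intrinsic matrix below are placeholders, not the authors' flight code): estimate the homography between two views of the planar target, then decompose it into candidate rotation/translation/plane-normal hypotheses that a controller could turn into approach waypoints. A real system must still select among the returned solutions and scale the translation with an external depth or altitude estimate.

```python
import cv2
import numpy as np

def plane_motion_hypotheses(pts_prev, pts_curr, K):
    """pts_prev, pts_curr: (N, 2) float arrays of matched pixel coordinates on the
    planar surface; K: 3x3 camera intrinsic matrix. Returns candidate motions."""
    H, inliers = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)
    n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    return Rs, ts, normals   # up to four (R, t, n) hypotheses to disambiguate

# Placeholder data: four coplanar points seen in two frames, simple intrinsics
K = np.array([[400.0, 0, 160], [0, 400.0, 120], [0, 0, 1]])
pts_prev = np.array([[100, 100], [220, 100], [220, 200], [100, 200]], np.float32)
pts_curr = pts_prev + np.float32([5, -3])     # crude synthetic image motion
Rs, ts, normals = plane_motion_hypotheses(pts_prev, pts_curr, K)
print(len(Rs), "candidate motions")
```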

  16. Autonomous Navigation Error Propagation Assessment for Lunar Surface Mobility Applications

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.; Connolly, Joseph W.

    2006-01-01

    The NASA Vision for Space Exploration is focused on the return of astronauts to the Moon. While navigation systems have already been proven in the Apollo missions to the Moon, the current exploration campaign will involve more extensive and extended missions requiring new concepts for lunar navigation. In this document, the results of an autonomous navigation error propagation assessment are provided. The analysis is intended to be the baseline error propagation analysis to which Earth-based and Lunar-based radiometric data are added, in order to compare these different architecture schemes and to quantify the benefits of an integrated approach for lunar surface mobility applications near the Lunar South Pole or on the lunar farside.

  17. [Navigated drilling for femoral head necrosis. Experimental and clinical results].

    PubMed

    Beckmann, J; Tingart, M; Perlick, L; Lüring, C; Grifka, J; Anders, S

    2007-05-01

    In the early stages of osteonecrosis of the femoral head, core decompression by exact drilling into the ischemic areas can reduce pain and achieve reperfusion. Using computer-aided surgery, the precision of the drilling can be improved while simultaneously lowering radiation exposure time for both staff and patients. We describe the experimental and clinical results of drilling under the guidance of the fluoroscopically based VectorVision navigation system (BrainLAB, Munich, Germany). A total of 70 sawbones were prepared mimicking an osteonecrosis of the femoral head. In two experimental models, bone only and obesity, as well as in a clinical setting involving ten patients with osteonecrosis of the femoral head, the precision and the duration of radiation exposure were compared between the VectorVision system and conventional drilling. No target was missed. For both models, there was a statistically significant difference in terms of the precision, the number of drilling corrections, and the radiation exposure time. The average distance to the desired midpoint of the lesion in both models was 0.48 mm for navigated drilling and 1.06 mm for conventional drilling, the average number of drilling corrections was 0.175 and 2.1, and the radiation exposure time was less than 1 s and 3.6 s, respectively. In the clinical setting, the reduction of radiation exposure (below 1 s for navigation compared to 56 s for the conventional technique) as well as of drilling corrections (0.2 compared to 3.4) was also significant. Computer-guided drilling using the fluoroscopically based VectorVision navigation system shows clearly improved precision with an enormous simultaneous reduction in radiation exposure. It is therefore recommended for clinical routine.

  18. Precision of computer-assisted core decompression drilling of the femoral head.

    PubMed

    Beckmann, J; Goetz, J; Baethis, H; Kalteis, T; Grifka, J; Perlick, L

    2006-08-01

    Osteonecrosis of the femoral head is a local destructive disease with progression into devastating stages. Left untreated it mostly leads to severe secondary osteoarthrosis and early endoprosthetic joint replacement. Core decompression by exact drilling into the ischemic areas can be performed in early stages according to Ficat or ARCO. Computer-aided surgery might enhance the precision of the drilling and lower the radiation exposure time of both staff and patients. The aim of this study was to evaluate the precision of the fluoroscopically based VectorVision navigation system in an in vitro model. Thirty sawbones were prepared with a defect filled up with a radiopaque gypsum sphere mimicking the osteonecrosis. Twenty sawbones were drilled by guidance of an intraoperative navigation system VectorVision (BrainLAB, Munich, Germany) and 10 sawbones by fluoroscopic control only. No gypsum sphere was missed. There was a statistically significant difference regarding the three-dimensional deviation (Euclidian norm) as well as maximum deviation in x-, y- or z-direction (maximum norm) to the desired mid-point of the lesion, with a mean of 0.51 and 0.4 mm in the navigated group and 1.1 and 0.88 mm in the control group, respectively. Furthermore, significant difference was found in the number of drilling corrections as well as the radiation time needed: no second drilling or correction of drilling direction was necessary in the navigated group compared to 1.4 in the control group. The radiation time needed was less than 1 s compared to 3.1 s, respectively. The fluoroscopy-based VectorVision navigation system shows a high feasibility of computer-guided drilling with a clear reduction of radiation exposure time and can therefore be integrated into clinical routine. The additional time needed is acceptable regarding the simultaneous reduction of radiation time.

  19. Maintaining a Cognitive Map in Darkness: The Need to Fuse Boundary Knowledge with Path Integration

    PubMed Central

    Cheung, Allen; Ball, David; Milford, Michael; Wyeth, Gordon; Wiles, Janet

    2012-01-01

    Spatial navigation requires the processing of complex, disparate and often ambiguous sensory data. The neurocomputations underpinning this vital ability remain poorly understood. Controversy remains as to whether multimodal sensory information must be combined into a unified representation, consistent with Tolman's "cognitive map", or whether differential activation of independent navigation modules suffices to explain observed navigation behaviour. Here we demonstrate that key neural correlates of spatial navigation in darkness cannot be explained if the path integration system acted independently of boundary (landmark) information. In vivo recordings demonstrate that the rodent head direction (HD) system becomes unstable within three minutes without vision. In contrast, rodents maintain stable place fields and grid fields for over half an hour without vision. Using a simple HD error model, we show analytically that idiothetic path integration (iPI) alone cannot be used to maintain any stable place representation beyond two to three minutes. We then use a measure of place stability based on information theoretic principles to prove that featureless boundaries alone cannot be used to improve localization above chance level. Having shown that neither iPI nor boundaries alone are sufficient, we then address the question of whether their combination is sufficient and, we conjecture, necessary to maintain place stability for prolonged periods without vision. We addressed this question in simulations and robot experiments using a navigation model comprising a particle filter and a boundary map. The model replicates published experimental results on place field and grid field stability without vision, and makes testable predictions including place field splitting and grid field rescaling if the true arena geometry differs from the acquired boundary map. We discuss our findings in light of current theories of animal navigation and neuronal computation, and elaborate on their implications and significance for the design, analysis and interpretation of experiments. PMID:22916006
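
    A minimal sketch, in Python/NumPy, of the core mechanism such a model combines (the arena, noise levels and resampling scheme below are illustrative placeholders, not the published model): a particle filter whose prediction step applies noisy idiothetic path integration and whose update step weights particles by how well the predicted distance to the nearest boundary matches the measured one.

```python
import numpy as np

rng = np.random.default_rng(0)
arena = 1.0                                   # square arena of side 1 m
particles = rng.uniform(0, arena, (500, 2))   # candidate positions
weights = np.ones(500) / 500

def step(particles, weights, odom, wall_dist, odom_sigma=0.02, meas_sigma=0.03):
    # 1) path-integration prediction with idiothetic noise
    particles = np.clip(particles + odom + rng.normal(0, odom_sigma, particles.shape), 0, arena)
    # 2) weight by agreement with the measured distance to the nearest wall
    pred = np.minimum.reduce([particles[:, 0], arena - particles[:, 0],
                              particles[:, 1], arena - particles[:, 1]])
    weights = weights * np.exp(-0.5 * ((pred - wall_dist) / meas_sigma) ** 2)
    weights = weights / weights.sum()
    # 3) resampling keeps the particle set from degenerating
    idx = rng.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.ones(len(particles)) / len(particles)

particles, weights = step(particles, weights, odom=np.array([0.05, 0.0]), wall_dist=0.2)
print(particles.mean(axis=0))                 # current place estimate
```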

  20. Insect-Based Vision for Autonomous Vehicles: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Srinivasan, Mandyam V.

    1999-01-01

    The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.

  1. Insect-Based Vision for Autonomous Vehicles: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Srinivasan, Mandyam V.

    1999-01-01

    The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.

  2. Automatic rule generation for high-level vision

    NASA Technical Reports Server (NTRS)

    Rhee, Frank Chung-Hoon; Krishnapuram, Raghu

    1992-01-01

    Many high-level vision systems use rule-based approaches to solving problems such as autonomous navigation and image understanding. The rules are usually elaborated by experts. However, this procedure may be rather tedious. In this paper, we propose a method to generate such rules automatically from training data. The proposed method is also capable of filtering out irrelevant features and criteria from the rules.

  3. Real-time synthetic vision cockpit display for general aviation

    NASA Astrophysics Data System (ADS)

    Hansen, Andrew J.; Smith, W. Garth; Rybacki, Richard M.

    1999-07-01

    Low cost, high performance graphics solutions based on PC hardware platforms are now capable of rendering synthetic vision of a pilot's out-the-window view during all phases of flight. When coupled to a GPS navigation payload, the virtual image can be fully correlated to the physical world. In particular, differential GPS services such as the Wide Area Augmentation System (WAAS) will provide all aviation users with highly accurate 3D navigation. As well, short-baseline GPS attitude systems are becoming a viable and inexpensive solution. A glass cockpit display rendering terrain draped with geographically specific imagery in real time can be coupled with high accuracy (7 m, 95% positioning; sub-degree pointing), high integrity (99.99999% position error bound) differential GPS navigation/attitude solutions to provide both situational awareness and 3D guidance to (auto)pilots throughout en route, terminal area, and precision approach phases of flight. This paper describes the technical issues addressed when coupling GPS and glass cockpit displays, including the navigation/display interface, real-time 60 Hz rendering of terrain with multiple levels of detail under demand paging, and construction of verified terrain databases draped with geographically specific satellite imagery. Further, on-board recordings of the navigation solution and the cockpit display provide a replay facility for post-flight simulation based on live landings, as well as synchronized multiple display channels with different views from the same flight. PC-based solutions which integrate GPS navigation and attitude determination with 3D visualization provide the aviation community, and general aviation in particular, with low cost, high performance guidance and situational awareness in all phases of flight.

  4. Vision based techniques for rotorcraft low altitude flight

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Suorsa, Ray; Smith, Philip

    1991-01-01

    An overview of research in obstacle detection at NASA Ames Research Center is presented. The research applies techniques from computer vision to automation of rotorcraft navigation. The development of a methodology for detecting the range to obstacles based on the maximum utilization of passive sensors is emphasized. The development of a flight and image data base for verification of vision-based algorithms, and a passive ranging methodology tailored to the needs of helicopter flight, are discussed. Preliminary results indicate that it is possible to obtain adequate range estimates except at regions close to the focus of expansion (FOE). Closer to the FOE, the error in range increases because the magnitude of the disparity gets smaller, resulting in a low SNR.

  5. Precise visual navigation using multi-stereo vision and landmark matching

    NASA Astrophysics Data System (ADS)

    Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh

    2007-04-01

    Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques which greatly reduce the long-term drift and also improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene. This helps to increase the pose estimation accuracy as well as reduce failure situations. Secondly, a global landmark matching technique is used to recognize previously visited locations during navigation. Using the matched landmarks, a pose correction technique is used to eliminate the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1~5% localization error) over long-distance navigation both indoors and outdoors. Real-world experiments on a human-worn system show that the location can be estimated to within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.

  6. Integration of a synthetic vision system with airborne laser range scanner-based terrain referenced navigation for precision approach guidance

    NASA Astrophysics Data System (ADS)

    Uijt de Haag, Maarten; Campbell, Jacob; van Graas, Frank

    2005-05-01

    Synthetic Vision Systems (SVS) provide pilots with a virtual visual depiction of the external environment. When using SVS for aircraft precision approach guidance, accurate positioning relative to the runway with a high level of integrity is required. Precision approach guidance systems in use today require ground-based electronic navigation components with at least one installation at each airport, and in many cases multiple installations to service approaches to all qualifying runways. A terrain-referenced approach guidance system is envisioned to provide precision guidance to an aircraft without the use of ground-based electronic navigation components installed at the airport. This autonomy makes it a good candidate for integration with an SVS. At the Ohio University Avionics Engineering Center (AEC), work has been underway in the development of such a terrain-referenced navigation system. When used in conjunction with an Inertial Measurement Unit (IMU) and a high accuracy/resolution terrain database, this terrain-referenced navigation system can provide navigation and guidance information to the pilot on an SVS or conventional instruments. The terrain-referenced navigation system under development at AEC operates on principles similar to other terrain navigation systems: a ground-sensing sensor (in this case an airborne laser scanner) gathers range measurements to the terrain; this data is then matched in some fashion with an onboard terrain database to find the most likely position solution, which is used to update an inertial sensor-based navigator. AEC's system design differs from today's common terrain navigators in its use of a high resolution terrain database (~1 meter post spacing) in conjunction with an airborne laser scanner capable of providing tens of thousands of independent terrain elevation measurements per second with centimeter-level accuracies. When combined with data from an inertial navigator, the high resolution terrain database and laser scanner system is capable of providing near meter-level horizontal and vertical position estimates. Furthermore, the system under development capitalizes on (1) the position and integrity benefits provided by the Wide Area Augmentation System (WAAS) to reduce the initial search space size and (2) the availability of high accuracy/resolution databases. This paper presents results from flight tests where the terrain-referenced navigator is used to provide guidance cues for a precision approach.
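
    The matching step can be illustrated with a deliberately simple, hypothetical brute-force search (NumPy; this is not AEC's estimator, and a real system would search within a filter framework seeded by WAAS): slide the laser-measured elevation patch over the high-resolution terrain database and keep the horizontal offset that minimizes the elevation residual.

```python
import numpy as np

def best_offset(dem, patch, search=10):
    """dem: (H, W) terrain database grid; patch: (h, w) measured elevations assumed
    to lie near a nominal position; search: +/- cells to test along each axis."""
    h, w = patch.shape
    best, best_err = (0, 0), np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            window = dem[search + dr:search + dr + h, search + dc:search + dc + w]
            err = np.mean((window - patch) ** 2)      # mean-square elevation residual
            if err < best_err:
                best, best_err = (dr, dc), err
    return best, best_err

dem = np.random.rand(64, 64)                           # stand-in terrain grid
patch = dem[10:20, 10:20] + np.random.normal(0, 0.01, (10, 10))   # noisy laser patch
print(best_offset(dem, patch))                         # offset (0, 0) should win
```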

  7. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    NASA Astrophysics Data System (ADS)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to a direction in supervised mode. The images in the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted in order to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and accurately avoid obstacles in the room. The results confirm the effectiveness of the algorithm and of our improvements in the network structure and training parameters.
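
    A minimal sketch of an end-to-end network of this kind, assuming PyTorch (the layer sizes and three-way output below are illustrative, not the authors' 15-layer architecture): a small convolutional stack maps a raw camera frame directly to steering-direction logits, trained with a standard classification loss.

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self, n_directions=3):               # e.g. left / straight / right
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_directions)

    def forward(self, x):                              # x: (B, 3, H, W) raw images
        z = self.features(x).flatten(1)
        return self.classifier(z)                      # logits over directions

model = SteeringNet()
logits = model(torch.randn(1, 3, 120, 160))            # one dummy frame
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1]))  # supervised direction label
print(logits.shape, float(loss))
```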

  8. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images

    PubMed Central

    Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A

    2013-01-01

    This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard and mice-based interfaces. PMID:23250787

  9. Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, W.J.; Chun, W.H.

    1990-01-01

    The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.

  10. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images.

    PubMed

    Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A

    2013-06-01

    This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard and mice-based interfaces.

  11. Navigating the Rural Terrain: Educators' Visions to Promote Change

    ERIC Educational Resources Information Center

    Vaughn, Margaret; Saul, Melissa

    2013-01-01

    Advocates of rural education emphasize the need to examine supports which may promote rural educators given the challenging contexts which they face. Teacher visioning has been conceptualized as a navigational tool to help sustain and promote teachers in highly challenging contexts. The current study explored 10 public school teachers from…

  12. Open-Loop Flight Testing of COBALT Navigation and Sensor Technologies for Precise Soft Landing

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Restrepo, Caroline I.; Seubert, Carl R.; Amzajerdian, Farzin; Pierrottet, Diego F.; Collins, Steven M.; O'Neal, Travis V.; Stelling, Richard

    2017-01-01

    An open-loop flight test campaign of the NASA COBALT (CoOperative Blending of Autonomous Landing Technologies) payload was conducted onboard the Masten Xodiac suborbital rocket testbed. The payload integrates two complementary sensor technologies that together provide a spacecraft with knowledge during planetary descent and landing to precisely navigate and touch down softly in close proximity to targeted surface locations. The two technologies are the Navigation Doppler Lidar (NDL), for high-precision velocity and range measurements, and the Lander Vision System (LVS) for map-relative state estimates. A specialized navigation filter running onboard COBALT fuses the NDL and LVS data in real time to produce a very precise Terrain Relative Navigation (TRN) solution that is suitable for future, autonomous planetary landing systems that require precise and soft landing capabilities. During the open-loop flight campaign, the COBALT payload acquired measurements and generated a precise navigation solution, but the Xodiac vehicle planned and executed its maneuvers based on an independent, GPS-based navigation solution. This minimized the risk to the vehicle during the integration and testing of the new navigation sensing technologies within the COBALT payload.

  13. Comparison of Orion Vision Navigation Sensor Performance from STS-134 and the Space Operations Simulation Center

    NASA Technical Reports Server (NTRS)

    Christian, John A.; Patangan, Mogi; Hinkel, Heather; Chevray, Keiko; Brazzel, Jack

    2012-01-01

    The Orion Multi-Purpose Crew Vehicle is a new spacecraft being designed by NASA and Lockheed Martin for future crewed exploration missions. The Vision Navigation Sensor is a Flash LIDAR that will be the primary relative navigation sensor for this vehicle. To obtain a better understanding of this sensor's performance, the Orion relative navigation team has performed both flight tests and ground tests. This paper summarizes and compares the performance results from the STS-134 flight test, called the Sensor Test for Orion RelNav Risk Mitigation (STORRM) Development Test Objective, and the ground tests at the Space Operations Simulation Center.

  14. Multi-Dimensionality of Synthetic Vision Cockpit Displays: Prevention of Controlled-Flight-Into-Terrain

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.

    2006-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results showed the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.

  15. Enabling Autonomous Navigation for Affordable Scooters.

    PubMed

    Liu, Kaikai; Mulky, Rajathswaroop

    2018-06-05

    Despite the technical success of existing assistive technologies, for example, electric wheelchairs and scooters, they are still far from effective enough in helping those in need navigate to their destinations in a hassle-free manner. In this paper, we propose to improve the safety and autonomy of navigation by designing a cutting-edge autonomous scooter, thus allowing people with mobility challenges to ambulate independently and safely in possibly unfamiliar surroundings. We focus on indoor navigation scenarios for the autonomous scooter where the current location, maps, and nearby obstacles are unknown. To achieve semi-LiDAR functionality, we leverage gyro-based pose data to compensate the laser motion in real time and create synthetic mapping of simple environments with regular shapes and deep hallways. Laser range finders are suitable for long ranges with limited resolution. Stereo vision, on the other hand, provides 3D structural data of nearby complex objects. To achieve simultaneous fine-grained resolution and long range coverage in the mapping of cluttered and complex environments, we dynamically fuse the measurements from the stereo vision camera system, the synthetic laser scanner, and the LiDAR. We propose solutions to self-correct errors in data fusion and create a hybrid map to assist the scooter in achieving collision-free navigation in an indoor environment.
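
    One common way to realize a hybrid map of this kind is a log-odds occupancy grid that accumulates evidence from each sensor. The sketch below (NumPy; the grid size, cell size and evidence increments are illustrative assumptions, not the authors' fusion pipeline) shows two sensors contributing occupied and free observations to the same grid:

```python
import numpy as np

log_odds = np.zeros((100, 100))            # hybrid map, e.g. 10 cm cells (illustrative)
L_OCC, L_FREE = 0.85, -0.4                 # per-observation evidence increments

def integrate(log_odds, occupied_cells, free_cells):
    """Add one sensor's observations for this epoch to the shared grid."""
    for r, c in occupied_cells:
        log_odds[r, c] += L_OCC
    for r, c in free_cells:
        log_odds[r, c] += L_FREE
    return log_odds

# One synthetic update from each sensor (stereo camera, then LiDAR) for the same scan
log_odds = integrate(log_odds, [(50, 60)], [(50, c) for c in range(40, 60)])
log_odds = integrate(log_odds, [(50, 61)], [(50, c) for c in range(40, 61)])
occupancy_prob = 1.0 / (1.0 + np.exp(-log_odds))       # convert log-odds to probability
print(occupancy_prob[50, 60], occupancy_prob[50, 50])  # occupied vs. free cell
```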

  16. Navigation studies based on the ubiquitous positioning technologies

    NASA Astrophysics Data System (ADS)

    Ye, Lei; Mi, Weijie; Wang, Defeng

    2007-11-01

    This paper summarizes current positioning technologies, such as absolute and relative positioning methods, indoor and outdoor positioning, and active and passive positioning. Global Navigation Satellite System (GNSS) technologies are introduced as the omnipresent outdoor positioning technologies, including GPS, GLONASS, Galileo and BD-1/2. After an analysis of the shortcomings of GNSS, indoor positioning technologies are discussed and compared, including A-GPS, cellular network, infrared, electromagnetism, computer vision cognition, embedded pressure sensors, ultrasonic, RFID (Radio Frequency IDentification), Bluetooth, WLAN, etc. The concept and characteristics of Ubiquitous Positioning are then proposed. After a comparison and selection of ubiquitous positioning technologies following a systems engineering methodology, a navigation system model based on an Incorporate Indoor-Outdoor Positioning Solution is proposed. This model was simulated in the Galileo Demonstration for World Expo Shanghai project. In conclusion, the prospects of ubiquitous-positioning-based navigation are shown, especially for satisfying the public requirement for acquiring location information.

  17. PRoViScout: a planetary scouting rover demonstrator

    NASA Astrophysics Data System (ADS)

    Paar, Gerhard; Woods, Mark; Gimkiewicz, Christiane; Labrosse, Frédéric; Medina, Alberto; Tyler, Laurence; Barnes, David P.; Fritz, Gerald; Kapellos, Konstantinos

    2012-01-01

    Mobile systems exploring planetary surfaces will in future require more autonomy than today. The EU FP7-SPACE project PRoViScout (2010-2012) establishes the building blocks of such autonomous exploration systems in terms of robotic vision through a decision-based combination of navigation and scientific target selection, and integrates them into a framework ready for, and exposed to, field demonstration. The PRoViScout on-board system consists of mission management components such as an Executive, a Mars Mission On-Board Planner and Scheduler, a Science Assessment Module, and Navigation & Vision Processing modules. The platform hardware consists of the rover with its sensors and pointing devices. We report on the major building blocks and their functions and interfaces, with emphasis on the computer vision parts such as image acquisition (using a novel zoomed 3D Time-of-Flight & RGB camera), mapping from 3D-TOF data, panoramic image and stereo reconstruction, hazard and slope maps, visual odometry, and the recognition of potentially scientifically interesting targets.

  18. A Kalman Approach to Lunar Surface Navigation using Radiometric and Inertial Measurements

    NASA Technical Reports Server (NTRS)

    Chelmins, David T.; Welch, Bryan W.; Sands, O. Scott; Nguyen, Binh V.

    2009-01-01

    Future lunar missions supporting the NASA Vision for Space Exploration will rely on a surface navigation system to determine astronaut position, guide exploration, and return safely to the lunar habitat. In this report, we investigate one potential architecture for surface navigation, using an extended Kalman filter to integrate radiometric and inertial measurements. We present a possible infrastructure to support this technique, and we examine an approach to simulating navigational accuracy based on several different system configurations. The results show that position error can be reduced to 1 m after 5 min of processing, given two satellites, one surface communication terminal, and knowledge of the starting position to within 100 m.

  19. 46 CFR 32.16-1 - Navigation bridge visibility-T/ALL.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Navigation bridge visibility-T/ALL. 32.16-1 Section 32..., AND HULL REQUIREMENTS Navigation Bridge Visibility § 32.16-1 Navigation bridge visibility-T/ALL. Each..., must meet the following requirements: (a) The field of vision from the navigation bridge, whether the...

  20. Biologically based machine vision: signal analysis of monopolar cells in the visual system of Musca domestica.

    PubMed

    Newton, Jenny; Barrett, Steven F; Wilcox, Michael J; Popp, Stephanie

    2002-01-01

    Machine vision for navigational purposes is a rapidly growing field. Many abilities such as object recognition and target tracking rely on vision. Autonomous vehicles must be able to navigate in dynamic environments and simultaneously locate a target position. Traditional machine vision often fails to react in real time because of large computational requirements, whereas the fly achieves complex orientation and navigation with a relatively small and simple brain. Understanding how the fly extracts visual information and how neurons encode and process information could lead us to a new approach for machine vision applications. Photoreceptors in the Musca domestica eye that share the same spatial information converge into a structure called the cartridge. The cartridge consists of the photoreceptor axon terminals and monopolar cells L1, L2, and L4. It is thought that L1 and L2 cells encode edge-related information relative to a single cartridge. These cells are thought to be equivalent to vertebrate bipolar cells, producing contrast enhancement and reduction of information sent to L4. Monopolar cell L4 is thought to perform image segmentation on the information input from L1 and L2 and also enhance edge detection. A mesh of interconnected L4's would correlate the output from L1 and L2 cells of adjacent cartridges and provide a parallel network for segmenting an object's edges. The focus of this research is to excite photoreceptors of the common housefly, Musca domestica, with different visual patterns. The electrical response of monopolar cells L1, L2, and L4 will be recorded using intracellular recording techniques. Signal analysis will determine the neurocircuitry to detect and segment images.

  1. Three spectrally distinct photoreceptors in diurnal and nocturnal Australian ants.

    PubMed

    Ogawa, Yuri; Falkowski, Marcin; Narendra, Ajay; Zeil, Jochen; Hemmi, Jan M

    2015-06-07

    Ants are thought to be special among Hymenopterans in having only dichromatic colour vision based on two spectrally distinct photoreceptors. Many ants are highly visual animals, however, and use vision extensively for navigation. We show here that two congeneric day- and night-active Australian ants have three spectrally distinct photoreceptor types, potentially supporting trichromatic colour vision. Electroretinogram recordings show the presence of three spectral sensitivities with peaks (λmax) at 370, 450 and 550 nm in the night-active Myrmecia vindex and peaks at 370, 470 and 510 nm in the day-active Myrmecia croslandi. Intracellular electrophysiology on individual photoreceptors confirmed that the night-active M. vindex has three spectral sensitivities with peaks (λmax) at 370, 430 and 550 nm. A large number of the intracellular recordings in the night-active M. vindex show unusually broad-band spectral sensitivities, suggesting that photoreceptors may be coupled. Spectral measurements at different temporal frequencies revealed that the ultraviolet receptors are comparatively slow. We discuss the adaptive significance and the probability of trichromacy in Myrmecia ants in the context of dim light vision and visual navigation. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  2. The Effects of Synthetic and Enhanced Vision Technologies for Lunar Landings

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Norman, Robert M.; Prinzel, Lawrence J., III; Bailey, Randall E.; Arthur, Jarvis J., III; Shelton, Kevin J.; Williams, Steven P.

    2009-01-01

    Eight pilots participated as test subjects in a fixed-base simulation experiment to evaluate advanced vision display technologies such as Enhanced Vision (EV) and Synthetic Vision (SV) for providing terrain imagery on flight displays in a Lunar Lander Vehicle. Subjects were asked to fly 20 approaches to the Apollo 15 lunar landing site with four different display concepts: Baseline (symbology only with no terrain imagery), EV only (terrain imagery from Forward-Looking Infrared, or FLIR, and Light Detection and Ranging, or LIDAR, sensors), SV only (terrain imagery from an onboard database), and Fused EV and SV concepts. As expected, manual landing performance was excellent (within a meter of landing site center) and not affected by the inclusion of EV or SV terrain imagery on the Lunar Lander flight displays. Subjective ratings revealed significant situation awareness improvements with the concepts employing EV and/or SV terrain imagery compared to the Baseline condition that had no terrain imagery. In addition, display concepts employing EV imagery (compared to the SV and Baseline concepts which had none) were significantly better for pilot detection of intentional but unannounced navigation failures since this imagery provided an intuitive and obvious visual methodology to monitor the validity of the navigation solution.

  3. An Integrated Vision-Based System for Spacecraft Attitude and Topology Determination for Formation Flight Missions

    NASA Technical Reports Server (NTRS)

    Rogers, Aaron; Anderson, Kalle; Mracek, Anna; Zenick, Ray

    2004-01-01

    With the space industry's increasing focus upon multi-spacecraft formation flight missions, the ability to precisely determine system topology and the orientation of member spacecraft relative to both inertial space and each other is becoming a critical design requirement. Topology determination in satellite systems has traditionally made use of GPS or ground uplink position data for low Earth orbits, or, alternatively, inter-satellite ranging between all formation pairs. While these techniques work, they are not ideal for extension to interplanetary missions or to large fleets of decentralized, mixed-function spacecraft. The Vision-Based Attitude and Formation Determination System (VBAFDS) represents a novel solution to both the navigation and topology determination problems with an integrated approach that combines a miniature star tracker with a suite of robust processing algorithms. By combining a single range measurement with vision data to resolve complete system topology, the VBAFDS design represents a simple, resource-efficient solution that is not constrained to certain Earth orbits or formation geometries. In this paper, analysis and design of the VBAFDS integrated guidance, navigation and control (GN&C) technology will be discussed, including hardware requirements, algorithm development, and simulation results in the context of potential mission applications.

  4. Semi-autonomous parking for enhanced safety and efficiency.

    DOT National Transportation Integrated Search

    2017-06-01

    This project focuses on the use of tools from a combination of computer vision and localization based navigation schemes to aid the process of efficient and safe parking of vehicles in high density parking spaces. The principles of collision avoidanc...

  5. Constrained optimal multi-phase lunar landing trajectory with minimum fuel consumption

    NASA Astrophysics Data System (ADS)

    Mathavaraj, S.; Pandiyan, R.; Padhi, R.

    2017-12-01

    A Legendre pseudospectral, multi-phase, constrained fuel-optimal trajectory design approach is presented in this paper. The objective here is to find an optimal approach to successfully guide a lunar lander from the perilune (18 km altitude) of a transfer orbit to a height of 100 m over a specific landing site. After attaining 100 m altitude, there is a mission-critical re-targeting phase, which has a very different objective (but is not critical for fuel optimization) and hence is not considered in this paper. The proposed approach takes into account various mission constraints in different phases from perilune to the landing site. These constraints include phase-1 ('braking with rough navigation') from 18 km altitude to 7 km altitude, where navigation accuracy is poor; phase-2 ('attitude hold'), which holds the lander attitude for 35 s for vision camera processing to obtain the navigation error; and phase-3 ('braking with precise navigation') from the end of phase-2 to 100 m altitude over the landing site, where navigation accuracy is good (due to vision camera navigation inputs). At the end of phase-1, there are constraints on position and attitude. In phase-2, the attitude must be held throughout. At the end of phase-3, the constraints include accuracy in position, velocity as well as attitude orientation. The proposed optimal trajectory technique satisfies the mission constraints in each phase and provides an overall fuel-minimizing guidance command history.
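
    As a hedged illustration of the kind of objective such formulations minimize (the paper's exact cost, dynamics and constraint set are not reproduced here), a three-phase minimum-fuel problem can be written as

    J = \sum_{k=1}^{3} \int_{t_{k-1}}^{t_k} \lVert \mathbf{T}(t) \rVert \, dt ,

    minimized subject to the lander dynamics in each phase, the attitude-hold requirement throughout phase-2, and the position, velocity and attitude boundary conditions at the phase interfaces and at the 100 m terminal point. The Legendre pseudospectral method transcribes each phase onto Legendre-Gauss-Lobatto collocation points, turning the continuous problem into a finite-dimensional nonlinear program.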

  6. Research on three-dimensional reconstruction method based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision that has broad application prospects in many fields, such as aerial mapping, vision navigation, motion analysis and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction and stereo matching. In the binocular stereo camera calibration module, the internal parameters of a single camera are obtained using the checkerboard calibration method of Zhang Zhengyou. In the field of image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After feature point matching is completed, the correspondence between matched image points and 3D object points can be established using the calibrated camera parameters, which yields the 3D information.
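
    A minimal sketch of the disparity step the abstract describes, assuming OpenCV (the synthetic images, SGBM parameters and identity reprojection matrix below are placeholders, not the authors' configuration): semi-global block matching on a rectified stereo pair, followed by reprojection of the disparity map to 3D points.

```python
import cv2
import numpy as np

# Stand-ins for a rectified stereo pair; a crude horizontal shift fakes disparity
left = np.random.randint(0, 255, (240, 320), np.uint8)
right = np.roll(left, -8, axis=1)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0, numDisparities=128, blockSize=5,
    P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10,
)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Q normally comes from cv2.stereoRectify during calibration; identity is a placeholder
Q = np.eye(4, dtype=np.float32)
points_3d = cv2.reprojectImageTo3D(disparity, Q)
print(disparity.shape, points_3d.shape)
```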

  7. Self-contained image mapping of placental vasculature in 3D ultrasound-guided fetoscopy.

    PubMed

    Yang, Liangjing; Wang, Junchen; Ando, Takehiro; Kubota, Akihiro; Yamashita, Hiromasa; Sakuma, Ichiro; Chiba, Toshio; Kobayashi, Etsuko

    2016-09-01

    Surgical navigation technology directed at fetoscopic procedures is relatively underdeveloped compared with other forms of endoscopy. The narrow fetoscopic field of views and the vast vascular network on the placenta make examination and photocoagulation treatment of twin-to-twin transfusion syndrome challenging. Though ultrasonography is used for intraoperative guidance, its navigational ability is not fully exploited. This work aims to integrate 3D ultrasound imaging and endoscopic vision seamlessly for placental vasculature mapping through a self-contained framework without external navigational devices. This is achieved through development, integration, and experimentation of novel navigational modules. Firstly, a framework design that addresses the current limitations based on identified gaps is conceptualized. Secondly, integration of navigational modules including (1) ultrasound-based localization, (2) image alignment, and (3) vision-based tracking to update the scene texture map is implemented. This updated texture map is projected to an ultrasound-constructed 3D model for photorealistic texturing of the 3D scene creating a panoramic view of the moving fetoscope. In addition, a collaborative scheme for the integration of the modular workflow system is proposed to schedule updates in a systematic fashion. Finally, experiments are carried out to evaluate each modular variation and an integrated collaborative scheme of the framework. The modules and the collaborative scheme are evaluated through a series of phantom experiments with controlled trajectories for repeatability. The collaborative framework demonstrated the best accuracy (5.2 % RMS error) compared with all the three single-module variations during the experiment. Validation on an ex vivo monkey placenta shows visual continuity of the freehand fetoscopic panorama. The proposed developed collaborative framework and the evaluation study of the framework variations provide analytical insights for effective integration of ultrasonography and endoscopy. This contributes to the development of navigation techniques in fetoscopic procedures and can potentially be extended to other applications in intraoperative imaging.

  8. [Interest of non invasive navigation in total knee arthroplasty].

    PubMed

    Zorman, D; Leclercq, G; Cabanas, J Juanos; Jennart, H

    2015-01-01

    During total knee arthroplasty surgery, we use a computerized non-invasive navigation system (BrainLAB VectorVision CT-free) to assess the accuracy of the bone cuts ("express navigation"). The purpose of this study is to evaluate non-invasive navigation when a total knee arthroplasty is performed with conventional instrumentation. The study is based on forty total knee arthroplasties. The accuracy of the tibial and distal femoral bone cuts, checked by non-invasive navigation, is evaluated prospectively. In our clinical series, with the conventional instrumentation we obtained a correction of the mechanical axis in only 90% of cases (N = 36). With non-invasive navigation, we improved the positioning of implants and obtained in all cases the desired axiometry in the frontal plane. Although operative time is increased by about 15 minutes, non-invasive navigation does not induce intraoperative or immediate postoperative complications. Despite the cost of this technology, we believe that the reliability of the procedure is enhanced by a simple and reproducible technique.

  9. Multidisciplinary unmanned technology teammate (MUTT)

    NASA Astrophysics Data System (ADS)

    Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark

    2013-01-01

    The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, including clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator who moved from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close to natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant and relevant to real world applications.

  10. Vision-Aided Context-Aware Framework for Personal Navigation Services

    NASA Astrophysics Data System (ADS)

    Saeedi, S.; Moussa, A.; El-Sheimy, N., , Dr.

    2012-07-01

    The ubiquity of mobile devices (such as smartphones and tablet PCs) has encouraged the use of location-based services (LBS) that are relevant to the current location and context of a mobile user. The main challenge of LBS is to find a pervasive and accurate personal navigation system (PNS) for the different situations of a mobile user. In this paper, we propose a method of personal navigation for pedestrians that allows a user to move freely in outdoor environments. The system aims at detecting context information that is useful for improving personal navigation. The context information for a PNS consists of the user activity mode (e.g., walking, stationary, driving) and the mobile device orientation and placement with respect to the user. After detecting the context information, a low-cost integrated positioning algorithm is employed to estimate the pedestrian navigation parameters. The method integrates the user's relative motion (changes in velocity and heading angle), estimated from video image matching, with the absolute position information provided by GPS. A Kalman filter (KF) is used to improve the navigation solution when the user is walking and the phone is in his/her hand. The experimental results demonstrate the capabilities of this method for outdoor personal navigation systems.
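
    The paper's filter design is not reproduced above. As a minimal, hypothetical sketch of this style of loose GPS/vision integration (the state layout, measurement models, and noise values are assumptions, not the paper's), a linear Kalman filter over a 2D constant-velocity state could look like:

        import numpy as np

        # Constant-velocity state [x, y, vx, vy]: predicted forward, then updated
        # with (a) the velocity estimated from image matching and (b) the absolute
        # GPS position. All noise values are placeholders.
        dt = 1.0
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], float)
        Q = np.diag([0.1, 0.1, 0.5, 0.5])

        H_gps = np.array([[1., 0., 0., 0.],
                          [0., 1., 0., 0.]])   # GPS observes position
        H_vis = np.array([[0., 0., 1., 0.],
                          [0., 0., 0., 1.]])   # vision observes velocity
        R_gps = np.diag([5.0, 5.0])
        R_vis = np.diag([0.2, 0.2])

        def kf_update(x, P, z, H, R):
            y = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            return x + K @ y, (np.eye(len(x)) - K @ H) @ P

        def kf_step(x, P, vis_velocity, gps_position):
            x, P = F @ x, F @ P @ F.T + Q                     # predict
            x, P = kf_update(x, P, vis_velocity, H_vis, R_vis)  # vision update
            x, P = kf_update(x, P, gps_position, H_gps, R_gps)  # GPS update
            return x, P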

  11. FLASH LIDAR Based Relative Navigation

    NASA Technical Reports Server (NTRS)

    Brazzel, Jack; Clark, Fred; Milenkovic, Zoran

    2014-01-01

    Relative navigation remains the most challenging part of spacecraft rendezvous and docking. In recent years, flash LIDARs have been increasingly selected as the go-to sensors for proximity operations and docking. Flash LIDARs are generally lighter and require less power than scanning LIDARs. Flash LIDARs do not have moving parts, and they are capable of tracking multiple targets as well as generating a 3D map of a given target. However, there are some significant drawbacks of flash LIDARs that must be resolved if their use is to be of long-term significance. Overcoming the challenges of flash LIDARs for navigation (namely, low technology readiness level, lack of historical performance data, target identification, false positives, and the performance of vision processing algorithms as intermediaries between the raw sensor data and the Kalman filter) requires a world-class testing facility, such as the Lockheed Martin Space Operations Simulation Center (SOSC). Ground-based testing is a critical step for maturing next-generation flash LIDAR-based spacecraft relative navigation. This paper will focus on the tests of an integrated relative navigation system conducted at the SOSC in January 2014. The intent of the tests was to characterize and then improve the performance of relative navigation, while addressing many of the flash LIDAR challenges mentioned above. A section on navigation performance and future recommendations completes the discussion.

  12. Combining path integration and remembered landmarks when navigating without vision.

    PubMed

    Kalia, Amy A; Schrater, Paul R; Legge, Gordon E

    2013-01-01

    This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information.

  13. Combining Path Integration and Remembered Landmarks When Navigating without Vision

    PubMed Central

    Kalia, Amy A.; Schrater, Paul R.; Legge, Gordon E.

    2013-01-01

    This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information. PMID:24039742
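
    The gated averaging described above is consistent with standard maximum-likelihood cue combination. As a worked illustration (the Gaussian-cue assumption and notation are ours, not the study's), if the remembered-landmark cue gives an estimate with variance sigma_L^2 and path integration gives an estimate with variance sigma_P^2, the combined estimate when the cues are judged congruent is

        \hat{x} = \frac{\sigma_P^2}{\sigma_L^2 + \sigma_P^2}\,\hat{x}_L
                + \frac{\sigma_L^2}{\sigma_L^2 + \sigma_P^2}\,\hat{x}_P ,
        \qquad
        \sigma^2 = \frac{\sigma_L^2\,\sigma_P^2}{\sigma_L^2 + \sigma_P^2}
                \le \min\!\left(\sigma_L^2,\ \sigma_P^2\right) ,

    so the combined variance is never larger than that of either cue alone, matching the higher precision of combined estimates reported; when the cues are judged incongruent, the gate closes and the judgment falls back on path integration.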

  14. The study of stereo vision technique for the autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Li, Pei; Wang, Xi; Wang, Jiang-feng

    2015-08-01

    Stereo vision technology, using two or more cameras, can recover 3D information within the field of view. This technology can help the autonomous navigation system of an unmanned vehicle judge pavement conditions within the field of view and measure obstacles on the road. In this paper, stereo vision technology for obstacle measurement and avoidance on an autonomous vehicle is studied, and the key techniques are analyzed and discussed. The system hardware is built, the software is debugged, and the measurement performance is illustrated with measured data. Experiments show that the scene within the field of view can be reconstructed in 3D effectively by stereo vision, providing a basis for pavement condition judgment. Compared with the radar used in unmanned vehicle navigation and measurement systems, the stereo vision system has advantages including low cost and ranging capability, and it has good application prospects.
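
    The paper's measurement pipeline is not given above. As a minimal sketch of how a rectified stereo pair yields a depth map (OpenCV block matching; the focal length, baseline, matcher parameters, and file names are placeholders, not values from the paper):

        import cv2
        import numpy as np

        # Depth from a rectified stereo pair: disparity d relates to depth Z by
        # Z = f * B / d, with focal length f (pixels) and baseline B (metres).
        f_px, baseline_m = 700.0, 0.12            # placeholder calibration values

        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point scale

        depth = np.full(disparity.shape, np.inf, np.float32)
        valid = disparity > 0
        depth[valid] = f_px * baseline_m / disparity[valid]   # metric depth per pixel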

  15. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images †

    PubMed Central

    Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao

    2017-01-01

    Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications. PMID:28604624

  16. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images.

    PubMed

    Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao

    2017-06-12

    Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the "navigation via classification" task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.
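
    The Spherical-Navi network itself is not specified above. As a rough, hypothetical PyTorch sketch of the "navigation via classification" idea (the layer sizes, input resolution, and number of heading classes are assumptions), a small CNN mapping a panorama to discrete heading directions could look like:

        import torch
        import torch.nn as nn

        NUM_HEADINGS = 8  # assumed number of discrete heading classes

        class HeadingCNN(nn.Module):
            """Classify a (3, 128, 256) spherical panorama into heading bins."""
            def __init__(self, num_classes=NUM_HEADINGS):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d((4, 8)),
                )
                self.classifier = nn.Linear(64 * 4 * 8, num_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        model = HeadingCNN()
        logits = model(torch.randn(1, 3, 128, 256))   # one (synthetic) panorama
        heading = logits.argmax(dim=1)                # predicted direction bin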

  17. Machine vision and appearance based learning

    NASA Astrophysics Data System (ADS)

    Bernstein, Alexander

    2017-03-01

    Smart algorithms are used in machine vision to organize or extract high-level information from the available data. The resulting high-level understanding of the content of images, received from a given visual sensing system and belonging to an appearance space, is only a key first step in solving specific tasks such as mobile robot navigation in uncertain environments, road detection in autonomous driving systems, etc. Appearance-based learning has become very popular in the field of machine vision. In general, the appearance of a scene is a function of the scene content, the lighting conditions, and the camera position. The mobile robot localization problem is considered in a machine learning framework via appearance-space analysis. This problem is reduced to a regression problem on an appearance manifold, and recently developed regression-on-manifolds methods are used for its solution.

  18. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    PubMed

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  19. Open-Loop Flight Testing of COBALT GN&C Technologies for Precise Soft Landing

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Amzajerdian, Farzin; Seubert, Carl R.; Restrepo, Carolina I.

    2017-01-01

    A terrestrial, open-loop (OL) flight test campaign of the NASA COBALT (CoOperative Blending of Autonomous Landing Technologies) platform was conducted onboard the Masten Xodiac suborbital rocket testbed, with support through the NASA Advanced Exploration Systems (AES), Game Changing Development (GCD), and Flight Opportunities (FO) Programs. The COBALT platform integrates NASA Guidance, Navigation and Control (GN&C) sensing technologies for autonomous, precise soft landing, including the Navigation Doppler Lidar (NDL) velocity and range sensor and the Lander Vision System (LVS) Terrain Relative Navigation (TRN) system. A specialized navigation filter running onboard COBALT fuses the NDL and LVS data in real time to produce a precise navigation solution that is independent of the Global Positioning System (GPS) and suitable for future, autonomous planetary landing systems. The OL campaign tested COBALT as a passive payload, with COBALT data collection and filter execution, but with the Xodiac vehicle Guidance and Control (G&C) loops closed on a Masten GPS-based navigation solution. The OL test was performed as a risk reduction activity in preparation for an upcoming 2017 closed-loop (CL) flight campaign in which Xodiac G&C will act on the COBALT navigation solution and the GPS-based navigation will serve only as a backup monitor.

  20. Covariance Analysis of Vision Aided Navigation by Bootstrapping

    DTIC Science & Technology

    2012-03-22

    vision aided navigation. The aircraft uses its INS estimate to geolocate ground features, track those features to aid the INS, and using that aided...development of the 2-D case, including the dynamics and measurement model development, the state space representation and the use of the Kalman filter ...reference frame. This reference frame has its origin located somewhere on an A/C. Normally the origin is set at the A/C center of gravity to allow the use

  1. Self-supervised learning as an enabling technology for future space exploration robots: ISS experiments on monocular distance learning

    NASA Astrophysics Data System (ADS)

    van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario

    2017-11-01

    Although machine learning holds enormous promise for autonomous space robots, it is currently not employed because of the inherently uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo-vision-equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth from a monocular image by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
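
    The on-board learner is not detailed above. As a toy illustration of the self-supervised pattern described (stereo-derived average depths serve as free training labels for a monocular estimator), where the feature choice, regressor, and synthetic data are all assumptions for the sketch:

        import numpy as np
        from sklearn.linear_model import Ridge

        def image_features(gray):
            """Crude monocular features: an 8x8 block-averaged intensity grid
            (a placeholder for whatever appearance features the learner uses)."""
            h, w = gray.shape
            small = gray[: h - h % 8, : w - w % 8].reshape(h // 8, 8, w // 8, 8).mean((1, 3))
            return small.ravel() / 255.0

        # Placeholder data standing in for logged frames and their stereo depth labels.
        rng = np.random.default_rng(0)
        mono_frames = [rng.integers(0, 256, (120, 160)).astype(float) for _ in range(50)]
        stereo_average_depths = rng.uniform(0.5, 3.0, 50)

        # Training: every frame where stereo works supplies a "free" label.
        X = np.array([image_features(g) for g in mono_frames])
        y = np.array(stereo_average_depths)
        model = Ridge(alpha=1.0).fit(X, y)

        # Deployment after a camera failure: estimate average depth from one camera only.
        estimated_depth = model.predict(image_features(mono_frames[-1])[None, :])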

  2. Computer-aided system for detecting runway incursions

    NASA Astrophysics Data System (ADS)

    Sridhar, Banavar; Chatterji, Gano B.

    1994-07-01

    A synthetic vision system for enhancing the pilot's ability to navigate and control the aircraft on the ground is described. The system uses the onboard airport database and images acquired by external sensors. Additional navigation information needed by the system is provided by the Inertial Navigation System and the Global Positioning System. The various functions of the system, such as image enhancement, map generation, obstacle detection, collision avoidance, guidance, etc., are identified. The available technologies, some of which were developed at NASA, that are applicable to the aircraft ground navigation problem are noted. Example images of a truck crossing the runway while the aircraft flies close to the runway centerline are described. These images are from a sequence of images acquired during one of the several flight experiments conducted by NASA to acquire data to be used for the development and verification of the synthetic vision concepts. These experiments provide a realistic database including video and infrared images, motion states from the Inertial Navigation System and the Global Positioning System, and camera parameters.

  3. 46 CFR 92.03-1 - Navigation bridge visibility.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... after September 7, 1990, must meet the following requirements: (a) The field of vision from the... obstruction must not exceed 5 degrees. (2) From the conning position, the horizontal field of vision extends... paragraph (a)(1) of this section. (3) From each bridge wing, the field of vision extends over an arc from at...

  4. Air traffic management system design using satellite based geo-positioning and communications assets

    NASA Technical Reports Server (NTRS)

    Horkin, Phil

    1995-01-01

    The current FAA and ICAO FANS vision of Air Traffic Management will transition the functions of Communications, Navigation, and Surveillance to satellite-based assets in the 21st century. Fundamental to widespread acceptance of this vision is a geo-positioning system that can provide worldwide access with best-case differential GPS performance, but without the associated problems. A robust communications capability linking up aircraft and towers to meet the voice and data requirements is also essential. The current GPS constellation does not provide continuous global coverage with a sufficient number of satellites to meet the precision landing requirements as set by the world community. Periodic loss of the minimum number of satellites in view creates an integrity problem, which prevents GPS from becoming the primary system for navigation. Furthermore, there is reluctance on the part of many countries to depend on assets like GPS and GLONASS which are controlled by military communities. This paper addresses these concerns and provides a system solving the key issues associated with navigation, automatic dependent surveillance, and flexible communications. It contains an independent GPS-like navigation system with 27 satellites providing global coverage with a minimum of six in view at all times. Robust communications is provided by a network of TDMA/FDMA communications payloads contained on these satellites. This network can support simultaneous communications for up to 30,000 links, nearly enough to simultaneously support three times the current global fleet of jumbo air passenger aircraft. All of the required hardware is directly traceable to existing designs.

  5. ROBOSIGHT: Robotic Vision System For Inspection And Manipulation

    NASA Astrophysics Data System (ADS)

    Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh

    1989-02-01

    Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.

  6. Curveslam: Utilizing Higher Level Structure In Stereo Vision-Based Navigation

    DTIC Science & Technology

    2012-01-01

    consider their application to SLAM. The work of [31] [32] develops a spline-based SLAM framework, but this is only for application to LIDAR-based SLAM ...Existing approaches to visual Simultaneous Localization and Mapping (SLAM) typically utilize points as visual feature primitives to represent landmarks...regions of interest. Further, previous SLAM techniques that propose the use of higher level structures often place constraints on the environment, such as

  7. Model-Based Control using Model and Mechanization Fusion Techniques for Image-Aided Navigation

    DTIC Science & Technology

    2009-03-01

    Magnet Motors. Magna Physics Publishing, Hillsboro, OH, 1994. 7. Houwu Bai, Xubo Song, Eric Wan and Andriy Myronenko. "Vision-only Navigation and...filter". Proceedings of the Recent Advances in Space Technologies (RAST). Nov 2003. 6. Hendershot, J.R. and Tje Miller. Design of Brushless Permanent

  8. Nonlinearity analysis of measurement model for vision-based optical navigation system

    NASA Astrophysics Data System (ADS)

    Li, Jianguo; Cui, Hutao; Tian, Yang

    2015-02-01

    In autonomous optical navigation systems based on line-of-sight vector observations, the nonlinearity of the measurement model is highly correlated with navigation performance. By quantitatively calculating the degree of nonlinearity of the focal plane model and the unit vector model, this paper focuses on determining which optical measurement model performs better. Firstly, the measurement equations and measurement noise statistics of these two line-of-sight measurement models are established based on the perspective projection collinearity equation. Then the nonlinear effects of the measurement model on filter performance are analyzed within the framework of the Extended Kalman Filter, and the degrees of nonlinearity of the two measurement models are compared using curvature measures from differential geometry. Finally, a simulation of star-tracker-based attitude determination is presented to confirm the superiority of the unit vector measurement model. Simulation results show that the magnitude of the curvature nonlinearity measure is consistent with the filter performance, and that the unit vector measurement model yields higher estimation precision and faster convergence.
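
    For reference, the two measurement parameterizations under comparison can be written in their standard forms (the notation here is chosen for illustration, not taken from the paper): with the beacon line-of-sight vector r = [x, y, z]^T expressed in the camera frame and focal length f,

        \mathbf{z}_{\mathrm{focal}} = \begin{bmatrix} u \\ v \end{bmatrix}
          = \frac{f}{z}\begin{bmatrix} x \\ y \end{bmatrix} + \boldsymbol{\nu}_{f},
        \qquad
        \mathbf{z}_{\mathrm{unit}} = \frac{\mathbf{r}}{\lVert \mathbf{r} \rVert} + \boldsymbol{\nu}_{u},

    where the nu terms are measurement noise. The division by the boresight component z in the focal-plane model is a plausible source of its larger curvature measure, whereas the unit-vector model only normalizes the observed direction, which is consistent with the superiority of the unit-vector model reported above.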

  9. Vision-Aided Inertial Navigation

    NASA Technical Reports Server (NTRS)

    Roumeliotis, Stergios I. (Inventor); Mourikis, Anastasios I. (Inventor)

    2017-01-01

    This document discloses, among other things, a system and method for implementing an algorithm to determine pose, velocity, acceleration or other navigation information using feature tracking data. The algorithm has computational complexity that is linear with the number of features tracked.

  10. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    PubMed

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.

  11. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision

    PubMed Central

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P.

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks. PMID:28900394
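
    As a minimal sketch of the image-entropy cue on which the method relies (the histogram-based entropy is standard; the decision rule and threshold value shown are illustrative, not the authors' exact policy):

        import numpy as np

        def image_entropy(gray, bins=256):
            """Shannon entropy (bits) of an 8-bit grayscale image's intensity histogram."""
            hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
            p = hist.astype(float) / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        # Illustrative decision rule: low entropy suggests a single dominant object
        # (candidate landmark); high entropy suggests a cluttered scene, treated as
        # a potential obstacle that triggers an avoidance maneuver.
        LANDMARK_ENTROPY_THRESHOLD = 4.0   # hypothetical value

        def classify_view(gray):
            if image_entropy(gray) < LANDMARK_ENTROPY_THRESHOLD:
                return "landmark-candidate"
            return "clutter-or-obstacle"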

  12. Automatic rule generation for high-level vision

    NASA Technical Reports Server (NTRS)

    Rhee, Frank Chung-Hoon; Krishnapuram, Raghu

    1992-01-01

    A new fuzzy set based technique that was developed for decision making is discussed. It is a method to generate fuzzy decision rules automatically for image analysis. This paper proposes a method to generate rule-based approaches to solve problems such as autonomous navigation and image understanding automatically from training data. The proposed method is also capable of filtering out irrelevant features and criteria from the rules.

  13. Distant touch hydrodynamic imaging with an artificial lateral line.

    PubMed

    Yang, Yingchen; Chen, Jack; Engel, Jonathan; Pandya, Saunvit; Chen, Nannan; Tucker, Craig; Coombs, Sheryl; Jones, Douglas L; Liu, Chang

    2006-12-12

    Nearly all underwater vehicles and surface ships today use sonar and vision for imaging and navigation. However, sonar and vision systems face various limitations, e.g., sonar blind zones, dark or murky environments, etc. Evolved over millions of years, fish use the lateral line, a distributed linear array of flow sensing organs, for underwater hydrodynamic imaging and information extraction. We demonstrate here a proof-of-concept artificial lateral line system. It enables a distant touch hydrodynamic imaging capability to critically augment sonar and vision systems. We show that the artificial lateral line can successfully perform dipole source localization and hydrodynamic wake detection. The development of the artificial lateral line is aimed at fundamentally enhancing human ability to detect, navigate, and survive in the underwater environment.

  14. Autonomous navigation and control of a Mars rover

    NASA Technical Reports Server (NTRS)

    Miller, D. P.; Atkinson, D. J.; Wilcox, B. H.; Mishkin, A. H.

    1990-01-01

    A Mars rover will need to be able to navigate autonomously for kilometers at a time. This paper outlines the sensing, perception, planning, and execution-monitoring systems that are currently being designed for the rover. The sensing is based around stereo vision. The interpretation of the images uses a registration of the depth map with a global height map provided by an orbiting spacecraft. Safe, low-energy paths are then planned through the map, and expectations of what the rover's articulation sensors should sense are generated. These expectations are then used to ensure that the planned path is being executed correctly.

  15. A rotorcraft flight database for validation of vision-based ranging algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.

    1992-01-01

    A helicopter flight test experiment was conducted at the NASA Ames Research Center to obtain a database consisting of video imagery and accurate measurements of camera motion, camera calibration parameters, and true range information. The database was developed to allow verification of monocular passive range estimation algorithms for use in the autonomous navigation of rotorcraft during low altitude flight. The helicopter flight experiment is briefly described. Four data sets representative of the different helicopter maneuvers and the visual scenery encountered during the flight test are presented. These data sets will be made available to researchers in the computer vision community.

  16. Audible vision for the blind and visually impaired in indoor open spaces.

    PubMed

    Yu, Xunyi; Ganz, Aura

    2012-01-01

    In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her position relative to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in real-life, ever-changing environments crowded with people.

  17. Libration Point Navigation Concepts Supporting the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Folta, David C.; Moreau, Michael C.; Quinn, David A.

    2004-01-01

    This work examines the autonomous navigation accuracy achievable for a lunar exploration trajectory from a translunar libration point lunar navigation relay satellite, augmented by signals from the Global Positioning System (GPS). We also provide a brief analysis comparing the libration point relay to lunar orbit relay architectures, and discuss some issues of GPS usage for cis-lunar trajectories.

  18. "It Just Got Real": Navigating the Affordances and Constraints of School-Based Learning in a Mathematics-Specific Induction Program

    ERIC Educational Resources Information Center

    Ticknor, Anne Swenson; Schwartz, Catherine Stein

    2017-01-01

    As beginning teachers encounter their first classrooms, they struggle to enact curriculum and negotiate expectations of local context with their visions of "good teaching." This article is a qualitative research design utilizing interview data and narrative analysis to examine the storied experiences of beginning teacher participants…

  19. Autonomous vision-based navigation for proximity operations around binary asteroids

    NASA Astrophysics Data System (ADS)

    Gil-Fernandez, Jesus; Ortega-Hernando, Guillermo

    2018-02-01

    Future missions to small bodies demand a higher level of autonomy in the Guidance, Navigation and Control system for higher scientific return and lower operational costs. Different navigation strategies have been assessed for ESA's asteroid impact mission (AIM). The main objective of AIM is the detailed characterization of binary asteroid Didymos. The trajectories for the proximity operations shall be intrinsically safe, i.e., no collision in the presence of failures (e.g., spacecraft entering safe mode), perturbations (e.g., non-spherical gravity field), and errors (e.g., maneuver execution error). Hyperbolic arcs with sufficient hyperbolic excess velocity are designed to fulfil the safety, scientific, and operational requirements. The trajectory relative to the asteroid is determined using visual camera images. The ground-based trajectory prediction error at some points is comparable to the camera Field Of View (FOV). Therefore, some images do not contain the entire asteroid. Autonomous navigation can update the state of the spacecraft relative to the asteroid at a higher frequency. The objective of the autonomous navigation is to improve the on-board knowledge compared to the ground prediction. The algorithms shall fit in off-the-shelf, space-qualified avionics. This note presents suitable image processing and relative-state filter algorithms for autonomous navigation in proximity operations around binary asteroids.

  20. Autonomous vision-based navigation for proximity operations around binary asteroids

    NASA Astrophysics Data System (ADS)

    Gil-Fernandez, Jesus; Ortega-Hernando, Guillermo

    2018-06-01

    Future missions to small bodies demand a higher level of autonomy in the Guidance, Navigation and Control system for higher scientific return and lower operational costs. Different navigation strategies have been assessed for ESA's asteroid impact mission (AIM). The main objective of AIM is the detailed characterization of binary asteroid Didymos. The trajectories for the proximity operations shall be intrinsically safe, i.e., no collision in the presence of failures (e.g., spacecraft entering safe mode), perturbations (e.g., non-spherical gravity field), and errors (e.g., maneuver execution error). Hyperbolic arcs with sufficient hyperbolic excess velocity are designed to fulfil the safety, scientific, and operational requirements. The trajectory relative to the asteroid is determined using visual camera images. The ground-based trajectory prediction error at some points is comparable to the camera Field Of View (FOV). Therefore, some images do not contain the entire asteroid. Autonomous navigation can update the state of the spacecraft relative to the asteroid at a higher frequency. The objective of the autonomous navigation is to improve the on-board knowledge compared to the ground prediction. The algorithms shall fit in off-the-shelf, space-qualified avionics. This note presents suitable image processing and relative-state filter algorithms for autonomous navigation in proximity operations around binary asteroids.

  1. Ground Simulation of an Autonomous Satellite Rendezvous and Tracking System Using Dual Robotic Systems

    NASA Technical Reports Server (NTRS)

    Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.

    2012-01-01

    A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package "Argon" is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, which is mounted on a Rotopod R-2000 platform, and relay the solution to a simulation of the servicer spacecraft running in "Freespace", which performs guidance, navigation and control functions, integrates dynamics, and issues motion commands to a Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results will be reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.

  2. ARK: Autonomous mobile robot in an industrial environment

    NASA Technical Reports Server (NTRS)

    Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.

    1994-01-01

    This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons, and the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, and a novel combined range and vision sensor, along with our recent results in controlling the robot, in the real-time detection of objects using their color, and in processing the robot's range and vision sensor data for navigation.

  3. Position Accuracy Analysis of a Robust Vision-Based Navigation

    NASA Astrophysics Data System (ADS)

    Gaglione, S.; Del Pizzo, S.; Troisi, S.; Angrisano, A.

    2018-05-01

    Using images to determine camera position and attitude is a well-established method, widespread in applications such as UAV navigation. In harsh environments, where GNSS can be degraded or denied, image-based positioning is a possible candidate for an integrated or alternative system. In this paper, such a method is investigated using a system based on a single camera and 3D maps. A robust estimation method is proposed in order to limit the effect of blunders or noisy measurements on the position solution. The proposed approach is tested using images collected in an urban canyon, where GNSS positioning is very inaccurate. A photogrammetric survey was previously performed to build the 3D model of the test area. A position accuracy analysis is performed and the effect of the proposed robust method is validated.
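
    The specific robust estimator is not reproduced above. As a sketch of the general approach (camera pose from 2D-3D correspondences against a 3D model, with outlying matches rejected), OpenCV's RANSAC-based PnP solver can serve as a stand-in; the intrinsics, point data, and thresholds below are placeholders:

        import cv2
        import numpy as np

        # points_3d: N x 3 model points matched in the image (metres)
        # points_2d: N x 2 corresponding pixel measurements
        # K: 3 x 3 camera intrinsic matrix; zero distortion assumed here.
        points_3d = (np.random.rand(20, 3) * 10.0).astype(np.float32)
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        true_rvec = np.array([0.1, -0.2, 0.05])
        true_tvec = np.array([1.0, 0.5, 8.0])
        points_2d, _ = cv2.projectPoints(points_3d, true_rvec, true_tvec, K, None)
        points_2d = points_2d.reshape(-1, 2).astype(np.float32)

        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            points_3d, points_2d, K, None,
            reprojectionError=4.0, iterationsCount=200)
        # rvec/tvec give the camera pose; `inliers` lists the measurements kept,
        # so blunders (wrong matches, noisy pixels) do not corrupt the solution.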

  4. Flight data acquisition methodology for validation of passive ranging algorithms for obstacle avoidance

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.

    1990-01-01

    The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation, for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.

  5. 2D/3D Synthetic Vision Navigation Display

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, jason L.

    2008-01-01

    Flight-deck display software was designed and developed at NASA Langley Research Center to provide two-dimensional (2D) and three-dimensional (3D) terrain, obstacle, and flight-path perspectives on a single navigation display. The objective was to optimize the presentation of synthetic vision (SV) system technology that permits pilots to view multiple perspectives of flight-deck display symbology and 3D terrain information. Research was conducted to evaluate the efficacy of the concept. The concept has numerous unique implementation features that would permit enhanced operational concepts and efficiencies in both current and future aircraft.

  6. Machine-Vision Aids for Improved Flight Operations

    NASA Technical Reports Server (NTRS)

    Menon, P. K.; Chatterji, Gano B.

    1996-01-01

    The development of machine vision based pilot aids to help reduce night approach and landing accidents is explored. The techniques developed are motivated by the desire to use the available information sources for navigation, such as the airport lighting layout, attitude sensors and the Global Positioning System, to derive more precise aircraft position and orientation information. The fact that the airport lighting geometry is known and that images of the airport lighting can be acquired by the camera has led to the synthesis of machine vision based algorithms for runway-relative aircraft position and orientation estimation. The main contribution of this research is the synthesis of seven navigation algorithms based on two broad families of solutions. The first family of solution methods consists of techniques that reconstruct the airport lighting layout from the camera image and then estimate the aircraft position components by comparing the reconstructed lighting layout geometry with the known model of the airport lighting layout geometry. The second family of methods comprises techniques that synthesize the image of the airport lighting layout using a camera model and estimate the aircraft position and orientation by comparing this image with the actual image of the airport lighting acquired by the camera. Algorithms 1 through 4 belong to the first family of solutions while Algorithms 5 through 7 belong to the second family of solutions. Algorithms 1 and 2 are parameter optimization methods, Algorithms 3 and 4 are feature correspondence methods, and Algorithms 5 through 7 are Kalman-filter-centered algorithms. Results of computer simulation are presented to demonstrate the performance of all seven algorithms developed.

  7. Image-based ranging and guidance for rotorcraft

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.

    1991-01-01

    This report documents the research carried out under NASA Cooperative Agreement No. NCC2-575 during the period Oct. 1988 - Dec. 1991. Primary emphasis of this effort was on the development of vision based navigation methods for rotorcraft nap-of-the-earth flight regime. A family of field-based ranging algorithms were developed during this research period. These ranging schemes are capable of handling both stereo and motion image sequences, and permits both translational and rotational camera motion. The algorithms require minimal computational effort and appear to be implementable in real time. A series of papers were presented on these ranging schemes, some of which are included in this report. A small part of the research effort was expended on synthesizing a rotorcraft guidance law that directly uses the vision-based ranging data. This work is discussed in the last section.

  8. Should Animals Navigating Over Short Distances Switch to a Magnetic Compass Sense?

    PubMed Central

    Wyeth, Russell C.

    2010-01-01

    Magnetoreception can play a substantial role in long distance navigation by animals. I hypothesize that locomotion guided by a magnetic compass sense could also play a role in short distance navigation. Animals identify mates, prey, or other short distance navigational goals using different sensory modalities (olfaction, vision, audition, etc.) to detect sensory cues associated with those goals. In conditions where these cues become unreliable for navigation (due to flow changes, obstructions, noise interference, etc.), switching to a magnetic compass sense to guide locomotion toward the navigational goals could be beneficial. Using simulations based on known locomotory and flow parameters, I show this strategy has strong theoretical benefits for the nudibranch mollusk Tritonia diomedea navigating toward odor sources in variable flow. A number of other animals may garner similar benefits, particularly slow-moving species in environments with rapidly changing cues relevant for navigation. Faster animals might also benefit from switching to a magnetic compass sense, provided the initial cues used for navigation (acoustic signals, odors, etc.) are intermittent or change rapidly enough that the entire navigation behavior cannot be guided by a continuously detectable cue. Examination of the relative durations of navigational tasks, the persistence of navigational cues, and the stability of both navigators and navigational targets will identify candidates with the appropriate combination of unreliable initial cues and relatively immobile navigational goals for which this hypothetical behavior could be beneficial. Magnetic manipulations can then test whether a switch to a magnetic compass sense occurs. This hypothesis thus provides an alternative when considering the behavioral significance of a magnetic compass sense in animals. PMID:20740070

  9. Learning Long-Range Vision for an Offroad Robot

    DTIC Science & Technology

    2008-09-01

    Teaching a robot to perceive and navigate in an unstructured natural world is a difficult task. Without learning, navigation systems are short-range and extremely... unsupervised or weakly supervised learning methods are necessary for training general feature representations for natural scenes.

  10. Drift-Free Indoor Navigation Using Simultaneous Localization and Mapping of the Ambient Heterogeneous Magnetic Field

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.

    2017-09-01

    In the absence of external reference position information (e.g. surveyed targets or Global Navigation Satellite Systems) Simultaneous Localization and Mapping (SLAM) has proven to be an effective method for indoor navigation. The positioning drift can be reduced with regular loop-closures and global relaxation as the backend, thus achieving a good balance between exploration and exploitation. Although vision-based systems like laser scanners are typically deployed for SLAM, these sensors are heavy, energy inefficient, and expensive, making them unattractive for wearables or smartphone applications. However, the concept of SLAM can be extended to non-optical systems such as magnetometers. Instead of matching features such as walls and furniture using some variation of the Iterative Closest Point algorithm, the local magnetic field can be matched to provide loop-closure and global trajectory updates in a Gaussian Process (GP) SLAM framework. With a MEMS-based inertial measurement unit providing a continuous trajectory, and the matching of locally distinct magnetic field maps, experimental results in this paper show that a drift-free navigation solution in an indoor environment with millimetre-level accuracy can be achieved. The GP-SLAM approach presented can be formulated as a maximum a posteriori estimation problem and it can naturally perform loop-detection, feature-to-feature distance minimization, global trajectory optimization, and magnetic field map estimation simultaneously. Spatially continuous features (i.e. smooth magnetic field signatures) are used instead of discrete feature correspondences (e.g. point-to-point) as in conventional vision-based SLAM. These position updates from the ambient magnetic field also provide enough information for calibrating the accelerometer bias and gyroscope bias in-use. The only restriction for this method is the need for magnetic disturbances (which is typically not an issue for indoor environments); however, no assumptions are required for the general motion of the sensor (e.g. static periods).
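
    The GP-SLAM formulation itself is not reproduced above. As a toy stand-in for its field-map component, an ordinary Gaussian-process regression over position can illustrate how a locally distinct magnetic signature becomes a queryable map (the kernel choice, noise settings, and synthetic data are assumptions):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Toy data: magnetic field magnitude sampled along a walked 2D trajectory.
        rng = np.random.default_rng(1)
        positions = rng.uniform(0.0, 10.0, size=(200, 2))                 # metres
        field = (50.0 + 5.0 * np.sin(positions[:, 0]) * np.cos(positions[:, 1])
                 + rng.normal(0.0, 0.3, 200))                             # microtesla

        kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
        gp = GaussianProcessRegressor(kernel=kernel).fit(positions, field)

        # Querying the learned map: predicted field and uncertainty at new positions.
        # Revisiting a place whose predicted signature matches the live measurement
        # is what provides the loop-closure information described above.
        query = np.array([[2.0, 3.0], [7.5, 1.2]])
        mean, std = gp.predict(query, return_std=True)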

  11. A Depth-Based Head-Mounted Visual Display to Aid Navigation in Partially Sighted Individuals

    PubMed Central

    Hicks, Stephen L.; Wilson, Iain; Muhammed, Louwai; Worsfold, John; Downes, Susan M.; Kennard, Christopher

    2013-01-01

    Independent navigation for blind individuals can be extremely difficult due to the inability to recognise and avoid obstacles. Assistive techniques such as white canes, guide dogs, and sensory substitution provide a degree of situational awareness by relying on touch or hearing, but as yet there are no techniques that attempt to make use of any residual vision that the individual is likely to retain. Residual vision can be restricted to awareness of the orientation of a light source, and hence any information presented on a wearable display would have to be limited and unambiguous. For improved situational awareness, i.e. for the detection of obstacles, displaying the size and position of nearby objects, rather than finer surface details, may be sufficient. To test whether a depth-based display could be used to navigate a small obstacle course, we built a real-time head-mounted display with a depth camera and software to detect the distance to nearby objects. Distance was represented as brightness on a low-resolution display positioned close to the eyes without the benefit of focusing optics. A set of sighted participants were monitored as they learned to use this display to navigate the course. All were able to do so, and time and velocity rapidly improved with practice, with no increase in the number of collisions. In a second experiment a cohort of severely sight-impaired individuals of varying aetiologies performed a search task using a similar low-resolution head-mounted display. The majority of participants were able to use the display to respond to objects in their central and peripheral fields at a similar rate to sighted controls. We conclude that the skill to use a depth-based display for obstacle avoidance can be rapidly acquired and the simplified nature of the display may be appropriate for the development of an aid for sight-impaired individuals. PMID:23844067
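
    A minimal sketch of the distance-to-brightness mapping described (the display resolution, depth range, and inverse mapping are assumptions, not the study's exact parameters):

        import numpy as np

        def depth_to_display(depth_m, out_shape=(8, 16), near=0.5, far=4.0):
            """Convert a metric depth image into a low-resolution brightness image:
            nearer objects appear brighter, and anything beyond `far` stays dark.
            All parameter values are illustrative only."""
            d = np.clip(depth_m, near, far)
            brightness = 1.0 - (d - near) / (far - near)      # 1 = near, 0 = far
            bh, bw = out_shape
            sh, sw = depth_m.shape[0] // bh, depth_m.shape[1] // bw
            trimmed = brightness[: bh * sh, : bw * sw]
            coarse = trimmed.reshape(bh, sh, bw, sw).mean(axis=(1, 3))   # block average
            return (coarse * 255).astype(np.uint8)

        # Example: a synthetic depth frame with a nearby object on the left side.
        depth = np.full((240, 320), 3.5)
        depth[:, :120] = 1.0
        display_frame = depth_to_display(depth)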

  12. Exploration, anxiety, and spatial memory in transgenic anophthalmic mice.

    PubMed

    Buhot, M C; Dubayle, D; Malleret, G; Javerzat, S; Segu, L

    2001-04-01

    Contradictory results are found in the literature concerning the role of vision in the perception of space or in spatial navigation, in part because of the lack of murine models of total blindness used so far. The authors evaluated the spatial abilities of anophthalmic transgenic mice. These mice did not differ qualitatively from their wild-type littermates in general locomotor activity, spontaneous alternation, object exploration, or anxiety, but their level of exploratory activity was generally lower. In the spatial version of the water maze, they displayed persistent thigmotaxic behavior and showed severe spatial learning impairments. However, their performances improved with training, suggesting that they may have acquired a rough representation of the platform position. These results suggest that modalities other than vision enable some degree of spatial processing in proximal and structured spaces but that vision is critical for accurate spatial navigation.

  13. Polarization Imaging and Insect Vision

    ERIC Educational Resources Information Center

    Green, Adam S.; Ohmann, Paul R.; Leininger, Nick E.; Kavanaugh, James A.

    2010-01-01

    For several years we have included discussions about insect vision in the optics units of our introductory physics courses. This topic is a natural extension of demonstrations involving Brewster's reflection and Rayleigh scattering of polarized light because many insects heavily rely on optical polarization for navigation and communication.…

  14. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    NASA Astrophysics Data System (ADS)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are an emerging technology in the avionics domain. Several studies have demonstrated enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, the primary flight display (PFD) and the navigation display (ND) have undergone steady change and evolution. The main improvements of the ND comprise the representation of colored ground proximity warning system (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of having a 3D perspective view in an SVS-PFD while leaving the navigational content and methods of interaction unchanged, the question arises whether and how the gap between the two displays might become a serious problem. This issue becomes important in relation to the transition and combination of strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views generally, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, will be discussed. Further, a concept for the integration of a 3D perspective view, i.e., a bird's eye view, into a synthetic vision ND will be presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness and might further raise the safety margin when operating in mountainous areas.

  15. [Spectral navigation technology and its application in positioning the fruits of fruit trees].

    PubMed

    Yu, Xiao-Lei; Zhao, Zhi-Min

    2010-03-01

    An innovative spectral navigation technology is presented in this paper. The new method uses the reflectance spectra of fruits, leaves and branches as key navigation parameters and positions the fruits of fruit trees based on differences in their spectral characteristics. The results show that the spectrum of fruit-tree leaves is distinctly smooth, the spectrum of branches shows a gradually increasing trend, and the spectrum of fruit fluctuates. In addition, the difference in reflectance between fruits and leaves peaks at a wavelength of 850 nm, so a threshold can be set at this wavelength in order to distinguish fruits from leaves. The method introduced here can not only quickly distinguish fruits, leaves and branches, but also avoid the effects of the surroundings. Compared with traditional navigation systems based on machine vision, spectral navigation technology offers special and unique capabilities for positioning the fruits of fruit trees.
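
    As a toy illustration of the 850 nm discrimination rule (the threshold value and the assumption that leaves reflect more strongly than fruit at 850 nm are hypothetical; the paper only reports that the fruit/leaf reflectance difference peaks there):

        import numpy as np

        REFLECTANCE_THRESHOLD_850 = 0.45   # hypothetical limit value

        def classify_at_850nm(wavelengths_nm, spectrum):
            """Classify a measured reflectance spectrum as 'leaf' or 'fruit'
            using only the reflectance at (or nearest to) 850 nm."""
            idx = int(np.argmin(np.abs(np.asarray(wavelengths_nm) - 850.0)))
            return "leaf" if spectrum[idx] > REFLECTANCE_THRESHOLD_850 else "fruit"

        # Example with synthetic spectra sampled every 10 nm from 400-1000 nm.
        wavelengths = np.arange(400, 1001, 10)
        leaf_like = np.linspace(0.1, 0.7, wavelengths.size)
        print(classify_at_850nm(wavelengths, leaf_like))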

  16. [Impairment of safety in navigation caused by alcohol: impact on visual function].

    PubMed

    Grütters, G; Reichelt, J A; Ritz-Timme, S; Thome, M; Kaatsch, H J

    2003-05-01

    So far in Germany, no legally binding blood alcohol limits exist that establish an impairment of the ability to navigate. The aim of our interdisciplinary project was to obtain data in order to identify critical blood alcohol limits. In this context the visual system seems to be of decisive importance. Twenty-one professional skippers underwent realistic navigational tasks in a sea traffic simulator, once sober and once under the influence of alcohol. The following parameters were considered: visual acuity, stereopsis, color vision, and accommodation. Under the influence of alcohol (average blood alcohol concentration: 1.08 per thousand), each skipper considered himself to be completely capable of navigating. While the simulations were running, all of the skippers made nautical mistakes or underestimated dangerous situations. Severe impairment of visual acuity or binocular function was not observed. Accommodation decreased by an average of 18% (p=0.0001). In the color vision test, the skippers made more mistakes (p=0.017) and the time needed for this test was prolonged (p=0.004). Changes in visual function as well as vegetative and psychological reactions could be the cause of the mistakes, and alcohol should therefore be regarded as a severe risk factor for safety in sea navigation.

  17. Vision for navigation: What can we learn from ants?

    PubMed

    Graham, Paul; Philippides, Andrew

    2017-09-01

    The visual systems of all animals are used to provide information that can guide behaviour. In some cases insects demonstrate particularly impressive visually-guided behaviour and then we might reasonably ask how the low-resolution vision and limited neural resources of insects are tuned to particular behavioural strategies. Such questions are of interest to both biologists and to engineers seeking to emulate insect-level performance with lightweight hardware. One behaviour that insects share with many animals is the use of learnt visual information for navigation. Desert ants, in particular, are expert visual navigators. Across their foraging life, ants can learn long idiosyncratic foraging routes. What's more, these routes are learnt quickly and the visual cues that define them can be implemented for guidance independently of other social or personal information. Here we review the style of visual navigation in solitary foraging ants and consider the physiological mechanisms that underpin it. Our perspective is to consider that robust navigation comes from the optimal interaction between behavioural strategy, visual mechanisms and neural hardware. We consider each of these in turn, highlighting the value of ant-like mechanisms in biomimetic endeavours. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  18. Indoor Navigation by People with Visual Impairment Using a Digital Sign System

    PubMed Central

    Legge, Gordon E.; Beckmann, Paul J.; Tjan, Bosco S.; Havey, Gary; Kramer, Kevin; Rolkosky, David; Gage, Rachel; Chen, Muzi; Puchakayala, Sravan; Rangarajan, Aravindhan

    2013-01-01

    There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects—blind, low vision, blindfolded sighted, and normally sighted controls—were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and in following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment. PMID:24116156

  19. Assistive obstacle detection and navigation devices for vision-impaired users.

    PubMed

    Ong, S K; Zhang, J; Nee, A Y C

    2013-09-01

    Quality of life for the visually impaired is an urgent worldwide issue that needs to be addressed. Obstacle detection is one of the most important navigation tasks for the visually impaired. In this research, a novel range sensor placement scheme is proposed for the development of obstacle detection devices. Based on this scheme, two prototypes have been developed targeting different user groups. This paper discusses the design issues, functional modules and the evaluation tests carried out for both prototypes. Implications for Rehabilitation: Visual impairment is becoming more prevalent due to the worldwide ageing population. Individuals with visual impairment require assistance from assistive devices in daily navigation tasks. Traditional assistive devices that assist navigation may have certain drawbacks, such as the limited sensing range of a white cane. Obstacle detection devices applying range sensor technology can identify road conditions at a longer sensing range and notify users of potential dangers in advance.

  20. Vision-Based Navigation and Parallel Computing

    DTIC Science & Technology

    1990-08-01

    Available excerpt: the report cites B. Kamgar-Parsi and B. Kamgar-Parsi, "On Problem Solving with Hopfield Neural Networks" (CAR-TR-462); notes that hypercube connections support logarithmic implementations of fundamental parallel algorithms such as grid permutations and scans; and describes the use of virtual processors to represent an orthogonal projection grid and projections of the six-dimensional pose space.

  1. A Navigation System for the Visually Impaired: A Fusion of Vision and Depth Sensor

    PubMed Central

    Kanwal, Nadia; Bostanci, Erkan; Currie, Keith; Clark, Adrian F.

    2015-01-01

    For a number of years, scientists have been trying to develop aids that can make visually impaired people more independent and aware of their surroundings. Computer-based automatic navigation tools are one example of this, motivated by the increasing miniaturization of electronics and the improvement in processing power and sensing capabilities. This paper presents a complete navigation system based on low cost and physically unobtrusive sensors such as a camera and an infrared sensor. The system is based around corners and depth values from Kinect's infrared sensor. Obstacles are found in images from a camera using corner detection, while input from the depth sensor provides the corresponding distance. The combination is both efficient and robust. The system not only identifies hurdles but also suggests a safe path (if available) to the left or right side and tells the user to stop, move left, or move right. The system has been tested in real time by both blindfolded and blind people at different indoor and outdoor locations, demonstrating that it operates adequately. PMID:27057135
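
    A minimal sketch of how corner detection and a co-registered depth map might be combined to issue the stop/left/right advice described above; the OpenCV call is standard, but the depth threshold, corner parameters, and decision rule are illustrative assumptions rather than the authors' implementation.

      import cv2
      import numpy as np

      def suggest_direction(gray_image, depth_m, obstacle_range_m=1.5):
          """Detect corner features, look up their depth, and suggest a direction.
          depth_m is a per-pixel depth map in metres aligned with gray_image."""
          corners = cv2.goodFeaturesToTrack(gray_image, maxCorners=200,
                                            qualityLevel=0.01, minDistance=10)
          if corners is None:
              return "forward"
          h, w = gray_image.shape
          near_left, near_right = 0, 0
          for cx, cy in corners.reshape(-1, 2):
              x, y = int(cx), int(cy)
              if depth_m[y, x] < obstacle_range_m:   # corner belongs to a close obstacle
                  if x < w // 2:
                      near_left += 1
                  else:
                      near_right += 1
          if near_left == 0 and near_right == 0:
              return "forward"
          if near_left and near_right:
              return "stop"
          return "move right" if near_left else "move left"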

  2. Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor.

    PubMed

    Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung

    2017-08-30

    Unmanned aerial vehicles (UAVs), which are commonly known as drones, have proved to be useful not only on battlefields where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers during the navigation and control loop, which allows for smart GPS features of drone navigation. However, there are problems if the drones operate in heterogeneous areas with no GPS signal, so it is important to perform research into the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments.

  3. A spatial registration method for navigation system combining O-arm with spinal surgery robot

    NASA Astrophysics Data System (ADS)

    Bai, H.; Song, G. L.; Zhao, Y. W.; Liu, X. Z.; Jiang, Y. X.

    2018-05-01

    Minimally invasive spinal surgery has become increasingly popular in recent years because it reduces the chance of post-operative complications. However, the procedure is complicated and the surgical field of view in minimally invasive surgery is limited. In order to increase the quality of percutaneous pedicle screw placement, the O-arm, a mobile intraoperative imaging system, is used to assist surgery. With the extensive use of the O-arm, robot navigation systems combined with it are also becoming more common. One of the major problems in a surgical navigation system is to associate the patient space with the intra-operative image space. This study proposes a spatial registration method for a spinal surgical robot navigation system, which uses the O-arm to scan a calibration phantom containing metal calibration spheres. First, metal artifacts were reduced in the CT slices, and then the circles in the images were identified based on moment invariants. From these, the positions of the calibration spheres in the image space were obtained. The registration matrix was then computed using the ICP algorithm. Finally, the position error was calculated to verify the feasibility and accuracy of the registration method.
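
    Once the calibration spheres have been paired between image space and patient space, the rigid transform can be estimated in closed form with the SVD-based (Kabsch) solution that also underlies each ICP iteration; the sketch below is a generic version of that step, with illustrative function names, not the study's code.

      import numpy as np

      def rigid_registration(image_pts, patient_pts):
          """Estimate rotation R and translation t mapping image-space points to
          patient-space points (both N x 3, assumed already paired)."""
          ci = image_pts.mean(axis=0)
          cp = patient_pts.mean(axis=0)
          H = (image_pts - ci).T @ (patient_pts - cp)          # cross-covariance
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
          R = Vt.T @ D @ U.T
          t = cp - R @ ci
          return R, t

      def registration_error(image_pts, patient_pts, R, t):
          """Mean point-to-point residual after applying the estimated transform."""
          residuals = patient_pts - (image_pts @ R.T + t)
          return np.sqrt((residuals ** 2).sum(axis=1)).mean()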

  4. Calibration Of An Omnidirectional Vision Navigation System Using An Industrial Robot

    NASA Astrophysics Data System (ADS)

    Oh, Sung J.; Hall, Ernest L.

    1989-09-01

    The characteristics of an omnidirectional vision navigation system were studied to determine position accuracy for the navigation and path control of a mobile robot. Experiments for calibration and other parameters were performed using an industrial robot to conduct repetitive motions. The accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor provided errors of less than 1 pixel on each axis. Linearity between zenith angle and image location was tested at four different locations. Angular error of less than 1° and radial error of less than 1 pixel were observed at moderate speed variations. The experimental information and the test of coordinated operation of the equipment provide understanding of characteristics as well as insight into the evaluation and improvement of the prototype dynamic omnivision system. The calibration of the sensor is important since the accuracy of navigation influences the accuracy of robot motion. This sensor system is currently being developed for a robot lawn mower; however, wider applications are obvious. The significance of this work is that it adds to the knowledge of the omnivision sensor.

  5. Application of digital human modeling and simulation for vision analysis of pilots in a jet aircraft: a case study.

    PubMed

    Karmakar, Sougata; Pal, Madhu Sudan; Majumdar, Deepti; Majumdar, Dhurjati

    2012-01-01

    Ergonomic evaluation of visual demands becomes crucial for operators/users when rapid decision making is needed under extreme time constraints, as in the navigation task of a jet aircraft. The research reported here comprises an ergonomic evaluation of pilots' vision in a jet aircraft in a virtual environment to demonstrate how the vision analysis tools of digital human modeling software can be used effectively for such studies. Three dynamic digital pilot models, representative of the smallest, average and largest Indian pilot population, were generated from an anthropometric database and interfaced with a digital prototype of the cockpit in Jack software for analysis of vision within and outside the cockpit. Vision analysis tools such as view cones, eye view windows, blind spot areas, obscuration zones and reflection zones were employed during the evaluation of visual fields. The vision analysis tools were also used to study kinematic changes of the pilot's body joints during simulated gazing activity. From the present study, it can be concluded that the vision analysis tools of digital human modeling software are very effective in evaluating the position and alignment of displays and controls in the workstation, based upon their priorities within the visual fields and the anthropometry of the targeted users, long before the development of a physical prototype.

  6. Landmark-aided localization for air vehicles using learned object detectors

    NASA Astrophysics Data System (ADS)

    DeAngelo, Mark Patrick

    This research presents two methods to localize an aircraft without GPS using fixed landmarks observed from an optical sensor. Onboard absolute localization is useful for vehicle navigation free from an external network. The objective is to achieve practical navigation performance using available autopilot hardware and a downward pointing camera. The first method uses computer vision cascade object detectors, which are trained to detect predetermined, distinct landmarks prior to a flight. The first method also concurrently explores aircraft localization using roads between landmark updates. During a flight, the aircraft navigates with attitude, heading, airspeed, and altitude measurements and obtains measurement updates when landmarks are detected. The sensor measurements and landmark coordinates extracted from the aircraft's camera images are fused in an unscented Kalman filter to obtain an estimate of the aircraft's position and wind velocities. The second method uses computer vision object detectors to detect abundant generic landmarks, referred to as buildings, fields, trees, and road intersections, from aerial perspectives. Various landmark attributes and spatial relationships to other landmarks are used to help associate observed landmarks with reference landmarks. The computer vision algorithms automatically extract reference landmarks from maps, which are processed offline before a flight. During a flight, the aircraft navigates with attitude, heading, airspeed, and altitude measurements and obtains measurement corrections by processing aerial photos with similar generic landmark detection techniques. The method also fuses sensor measurements and landmark coordinates in an unscented Kalman filter to obtain an estimate of the aircraft's position and wind velocities.

  7. Shape Perception and Navigation in Blind Adults

    PubMed Central

    Gori, Monica; Cappagli, Giulia; Baud-Bovy, Gabriel; Finocchietti, Sara

    2017-01-01

    Different sensory systems interact to generate a representation of space and to navigate. Vision plays a critical role in the representation of space development. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available and navigation therefore lacks this important sensory signal. In blind individuals, compensatory mechanisms can be adopted to improve spatial and navigation skills. On the other hand, the limitations of these compensatory mechanisms are not completely clear. Both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we develop a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. During the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals. The first is the tendency to compress the shapes reproduced during navigation. The second is the difficulty to recognize complex audio stimuli, and finally, the third is the difficulty in reproducing the desired shape: early blind participants occasionally reported perceiving a square but they actually reproduced a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to lack of visual input during the early period of development. PMID:28144226

  8. A Bionic Polarization Navigation Sensor and Its Calibration Method.

    PubMed

    Zhao, Huijie; Xu, Wujian

    2016-08-03

    The polarization patterns of skylight which arise due to the scattering of sunlight in the atmosphere can be used by many insects for deriving compass information. Inspired by insects' polarized light compass, scientists have developed a new kind of navigation method. One of the key techniques in this method is the polarimetric sensor which is used to acquire direction information from skylight. In this paper, a polarization navigation sensor is proposed which imitates the working principles of the polarization vision systems of insects. We introduce the optical design and mathematical model of the sensor. In addition, a calibration method based on variable substitution and non-linear curve fitting is proposed. The results obtained from the outdoor experiments provide support for the feasibility and precision of the sensor. The sensor's signal processing can be well described using our mathematical model. A relatively high degree of accuracy in polarization measurement can be obtained without any error compensation.
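
    A sketch of the non-linear curve-fitting step, assuming a Malus-law-style channel response (gain, offset and phase); the model, the synthetic calibration data and the parameter values are assumptions for illustration, since the paper's exact sensor model and variable substitution are not reproduced here.

      import numpy as np
      from scipy.optimize import curve_fit

      # Assumed photodiode response for one polarization-analysis channel:
      # a Malus-law term plus gain and offset.
      def channel_response(angle_rad, gain, offset, phase):
          return gain * np.cos(angle_rad - phase) ** 2 + offset

      # Hypothetical calibration data: rotate the sensor under a known polarizer.
      angles = np.deg2rad(np.arange(0, 180, 10))
      readings = channel_response(angles, 2.0, 0.1, np.deg2rad(30)) \
                 + 0.02 * np.random.randn(angles.size)

      params, _ = curve_fit(channel_response, angles, readings, p0=[1.0, 0.0, 0.0])
      print("gain, offset, phase(deg):", params[0], params[1], np.rad2deg(params[2]))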

  9. A Bionic Polarization Navigation Sensor and Its Calibration Method

    PubMed Central

    Zhao, Huijie; Xu, Wujian

    2016-01-01

    The polarization patterns of skylight which arise due to the scattering of sunlight in the atmosphere can be used by many insects for deriving compass information. Inspired by insects’ polarized light compass, scientists have developed a new kind of navigation method. One of the key techniques in this method is the polarimetric sensor which is used to acquire direction information from skylight. In this paper, a polarization navigation sensor is proposed which imitates the working principles of the polarization vision systems of insects. We introduce the optical design and mathematical model of the sensor. In addition, a calibration method based on variable substitution and non-linear curve fitting is proposed. The results obtained from the outdoor experiments provide support for the feasibility and precision of the sensor. The sensor’s signal processing can be well described using our mathematical model. A relatively high degree of accuracy in polarization measurement can be obtained without any error compensation. PMID:27527171

  10. Experimental Semiautonomous Vehicle

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.; Mishkin, Andrew H.; Litwin, Todd E.; Matthies, Larry H.; Cooper, Brian K.; Nguyen, Tam T.; Gat, Erann; Gennery, Donald B.; Firby, Robert J.; Miller, David P.

    1993-01-01

    Semiautonomous rover vehicle serves as testbed for evaluation of navigation and obstacle-avoidance techniques. Designed to traverse variety of terrains. Concepts developed applicable to robots for service in dangerous environments as well as to robots for exploration of remote planets. Called Robby, vehicle 4 m long and 2 m wide, with six 1-m-diameter wheels. Mass of 1,200 kg and surmounts obstacles as large as 1 1/2 m. Optimized for development of machine-vision-based strategies and equipped with complement of vision and direction sensors and image-processing computers. Front and rear cabs steer and roll with respect to centerline of vehicle. Vehicle also pivots about central axle, so wheels comply with almost any terrain.

  11. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area scene can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, pilots and aircrew can get a vivid 3D view of the approach area of the flight destination. By using this system during preflight preparation, the aircrew can obtain more vivid information about the approach area of the flight destination. This system can improve the aviator's self-confidence before carrying out the flight mission and accordingly improve flight safety. The system is also useful for validating visual flight procedure designs and assists in flight procedure design.

  12. On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation

    DTIC Science & Technology

    2015-03-01

    Available excerpt: the thesis addresses the integration of medium wave infrared cameras for vision-based navigation (abbreviations include SWIR, short wave infrared, and WPAFB, Wright Patterson Air Force Base). Visual Structure from Motion (VisualSFM) is an application that performs incremental structure from motion on images of a scene; when the scene changes too drastically between frames, VisualSFM begins creating a new model with images that do not fit the old one.

  13. Target Acquisition for Projectile Vision-Based Navigation

    DTIC Science & Technology

    2014-03-01

    Available excerpt: the report's contents include simulation results and a derivation of the ground resolution of a diffraction-limited pinhole camera, with figures on visual acquisition and target recognition. The geometry between the projectile and the target, measured in terms of the look angle, depends on the target heading.

  14. Improving Real World Performance of Vision Aided Navigation in a Flight Environment

    DTIC Science & Technology

    2016-09-15

    Available excerpt (table of contents): Wide Area Search Extent; Large-Scale Image Navigation Histogram Filter, including a location model, a measurement model, the histogram filter and its iteration; Implementation and Flight Test Campaign, including the software implementation.

  15. Drogue detection for vision-based autonomous aerial refueling via low rank and sparse decomposition with multiple features

    NASA Astrophysics Data System (ADS)

    Gao, Shibo; Cheng, Yongmei; Song, Chunhua

    2013-09-01

    Vision-based probe-and-drogue autonomous aerial refueling is a demanding technology in modern aviation for both manned and unmanned aircraft. A key issue is to determine the relative orientation and position of the drogue and the probe accurately for the relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is challenging due to the disorderly motion of the drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, the problem of drogue detection is treated as a moving object detection problem. A drogue detection algorithm based on low rank and sparse decomposition with local multiple features is proposed. The global and local information of the drogue is introduced into the detection model in a unified way. Experimental results on real autonomous aerial refueling videos show that the proposed drogue detection algorithm is effective.
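
    The low rank plus sparse idea can be sketched with a generic principal component pursuit iteration, where each column of the data matrix is a vectorized frame, the low-rank part captures the slowly varying background and the sparse part captures the moving drogue; this is a textbook robust PCA baseline with conventional parameter defaults, not the paper's multi-feature formulation.

      import numpy as np

      def soft_threshold(X, tau):
          return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

      def rpca(D, lam=None, mu=None, iters=100):
          """Split D (each column a vectorized frame) into a low-rank background L
          and a sparse foreground S via principal component pursuit."""
          m, n = D.shape
          lam = lam or 1.0 / np.sqrt(max(m, n))
          mu = mu or 0.25 * m * n / np.abs(D).sum()
          L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
          for _ in range(iters):
              # Low-rank update: singular value thresholding of D - S + Y/mu
              U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
              L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
              # Sparse update: entrywise shrinkage
              S = soft_threshold(D - L + Y / mu, lam / mu)
              Y = Y + mu * (D - L - S)      # dual ascent on the constraint D = L + S
          return L, S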

  16. Vision-Based Real-Time Traversable Region Detection for Mobile Robot in the Outdoors.

    PubMed

    Deng, Fucheng; Zhu, Xiaorui; He, Chao

    2017-09-13

    Environment perception is essential for autonomous mobile robots in human-robot coexisting outdoor environments. One of the important tasks for such intelligent robots is to autonomously detect the traversable region in an unstructured 3D real world. The main drawback of most existing methods is that of high computational complexity. Hence, this paper proposes a binocular vision-based, real-time solution for detecting traversable region in the outdoors. In the proposed method, an appearance model based on multivariate Gaussian is quickly constructed from a sample region in the left image adaptively determined by the vanishing point and dominant borders. Then, a fast, self-supervised segmentation scheme is proposed to classify the traversable and non-traversable regions. The proposed method is evaluated on public datasets as well as a real mobile robot. Implementation on the mobile robot has shown its ability in the real-time navigation applications.
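
    The appearance-model step can be sketched as fitting a multivariate Gaussian to feature vectors from the sample region and labelling pixels by Mahalanobis distance; the feature choice, regularization and threshold below are illustrative assumptions, not the paper's tuned values.

      import numpy as np

      def fit_appearance_model(sample_pixels):
          """Fit a multivariate Gaussian to feature vectors (N x d) taken from the
          sample region determined by the vanishing point and dominant borders."""
          mean = sample_pixels.mean(axis=0)
          cov = np.cov(sample_pixels, rowvar=False) + 1e-6 * np.eye(sample_pixels.shape[1])
          return mean, np.linalg.inv(cov)

      def traversable_mask(image_features, mean, cov_inv, threshold=9.0):
          """Label pixels whose squared Mahalanobis distance to the model is small.
          image_features is H x W x d; the threshold value is illustrative."""
          diff = image_features - mean
          d2 = np.einsum('hwi,ij,hwj->hw', diff, cov_inv, diff)
          return d2 < threshold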

  17. The Role of X-Rays in Future Space Navigation and Communication

    NASA Technical Reports Server (NTRS)

    Winternitz, Luke M. B.; Gendreau, Keith C.; Hasouneh, Monther A.; Mitchell, Jason W.; Fong, Wai H.; Lee, Wing-Tsz; Gavriil, Fotis; Arzoumanian, Zaven

    2013-01-01

    In the near future, applications using X-rays will enable autonomous navigation and time distribution throughout the solar system, high capacity and low-power space data links, highly accurate attitude sensing, and extremely high-precision formation flying capabilities. Each of these applications alone has the potential to revolutionize mission capabilities, particularly beyond Earth orbit. This paper will outline the NASA Goddard Space Flight Center vision and efforts toward realizing the full potential of X-ray navigation and communications.

  18. Navigation-guided optic canal decompression for traumatic optic neuropathy: Two case reports.

    PubMed

    Bhattacharjee, Kasturi; Serasiya, Samir; Kapoor, Deepika; Bhattacharjee, Harsha

    2018-06-01

    Two cases of traumatic optic neuropathy presented with profound loss of vision. Both cases had received a course of intravenous corticosteroids elsewhere but did not improve. They underwent navigation-guided optic canal decompression via an external transcaruncular approach, following which both cases showed visual improvement. Postoperative visual evoked potentials and optical coherence tomography of the retinal nerve fibre layer showed improvement. These case reports emphasize the role of stereotactic navigation technology for optic canal decompression in cases of traumatic optic neuropathy.

  19. It's not black or white—on the range of vision and echolocation in echolocating bats

    PubMed Central

    Boonman, Arjan; Bar-On, Yinon; Cvikel, Noam; Yovel, Yossi

    2013-01-01

    Around 1000 species of bats in the world use echolocation to navigate, orient, and detect insect prey. Many of these bats emerge from their roost at dusk and start foraging when there is still light available. It is, however, unclear in what way and to what extent navigation, or even prey detection, in these bats is aided by vision. Here we compare the echolocation and visual detection ranges of two such species of bats which rely on different foraging strategies (Rhinopoma microphyllum and Pipistrellus kuhlii). We find that echolocation is better than vision for detecting small insects even in intermediate light levels (1–10 lux), while vision is advantageous for monitoring far-away landscape elements in both species. We thus hypothesize that bats constantly integrate information acquired by the two sensory modalities. We suggest that during evolution, echolocation was refined to detect increasingly small targets in conjunction with using vision. To do so, the ability to hear ultrasonic sound is a prerequisite, which was readily available in small mammals but absent in many other animal groups. The ability to exploit ultrasound to detect very small targets, such as insects, has opened up a large nocturnal niche to bats and may have spurred diversification in both echolocation and foraging tactics. PMID:24065924

  20. Activation of the Hippocampal Complex during Tactile Maze Solving in Congenitally Blind Subjects

    ERIC Educational Resources Information Center

    Gagnon, Lea; Schneider, Fabien C.; Siebner, Hartwig R.; Paulson, Olaf B.; Kupers, Ron; Ptito, Maurice

    2012-01-01

    Despite their lack of vision, congenitally blind subjects are able to build and manipulate cognitive maps for spatial navigation. It is assumed that they thereby rely more heavily on echolocation, proprioceptive signals and environmental cues such as ambient temperature and audition to compensate for their lack of vision. Little is known, however,…

  1. Tracking Control of Mobile Robots Localized via Chained Fusion of Discrete and Continuous Epipolar Geometry, IMU and Odometry.

    PubMed

    Tick, David; Satici, Aykut C; Shen, Jinglin; Gans, Nicholas

    2013-08-01

    This paper presents a novel navigation and control system for autonomous mobile robots that includes path planning, localization, and control. A unique vision-based pose and velocity estimation scheme utilizing both the continuous and discrete forms of the Euclidean homography matrix is fused with inertial and optical encoder measurements to estimate the position, orientation, and velocity of the robot and ensure accurate localization and control signals. A depth estimation system is integrated in order to overcome the loss of scale inherent in vision-based estimation. A path-following control system is introduced that is capable of guiding the robot along a designated curve. Stability analysis is provided for the control system, and experimental results are presented showing that the combined localization and control system performs with high accuracy.
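
    A sketch of the discrete-homography part of such a scheme using OpenCV: estimate the homography between two views of a planar scene and decompose it into candidate rotations and plane normals. The intrinsic matrix is hypothetical, and the translation is recovered only up to the unknown plane depth, which is why a separate depth estimate is needed to fix the scale, as noted above.

      import cv2
      import numpy as np

      # Hypothetical intrinsic matrix of the robot's camera.
      K = np.array([[520.0, 0.0, 320.0],
                    [0.0, 520.0, 240.0],
                    [0.0,   0.0,   1.0]])

      def pose_candidates_from_homography(pts_prev, pts_curr):
          """Estimate the Euclidean homography between two views of a planar scene
          and decompose it into candidate rotations and (scaled) translations."""
          H, _ = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)
          num, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
          return rotations, translations, normals   # up to 4 physically valid solutions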

  2. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  3. Synthetic vision systems: operational considerations simulation experiment

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-04-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  4. Rehabilitation regimes based upon psychophysical studies of prosthetic vision

    NASA Astrophysics Data System (ADS)

    Chen, S. C.; Suaning, G. J.; Morley, J. W.; Lovell, N. H.

    2009-06-01

    Human trials of prototype visual prostheses have successfully elicited visual percepts (phosphenes) in the visual field of implant recipients blinded through retinitis pigmentosa and age-related macular degeneration. Researchers are progressing rapidly towards a device that utilizes individual phosphenes as the elementary building blocks to compose a visual scene. This form of prosthetic vision is expected, in the near term, to have low resolution, large inter-phosphene gaps, distorted spatial distribution of phosphenes, restricted field of view, an eccentrically located phosphene field and limited number of expressible luminance levels. In order to fully realize the potential of these devices, there needs to be a training and rehabilitation program which aims to assist the prosthesis recipients to understand what they are seeing, and also to adapt their viewing habits to optimize the performance of the device. Based on the literature of psychophysical studies in simulated and real prosthetic vision, this paper proposes a comprehensive, theoretical training regime for a prosthesis recipient: visual search, visual acuity, reading, face/object recognition, hand-eye coordination and navigation. The aim of these tasks is to train the recipients to conduct visual scanning, eccentric viewing and reading, discerning low-contrast visual information, and coordinating bodily actions for visual-guided tasks under prosthetic vision. These skills have been identified as playing an important role in making prosthetic vision functional for the daily activities of their recipients.

  5. Integrated cockpit design for the Army helicopter improvement program

    NASA Technical Reports Server (NTRS)

    Drennen, T.; Bowen, B.

    1984-01-01

    The main Army Helicopter Improvement Program (AHIP) mission is to navigate precisely, locate targets accurately, communicate their positions to other battlefield elements, and designate them for laser-guided weapons. The onboard navigation and mast-mounted sight (MMS) avionics enable accurate tracking of current aircraft position and subsequent target location. The AHIP crewstation development was based on extensive mission/task analysis, function allocation, total system design, and test and verification. The avionics requirements to meet the mission were limited by the existing aircraft structural and performance characteristics and the resultant space, weight, and power restrictions. These limitations and the night operations requirement led to the use of night vision goggles. The combination of these requirements and limitations dictated an integrated control/display approach using multifunction displays and controls.

  6. Capturing the Sun: A Roadmap for Navigating Data-Access Challenges and Auto-Populating Solar Home Sales Listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stukel, Laura; Hoen, Ben; Adomatis, Sandra

    Capturing the Sun: A Roadmap for Navigating Data-Access Challenges and Auto-Populating Solar Home Sales Listings supports a vision of solar photovoltaic (PV) advocates and real estate advocates evolving together to make information about solar homes more accessible to home buyers and sellers and to simplify the process when these homes are resold. The Roadmap is based on a concept in the real estate industry known as automatic population of fields. Auto-population (also called auto-pop in the industry) is the technology that allows data aggregated by an outside industry to be matched automatically with home sale listings in a multiple listing service (MLS).

  7. Neurally and Ocularly Informed Graph-Based Models for Searching 3D Environments

    DTIC Science & Technology

    2014-06-03

    Available excerpt: abbreviations include hBCI (hybrid brain–computer interface), TAG (transductive annotation by graph), CV (computer vision), and TSP (traveling salesman problem). The system identifies locations in the environment that are most likely to contain objects the subject would like to visit, and route planning is performed with a traveling salesman problem (TSP) solver; cited work includes predicting fixations in a visual search task using fixation-related potentials and Croes (1958) on a method for solving traveling-salesman problems.
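
    The route-planning element mentioned in the excerpt can be illustrated with the simplest TSP heuristic, a greedy nearest-neighbour tour over candidate locations; the waypoints are hypothetical and the actual solver used in the study is not specified here.

      import numpy as np

      def nearest_neighbour_route(waypoints, start_index=0):
          """Greedy TSP heuristic: repeatedly visit the closest unvisited waypoint.
          waypoints is an N x 2 (or N x 3) array of candidate viewing locations."""
          remaining = list(range(len(waypoints)))
          route = [remaining.pop(start_index)]
          while remaining:
              last = waypoints[route[-1]]
              dists = [np.linalg.norm(waypoints[i] - last) for i in remaining]
              route.append(remaining.pop(int(np.argmin(dists))))
          return route

      # Hypothetical locations flagged by the hBCI/computer-vision stage.
      points = np.array([[0, 0], [5, 2], [1, 4], [6, 6], [2, 1]], dtype=float)
      print(nearest_neighbour_route(points))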

  8. Commercial Flight Crew Decision-Making during Low-Visibility Approach Operations Using Fused Synthetic/Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III

    2007-01-01

    NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-based piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions were neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, unto itself, provide an improvement in runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations but having to forcibly transition from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.

  9. Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor

    PubMed Central

    Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung

    2017-01-01

    Unmanned aerial vehicles (UAVs), which are commonly known as drones, have proved to be useful not only on battlefields where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers during the navigation and control loop, which allows for smart GPS features of drone navigation. However, there are problems if the drones operate in heterogeneous areas with no GPS signal, so it is important to perform research into the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments. PMID:28867775

  10. Design and control of an embedded vision guided robotic fish with multiple control surfaces.

    PubMed

    Yu, Junzhi; Wang, Kai; Tan, Min; Zhang, Jianwei

    2014-01-01

    This paper focuses on the development and control issues of a self-propelled robotic fish with multiple artificial control surfaces and an embedded vision system. By virtue of the hybrid propulsion capability in the body plus the caudal fin and the complementary maneuverability in accessory fins, a synthesized propulsion scheme including a caudal fin, a pair of pectoral fins, and a pelvic fin is proposed. To achieve flexible yet stable motions in aquatic environments, a central pattern generator- (CPG-) based control method is employed. Meanwhile, a monocular underwater vision serves as sensory feedback that modifies the control parameters. The integration of the CPG-based motion control and the visual processing in an embedded microcontroller allows the robotic fish to navigate online. Aquatic tests demonstrate the efficacy of the proposed mechatronic design and swimming control methods. Particularly, a pelvic fin actuated sideward swimming gait was first implemented. It is also found that the speeds and maneuverability of the robotic fish with coordinated control surfaces were largely superior to that of the swimming robot propelled by a single control surface.
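
    A CPG of the kind referred to above is often built from coupled limit-cycle oscillators; the sketch below integrates a small chain of Hopf oscillators whose outputs could drive individual fins. The oscillator count, gains and coupling are placeholders, not the robotic fish's tuned parameters.

      import numpy as np

      def hopf_cpg(steps, dt=0.01, mu=1.0, omega=2 * np.pi * 1.0,
                   coupling=0.5, n_osc=3):
          """Integrate a chain of coupled Hopf oscillators; each output can drive
          one fin (caudal, pectoral pair, pelvic). Parameters are illustrative."""
          x = 0.1 * np.ones(n_osc)
          y = np.zeros(n_osc)
          out = np.zeros((steps, n_osc))
          for k in range(steps):
              r2 = x ** 2 + y ** 2
              dx = (mu - r2) * x - omega * y
              dy = (mu - r2) * y + omega * x
              dy[1:] += coupling * y[:-1]      # nearest-neighbour phase coupling
              x += dt * dx
              y += dt * dy
              out[k] = x                       # oscillator outputs -> fin commands
          return out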

  11. Design and Control of an Embedded Vision Guided Robotic Fish with Multiple Control Surfaces

    PubMed Central

    Wang, Kai; Tan, Min; Zhang, Jianwei

    2014-01-01

    This paper focuses on the development and control issues of a self-propelled robotic fish with multiple artificial control surfaces and an embedded vision system. By virtue of the hybrid propulsion capability in the body plus the caudal fin and the complementary maneuverability in accessory fins, a synthesized propulsion scheme including a caudal fin, a pair of pectoral fins, and a pelvic fin is proposed. To achieve flexible yet stable motions in aquatic environments, a central pattern generator- (CPG-) based control method is employed. Meanwhile, a monocular underwater vision serves as sensory feedback that modifies the control parameters. The integration of the CPG-based motion control and the visual processing in an embedded microcontroller allows the robotic fish to navigate online. Aquatic tests demonstrate the efficacy of the proposed mechatronic design and swimming control methods. Particularly, a pelvic fin actuated sideward swimming gait was first implemented. It is also found that the speeds and maneuverability of the robotic fish with coordinated control surfaces were largely superior to that of the swimming robot propelled by a single control surface. PMID:24688413

  12. [Birds' sense of direction].

    PubMed

    Hohtola, Esa

    2016-01-01

    Birds utilize several distinct sensory systems in a flexible manner in their navigation. When navigating with the help of landmarks, location of the sun and stars, or polarization image of the dome of the sky, they resort to vision. The significance of olfaction in long-range navigation has been under debate, even though its significance in local orientation is well documented. The hearing in birds extends to the infrasound region. It has been assumed that they are able to hear the infrasounds generated in the mountains and seaside and navigate by using them. Of the senses of birds, the most exotic one is the ability to sense magnetic fields of the earth.

  13. Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles

    PubMed Central

    Zhang, Linhuan; Ahamed, Tofael; Zhang, Yan; Gao, Pengbo; Takigawa, Tomohiro

    2016-01-01

    The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle. The follower vehicle automatically tracks the leader vehicle. With such a system, a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and the follower vehicle, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced by using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle. A proportional–integral–derivative (PID) controller was introduced to maintain the required distance between the leader and the follower vehicle. Field experiments were conducted to evaluate the sensing and tracking performances of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. In the case of linear trajectory tracking, the RMS errors were 6.5 cm, 8.9 cm and 16.4 cm for straight, turning and zigzag paths, respectively. Again, for parallel trajectory tracking, the root mean square (RMS) errors were found to be 7.1 cm, 14.6 cm and 14.0 cm for straight, turning and zigzag paths, respectively. The navigation performances indicated that the autonomous follower vehicle was able to follow the leader vehicle, and the tracking accuracy was found to be satisfactory. Therefore, the developed leader-follower system can be implemented for the harvesting of grains, using a combine as the leader and an unloader as the autonomous follower vehicle. PMID:27110793
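
    The distance-keeping loop can be sketched with a textbook PID controller acting on the gap measured by the monocular marker-based sensing system; the gains, sample time and speed offset below are placeholders rather than the values tuned for the field experiments.

      class PID:
          """Textbook PID controller; gains and limits here are placeholders, not
          the values tuned for the leader-follower vehicles in the study."""
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral = 0.0
              self.prev_error = 0.0

          def update(self, error):
              self.integral += error * self.dt
              derivative = (error - self.prev_error) / self.dt
              self.prev_error = error
              return self.kp * error + self.ki * self.integral + self.kd * derivative

      # Keep the follower a fixed distance behind the leader, as estimated by the
      # monocular marker-based sensing system.
      DESIRED_GAP_M = 3.0
      pid = PID(kp=0.8, ki=0.05, kd=0.2, dt=0.1)
      measured_gap_m = 3.6                       # hypothetical vision measurement
      speed_command = 0.3 + pid.update(measured_gap_m - DESIRED_GAP_M)  # m/s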

  14. Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles.

    PubMed

    Zhang, Linhuan; Ahamed, Tofael; Zhang, Yan; Gao, Pengbo; Takigawa, Tomohiro

    2016-04-22

    The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle. The follower vehicle automatically tracks the leader vehicle. With such a system, a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and the follower vehicle, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced by using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle. A proportional-integral-derivative (PID) controller was introduced to maintain the required distance between the leader and the follower vehicle. Field experiments were conducted to evaluate the sensing and tracking performances of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. In the case of linear trajectory tracking, the RMS errors were 6.5 cm, 8.9 cm and 16.4 cm for straight, turning and zigzag paths, respectively. Again, for parallel trajectory tracking, the root mean square (RMS) errors were found to be 7.1 cm, 14.6 cm and 14.0 cm for straight, turning and zigzag paths, respectively. The navigation performances indicated that the autonomous follower vehicle was able to follow the leader vehicle, and the tracking accuracy was found to be satisfactory. Therefore, the developed leader-follower system can be implemented for the harvesting of grains, using a combine as the leader and an unloader as the autonomous follower vehicle.

  15. Vision-Based SLAM System for Unmanned Aerial Vehicles

    PubMed Central

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-01-01

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy. PMID:26999131
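
    The estimator described above follows the usual EKF predict/update pattern; the skeleton below shows that pattern only, with the process model f, measurement model h and their Jacobians left as stand-ins for the paper's vehicle, camera and GPS models.

      import numpy as np

      class SimpleEKF:
          """Generic extended Kalman filter skeleton; f/h and their Jacobians would
          encode the vehicle dynamics and the camera/GPS measurement models."""
          def __init__(self, x0, P0):
              self.x, self.P = x0, P0

          def predict(self, f, F, Q):
              self.x = f(self.x)               # propagate state (position, attitude, ...)
              self.P = F @ self.P @ F.T + Q

          def update(self, z, h, H, R):
              y = z - h(self.x)                # innovation from GPS or camera measurement
              S = H @ self.P @ H.T + R
              K = self.P @ H.T @ np.linalg.inv(S)
              self.x = self.x + K @ y
              self.P = (np.eye(len(self.x)) - K @ H) @ self.P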

  16. Survey of computer vision technology for UAV navigation

    NASA Astrophysics Data System (ADS)

    Xie, Bo; Fan, Xiang; Li, Sijian

    2017-11-01

    Navigation based on computer vision technology, which has the characteristics of strong independence and high precision and is not susceptible to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned vehicles, deep space probes and underwater robots, which has further stimulated research on integrated navigation algorithms based on computer vision. In China, with the development of many types of UAVs and with two phases of the lunar exploration project completed and the third phase under way, there has been significant progress in the study of visual navigation. The paper reviews the development of computer-vision-based navigation in the field of UAV navigation research and concludes that visual navigation is mainly applied to three aspects. (1) Acquisition of UAV navigation parameters: the parameters, including UAV attitude, position and velocity information, can be obtained from the relationship between the sensor images and the carrier's attitude, the relationship between instant matching images and reference images, and the relationship between the carrier's velocity and the characteristics of sequential images. (2) Autonomous obstacle avoidance: there are many ways to achieve obstacle avoidance in UAV navigation; the methods based on computer vision, including feature matching, template matching and image-frame methods, are mainly introduced. (3) Target tracking and positioning: using the obtained images, UAV position is calculated with the optical flow method, the MeanShift and CamShift algorithms, Kalman filtering and particle filter algorithms. The paper also describes three kinds of mainstream visual systems. (1) High-speed visual systems, which use a parallel structure so that image detection and processing are carried out at high speed; these are applied to rapid response systems. (2) Distributed-network visual systems, in which several discrete image acquisition sensors at different locations transmit image data to a node processor to increase the sampling rate. (3) Visual systems combined with observers, which combine image sensors with external observers to make up for the limitations of the visual equipment. To some degree, these systems overcome the shortcomings of early visual systems, including low frequency, low processing efficiency and strong noise. Finally, the difficulties of computer-vision-based navigation in practical applications are briefly discussed: (1) because of the huge workload of image operations, real-time performance is poor; (2) because of the large environmental impact, anti-interference ability is poor; and (3) because such systems are designed to work in particular environments, their adaptability is poor.
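
    As one concrete example of the optical-flow-based parameter estimation mentioned in the survey, the sketch below computes dense Farneback flow between consecutive frames and reduces it to a mean image-motion vector; the parameter values are common OpenCV defaults, not tuned settings from any of the surveyed systems.

      import cv2

      def mean_flow(prev_gray, curr_gray):
          """Dense Farneback optical flow between consecutive grayscale frames; the
          mean flow vector gives a crude estimate of apparent image motion usable
          as an ego-motion or target-tracking cue."""
          # Arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
          flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)
          return flow[..., 0].mean(), flow[..., 1].mean()   # mean (dx, dy) in pixels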

  17. Data Analysis Techniques for a Lunar Surface Navigation System Testbed

    NASA Technical Reports Server (NTRS)

    Chelmins, David; Sands, O. Scott; Swank, Aaron

    2011-01-01

    NASA is interested in finding new methods of surface navigation to allow astronauts to navigate on the lunar surface. In support of the Vision for Space Exploration, the NASA Glenn Research Center developed the Lunar Extra-Vehicular Activity Crewmember Location Determination System and performed testing at the Desert Research and Technology Studies event in 2009. A significant amount of sensor data was recorded during nine tests performed with six test subjects. This paper provides the procedure, formulas, and techniques for data analysis, as well as commentary on applications.

  18. Searching Lost People with Uavs: the System and Results of the Close-Search Project

    NASA Astrophysics Data System (ADS)

    Molina, P.; Colomina, I.; Vitoria, T.; Silva, P. F.; Skaloud, J.; Kornus, W.; Prades, R.; Aguilera, C.

    2012-07-01

    This paper will introduce the goals, concept and results of the project named CLOSE-SEARCH, which stands for 'Accurate and safe EGNOS-SoL Navigation for UAV-based low-cost Search-And-Rescue (SAR) operations'. The main goal is to integrate a medium-size, helicopter-type Unmanned Aerial Vehicle (UAV), a thermal imaging sensor and an EGNOS-based multi-sensor navigation system, including an Autonomous Integrity Monitoring (AIM) capability, to support search operations in difficult-to-access areas and/or night operations. The focus of the paper is three-fold. Firstly, the operational and technical challenges of the proposed approach are discussed, such as ultra-safe multi-sensor navigation system, the use of combined thermal and optical vision (infrared plus visible) for person recognition and Beyond-Line-Of-Sight communications among others. Secondly, the implementation of the integrity concept for UAV platforms is discussed herein through the AIM approach. Based on the potential of the geodetic quality analysis and on the use of the European EGNOS system as a navigation performance starting point, AIM approaches integrity from the precision standpoint; that is, the derivation of Horizontal and Vertical Protection Levels (HPLs, VPLs) from a realistic precision estimation of the position parameters is performed and compared to predefined Alert Limits (ALs). Finally, some results from the project test campaigns are described to report on particular project achievements. Together with actual Search-and-Rescue teams, the system was operated in realistic, user-chosen test scenarios. In this context, and specially focusing on the EGNOS-based UAV navigation, the AIM capability and also the RGB/thermal imaging subsystem, a summary of the results is presented.
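
    The protection-level idea can be sketched as scaling the estimated position uncertainty and comparing it to the alert limits; the covariance-to-HPL/VPL mapping below uses placeholder missed-detection multipliers and is only a schematic of the AIM concept, not the project's certified computation.

      import numpy as np

      def protection_levels(P_enu, k_h=6.0, k_v=5.33):
          """Derive horizontal/vertical protection levels from a 3x3 position
          covariance in an East-North-Up frame; k_h and k_v are placeholders."""
          horiz = P_enu[:2, :2]
          sigma_major = np.sqrt(max(np.linalg.eigvalsh(horiz)))  # worst-case horizontal axis
          hpl = k_h * sigma_major
          vpl = k_v * np.sqrt(P_enu[2, 2])
          return hpl, vpl

      def integrity_available(hpl, vpl, hal, val):
          # Navigation is declared usable only while both protection levels stay
          # below the mission's predefined alert limits.
          return hpl < hal and vpl < val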

  19. Vision Assisted Navigation for Miniature Unmanned Aerial Vehicles (MAVs)

    DTIC Science & Technology

    2009-11-01

    Available excerpt: the MAV was commanded to orbit a target of known location, and the error in target geolocation is shown for 200 frames with and without filtering. The performance of the filter was determined by using the estimated poses to solve a geolocation problem for an MAV flying at an altitude of 70 meters; the vision-assisted estimates improved geolocation and significantly reduced the short-term variance relative to estimates based on the GPS/IMU alone.

  20. Computer-assisted surgery: virtual- and augmented-reality displays for navigation during urological interventions.

    PubMed

    van Oosterom, Matthias N; van der Poel, Henk G; Navab, Nassir; van de Velde, Cornelis J H; van Leeuwen, Fijs W B

    2018-03-01

    To provide an overview of the developments made for virtual- and augmented-reality navigation procedures in urological interventions/surgery. Navigation efforts have demonstrated potential in the field of urology by supporting guidance for various disorders. The navigation approaches differ between the individual indications, but seem interchangeable to a certain extent. An increasing number of pre- and intra-operative imaging modalities have been used to create detailed surgical roadmaps, namely: (cone-beam) computed tomography, MRI, ultrasound, and single-photon emission computed tomography. Registration of these surgical roadmaps with the real-life surgical view has occurred in different forms (e.g. electromagnetic, mechanical, vision, or near-infrared optical-based), whereby the combination of approaches was suggested to provide superior outcomes. Soft-tissue deformations demand the use of confirmatory interventional (imaging) modalities. This has resulted in the introduction of new intraoperative modalities such as drop-in US, transurethral US, (drop-in) gamma probes and fluorescence cameras. These noninvasive modalities provide an alternative to invasive technologies that expose patients to X-ray doses. Whereas some reports have indicated that navigation setups provide equal or better results than conventional approaches, most trials have been performed in relatively small patient groups and clear follow-up data are missing. The reported computer-assisted surgery research concepts provide a glimpse into the future application of navigation technologies in the field of urology.

  1. Autonomous Collision-Free Navigation of Microvehicles in Complex and Dynamically Changing Environments.

    PubMed

    Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph

    2017-09-26

    Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems for operating micro/nanorobots in complex and dynamically changing environments, which is a highly demanding feature, is still an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning to generate optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system for micro/nanoscale vehicles, relying on vision-based closed-loop control and path planning, is highly promising for their autonomous operation in the complex dynamic settings and unpredictable scenarios expected in a variety of realistic nanoscale applications.
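
    The abstract does not specify how its "artificial intelligence planner" generates collision-free routes, so the sketch below should be read only as a generic stand-in: a minimal A* search on an occupancy grid, re-runnable whenever the camera reports a changed obstacle map. The grid, start and goal values are hypothetical.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) tuples."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    tie = itertools.count()                                   # tie-breaker so heap never compares nodes
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, closed = {}, set()
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in closed:
            continue
        came_from[node] = parent
        closed.add(node)
        if node == goal:                                      # walk parents back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in closed:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), next(tie), g + 1, (nr, nc), node))
    return None                                               # no collision-free route exists

occupancy = [[0, 0, 0, 1],
             [1, 1, 0, 1],
             [0, 0, 0, 0]]
print(astar(occupancy, start=(0, 0), goal=(2, 3)))
```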

  2. Towards an assistive peripheral visual prosthesis for long-term treatment of retinitis pigmentosa: evaluating mobility performance in immersive simulations

    NASA Astrophysics Data System (ADS)

    Zapf, Marc Patrick H.; Boon, Mei-Ying; Matteucci, Paul B.; Lovell, Nigel H.; Suaning, Gregg J.

    2015-06-01

    Objective. The prospective efficacy of a future peripheral retinal prosthesis complementing residual vision to raise mobility performance in non-end stage retinitis pigmentosa (RP) was evaluated using simulated prosthetic vision (SPV). Approach. Normally sighted volunteers were fitted with a wide-angle head-mounted display and carried out mobility tasks in photorealistic virtual pedestrian scenarios. Circumvention of low-lying obstacles, path following, and navigating around static and moving pedestrians were performed either with central simulated residual vision of 10° alone or enhanced by assistive SPV in the lower and lateral peripheral visual field (VF). Three layouts of assistive vision corresponding to hypothetical electrode array layouts were compared, emphasizing higher visual acuity, a wider visual angle, or eccentricity-dependent acuity across an intermediate angle. Movement speed, task time, distance walked and collisions with the environment were analysed as performance measures. Main results. Circumvention of low-lying obstacles was improved with all tested configurations of assistive SPV. Higher-acuity assistive vision allowed for greatest improvement in walking speeds—14% above that of plain residual vision, while only wide-angle and eccentricity-dependent vision significantly reduced the number of collisions—both by 21%. Navigating around pedestrians, there were significant reductions in collisions with static pedestrians by 33% and task time by 7.7% with the higher-acuity layout. Following a path, higher-acuity assistive vision increased walking speed by 9%, and decreased collisions with stationary cars by 18%. Significance. The ability of assistive peripheral prosthetic vision to improve mobility performance in persons with constricted VFs has been demonstrated. In a prospective peripheral visual prosthesis, electrode array designs need to be carefully tailored to the scope of tasks in which a device aims to assist. We posit that maximum benefit might come from application alongside existing visual aids, to further raise life quality of persons living through the prolonged early stages of RP.

  3. Proceedings of the Sixth Integrated Communications, Navigation and Surveillance (ICNS) Conference & Workshop 2006

    NASA Technical Reports Server (NTRS)

    Ponchak, Denise (Compiler)

    2006-01-01

    The Integrated Communications, Navigation and Surveillance (ICNS) Technologies Conference and Workshop provides a forum for government, industry, and academic communities performing research and technology development for advanced digital communications, navigation, and surveillance security systems and associated applications supporting the national and global air transportation systems. The event's goals are to understand current efforts and recent results in near- and far-term research and technology demonstration; identify integrated digital communications, navigation and surveillance research requirements necessary for a safe, high-capacity, advanced air transportation system; foster collaboration and coordination among all stakeholders; and discuss critical issues and develop recommendations to achieve the future integrated CNS vision for the national and global air transportation system.

  4. Proceedings of the Fourth Integrated Communications, Navigation, and Surveillance (ICNS) Conference and Workshop

    NASA Technical Reports Server (NTRS)

    Fujikawa, Gene (Compiler)

    2004-01-01

    The Integrated Communications, Navigation and Surveillance (ICNS) Technologies Conference and Workshop provides a forum for Government, industry, and academic communities performing research and technology development for advanced digital communications, navigation, and surveillance security systems and associated applications supporting the national and global air transportation systems. The event's goals are to understand current efforts and recent results in near- and far-term research and technology demonstration; identify integrated digital communications, navigation and surveillance research requirements necessary for a safe, high-capacity, advanced air transportation system; foster collaboration and coordination among all stakeholders; and discuss critical issues and develop recommendations to achieve the future integrated CNS vision for the national and global air transportation system.

  5. Advanced integrated enhanced vision systems

    NASA Astrophysics Data System (ADS)

    Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha

    2003-09-01

    In anticipation of its ultimate role in transport, business and rotary wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.

  6. Experiments in teleoperator and autonomous control of space robotic vehicles

    NASA Technical Reports Server (NTRS)

    Alexander, Harold L.

    1991-01-01

    A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.

  7. Cognitive mapping based on synthetic vision?

    NASA Astrophysics Data System (ADS)

    Helmetag, Arnd; Halbig, Christian; Kubbat, Wolfgang; Schmidt, Rainer

    1999-07-01

    The analysis of accidents focused our work on the avoidance of 'Controlled Flight Into Terrain' caused by insufficient situation awareness. Analysis of safety concepts led us to the design of the proposed synthetic vision system that will be described. Since most information on these 3D displays is shown in a graphical way, it can intuitively be understood by the pilot. What new possibilities does an SVS offer for enhancing situation awareness? First, detection of a ground collision hazard is possible by monitoring a perspective Primary Flight Display. From a psychological point of view, this is based on the perception of expanding objects in the visual flow field. Supported by a Navigation Display, a local conflict resolution can be mentally worked out very quickly. Secondly, it is possible to follow a 3D flight path visualized as a 'Tunnel in the sky.' This can further be improved by using a flight path prediction. These are the prerequisites for safe and adequate movement in any kind of spatial environment. However, situation awareness also requires the ability of navigation and spatial problem solving. Both abilities are based on higher cognitive functions, in a real as well as in a synthetic environment. In this paper the current training concept will be analyzed. Advantages resulting from the integration of an SVS concerning pilot training will be discussed and necessary requirements in terrain depiction will be pinpointed. Finally, a modified Computer Based Training for familiarization with Salzburg Airport for an SVS-equipped aircraft will be presented. It is developed by Darmstadt University of Technology in co-operation with Lufthansa Flight Training.

  8. Fusion of laser and image sensory data for 3-D modeling of the free navigation space

    NASA Technical Reports Server (NTRS)

    Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.

    1994-01-01

    A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data are generated by a vision camera and a laser scanner. The problem of the different resolutions of these sensory data was solved by reducing the image resolution, fusing the different data, and using a fuzzy image segmentation technique.

  9. A lightweight, inexpensive robotic system for insect vision.

    PubMed

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of vision-based navigation for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Effects of a Velocity-Vector Based Command Augmentation System and Synthetic Vision System Terrain Portrayal and Guidance Symbology Concepts on Single-Pilot Performance

    NASA Technical Reports Server (NTRS)

    Liu, Dahai; Goodrich, Kenneth H.; Peak, Bob

    2010-01-01

    This study investigated the effects of synthetic vision system (SVS) concepts and advanced flight controls on the performance of pilots flying a light, single-engine general aviation airplane. We evaluated the effects and interactions of two levels of terrain portrayal, guidance symbology, and flight control response type on pilot performance during the conduct of a relatively complex instrument approach procedure. The terrain and guidance presentations were evaluated as elements of an integrated primary flight display system. The approach procedure used in the study included a steeply descending, curved segment as might be encountered in emerging, required navigation performance (RNP) based procedures. Pilot performance measures consisted of flight technical performance, perceived workload, perceived situational awareness and subjective preference. The results revealed that an elevation based generic terrain portrayal significantly improved perceived situation awareness without adversely affecting flight technical performance or workload. Other factors (pilot instrument rating, control response type, and guidance symbology) were not found to significantly affect the performance measures.

  11. Bioelectronic retinal prosthesis

    NASA Astrophysics Data System (ADS)

    Weiland, James D.

    2016-05-01

    Retinal prostheses have been translated to clinical use over the past two decades. Currently, two devices have regulatory approval for the treatment of retinitis pigmentosa and one device is in clinical trials for treatment of age-related macular degeneration. These devices provide partial sight restoration and patients use this improved vision in their everyday lives to navigate and to detect large objects. However, significant vision restoration will require both better technology and improved understanding of the interaction between electrical stimulation and the retina. In particular, current retinal prostheses do not provide peripheral vision due to technical and surgical limitations, thus limiting the effectiveness of the treatment. This paper reviews recent results from human implant patients and presents technical approaches for peripheral vision.

  12. Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick

    2012-01-01

    Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.

  13. Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision.

    PubMed

    Hagelsteen, Kristine; Langegård, Anders; Lantz, Adam; Ekelund, Mikael; Anderberg, Magnus; Bergenfelz, Anders

    2017-10-01

    The study investigated whether 3D vision and haptic feedback in combination in a virtual reality environment leads to more efficient learning of laparoscopic skills in novices. Twenty novices were allocated to two groups. All completed a training course in the LapSim ® virtual reality trainer consisting of four tasks: 'instrument navigation', 'grasping', 'fine dissection' and 'suturing'. The study group performed with haptic feedback and 3D vision and the control group without. Before and after the LapSim ® course, the participants' metrics were recorded when tying a laparoscopic knot in the 2D video box trainer Simball ® Box. The study group completed the training course in 146 (100-291) minutes compared to 215 (175-489) minutes in the control group (p = .002). The number of attempts to reach proficiency was significantly lower. The study group had significantly faster learning of skills in three out of four individual tasks; instrument navigation, grasping and suturing. Using the Simball ® Box, no difference in laparoscopic knot tying after the LapSim ® course was noted when comparing the groups. Laparoscopic training in virtual reality with 3D vision and haptic feedback made training more time efficient and did not negatively affect later video box-performance in 2D. [Formula: see text].

  14. Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks

    NASA Astrophysics Data System (ADS)

    Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min

    2015-10-01

    Vehicle positioning has been subjected to extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the position of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique by using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information such as speed, lane change, driver's condition, etc., through optical wireless links of neighboring vehicles. Thus, the target vehicle position that is too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation for CVP. The proposed neural-network-based technique can estimate target vehicle position from only two image points of target vehicles using stereo vision. For this, we use rear LEDs on target vehicles as image points. We show from simulation results that our neural-network-based method achieves better accuracy than that of the computer-vision method.
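
    The paper's back-propagation network is not reproduced here; as a hedged point of reference, the sketch below shows only the classical stereo-triangulation relation between two image points of the same rear LED and the target range, which is the geometric mapping such a learned regressor effectively approximates. The focal length, baseline and pixel coordinates are hypothetical values.

```python
def triangulate_depth(u_left, u_right, focal_px, baseline_m):
    """Depth from the horizontal disparity of the same LED seen by two cameras."""
    disparity = u_left - u_right                  # pixels
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity      # metres

# Example: 800 px focal length, 0.5 m camera baseline, 20 px disparity
print(triangulate_depth(420.0, 400.0, focal_px=800.0, baseline_m=0.5))   # -> 20.0 m
```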

  15. Cognitive load of navigating without vision when guided by virtual sound versus spatial language.

    PubMed

    Klatzky, Roberta L; Marston, James R; Giudice, Nicholas A; Golledge, Reginald G; Loomis, Jack M

    2006-12-01

    A vibrotactile N-back task was used to generate cognitive load while participants were guided along virtual paths without vision. As participants stepped in place, they moved along a virtual path of linear segments. Information was provided en route about the direction of the next turning point, by spatial language ("left," "right," or "straight") or virtual sound (i.e., the perceived azimuth of the sound indicated the target direction). The authors hypothesized that virtual sound, being processed at direct perceptual levels, would have lower load than even simple language commands, which require cognitive mediation. As predicted, whereas the guidance modes did not differ significantly in the no-load condition, participants showed shorter distance traveled and less time to complete a path when performing the N-back task while navigating with virtual sound as guidance. Virtual sound also produced better N-back performance than spatial language. By indicating the superiority of virtual sound for guidance when cognitive load is present, as is characteristic of everyday navigation, these results have implications for guidance systems for the visually impaired and others.

  16. Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation

    PubMed Central

    Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin

    2014-01-01

    Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
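
    A minimal sketch of the scale-correction idea described above (assumed form, not the authors' code): a single laser distance measurement fixes the unknown scale of a monocular visual-odometry translation estimate. All numbers are illustrative.

```python
import numpy as np

def rescale_translation(t_unit, range_vo, range_laser):
    """
    t_unit      : unit-norm camera translation from monocular VO (scale-free)
    range_vo    : distance to the laser-pointer feature in the VO's arbitrary units
    range_laser : the same distance measured by the laser distance meter (metres)
    """
    scale = range_laser / range_vo
    return scale * np.asarray(t_unit, dtype=float)

t_metric = rescale_translation([0.0, 0.0, 1.0], range_vo=2.5, range_laser=7.5)
print(t_metric)   # [0. 0. 3.] -- translation now expressed in metres
```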

  17. ALHAT COBALT: CoOperative Blending of Autonomous Landing Technology

    NASA Technical Reports Server (NTRS)

    Carson, John M.

    2015-01-01

    The COBALT project is a flight demonstration of two NASA ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) capabilities that are key for future robotic or human landing GN&C (Guidance, Navigation and Control) systems. The COBALT payload integrates the Navigation Doppler Lidar (NDL) for ultraprecise velocity and range measurements with the Lander Vision System (LVS) for Terrain Relative Navigation (TRN) position estimates. Terrestrial flight tests of the COBALT payload in an open-loop and closed-loop GN&C configuration will be conducted onboard a commercial, rocket-propulsive Vertical Test Bed (VTB) at a test range in Mojave, CA.

  18. The Effects of Restricted Peripheral Field-of-View on Spatial Learning while Navigating.

    PubMed

    Barhorst-Cates, Erica M; Rand, Kristina M; Creem-Regehr, Sarah H

    2016-01-01

    Recent work with simulated reductions in visual acuity and contrast sensitivity has found decrements in survey spatial learning as well as increased attentional demands when navigating, compared to performance with normal vision. Given these findings, and previous work showing that peripheral field loss has been associated with impaired mobility and spatial memory for room-sized spaces, we investigated the role of peripheral vision during navigation using a large-scale spatial learning paradigm. First, we aimed to establish the magnitude of spatial memory errors at different levels of field restriction. Second, we tested the hypothesis that navigation under these different levels of restriction would use additional attentional resources. Normally sighted participants walked on novel real-world paths wearing goggles that restricted the field-of-view (FOV) to severe (15°, 10°, 4°, or 0°) or mild angles (60°) and then pointed to remembered target locations using a verbal reporting measure. They completed a concurrent auditory reaction time task throughout each path to measure cognitive load. Only the most severe restrictions (4° and blindfolded) showed impairment in pointing error compared to the mild restriction (within-subjects). The 10° and 4° conditions also showed an increase in reaction time on the secondary attention task, suggesting that navigating with these extreme peripheral field restrictions demands the use of limited cognitive resources. This comparison of different levels of field restriction suggests that although peripheral field loss requires the actor to use more attentional resources while navigating starting at a less extreme level (10°), spatial memory is not negatively affected until the restriction is very severe (4°). These results have implications for understanding of the mechanisms underlying spatial learning during navigation and the approaches that may be taken to develop assistance for navigation with visual impairment.

  19. IPS - a vision aided navigation system

    NASA Astrophysics Data System (ADS)

    Börner, Anko; Baumbach, Dirk; Buder, Maximilian; Choinowski, Andre; Ernst, Ines; Funk, Eugen; Grießbach, Denis; Schischmanow, Adrian; Wohlfeil, Jürgen; Zuev, Sergey

    2017-04-01

    Ego localization is an important prerequisite for several scientific, commercial, and statutory tasks. Only by knowing one's own position, can guidance be provided, inspections be executed, and autonomous vehicles be operated. Localization becomes challenging if satellite-based navigation systems are not available, or data quality is not sufficient. To overcome this problem, a team of the German Aerospace Center (DLR) developed a multi-sensor system based on the human head and its navigation sensors - the eyes and the vestibular system. This system is called integrated positioning system (IPS) and contains a stereo camera and an inertial measurement unit for determining an ego pose in six degrees of freedom in a local coordinate system. IPS is able to operate in real time and can be applied for indoor and outdoor scenarios without any external reference or prior knowledge. In this paper, the system and its key hardware and software components are introduced. The main issues during the development of such complex multi-sensor measurement systems are identified and discussed, and the performance of this technology is demonstrated. The developer team started from scratch and transfers this technology into a commercial product right now. The paper finishes with an outlook.

  20. Mobile robot exploration and navigation of indoor spaces using sonar and vision

    NASA Technical Reports Server (NTRS)

    Kortenkamp, David; Huber, Marcus; Koss, Frank; Belding, William; Lee, Jaeho; Wu, Annie; Bidlack, Clint; Rodgers, Seth

    1994-01-01

    Integration of skills into an autonomous robot that performs a complex task is described. Time constraints prevented complete integration of all the described skills. The biggest problem was tuning the sensor-based region-finding algorithm to the environment involved. Since localization depended on matching regions found with the a priori map, the robot became lost very quickly. If the low level sensing of the world is not working, then high level reasoning or map making will be unsuccessful.

  1. Evaluating the Effects of Dimensionality in Advanced Avionic Display Concepts for Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Alexander, Amy L.; Prinzel, Lawrence J., III; Wickens, Christopher D.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.

    2007-01-01

    Synthetic vision systems provide an in-cockpit view of terrain and other hazards via a computer-generated display representation. Two experiments examined several display concepts for synthetic vision and evaluated how such displays modulate pilot performance. Experiment 1 (24 general aviation pilots) compared three navigational display (ND) concepts: 2D coplanar, 3D, and split-screen. Experiment 2 (12 commercial airline pilots) evaluated baseline 'blue sky/brown ground' or synthetic vision-enabled primary flight displays (PFDs) and three ND concepts: 2D coplanar with and without synthetic vision and a dynamic multi-mode rotatable exocentric format. In general, the results pointed to an overall advantage for a split-screen format, whether it be stand-alone (Experiment 1) or available via rotatable viewpoints (Experiment 2). Furthermore, Experiment 2 revealed benefits associated with utilizing synthetic vision in both the PFD and ND representations and the value of combined ego- and exocentric presentations.

  2. Panoramic stereo sphere vision

    NASA Astrophysics Data System (ADS)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV) which limits their usefulness for certain applications. While panorama vision is able to "see" in all directions of the observation space, scene depth information is missed because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system which is built from a special combined fish-eye lens module and is capable of producing 3D coordinate information from the whole global observation space while acquiring a blind-area-free 360°×360° panoramic image simultaneously, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, mathematical model and parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications which will benefit from PSSV.

  3. Open-Loop Performance of COBALT Precision Landing Payload on a Commercial Sub-Orbital Rocket

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina I.; Carson, John M., III; Amzajerdian, Farzin; Seubert, Carl R.; Lovelace, Ronney S.; McCarthy, Megan M.; Tse, Teming; Stelling, Richard; Collins, Steven M.

    2018-01-01

    An open-loop flight test campaign of the NASA COBALT (CoOperative Blending of Autonomous Landing Technologies) platform was conducted onboard the Masten Xodiac suborbital rocket testbed. The COBALT platform integrates NASA Guidance, Navigation and Control (GN&C) sensing technologies for autonomous, precise soft landing, including the Navigation Doppler Lidar (NDL) velocity and range sensor and the Lander Vision System (LVS) Terrain Relative Navigation (TRN) system. A specialized navigation filter running onboard COBALT fuses the NDL and LVS data in real time to produce a navigation solution that is independent of GPS and suitable for future, autonomous, planetary, landing systems. COBALT was a passive payload during the open loop tests. COBALT's sensors were actively taking data and processing it in real time, but the Xodiac rocket flew with its own GPS-navigation system as a risk reduction activity in the maturation of the technologies towards space flight. A future closed-loop test campaign is planned where the COBALT navigation solution will be used to fly its host vehicle.

  4. Detecting personnel around UGVs using stereo vision

    NASA Astrophysics Data System (ADS)

    Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.

    2008-04-01

    Detecting people around unmanned ground vehicles (UGVs) to facilitate safe operation of UGVs is one of the highest priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high resolution imagery may enable higher performance than range data alone because image appearance can complement shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.

  5. Testing and evaluation of a wearable augmented reality system for natural outdoor environments

    NASA Astrophysics Data System (ADS)

    Roberts, David; Menozzi, Alberico; Cook, James; Sherrill, Todd; Snarski, Stephen; Russler, Pat; Clipp, Brian; Karl, Robert; Wenger, Eric; Bennett, Matthew; Mauger, Jennifer; Church, William; Towles, Herman; MacCabe, Stephen; Webb, Jeffrey; Lupo, Jasper; Frahm, Jan-Michael; Dunn, Enrique; Leslie, Christopher; Welch, Greg

    2013-05-01

    This paper describes performance evaluation of a wearable augmented reality system for natural outdoor environments. Applied Research Associates (ARA), as prime integrator on the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program, is developing a soldier-worn system to provide intuitive `heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a novel pose estimation capability, a helmet-mounted see-through display, and a wearable processing unit to accurately overlay geo-registered iconography (e.g., navigation waypoints, sensor points of interest, blue forces, aircraft) on the soldier's view of reality. We achieve accurate pose estimation through fusion of inertial, magnetic, GPS, terrain data, and computer-vision inputs. We leverage a helmet-mounted camera and custom computer vision algorithms to provide terrain-based measurements of absolute orientation (i.e., orientation of the helmet with respect to the earth). These orientation measurements, which leverage mountainous terrain horizon geometry and mission planning landmarks, enable our system to operate robustly in the presence of external and body-worn magnetic disturbances. Current field testing activities across a variety of mountainous environments indicate that we can achieve high icon geo-registration accuracy (<10mrad) using these vision-based methods.

  6. Recursive Gradient Estimation Using Splines for Navigation of Autonomous Vehicles.

    DTIC Science & Technology

    1985-07-01

    C. N. Shen, U.S. Army Armament Research and Development Center, Large Caliber Weapon Systems Laboratory, final report, July 1985. Excerpt: ...missions which require autonomous vehicles. Essential to these robotic vehicles is an adequate and efficient computer vision system. A potentially more

  7. COBALT CoOperative Blending of Autonomous Landing Technology

    NASA Technical Reports Server (NTRS)

    Carson, John M. III; Restrepo, Carolina I.; Robertson, Edward A.; Seubert, Carl R.; Amzajerdian, Farzin

    2016-01-01

    COBALT is a terrestrial test platform for development and maturation of GN&C (Guidance, Navigation and Control) technologies for PL&HA (Precision Landing and Hazard Avoidance). The project is developing a third generation, Langley Navigation Doppler Lidar (NDL) for ultra-precise velocity and range measurements, which will be integrated and tested with the JPL Lander Vision System (LVS) for Terrain Relative Navigation (TRN) position estimates. These technologies together provide navigation that enables controlled precision landing. The COBALT hardware will be integrated in 2017 into the GN&C subsystem of the Xodiac rocket-propulsive Vertical Test Bed (VTB) developed by Masten Space Systems (MSS), and two terrestrial flight campaigns will be conducted: one open-loop (i.e., passive) and one closed-loop (i.e., active).

  8. A simple, inexpensive, and effective implementation of a vision-guided autonomous robot

    NASA Astrophysics Data System (ADS)

    Tippetts, Beau; Lillywhite, Kirt; Fowers, Spencer; Dennis, Aaron; Lee, Dah-Jye; Archibald, James

    2006-10-01

    This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. This implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. A used electric wheelchair served as the robot base; it was purchased from a local thrift store for $28. The base was modified to include Kegresse tracks using a friction drum system. This modification allowed the robot to perform better on a variety of terrains, resolving issues with last year's design. In order to control the wheelchair while retaining its robust motor controls, the joystick was simply removed and replaced with a printed circuit board that emulated joystick operation and was capable of receiving commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each of the algorithms used color segmentation methods to interpret data from a digital camera in order to identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.

  9. Guidance Of A Mobile Robot Using An Omnidirectional Vision Navigation System

    NASA Astrophysics Data System (ADS)

    Oh, Sung J.; Hall, Ernest L.

    1987-01-01

    Navigation and visual guidance are key topics in the design of a mobile robot. Omnidirectional vision using a very wide angle or fisheye lens provides a hemispherical view at a single instant that permits target location without mechanical scanning. The inherent image distortion with this view and the numerical errors accumulated from vision components can be corrected to provide accurate position determination for navigation and path control. The purpose of this paper is to present the experimental results and analyses of the imaging characteristics of the omnivision system including the design of robot-oriented experiments and the calibration of raw results. Errors less than one picture element on each axis were observed by testing the accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor. Similar results were obtained for four different locations using corrected results of the linearity test between zenith angle and image location. Angular error of less than one degree and radial error of less than one Y picture element were observed at moderate relative speed. The significance of this work is that the experimental information and the test of coordinated operation of the equipment provide a greater understanding of the dynamic omnivision system characteristics, as well as insight into the evaluation and improvement of the prototype sensor for a mobile robot. Also, the calibration of the sensor is important, since the results provide a cornerstone for future developments. This sensor system is currently being developed for a robot lawn mower.
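
    The reported linearity between zenith angle and image location is consistent with the common equidistant fisheye model, r = f·θ. The sketch below illustrates that mapping only as a hedged assumption about the sensor model; the calibration constant is hypothetical, and the actual omnivision calibration in the paper may differ.

```python
import math

def zenith_from_radius(r_pixels, k_pixels_per_rad):
    """Zenith angle (rad) of a target from its radial distance in the fisheye image."""
    return r_pixels / k_pixels_per_rad

def radius_from_zenith(theta_rad, k_pixels_per_rad):
    """Inverse mapping: expected image radius for a given zenith angle."""
    return k_pixels_per_rad * theta_rad

k = 150.0                                             # pixels per radian (assumed calibration value)
print(math.degrees(zenith_from_radius(120.0, k)))     # ~45.8 degrees off the optical axis
```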

  10. Evaluation of novel technologies for the miniaturization of flash imaging lidar

    NASA Astrophysics Data System (ADS)

    Mitev, V.; Pollini, A.; Haesler, J.; Perenzoni, D.; Stoppa, D.; Kolleck, Christian; Chapuy, M.; Kervendal, E.; Pereira do Carmo, João.

    2017-11-01

    Planetary exploration constitutes one of the main components of European space activities. Missions to Mars, the Moon and asteroids are foreseen, where it is assumed that human missions will be preceded by robotic exploration flights. 3D vision is recognised as a key enabling technology for the relative proximity navigation of spacecraft, where imaging LiDAR is one of the best candidates for such a 3D vision sensor.

  11. Localization Using Visual Odometry and a Single Downward-Pointing Camera

    NASA Technical Reports Server (NTRS)

    Swank, Aaron J.

    2012-01-01

    Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping, (SLAM). Yet, the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is for rover-based robotic applications for localization within GPS-denied environments.
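
    As a hedged sketch of the flat-ground geometry behind downward-pointing visual odometry: a measured image flow converts to ground displacement via d = flow_px × height / focal_px. The numbers below are illustrative, not results from the report.

```python
def ground_displacement(flow_px, height_m, focal_px):
    """Per-frame ground translation (metres) from median image flow (pixels)."""
    return [f * height_m / focal_px for f in flow_px]

# 12 px of flow in x and -4 px in y, camera 0.8 m above ground, 600 px focal length
print(ground_displacement(flow_px=(12.0, -4.0), height_m=0.8, focal_px=600.0))
# -> [0.016, -0.0053...] metres moved between frames
```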

  12. Computer vision techniques for rotorcraft low-altitude flight

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Cheng, Victor H. L.

    1988-01-01

    A description is given of research that applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and how research in this area relates to the guidance of other autonomous vehicles.

  13. Real-time adaptive off-road vehicle navigation and terrain classification

    NASA Astrophysics Data System (ADS)

    Muller, Urs A.; Jackel, Lawrence D.; LeCun, Yann; Flepp, Beat

    2013-05-01

    We are developing a complete, self-contained autonomous navigation system for mobile robots that learns quickly, uses commodity components, and has the added benefit of emitting no radiation signature. It builds on the autonomous navigation technology developed by Net-Scale and New York University during the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) program and takes advantage of recent scientific advancements achieved during the DARPA Deep Learning program. In this paper we will present our approach and algorithms, show results from our vision system, discuss lessons learned from the past, and present our plans for further advancing vehicle autonomy.

  14. Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs

    NASA Technical Reports Server (NTRS)

    Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen

    2015-01-01

    An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
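
    The abstract does not name the keypoint detector used on the pseudo-images, so the sketch below is only an assumed illustration of the general pipeline: detect and match features between two overlapping pseudo-images, reject outliers with RANSAC, and use the surviving transform as an initial guess for ICP. ORB, the synthetic images and all parameters are placeholders.

```python
import cv2
import numpy as np

# Synthetic stand-ins for two overlapping sonar "pseudo-images": textured
# image of random rectangles, and a rotated + shifted copy of it.
rng = np.random.default_rng(1)
img_a = np.zeros((400, 400), np.uint8)
for _ in range(40):
    x, y = rng.integers(0, 360, 2)
    w, h = rng.integers(15, 40, 2)
    cv2.rectangle(img_a, (int(x), int(y)), (int(x + w), int(y + h)),
                  int(rng.integers(60, 255)), -1)
T = cv2.getRotationMatrix2D((200, 200), angle=5.0, scale=1.0)
T[:, 2] += (12.0, -8.0)                               # extra translation in pixels
img_b = cv2.warpAffine(img_a, T, (400, 400))

orb = cv2.ORB_create(nfeatures=1000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)

pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

# RANSAC keeps only geometrically consistent matches; the recovered rigid
# transform can seed an ICP alignment of the underlying point clouds.
M, inliers = cv2.estimateAffinePartial2D(pts_a, pts_b, method=cv2.RANSAC,
                                         ransacReprojThreshold=3.0)
print("inlier matches:", int(inliers.sum()))
print("estimated transform:\n", M)
```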

  15. Vision-Based Georeferencing of GPR in Urban Areas

    PubMed Central

    Barzaghi, Riccardo; Cazzaniga, Noemi Emanuela; Pagliari, Diana; Pinto, Livio

    2016-01-01

    Ground Penetrating Radar (GPR) surveying is widely used to gather accurate knowledge about the geometry and position of underground utilities. The sensor arrays need to be coupled to an accurate positioning system, like a geodetic-grade Global Navigation Satellite System (GNSS) device. However, in urban areas this approach is not always feasible because GNSS accuracy can be substantially degraded due to the presence of buildings, trees, tunnels, etc. In this work, a photogrammetric (vision-based) method for GPR georeferencing is presented. The method can be summarized in three main steps: tie point extraction from the images acquired during the survey, computation of approximate camera extrinsic parameters and finally a refinement of the parameter estimation using a rigorous implementation of the collinearity equations. A test under operational conditions is described, where accuracy of a few centimeters has been achieved. The results demonstrate that the solution was robust enough for recovering vehicle trajectories even in critical situations, such as poorly textured framed surfaces, short baselines, and low intersection angles. PMID:26805842
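
    For the final refinement step mentioned above, the sketch below writes out the image-plane residual of one observation under the collinearity equations. It uses one common convention (rotation mapping object to camera coordinates); the paper's exact parameterization may differ, and the rotation, camera constant and coordinates shown are hypothetical.

```python
import numpy as np

def collinearity_residual(obj_pt, cam_center, R, c, observed_xy):
    """
    Residual of one image observation under the collinearity model:
        x = -c * d_x / d_z,   y = -c * d_y / d_z,   with d = R (X - X0).
    """
    d = R @ (np.asarray(obj_pt, dtype=float) - np.asarray(cam_center, dtype=float))
    predicted = -c * d[:2] / d[2]
    return np.asarray(observed_xy, dtype=float) - predicted

R = np.eye(3)   # assumed camera orientation (identity for the example)
res = collinearity_residual(obj_pt=[2.0, 1.0, 10.0], cam_center=[0.0, 0.0, 0.0],
                            R=R, c=0.05, observed_xy=[-0.0101, -0.0049])
print(res)      # small residual -> observation nearly consistent with the model
```

    In the bundle-style refinement described in the abstract, such residuals over all tie points are minimized with respect to the camera extrinsic parameters.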

  16. Use of Virtual Mission Operations Center Technology to Achieve JPDO's Virtual Tower Vision

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Paulsen, Phillip E.

    2006-01-01

    The Joint Program Development Office has proposed that the Next Generation Air Transportation System (NGATS) consolidate control centers. NGATS would be managed from a few strategically located facilities with virtual towers and TRACONs. This consolidation is about combining the delivery locations for these services, not about decreasing service. By consolidating these locations, cost savings on the order of $500 million have been projected. Evolving to space-based communication, navigation, and surveillance offers the opportunity to reduce or eliminate much of the ground-based infrastructure cost. Dynamically adjusted airspace offers the opportunity to reduce the number of sectors and boundary inconsistencies; eliminate or reduce "handoffs;" and eliminate the distinction between Towers, TRACONs, and Enroute Centers. To realize a consolidation vision for air traffic management, there must be investment in networking. One technology that holds great potential is the use of Virtual Mission Operations Centers (VMOC) to provide secure, automated, intelligent management of the NGATS. This paper provides a conceptual framework for incorporating VMOC into the NGATS.

  17. Patient-specific induced pluripotent stem cells (iPSCs) for the study and treatment of retinal degenerative diseases.

    PubMed

    Wiley, Luke A; Burnight, Erin R; Songstad, Allison E; Drack, Arlene V; Mullins, Robert F; Stone, Edwin M; Tucker, Budd A

    2015-01-01

    Vision is the sense that we use to navigate the world around us. Thus it is not surprising that blindness is one of people's most feared maladies. Heritable diseases of the retina, such as age-related macular degeneration and retinitis pigmentosa, are the leading cause of blindness in the developed world, collectively affecting as many as one-third of all people over the age of 75, to some degree. For decades, scientists have dreamed of preventing vision loss or of restoring the vision of patients affected with retinal degeneration through drug therapy, gene augmentation or a cell-based transplantation approach. In this review we will discuss the use of the induced pluripotent stem cell technology to model and develop various treatment modalities for the treatment of inherited retinal degenerative disease. We will focus on the use of iPSCs for interrogation of disease pathophysiology, analysis of drug and gene therapeutics and as a source of autologous cells for cell transplantation and replacement. Copyright © 2014. Published by Elsevier Ltd.

  18. Multi-spectrum-based enhanced synthetic vision system for aircraft DVE operations

    NASA Astrophysics Data System (ADS)

    Kashyap, Sudesh K.; Naidu, V. P. S.; Shanthakumar, N.

    2016-04-01

    This paper focuses on R&D being carried out at CSIR-NAL on an Enhanced Synthetic Vision System (ESVS) for the Indian regional transport aircraft, to enhance all-weather operational capabilities with safety and pilot Situation Awareness (SA) improvements. A flight simulator has been developed to study ESVS-related technologies, to develop ESVS operational concepts for all-weather approach and landing, and to provide quantitative and qualitative information that could be used to develop criteria for all-weather approach and landing at regional airports in India. An Enhanced Vision System (EVS) hardware prototype with a long-wave infrared sensor and a low-light CMOS camera was used to carry out a few field trials on a ground vehicle on an airport runway under different visibility conditions. A data acquisition and playback system has been developed to capture EVS sensor data (images) in time synchronization with the test vehicle's inertial navigation data during EVS field experiments, and to play back the experimental data on the ESVS flight simulator for ESVS research and concept studies. Efforts are underway to conduct EVS flight experiments on the CSIR-NAL research aircraft HANSA in a Degraded Visual Environment (DVE).

  19. Vision Based Obstacle Detection in Uav Imaging

    NASA Astrophysics Data System (ADS)

    Badrloo, S.; Varshosaz, M.

    2017-08-01

    Detecting and avoiding collisions with obstacles is crucial in UAV navigation and control. Most of the common obstacle detection techniques are currently sensor-based. Small UAVs are not able to carry obstacle detection sensors such as radar; therefore, vision-based methods are considered, which can be divided into stereo-based and mono-based techniques. Mono-based methods are classified into two groups: foreground-background separation, and brain-inspired methods. Brain-inspired methods are highly efficient in obstacle detection; hence, this research aims to detect obstacles using brain-inspired techniques, which exploit the apparent enlargement of an obstacle as it is approached. A recent study in this field concentrated on matching SIFT points, together with the SIFT size-ratio factor and the area-ratio of convex hulls in two consecutive frames, to detect obstacles. That method is not able to distinguish between near and far obstacles or obstacles in complex environments, and is sensitive to wrongly matched points. In order to solve the above-mentioned problems, this research calculates the dist-ratio of matched points. Then, each point is investigated to distinguish between far and close obstacles. The results demonstrate the high efficiency of the proposed method in complex environments.
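
    A hedged sketch of the dist-ratio idea described above: pairwise distances between matched feature points grow as an obstacle is approached, so the ratio of those distances across two consecutive frames indicates expansion. The matched coordinates and the decision threshold below are hypothetical, not data from the paper.

```python
import itertools
import numpy as np

def dist_ratio(pts_prev, pts_curr):
    """Median ratio of pairwise point distances, current frame over previous frame."""
    ratios = []
    for i, j in itertools.combinations(range(len(pts_prev)), 2):
        d_prev = np.linalg.norm(np.subtract(pts_prev[i], pts_prev[j]))
        d_curr = np.linalg.norm(np.subtract(pts_curr[i], pts_curr[j]))
        if d_prev > 1e-6:
            ratios.append(d_curr / d_prev)
    return float(np.median(ratios))

pts_prev = [(100, 100), (140, 100), (120, 150)]
pts_curr = [(95, 95), (147, 95), (121, 160)]   # same features, one frame later
ratio = dist_ratio(pts_prev, pts_curr)
print(ratio, "-> expanding obstacle" if ratio > 1.1 else "-> no significant expansion")
```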

  20. Visual navigation in starfish: first evidence for the use of vision and eyes in starfish

    PubMed Central

    Garm, Anders; Nilsson, Dan-Eric

    2014-01-01

    Most known starfish species possess a compound eye at the tip of each arm, which, except for the lack of true optics, resembles an arthropod compound eye. Although these compound eyes have been known for about two centuries, no visually guided behaviour has ever been directly associated with their presence. There are indications that they are involved in negative phototaxis but this may also be governed by extraocular photoreceptors. Here, we show that the eyes of the coral-reef-associated starfish Linckia laevigata are slow and colour blind. The eyes are capable of true image formation although with low spatial resolution. Further, our behavioural experiments reveal that only specimens with intact eyes can navigate back to their reef habitat when displaced, demonstrating that this is a visually guided behaviour. This is, to our knowledge, the first report of a function of starfish compound eyes. We also show that the spectral sensitivity optimizes the contrast between the reef and the open ocean. Our results provide an example of an eye supporting only low-resolution vision, which is believed to be an essential stage in eye evolution, preceding the high-resolution vision required for detecting prey, predators and conspecifics. PMID:24403344

  1. Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System

    DTIC Science & Technology

    2015-03-26

    camera model. Light reflected or projected from objects in the scene of the outside world is taken in by the aperture (or opening) shaped as a double...model’s analog aspects with an analog-to-digital interface converting raw images of the outside world scene into digital information a computer can use to...Figure 2.7. Digital Image Coordinate System. Used with permission [30]. Angular Field of View. The angular field of view is the angle of the world scene

  2. Bioinspired optical sensors for unmanned aerial systems

    NASA Astrophysics Data System (ADS)

    Chahl, Javaan; Rosser, Kent; Mizutani, Akiko

    2011-04-01

    Insects are dependent on the spatial, spectral and temporal distributions of light in the environment for flight control and navigation. This paper reports on flight trials of implementations of insect-inspired behaviors on unmanned aerial vehicles. Optical flow methods for maintaining a constant height above ground and a constant course have been demonstrated to provide navigation capabilities that are impossible using conventional avionics sensors. Precision control of height above ground and ground course were achieved over long distances. Other vision-based techniques demonstrated include a biomimetic stabilization sensor that uses the ultraviolet and green bands of the spectrum, and a sky polarization compass. Both of these sensors were tested over long trajectories in different directions, in each case showing performance similar to low-cost inertial heading and attitude systems. The behaviors demonstrate some of the core functionality found in the lower levels of the sensorimotor system of flying insects and show promise for more integrated solutions in the future.
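
    The constant height-above-ground behavior exploits the fact that ventral optic flow scales roughly as ground speed divided by height, so regulating the flow at a setpoint regulates height. The following is a minimal hedged sketch of that control idea; the gains, limits and flow source are illustrative assumptions, not the flight-tested implementation.

    ```python
    # Minimal sketch of insect-style height regulation: ventral optic flow
    # (rad/s) is approximately ground_speed / height, so holding the flow at a
    # setpoint holds height for a given speed. Gain, limit and flow source are
    # illustrative assumptions.
    def height_hold_climb_rate(ventral_flow, ground_speed, height_setpoint,
                               k_p=1.5, limit=2.0):
        flow_setpoint = ground_speed / height_setpoint   # desired flow (rad/s)
        error = ventral_flow - flow_setpoint             # > 0 means we are too low
        climb_rate = k_p * error * height_setpoint       # crude scaling to m/s
        return max(-limit, min(limit, climb_rate))

    # Example: flying at 10 m/s, wanting 20 m altitude, measuring 0.6 rad/s flow
    # (i.e. ~16.7 m actual height) commands a climb, clipped to the 2 m/s limit.
    print(height_hold_climb_rate(0.6, 10.0, 20.0))
    ```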

  3. Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle

    PubMed Central

    Chen, Long; Li, Qingquan; Li, Ming; Zhang, Liang; Mao, Qingzhou

    2012-01-01

    This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. This system utilizes the cooperation of multiple lasers and cameras to realize several functions necessary for autonomous navigation: road curb detection, lane detection and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on a Z-variance method. Vision-based lane detection is realized by a two-scan method combined with an image model. A Haar-like-feature-based method is applied for traffic sign detection, and a SURF matching method is used for sign classification. The experimental results validate the effectiveness of the proposed algorithms and the whole system.
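
    The abstract names a "Z-variance" method for curb detection but does not detail it. The sketch below illustrates one plausible reading, stated as an assumption: compute the variance of point heights within a sliding window along a laser scan line and flag windows where it jumps; the window size and threshold are illustrative.

    ```python
    # Minimal sketch of curb cueing from a single laser scan line, assuming the
    # "Z-variance" idea means: compute the variance of point heights (z, metres)
    # in a sliding window along the scan and flag windows exceeding a threshold
    # (m^2). Window size and threshold are illustrative assumptions.
    import numpy as np

    def curb_candidates(z_profile, window=9, threshold=4e-4):
        z = np.asarray(z_profile, dtype=float)
        half = window // 2
        flags = np.zeros(len(z), dtype=bool)
        for i in range(half, len(z) - half):
            if np.var(z[i - half:i + half + 1]) > threshold:
                flags[i] = True
        return flags

    # Example: a flat road profile with a 10 cm step around index 50.
    road = np.concatenate([np.zeros(50), 0.10 * np.ones(50)])
    print(np.flatnonzero(curb_candidates(road)))   # indices near the step
    ```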

  4. Meta-image navigation augmenters for unmanned aircraft systems (MINA for UAS)

    NASA Astrophysics Data System (ADS)

    Çelik, Koray; Somani, Arun K.; Schnaufer, Bernard; Hwang, Patrick Y.; McGraw, Gary A.; Nadke, Jeremy

    2013-05-01

    GPS is a critical sensor for Unmanned Aircraft Systems (UASs) due to its accuracy, global coverage and small hardware footprint, but is subject to denial due to signal blockage or RF interference. When GPS is unavailable, position, velocity and attitude (PVA) performance from other inertial and air data sensors is not sufficient, especially for small UASs. Recently, image-based navigation algorithms have been developed to address GPS outages for UASs, since most of these platforms already include a camera as standard equipage. Performing absolute navigation with real-time aerial images requires georeferenced data, either images or landmarks, as a reference. Georeferenced imagery is readily available today, but requires a large amount of storage, whereas collections of discrete landmarks are compact but must be generated by pre-processing. An alternative, compact source of georeferenced data having large coverage area is open source vector maps from which meta-objects can be extracted for matching against real-time acquired imagery. We have developed a novel, automated approach called MINA (Meta Image Navigation Augmenters), which is a synergy of machine-vision and machine-learning algorithms for map aided navigation. As opposed to existing image map matching algorithms, MINA utilizes publicly available open-source geo-referenced vector map data, such as OpenStreetMap, in conjunction with real-time optical imagery from an on-board, monocular camera to augment the UAS navigation computer when GPS is not available. The MINA approach has been experimentally validated with both actual flight data and flight simulation data and results are presented in the paper.

  5. Attenuating Stereo Pixel-Locking via Affine Window Adaptation

    NASA Technical Reports Server (NTRS)

    Stein, Andrew N.; Huertas, Andres; Matthies, Larry H.

    2006-01-01

    For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking,' which produces artificially-peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it has the ability to correct much larger initial disparity errors than previous approaches and is more general as it applies not only to the ground plane.
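
    For context, the sketch below shows the standard three-point parabola fit over the aggregated matching cost that the paper identifies as the source of pixel-locking; the paper's affine, Lucas-Kanade-style refinement itself is not reproduced here.

    ```python
    # The standard sub-pixel refinement this paper improves upon: fit a parabola
    # through the matching cost at the best integer disparity d and its two
    # neighbours and take the parabola's minimum. The bias of this fit toward
    # integer disparities is what produces "pixel-locking". Illustrative sketch.
    def subpixel_disparity(cost, d):
        """cost: 1-D sequence of matching costs indexed by disparity;
        d: integer disparity with minimum cost (not at either end of the range)."""
        c_m, c_0, c_p = cost[d - 1], cost[d], cost[d + 1]
        denom = c_m - 2.0 * c_0 + c_p
        if denom <= 0:              # degenerate / flat cost curve
            return float(d)
        return d + 0.5 * (c_m - c_p) / denom

    # Example: costs (5, 2, 4) around d = 7 give a sub-pixel disparity of 7.1.
    print(subpixel_disparity({6: 5.0, 7: 2.0, 8: 4.0}, 7))
    ```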

  6. Environmental Recognition and Guidance Control for Autonomous Vehicles using Dual Vision Sensor and Applications

    NASA Astrophysics Data System (ADS)

    Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki

    We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor, together with navigation control based on binocular images. As an application of these techniques, we aim to develop a guide robot that can play the role of a guide dog as an aid to people such as the visually impaired or the aged. This paper presents a recognition algorithm which finds the line of a series of Braille blocks and the boundary line between a sidewalk and a roadway where a difference in level exists, using binocular images obtained from a pair of parallel-arrayed CCD cameras. This paper also presents a tracking algorithm with which the guide robot traces along a series of Braille blocks and avoids obstacles and unsafe areas lying in the path of a person accompanied by the guide robot.

  7. Vision based object pose estimation for mobile robots

    NASA Technical Reports Server (NTRS)

    Wu, Annie; Bidlack, Clint; Katkere, Arun; Feague, Roy; Weymouth, Terry

    1994-01-01

    Mobile robot navigation using visual sensors requires that a robot be able to detect landmarks and obtain pose information from a camera image. This paper presents a vision system for finding man-made markers of known size and calculating the pose of these markers. The algorithm detects and identifies the markers using a weighted pattern matching template. Geometric constraints are then used to calculate the position of the markers relative to the robot. The geometric constraints come from the typical pose of most man-made signs, such as the sign standing vertical and having known dimensions. The system has been tested successfully on a wide range of real images. Marker detection is reliable even in cluttered environments; under certain marker orientations, orientation estimation has proven accurate to within 2 degrees, and distance estimation to within 0.3 meters.
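
    The range part of such a geometric constraint can be illustrated with the basic pinhole relation: a marker of known physical height H that spans h pixels under a focal length of f pixels lies at roughly Z = f * H / h. A minimal sketch with illustrative numbers, not the paper's calibration:

    ```python
    # Minimal sketch of ranging a detected marker of known physical size with a
    # calibrated pinhole camera. Numbers are illustrative assumptions.
    def marker_range(marker_height_m, pixel_height, focal_length_px):
        """Distance along the optical axis, assuming the marker is roughly
        fronto-parallel (tilt introduces additional error)."""
        return focal_length_px * marker_height_m / pixel_height

    # Example: a 0.30 m tall sign spanning 60 px with f = 800 px is ~4 m away.
    print(marker_range(0.30, 60, 800.0))   # 4.0
    ```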

  8. Cobalt: Development and Maturation of GN&C Technologies for Precision Landing

    NASA Technical Reports Server (NTRS)

    Carson, John M.; Restrepo, Carolina; Seubert, Carl; Amzajerdian, Farzin

    2016-01-01

    The CoOperative Blending of Autonomous Landing Technologies (COBALT) instrument is a terrestrial test platform for development and maturation of guidance, navigation and control (GN&C) technologies for precision landing. The project is developing a third-generation Langley Research Center (LaRC) navigation doppler lidar (NDL) for ultra-precise velocity and range measurements, which will be integrated and tested with the Jet Propulsion Laboratory (JPL) lander vision system (LVS) for terrain relative navigation (TRN) position estimates. These technologies together provide precise navigation knowledge that is critical for a controlled and precise touchdown. The COBALT hardware will be integrated in 2017 into the GN&C subsystem of the Xodiac rocket-propulsive vertical test bed (VTB) developed by Masten Space Systems, and two terrestrial flight campaigns will be conducted: one open-loop (i.e., passive) and one closed-loop (i.e., active).

  9. New vision based navigation clue for a regular colonoscope's tip

    NASA Astrophysics Data System (ADS)

    Mekaouar, Anouar; Ben Amar, Chokri; Redarce, Tanneguy

    2009-02-01

    Regular colonoscopy has always been regarded as a complicated procedure requiring a tremendous amount of skill to be safely performed. Indeed, the practitioner needs to contend with both the tortuousness of the colon and the mastering of a colonoscope, taking the visual data acquired by the scope's tip into account and relying mostly on common sense and skill to steer it in a fashion that promotes a safe insertion of the device's shaft. In that context, we propose a new navigation clue for the tip of a regular colonoscope in order to assist surgeons during a colonoscopic examination. Firstly, we consider a patch of the inner colon depicted in a regular colonoscopy frame. Then we perform a rough 3D reconstruction of the corresponding 2D data, and a navigation trajectory is suggested on the basis of the obtained relief. Both the visible and invisible lumen cases are considered. Owing to its low computational cost, such a strategy allows for intraoperative configuration changes and thus reduces the effect of the colon's non-rigidity. Besides, it tends to provide a safe navigation trajectory through the whole colon, since the approach aims at keeping the extremity of the instrument as far as possible from the colon wall during navigation. In order to make the considered process effective, we replaced the original manual control system of a regular colonoscope with a motorized one allowing automatic pan and tilt motions of the device's tip.

  10. National Centers for Environmental Prediction

    Science.gov Websites


  11. Integrated Communications, Navigation and Surveillance Technologies Keynote Address

    NASA Technical Reports Server (NTRS)

    Lebacqz, J. Victor

    2004-01-01

    Slides for the Keynote Address present graphics to enhance the discussion of NASA's vision, the National Space Exploration Initiative, current Mars exploration, and aeronautics exploration. The presentation also focuses on development of an Air Transportation System and transformation from present systems.

  12. Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas

    PubMed Central

    2018-01-01

    This paper presents a method of fusing the ego-motion of a robot or a land vehicle estimated from an upward-facing camera with Global Navigation Satellite System (GNSS) signals for navigation purposes in urban environments. A sky-pointing camera is mounted on the top of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold: firstly, for the GNSS signals, the upward-facing camera will be used to classify the acquired images into sky and non-sky (also known as segmentation). A satellite falling into the non-sky areas (e.g., buildings, trees) will be rejected and not considered for the final position solution computation. Secondly, the sky-pointing camera (with a field of view of about 90 degrees) is helpful for urban area ego-motion estimation in the sense that it does not see most of the moving objects (e.g., pedestrians, cars) and thus is able to estimate the ego-motion with fewer outliers than is typical with a forward-facing camera. The GNSS and visual information systems are tightly-coupled in a Kalman filter for the final position solution. Experimental results demonstrate the ability of the system to provide satisfactory navigation solutions and better accuracy than the GNSS-only and the loosely-coupled GNSS/vision, 20 percent and 82 percent (in the worst case) respectively, in a deep urban canyon, even in conditions with fewer than four GNSS satellites. PMID:29673230

  13. Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas.

    PubMed

    Gakne, Paul Verlaine; O'Keefe, Kyle

    2018-04-17

    This paper presents a method of fusing the ego-motion of a robot or a land vehicle estimated from an upward-facing camera with Global Navigation Satellite System (GNSS) signals for navigation purposes in urban environments. A sky-pointing camera is mounted on the top of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold: firstly, for the GNSS signals, the upward-facing camera will be used to classify the acquired images into sky and non-sky (also known as segmentation). A satellite falling into the non-sky areas (e.g., buildings, trees) will be rejected and not considered for the final position solution computation. Secondly, the sky-pointing camera (with a field of view of about 90 degrees) is helpful for urban area ego-motion estimation in the sense that it does not see most of the moving objects (e.g., pedestrians, cars) and thus is able to estimate the ego-motion with fewer outliers than is typical with a forward-facing camera. The GNSS and visual information systems are tightly-coupled in a Kalman filter for the final position solution. Experimental results demonstrate the ability of the system to provide satisfactory navigation solutions and better accuracy than the GNSS-only and the loosely-coupled GNSS/vision, 20 percent and 82 percent (in the worst case) respectively, in a deep urban canyon, even in conditions with fewer than four GNSS satellites.
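
    A minimal sketch of the satellite-screening step described above: project each satellite's azimuth and elevation into the upward-facing image and keep the satellite only if it lands on a sky pixel of the segmentation mask. The equidistant fisheye model, the camera's alignment with north, the roughly 90-degree field of view and the mask convention are assumptions for illustration, not the paper's calibration.

    ```python
    # Minimal sketch of rejecting non-line-of-sight satellites with a sky mask
    # from an upward-facing camera. Equidistant fisheye projection, alignment
    # with north and mask convention (True == sky) are illustrative assumptions.
    import numpy as np

    def satellite_visible(az_deg, el_deg, sky_mask, fov_deg=90.0):
        h, w = sky_mask.shape
        cx, cy, r_max = w / 2.0, h / 2.0, min(w, h) / 2.0
        zenith = 90.0 - el_deg                      # angle from straight up
        if zenith >= fov_deg / 2.0:
            return False                            # outside the camera's view
        r = r_max * zenith / (fov_deg / 2.0)        # equidistant projection
        az = np.radians(az_deg)
        u = int(round(cx + r * np.sin(az)))         # assume image +x ~ east
        v = int(round(cy - r * np.cos(az)))         # assume image -y ~ north
        if not (0 <= u < w and 0 <= v < h):
            return False
        return bool(sky_mask[v, u])

    # Usage: keep only satellites whose projection lands on sky pixels.
    # usable = [s for s in sats if satellite_visible(s.az, s.el, mask)]
    ```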

  14. A Vision-Based Relative Navigation Approach for Autonomous Multirotor Aircraft

    NASA Astrophysics Data System (ADS)

    Leishman, Robert C.

    Autonomous flight in unstructured, confined, and unknown GPS-denied environments is a challenging problem. Solutions could be tremendously beneficial for scenarios that require information about areas that are difficult to access and that present a great amount of risk. The goal of this research is to develop a new framework that enables improved solutions to this problem and to validate the approach with experiments using a hardware prototype. In Chapter 2 we examine the consequences and practical aspects of using an improved dynamic model for multirotor state estimation, using only IMU measurements. The improved model correctly explains the measurements available from the accelerometers on a multirotor. We provide hardware results demonstrating the improved attitude, velocity and even position estimates that can be achieved through the use of this model. We propose a new architecture to simplify some of the challenges that constrain GPS-denied aerial flight in Chapter 3. At the core, the approach combines visual graph-SLAM with a multiplicative extended Kalman filter (MEKF). More importantly, we depart from the common practice of estimating global states and instead keep the position and yaw states of the MEKF relative to the current node in the map. This relative navigation approach provides a tremendous benefit compared to maintaining estimates with respect to a single global coordinate frame. We discuss the architecture of this new system and provide important details for each component. We verify the approach with goal-directed autonomous flight-test results. The MEKF is the basis of the new relative navigation approach and is detailed in Chapter 4. We derive the relative filter and show how the states must be augmented and marginalized each time a new node is declared. The relative estimation approach is verified using hardware flight test results accompanied by comparisons to motion capture truth. Additionally, flight results with estimates in the control loop are provided. We believe that the relative, vision-based framework described in this work is an important step in furthering the capabilities of indoor aerial navigation in confined, unknown environments. Current approaches incur challenging problems by requiring globally referenced states. Utilizing a relative approach allows more flexibility as the critical, real-time processes of localization and control do not depend on computationally-demanding optimization and loop-closure processes.

  15. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
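
    A minimal sketch of the core pixelation step described above: boost edges so they survive the drastic resolution drop, then average-pool the frame down to the electrode grid. The grid size, blending weight and edge detector are illustrative assumptions, not the AVS(2) module chain.

    ```python
    # Minimal sketch of reducing a camera frame to an electrode-array "image":
    # boost edges first (so they survive the resolution drop), then average-pool
    # down to the grid size. Grid size, weight and kernel are assumptions.
    import cv2
    import numpy as np

    def pixelate_for_implant(frame_gray, grid=(10, 6), edge_weight=0.5):
        """frame_gray: 8-bit grayscale frame; grid: (width, height) of array."""
        edges = cv2.Canny(frame_gray, 50, 150).astype(np.float32)
        enhanced = cv2.addWeighted(frame_gray.astype(np.float32), 1.0 - edge_weight,
                                   edges, edge_weight, 0.0)
        # INTER_AREA performs area averaging, i.e. one value per electrode.
        return cv2.resize(enhanced, grid, interpolation=cv2.INTER_AREA)

    # Usage: stimulation = pixelate_for_implant(gray_frame, grid=(10, 6))
    ```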

  16. Object Persistence Enhances Spatial Navigation: A Case Study in Smartphone Vision Science.

    PubMed

    Liverence, Brandon M; Scholl, Brian J

    2015-07-01

    Violations of spatiotemporal continuity disrupt performance in many tasks involving attention and working memory, but experiments on this topic have been limited to the study of moment-by-moment on-line perception, typically assessed by passive monitoring tasks. We tested whether persisting object representations also serve as underlying units of longer-term memory and active spatial navigation, using a novel paradigm inspired by the visual interfaces common to many smartphones. Participants used key presses to navigate through simple visual environments consisting of grids of icons (depicting real-world objects), only one of which was visible at a time through a static virtual window. Participants found target icons faster when navigation involved persistence cues (via sliding animations) than when persistence was disrupted (e.g., via temporally matched fading animations), with all transitions inspired by smartphone interfaces. Moreover, this difference occurred even after explicit memorization of the relevant information, which demonstrates that object persistence enhances spatial navigation in an automatic and irresistible fashion. © The Author(s) 2015.

  17. Space Mobile Network: A Near Earth Communication and Navigation Architecture

    NASA Technical Reports Server (NTRS)

    Israel, Dave J.; Heckler, Greg; Menrad, Robert J.

    2016-01-01

    This paper describes a Space Mobile Network architecture, the result of a recently completed NASA study exploring architectural concepts to produce a vision for the future Near Earth communications and navigation systems. The Space Mobile Network (SMN) incorporates technologies, such as Disruption Tolerant Networking (DTN) and optical communications, and new operations concepts, such as User Initiated Services, to provide user services analogous to a terrestrial smartphone user. The paper will describe the SMN Architecture, envisioned future operations concepts, opportunities for industry and international collaboration and interoperability, and technology development areas and goals.

  18. COBALT Flight Demonstrations Fuse Technologies

    NASA Image and Video Library

    2017-06-07

    This 5-minute, 50-second video shows how the CoOperative Blending of Autonomous Landing Technologies (COBALT) system pairs new landing sensor technologies that promise to yield the highest precision navigation solution ever tested for NASA space landing applications. The technologies included a navigation doppler lidar (NDL), which provides ultra-precise velocity and line-of-sight range measurements, and the Lander Vision System (LVS), which provides terrain-relative navigation. Through flight campaigns conducted in March and April 2017 aboard Masten Space Systems' Xodiac, a rocket-powered vertical takeoff, vertical landing (VTVL) platform, the COBALT system was flight tested to collect sensor performance data for NDL and LVS and to check the integration and communication between COBALT and the rocket. The flight tests provided excellent performance data for both sensors, as well as valuable information on the integrated performance with the rocket that will be used for subsequent COBALT modifications prior to follow-on flight tests. Based at NASA’s Armstrong Flight Research Center in Edwards, CA, the Flight Opportunities program funds technology development flight tests on commercial suborbital space providers of which Masten is a vendor. The program has previously tested the LVS on the Masten rocket and validated the technology for the Mars 2020 rover.

  19. Application of parallelized software architecture to an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Shakya, Rahul; Wright, Adam; Shin, Young Ho; Momin, Orko; Petkovsek, Steven; Wortman, Paul; Gautam, Prasanna; Norton, Adam

    2011-01-01

    This paper presents improvements made to Q, an autonomous ground vehicle designed to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2010 IGVC, Q was upgraded with a new parallelized software architecture and a new vision processor. Improvements were made to the power system, reducing the number of batteries required for operation from six to one. In previous years, a single state machine was used to execute the bulk of processing activities, including sensor interfacing, data processing, path planning, navigation algorithms and motor control. This inefficient approach led to poor software performance and made the software difficult to maintain or modify. For IGVC 2010, the team implemented a modular parallel architecture using the National Instruments (NI) LabVIEW programming language. The new architecture divides all the necessary tasks - motor control, navigation, sensor data collection, etc. - into well-organized components that execute in parallel, providing considerable flexibility and facilitating efficient use of processing power. Computer vision is used to detect white lines on the ground and determine their location relative to the robot. With the new vision processor and some optimization of the previous year's image processing algorithm, two frames can be acquired and processed in 70 ms. With all these improvements, Q placed 2nd in the autonomous challenge.

  20. Fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots.

    PubMed

    Yoo, Jeong-Ki; Kim, Jong-Hwan

    2012-02-01

    When a humanoid robot moves in a dynamic environment, a simple process of planning and following a path may not guarantee competent performance for dynamic obstacle avoidance because the robot acquires limited information from the environment using a local vision sensor. Thus, it is essential to update its local map as frequently as possible to obtain more information through gaze control while walking. This paper proposes a fuzzy integral-based gaze control architecture incorporated with the modified-univector field-based navigation for humanoid robots. To determine the gaze direction, four criteria based on local map confidence, waypoint, self-localization, and obstacles, are defined along with their corresponding partial evaluation functions. Using the partial evaluation values and the degree of consideration for criteria, fuzzy integral is applied to each candidate gaze direction for global evaluation. For the effective dynamic obstacle avoidance, partial evaluation functions about self-localization error and surrounding obstacles are also used for generating virtual dynamic obstacle for the modified-univector field method which generates the path and velocity of robot toward the next waypoint. The proposed architecture is verified through the comparison with the conventional weighted sum-based approach with the simulations using a developed simulator for HanSaRam-IX (HSR-IX).

  1. Fusion of Synthetic and Enhanced Vision for All-Weather Commercial Aviation Operations

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence, III

    2007-01-01

    NASA is developing revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck during low visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the crew's ability to handle substantial navigational errors and runway incursions was not adversely impacted by the display concepts, although the addition of Enhanced Vision did not, of itself, provide an improvement in runway incursion detection.

  2. Progress in Insect-Inspired Optical Navigation Sensors

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Chahl, Javaan; Zometzer, Steve

    2005-01-01

    Progress has been made in continuing efforts to develop optical flight-control and navigation sensors for miniature robotic aircraft. The designs of these sensors are inspired by the designs and functions of the vision systems and brains of insects. Two types of sensors of particular interest are polarization compasses and ocellar horizon sensors. The basic principle of polarization compasses was described (but without using the term "polarization compass") in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate: Bees use sky polarization patterns in ultraviolet (UV) light, caused by Rayleigh scattering of sunlight by atmospheric gas molecules, as direction references relative to the apparent position of the Sun. A robotic direction-finding technique based on this concept would be more robust in comparison with a technique based on the direction to the visible Sun because the UV polarization pattern is distributed across the entire sky and, hence, is redundant and can be extrapolated from a small region of clear sky in an elsewhere cloudy sky that hides the Sun.

  3. Vision-Based Target Finding and Inspection of a Ground Target Using a Multirotor UAV System.

    PubMed

    Hinas, Ajmal; Roberts, Jonathan M; Gonzalez, Felipe

    2017-12-17

    In this paper, a system that uses an algorithm for target detection and navigation and a multirotor Unmanned Aerial Vehicle (UAV) for finding a ground target and inspecting it closely is presented. The system can also be used for accurate and safe delivery of payloads or spot spraying applications in site-specific crop management. A downward-looking camera attached to a multirotor is used to find the target on the ground. The UAV descends to the target and hovers above the target for a few seconds to inspect the target. A high-level decision algorithm based on an OODA (observe, orient, decide, and act) loop was developed as a solution to address the problem. Navigation of the UAV was achieved by continuously sending local position messages to the autopilot via Mavros. The proposed system performed hovering above the target in three different stages: locate, descend, and hover. The system was tested in multiple trials, in simulations and outdoor tests, from heights of 10 m to 40 m. Results show that the system is highly reliable and robust to sensor errors, drift, and external disturbance.

  4. Bats Use Path Integration Rather Than Acoustic Flow to Assess Flight Distance along Flyways.

    PubMed

    Aharon, Gal; Sadot, Meshi; Yovel, Yossi

    2017-12-04

    Navigation can be achieved using different strategies from simple beaconing to complex map-based movement [1-4]. Bats display remarkable navigation capabilities, ranging from nightly commutes of several kilometers and up to seasonal migrations over thousands of kilometers [5]. Many bats have been suggested to fly along fixed routes termed "flyways," when flying from their roost to their foraging sites [6]. Flyways commonly stretch along linear landscape elements such as tree lines, hedges, or rivers [7]. When flying along a flyway, bats must estimate the distance they have traveled in order to determine when to turn. This can be especially challenging when moving along a repetitive landscape. Some bats, like Kuhl's pipistrelles, which we studied here, have limited vision [8] and were suggested to rely on bio-sonar for navigation. These bats could therefore estimate distance using three main sensory-navigation strategies, all of which we have examined: acoustic flow, acoustic landmarks, or path integration. We trained bats to fly along a linear flyway and land on a platform. We then tested their behavior when the platform was removed under different manipulations, including changing the acoustic flow, moving the start point, and adding wind. We found that bats do not require acoustic flow, which was hypothesized to be important for their navigation [9-15], and that they can perform the task without landmarks. Our results suggest that Kuhl's pipistrelles use internal self-motion cues-also known as path integration-rather than external information to estimate flight distance for at least dozens of meters when navigating along linear flyways. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Creating the Future after Job Loss.

    ERIC Educational Resources Information Center

    McKnight, Richard

    1991-01-01

    Typical reactions to job loss are Victim, Survivor, and Navigator responses. A training program can help participants acknowledge their feelings, identify positive ways to manage change, understand the phases of change, learn stress management techniques, visualize their desired futures, and plan for achieving their vision. (SK)

  6. National Centers for Environmental Prediction

    Science.gov Websites


  7. Flight Test Evaluation of Synthetic Vision Concepts at a Terrain Challenged Airport

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Prinzel, Lawrence J., III; Bailey, Randall E.; Arthur, Jarvis J., III; Parrish, Russell V.

    2004-01-01

    NASA's Synthetic Vision Systems (SVS) Project is striving to eliminate poor visibility as a causal factor in aircraft accidents as well as enhance operational capabilities of all aircraft through the display of computer generated imagery derived from an onboard database of terrain, obstacle, and airport information. To achieve these objectives, NASA 757 flight test research was conducted at the Eagle-Vail, Colorado airport to evaluate three SVS display types (Head-up Display, Head-Down Size A, Head-Down Size X) and two terrain texture methods (photo-realistic, generic) in comparison to the simulated Baseline Boeing-757 Electronic Attitude Direction Indicator and Navigation/Terrain Awareness and Warning System displays. The results of the experiment showed significantly improved situation awareness, performance, and workload for SVS concepts compared to the Baseline displays and confirmed the retrofit capability of the Head-Up Display and Size A SVS concepts. The research also demonstrated that the tunnel guidance display concept used within the SVS concepts achieved required navigation performance (RNP) criteria.

  8. Robust human machine interface based on head movements applied to assistive robotics.

    PubMed

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.

  9. Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics

    PubMed Central

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877
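
    The minimum-variance fusion of two independent estimates of the same quantity (here, head orientation from the inertial sensor and from vision) reduces to inverse-variance weighting. A minimal sketch with illustrative variances, not the paper's filter:

    ```python
    # Minimal sketch of minimum-variance fusion of two independent estimates of
    # the same angle (e.g. head yaw from an IMU and from vision). Variances are
    # illustrative assumptions.
    def fuse(x_imu, var_imu, x_cam, var_cam):
        w_imu = 1.0 / var_imu
        w_cam = 1.0 / var_cam
        x = (w_imu * x_imu + w_cam * x_cam) / (w_imu + w_cam)
        var = 1.0 / (w_imu + w_cam)      # always <= the smaller input variance
        return x, var

    # Example: 10 deg with variance 4 (IMU) fused with 14 deg with variance 1
    # (vision) yields ~13.2 deg, pulled toward the more certain sensor.
    print(fuse(10.0, 4.0, 14.0, 1.0))
    ```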

  10. Implementing the President's Vision: JPL and NASA's Exploration Systems Mission Directorate

    NASA Technical Reports Server (NTRS)

    Sander, Michael J.

    2006-01-01

    As part of the NASA team, the Jet Propulsion Laboratory is involved in the Exploration Systems Mission Directorate (ESMD) work to implement the President's Vision for Space Exploration. This slide presentation reviews the roles assigned to the various NASA centers to implement the Vision. The plan for JPL is to use the Constellation program to advance the combination of science and Constellation program objectives. JPL's current participation contributes systems engineering support; Command, Control, Computing and Information (C3I) architecture; Crew Exploration Vehicle (CEV) Thermal Protection System (TPS) project support and CEV landing assist support; ground support systems support at JSC and KSC; the Exploration Communication and Navigation System (ECANS); and flight prototypes for cabin atmosphere instruments.

  11. Cortical visual dysfunction in children: a clinical study.

    PubMed

    Dutton, G; Ballantyne, J; Boyd, G; Bradnam, M; Day, R; McCulloch, D; Mackie, R; Phillips, S; Saunders, K

    1996-01-01

    Damage to the cerebral cortex was responsible for impairment in vision in 90 of 130 consecutive children referred to the Vision Assessment Clinic in Glasgow. Cortical blindness was seen in 16 children. Only 2 were mobile, but both showed evidence of navigational blind-sight. Cortical visual impairment, in which it was possible to estimate visual acuity but generalised severe brain damage precluded estimation of cognitive visual function, was observed in 9 children. Complex disorders of cognitive vision were seen in 20 children. These could be divided into five categories and involved impairment of: (1) recognition, (2) orientation, (3) depth perception, (4) perception of movement and (5) simultaneous perception. These disorders were observed in a variety of combinations. The remaining children showed evidence of reduced visual acuity and/or visual field loss, but without detectable disorders of cognitive visual function. Early recognition of disorders of cognitive vision is required if active training and remediation are to be implemented.

  12. Modeling of pilot's visual behavior for low-level flight

    NASA Astrophysics Data System (ADS)

    Schulte, Axel; Onken, Reiner

    1995-06-01

    Developers of synthetic vision systems for low-level flight simulators face the problem of deciding which features to incorporate in order to achieve the most realistic training conditions. This paper supports an approach to this problem based on modeling the pilot's visual behavior. The approach is founded upon the basic requirement that the pilot's mechanisms of visual perception should be identical in simulated and real low-level flight. Flight simulator experiments with pilots were conducted for knowledge acquisition. During the experiments, video material of a real low-level flight mission containing different situations was displayed to the pilot, who was acting under a realistic mission assignment in a laboratory environment; the pilot's eye movements were measured during the replay. The visual mechanisms were divided into rule-based strategies for visual navigation, grounded in the preflight planning process, as opposed to skill-based processes. The paper presents a model of the pilot's planning strategy for a visual fixing routine as part of the navigation task. The model is a knowledge-based system built upon fuzzy evaluation of terrain features in order to determine the landmarks used by pilots. A computer implementation of the model selects the same features that were preferred by trained pilots.

  13. Bio-inspired vision based robot control using featureless estimations of time-to-contact.

    PubMed

    Zhang, Haijie; Zhao, Jianguo

    2017-01-31

    Marvelous vision-based dynamic behaviors of insects and birds, such as perching, landing, and obstacle avoidance, have inspired scientists to propose the idea of time-to-contact, which is defined as the time for a moving observer to contact an object or surface if the current velocity is maintained. Since time-to-contact can be estimated directly from consecutive images using only a vision sensor, it is widely used by a variety of robots to fulfill tasks such as obstacle avoidance, docking, chasing, perching and landing. However, most existing methods for estimating time-to-contact need to extract and track features during the control process, which is time-consuming and cannot be applied to robots with limited computational power. In this paper, we adopt a featureless estimation method, extend it to more general settings with angular velocities, and improve the estimation results using Kalman filtering. Further, we design an error-based controller with a gain-scheduling strategy to control the motion of mobile robots. Experiments for both estimation and control are conducted using a customized mobile robot platform with low-cost embedded systems. Onboard experimental results demonstrate the effectiveness of the proposed approach, with the robot being controlled to successfully dock in front of a vertical wall. The estimation and control methods presented in this paper can be applied to computation-constrained miniature robots for agile locomotion such as landing, docking, or navigation.
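
    A minimal sketch of a basic direct (featureless) time-to-contact estimator for pure approach toward a roughly fronto-parallel surface: with radial flow u = x/T, v = y/T, brightness constancy gives (x*Ex + y*Ey)/T + Et = 0, which can be solved in a least-squares sense over the whole image. This shows the core idea only; the paper's angular-velocity extension, Kalman filtering and controller are not reproduced.

    ```python
    # Minimal sketch of a direct (featureless) time-to-contact estimate for pure
    # approach toward a roughly fronto-parallel surface, solved in least squares
    # over all pixels. Illustrative assumptions: principal point at the image
    # centre, negligible rotation, small inter-frame motion.
    import numpy as np

    def time_to_contact(prev_gray, curr_gray):
        f0 = prev_gray.astype(np.float32)
        f1 = curr_gray.astype(np.float32)
        Ey, Ex = np.gradient((f0 + f1) / 2.0)        # spatial derivatives
        Et = f1 - f0                                 # temporal derivative (per frame)
        h, w = f0.shape
        y, x = np.mgrid[0:h, 0:w]
        x = x - w / 2.0                              # coordinates about image centre
        y = y - h / 2.0
        G = x * Ex + y * Ey                          # radial gradient
        denom = np.sum(G * Et)
        if abs(denom) < 1e-9:
            return np.inf
        return -np.sum(G * G) / denom                # TTC in frame intervals

    # Usage: tau_frames = time_to_contact(prev, curr); tau_sec = tau_frames / fps
    ```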

  14. Natural Models for Autonomous Control of Spatial Navigation, Sensing, and Guidance

    DTIC Science & Technology

    2013-06-26

    mantis shrimp, inspired largely by our efforts, can be found at: The Oatmeal - http://theoatmeal.com/comics/mantis_shrimp. Courtesy of these guys...Ecology and Environmental Education (20-21 January, Tainan, Taiwan). 14 M How, NJ Marshall 2012 Polarisation vision, an unexplored channel for

  15. Navigating Public-Private Partnerships: Introducing the Continuum of Control

    ERIC Educational Resources Information Center

    DiMartino, Catherine

    2014-01-01

    In many urban districts, the public education landscape is being transformed as private-sector providers such as educational management organizations, charter management organizations, and partner support organizations partner with or run district schools. While some private-sector providers' visions for school reform have remained static…

  16. Cybertherapy 2005: A Decade of VR

    DTIC Science & Technology

    2005-07-01

    headphones, which delivered a soundscape updated in real time according to their movement in the virtual town. In the third condition, they were asked to...navigate in a soundscape in the absence of vision (A). The sounds were produced through tracked binaural rendering (HRTF) and were dependent upon the

  17. Visual tracking for multi-modality computer-assisted image guidance

    NASA Astrophysics Data System (ADS)

    Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp

    2017-03-01

    With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.

  18. Fully Self-Contained Vision-Aided Navigation and Landing of a Micro Air Vehicle Independent from External Sensor Inputs

    NASA Technical Reports Server (NTRS)

    Brockers, Roland; Susca, Sara; Zhu, David; Matthies, Larry

    2012-01-01

    Direct-lift micro air vehicles have important applications in reconnaissance. In order to conduct persistent surveillance in urban environments, it is essential that these systems can perform autonomous landing maneuvers on elevated surfaces that provide high vantage points, without the help of any external sensor and with a fully contained on-board software solution. In this paper, we present a micro air vehicle that uses vision feedback from a single downward-looking camera to navigate autonomously and detect an elevated landing platform as a surrogate for a rooftop. Our method requires no special preparation (labels or markers) of the landing location. Rather, leveraging the planar character of urban structure, the landing platform detection system uses a planar homography decomposition to detect landing targets and produce approach waypoints for autonomous landing. The vehicle control algorithm uses a Kalman-filter-based approach for pose estimation to fuse visual SLAM (PTAM) position estimates with IMU data, correcting for high-latency SLAM inputs and increasing the position estimate update rate in order to improve control stability. Scale recovery is achieved using inputs from a sonar altimeter. In experimental runs, we demonstrate a real-time implementation running on-board a micro aerial vehicle that is fully self-contained and independent from any external sensor information. With this method, the vehicle is able to search autonomously for a landing location and perform precision landing maneuvers on the detected targets.
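
    A minimal sketch of the planar-homography step described above: estimate the homography between matched image points lying on the (planar) landing surface in two views and decompose it with the camera intrinsics into candidate rotations, translations and plane normals. The point sets and intrinsics are placeholders, and selecting the physically valid solution is application logic not shown here.

    ```python
    # Minimal sketch of detecting a dominant plane from two views: estimate a
    # homography from matched points on the plane and decompose it into candidate
    # rotations, translations and plane normals. Inputs are placeholders;
    # choosing the physically valid solution (points in front of the camera,
    # near-vertical normal for a rooftop) is not shown.
    import cv2
    import numpy as np

    def plane_from_matches(pts_prev, pts_curr, K):
        H, inliers = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)
        if H is None:
            return None
        n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
        return [(rotations[i], translations[i], normals[i]) for i in range(n_solutions)]

    # Usage (illustrative):
    # K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)
    # candidates = plane_from_matches(p0, p1, K)
    ```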

  19. Autonomous Vision Navigation for Spacecraft in Lunar Orbit

    NASA Astrophysics Data System (ADS)

    Bader, Nolan A.

    NASA aims to achieve unprecedented navigational reliability for the first manned lunar mission of the Orion spacecraft in 2023. A technique for accomplishing this is to integrate autonomous feature tracking as an added means of improving position and velocity estimation. In this thesis, a template matching algorithm and optical sensor are tested onboard three simulated lunar trajectories using linear covariance techniques under various conditions. A preliminary characterization of the camera gives insight into its ability to determine azimuth and elevation angles to points on the surface of the Moon. A navigation performance analysis shows that an optical camera sensor can aid in decreasing position and velocity errors, particularly in a loss of communication scenario. Furthermore, it is found that camera quality and computational capability are driving factors affecting the performance of such a system.

  20. Enhanced Image-Aided Navigation Algorithm with Automatic Calibration and Affine Distortion Prediction

    DTIC Science & Technology

    2012-03-01

    Lowe, David G. “Distinctive Image Features from Scale-Invariant Keypoints”. International Journal of Computer Vision, 2004. 13. Maybeck, Peter S...

  1. Neuropsychological Components of Imagery Processing, Final Technical Report.

    ERIC Educational Resources Information Center

    Kosslyn, Stephen M.

    High-level visual processes make use of stored information, and are invoked during object identification, navigation, tracking, and visual mental imagery. The work presented in this document has resulted in a theory of the component "processing subsystems" used in high-level vision. This theory was developed by considering…

  2. Vision Aided Inertial Navigation System Augmented with a Coded Aperture

    DTIC Science & Technology

    2011-03-24

    as the change in blur at different distances from the pixel plane can be inferred. Cameras with a micro lens array (called plenoptic cameras...images from 8 slightly different perspectives [14,43]. Dappled photography is similar to the plenoptic camera approach except that a cosine mask

  3. Investigating Architectural Issues in Neuromorphic Computing

    DTIC Science & Technology

    2012-05-01

    term grasp. Some of these include learning, vision, audition and olfaction, the ability to navigate an environment, and goal seeking. These abilities have... [Figure 14: Word/sentence-level accuracy versus ambiguity: (a) word accuracy vs. letter ambiguity, (b) sentence accuracy vs. letter ambiguity, and (c) sentence accuracy vs. word ambiguity.]

  4. Sky light polarization detection with linear polarizer triplet in light field camera inspired by insect vision.

    PubMed

    Zhang, Wenjing; Cao, Yu; Zhang, Xuanzhe; Liu, Zejin

    2015-10-20

    Stable information from the skylight polarization pattern can be used for navigation, with advantages such as better anti-interference performance and no cumulative error effect. However, existing methods of skylight polarization measurement either have poor real-time performance or require a complex system. Inspired by the navigational capability of Cataglyphis with its compound eyes, we introduce a new approach to acquire the all-sky image under different polarization directions with one camera and without a rotating polarizer, so as to detect the polarization pattern across the full sky in a single snapshot. Our system is based on a handheld light field camera with a wide-angle lens and a triplet linear polarizer placed over its aperture stop. Experimental results agree with the theoretical predictions. Both the real-time detection and the simple, low-cost architecture demonstrate the superiority of the approach proposed in this paper.
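
    A minimal sketch of recovering the linear Stokes parameters, and from them the degree and angle of polarization, from three intensity images taken behind linear polarizers at 0, 60 and 120 degrees. The 0/60/120 arrangement is an assumption for illustration; the paper's exact polarizer orientations are not stated in the abstract.

    ```python
    # Minimal sketch of per-pixel linear Stokes parameters from intensities seen
    # through polarizers at 0, 60 and 120 degrees, using I(t) = (S0 + S1*cos(2t)
    # + S2*sin(2t)) / 2. The 0/60/120 triplet is an illustrative assumption.
    import numpy as np

    def polarization_pattern(i0, i60, i120):
        i0, i60, i120 = (np.asarray(a, dtype=np.float64) for a in (i0, i60, i120))
        s0 = 2.0 / 3.0 * (i0 + i60 + i120)
        s1 = 2.0 / 3.0 * (2.0 * i0 - i60 - i120)
        s2 = 2.0 / np.sqrt(3.0) * (i60 - i120)
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)   # degree of linear pol.
        aop = 0.5 * np.arctan2(s2, s1)                          # angle of pol. (rad)
        return dolp, aop

    # Sanity check: fully polarized light at 30 degrees gives DoLP ~1 and
    # AoP ~0.52 rad (30 degrees).
    t = np.radians(30.0)
    I = lambda p: 0.5 * (1 + np.cos(2 * (t - p)))   # Malus' law, unit intensity
    print(polarization_pattern(I(0.0), I(np.radians(60.0)), I(np.radians(120.0))))
    ```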

  5. [Georg Schlöndorff-the father of computer-assisted surgery].

    PubMed

    Mösges, R

    2016-09-01

    Georg Schlöndorff (1931-2011) developed the idea of computer-assisted surgery (CAS) during his time as professor and chairman of the Department of Otorhinolaryngology at the Medical Faculty of the University of Aachen, Germany. In close cooperation with engineers and physicists, he succeeded in translating this concept into a functional prototype that was applied in live surgery in the operating theatre. The first intervention performed with this image-guided navigation system was a skull base surgical procedure in 1987. During the following years, this concept was extended to orbital surgery, neurosurgery, mid-facial traumatology, and brachytherapy of solid tumors in the head and neck region. Further technical developments of this first prototype included touchless optical positioning and the computer vision concept with three orthogonal images, which is still common in contemporary navigation systems. During his time as emeritus professor from 1996, Georg Schlöndorff further pursued his concept of CAS by developing technical innovations such as computational fluid dynamics (CFD).

  6. Backtracking behaviour in lost ants: an additional strategy in their navigational toolkit

    PubMed Central

    Wystrach, Antoine; Schwarz, Sebastian; Baniel, Alice; Cheng, Ken

    2013-01-01

    Ants use multiple sources of information to navigate, but do not integrate all this information into a unified representation of the world. Rather, the available information appears to serve three distinct main navigational systems: path integration, systematic search and the use of learnt information—mainly via vision. Here, we report on an additional behaviour that suggests a supplemental system in the ant's navigational toolkit: ‘backtracking’. Homing ants, having almost reached their nest but, suddenly displaced to unfamiliar areas, did not show the characteristic undirected headings of systematic searches. Instead, these ants backtracked in the compass direction opposite to the path that they had just travelled. The ecological function of this behaviour is clear as we show it increases the chances of returning to familiar terrain. Importantly, the mechanistic implications of this behaviour stress an extra level of cognitive complexity in ant navigation. Our results imply: (i) the presence of a type of ‘memory of the current trip’ allowing lost ants to take into account the familiar view recently experienced, and (ii) direct sharing of information across different navigational systems. We propose a revised architecture of the ant's navigational toolkit illustrating how the different systems may interact to produce adaptive behaviours. PMID:23966644

  7. Robot soccer anywhere: achieving persistent autonomous navigation, mapping, and object vision tracking in dynamic environments

    NASA Astrophysics Data System (ADS)

    Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques

    2005-06-01

    The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.

  8. Toward Head-Up and Head-Worn Displays for Equivalent Visual Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Arthur, Jarvis J.; Bailey, Randall E.; Shelton, Kevin J.; Kramer, Lynda J.; Jones, Denise R.; Williams, Steven P.; Harrison, Stephanie J.; Ellis, Kyle K.

    2015-01-01

    A key capability envisioned for the future air transportation system is the concept of equivalent visual operations (EVO). EVO is the capability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. Enhanced Flight Vision Systems (EFVS) offer a path to achieve EVO. NASA has successfully tested EFVS for commercial flight operations that has helped establish the technical merits of EFVS, without reliance on natural vision, to runways without category II/III ground-based navigation and lighting requirements. The research has tested EFVS for operations with both Head-Up Displays (HUDs) and "HUD equivalent" Head-Worn Displays (HWDs). The paper describes the EVO concept and representative NASA EFVS research that demonstrate the potential of these technologies to safely conduct operations in visibilities as low as 1000 feet Runway Visual Range (RVR). Future directions are described including efforts to enable low-visibility approach, landing, and roll-outs using EFVS under conditions as low as 300 feet RVR.

  9. An assessment of auditory-guided locomotion in an obstacle circumvention task.

    PubMed

    Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina

    2016-06-01

    This study investigated how effectively audition can be used to guide navigation around an obstacle. Ten blindfolded normally sighted participants navigated around a 0.6 × 2 m obstacle while producing self-generated mouth click sounds. Objective movement performance was measured using a Vicon motion capture system. Performance with full vision without generating sound was used as a baseline for comparison. The obstacle's location was varied randomly from trial to trial: it was either straight ahead or 25 cm to the left or right relative to the participant. Although audition provided sufficient information to detect the obstacle and guide participants around it without collision in the majority of trials, buffer space (clearance between the shoulder and obstacle), overall movement times, and number of velocity corrections were significantly (p < 0.05) greater with auditory guidance than visual guidance. Collisions sometimes occurred under auditory guidance, suggesting that audition did not always provide an accurate estimate of the space between the participant and obstacle. Unlike visual guidance, participants did not always walk around the side that afforded the most space during auditory guidance. Mean buffer space was 1.8 times higher under auditory than under visual guidance. Results suggest that sound can be used to generate buffer space when vision is unavailable, allowing navigation around an obstacle without collision in the majority of trials.

  10. Evaluation of Fused Synthetic and Enhanced Vision Display Concepts for Low-Visibility Approach and Landing

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III; Wilz, Susan J.

    2009-01-01

    NASA is developing revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next generation air transportation system. A piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. Improvements in lateral path control performance were realized when the Head-Up Display concepts included a tunnel, independent of the imagery (enhanced vision or fusion of enhanced and synthetic vision) presented with it. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, of itself, provide an improvement in runway incursion detection without being specifically tailored for this application.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, N.S.V.; Kareti, S.; Shi, Weimin

    A formal framework for navigating a robot in a geometric terrain populated by an unknown set of obstacles is considered. Here the terrain model is not known a priori, but the robot is equipped with a sensor system (vision or touch) employed for the purpose of navigation. The focus is restricted to non-heuristic algorithms that can be theoretically shown to be correct within a given framework of models for the robot, terrain, and sensor system. These formulations, although abstract and simplified compared to real-life scenarios, provide foundations for practical systems by highlighting the underlying critical issues. First, the authors consider the algorithms that are shown to navigate correctly, without much consideration given to performance parameters such as distance traversed. Second, they consider non-heuristic algorithms that guarantee bounds on the distance traversed or on the ratio of the distance traversed to the shortest path length (computed as if the terrain model were known). Then they consider the navigation of robots with very limited computational capabilities, such as finite automata.
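
    As a concrete illustration of the second class of algorithms mentioned above, here is a minimal sketch of a Bug2-style planner on a 2-D occupancy grid, a classic non-heuristic scheme whose traversed distance can be bounded in terms of the obstacle perimeters. The grid world, the 4-connected motion model, and all function names below are illustrative assumptions, not the formulation surveyed in the report.

```python
import math

DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]            # up, right, down, left (clockwise)

def rot(h, k):
    """Rotate heading h by k quarter-turns (+1 = clockwise / turn right)."""
    return DIRS[(DIRS.index(h) + k) % 4]

def line4(p0, p1):
    """4-connected Bresenham 'm-line' from p0 to p1, one axis step at a time."""
    (x, y), (x1, y1) = p0, p1
    dx, dy = abs(x1 - x), abs(y1 - y)
    sx, sy = (1 if x1 >= x else -1), (1 if y1 >= y else -1)
    err, cells = dx - dy, [(x, y)]
    while (x, y) != (x1, y1):
        if x != x1 and (2 * err > -dy or y == y1):
            err -= dy
            x += sx
        else:
            err += dx
            y += sy
        cells.append((x, y))
    return cells

def bug2(grid, start, goal, max_steps=100000):
    """Bug2-style planning on an occupancy grid (grid[r][c] == 1 is an obstacle):
    walk the m-line toward the goal; on contact, wall-follow with the obstacle
    kept on the right until the m-line is re-crossed closer to the goal."""
    rows, cols = len(grid), len(grid[0])
    free = lambda c: 0 <= c[0] < rows and 0 <= c[1] < cols and grid[c[0]][c[1]] == 0
    dist = lambda c: math.hypot(c[0] - goal[0], c[1] - goal[1])
    mline = line4(start, goal)
    index = {c: i for i, c in enumerate(mline)}
    pos, path, steps = start, [start], 0

    while pos != goal and steps < max_steps:
        nxt = mline[index[pos] + 1]                     # next m-line cell toward the goal
        if free(nxt):                                   # motion-to-goal along the m-line
            pos = nxt
            path.append(pos)
            steps += 1
            continue
        hit_d = dist(pos)                               # hit point: start boundary following
        h = rot((nxt[0] - pos[0], nxt[1] - pos[1]), -1) # turn left so the wall sits on the right
        while steps < max_steps:
            for k in (1, 0, -1, 2):                     # try right, straight, left, back
                d = rot(h, k)
                c = (pos[0] + d[0], pos[1] + d[1])
                if free(c):
                    h, pos = d, c
                    break
            else:
                return None                             # robot is completely enclosed
            path.append(pos)
            steps += 1
            if pos in index and dist(pos) < hit_d:      # leave point: closer m-line crossing
                break
    return path if pos == goal else None
```

    The `max_steps` cap is only a safeguard for this sketch; the formal treatments discussed in the record establish termination and distance bounds analytically.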

  12. Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation

    PubMed Central

    Yang, Kailun; Wang, Kaiwei; Romera, Eduardo; Hu, Weijian; Sun, Dongming; Sun, Junwei; Cheng, Ruiqi; Chen, Tianxue; López, Elena

    2018-01-01

    Navigational assistance aims to help visually-impaired people move through the environment safely and independently. This topic is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we put forward seizing pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture, aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments proves the qualified accuracy over state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework. PMID:29748508
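
    To make the unified pixel-wise idea concrete, the sketch below runs a single semantic-segmentation forward pass per frame and summarizes the share of navigation-relevant classes. The class list, the use of a generic torchvision DeepLabV3 model as a stand-in for the paper's dedicated efficient network, and the file name are illustrative assumptions.

```python
import torch
import torchvision.transforms.functional as TF
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

# Illustrative label set: indices and names are assumptions, not the paper's taxonomy.
ASSISTIVE_CLASSES = ["background", "traversable", "sidewalk", "stairs",
                     "water_hazard", "obstacle", "pedestrian", "vehicle"]

def segment_frame(model, image_path):
    """Run one pixel-wise segmentation pass on an RGB frame; return a label map."""
    img = Image.open(image_path).convert("RGB")
    x = TF.to_tensor(img).unsqueeze(0)                  # [1, 3, H, W], values in [0, 1]
    model.eval()
    with torch.no_grad():
        logits = model(x)["out"]                        # [1, C, H, W] class scores
    return logits.argmax(dim=1).squeeze(0)              # [H, W] class indices

def terrain_summary(label_map):
    """Fraction of pixels per assistive category -- a crude awareness cue."""
    total = label_map.numel()
    return {name: float((label_map == i).sum()) / total
            for i, name in enumerate(ASSISTIVE_CLASSES)}

if __name__ == "__main__":
    # A generic encoder-decoder stands in for the paper's efficient network;
    # a real deployment would load weights trained on navigation-related classes.
    net = deeplabv3_resnet50(num_classes=len(ASSISTIVE_CLASSES))
    labels = segment_frame(net, "frame.jpg")
    print(terrain_summary(labels))
```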

  13. Improving the Audio Game-Playing Performances of People with Visual Impairments through Multimodal Training

    ERIC Educational Resources Information Center

    Balan, Oana; Moldoveanu, Alin; Moldoveanu, Florica; Nagy, Hunor; Wersenyi, Gyorgy; Unnporsson, Runar

    2017-01-01

    Introduction: As the number of people with visual impairments (that is, those who are blind or have low vision) is continuously increasing, rehabilitation and engineering researchers have identified the need to design sensory-substitution devices that would offer assistance and guidance to these people for performing navigational tasks. Auditory…

  14. Charter Starters Leadership Training Workbook 1: Start-Up Logistics.

    ERIC Educational Resources Information Center

    Ley, Joyce

    This workbook is the first in a series devoted to all areas of charter-school development. It addresses the logistics of starting a school, such as drafting a charter, creating a vision and mission, accessing expert information, navigating the application process, acquiring a facility, establishing a legal entity, and contracting for services. The…

  15. A Haptic Glove as a Tactile-Vision Sensory Substitution for Wayfinding.

    ERIC Educational Resources Information Center

    Zelek, John S.; Bromley, Sam; Asmar, Daniel; Thompson, David

    2003-01-01

    A device that relays navigational information using a portable tactile glove and a wearable computer and camera system was tested with nine adults with visual impairments. Paths traversed by subjects negotiating an obstacle course were not qualitatively different from paths produced with existing wayfinding devices and hitting probabilities were…

  16. Acceptance of the Long Cane by Persons Who Are Blind in South India

    ERIC Educational Resources Information Center

    Christy, Beula; Nirmalan, Praveen K.

    2006-01-01

    Human beings both sense the immediate environment and navigate beyond the immediately perceptible environment to find their way (Golledge, Loomis, Klatzky, Flury, & Yang, 1991; Golledge, Klatzky, & Loomis, 1996; Blasch, Wiener, & Welsh, 1997). People who are visually impaired (that is, are blind or have low vision) often lack the…

  17. A Sustained Proximity Network for Multi-Mission Lunar Exploration

    NASA Technical Reports Server (NTRS)

    Soloff, Jason A.; Noreen, Gary; Deutsch, Leslie; Israel, David

    2005-01-01

    The Vision for Space Exploration calls for an aggressive sequence of robotic missions beginning in 2008 to prepare for a human return to the Moon by 2020, with the goal of establishing a sustained human presence beyond low Earth orbit. A key enabler of exploration is reliable, available communication and navigation capabilities to support both human and robotic missions. An adaptable, sustainable communication and navigation architecture has been developed by Goddard Space Flight Center and the Jet Propulsion Laboratory to support human and robotic lunar exploration through the next two decades. A key component of the architecture is scalable deployment, with the infrastructure evolving as needs emerge, allowing NASA and its partner agencies to deploy an interoperable communication and navigation system in an evolutionary way, enabling cost effective, highly adaptable systems throughout the lunar exploration program.

  18. Towards photorealistic and immersive virtual-reality environments for simulated prosthetic vision: integrating recent breakthroughs in consumer hardware and software.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Zheng, Steven; Suaning, Gregg J

    2014-01-01

    Simulated prosthetic vision (SPV) in normally sighted subjects is an established way of investigating the prospective efficacy of visual prosthesis designs in visually guided tasks such as mobility. To perform meaningful SPV mobility studies in computer-based environments, a credible representation of both the virtual scene to navigate and the experienced artificial vision has to be established. It is therefore prudent to make optimal use of existing hardware and software solutions when establishing a testing framework. The authors aimed to improve the realism and immersion of SPV by integrating state-of-the-art yet low-cost consumer technology. The feasibility of body motion tracking to control movement in photo-realistic virtual environments was evaluated in a pilot study. Five subjects were recruited and performed an obstacle avoidance and wayfinding task using either keyboard and mouse, a gamepad, or Kinect motion tracking. Walking speed and collisions were analyzed as basic measures of task performance. Kinect motion tracking resulted in lower performance compared to the classical input methods, yet results were more uniform across vision conditions. The chosen framework was successfully applied in a basic virtual task and is suited to realistically simulating real-world scenes under SPV in mobility research. Classical input peripherals remain a feasible and effective way of controlling virtual movement. Motion tracking, despite its limitations and early state of implementation, is intuitive and can eliminate between-subject differences due to familiarity with established input methods.

  19. Vision and Control for UAVs: A Survey of General Methods and of Inexpensive Platforms for Infrastructure Inspection

    PubMed Central

    Máthé, Koppány; Buşoniu, Lucian

    2015-01-01

    Unmanned aerial vehicles (UAVs) have gained significant attention in recent years. Low-cost platforms using inexpensive sensor payloads have been shown to provide satisfactory flight and navigation capabilities. In this report, we survey vision and control methods that can be applied to low-cost UAVs, and we list some popular inexpensive platforms and application fields where they are useful. We also highlight the sensor suites used where this information is available. We overview, among others, feature detection and tracking, optical flow and visual servoing, low-level stabilization and high-level planning methods. We then list popular low-cost UAVs, selecting mainly quadrotors. We discuss applications, restricting our focus to the field of infrastructure inspection. Finally, as an example, we formulate two use-cases for railway inspection, a less explored application field, and illustrate the usage of the vision and control techniques reviewed by selecting appropriate ones to tackle these use-cases. To select vision methods, we run a thorough set of experimental evaluations. PMID:26121608

  20. Machine Vision Applied to Navigation of Confined Spaces

    NASA Technical Reports Server (NTRS)

    Briscoe, Jeri M.; Broderick, David J.; Howard, Ricky; Corder, Eric L.

    2004-01-01

    The reliability of space related assets has been emphasized after the second loss of a Space Shuttle. The intricate nature of the hardware being inspected often requires a complete disassembly to perform a thorough inspection which can be difficult as well as costly. Furthermore, it is imperative that the hardware under inspection not be altered in any other manner than that which is intended. In these cases the use of machine vision can allow for inspection with greater frequency using less intrusive methods. Such systems can provide feedback to guide, not only manually controlled instrumentation, but autonomous robotic platforms as well. This paper serves to detail a method using machine vision to provide such sensing capabilities in a compact package. A single camera is used in conjunction with a projected reference grid to ascertain precise distance measurements. The design of the sensor focuses on the use of conventional components in an unconventional manner with the goal of providing a solution for systems that do not require or cannot accommodate more complex vision systems.
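
    A minimal sketch of the range computation such a projected-grid sensor could use is given below: the lateral image shift of a grid node, relative to its position when the surface lies on a known reference plane, is converted to range by projector-camera triangulation. The formula, parameter names, and sign convention are assumptions for illustration; the paper's actual calibration procedure is not reproduced.

```python
import numpy as np

def range_from_grid_shift(pixel_shift, focal_px, baseline_m, ref_range_m):
    """Triangulated range from the lateral shift of a projected grid point.

    Assumes the projector and camera are separated by `baseline_m` and that
    the grid point appears at its reference pixel when the surface sits at
    `ref_range_m`.  Stereo-like relation: shift = f * B * (1/Z - 1/Z_ref).
    """
    inv_z = pixel_shift / (focal_px * baseline_m) + 1.0 / ref_range_m
    return 1.0 / inv_z

# Hypothetical measured grid-node displacements (pixels) and calibration values.
shifts = np.array([0.0, 2.0, 5.5, 9.0])
print(range_from_grid_shift(shifts, focal_px=800.0, baseline_m=0.05, ref_range_m=1.0))
```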

  1. National Positioning, Navigation, and Timing Architecture Study

    NASA Astrophysics Data System (ADS)

    van Dyke, K.; Vicario, J.; Hothem, L.

    2007-12-01

    The purpose of the National Positioning, Navigation and Timing (PNT) Architecture effort is to help guide future PNT system-of-systems investment and implementation decisions. The Assistant Secretary of Defense for Networks and Information Integration and the Under Secretary of Transportation for Policy sponsored a National PNT Architecture study to provide more effective and efficient PNT capabilities focused on the 2025 timeframe and an evolutionary path for government provided systems and services. U.S. Space-Based PNT Policy states that the U.S. must continue to improve and maintain GPS, augmentations to GPS, and back-up capabilities to meet growing national, homeland, and economic security needs. PNT touches almost every aspect of people's lives today. PNT is essential for Defense and Civilian applications ranging from the Department of Defense's Joint network centric and precision operations to the transportation and telecommunications sectors, improving efficiency, increasing safety, and being more productive. Absence of an approved PNT architecture results in uncoordinated research efforts, lack of clear developmental paths, potentially wasteful procurements and inefficient deployment of PNT resources. The national PNT architecture effort evaluated alternative future mixes of global (space and non space-based) and regional PNT solutions, PNT augmentations, and autonomous PNT capabilities to address priorities identified in the DoD PNT Joint Capabilities Document (JCD) and civil equivalents. The path to achieving the Should-Be architecture is described by the National PNT Architecture's Guiding Principles, representing an overarching Vision of the US' role in PNT, an architectural Strategy to fulfill that Vision, and four Vectors which support the Strategy. The National PNT Architecture effort has developed nineteen recommendations. Five foundational recommendations are tied directly to the Strategy while the remaining fourteen individually support one of the Vectors, as will be described in this presentation. The results of this effort will support future decisions of bodies such as the DoD PNT and Civil Pos/Nav Executive Committees, as well as the National Space-Based PNT Executive Committee (EXCOM).

  2. A simple approach to a vision-guided unmanned vehicle

    NASA Astrophysics Data System (ADS)

    Archibald, Christopher; Millar, Evan; Anderson, Jon D.; Archibald, James K.; Lee, Dah-Jye

    2005-10-01

    This paper describes the design and implementation of a vision-guided autonomous vehicle that represented BYU in the 2005 Intelligent Ground Vehicle Competition (IGVC), in which autonomous vehicles navigate a course marked with white lines while avoiding obstacles consisting of orange construction barrels, white buckets and potholes. Our project began in the context of a senior capstone course in which multi-disciplinary teams of five students were responsible for the design, construction, and programming of their own robots. Each team received a computer motherboard, a camera, and a small budget for the purchase of additional hardware, including a chassis and motors. The resource constraints resulted in a simple vision-based design that processes the sequence of images from the single camera to determine motor controls. Color segmentation separates white and orange from each image, and then the segmented image is examined using a 10x10 grid system, effectively creating a low resolution picture for each of the two colors. Depending on its position, each filled grid square influences the selection of an appropriate turn magnitude. Motor commands determined from the white and orange images are then combined to yield the final motion command for each video frame. We describe the complete algorithm and the robot hardware and we present results that show the overall effectiveness of our control approach.
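
    A toy version of the pipeline described above (color segmentation, 10x10 grid, per-cell turn votes) might look like the sketch below; the HSV thresholds, cell weighting, and sign convention are assumptions, not the team's tuned values.

```python
import cv2
import numpy as np

def steering_from_frame(frame_bgr):
    """Grid-based steering sketch: segment white (lines) and orange (barrels),
    downsample each mask to a 10x10 occupancy grid, and let every filled cell
    vote on a turn command weighted by how close and how far off-centre it is."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, (0, 0, 200), (180, 40, 255))      # illustrative thresholds
    orange = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))

    turn = 0.0
    for mask in (white, orange):
        grid = cv2.resize(mask, (10, 10), interpolation=cv2.INTER_AREA) > 32
        for row in range(10):              # row 9 = bottom of image = nearest terrain
            for col in range(10):
                if grid[row, col]:
                    proximity = (row + 1) / 10.0        # nearer cells matter more
                    offset = (col - 4.5) / 4.5          # -1 (far left) .. +1 (far right)
                    turn -= proximity * offset          # steer away from the filled cell
    return float(np.clip(turn, -1.0, 1.0))              # -1 = hard left, +1 = hard right
```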

  3. Recent CESAR (Center for Engineering Systems Advanced Research) research activities in sensor based reasoning for autonomous machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.; de Saussure, G.; Spelt, P.F.

    1988-01-01

    This paper describes recent research activities at the Center for Engineering Systems Advanced Research (CESAR) in the area of sensor based reasoning, with emphasis being given to their application and implementation on our HERMIES-IIB autonomous mobile vehicle. These activities, including navigation and exploration in a priori unknown and dynamic environments, goal recognition, vision-guided manipulation and sensor-driven machine learning, are discussed within the framework of a scenario in which an autonomous robot is asked to navigate through an unknown dynamic environment, explore, find and dock at the panel, read and understand the status of the panel's meters and dials, learn the functioning of a process control panel, and successfully manipulate the control devices of the panel to solve a maintenance emergency problem. A demonstration of the successful implementation of the algorithms on our HERMIES-IIB autonomous robot for resolution of this scenario is presented. Conclusions are drawn concerning the applicability of the methodologies to more general classes of problems and implications for future work on sensor-driven reasoning for autonomous robots are discussed. 8 refs., 3 figs.

  4. Designation and verification of road markings detection and guidance method

    NASA Astrophysics Data System (ADS)

    Wang, Runze; Jian, Yabin; Li, Xiyuan; Shang, Yonghong; Wang, Jing; Zhang, JingChuan

    2018-01-01

    With the rapid development of China's space industry, digitization and intelligence are the trend of the future. This report presents foundational research on a guidance system based on the HSV color space. The research will support the design of an automatic navigation and parking system for the frock transport car and the infrared lamp homogeneity intelligent test equipment. The drive mode, steering mode, and navigation method were selected. For practicality, a front-wheel-steering chassis was chosen. The steering mechanism is controlled by stepping motors and guided by machine vision. The steering mechanism was optimized and calibrated: a mathematical model was built and objective functions were constructed for it. The extraction method for the steering line was studied, and the motion controller was designed and optimized. The theory of the HSV and RGB color spaces and an analysis of the test results are discussed. Camera calibration was performed using the OpenCV library on a Linux system, and the guidance algorithm was designed on the basis of the HSV color space.
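
    As a rough sketch of what an HSV-based guidance step could look like with OpenCV, the snippet below thresholds a marking color in HSV space, fits a line to the surviving pixels, and reports the lateral offset and heading a motion controller could act on. The threshold values and function names are assumptions, not the report's calibrated parameters.

```python
import cv2
import numpy as np

def guide_line_offset(frame_bgr, hsv_low=(20, 80, 80), hsv_high=(35, 255, 255)):
    """Extract a painted guide line by HSV thresholding and return its lateral
    offset (pixels from image centre) and its heading angle (degrees).

    The HSV bounds are placeholder values for a yellow marking; a real system
    would calibrate them for the actual paint and lighting.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_low, hsv_high)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    pts = cv2.findNonZero(mask)
    if pts is None:
        return None                                    # no marking visible
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    offset = float(x0) - frame_bgr.shape[1] / 2.0      # + means line right of centre
    heading = float(np.degrees(np.arctan2(vx, vy)))    # 0 deg = line running straight ahead
    return offset, heading
```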

  5. A subsumptive, hierarchical, and distributed vision-based architecture for smart robotics.

    PubMed

    DeSouza, Guilherme N; Kak, Avinash C

    2004-10-01

    We present a distributed vision-based architecture for smart robotics that is composed of multiple control loops, each with a specialized level of competence. Our architecture is subsumptive and hierarchical, in the sense that each control loop can add to the competence level of the loops below, and in the sense that the loops can present a coarse-to-fine gradation with respect to vision sensing. At the coarsest level, the processing of sensory information enables a robot to become aware of the approximate location of an object in its field of view. On the other hand, at the finest end, the processing of stereo information enables a robot to determine more precisely the position and orientation of an object in the coordinate frame of the robot. The processing in each module of the control loops is completely independent and it can be performed at its own rate. A control Arbitrator ranks the results of each loop according to certain confidence indices, which are derived solely from the sensory information. This architecture has clear advantages regarding overall performance of the system, which is not affected by the "slowest link," and regarding fault tolerance, since faults in one module do not affect the other modules. At this time we are able to demonstrate the utility of the architecture for stereoscopic visual servoing. The architecture has also been applied to mobile robot navigation and can easily be extended to tasks such as "assembly-on-the-fly."
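
    A highly simplified sketch of the arbitration idea is shown below: each loop posts its latest command with a self-derived confidence index, and the arbitrator forwards the finest competent result without waiting on slower loops. The class and field names are illustrative assumptions; the actual architecture is richer than this.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LoopResult:
    """Output of one vision/control loop: a motion command plus a confidence
    index that the loop derives from its own sensory data."""
    command: tuple          # e.g. (linear velocity, angular velocity)
    confidence: float       # 0..1, computed by the loop itself
    level: int              # coarse (0) .. fine (N) competence level

class Arbitrator:
    """Ranks asynchronous loop results and forwards the best one, so slow
    fine-grained loops refine, but never block, the coarse ones."""
    def __init__(self):
        self.latest = {}                        # level -> most recent LoopResult

    def report(self, result: LoopResult) -> None:
        self.latest[result.level] = result      # each loop runs at its own rate

    def select(self, min_confidence: float = 0.2) -> Optional[tuple]:
        usable = [r for r in self.latest.values() if r.confidence >= min_confidence]
        if not usable:
            return None
        # Prefer the finest competent loop; break ties by confidence.
        best = max(usable, key=lambda r: (r.level, r.confidence))
        return best.command
```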

  6. 3-D Imaging Systems for Agricultural Applications—A Review

    PubMed Central

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  7. Definition of display/control requirements for assault transport night/adverse weather capability

    NASA Technical Reports Server (NTRS)

    Milelli, R. J.; Mowery, G. W.; Pontelandolfo, C.

    1982-01-01

    A Helicopter Night Vision System was developed to improve low-altitude night and/or adverse weather assault transport capabilities. Man-in-the-loop simulation experiments were performed to define the minimum display and control requirements for the assault transport mission and investigate forward looking infrared sensor requirements, along with alternative displays such as panel mounted displays (PMD), helmet mounted displays (HMD), and integrated control display units. Also explored were navigation requirements, pilot/copilot interaction, and overall cockpit arrangement. Pilot use of an HMD and copilot use of a PMD appear as both the preferred and most effective night navigation combination.

  8. High resolution hybrid optical and acoustic sea floor maps (Invited)

    NASA Astrophysics Data System (ADS)

    Roman, C.; Inglis, G.

    2013-12-01

    This abstract presents a method for creating hybrid optical and acoustic sea floor reconstructions at centimeter scale grid resolutions with robotic vehicles. Multibeam sonar and stereo vision are two common sensing modalities with complementary strengths that are well suited for data fusion. We have recently developed an automated two stage pipeline to create such maps. The steps can be broken down as navigation refinement and map construction. During navigation refinement a graph-based optimization algorithm is used to align 3D point clouds created with both the multibeam sonar and stereo cameras. The process combats the typical growth in navigation error that has a detrimental effect on map fidelity and typically introduces artifacts at small grid sizes. During this process we are able to automatically register local point clouds created by each sensor to themselves and to each other where they overlap in a survey pattern. The process also estimates the sensor offsets, such as heading, pitch and roll, that describe how each sensor is mounted to the vehicle. The end results of the navigation step are a refined vehicle trajectory that ensures the point clouds from each sensor are consistently aligned, and the individual sensor offsets. In the mapping step, grid cells in the map are selectively populated by choosing data points from each sensor in an automated manner. The selection process is designed to pick points that preserve the best characteristics of each sensor and honor some specific map quality criteria to reduce outliers and ghosting. In general, the algorithm selects dense 3D stereo points in areas of high texture and point density. In areas where the stereo vision is poor, such as in a scene with low contrast or texture, multibeam sonar points are inserted in the map. This process is automated and results in a hybrid map populated with data from both sensors. Additional cross modality checks are made to reject outliers in a robust manner. The final hybrid map retains the strengths of both sensors and shows improvement over the single modality maps and a naively assembled multi-modal map where all the data points are included and averaged. Results will be presented from marine geological and archaeological applications using a 1350 kHz BlueView multibeam sonar and 1.3 megapixel digital still cameras.
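
    The per-cell selection rule can be sketched roughly as follows: where the stereo return in a cell is dense and self-consistent it is used, otherwise the multibeam points fill in. The point-count and spread thresholds and the use of a median are illustrative assumptions rather than the authors' actual criteria.

```python
import numpy as np

def fuse_cell(stereo_pts, sonar_pts, min_stereo_pts=20, max_spread=0.05):
    """Pick the depth estimate for one grid cell from whichever sensor looks
    more trustworthy there, in the spirit of the selection rule above.

    stereo_pts / sonar_pts are 1-D arrays of z-values (metres) falling in the
    cell; the thresholds are placeholder values.
    """
    stereo_ok = (stereo_pts.size >= min_stereo_pts and
                 np.std(stereo_pts) <= max_spread)
    if stereo_ok:
        return float(np.median(stereo_pts)), "stereo"      # dense, textured area
    if sonar_pts.size:
        return float(np.median(sonar_pts)), "sonar"         # low-texture fallback
    return None, "empty"
```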

  9. Amblypygids: Model Organisms for the Study of Arthropod Navigation Mechanisms in Complex Environments?

    PubMed Central

    Wiegmann, Daniel D.; Hebets, Eileen A.; Gronenberg, Wulfila; Graving, Jacob M.; Bingman, Verner P.

    2016-01-01

    Navigation is an ideal behavioral model for the study of sensory system integration and the neural substrates associated with complex behavior. For this broader purpose, however, it may be profitable to develop new model systems that are both tractable and sufficiently complex to ensure that information derived from a single sensory modality and path integration are inadequate to locate a goal. Here, we discuss some recent discoveries related to navigation by amblypygids, nocturnal arachnids that inhabit the tropics and sub-tropics. Nocturnal displacement experiments under the cover of a tropical rainforest reveal that these animals possess navigational abilities that are reminiscent, albeit on a smaller spatial scale, of true-navigating vertebrates. Specialized legs, called antenniform legs, which possess hundreds of olfactory and tactile sensory hairs, and vision appear to be involved. These animals also have enormous mushroom bodies, higher-order brain regions that, in insects, integrate contextual cues and may be involved in spatial memory. In amblypygids, the complexity of a nocturnal rainforest may impose navigational challenges that favor the integration of information derived from multimodal cues. Moreover, the movement of these animals is easily studied in the laboratory and putative neural integration sites of sensory information can be manipulated. Thus, amblypygids could serve as model organisms for the discovery of neural substrates associated with a unique and potentially sophisticated navigational capability. The diversity of habitats in which amblypygids are found also offers an opportunity for comparative studies of sensory integration and ecological selection pressures on navigation mechanisms. PMID:27014008

  10. Stereo-vision-based terrain mapping for off-road autonomous navigation

    NASA Astrophysics Data System (ADS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-05-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
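
    A bare-bones sketch of one map cell and its temporal update is given below; the field names mirror the quantities listed in the abstract, but the confidence-weighted blending and the latching of no-go labels are illustrative choices, not JPL's published fusion rule.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    """One terrain-map cell holding the quantities named in the abstract."""
    elevation: float = 0.0
    terrain_class: str = "unknown"
    roughness: float = 0.0
    cost: float = 0.0
    confidence: float = 0.0
    no_go: bool = False

def merge_cell(world: Cell, frame: Cell, decay: float = 0.95) -> Cell:
    """Temporal filtering of one cell: blend the new single-frame estimate into
    the world map, weighted by confidence, and latch no-go labels."""
    world.confidence *= decay                              # old evidence slowly fades
    w_old, w_new = world.confidence, frame.confidence
    total = w_old + w_new
    if total > 0:
        world.elevation = (w_old * world.elevation + w_new * frame.elevation) / total
        world.roughness = (w_old * world.roughness + w_new * frame.roughness) / total
        world.cost = max(world.cost * decay, frame.cost)   # pessimistic cost merge
    if w_new >= w_old:
        world.terrain_class = frame.terrain_class
    world.no_go = world.no_go or frame.no_go               # obstacles are never forgotten
    world.confidence = min(1.0, total)
    return world
```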

  11. Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-01-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  12. Marine and Hydrokinetic Renewable Energy Technologies: Potential Navigational Impacts and Mitigation Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cool, Richard, M.; Hudon, Thomas, J.; Basco, David, R.

    2009-12-10

    On April 15, 2008, the Department of Energy (DOE) issued a Funding Opportunity Announcement for Advanced Water Power Projects which included a Topic Area for Marine and Hydrokinetic Renewable Energy Market Acceleration Projects. Within this Topic Area, DOE identified potential navigational impacts of marine and hydrokinetic renewable energy technologies and measures to prevent adverse impacts on navigation as a sub-topic area. DOE defines marine and hydrokinetic technologies as those capable of utilizing one or more of the following resource categories for energy generation: ocean waves; tides or ocean currents; free flowing water in rivers or streams; and energy generation from the differentials in ocean temperature. PCCI was awarded Cooperative Agreement DE-FC36-08GO18177 from the DOE to identify the potential navigational impacts and mitigation measures for marine hydrokinetic technologies, as summarized herein. The contract also required cooperation with the U.S. Coast Guard (USCG) and two recipients of awards (Pacific Energy Ventures and reVision) in a sub-topic area to develop a protocol to identify streamlined, best-siting practices. Over the period of this contract, PCCI and our sub-consultants, David Basco, Ph.D., and Neil Rondorf of Science Applications International Corporation, met with USCG headquarters personnel, with U.S. Army Corps of Engineers headquarters and regional personnel, with U.S. Navy regional personnel and other ocean users in order to develop an understanding of existing practices for the identification of navigational impacts that might occur during construction, operation, maintenance, and decommissioning. At these same meetings, "standard" and potential mitigation measures were discussed so that guidance could be prepared for project developers. Concurrently, PCCI reviewed navigation guidance published by the USCG and international community. This report summarizes the results of this effort, provides guidance in the form of a checklist for assessing the navigational impacts of potential marine and hydrokinetic projects, and provides guidance for improving the existing navigational guidance promulgated by the USCG in Navigation Vessel Inspection Circular 02-07. At the request of the USCG, our checklist and mitigation guidance was written in a generic nature so that it could be equally applied to offshore wind projects. PCCI teleconferenced on a monthly basis with DOE, Pacific Energy Ventures and reVision in order to share information and review work products. Although the focus of our effort was on marine and hydrokinetic technologies, as defined above, this effort drew upon earlier work by the USCG on offshore wind renewable energy installations. The guidance provided herein can be applied equally to marine and hydrokinetic technologies and to offshore wind, which are collectively referred to by the USCG as Renewable Energy Installations.

  13. Radiological outcomes of pinless navigation in total knee arthroplasty: a randomized controlled trial.

    PubMed

    Chen, Jerry Yongqiang; Chin, Pak Lin; Li, Zongxian; Yew, Andy Khye Soon; Tay, Darren Keng Jin; Chia, Shi-Lu; Lo, Ngai Nung; Yeo, Seng Jin

    2015-12-01

    This study aimed to investigate the accuracy of pinless navigation (BrainLAB(®) VectorVision(®) Knee 2.5 Navigation System) as an intra-operative alignment guide in total knee arthroplasty (TKA). The authors hypothesized that pinless navigation would reduce the proportion of outliers in conventional TKA, without a significant increase in the duration of surgery. Between 2011 and 2012, 100 patients scheduled for a unilateral primary TKA were randomized into two groups: pinless navigation and conventional surgery. All TKAs were performed with the surgical aim of achieving neutral coronal alignment with a 180° mechanical axis. The primary outcomes of this study were post-operative radiographic assessment of lower limb alignment using hip-knee-ankle angle (HKA) and components placement using coronal femoral-component angle (CFA) and coronal tibia-component angle (CTA). There was a smaller proportion of outliers for HKA, CFA and CTA at 10, 2 and 2 % respectively, in the pinless navigation group, compared to 32, 16 and 16 %, respectively, in the conventional group (p = 0.013, p = 0.032 and p = 0.032, respectively). The mean CFA was also more accurate at 90° in the pinless navigation group compared to 91° in the conventional group (p = 0.002). There was no difference in the duration of surgery between the two groups (n.s.). Pinless navigation improves lower limb alignment and components placement without a significant increase in the duration of surgery. The authors recommend the use of pinless navigation to verify the coronal alignments of conventional cutting blocks in TKA before the bone cuts are made. Level of evidence: I.

  14. Insect-Inspired Optical-Flow Navigation Sensors

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Morookian, John M.; Chahl, Javan; Soccol, Dean; Hines, Butler; Zornetzer, Steven

    2005-01-01

    Integrated circuits that exploit optical flow to sense motions of computer mice on or near surfaces ("optical mouse chips") are used as navigation sensors in a class of small flying robots now undergoing development for potential use in such applications as exploration, search, and surveillance. The basic principles of these robots were described briefly in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate from the cited prior article: The concept of optical flow can be defined, loosely, as the use of texture in images as a source of motion cues. The flight-control and navigation systems of these robots are inspired largely by the designs and functions of the vision systems and brains of insects, which have been demonstrated to utilize optical flow (as detected by their eyes and brains) resulting from their own motions in the environment. Optical flow has been shown to be very effective as a means of avoiding obstacles and controlling speeds and altitudes in robotic navigation. Prior systems used in experiments on navigating by means of optical flow have involved the use of panoramic optics, high-resolution image sensors, and programmable imagedata- processing computers.
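
    A rough sketch of the kind of steering cue optical flow can yield is shown below: flow that is faster on one side of the image generally indicates nearer structure on that side, so the vehicle steers away from it. A dense Farneback flow computed with OpenCV stands in for the mouse-chip sensors; the parameters and the normalization are assumptions.

```python
import cv2
import numpy as np

def flow_balance(prev_gray, curr_gray):
    """Insect-style steering cue: compare the average optical-flow magnitude in
    the left and right image halves and steer away from the faster (closer) side."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    mag = np.linalg.norm(flow, axis=2)               # per-pixel flow magnitude
    half = mag.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    # Positive -> turn right (left side appears closer); scaled roughly to [-1, 1].
    return float((left - right) / (left + right + 1e-6))
```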

  15. A parallel implementation of a multisensor feature-based range-estimation method

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond E.; Sridhar, Banavar

    1993-01-01

    There are many proposed vision based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, will require very high processing rates to achieve real time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and shared-memory parallel computer.

  16. Changing Course: navigating the future of the Lower Mississippi River

    NASA Astrophysics Data System (ADS)

    Cochran, S.

    2016-02-01

    Changing Course is a design competition to reimagine a more sustainable Lower Mississippi River Delta, bringing teams together from around the world to create innovative visions for one of America's greatest natural resources. Building off of Louisiana's Coastal Master Plan, and answering a key question from that plan, three winning teams (Baird & Associates, Moffatt & Nichol and Studio Misi-Ziibi) have generated designs for how the Mississippi River's water and sediment can be used to maximize rebuilding of delta wetlands while also continuing to meet the needs of navigation, flood protection, and coastal industries and communities. While each of the winning teams offered a different vision, all three identified the same key requirements as critical to sustaining the Mississippi River Delta today and into the future: reconnecting the Mississippi River to its wetlands to help restore southeast Louisiana's first line of defense against powerful storms and rising sea levels; planning for a more sustainable delta, including a gradual shift in population to create more protected and resilient communities; protecting and maximizing the region's port and maritime activities, including a deeper, more sustainable navigation channel upriver from Southwest Pass; and increasing economic opportunities in a future smaller delta through expanding shipping capacity, coastal restoration infrastructure, outdoor recreation and tourism, and commercial fishing. This session will give a high-level overview of the design competition process, results and common themes, similarities and differences in their designs, and how the ideas generated will inform coastal stakeholders and official government processes.

  17. DOE Research and Development Accomplishments: Visions of Success I

    Science.gov Websites

  18. 33 CFR 164.15 - Navigation bridge visibility.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... obscured by more than the lesser of two ship lengths or 500 meters (1640 feet) from dead ahead to 10... the vessel, through dead ahead, to at least 22.5 degrees abaft the beam on the other side of the... of vision must extend over an arc from at least 45 degrees on the opposite bow, through dead ahead...

  19. 33 CFR 164.15 - Navigation bridge visibility.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... obscured by more than the lesser of two ship lengths or 500 meters (1640 feet) from dead ahead to 10... the vessel, through dead ahead, to at least 22.5 degrees abaft the beam on the other side of the... of vision must extend over an arc from at least 45 degrees on the opposite bow, through dead ahead...

  20. 33 CFR 164.15 - Navigation bridge visibility.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... obscured by more than the lesser of two ship lengths or 500 meters (1640 feet) from dead ahead to 10... the vessel, through dead ahead, to at least 22.5 degrees abaft the beam on the other side of the... of vision must extend over an arc from at least 45 degrees on the opposite bow, through dead ahead...

  1. Simulation Platform for Vision Aided Inertial Navigation

    DTIC Science & Technology

    2014-09-18

    Brown, R. G., & Hwang, P. Y. (1992). Introduction to Random Signals and Applied Kalman Filtering (2nd ed.). New York: John Wiley & Sons. Chowdhary, G. … Parameters for Various Timing Standards (Brown & Hwang, 1992) … were then calculated using the true PVA information from the ASPN data. Next, a two-state clock from Brown & Hwang (1992) was used to model the …

  2. Instructed Vision: Navigating Grammatical Rules by Using Landmarks for Linguistic Structures in Corrective Feedback Sequences

    ERIC Educational Resources Information Center

    Majlesi, Ali Reza

    2018-01-01

    This study aims to show how multimodality, that is, the mobilization of various communicative resources in social actions (Mondada, 2016), can be used to teach grammar. Drawing on ethnomethodological conversation analysis (Sacks, 1992), the article provides a detailed analysis of 2 corrective feedback sequences in a Swedish-as-a-second-language…

  3. General Aviation Flight Test of Advanced Operations Enabled by Synthetic Vision

    NASA Technical Reports Server (NTRS)

    Glaab, Louis J.; Hughhes, Monica F.; Parrish, Russell V.; Takallu, Mohammad A.

    2014-01-01

    A flight test was performed to compare the use of three advanced primary flight and navigation display concepts to a baseline, round-dial concept to assess the potential for advanced operations. The displays were evaluated during visual and instrument approach procedures including an advanced instrument approach resembling a visual airport traffic pattern. Nineteen pilots from three pilot groups, reflecting the diverse piloting skills of the General Aviation pilot population, served as evaluation subjects. The experiment had two thrusts: 1) an examination of the capabilities of low-time (i.e., <400 hours), non-instrument-rated pilots to perform nominal instrument approaches, and 2) an exploration of potential advanced Visual Meteorological Conditions (VMC)-like approaches in Instrument Meteorological Conditions (IMC). Within this context, advanced display concepts are considered to include integrated navigation and primary flight displays with either aircraft attitude flight directors or Highway In The Sky (HITS) guidance with and without a synthetic depiction of the external visuals (i.e., synthetic vision). Relative to the first thrust, the results indicate that using an advanced display concept, as tested herein, low-time, non-instrument-rated pilots can exhibit flight-technical performance, subjective workload and situation awareness ratings as good as or better than high-time Instrument Flight Rules (IFR)-rated pilots using Baseline Round Dials for a nominal IMC approach. For the second thrust, the results indicate advanced VMC-like approaches are feasible in IMC, for all pilot groups tested for only the Synthetic Vision System (SVS) advanced display concept.

  4. Recent Experiences of the NASA Engineering and Safety Center (NESC) Guidance Navigation and Control (GN and C) Technical Discipline Team (TDT)

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.

    2011-01-01

    The NASA Engineering and Safety Center (NESC) is an independently funded NASA Program whose dedicated team of technical experts provides objective engineering and safety assessments of critical, high risk projects. NESC's strength is rooted in the diverse perspectives and broad knowledge base that add value to its products, affording customers a responsive, alternate path for assessing and preventing technical problems while protecting vital human and national resources. The Guidance Navigation and Control (GN&C) Technical Discipline Team (TDT) is one of fifteen such discipline-focused teams within the NESC organization. The TDT membership is composed of GN&C specialists from across NASA and its partner organizations in other government agencies, industry, national laboratories, and universities. This paper will briefly define the vision, mission, and purpose of the NESC organization. The role of the GN&C TDT will then be described in detail along with an overview of how this team operates and engages in its objective engineering and safety assessments of critical NASA projects.

  5. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
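
    For context, a minimal initial-pose step of the kind being compared can be written with a standard PnP solver, given a handful of 2-D/3-D correspondences between image features and the object's computer model. The use of OpenCV's EPnP solver and the function names below are illustrative assumptions, not one of the paper's evaluated algorithms.

```python
import cv2
import numpy as np

def initial_pose(model_points, image_points, camera_matrix, dist_coeffs=None):
    """Coarse attitude/position of a known object from a single image.

    model_points: Nx3 points on the 3-D model (body frame, metres);
    image_points: Nx2 matching pixel coordinates (the 2-D/3-D correspondences
    are assumed to come from a separate feature-matching front end).
    Returns the rotation matrix and translation of the object in the camera frame.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(model_points.astype(np.float64),
                                  image_points.astype(np.float64),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)          # rotation vector -> 3x3 rotation matrix
    return R, tvec
```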

  6. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    NASA Astrophysics Data System (ADS)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

    Estimating depth has long been a major issue in the fields of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captured color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier, and this paper proposes an improved feature matching between successive video frames that uses a neural-network methodology in order to reduce the computation time of feature matching. The extracted features are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned a distance based on the Kinect technology, which can be used by the robot to determine the navigation path, along with obstacle detection applications.
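
    A simplified sketch of the matching-plus-depth step is shown below, using ORB features and brute-force Hamming matching in place of the paper's neural-network matching stage; the depth-map alignment, units, and parameter values are assumptions.

```python
import cv2
import numpy as np

def match_features_with_depth(prev_rgb, curr_rgb, curr_depth_mm, max_matches=200):
    """ORB matching between successive frames; each matched feature in the
    current frame is assigned the range read from an RGB-aligned depth map."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(cv2.cvtColor(prev_rgb, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = orb.detectAndCompute(cv2.cvtColor(curr_rgb, cv2.COLOR_BGR2GRAY), None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]

    results = []
    for m in matches:
        u, v = map(int, kp2[m.trainIdx].pt)            # pixel in the current frame
        z_mm = int(curr_depth_mm[v, u])                # depth map assumed aligned to RGB
        if z_mm > 0:                                   # 0 = no depth reading at this pixel
            results.append(((u, v), z_mm / 1000.0))    # (pixel, range in metres)
    return results
```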

  7. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.

    PubMed

    Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-08-23

    Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics, military systems, and other areas. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved, and on the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, View Feature Histogram (VFH) feature extraction, object model construction, and search-and-match of a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results show that the Kalman filter has the advantage of high efficiency, while the adaptive particle filter offers high robustness and high precision, as tested and validated on three kinds of scenes under conditions of partial target occlusion and interference, different moving speeds, and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.
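
    For reference, a textbook constant-velocity Kalman filter of the kind used for the position estimate can be sketched as follows; the state layout, time step, and noise levels are illustrative assumptions rather than the paper's tuned values.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter for a 3-D target centroid."""
    def __init__(self, dt=0.1, q=0.5, r=0.05):
        self.x = np.zeros(6)                       # state: [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)                         # state covariance
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)            # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])   # we measure position only
        self.Q = q * np.eye(6)                     # process noise (guess)
        self.R = r * np.eye(3)                     # measurement noise (guess)

    def step(self, z):
        """Predict, then update with a measured centroid z = [x, y, z]."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]                          # filtered position estimate
```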

  8. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor

    PubMed Central

    Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-01-01

    Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics, military systems, and other areas. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved, and on the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, View Feature Histogram (VFH) feature extraction, object model construction, and search-and-match of a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results show that the Kalman filter has the advantage of high efficiency, while the adaptive particle filter offers high robustness and high precision, as tested and validated on three kinds of scenes under conditions of partial target occlusion and interference, different moving speeds, and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields. PMID:28832520

  9. [Personnel with poor vision at fighter pilot school].

    PubMed

    Corbé, C; Menu, J P

    1997-10-01

    The piloting of fighter aircraft, the navigation of the space shuttle, and the piloting of a helicopter in tactical flight at an altitude of 50 metres require the use of all the sensory systems: ocular, vestibular, proprioceptive, and others. The selection and follow-up of the pilots of these aerial vehicles therefore require a very complete study of medical parameters, in particular the sensory systems and notably the visual system. The doctors and expert researchers in aeronautical and space medicine of the Army Health Department, who are in charge of the medical supervision of flight crews, should study, create, and improve tests of visual sensory exploration developed from fundamental and applied research. These tests, validated with military pilots, were applied in ophthalmology for the assessment of normal and deficient vision. A proposal to change the World Health Organisation norms applied to vision, following the application of these tests to persons with low vision, was also introduced.

  10. NASA Precision Landing Technologies Completes Initial Flight Tests on Vertical Testbed Rocket

    NASA Image and Video Library

    2017-04-19

    This 2-minute, 40-second video shows how over the past 5 weeks, NASA and Masten Space Systems teams have prepared for and conducted sub-orbital rocket flight tests of next-generation lander navigation technology through the CoOperative Blending of Autonomous Landing Technologies (COBALT) project. The COBALT payload was integrated onto Masten’s rocket, Xodiac. The Xodiac vehicle used the Global Positioning System (GPS) for navigation during this first campaign, which was intentional to verify and refine COBALT system performance. The joint teams conducted numerous ground verification tests, made modifications in the process, practiced and refined operations’ procedures, conducted three tether tests, and have now flown two successful free flights. This successful, collaborative campaign has provided the COBALT and Xodiac teams with the valuable performance data needed to refine the systems and prepare them for the second flight test campaign this summer when the COBALT system will navigate the Xodiac rocket to a precision landing. The technologies within COBALT provide a spacecraft with knowledge during entry, descent, and landing that enables it to precisely navigate and softly land close to surface locations that have been previously too risky to target with current capabilities. The technologies will enable future exploration destinations on Mars, the moon, Europa, and other planets and moons. The two primary navigation components within COBALT include the Langley Research Center’s Navigation Doppler Lidar, which provides ultra-precise velocity and line-of-sight range measurements, and Jet Propulsion Laboratory’s Lander Vision System (LVS), which provides navigation estimates relative to an existing surface map. The integrated system is being flight tested onboard a Masten suborbital rocket vehicle called Xodiac. The COBALT project is led by the Johnson Space Center, with funding provided through the Game Changing Development, Flight Opportunities program, and Advanced Exploration Systems programs. Based at NASA’s Armstrong Flight Research Center in Edwards, CA, the Flight Opportunities program funds technology development flight tests on commercial suborbital space providers of which Masten is a vendor. The program has previously tested the LVS on the Masten rocket and validated the technology for the Mars 2020 rover.

  11. Vision-based semi-autonomous outdoor robot system to reduce soldier workload

    NASA Astrophysics Data System (ADS)

    Richardson, Al; Rodgers, Michael H.

    2001-09-01

    Sensors and computational capability have not reached the point to enable small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions. This would greatly reduce the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.

  12. Portable real-time color night vision

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Hogervorst, Maarten A.

    2008-03-01

    We developed a simple and fast lookup-table-based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multiband night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual-band real-time night vision systems. One system provides co-aligned visual and near-infrared bands of two image intensifiers, the other provides co-aligned images from a digital image intensifier and an uncooled longwave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a real-time lookup-table transform. The resulting colorized video streams can be displayed in real time on head mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications like surveillance, navigation and target detection.
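
    The lookup-table colorization step lends itself to a compact implementation. The sketch below (not the authors' code; array names, bin count, and the table-building rule are illustrative assumptions) builds a 2D table indexed by the quantized intensities of the two bands from a co-registered daytime reference, so that per-frame application is a single indexing operation, which is what makes real-time operation on a notebook computer plausible.

        import numpy as np

        def build_lut(band1_ref, band2_ref, rgb_ref, bins=64):
            # Accumulate the mean daytime RGB colour observed for each quantised
            # (band-1, band-2) intensity pair in the 8-bit reference imagery.
            q1 = np.clip((band1_ref.astype(int) * bins) // 256, 0, bins - 1)
            q2 = np.clip((band2_ref.astype(int) * bins) // 256, 0, bins - 1)
            lut_sum = np.zeros((bins, bins, 3))
            lut_cnt = np.zeros((bins, bins))
            np.add.at(lut_sum, (q1, q2), rgb_ref.astype(float))
            np.add.at(lut_cnt, (q1, q2), 1)
            return lut_sum / np.maximum(lut_cnt, 1)[..., None]

        def colorize(band1, band2, lut):
            # Per-frame application is a single table lookup, cheap enough for video rates.
            bins = lut.shape[0]
            q1 = np.clip((band1.astype(int) * bins) // 256, 0, bins - 1)
            q2 = np.clip((band2.astype(int) * bins) // 256, 0, bins - 1)
            return lut[q1, q2].astype(np.uint8)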

  13. Selectable Hyperspectral Airborne Remote-sensing Kit (SHARK) on the Vision II turbine rotorcraft UAV over the Florida Keys

    NASA Astrophysics Data System (ADS)

    Holasek, R. E.; Nakanishi, K.; Swartz, B.; Zacaroli, R.; Hill, B.; Naungayan, J.; Herwitz, S.; Kavros, P.; English, D. C.

    2013-12-01

    As part of the NASA ROSES program, the NovaSol Selectable Hyperspectral Airborne Remote-sensing Kit (SHARK) was flown as the payload on the unmanned Vision II helicopter. The goal of the May 2013 data collection was to obtain high-resolution visible and near-infrared (visNIR) hyperspectral data of seagrasses and coral reefs in the Florida Keys. The specifications of the SHARK hyperspectral system and the Vision II turbine rotorcraft will be described along with the process of integrating the payload with the vehicle platform. The minimal size, weight, and power (SWaP) specifications of the SHARK system make it an ideal match for the Vision II helicopter and its flight parameters. One advantage of the helicopter over fixed-wing platforms is its inherent ability to take off and land in a limited area and without a runway, enabling the UAV to be located in close proximity to the experiment areas and the science team. Decisions regarding integration times, waypoint selection, mission duration, and mission frequency can be based upon the local environmental conditions and modified just prior to takeoff. The operational procedures and coordination between the UAV pilot, payload operator, and scientist will be described. The SHARK system includes an inertial navigation system and digital elevation model (DEM), which allows image coordinates to be calculated onboard the aircraft in real time. Examples of the geo-registered images from the data collection will be shown. Figure captions: SHARK mounted below the VTUAV; SHARK deployed on the VTUAV over water.

  14. Direct endoscopic video registration for sinus surgery

    NASA Astrophysics Data System (ADS)

    Mirota, Daniel; Taylor, Russell H.; Ishii, Masaru; Hager, Gregory D.

    2009-02-01

    Advances in computer vision have made possible robust 3D reconstruction of monocular endoscopic video. These reconstructions accurately represent the visible anatomy and, once registered to pre-operative CT data, enable a navigation system to track directly through video, eliminating the need for an external tracking system. Video registration provides the means for a direct interface between an endoscope and a navigation system and allows a shorter chain of rigid-body transformations to be used to solve the patient/navigation-system registration. To solve this registration step, we propose a new 3D-3D registration algorithm based on Trimmed Iterative Closest Point (TrICP) [1] and the z-buffer algorithm [2]. The algorithm takes as input a 3D point cloud of relative scale with the origin at the camera center, an isosurface from the CT, and an initial guess of the scale and location. Our algorithm utilizes only the visible polygons of the isosurface from the current camera location during each iteration to minimize the search area of the target region and robustly reject outliers of the reconstruction. We present example registrations in the sinus passage applicable to both sinus surgery and transnasal surgery. To evaluate our algorithm's performance, we compare it to registration via Optotrak and present the closest point-to-surface distance error. We show our algorithm has a mean closest distance error of 0.2268 mm.
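
    As a rough illustration of the trimmed-ICP idea (the actual algorithm additionally restricts correspondences to the currently visible isosurface polygons via the z-buffer and estimates scale), the sketch below keeps only the best-matching fraction of correspondences at each iteration before re-estimating the rigid transform. Function names and the fixed trimming ratio are assumptions for illustration only.

        import numpy as np
        from scipy.spatial import cKDTree

        def rigid_fit(src, dst):
            # Least-squares rotation/translation aligning src to dst (SVD method).
            mu_s, mu_d = src.mean(0), dst.mean(0)
            U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, mu_d - R @ mu_s

        def trimmed_icp(cloud, surface_pts, trim=0.7, iters=30):
            # Keep only the best `trim` fraction of point correspondences each
            # iteration, which rejects reconstruction outliers.
            tree = cKDTree(surface_pts)
            R, t = np.eye(3), np.zeros(3)
            for _ in range(iters):
                moved = cloud @ R.T + t
                d, idx = tree.query(moved)
                keep = np.argsort(d)[: int(trim * len(d))]
                dR, dt = rigid_fit(moved[keep], surface_pts[idx[keep]])
                R, t = dR @ R, dR @ t + dt
            return R, t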

  15. Improved obstacle avoidance and navigation for an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Giri, Binod; Cho, Hyunsu; Williams, Benjamin C.; Tann, Hokchhay; Shakya, Bicky; Bharam, Vishal; Ahlgren, David J.

    2015-01-01

    This paper presents improvements made to the intelligence algorithms employed on Q, an autonomous ground vehicle, for the 2014 Intelligent Ground Vehicle Competition (IGVC). In 2012, the IGVC committee combined the formerly separate autonomous and navigation challenges into a single AUT-NAV challenge. In this new challenge, the vehicle is required to navigate through a grassy obstacle course and stay within the course boundaries (a lane of two white painted lines) that guide it toward a given GPS waypoint. Once the vehicle reaches this waypoint, it enters an open course where it is required to navigate to another GPS waypoint while avoiding obstacles. After reaching the final waypoint, the vehicle is required to traverse another obstacle course before completing the run. Q uses a modular parallel software architecture in which image processing, navigation, and sensor control algorithms run concurrently. A tuned navigation algorithm allows Q to smoothly maneuver through obstacle fields. For the 2014 competition, most revisions occurred in the vision system, which detects white lines and informs the navigation component. Barrel obstacles of various colors presented a new challenge for image processing: the previous color-plane extraction algorithm would not suffice. To overcome this difficulty, laser range sensor data were overlaid on the visual data, as sketched below. Q also participates in the Joint Architecture for Unmanned Systems (JAUS) challenge at IGVC. For 2014, significant updates were implemented: the JAUS component accepted a greater variety of messages and showed better compliance with the JAUS technical standard. With these improvements, Q secured second place in the JAUS competition.
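
    Overlaying laser range data on camera imagery amounts to projecting each range return into the image through the camera model. A minimal sketch of that projection is given below; the extrinsic transform, intrinsic matrix, and array names are assumptions for illustration, not Q's actual calibration.

        import numpy as np

        def project_scan(ranges, angles, K, R, t):
            # Convert a planar laser scan (range, bearing) to 3D points in the laser
            # frame, transform into the camera frame, then apply the pinhole model.
            pts_laser = np.stack([ranges * np.cos(angles),
                                  ranges * np.sin(angles),
                                  np.zeros_like(ranges)], axis=1)
            pts_cam = pts_laser @ R.T + t          # extrinsics: camera <- laser
            in_front = pts_cam[:, 2] > 0.1         # keep points ahead of the camera
            uvw = pts_cam[in_front] @ K.T          # intrinsics
            uv = uvw[:, :2] / uvw[:, 2:3]          # pixel coordinates (u, v)
            return uv, ranges[in_front]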

  16. The use of navigation (BrainLAB Vector vision(2)) and intraoperative 3D imaging system (Siemens Arcadis Orbic 3D) in the treatment of gunshot wounds of the maxillofacial region.

    PubMed

    Gröbe, Alexander; Weber, Christoph; Schmelzle, Rainer; Heiland, Max; Klatt, Jan; Pohlenz, Philipp

    2009-09-01

    Gunshot wounds are a rare occurrence during times of peace. The removal of projectiles is recommended; in some cases, however, this is controversial. The reproduction of a projectile image can be difficult if it is not adjacent to an anatomical landmark. Therefore, navigation systems give the surgeon continuous real-time orientation intraoperatively. The aim of this study was to report our experiences with image-guided removal of projectiles and the resulting intra- and postoperative complications. We investigated 50 patients retrospectively; 32 had image-guided surgical removal of projectiles in the oral and maxillofacial region. Eighteen had surgical removal of projectiles without navigation assistance. There was a significant correlation (p = 0.0136) between navigated vs. non-navigated surgery and the complication rate, including major bleeding (n = 4 vs. n = 1, 8% vs. 2%), soft tissue infections (n = 7 vs. n = 2, 14% vs. 4%), and nerve damage (n = 2 vs. n = 0, 4% vs. 0%; p = 0.038), and between the operating time and postoperative complications. A strong trend between operating time and navigated surgery (p = 0.1103) was also observed. When using the navigation system, we could reduce the operating time. In conclusion, there is a significant correlation between reduced intra- and postoperative complications, including wound infections, nerve damage, and major bleeding, and the appropriate use of a navigation system. In all these cases, we could demonstrate reduced operating time. Cone-beam computed tomography plays an important role in detecting projectiles or metallic foreign bodies intraoperatively.

  17. Development Of Autonomous Systems

    NASA Astrophysics Data System (ADS)

    Kanade, Takeo

    1989-03-01

    In the last several years at the Robotics Institute of Carnegie Mellon University, we have been working on two projects for developing autonomous systems: the Navlab for the Autonomous Land Vehicle and the Ambler for the Mars Rover. These two systems are for different purposes: the Navlab is a four-wheeled vehicle (van) for road and open terrain navigation, and the Ambler is a six-legged locomotor for Mars exploration. The two projects, however, share many common aspects. Both are large-scale integrated systems for navigation. In addition to the development of individual components (e.g., construction and control of the vehicle, vision and perception, and planning), integration of those component technologies into a system by means of an appropriate architecture is a major issue.

  18. Real-time Implementation of Vision, Inertial, and GPS Sensors to Navigate in an Urban Environment

    DTIC Science & Technology

    2015-03-01

    ... where R_N is the meridian radius of curvature, R_E is the transverse radius of curvature, e is the major eccentricity of the ellipsoid, and R is the ... Visual Odometry for On-Road Vehicles with 1-Point RANSAC [17]: Scaramuzza et al. discuss the use of the nonholonomic constraints of a wheeled vehicle ...
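
    For reference, the two radii named in the fragment have the standard ellipsoidal forms (a is the semi-major axis, e the first eccentricity, and phi the geodetic latitude); this is textbook geodesy rather than anything specific to the cited report:

        R_N = \frac{a\,(1 - e^2)}{\left(1 - e^2 \sin^2\phi\right)^{3/2}}, \qquad
        R_E = \frac{a}{\left(1 - e^2 \sin^2\phi\right)^{1/2}}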

  19. Family Planning and Family Vision in Mothers after Diagnosis of a Child with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Navot, Noa; Jorgenson, Alicia Grattan; Vander Stoep, Ann; Toth, Karen; Webb, Sara Jane

    2016-01-01

    The diagnosis of a child with autism has short- and long-term impacts on family functioning. With early diagnosis, the diagnostic process is likely to co-occur with family planning decisions, yet little is known about how parents navigate this process. This study explores family planning decision making process among mothers of young children with…

  20. Looking Back and Looking Forward: Reprising the Promise and Predicting the Future of Formation Flying and Spaceborne GPS Navigation Systems

    NASA Technical Reports Server (NTRS)

    Bauer, Frank H.; Dennehy, Neil

    2015-01-01

    A retrospective consideration of two 15-year-old Guidance, Navigation and Control (GN&C) technology 'vision' predictions will be the focus of this paper. A look-back analysis and critique will be performed of these late-1990s technology roadmaps outlining the envisioned future of two then-nascent but rapidly emerging GN&C technologies. Specifically, these two GN&C technologies were: 1) multi-spacecraft formation flying and 2) the spaceborne use and exploitation of global positioning system (GPS) signals to enable formation flying. This paper reprises the promise of formation flying and spaceborne GPS as depicted in the cited 1999 and 1998 papers. It will discuss what happened to cause that promise to be mostly unfulfilled and the reasons why the envisioned formation flying dream has yet to become a reality. The recent technology trends over the past few years will then be identified and a renewed government interest in spacecraft formation flying/cluster flight will be highlighted. The authors will conclude with a reality-tempered perspective, 15 years after the initial technology roadmaps were published, predicting a promising future of spacecraft formation flying technology development over the next decade.

  1. Understanding human visual systems and its impact on our intelligent instruments

    NASA Astrophysics Data System (ADS)

    Strojnik Scholl, Marija; Páez, Gonzalo; Scholl, Michelle K.

    2013-09-01

    We review the evolution of machine vision and comment on the cross-fertilization from the neural sciences onto the flourishing fields of neural processing, parallel processing, and associative memory in optical sciences and computing. Then we examine how the intensive efforts in mapping the human brain have been influenced by concepts in computer sciences, control theory, and electronic circuits. We discuss two neural pathways that employ input from the visual sense to determine navigational options and object recognition: the ventral temporal pathway for object recognition (what?) and the dorsal parietal pathway for navigation (where?), respectively. We describe the reflexive and conscious decision centers in the cerebral cortex involved with visual attention and gaze control. Interestingly, these require a return path through the midbrain for ocular muscle control. We find that cognitive psychologists currently study the human brain using low-spatial-resolution fMRI with a temporal response on the order of a second. In recent years, life scientists have concentrated on insect brains to study neural processes. We discuss how reflexive and conscious gaze-control decisions are made in the frontal eye field and inferior parietal lobe, constituting the fronto-parietal attention network. We note that ethical and experiential learning impacts our conscious decisions.

  2. Synthetic Vision Enhances Situation Awareness and RNP Capabilities for Terrain-Challenged Approaches

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Prinzel, Lawrence J., III; Bailey, Randall E.; Arthur, Jarvis J., III

    2003-01-01

    The Synthetic Vision Systems (SVS) Project of the Aviation Safety Program is striving to eliminate poor visibility as a causal factor in aircraft accidents as well as enhance operational capabilities of all aircraft through the display of computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information. To achieve these objectives, NASA 757 flight test research was conducted at the Eagle-Vail, Colorado airport to evaluate three SVS display types (Head-Up Display, Head-Down Size A, Head-Down Size X) and two terrain texture methods (photo-realistic, generic) in comparison to the simulated Baseline Boeing-757 Electronic Attitude Direction Indicator and Navigation / Terrain Awareness and Warning System displays. These independent variables were evaluated for situation awareness, path error, and workload while making approaches to Runways 25 and 07 and during simulated engine-out Cottonwood 2 and KREMM departures. The results of the experiment showed significantly improved situation awareness, performance, and workload for SVS concepts compared to the Baseline displays and confirmed the retrofit capability of the Head-Up Display and Size A SVS concepts. The research also demonstrated that the pathway and pursuit guidance used within the SVS concepts achieved required navigation performance (RNP) criteria.

  3. Progress in building a cognitive vision system

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Yue, Hong

    2016-05-01

    We are building a cognitive vision system for mobile robots that works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion to create a local dynamic spatial model. These local 3D models are composed to create an overall 3D model of the robot and its environment. This approach turns the computer vision problem into a search problem whose goal is the acquisition of sufficient spatial understanding for the robot to succeed at its tasks. The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals. For example, if the goal is to navigate through a room, the model needs to contain any obstacles that would be encountered, giving their approximate positions and sizes. Other information does not need to be rendered into the virtual world, so this approach trades model accuracy for speed.

  4. Biologically inspired collision avoidance system for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Ortiz, Fernando E.; Graham, Brett; Spagnoli, Kyle; Kelmelis, Eric J.

    2009-05-01

    In this project, we collaborate with researchers in the neuroscience department at the University of Delaware to develop a Field Programmable Gate Array (FPGA)-based embedded computer, inspired by the brains of small vertebrates (fish). The mechanisms of object detection and avoidance in fish have been extensively studied by our Delaware collaborators. The midbrain optic tectum is a biological multimodal navigation controller capable of processing input from all senses that convey spatial information, including vision, audition, touch, and the lateral line (water current sensing in fish). Unfortunately, computational complexity makes these models too slow for use in real-time applications. These simulations are run offline on state-of-the-art desktop computers, presenting a gap between the application and the target platform: a low-power embedded device. EM Photonics has expertise in developing high-performance computers based on commodity platforms such as graphics cards (GPUs) and FPGAs. FPGAs offer (1) high computational power, low power consumption and small footprint (in line with typical autonomous vehicle constraints), and (2) the ability to implement massively-parallel computational architectures, which can be leveraged to closely emulate biological systems. Combining UD's brain modeling algorithms and the power of FPGAs, this computer enables autonomous navigation in complex environments, and further types of onboard neural processing in future applications.

  5. From brain synapses to systems for learning and memory: Object recognition, spatial navigation, timed conditioning, and movement control.

    PubMed

    Grossberg, Stephen

    2015-09-24

    This article provides an overview of neural models of synaptic learning and memory whose expression in adaptive behavior depends critically on the circuits and systems in which the synapses are embedded. It reviews Adaptive Resonance Theory, or ART, models that use excitatory matching and match-based learning to achieve fast category learning and whose learned memories are dynamically stabilized by top-down expectations, attentional focusing, and memory search. ART clarifies mechanistic relationships between consciousness, learning, expectation, attention, resonance, and synchrony. ART models are embedded in ARTSCAN architectures that unify processes of invariant object category learning, recognition, spatial and object attention, predictive remapping, and eye movement search, and that clarify how conscious object vision and recognition may fail during perceptual crowding and parietal neglect. The generality of learned categories depends upon a vigilance process that is regulated by acetylcholine via the nucleus basalis. Vigilance can get stuck at too high or too low values, thereby causing learning problems in autism and medial temporal amnesia. Similar synaptic learning laws support qualitatively different behaviors: Invariant object category learning in the inferotemporal cortex; learning of grid cells and place cells in the entorhinal and hippocampal cortices during spatial navigation; and learning of time cells in the entorhinal-hippocampal system during adaptively timed conditioning, including trace conditioning. Spatial and temporal processes through the medial and lateral entorhinal-hippocampal system seem to be carried out with homologous circuit designs. Variations of a shared laminar neocortical circuit design have modeled 3D vision, speech perception, and cognitive working memory and learning. A complementary kind of inhibitory matching and mismatch learning controls movement. This article is part of a Special Issue entitled SI: Brain and Memory. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Computer-assisted surgery of the paranasal sinuses: technical and clinical experience with 368 patients, using the Vector Vision Compact system.

    PubMed

    Stelter, K; Andratschke, M; Leunig, A; Hagedorn, H

    2006-12-01

    This paper presents our experience with a navigation system for functional endoscopic sinus surgery. In this study, we took particular note of the surgical indications and risks and the measurement precision and preparation time required, and we present one brief case report as an example. Between 2000 and 2004, we performed functional endoscopic sinus surgery on 368 patients at the Ludwig Maximilians University, Munich, Germany. We used the Vector Vision Compact system (BrainLAB) with laser registration. The indications for surgery ranged from severe nasal polyps and chronic sinusitis to malignant tumours of the paranasal sinuses and skull base. The time needed for data preparation was less than five minutes. The time required for preparation and patient registration depended on the method used and the experience of the user. In the later cases, it took 11 minutes on average, using Z-Touch registration. The clinical plausibility test produced an average deviation of 1.3 mm. The complications of system use comprised intra-operative re-registration (18 per cent) and complete failure (5 per cent). Despite the assistance of an accurately working computer, the anterior ethmoidal artery was incised in one case. However, in all 368 cases, we experienced no cerebrospinal fluid leaks, optic nerve lesions, retrobulbar haematomas or intracerebral bleeding. There were no deaths. From our experience with computer-guided surgical procedures, we conclude that computer-guided navigational systems are so accurate that the risk of misleading the surgeon is minimal. In the future, their use in certain specialized procedures will be not only sensible but mandatory. We recommend their use not only in difficult surgical situations but also in routine procedures and for surgical training.

  7. Landmark navigation and autonomous landing approach with obstacle detection for aircraft

    NASA Astrophysics Data System (ADS)

    Fuerst, Simon; Werner, Stefan; Dickmanns, Dirk; Dickmanns, Ernst D.

    1997-06-01

    A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensors such as gyros, accelerometers, an artificial horizon, aerodynamic measuring devices, and GPS with vision data taken by conventional CCD cameras mounted on a pan-and-tilt platform, the position of the craft can be determined, as well as its position relative to runways and natural landmarks. The vision data of natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark should be focused on by the vision system, depending on the distance to the landmark and the aspect conditions. More complex landmarks like runways are modeled with different levels of detail that are activated depending on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system, e.g., due to poor visibility, and tries to reinitialize the vision system or to set focus on another available landmark. During landing approach, obstacles like trucks and airplanes can be detected on the runway. The system has been tested in real time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors have been fed into the machine perception system; the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.
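
    In its simplest form, the supervisor's comparison of vision and GPS data reduces to a residual test: if the landmark-derived position disagrees with the GPS solution by more than a gate scaled to the expected uncertainty, tracking is declared lost and the vision system is reinitialised or pointed at another landmark. The sketch below only illustrates that idea, with assumed names and thresholds; it is not the authors' implementation.

        import numpy as np

        def check_mistracking(p_vision, p_gps, sigma, gate=3.0):
            # Normalised innovation test: flag mistracking when the vision/GPS
            # disagreement exceeds `gate` standard deviations of the expected error.
            residual = np.linalg.norm(p_vision - p_gps)
            return residual > gate * sigma

        # usage sketch:
        # if check_mistracking(p_vis, p_gps, sigma=5.0):
        #     reinitialise the tracker or switch to another landmark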

  8. A stereo-vision hazard-detection algorithm to increase planetary lander autonomy

    NASA Astrophysics Data System (ADS)

    Woicke, Svenja; Mooij, Erwin

    2016-05-01

    For future landings on any celestial body, increasing the lander autonomy as well as decreasing risk are primary objectives. Both risk reduction and an increase in autonomy can be achieved by including hazard detection and avoidance in the guidance, navigation, and control loop. One of the main challenges in hazard detection and avoidance is the reconstruction of accurate elevation models, as well as slope and roughness maps. Multiple methods for acquiring the inputs for hazard maps are available. The main distinction can be made between active and passive methods. Passive methods (cameras) have budgetary advantages compared to active sensors (radar, light detection and ranging). However, it is necessary to prove that these methods deliver sufficiently good maps. Therefore, this paper discusses hazard detection using stereo vision. To facilitate a successful landing, not more than 1% wrong detections (hazards that are not identified) are allowed. Based on a sensitivity analysis, it was found that using a stereo set-up with a baseline of ≤ 2 m is feasible at altitudes of ≤ 200 m while yielding false positives of less than 1%. It was thus shown that stereo-based hazard detection is an effective means to decrease the landing risk and increase the lander autonomy. In conclusion, the proposed algorithm is a promising candidate for future landers.
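
    A hazard map of the kind described is typically derived from the stereo elevation model by estimating local slope and roughness and thresholding both. The following is a schematic sketch under assumed grid spacing, window size, and thresholds; it is not the paper's algorithm.

        import numpy as np

        def hazard_map(dem, cell=0.1, win=5, max_slope_deg=10.0, max_rough=0.2):
            # dem: 2D elevation grid [m]; cell: grid spacing [m].
            gy, gx = np.gradient(dem, cell)
            slope = np.degrees(np.arctan(np.hypot(gx, gy)))
            # Roughness proxy: maximum absolute elevation difference to any neighbour
            # within the window (edges wrap with np.roll; acceptable for a sketch).
            pad = win // 2
            rough = np.zeros_like(dem)
            for dy in range(-pad, pad + 1):
                for dx in range(-pad, pad + 1):
                    shifted = np.roll(np.roll(dem, dy, axis=0), dx, axis=1)
                    rough = np.maximum(rough, np.abs(dem - shifted))
            return (slope > max_slope_deg) | (rough > max_rough)   # True = hazardous cell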

  9. Orbital navigation, docking and obstacle avoidance as a form of three dimensional model-based image understanding

    NASA Technical Reports Server (NTRS)

    Beyer, J.; Jacobus, C.; Mitchell, B.

    1987-01-01

    Range imagery from a laser scanner can be used to provide sufficient information for docking and obstacle avoidance procedures to be performed automatically. Three-dimensional model-based computer vision algorithms in development can perform these tasks even with targets which may not be cooperative (that is, objects without special targets or markers to provide unambiguous location points). Roll, pitch and yaw of the vehicle can be taken into account as image scanning takes place, so that these can be corrected when the image is converted from egocentric to world coordinates. Other attributes of the sensor, such as the registered reflectance and texture channels, provide additional data sources for algorithm robustness. Temporal fusion of sensor images can take place in the world coordinate domain, allowing for the building of complex maps in three-dimensional space.

  10. Adaptive multisensor fusion for planetary exploration rovers

    NASA Technical Reports Server (NTRS)

    Collin, Marie-France; Kumar, Krishen; Pampagnin, Luc-Henri

    1992-01-01

    The purpose of the adaptive multisensor fusion system currently being designed at NASA/Johnson Space Center is to provide a robotic rover with assured vision and safe navigation capabilities during robotic missions on planetary surfaces. Our approach consists of using multispectral sensing devices ranging from visible to microwave wavelengths to fulfill the needs of perception for space robotics. Based on knowledge of the illumination conditions and the sensors' capabilities, the designed perception system should automatically select the best subset of sensors and their sensing modalities that will allow the perception and interpretation of the environment. Then, based on reflectance and emittance theoretical models, the sensor data are fused to extract the physical and geometrical properties of the environment surface: slope, dielectric constant, temperature, and roughness. The theoretical concepts, the design, and first results of the multisensor perception system are presented.

  11. Higher Education in Hong Kong: A Case Study of Universities Navigating through the Asian Economic Crisis

    ERIC Educational Resources Information Center

    Stevenson, Phoebe Hsu

    2010-01-01

    Since the establishment of the University of Hong Kong in 1911, higher education in Hong Kong has been transformed from an elitist system to one that supports the Hong Kong government's vision of a highly educated workforce and widely accessible lifelong learning. Between the late 1970s and 1994 the system expanded from admitting 2% of college-age…

  12. Personal Aircraft Point to the Future of Transportation

    NASA Technical Reports Server (NTRS)

    2010-01-01

    NASA's Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs, as well as a number of Agency innovations, have helped Duluth, Minnesota-based Cirrus Design Corporation become one of the world's leading manufacturers of general aviation aircraft. SBIRs with Langley Research Center provided the company with cost-effective composite airframe manufacturing methods, while crashworthiness testing at the Center increased the safety of its airplanes. Other NASA-derived technologies on Cirrus SR20 and SR22 aircraft include synthetic vision systems that help pilots navigate and full-plane parachutes that have saved the lives of more than 30 Cirrus pilots and passengers to date. Today, the SR22 is the world's top-selling Federal Aviation Administration (FAA)-certified single-engine airplane.

  13. A development of intelligent entertainment robot for home life

    NASA Astrophysics Data System (ADS)

    Kim, Cheoltaek; Lee, Ju-Jang

    2005-12-01

    The purpose of this paper is to present the study and design of an intelligent entertainment robot with an educational purpose (IRFEE). The robot has been designed for home life with dependability and interaction in mind. The development had three objectives: (1) develop an autonomous robot, (2) design the robot for mobility and robustness, and (3) develop the robot interface and software for entertainment and education functionalities. Autonomous navigation was implemented by active-vision-based SLAM and a modified EPF algorithm. The two differential wheels and the pan-tilt unit were designed for mobility and robustness, and the exterior was designed considering aesthetic elements and minimizing interference. The speech and tracking algorithms provide a good interface with humans. Image transfer and Internet site connection are needed for the remote-connection service and the educational purpose.

  14. Relevance feedback-based building recognition

    NASA Astrophysics Data System (ADS)

    Li, Jing; Allinson, Nigel M.

    2010-07-01

    Building recognition is a nontrivial task in computer vision research, with applications in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter the following two problems: (1) extracted low-level features cannot reveal the true semantic concepts; and (2) they usually involve high-dimensional data, which incur heavy computational and memory costs. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between low-level visual features and high-level concepts, while dimensionality reduction methods can mitigate the high-dimensionality problem. In this paper, we propose a building recognition scheme which integrates RF and subspace learning algorithms. Experimental results on our own building database show that the newly proposed scheme appreciably enhances the recognition accuracy.

  15. Adaptation to Variance of Stimuli in Drosophila Larva Navigation

    NASA Astrophysics Data System (ADS)

    Wolk, Jason; Gepner, Ruben; Gershow, Marc

    In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors using an infrared-illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear-Poisson (LNP) process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point-process filter that determines the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
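
    The linear-nonlinear-Poisson description of turn decisions can be written compactly: the stimulus is convolved with a linear filter, passed through a static nonlinearity to give an instantaneous turn rate, and turns are emitted as a Poisson process. The generic sketch below (filter, nonlinearity, and time step are assumptions, not the study's fitted model) shows the structure.

        import numpy as np

        def simulate_turns(stimulus, kernel, dt=0.05, r0=0.1, beta=2.0, rng=None):
            # Linear stage: filter the stimulus history.
            drive = np.convolve(stimulus, kernel)[: len(stimulus)]
            # Nonlinear stage: exponential mapping from filtered drive to turn rate [1/s].
            rate = r0 * np.exp(beta * drive)
            # Poisson stage: draw turn events in each time bin (valid for small rate*dt).
            rng = rng or np.random.default_rng()
            return rng.random(len(rate)) < rate * dt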

  16. Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri

    2002-01-01

    The advent of spacecraft mobile robots (free-flying sensor platforms and communications devices intended to accompany astronauts or operate remotely on space missions, both inside and outside a spacecraft) has demanded the development of a simple and effective navigation scheme. One such system under exploration involves the use of a laser-camera arrangement to predict the relative positioning of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. Thus, as the robot shifts in position, the position reference frame produced by the laser images is correspondingly altered. Using normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference frame transformations.

  17. A method of real-time detection for distant moving obstacles by monocular vision

    NASA Astrophysics Data System (ADS)

    Jia, Bao-zhi; Zhu, Ming

    2013-12-01

    In this paper, we propose an approach for detecting distant moving obstacles such as cars and bicycles with a monocular camera that cooperates with ultrasonic sensors in a low-cost configuration. We aim at detecting distant obstacles that move toward our autonomous navigation car in order to raise an alarm and keep away from them. Frame differencing is applied to find obstacles after compensation for the camera's ego-motion. Meanwhile, each obstacle is separated from the others in an independent area and given a confidence level to indicate whether it is coming closer. The results on an open dataset and on our own autonomous navigation car have proved that the method is effective for real-time detection of distant moving obstacles.
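
    Ego-motion compensation followed by frame differencing is commonly implemented by tracking background features between consecutive frames, fitting a homography, warping the previous frame into the current one, and differencing. The OpenCV-style sketch below illustrates that generic pipeline; parameter values are placeholders and this is not the authors' exact method.

        import cv2
        import numpy as np

        def moving_obstacle_mask(prev_gray, cur_gray, diff_thresh=25):
            # Track sparse corners to estimate the camera-induced image motion.
            pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                           qualityLevel=0.01, minDistance=8)
            pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts0, None)
            good0 = pts0[status.ravel() == 1]
            good1 = pts1[status.ravel() == 1]
            # A homography approximates the background (ego-motion) transform.
            H, _ = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)
            warped = cv2.warpPerspective(prev_gray, H,
                                         (cur_gray.shape[1], cur_gray.shape[0]))
            # Residual differences after compensation indicate independently moving objects.
            diff = cv2.absdiff(cur_gray, warped)
            _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
            return mask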

  18. Learning for autonomous navigation : extrapolating from underfoot to the far field

    NASA Technical Reports Server (NTRS)

    Matthies, Larry; Turmon, Michael; Howard, Andrew; Angelova, Anelia; Tang, Benyang; Mjolsness, Eric

    2005-01-01

    Autonomous off-road navigation of robotic ground vehicles has important applications on Earth and in space exploration. Progress in this domain has been retarded by the limited lookahead range of 3-D sensors and by the difficulty of preprogramming systems to understand the traversability of the wide variety of terrain they can encounter. Enabling robots to learn from experience may alleviate both of these problems. We define two paradigms for this, learning from 3-D geometry and learning from proprioception, and describe initial instantiations of them we have developed under DARPA and NASA programs. Field test results show promise for learning traversability of vegetated terrain, learning to extend the lookahead range of the vision system, and learning how slip varies with slope.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.

    Outdoor sensor-based operation of autonomous robots has proven to be an extremely challenging problem, mainly because of the difficulties encountered when attempting to represent the many uncertainties which are always present in the real world. These uncertainties are primarily due to sensor imprecisions and unpredictability of the environment, i.e., lack of full knowledge of the environment characteristics and dynamics. Two basic principles, or philosophies, and their associated methodologies are proposed in an attempt to remedy some of these difficulties. The first principle is based on the concept of a "minimal model" for accomplishing given tasks and proposes to utilize only the minimum level of information and precision necessary to accomplish elemental functions of complex tasks. This approach diverges completely from the direction taken by most artificial vision studies which conventionally call for crisp and detailed analysis of every available component in the perception data. The paper will first review the basic concepts of this approach and will discuss its pragmatic feasibility when embodied in a behaviorist framework. The second principle which is proposed deals with implicit representation of uncertainties using Fuzzy Set Theory-based approximations and approximate reasoning, rather than explicit (crisp) representation through calculation and conventional propagation techniques. A framework which merges these principles and approaches is presented, and its application to the problem of sensor-based outdoor navigation of a mobile robot is discussed. Results of navigation experiments with a real car in actual outdoor environments are also discussed to illustrate the feasibility of the overall concept.

  20. Olfaction, navigation, and the origin of isocortex

    PubMed Central

    Aboitiz, Francisco; Montiel, Juan F.

    2015-01-01

    There are remarkable similarities between the brains of mammals and birds in terms of microcircuit architecture, despite obvious differences in gross morphology and development. While in reptiles and birds the most expanding component (the dorsal ventricular ridge) displays an overall nuclear shape and derives from the lateral and ventral pallium, in mammals a dorsal pallial, six-layered isocortex shows the most remarkable elaboration. Regardless of discussions about possible homologies between mammalian and avian brains, a main question remains in explaining the emergence of the mammalian isocortex, because it represents a unique phenotype across amniotes. In this article, we propose that the origin of the isocortex was driven by behavioral adaptations involving olfactory driven goal-directed and navigating behaviors. These adaptations were linked with increasing sensory development, which provided selective pressure for the expansion of the dorsal pallium. The latter appeared as an interface in olfactory-hippocampal networks, contributing somatosensory information for navigating behavior. Sensory input from other modalities like vision and audition were subsequently recruited into this expanding region, contributing to multimodal associative networks. PMID:26578863

  1. The Sensor Test for Orion RelNav Risk Mitigation Development Test Objective

    NASA Technical Reports Server (NTRS)

    Christian, John A.; Hinkel, Heather; Maguire, Sean

    2011-01-01

    The Sensor Test for Orion Relative-Navigation Risk Mitigation (STORRM) Development Test Objective (DTO) flew aboard the Space Shuttle Endeavour on STS-134, and was designed to characterize the performance of the flash LIDAR being developed for Orion. This flash LIDAR, called the Vision Navigation Sensor (VNS), will be the primary navigation instrument used by the Orion vehicle during rendezvous, proximity operations, and docking. This paper provides an overview of the STORRM test objectives and the concept of operations. It continues with a description of STORRM's major hardware components, which include the VNS and the docking camera. Next, an overview of crew and analyst training activities will describe how the STORRM team prepared for flight. Then an overview of how insight data collection and analysis actually went is presented. Key findings and results from this project are summarized, including a description of "truth" data. Finally, the paper concludes with lessons learned from the STORRM DTO.

  2. COBALT: Development of a Platform to Flight Test Lander GN&C Technologies on Suborbital Rockets

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Seubert, Carl R.; Amzajerdian, Farzin; Bergh, Chuck; Kourchians, Ara; Restrepo, Carolina I.; Villapando, Carlos Y.; O'Neal, Travis V.; Robertson, Edward A.; Pierrottet, Diego

    2017-01-01

    The NASA COBALT Project (CoOperative Blending of Autonomous Landing Technologies) is developing and integrating new precision-landing Guidance, Navigation and Control (GN&C) technologies, along with developing a terrestrial flight-test platform for Technology Readiness Level (TRL) maturation. The current technologies include a third-generation Navigation Doppler Lidar (NDL) sensor for ultra-precise velocity and line-of-sight (LOS) range measurements, and the Lander Vision System (LVS) that provides passive-optical Terrain Relative Navigation (TRN) estimates of map-relative position. The COBALT platform is self-contained and includes the NDL and LVS sensors, a blending filter, a custom compute element, power unit, and communication system. The platform incorporates a structural frame that has been designed to integrate with the payload frame onboard the new Masten Xodiac vertical take-off, vertical landing (VTVL) terrestrial rocket vehicle. Ground integration and testing are underway, and terrestrial flight testing onboard Xodiac is planned for 2017 with two flight campaigns: one open-loop and one closed-loop.

  3. Evaluation of navigation interfaces in virtual environments

    NASA Astrophysics Data System (ADS)

    Mestre, Daniel R.

    2014-02-01

    When users are immersed in cave-like virtual reality systems, navigational interfaces have to be used when the size of the virtual environment becomes larger than the physical extent of the cave floor. However, using navigation interfaces, physically static users experience self-motion (visually-induced vection). As a consequence, sensorial incoherence between vision (indicating self-motion) and other proprioceptive inputs (indicating immobility) can make them feel dizzy and disoriented. We tested, in two experimental studies, different locomotion interfaces. The objective was twofold: testing spatial learning and cybersickness. In a first experiment, using first-person navigation with a flystick®, we tested the effect of sensorial aids, a spatialized sound or guiding arrows on the ground, attracting the user toward the goal of the navigation task. Results revealed that sensorial aids tended to negatively impact spatial learning. Moreover, subjects reported significant levels of cybersickness. In a second experiment, we tested whether such negative effects could be due to poorly controlled rotational motion during simulated self-motion. Subjects used a gamepad, in which rotational and translational displacements were independently controlled by two joysticks. Furthermore, we tested first- versus third-person navigation. No significant difference was observed between these two conditions. Overall, cybersickness tended to be lower, as compared to experiment 1, but the difference was not significant. Future research should further evaluate the hypothesis of the role of passively perceived optical flow in cybersickness by manipulating the virtual environment's structure. It also seems that video-gaming experience might be involved in the user's sensitivity to cybersickness.

  4. How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation.

    PubMed

    Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul

    2016-02-01

    The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
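
    The simple view-matching strategy examined here can be expressed as a rotational image difference: the current panoramic view is compared against a stored snapshot at every candidate heading, and the heading with the lowest pixel difference is taken as the direction estimate. A minimal sketch under assumed image dimensions and normalisation follows (illustrative only, not the paper's simulation code).

        import numpy as np

        def best_heading(current, stored):
            # current, stored: low-resolution panoramic images (rows x azimuth_columns),
            # already normalised.  Rotating the agent corresponds to a column shift.
            cols = current.shape[1]
            diffs = [np.mean((np.roll(current, s, axis=1) - stored) ** 2)
                     for s in range(cols)]
            shift = int(np.argmin(diffs))
            return shift * 360.0 / cols, diffs   # heading estimate [deg] and the full RIDF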

  5. COBALT: A GN&C Payload for Testing ALHAT Capabilities in Closed-Loop Terrestrial Rocket Flights

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Amzajerdian, Farzin; Hines, Glenn D.; O'Neal, Travis V.; Robertson, Edward A.; Seubert, Carl; Trawny, Nikolas

    2016-01-01

    The COBALT (CoOperative Blending of Autonomous Landing Technology) payload is being developed within NASA as a risk reduction activity to mature, integrate and test ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) systems targeted for infusion into near-term robotic and future human space flight missions. The initial COBALT payload instantiation is integrating the third-generation ALHAT Navigation Doppler Lidar (NDL) sensor, for ultra-high-precision velocity plus range measurements, with the passive-optical Lander Vision System (LVS) that provides Terrain Relative Navigation (TRN) global-position estimates. The COBALT payload will be integrated onboard a rocket-propulsive terrestrial testbed and will provide precise navigation estimates and guidance planning during two flight test campaigns in 2017 (one open-loop and one closed-loop). The NDL is targeting performance capabilities desired for future Mars and Moon Entry, Descent and Landing (EDL). The LVS is already baselined for TRN on the Mars 2020 robotic lander mission. The COBALT platform will provide NASA with a new risk-reduction capability to test integrated EDL Guidance, Navigation and Control (GN&C) components in closed-loop flight demonstrations prior to the actual mission EDL.

  6. Immune systems are not just for making you feel better: they are for controlling autonomous robots

    NASA Astrophysics Data System (ADS)

    Rosenblum, Mark

    2005-05-01

    The typical algorithm for robot autonomous navigation in off-road complex environments involves building a 3D map of the robot's surrounding environment using a 3D sensing modality such as stereo vision or active laser scanning, and generating an instantaneous plan to navigate around hazards. Although there has been steady progress using these methods, these systems suffer from several limitations that cannot be overcome with 3D sensing and planning alone. Geometric sensing alone has no ability to distinguish between compressible and non-compressible materials. As a result, these systems have difficulty in heavily vegetated environments and require sensitivity adjustments across different terrain types. On the planning side, these systems have no ability to learn from their mistakes and avoid problematic environmental situations on subsequent encounters. We have implemented an adaptive terrain classification system based on the Artificial Immune System (AIS) computational model, which is loosely based on the biological immune system, that combines various forms of imaging sensor inputs to produce a "feature labeled" image of the scene categorizing areas as benign or detrimental for autonomous robot navigation. Because of the qualities of the AIS computation model, the resulting system will be able to learn and adapt on its own through interaction with the environment by modifying its interpretation of the sensor data. The feature labeled results from the AIS analysis are inserted into a map and can then be used by a planner to generate a safe route to a goal point. The coupling of diverse visual cues with the malleable AIS computational model will lead to autonomous robotic ground vehicles that require less human intervention for deployment in novel environments and more robust operation as a result of the system's ability to improve its performance through interaction with the environment.

  7. Lunar Navigation with Libration Point Orbiters and GPS

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2004-01-01

    NASA is currently studying a Vision for Space Exploration based on spiral development of robotic and piloted missions to the moon and Mars, but research into how to perform such missions has continued ever since the first era of lunar exploration. One area of study that a number of researchers have pursued is libration point navigation and communication relay concepts. These concepts would appear to support many of NASA's current requirements for navigation and communications coverage for human and robotic spacecraft operating in lunar space and beyond. In trading libration point concepts against other options, designers must consider issues such as the number of spacecraft required to provide coverage, insertion and stationkeeping costs, power and data rate requirements, frequency allocations, and many others. The libration points, along with a typical cis-lunar trajectory, are equilibrium locations for an infinitesimal mass in the rotating coordinate system that follows the motion of two massive bodies in circular orbits with respect to their common barycenter. There are three co-linear points along the line connecting the massive bodies: between the bodies, beyond the secondary body, and beyond the primary body. The relative distances of these points along the line connecting the bodies depend on the mass ratios. There are also two points that form equilateral triangles with the massive bodies. Ideally, motion in the neighborhood of the co-linear points is unstable, while motion near the equilateral points is stable. However, in the real world, the motions are highly perturbed so that a satellite will require stationkeeping maneuvers.
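
    For orientation, the libration points described here are the equilibria of the effective potential of the circular restricted three-body problem; in nondimensional rotating coordinates with mass parameter \mu (a standard result included for reference, not taken from the paper):

        \Omega(x, y) = \tfrac{1}{2}\left(x^2 + y^2\right) + \frac{1 - \mu}{r_1} + \frac{\mu}{r_2},
        \qquad \nabla\Omega = 0 \ \text{at}\ L_1, \dots, L_5

    with r_1 and r_2 the distances to the primary and secondary bodies, the two triangular (equilateral) points at (x, y) = (1/2 - \mu, \pm\sqrt{3}/2), and the three collinear points given by real roots of a quintic along the x-axis.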

  8. Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors

    PubMed Central

    Everding, Lukas; Conradt, Jörg

    2018-01-01

    In order to safely navigate and orient in their local surroundings, autonomous systems need to rapidly extract and persistently track visual features from the environment. While there are many algorithms tackling those tasks for traditional frame-based cameras, these have to deal with the fact that conventional cameras sample their environment with a fixed frequency. Most prominently, the same features have to be found in consecutive frames and corresponding features then need to be matched using elaborate techniques, as any information between the two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasicontinuous stream of vision information analogous to the information stream created by the ganglion cells in mammalian retinae. All pixels of a DVS operate asynchronously without a periodic sampling rate and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution achieved by the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment which is observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges, which in man-made environments often form lines, lines were chosen as the feature to track. Our method is based on detecting planes of DVS address events in x-y-t-space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer, hence it is suitable for low-latency robotics. The efficacy and performance are evaluated on real-world data sets which show artificial structures in an office building, using event data for tracking and frame data for ground-truth estimation from a DAVIS240C sensor. PMID:29515386
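
    Detecting a line that moves across a DVS amounts to fitting a plane to the event cloud in (x, y, t) space, since a moving edge sweeps out such a plane. A least-squares sketch of that core step is shown below; the actual method adds event-driven updates, clustering, and outlier rejection, none of which are reproduced here.

        import numpy as np

        def fit_event_plane(x, y, t):
            # Fit t = a*x + b*y + c to DVS events by least squares; (a, b) encodes the
            # line's orientation and apparent speed, c its temporal offset.
            A = np.column_stack([x, y, np.ones_like(x)])
            coeffs, *_ = np.linalg.lstsq(A, t, rcond=None)
            return coeffs            # (a, b, c)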

  9. Development of a Night Vision Goggle Heads-Up Display for Paratrooper Guidance

    DTIC Science & Technology

    2008-06-01

    ... and GPS data [MIC07], requiring altitude, position, velocity, acceleration, and angular rates for navigation or control. An internal GPS receiver ... There are several programming languages that provide the operating capabilities for this program; languages like Java and C# provide ... acceleration, and angular rates. Figure 3.6 illustrates the MIDG hardware's input and output data. The sensor actually generates the INS data, which is ...

  10. Integrated Multi-Aperture Sensor and Navigation Fusion

    DTIC Science & Technology

    2010-02-01

    ... Vision, Springer-Verlag Inc., New York, 2004. [3] R. G. Brown and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, Third ... formulate Kalman filter vision/inertial measurement observables for other images without the need to know (or measure) their feature ranges. As compared ... Internal Data Fusion: multi-aperture/INS data fusion is formulated in the feature domain using the complementary Kalman filter methodology [3]. In this ...
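
    A complementary (error-state) Kalman filter in this setting estimates the slowly varying INS errors from the difference between INS-predicted and vision-derived quantities and subtracts them from the INS solution. The generic sketch below shows only that structure; the state, noise levels, and measurement model are placeholders rather than the report's actual formulation.

        import numpy as np

        class ComplementaryKF:
            # Error state: INS position and velocity errors (2D here for brevity).
            def __init__(self, q=1e-3, r=1.0):
                self.x = np.zeros(4)                 # [dpx, dpy, dvx, dvy]
                self.P = np.eye(4)
                self.Q = q * np.eye(4)
                self.R = r * np.eye(2)

            def predict(self, dt):
                F = np.eye(4)
                F[0, 2] = F[1, 3] = dt               # position error integrates velocity error
                self.x = F @ self.x
                self.P = F @ self.P @ F.T + self.Q

            def update(self, ins_pos, vision_pos):
                # Measurement: difference between the INS and vision position estimates.
                H = np.hstack([np.eye(2), np.zeros((2, 2))])
                z = ins_pos - vision_pos
                S = H @ self.P @ H.T + self.R
                K = self.P @ H.T @ np.linalg.inv(S)
                self.x += K @ (z - H @ self.x)
                self.P = (np.eye(4) - K @ H) @ self.P
                return ins_pos - self.x[:2]          # corrected position estimate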

  11. VISION: Illuminating the Pathways to a Clean Energy Economy - JISEA 2016 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-03-01

    This report demonstrates JISEA's successes over the past year and previews our coming work. The 2016 Annual Report highlights JISEA accomplishments in low-carbon electricity system research, international collaboration, clean energy manufacturing analysis, 21st century innovation strategy, and more. As we look to the coming year, JISEA will continue to navigate complex issues, present unique perspectives, and envision a clean energy economy.

  12. Divergence of dim-light vision among bats (order: Chiroptera) as estimated by molecular and electrophysiological methods.

    PubMed

    Liu, He-Qun; Wei, Jing-Kuan; Li, Bo; Wang, Ming-Shan; Wu, Rui-Qi; Rizak, Joshua D; Zhong, Li; Wang, Lu; Xu, Fu-Qiang; Shen, Yong-Yi; Hu, Xin-Tian; Zhang, Ya-Ping

    2015-06-23

    Dim-light vision is present in all bats, but is divergent among species. Old-World fruit bats (Pteropodidae) have fully developed eyes; the eyes of insectivorous bats are generally degraded, and these bats rely on well-developed echolocation. An exception is the Emballonuridae, which are capable of laryngeal echolocation but prefer to use vision for navigation and have normal eyes. In this study, integrated methods, comprising manganese-enhanced magnetic resonance imaging (MEMRI), f-VEP and RNA-seq, were utilized to verify the divergence. The results of MEMRI showed that Pteropodidae bats have a much larger superior colliculus (SC)/inferior colliculus (IC) volume ratio (3:1) than insectivorous bats (1:7). Furthermore, the absolute visual thresholds (log cd/m²·s) of Pteropodidae (-6.30 and -6.37) and Emballonuridae (-3.71) bats were lower than those of other insectivorous bats (-1.90). Finally, genes related to the visual pathway showed signs of positive selection, convergent evolution, upregulation, and similar gene expression patterns in Pteropodidae and Emballonuridae bats. Together, these results imply that Pteropodidae and Emballonuridae bats have more developed vision than the insectivorous bats and suggest that further research on bat behavior is warranted.

  13. Magnifying Smartphone Screen Using Google Glass for Low-Vision Users.

    PubMed

    Pundlik, Shrinivas; HuaQi Yi; Rui Liu; Peli, Eli; Gang Luo

    2017-01-01

    Magnification is a key accessibility feature used by low-vision smartphone users. However, small screen size can lead to loss of context and make interaction with magnified displays challenging. We hypothesize that controlling the viewport with head motion can be natural and help in gaining access to magnified displays. We implement this idea using a Google Glass that displays the magnified smartphone screenshots received in real time via Bluetooth. Instead of navigating with touch gestures on the magnified smartphone display, the users can view different screen locations by rotating their head, and remotely interact with the smartphone. It is equivalent to looking at a large virtual image through a head-contingent viewing port, in this case the Glass display with ~15° field of view. The system can transfer seven screenshots per second at 8× magnification, sufficient for tasks where the display content does not change rapidly. A pilot evaluation of this approach was conducted with eight normally sighted and four visually impaired subjects performing assigned tasks using calculator and music player apps. Results showed that performance in the calculation task was faster with the Glass than with the phone's built-in screen zoom. We conclude that head-contingent scanning control can be beneficial in navigating magnified small smartphone displays, at least for tasks involving familiar content layout.
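
    As a rough illustration of the head-contingent viewport idea described above, the hypothetical Python sketch below maps head yaw and pitch linearly onto the position of a viewing window that pans over the magnified screenshot; the angular ranges and the linear mapping are assumptions, not details from the paper.

        def viewport_origin(yaw_deg, pitch_deg, img_w, img_h, view_w, view_h,
                            max_yaw=30.0, max_pitch=20.0):
            """Map head yaw/pitch (degrees, relative to a straight-ahead reference)
            to the top-left corner of a view window panning over a magnified
            screenshot of size (img_w, img_h). Ranges and gains are assumed."""
            u = min(max(yaw_deg / max_yaw, -1.0), 1.0)      # normalized -1 .. 1
            v = min(max(pitch_deg / max_pitch, -1.0), 1.0)
            x = int(round((u + 1.0) / 2.0 * (img_w - view_w)))
            y = int(round((v + 1.0) / 2.0 * (img_h - view_h)))
            return x, y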

  14. Physics-based simulations of aerial attacks by peregrine falcons reveal that stooping at high speed maximizes catch success against agile prey.

    PubMed

    Mills, Robin; Hildenbrandt, Hanno; Taylor, Graham K; Hemelrijk, Charlotte K

    2018-04-01

    The peregrine falcon Falco peregrinus is renowned for attacking its prey from high altitude in a fast controlled dive called a stoop. Many other raptors employ a similar mode of attack, but the functional benefits of stooping remain obscure. Here we investigate whether, when, and why stooping promotes catch success, using a three-dimensional, agent-based modeling approach to simulate attacks of falcons on aerial prey. We simulate avian flapping and gliding flight using an analytical quasi-steady model of the aerodynamic forces and moments, parametrized by empirical measurements of flight morphology. The model-birds' flight control inputs are commanded by their guidance system, comprising a phenomenological model of its vision, guidance, and control. To intercept its prey, model-falcons use the same guidance law as missiles (pure proportional navigation); this assumption is corroborated by empirical data on peregrine falcons hunting lures. We parametrically vary the falcon's starting position relative to its prey, together with the feedback gain of its guidance loop, under differing assumptions regarding its errors and delay in vision and control, and for three different patterns of prey motion. We find that, when the prey maneuvers erratically, high-altitude stoops increase catch success compared to low-altitude attacks, but only if the falcon's guidance law is appropriately tuned, and only given a high degree of precision in vision and control. Remarkably, the optimal tuning of the guidance law in our simulations coincides closely with what has been observed empirically in peregrines. High-altitude stoops are shown to be beneficial because their high airspeed enables production of higher aerodynamic forces for maneuvering, and facilitates higher roll agility as the wings are tucked, each of which is essential to catching maneuvering prey at realistic response delays.
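
    For reference, pure proportional navigation, the guidance law named above, has a standard closed form (quoted here as general background, not taken from the paper): the attacker's turn rate is commanded in proportion to the line-of-sight rate,

        \dot{\gamma} = N \, \dot{\lambda},

    where \gamma is the attacker's flight-path angle, \lambda the line-of-sight angle to the target, and N the navigation gain (the feedback gain of the guidance loop varied in the study). Equivalently, the commanded lateral acceleration is a_{\perp} = N V \dot{\lambda}, with V the attacker's speed.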

  15. Spatial navigation by congenitally blind individuals.

    PubMed

    Schinazi, Victor R; Thrash, Tyler; Chebat, Daniel-Robert

    2016-01-01

    Spatial navigation in the absence of vision has been investigated from a variety of perspectives and disciplines. These different approaches have progressed our understanding of spatial knowledge acquisition by blind individuals, including their abilities, strategies, and corresponding mental representations. In this review, we propose a framework for investigating differences in spatial knowledge acquisition by blind and sighted people consisting of three longitudinal models (i.e., convergent, cumulative, and persistent). Recent advances in neuroscience and technological devices have provided novel insights into the different neural mechanisms underlying spatial navigation by blind and sighted people and the potential for functional reorganization. Despite these advances, there is still a lack of consensus regarding the extent to which locomotion and wayfinding depend on amodal spatial representations. This challenge largely stems from methodological limitations such as heterogeneity in the blind population and terminological ambiguity related to the concept of cognitive maps. Coupled with an over-reliance on potential technological solutions, the field has diffused into theoretical and applied branches that do not always communicate. Here, we review research on navigation by congenitally blind individuals with an emphasis on behavioral and neuroscientific evidence, as well as the potential of technological assistance. Throughout the article, we emphasize the need to disentangle strategy choice and performance when discussing the navigation abilities of the blind population. For further resources related to this article, please visit the WIREs website. © 2015 The Authors. WIREs Cognitive Science published by Wiley Periodicals, Inc.

  16. Electromagnetic Navigation Bronchoscopy-directed Pleural Tattoo to Aid Surgical Resection of Peripheral Pulmonary Lesions.

    PubMed

    Tay, Jun H; Wallbridge, Peter D; Larobina, Marco; Russell, Prudence A; Irving, Louis B; Steinfort, Daniel P

    2016-07-01

    Limited (wedge) resection of pulmonary lesions is frequently performed as a diagnostic/therapeutic procedure. Some lesions may be difficult to locate thoracoscopically with conversion to open thoracotomy or incomplete resection being potential limitations to this approach. Multiple methods have been described to aid video-assisted thoracoscopic surgical (VATS) wedge resection of pulmonary nodules, including hookwire localization, percutaneous tattoo, or intraoperative ultrasound. We report on our experience using electromagnetic navigation bronchoscopic dye marking of small subpleural lesions to aid VATS wedge resection. A retrospective cohort study of consecutive patients undergoing VATS wedge resection of peripheral lesions. Preoperative bronchoscopy with electromagnetic navigation was utilized to guide a 25 G needle to within/adjacent to the target lesion with injection of 1 mL of methylene blue or indigo carmine under fluoroscopic vision. Six patients underwent bronchoscopic marking of peripheral pulmonary lesions, navigation deemed successful in all patients, with no procedural complications. Surgery was performed within 24 hours of bronchoscopic marking. Pleural staining by dye was visible thoracoscopically in all 6 lesions either adjacent to or overlying the lesion. All lesions were fully excised with wedge resection. Pathologic examination confirmed accuracy of dye staining. Electromagnetic navigation bronchoscopic dye marking of peripheral lesions is feasible, without complications commonly associated with percutaneous marking procedures. Further experience is required but early findings suggest that this method may have utility in aiding minimally invasive resection of small subpleural lesions.

  17. Vision-based flight control in the hawkmoth Hyles lineata

    PubMed Central

    Windsor, Shane P.; Bomphrey, Richard J.; Taylor, Graham K.

    2014-01-01

    Vision is a key sensory modality for flying insects, playing an important role in guidance, navigation and control. Here, we use a virtual-reality flight simulator to measure the optomotor responses of the hawkmoth Hyles lineata, and use a published linear time-invariant model of the flight dynamics to interpret the function of the measured responses in flight stabilization and control. We recorded the forces and moments produced during oscillation of the visual field in roll, pitch and yaw, varying the temporal frequency, amplitude or spatial frequency of the stimulus. The moths’ responses were strongly dependent upon contrast frequency, as expected if the optomotor system uses correlation-type motion detectors to sense self-motion. The flight dynamics model predicts that roll angle feedback is needed to stabilize the lateral dynamics, and that a combination of pitch angle and pitch rate feedback is most effective in stabilizing the longitudinal dynamics. The moths’ responses to roll and pitch stimuli coincided qualitatively with these functional predictions. The moths produced coupled roll and yaw moments in response to yaw stimuli, which could help to reduce the energetic cost of correcting heading. Our results emphasize the close relationship between physics and physiology in the stabilization of insect flight. PMID:24335557

  18. Evaluation of Alternate Concepts for Synthetic Vision Flight Displays With Weather-Penetrating Sensor Image Inserts During Simulated Landing Approaches

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.; Nold, Dean E.

    2003-01-01

    A simulation study was conducted in 1994 at Langley Research Center that used 12 commercial airline pilots repeatedly flying complex Microwave Landing System (MLS)-type approaches to parallel runways under Category IIIc weather conditions. Two sensor insert concepts of 'Synthetic Vision Systems' (SVS) were used in the simulated flights, with a more conventional electro-optical display (similar to a Head-Up Display with raster capability for sensor imagery), flown under less restrictive visibility conditions, used as a control condition. The SVS concepts combined the sensor imagery with a computer-generated image (CGI) of an out-the-window scene based on an onboard airport database. Various scenarios involving runway traffic incursions (taxiing aircraft and parked fuel trucks) and navigational system position errors (both static and dynamic) were used to assess the pilots' ability to manage the approach task with the display concepts. The two SVS sensor insert concepts contrasted the simple overlay of sensor imagery on the CGI scene without additional image processing (the SV display) to the complex integration (the AV display) of the CGI scene with pilot-decision aiding using both object and edge detection techniques for detection of obstacle conflicts and runway alignment errors.

  19. A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application.

    PubMed

    Vivacqua, Rafael; Vassallo, Raquel; Martins, Felipe

    2017-10-16

    Autonomous driving on public roads requires precise localization within the range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive, and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost sensor architecture and data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.
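
    The fusion idea summarized above, accumulating short-range lane-marking detections into a longer rear-view perception using dead reckoning, can be sketched as follows. This is a hypothetical Python illustration under simple assumptions (unicycle odometry, detections expressed in the vehicle frame), not the authors' implementation.

        import math

        def dead_reckoning_step(pose, v, yaw_rate, dt):
            """Propagate the vehicle pose (x, y, heading) with a unicycle model."""
            x, y, th = pose
            th += yaw_rate * dt
            x += v * dt * math.cos(th)
            y += v * dt * math.sin(th)
            return (x, y, th)

        def accumulate_lane_points(pose, detections_vehicle_frame, lane_map):
            """Transform short-range lane-marking detections (vehicle frame) into
            the odometry frame and append them to the accumulated rear-view map."""
            x, y, th = pose
            c, s = math.cos(th), math.sin(th)
            for px, py in detections_vehicle_frame:
                lane_map.append((x + c * px - s * py, y + s * px + c * py))
            return lane_map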

  20. Vision-based flight control in the hawkmoth Hyles lineata.

    PubMed

    Windsor, Shane P; Bomphrey, Richard J; Taylor, Graham K

    2014-02-06

    Vision is a key sensory modality for flying insects, playing an important role in guidance, navigation and control. Here, we use a virtual-reality flight simulator to measure the optomotor responses of the hawkmoth Hyles lineata, and use a published linear time-invariant model of the flight dynamics to interpret the function of the measured responses in flight stabilization and control. We recorded the forces and moments produced during oscillation of the visual field in roll, pitch and yaw, varying the temporal frequency, amplitude or spatial frequency of the stimulus. The moths' responses were strongly dependent upon contrast frequency, as expected if the optomotor system uses correlation-type motion detectors to sense self-motion. The flight dynamics model predicts that roll angle feedback is needed to stabilize the lateral dynamics, and that a combination of pitch angle and pitch rate feedback is most effective in stabilizing the longitudinal dynamics. The moths' responses to roll and pitch stimuli coincided qualitatively with these functional predictions. The moths produced coupled roll and yaw moments in response to yaw stimuli, which could help to reduce the energetic cost of correcting heading. Our results emphasize the close relationship between physics and physiology in the stabilization of insect flight.

  1. A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application

    PubMed Central

    Vassallo, Raquel

    2017-01-01

    Autonomous driving on public roads requires precise localization within the range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive, and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost sensor architecture and data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation. PMID:29035334

  2. Results of PRISMA/FFIORD extended mission and applicability to future formation flying and active debris removal missions

    NASA Astrophysics Data System (ADS)

    Delpech, Michel; Berges, Jean-Claude; Karlsson, Thomas; Malbet, Fabien

    2013-07-01

    CNES performed several experiments during the extended PRISMA mission, which started in August 2011. A first session in October 2011 addressed two objectives: 1) demonstrate angles-only navigation to rendezvous with a non-cooperative object; 2) exercise transitions between RF-based and vision-based control during final formation acquisition. A complementary experiment in September 2012 mimicked a future astrometry mission and implemented the manoeuvres required to point the axis formed by the two satellites toward a celestial target and keep it fixed during an observation period. In the first sections, the paper presents the experiment motivations, describes its main design features, including the evolutions of the guidance and control algorithms, and provides a synthesis of the most significant results along with a discussion of the lessons learned. In the last part, the paper discusses the applicability of these experiment results to an active debris removal mission concept that is currently being studied.

  3. Navigating the current landscape of clinical genetic testing for inherited retinal dystrophies.

    PubMed

    Lee, Kristy; Garg, Seema

    2015-04-01

    Inherited eye disorders are a significant cause of vision loss. Genetic testing can be particularly helpful for patients with inherited retinal dystrophies because of genetic heterogeneity and overlapping phenotypes. The need to identify a molecular diagnosis for retinal dystrophies is particularly important in the era of developing novel gene therapy-based treatments, such as the RPE65 gene-based clinical trials and others on the horizon, as well as recent advances in reproductive options. The introduction of massively parallel sequencing technologies has significantly advanced the identification of novel gene candidates and has expanded the landscape of genetic testing. In a relatively short time clinical medicine has progressed from limited testing options to a plethora of choices ranging from single-gene testing to whole-exome sequencing. This article outlines currently available genetic testing and factors to consider when selecting appropriate testing for patients with inherited retinal dystrophies.

  4. Adaptive learning compressive tracking based on Markov location prediction

    NASA Astrophysics Data System (ADS)

    Zhou, Xingyu; Fu, Dongmei; Yang, Tao; Shi, Yanan

    2017-03-01

    Object tracking is an interdisciplinary research topic in image processing, pattern recognition, and computer vision which has theoretical and practical application value in video surveillance, virtual reality, and automatic navigation. Compressive tracking (CT) has many advantages, such as efficiency and accuracy. However, in the presence of object occlusion, abrupt motion and blur, similar objects, or scale changes, CT suffers from tracking drift. We propose Markov-based object location prediction to obtain an initial estimate of the object position. CT is then used to locate the object accurately, and an adaptive classifier-parameter updating strategy is given based on the confidence map. At the same time, scale features are extracted according to the object location, which makes it possible to deal with object scale variations effectively. Experimental results show that the proposed algorithm has better tracking accuracy and robustness than current advanced algorithms and achieves real-time performance.

  5. Development of a GPS/INS/MAG navigation system and waypoint navigator for a VTOL UAV

    NASA Astrophysics Data System (ADS)

    Meister, Oliver; Mönikes, Ralf; Wendel, Jan; Frietsch, Natalie; Schlaile, Christian; Trommer, Gert F.

    2007-04-01

    Unmanned aerial vehicles (UAVs) can be used for versatile surveillance and reconnaissance missions. If a UAV is capable of flying automatically on a predefined path, the range of possible applications is widened significantly. This paper addresses the development of an integrated GPS/INS/MAG navigation system and a waypoint navigator for a small vertical take-off and landing (VTOL) unmanned four-rotor helicopter with a take-off weight below 1 kg. The core of the navigation system consists of low cost inertial sensors which are continuously aided with GPS, magnetometer compass, and barometric height information. Due to the fact that the yaw angle becomes unobservable during hovering flight, the integration with a magnetic compass is mandatory. This integration must be robust with respect to errors caused by the deviation of the terrestrial magnetic field and interference from surrounding electronic devices as well as ferrous metals. The described integration concept with a Kalman filter overcomes the problem that erroneous magnetic measurements lead to an attitude error in the roll and pitch axes. The algorithm provides long-term stable navigation information even during GPS outages, which is mandatory for the flight control of the UAV. In the second part of the paper the guidance algorithms are discussed in detail. These algorithms allow the UAV to operate in a semi-autonomous position-hold mode as well as a completely autonomous waypoint mode. In the position-hold mode the helicopter maintains its position regardless of wind disturbances, which eases the pilot's job during hold-and-stare missions. The autonomous waypoint navigator enables flight beyond visual range and beyond the range of the radio link. Flight test results of the implemented modes of operation are shown.
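
    To illustrate the kind of magnetometer aiding described above, the Python sketch below shows a one-state yaw filter that integrates the gyro and corrects the estimate with magnetometer-derived heading measurements. It is a deliberately simplified stand-in for the full GPS/INS/MAG Kalman filter; all gains and noise values are assumed.

        import math

        def wrap(a):
            """Wrap an angle to (-pi, pi]."""
            return math.atan2(math.sin(a), math.cos(a))

        def yaw_filter_step(yaw, P, gyro_z, mag_yaw, dt, q=1e-4, r=0.05):
            """One predict/update cycle of a scalar Kalman filter on yaw.

            yaw, P  : current yaw estimate [rad] and its variance
            gyro_z  : measured yaw rate [rad/s]
            mag_yaw : heading derived from the magnetometer [rad]
            q, r    : assumed process and measurement noise variances
            """
            # predict: integrate the gyro
            yaw = wrap(yaw + gyro_z * dt)
            P = P + q
            # update: correct with the magnetometer heading
            K = P / (P + r)
            yaw = wrap(yaw + K * wrap(mag_yaw - yaw))
            P = (1.0 - K) * P
            return yaw, P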

  6. Guidance, Navigation, and Control Technology Assessment for Future Planetary Science Missions

    NASA Technical Reports Server (NTRS)

    Beauchamp, Pat; Cutts, James; Quadrelli, Marco B.; Wood, Lincoln J.; Riedel, Joseph E.; McHenry, Mike; Aung, MiMi; Cangahuala, Laureano A.; Volpe, Rich

    2013-01-01

    Future planetary explorations envisioned by the National Research Council's (NRC's) report titled Vision and Voyages for Planetary Science in the Decade 2013-2022, developed for NASA Science Mission Directorate (SMD) Planetary Science Division (PSD), seek to reach targets of broad scientific interest across the solar system. This goal requires new capabilities such as innovative interplanetary trajectories, precision landing, operation in close proximity to targets, precision pointing, multiple collaborating spacecraft, multiple target tours, and advanced robotic surface exploration. Advancements in Guidance, Navigation, and Control (GN&C) and Mission Design in the areas of software, algorithm development and sensors will be necessary to accomplish these future missions. This paper summarizes the key GN&C and mission design capabilities and technologies needed for future missions pursuing SMD PSD's scientific goals.

  7. Flight Testing of Terrain-Relative Navigation and Large-Divert Guidance on a VTVL Rocket

    NASA Technical Reports Server (NTRS)

    Trawny, Nikolas; Benito, Joel; Tweddle, Brent; Bergh, Charles F.; Khanoyan, Garen; Vaughan, Geoffrey M.; Zheng, Jason X.; Villalpando, Carlos Y.; Cheng, Yang; Scharf, Daniel P.; hide

    2015-01-01

    Since 2011, the Autonomous Descent and Ascent Powered-Flight Testbed (ADAPT) has been used to demonstrate advanced descent and landing technologies onboard the Masten Space Systems (MSS) Xombie vertical-takeoff, vertical-landing suborbital rocket. The current instantiation of ADAPT is a stand-alone payload comprising sensing and avionics for terrain-relative navigation and fuel-optimal onboard planning of large divert trajectories, thus providing complete pin-point landing capabilities needed for planetary landers. To this end, ADAPT combines two technologies developed at JPL, the Lander Vision System (LVS), and the Guidance for Fuel Optimal Large Diverts (G-FOLD) software. This paper describes the integration and testing of LVS and G-FOLD in the ADAPT payload, culminating in two successful free flight demonstrations on the Xombie vehicle conducted in December 2014.

  8. Automated site characterization for robotic sample acquisition systems

    NASA Astrophysics Data System (ADS)

    Scholl, Marija S.; Eberlein, Susan J.

    1993-04-01

    A mobile, semiautonomous vehicle with multiple sensors and on-board intelligence is proposed for performing preliminary scientific investigations on extraterrestrial bodies prior to human exploration. Two technologies, a hybrid optical-digital computer system based on optical correlator technology and an image and instrument data analysis system, provide complementary capabilities that might be part of an instrument package for an intelligent robotic vehicle. The hybrid digital-optical vision system could perform real-time image classification tasks using an optical correlator with programmable matched filters under control of a digital microcomputer. The data analysis system would analyze visible and multiband imagery to extract mineral composition and textural information for geologic characterization. Together these technologies would support the site characterization needs of a robotic vehicle for both navigational and scientific purposes.

  9. Multi-Sensor Person Following in Low-Visibility Scenarios

    PubMed Central

    Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier

    2010-01-01

    Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform a person following in low visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment. PMID:22163506

  10. Multi-sensor person following in low-visibility scenarios.

    PubMed

    Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier

    2010-01-01

    Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform a person following in low visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment.

  11. Obstacle Detection using Binocular Stereo Vision in Trajectory Planning for Quadcopter Navigation

    NASA Astrophysics Data System (ADS)

    Bugayong, Albert; Ramos, Manuel, Jr.

    2018-02-01

    Quadcopters are among the most versatile unmanned aerial vehicles due to their vertical take-off and landing as well as hovering capabilities. This research uses the Sum of Absolute Differences (SAD) block matching algorithm for stereo vision. A complementary filter was used in sensor fusion to combine quadcopter orientation data obtained from the accelerometer and the gyroscope. PID control was implemented for the motor control and the VFH+ algorithm was implemented for trajectory planning. Results show that the quadcopter was able to consistently actuate itself in the roll, yaw and z axes during obstacle avoidance, but was found to be inconsistent in the pitch axis during forward and backward maneuvers due to the significant noise present in the pitch-axis angle outputs compared with the roll and yaw axes.
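
    Two of the building blocks named above, SAD block matching and the complementary filter, have well-known minimal forms. The Python sketch below is a generic illustration of both; the block size, disparity range, and filter coefficient are assumptions, not values from the paper.

        import numpy as np

        def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
            """Fuse a gyro rate with an accelerometer-derived angle for one axis.
            The gyro integral dominates at short time scales (alpha close to 1),
            while the accelerometer angle removes long-term drift."""
            return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

        def sad_disparity(left, right, row, col, block=5, max_disp=32):
            """Sum-of-absolute-differences block matching for one pixel of a
            rectified grayscale stereo pair; (row, col) must lie at least
            block//2 pixels from the image border. Returns the disparity that
            minimizes the SAD cost along the epipolar row."""
            h = block // 2
            ref = left[row - h:row + h + 1, col - h:col + h + 1].astype(np.int32)
            best_d, best_cost = 0, None
            for d in range(0, min(max_disp, col - h) + 1):
                cand = right[row - h:row + h + 1,
                             col - d - h:col - d + h + 1].astype(np.int32)
                cost = np.abs(ref - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_d, best_cost = d, cost
            return best_d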

  12. Study of robot landmark recognition with complex background

    NASA Astrophysics Data System (ADS)

    Huang, Yuqing; Yang, Jia

    2007-12-01

    Perceiving and recognising environmental characteristics is of great importance for assisting a robot in path planning, position navigation, and task execution. To solve the problem of monocular-vision-oriented landmark recognition for a mobile intelligent robot moving against a complex background, a nested region-growing algorithm is proposed that fuses a priori color information and grows from the current maximum convergence center, providing localization that is invariant to changes in position, scale, rotation, jitter, and weather conditions. Firstly, a novel experimental threshold based on the RGB vision model is used for the first image segmentation, in which some objects and partial scenes with colors similar to the landmarks are detected together with the landmarks. Secondly, with the current maximum convergence center of the segmented image as each growing seed point, the region-growing algorithm establishes several Regions of Interest (ROI) in order. According to shape characteristics, a quick and effective contour analysis based on primitive elements is applied to decide whether the current ROI should be retained or discarded after each region growing; each ROI is then judged initially and positioned. When the position information is fed back to the gray image, the whole landmark is extracted accurately by a second segmentation of the local image restricted to the landmark area. Finally, landmarks are recognised by a Hopfield neural network. Results from experiments on a large number of images with both campus and urban-district backgrounds show the effectiveness of the proposed algorithm.
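
    The seeded region-growing step described above can be illustrated with a minimal grower; the Python sketch below is a generic illustration (4-connected growth with a fixed color-distance threshold), not the paper's nested algorithm.

        import numpy as np
        from collections import deque

        def region_grow(img, seed, thresh=30.0):
            """Grow a region from `seed` (row, col) in an RGB image `img`,
            accepting 4-connected neighbours whose color distance to the
            seed color is below `thresh`. Returns a boolean mask."""
            h, w = img.shape[:2]
            seed_color = img[seed].astype(np.float32)
            mask = np.zeros((h, w), dtype=bool)
            queue = deque([seed])
            mask[seed] = True
            while queue:
                r, c = queue.popleft()
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                        if np.linalg.norm(img[nr, nc].astype(np.float32) - seed_color) < thresh:
                            mask[nr, nc] = True
                            queue.append((nr, nc))
            return mask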

  13. Three visions of doctoring: a Gadamerian dialogue.

    PubMed

    Chin-Yee, Benjamin; Messinger, Atara; Young, L Trevor

    2018-04-16

    Medicine in the twenty-first century faces an 'identity crisis,' as it grapples with the emergence of various 'ways of knowing,' from evidence-based and translational medicine, to narrative-based and personalized medicine. While each of these approaches has uniquely contributed to the advancement of patient care, this pluralism is not without tension. Evidence-based medicine is not necessarily individualized; personalized medicine may be individualized but is not necessarily person-centered. As novel technologies and big data continue to proliferate today, the focus of medical practice is shifting away from the dialogic encounter between doctor and patient, threatening the loss of humanism that many view as integral to medicine's identity. As medical trainees, we struggle to synthesize medicine's diverse and evolving 'ways of knowing' and to create a vision of doctoring that integrates new forms of medical knowledge into the provision of person-centered care. In search of answers, we turned to twentieth-century philosopher Hans-Georg Gadamer, whose unique outlook on "health" and "healing," we believe, offers a way forward in navigating medicine's 'messy pluralism.' Drawing inspiration from Gadamer's emphasis on dialogue and 'practical wisdom' (phronesis), we initiated a dialogue with the dean of our medical school to address the question of how medical trainees and practicing clinicians alike can work to create a more harmonious pluralism in medicine today. We propose that implementing a pluralistic approach ultimately entails 'bridging' the current divide between scientific theory and the practical art of healing, and involves an iterative and dialogic process of asking questions and seeking answers.

  14. Image Understanding Workshop. Proceedings of a Workshop Held in Los Angeles, California on 23-25 February 1987. Volume 1

    DTIC Science & Technology

    1987-02-01

    Vehicle Second 1. Proc. IEEE , Workshop on Motion: Representation and Quarterly Report ," Martin Marietta , Denver, Colorado Analysis, Kiwah Island Resort...Grenmbani Mitch Nathan, John D. Bradstreet; Martin Marietta Denver Aerospace ............ 127 "Vision and Navigation for the Carnegie Mellon Navlab...pp. 409-414. To support both reasoning and feature extraction at real time speeds, we require specialized hardware. The [4] Martin Marietta Denver

  15. New Directions in the Detection of Polarized Light

    DTIC Science & Technology

    2011-01-01

    vision and draws from anatomical and behavioural studies as well as optics . Horvath et al. [3] polish this and the whole volume off with a fascinating...example of biomimicry . It is important to recognize that all work presented in this special issue has grown from the inspirational efforts of our...atmospheric optical prerequisites allowing polarimetric navigation by Viking seafarers. Phil. Trans. R. Soc. B 366, 772–782. (doi:10.1098/rstb.2010.0194) 4

  16. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Artificial intelligence concepts are applied to robotics. Artificial neural networks, expert systems and laser imaging techniques for autonomous space robots are being studied. A computer graphics laser range finder simulator developed by Wu has been used by Weiland and Norwood to study use of artificial neural networks for path planning and obstacle avoidance. Interest is expressed in applications of CLIPS, NETS, and Fuzzy Control. These applications are applied to robot navigation.

  17. Occlusions in Camera Networks and Vision: The Bridge between Topological Recovery and Metric Reconstruction

    DTIC Science & Technology

    2009-05-18

    serves as a didactic tool to understand the information required for the approach to coordinate free tracking and navigation problems. Observe that the...layout (left), and in the CN -Complex (right). These paths can be compared by using the algebraic topological tools covered in chapter 2. . . . 34 3.9...right). mathematical tools necessary to make our discussion formal; chapter 3 will present the construction of a simplicial representation called

  18. Range Image Processing for Local Navigation of an Autonomous Land Vehicle.

    DTIC Science & Technology

    1986-09-01

    such as doing long term exploration missions on the surface of the planets which mankind may wish to investigate . Certainly, mankind will soon return...intelligence programming, walking technology, and vision sensors to name but a few. 10 The purpose of this thesis will be to investigate , by simulation...bitmap graphics, both of which are important to this simulation. Finally, the methodology for displaying the symbolic information generated by the

  19. Center for Neural Engineering: applications of pulse-coupled neural networks

    NASA Astrophysics Data System (ADS)

    Malkani, Mohan; Bodruzzaman, Mohammad; Johnson, John L.; Davis, Joel

    1999-03-01

    The Pulse-Coupled Neural Network (PCNN) is an oscillatory neural network model in which cells group together, and groups interact among themselves, to form an output time series (the number of cells that fire at each input presentation, also called an `icon'); the grouping is based on the synchronicity of oscillations. Recent work by Johnson and others demonstrated the functional capabilities of networks containing such elements for invariant feature extraction using intensity maps. The PCNN thus presents itself as a more biologically plausible model with solid functional potential. This paper presents a summary of several projects, and their results, in which we successfully applied the PCNN. In project one, the PCNN was applied to object recognition and classification through a robotic vision system. The features (icons) generated by the PCNN were then fed into a feedforward neural network for classification. In project two, we developed techniques for sensory data fusion. The PCNN algorithm was implemented and tested on a B14 mobile robot. The PCNN-based features were extracted from the images taken from the robot vision system and used in conjunction with the map generated by data fusion of the sonar and wheel-encoder data for the navigation of the mobile robot. In our third project, we applied the PCNN to speaker recognition. The spectrogram images of speech signals are fed into the PCNN to produce invariant feature icons, which are then fed into a feedforward neural network for speaker identification.
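
    The icon generation described above follows the standard PCNN iteration; the Python code below is a generic textbook-style sketch (all decay constants and gains are assumed), included only to make the "number of cells firing per step" idea concrete.

        import numpy as np

        def pcnn_icon(S, steps=30, beta=0.2,
                      aF=0.1, aL=0.3, aE=0.5, VF=0.1, VL=0.2, VE=20.0):
            """Run a basic pulse-coupled neural network on a normalized image S
            (values in [0, 1]) and return the 'icon': the number of neurons
            firing at each step. Parameter values are illustrative only."""
            F = np.zeros_like(S, dtype=float)   # feeding compartment
            L = np.zeros_like(F)                # linking compartment
            E = np.ones_like(F)                 # dynamic threshold
            Y = np.zeros_like(F)                # binary firing output
            icon = []
            for _ in range(steps):
                # linking input: sum of the 4-connected neighbours' previous firings
                link = (np.roll(Y, 1, 0) + np.roll(Y, -1, 0) +
                        np.roll(Y, 1, 1) + np.roll(Y, -1, 1))
                F = np.exp(-aF) * F + VF * link + S
                L = np.exp(-aL) * L + VL * link
                U = F * (1.0 + beta * L)        # internal activity
                Y = (U > E).astype(float)       # pulse generation
                E = np.exp(-aE) * E + VE * Y    # threshold decay and reset
                icon.append(int(Y.sum()))
            return icon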

  20. Research on position and orientation measurement method for roadheader based on vision/INS

    NASA Astrophysics Data System (ADS)

    Yang, Jinyong; Zhang, Guanqin; Huang, Zhe; Ye, Yaozhong; Ma, Bowen; Wang, Yizhong

    2018-01-01

    The roadheader, a special type of equipment for large tunnel excavation, is widely used in coal mines. It is one of the main mechanical and electrical systems for mine production and is regarded as the core equipment for underground tunnel-driving construction. With the wide application of rapid driving systems, underground tunnel-driving methods with a higher level of automation are required. In this respect, real-time position and orientation measurement for the roadheader is one of the most important research topics. To solve the problem of automatic, real-time position and orientation measurement for roadheaders, this paper analyses and compares the features of several existing measuring methods and then proposes a new method based on the combination of monocular vision and a strapdown inertial navigation system (SINS). The method realizes five degree-of-freedom (DOF) measurement of the real-time position and orientation of the roadheader and has been verified on rapid excavation equipment in the Daliuta coal mine. Experimental results show that the accuracy of orientation measurement is better than 0.1°, the standard deviation of static drift is better than 0.25°, and the accuracy of position measurement is better than 1 cm. This demonstrates that the method can be used for real-time position and orientation measurement of roadheaders and has broad prospects in coal mine engineering.

  1. SPARTAN: A High-Fidelity Simulation for Automated Rendezvous and Docking Applications

    NASA Technical Reports Server (NTRS)

    Turbe, Michael A.; McDuffie, James H.; DeKock, Brandon K.; Betts, Kevin M.; Carrington, Connie K.

    2007-01-01

    bd Systems (a subsidiary of SAIC) has developed the Simulation Package for Autonomous Rendezvous Test and ANalysis (SPARTAN), a high-fidelity on-orbit simulation featuring multiple six-degree-of-freedom (6DOF) vehicles. SPARTAN has been developed in a modular fashion in Matlab/Simulink to test next-generation automated rendezvous and docking guidance, navigation, and control algorithms for NASA's new Vision for Space Exploration. SPARTAN includes autonomous state-based mission manager algorithms responsible for sequencing the vehicle through various flight phases based on on-board sensor inputs, and closed-loop guidance algorithms including Lambert transfers, Clohessy-Wiltshire maneuvers, and glideslope approaches. The guidance commands are implemented using an integrated translation and attitude control system to provide 6DOF control of each vehicle in the simulation. SPARTAN also includes high-fidelity representations of a variety of absolute and relative navigation sensors that may be used for NASA missions, including radio frequency, lidar, and video-based rendezvous sensors. Proprietary navigation sensor fusion algorithms have been developed that allow the integration of these sensor measurements through an extended Kalman filter framework to create a single optimal estimate of the relative state of the vehicles. SPARTAN provides capability for Monte Carlo dispersion analysis, allowing for rigorous evaluation of the performance of the complete proposed AR&D system, including software, sensors, and mechanisms. SPARTAN also supports hardware-in-the-loop testing through conversion of the algorithms to C code using Real-Time Workshop in order to be hosted in a mission computer engineering development unit running an embedded real-time operating system. SPARTAN also contains both a runtime TCP/IP socket interface and post-processing compatibility with bdStudio, a visualization tool developed by bd Systems, allowing for intuitive evaluation of simulation results. A description of the SPARTAN architecture and capabilities is provided, along with details on the models and algorithms utilized and results from representative missions.
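
    For background, the Clohessy-Wiltshire maneuvers mentioned above are based on the standard linearized equations of relative motion about a circular target orbit (quoted here in their usual form, not from the SPARTAN documentation):

        \ddot{x} - 3n^{2}x - 2n\dot{y} = 0, \qquad
        \ddot{y} + 2n\dot{x} = 0, \qquad
        \ddot{z} + n^{2}z = 0,

    where n is the mean motion of the target orbit and (x, y, z) are the radial, along-track, and cross-track components of the chaser's position relative to the target.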

  2. Vision can recalibrate the vestibular reafference signal used to re-establish postural equilibrium following a platform perturbation.

    PubMed

    Toth, Adam J; Harris, Laurence R; Zettel, John; Bent, Leah R

    2017-02-01

    Visuo-vestibular recalibration, in which visual information is used to alter the interpretation of vestibular signals, has been shown to influence both oculomotor control and navigation. Here we investigate whether vision can recalibrate the vestibular feedback used during the re-establishment of equilibrium following a perturbation. The perturbation recovery responses of nine participants were examined following exposure to a period of 11 s of galvanic vestibular stimulation (GVS). During GVS in VISION trials, occlusion spectacles provided 4 s of visual information that enabled participants to correct for the GVS-induced tilt and associate this asymmetric vestibular signal with a visually provided 'upright'. NoVISION trials had no such visual experience. Participants used the visual information to assist in realigning their posture compared to when visual information was not provided (p < 0.01). The initial recovery response to a platform perturbation was not impacted by whether vision had been provided during the preceding GVS, as determined by peak centre of mass and pressure deviations (p = 0.09). However, after using vision to reinterpret the vestibular signal during GVS, final centre of mass and pressure equilibrium positions were significantly shifted compared to trials in which vision was not available (p < 0.01). These findings support previous work identifying a prominent role of vestibular input for re-establishing postural equilibrium following a perturbation. Our work is the first to highlight the capacity for visual feedback to recalibrate the vertical interpretation of vestibular reafference for re-establishing equilibrium following a perturbation. This demonstrates the rapid adaptability of the vestibular reafference signal for postural control.

  3. Web-based system for surgical planning and simulation

    NASA Astrophysics Data System (ADS)

    Eldeib, Ayman M.; Ahmed, Mohamed N.; Farag, Aly A.; Sites, C. B.

    1998-10-01

    The growing scientific knowledge and rapid progress in medical imaging techniques has led to an increasing demand for better and more efficient methods of remote access to high-performance computer facilities. This paper introduces a web-based telemedicine project that provides interactive tools for surgical simulation and planning. The presented approach makes use of client-server architecture based on new internet technology where clients use an ordinary web browser to view, send, receive and manipulate patients' medical records while the server uses the supercomputer facility to generate online semi-automatic segmentation, 3D visualization, surgical simulation/planning and neuroendoscopic procedures navigation. The supercomputer (SGI ONYX 1000) is located at the Computer Vision and Image Processing Lab, University of Louisville, Kentucky. This system is under development in cooperation with the Department of Neurological Surgery, Alliant Health Systems, Louisville, Kentucky. The server is connected via a network to the Picture Archiving and Communication System at Alliant Health Systems through a DICOM standard interface that enables authorized clients to access patients' images from different medical modalities.

  4. Libration Point Navigation Concepts Supporting Exploration Vision

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Folta, David C.; Moreau, Michael C.; Gramling, Cheryl J.

    2004-01-01

    Farquhar described several libration point navigation concepts that would appear to support NASA's current exploration vision. One concept is a Lunar Relay Satellite operating in the vicinity of Earth-Moon L2, providing Earth-to-lunar far-side and long-range surface-to-surface navigation and communications capability. Reference [1] lists several advantages of such a system in comparison to a lunar orbiting relay satellite constellation. Among these are one or two vs. many satellites for coverage, simplified acquisition and tracking due to very low relative motion, much longer contact times, and simpler antenna pointing. An obvious additional advantage of such a system is that uninterrupted links to Earth avoid performing critical maneuvers "in the blind." Another concept described is the use of Earth-Moon L1 for lunar orbit rendezvous, rather than low lunar orbit as was done for Apollo. This rendezvous technique would avoid the large plane change and high fuel cost associated with high latitude landing sites and long stay times. Earth-Moon L1 also offers unconstrained launch windows from the lunar surface. Farquhar claims this technique requires only slightly higher fuel cost than low lunar orbit rendezvous for short-stay equatorial landings. Farquhar also describes an Interplanetary Transportation System that would use libration points as terminals for an interplanetary shuttle. This approach would offer increased operational flexibility in terms of launch windows, rendezvous, aborts, etc. in comparison to elliptical orbit transfers. More recently, other works, including Folta [3] and Howell [4], have shown that patching together unstable trajectories departing Earth-Moon libration points with stable trajectories approaching planetary libration points may also offer lower overall fuel costs than elliptical orbit transfers. Another concept Farquhar described was a Deep Space Relay at Earth-Moon L4 and/or L5 that would serve as a high data rate optical navigation and communications relay satellite. The advantages in comparison to a geosynchronous relay are minimal Earth occultation, distance from large noise sources on Earth, easier pointing due to smaller relative velocity, and a large baseline for interferometry if both L4 and L5 are used.

  5. Ergonomic design in the operating room: information technologies

    NASA Astrophysics Data System (ADS)

    Morita, Mark M.; Ratib, Osman

    2005-04-01

    The ergonomic design of information technology systems in the Surgical OR has been, and continues to be, a large problem. Numerous disparate information systems with unique hardware and display configurations create an environment similar to the chaotic environments of air traffic control. Patient information systems tend to show all available statistics, making it difficult to isolate the key, relevant vitals for the patient. Interactions in this sterile environment are still being done with the traditional keyboard and mouse designed for cubicle office workflows. This presentation will address the shortcomings of the current design paradigm in the Surgical OR that relate to information technology systems. It will offer a perspective that addresses the ergonomic deficiencies and predicts how future technological innovations will integrate into this vision. Part of this vision includes a Surgical OR PACS prototype, developed by GE Healthcare Technologies, that addresses ergonomic challenges of PACS in the OR, including lack of portability, sterile field integrity, and a UI targeted at diagnostic radiologists. GWindows (gesture control), developed by Microsoft Research, and voice command will allow surgeons to navigate and review diagnostic imagery without using the conventional keyboard and mouse that disrupt the integrity of the sterile field. This prototype also demonstrates how a wireless, battery-powered, self-contained mobile PACS workstation can be optimally positioned for a surgeon to reference images during an intervention, as opposed to the current pre-operative review. Lessons learned from the creation of the Surgical OR PACS prototype have demonstrated that PACS alone is not the end-all solution in the OR. Integration of other disparate information systems and presentation of this information in simple, easy-to-navigate information packets will enable smoother interactions for the surgeons and other healthcare professionals in the OR. More intuitive IT system interaction is required for all the key players in the OR, not just the surgeons. To improve interactions, there are a number of emerging technologies that have the potential to revolutionize the way healthcare professionals interact with computer-based applications in the Surgical OR. A number of these technologies will enable surgeons to interact with vital data without interrupting the sterile field or maneuvering their bodies to view relevant information: information will automatically display for healthcare individuals in a just-in-time manner without navigational challenges.

  6. Mental distress and effort to engage an image-guided navigation system in the surgical training of endoscopic sinus surgery: a prospective, randomised clinical trial.

    PubMed

    Theodoraki, M N; Ledderose, G J; Becker, S; Leunig, A; Arpe, S; Luz, M; Stelter, K

    2015-04-01

    The use of image-guided navigation systems in the training of FESS is discussed controversially. Many experienced sinus surgeons report a better spatial orientation and an improved situational awareness intraoperatively. But many fear that the navigation system could be a disadvantage in surgical training because of a higher mental demand and a possible loss of surgical skills. This clinical field study investigates mental and physical demands during transnasal surgery with and without the aid of a navigation system at an early stage in FESS training. Thirty-two endonasal sinus surgeries done by eight different trainee surgeons were included. After randomization, one side of each patient was operated on with the use of a navigation system, the other side without. During the whole surgery, the surgeons were connected to a biofeedback device measuring the heart rate, the heart rate variability, the respiratory frequency and the masticatory EMG. Stress situations could be identified by an increase in heart rate and a decrease in heart rate variability. The mental workload during a FESS procedure is high compared to the baseline before and after surgery. The mental workload level when using the navigation system did not differ significantly from that without the navigation system. Residents with more than 30 FESS procedures already done showed a slightly decreased mental workload when using the navigation system. An additional workload shift toward the navigation system could not be observed in any surgeon. Remarkable other stressors could be identified during this study: the behavior of the supervisor, the use of the 45° endoscope, other colleagues or students entering the theatre, poor vision due to bleeding, and the preoperative waiting when measuring the baseline. The mental load of young surgeons in FESS surgery is tremendous. The application of a navigation system did not cause a higher mental workload or distress. The device showed a positive effort to engage for trainees with more than 30 FESS procedures done. In this subgroup it even led to a decreased mental workload.

  7. The Sensor Test for Orion RelNav Risk Mitigation (STORRM) Development Test Objective

    NASA Technical Reports Server (NTRS)

    Christian, John A.; Hinkel, Heather; D'Souza, Christopher N.; Maguire, Sean; Patangan, Mogi

    2011-01-01

    The Sensor Test for Orion Relative-Navigation Risk Mitigation (STORRM) Development Test Objective (DTO) flew aboard the Space Shuttle Endeavour on STS-134 in May-June 2011, and was designed to characterize the performance of the flash LIDAR and docking camera (DC) being developed for the Orion Multi-Purpose Crew Vehicle. The flash LIDAR, called the Vision Navigation Sensor (VNS), will be the primary navigation instrument used by the Orion vehicle during rendezvous, proximity operations, and docking. The DC will be used by the Orion crew for piloting cues during docking. This paper provides an overview of the STORRM test objectives and the concept of operations. It continues with a description of STORRM's major hardware components, which include the VNS, docking camera, and supporting avionics. Next, an overview of crew and analyst training activities describes how the STORRM team prepared for flight. Then an overview of in-flight data collection and analysis is presented. Key findings and results from this project are summarized. Finally, the paper concludes with lessons learned from the STORRM DTO.

  8. Vision-based control for flight relative to dynamic environments

    NASA Astrophysics Data System (ADS)

    Causey, Ryan Scott

    The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capabilities through more advanced systems that are useful for missions that require autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation address the issues inherent in vision-based state estimation of moving objects for a monocular camera configuration. The process consists of several stages involving image processing, namely detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. An approach to state estimation of moving targets is derived using homographies. The algorithm requires knowledge of the camera motion, a reference motion, and additional feature point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform, for which the system dynamics are derived to relate vehicle states to image plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation. Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature point computations and the effects of uncertainty. The third simulation demonstrates the open-loop estimation of a target ground vehicle in pursuit, whereas the fourth implements a homing control design for Autonomous Aerial Refueling (AAR) using target estimates as feedback.
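
    As generic background to the homography-based estimation mentioned above, the Python sketch below shows a textbook direct linear transform (DLT) estimate of a homography from point correspondences; it is a standard computer-vision routine, not the estimator developed in the dissertation.

        import numpy as np

        def estimate_homography(src, dst):
            """Direct linear transform estimate of the 3x3 homography H mapping
            src points to dst points (dst ~ H * src), from at least four
            correspondences given as (N, 2) arrays. Generic illustration only."""
            A = []
            for (x, y), (u, v) in zip(src, dst):
                A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
                A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
            # the homography is the right singular vector of A with smallest singular value
            _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
            H = Vt[-1].reshape(3, 3)
            return H / H[2, 2]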

  9. How Ants Use Vision When Homing Backward.

    PubMed

    Schwarz, Sebastian; Mangan, Michael; Zeil, Jochen; Webb, Barbara; Wystrach, Antoine

    2017-02-06

    Ants can navigate over long distances between their nest and food sites using visual cues [1, 2]. Recent studies show that this capacity is undiminished when walking backward while dragging a heavy food item [3-5]. This challenges the idea that ants use egocentric visual memories of the scene for guidance [1, 2, 6]. Can ants use their visual memories of the terrestrial cues when going backward? Our results suggest that ants do not adjust their direction of travel based on the perceived scene while going backward. Instead, they maintain a straight direction using their celestial compass. This direction can be dictated by their path integrator [5] but can also be set using terrestrial visual cues after a forward peek. If the food item is too heavy to enable body rotations, ants moving backward drop their food on occasion, rotate and walk a few steps forward, return to the food, and drag it backward in a now-corrected direction defined by terrestrial cues. Furthermore, we show that ants can maintain their direction of travel independently of their body orientation. It thus appears that egocentric retinal alignment is required for visual scene recognition, but ants can translate this acquired directional information into a holonomic frame of reference, which enables them to decouple their travel direction from their body orientation and hence navigate backward. This reveals substantial flexibility and communication between different types of navigational information: from terrestrial to celestial cues and from egocentric to holonomic directional memories. VIDEO ABSTRACT. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  10. Divergence of dim-light vision among bats (order: Chiroptera) as estimated by molecular and electrophysiological methods

    PubMed Central

    Liu, He-Qun; Wei, Jing-Kuan; Li, Bo; Wang, Ming-Shan; Wu, Rui-Qi; Rizak, Joshua D.; Zhong, Li; Wang, Lu; Xu, Fu-Qiang; Shen, Yong-Yi; Hu, Xin-Tian; Zhang, Ya-Ping

    2015-01-01

    Dim-light vision is present in all bats, but is divergent among species. Old-World fruit bats (Pteropodidae) have fully developed eyes; the eyes of insectivorous bats are generally degraded, and these bats rely on well-developed echolocation. An exception is the Emballonuridae, which are capable of laryngeal echolocation but prefer to use vision for navigation and have normal eyes. In this study, integrated methods, comprising manganese-enhanced magnetic resonance imaging (MEMRI), f-VEP and RNA-seq, were utilized to verify the divergence. The results of MEMRI showed that Pteropodidae bats have a much larger superior colliculus (SC)/inferior colliculus (IC) volume ratio (3:1) than insectivorous bats (1:7). Furthermore, the absolute visual thresholds (log cd/m2·s) of Pteropodidae (−6.30 and −6.37) and Emballonuridae (−3.71) bats were lower than those of other insectivorous bats (−1.90). Finally, genes related to the visual pathway showed signs of positive selection, convergent evolution, upregulation and similar gene expression patterns in Pteropodidae and Emballonuridae bats. These different results imply that Pteropodidae and Emballonuridae bats have more developed vision than the insectivorous bats and suggest that further research on bat behavior is warranted. PMID:26100095

  11. Perceiving Collision Impacts in Alzheimer's Disease: The Effect of Retinal Eccentricity on Optic Flow Deficits.

    PubMed

    Kim, Nam-Gyoon

    2015-01-01

    The present study explored whether the optic flow deficit in Alzheimer's disease (AD) reported in the literature transfers to different types of optic flow, in particular, one that specifies collision impacts with upcoming surfaces, with a special focus on the effect of retinal eccentricity. Displays simulated observer movement over a ground plane toward obstacles lying in the observer's path. Optical expansion was modulated by varying [Formula: see text]. The visual field was masked either centrally (peripheral vision) or peripherally (central vision) using masks ranging from 10° to 30° in diameter in steps of 10°. Participants were asked to indicate whether their approach would result in "collision" or "no collision" with the obstacles. Results showed that AD patients' sensitivity to [Formula: see text] was severely compromised, not only for central vision but also for peripheral vision, compared to age- and education-matched elderly controls. The results demonstrated that AD patients' optic flow deficit is not limited to radial optic flow but also includes the optical pattern engendered by [Formula: see text]. Further deterioration in the capacity to extract [Formula: see text] to determine potential collisions in conjunction with the inability to extract heading information from radial optic flow would exacerbate AD patients' difficulties in navigation and visuospatial orientation.

  12. Family planning and family vision in mothers after diagnosis of a child with autism spectrum disorder

    PubMed Central

    Navot, Noa; Jorgenson, Alicia Grattan; Stoep, Ann Vander; Toth, Karen; Webb, Sara Jane

    2016-01-01

    The diagnosis of a child with autism has short- and long-term impacts on family functioning. With early diagnosis, the diagnostic process is likely to co-occur with family planning decisions, yet little is known about how parents navigate this process. This study explores the family planning decision-making process among mothers of young children with autism spectrum disorder in the United States, by examining the transformation in family vision before and after the diagnosis. A total of 22 mothers of first-born children, diagnosed with autism between 2 and 4 years of age, were interviewed about family vision prior to and after their child’s diagnosis. The Grounded Theory method was used for data analysis. Findings indicated that coherence of early family vision, maternal cognitive flexibility, and maternal responses to diagnosis were highly influential in future family planning decisions. The decision to have additional children reflected a high level of adaptability built upon a solid internalized family model and a flexible approach to life. The decision to stop childrearing reflected a relatively less coherent family model and a more rigid cognitive style followed by ongoing hardship managing life after the diagnosis. This report may be useful for health-care providers in enhancing therapeutic alliance and guiding family planning counseling. PMID:26395237

  13. Family planning and family vision in mothers after diagnosis of a child with autism spectrum disorder.

    PubMed

    Navot, Noa; Jorgenson, Alicia Grattan; Vander Stoep, Ann; Toth, Karen; Webb, Sara Jane

    2016-07-01

    The diagnosis of a child with autism has short- and long-term impacts on family functioning. With early diagnosis, the diagnostic process is likely to co-occur with family planning decisions, yet little is known about how parents navigate this process. This study explores the family planning decision-making process among mothers of young children with autism spectrum disorder in the United States, by examining the transformation in family vision before and after the diagnosis. A total of 22 mothers of first-born children, diagnosed with autism between 2 and 4 years of age, were interviewed about family vision prior to and after their child's diagnosis. The Grounded Theory method was used for data analysis. Findings indicated that coherence of early family vision, maternal cognitive flexibility, and maternal responses to diagnosis were highly influential in future family planning decisions. The decision to have additional children reflected a high level of adaptability built upon a solid internalized family model and a flexible approach to life. The decision to stop childrearing reflected a relatively less coherent family model and a more rigid cognitive style followed by ongoing hardship managing life after the diagnosis. This report may be useful for health-care providers in enhancing therapeutic alliance and guiding family planning counseling. © The Author(s) 2015.

  14. Efficient Multi-Concept Visual Classifier Adaptation in Changing Environments

    DTIC Science & Technology

    2016-09-01

    yet to be discussed in existing supervised multi-concept visual perception systems used in robotics applications.1,5–7 Annotation of images is ... Autonomous robot navigation in highly populated pedestrian zones. J Field Robotics. 2015;32(4):565–589. 3. Milella A, Reina G, Underwood J. A self-learning framework for statistical ground classification using RADAR and monocular vision. J Field Robotics. 2015;32(1):20–41. 4. Manjanna S, Dudek G

  15. Insect Responses to Linearly Polarized Reflections: Orphan Behaviors Without Neural Circuits.

    PubMed

    Heinloth, Tanja; Uhlhorn, Juliane; Wernet, Mathias F

    2018-01-01

    The e-vector orientation of linearly polarized light represents an important visual stimulus for many insects. In particular, the detection of polarized skylight is known to improve the orientation skills of many navigating insect species. While great progress has been made towards describing both the anatomy and function of neural circuit elements mediating behaviors related to navigation, relatively little is known about how insects perceive non-celestial polarized light stimuli, like reflections off water, leaves, or shiny body surfaces. Work on different species suggests that these behaviors are not mediated by the "Dorsal Rim Area" (DRA), a specialized region in the dorsal periphery of the adult compound eye, where ommatidia contain highly polarization-sensitive photoreceptor cells whose receptive fields point towards the sky. So far, only a few cases of polarization-sensitive photoreceptors have been described in the ventral periphery of the insect retina. Furthermore, both the structure and function of the neural circuits connecting to these photoreceptor inputs remain largely uncharacterized. Here we review the known data on non-celestial polarization vision from different insect species (dragonflies, butterflies, beetles, bugs and flies) and present three well-characterized examples of functionally specialized non-DRA detectors from different insects that seem perfectly suited for mediating such behaviors. Finally, using recent advances from circuit dissection in Drosophila melanogaster, we discuss what types of candidate neurons could be involved in forming the underlying neural circuitry mediating non-celestial polarization vision.

  16. Other ways of seeing: From behavior to neural mechanisms in the online “visual” control of action with sensory substitution

    PubMed Central

    Proulx, Michael J.; Gwinnutt, James; Dell’Erba, Sara; Levy-Tzedek, Shelly; de Sousa, Alexandra A.; Brown, David J.

    2015-01-01

    Vision is the dominant sense for perception-for-action in humans and other higher primates. Advances in sight restoration now utilize the other intact senses, through sensory substitution, to provide the information that is normally sensed visually. Sensory substitution devices translate visual information from a sensor, such as a camera or ultrasound device, into a format that the auditory or tactile systems can detect and process, so the visually impaired can see through hearing or touch. Online control of action is essential for many daily tasks such as pointing, grasping and navigating, and adapting to a sensory substitution device successfully requires extensive learning. Here we review the research on sensory substitution for vision restoration in the context of providing the means of online control for action in the blind or blindfolded. It appears that sensory substitution devices recruit the neural visual system; this suggests the hypothesis that sensory substitution draws on the same underlying mechanisms as unimpaired visual control of action. Here we review the current state of the art for sensory substitution approaches to object recognition, localization, and navigation, and the potential these approaches have for revealing a metamodal behavioral and neural basis for the online control of action. PMID:26599473

  17. Vision and visual navigation in nocturnal insects.

    PubMed

    Warrant, Eric; Dacke, Marie

    2011-01-01

    With their highly sensitive visual systems, nocturnal insects have evolved a remarkable capacity to discriminate colors, orient themselves using faint celestial cues, fly unimpeded through a complicated habitat, and navigate to and from a nest using learned visual landmarks. Even though the compound eyes of nocturnal insects are significantly more sensitive to light than those of their closely related diurnal relatives, their photoreceptors absorb photons at very low rates in dim light, even during demanding nocturnal visual tasks. To explain this apparent paradox, it is hypothesized that the necessary bridge between retinal signaling and visual behavior is a neural strategy of spatial and temporal summation at a higher level in the visual system. Exactly where in the visual system this summation takes place, and the nature of the neural circuitry that is involved, is currently unknown but provides a promising avenue for future research.

  18. Assessment of feedback modalities for wearable visual aids in blind mobility

    PubMed Central

    Sorrentino, Paige; Bohlool, Shadi; Zhang, Carey; Arditti, Mort; Goodrich, Gregory; Weiland, James D.

    2017-01-01

    Sensory substitution devices engage sensory modalities other than vision to communicate information typically obtained through the sense of sight. In this paper, we examine the ability of subjects who are blind to follow simple verbal and vibrotactile commands that allow them to navigate a complex path. A total of eleven visually impaired subjects were enrolled in the study. Prototype systems were developed to deliver verbal and vibrotactile commands to allow an investigator to guide a subject through a course. Using this mode, subjects could follow commands easily and navigate significantly faster than with their cane alone (p <0.05). The feedback modes were similar with respect to the increased speed for course completion. Subjects rated usability of the feedback systems as “above average” with scores of 76.3 and 90.9 on the system usability scale. PMID:28182731

  19. Evaluation of an intelligent wheelchair system for older adults with cognitive impairments

    PubMed Central

    2013-01-01

    Background Older adults are the most prevalent wheelchair users in Canada. Yet, cognitive impairments may prevent an older adult from being allowed to use a powered wheelchair due to safety and usability concerns. To address this issue, an add-on Intelligent Wheelchair System (IWS) was developed to help older adults with cognitive impairments drive a powered wheelchair safely and effectively. When attached to a powered wheelchair, the IWS adds a vision-based anti-collision feature that prevents the wheelchair from hitting obstacles and a navigation assistance feature that plays audio prompts to help users manoeuvre around obstacles. Methods A two-stage evaluation was conducted to test the efficacy of the IWS. Stage One: Environment of Use – the IWS’s anti-collision and navigation features were evaluated against objects found in a long-term care facility. Six different collision scenarios (wall, walker, cane, no object, moving and stationary person) and three different navigation scenarios (object on left, object on right, and no object) were performed. Signal detection theory was used to categorize the response of the system in each scenario. Stage Two: User Trials – a single-subject research design was used to evaluate the impact of the IWS on older adults with cognitive impairment. Participants were asked to drive a powered wheelchair through a structured obstacle course in two phases: 1) with the IWS and 2) without the IWS. Measurements of safety and usability were taken and compared between the two phases. Visual analysis and phase averages were used to analyze the single-subject data. Results Stage One: The IWS performed correctly for all environmental anti-collision and navigation scenarios. Stage Two: Two participants completed the trials. The IWS was able to limit the number of collisions that occurred with a powered wheelchair and lower the perceived workload for driving a powered wheelchair. However, the objective performance (time to complete course) of users navigating their environment did not improve with the IWS. Conclusions This study demonstrates the efficacy of the IWS in a representative environment of use and its benefit to members of the intended user population, increasing safety and lowering the perceived demands of powered wheelchair driving. PMID:23924489

  20. Simulating visibility under reduced acuity and contrast sensitivity.

    PubMed

    Thompson, William B; Legge, Gordon E; Kersten, Daniel J; Shakespeare, Robert A; Lei, Quan

    2017-04-01

    Architects and lighting designers have difficulty designing spaces that are accessible to those with low vision, since the complex nature of most architectural spaces requires a site-specific analysis of the visibility of mobility hazards and key landmarks needed for navigation. We describe a method that can be utilized in the architectural design process for simulating the effects of reduced acuity and contrast on visibility. The key contribution is the development of a way to parameterize the simulation using standard clinical measures of acuity and contrast sensitivity. While these measures are known to be imperfect predictors of visual function, they provide a way of characterizing general levels of visual performance that is familiar to both those working in low vision and our target end-users in the architectural and lighting-design communities. We validate the simulation using a letter-recognition task.
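
    A minimal sketch of how such a simulation might be parameterized by clinical measures, assuming acuity arrives as a logMAR value and contrast sensitivity as a log10 score; the blur/attenuation mapping below is a deliberate simplification for illustration, not the authors' calibrated model, and the function name and constants are assumptions:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def simulate_low_vision(image, logmar, log_cs, pixels_per_degree):
          """Crude low-vision filter: blur for reduced acuity, attenuate contrast globally.

          image              -- 2-D array of luminance values in [0, 1]
          logmar             -- clinical acuity (0.0 ~ 20/20, 1.0 ~ 20/200)
          log_cs             -- log10 contrast sensitivity (2.0 taken as roughly normal)
          pixels_per_degree  -- display/viewing geometry
          """
          # Letter acuity implies a highest resolvable spatial frequency (cycles/degree).
          cutoff_cpd = 30.0 / (10.0 ** logmar)
          # Choose a Gaussian whose amplitude falls to ~50% at the cutoff frequency.
          sigma_px = pixels_per_degree * np.sqrt(np.log(2.0)) / (np.pi * cutoff_cpd * np.sqrt(2.0))
          blurred = gaussian_filter(image, sigma=sigma_px)
          # Attenuate contrast relative to a nominal normal observer (log_cs = 2.0).
          attenuation = min(1.0, 10.0 ** (log_cs - 2.0))
          mean = blurred.mean()
          return mean + attenuation * (blurred - mean)

      # Example: simulate 20/200 acuity and reduced contrast sensitivity on a random "scene".
      scene = np.random.rand(256, 256)
      degraded = simulate_low_vision(scene, logmar=1.0, log_cs=1.2, pixels_per_degree=40)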

  1. Simulating Visibility Under Reduced Acuity and Contrast Sensitivity

    PubMed Central

    Thompson, William B.; Legge, Gordon E.; Kersten, Daniel J.; Shakespeare, Robert A.; Lei, Quan

    2017-01-01

    Architects and lighting designers have difficulty designing spaces that are accessible to those with low vision, since the complex nature of most architectural spaces requires a site-specific analysis of the visibility of mobility hazards and key landmarks needed for navigation. We describe a method that can be utilized in the architectural design process for simulating the effects of reduced acuity and contrast on visibility. The key contribution is the development of a way to parameterize the simulation using standard clinical measures of acuity and contrast sensitivity. While these measures are known to be imperfect predictors of visual function, they provide a way of characterizing general levels of visual performance that is familiar to both those working in low vision and our target end-users in the architectural and lighting design communities. We validate the simulation using a letter recognition task. PMID:28375328

  2. Scanning fiber endoscopy with highly flexible, 1-mm catheterscopes for wide-field, full-color imaging

    PubMed Central

    Lee, Cameron M.; Engelbrecht, Christoph J.; Soper, Timothy D.; Helmchen, Fritjof; Seibel, Eric J.

    2011-01-01

    In modern endoscopy, wide field of view and full color are considered necessary for navigating inside the body, inspecting tissue for disease and guiding interventions such as biopsy or surgery. Current flexible endoscope technologies suffer from reduced resolution when device diameter shrinks. Endoscopic procedures today using coherent fiber bundle technology, on the scale of 1 mm, are performed with such poor image quality that the clinician’s vision meets the criteria for legal blindness. Here, we review a new and versatile scanning fiber imaging technology and describe its implementation for ultrathin and flexible endoscopy. This scanning fiber endoscope (SFE) or catheterscope enables high quality, laser-based, video imaging for ultrathin clinical applications while also providing new options for in vivo biological research of subsurface tissue and high resolution fluorescence imaging. PMID:20336702

  3. Close-Range Tracking of Underwater Vehicles Using Light Beacons

    PubMed Central

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Istenič, Klemen; Ribas, David

    2016-01-01

    This paper presents a new tracking system for autonomous underwater vehicles (AUVs) navigating in a close formation, based on computer vision and the use of active light markers. While acoustic localization can be very effective from medium to long distances, it is not so advantageous in short distances when the safety of the vehicles requires higher accuracy and update rates. The proposed system allows the estimation of the pose of a target vehicle at short ranges, with high accuracy and execution speed. To extend the field of view, an omnidirectional camera is used. This camera provides a full coverage of the lower hemisphere and enables the concurrent tracking of multiple vehicles in different positions. The system was evaluated in real sea conditions by tracking vehicles in mapping missions, where it demonstrated robust operation during extended periods of time. PMID:27023547

  4. Close-Range Tracking of Underwater Vehicles Using Light Beacons.

    PubMed

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Istenič, Klemen; Ribas, David

    2016-03-25

    This paper presents a new tracking system for autonomous underwater vehicles (AUVs) navigating in a close formation, based on computer vision and the use of active light markers. While acoustic localization can be very effective from medium to long distances, it is not so advantageous in short distances when the safety of the vehicles requires higher accuracy and update rates. The proposed system allows the estimation of the pose of a target vehicle at short ranges, with high accuracy and execution speed. To extend the field of view, an omnidirectional camera is used. This camera provides a full coverage of the lower hemisphere and enables the concurrent tracking of multiple vehicles in different positions. The system was evaluated in real sea conditions by tracking vehicles in mapping missions, where it demonstrated robust operation during extended periods of time.
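
    A minimal sketch of the pose-from-beacons idea, assuming the beacon layout on the target vehicle is known and the beacons have already been detected in a rectified, pinhole-equivalent view (the omnidirectional unwrapping is omitted); OpenCV's solvePnP stands in for the authors' estimator, and the layout, pixel coordinates and intrinsics are illustrative:

      import numpy as np
      import cv2

      # Known beacon positions in the target vehicle's body frame (metres) -- illustrative layout.
      beacons_3d = np.array([[ 0.30,  0.00, 0.00],
                             [-0.30,  0.00, 0.00],
                             [ 0.00,  0.25, 0.00],
                             [ 0.00, -0.25, 0.00]], dtype=np.float32)

      # Pixel coordinates of the detected beacons in the rectified camera image.
      detections_2d = np.array([[412.0, 305.0],
                                [228.0, 301.0],
                                [318.0, 212.0],
                                [322.0, 398.0]], dtype=np.float32)

      # Pinhole intrinsics of the rectified view -- illustrative values.
      K = np.array([[600.0, 0.0, 320.0],
                    [0.0, 600.0, 240.0],
                    [0.0, 0.0, 1.0]])
      dist = np.zeros(5)

      ok, rvec, tvec = cv2.solvePnP(beacons_3d, detections_2d, K, dist,
                                    flags=cv2.SOLVEPNP_ITERATIVE)
      R, _ = cv2.Rodrigues(rvec)   # rotation: target body frame -> camera frame
      print("target position in camera frame (m):", tvec.ravel())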

  5. A novel visual-inertial monocular SLAM

    NASA Astrophysics Data System (ADS)

    Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo

    2018-02-01

    With the development of sensors and the computer vision research community, cameras, which are accurate, compact, well-understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features is a system that obtains motion information from image acquisition equipment and reconstructs the structure of an unknown environment. We provide an analysis of bio-inspired flight in insects, employing a novel technique based on SLAM, and combine visual and inertial measurements to obtain high accuracy and robustness. We present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach provides a more accurate quantitative simulation of insect navigation, reaching positioning accuracy at the centimeter level.
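
    The system described above is tightly coupled; the loosely-coupled sketch below only conveys the basic idea of fusing inertial propagation with a visual position fix in a Kalman filter. The state choice, noise levels and rates are illustrative assumptions, not the paper's formulation:

      import numpy as np

      # State: position (3) and velocity (3) in the world frame.  The real system above
      # is tightly coupled and also estimates attitude, biases and camera-IMU calibration.
      x = np.zeros(6)
      P = np.eye(6) * 0.1

      def propagate(x, P, accel_world, dt, accel_noise=0.5):
          """Propagate the state with an (already gravity-compensated) acceleration."""
          F = np.eye(6)
          F[:3, 3:] = np.eye(3) * dt
          x = F @ x
          x[:3] += 0.5 * accel_world * dt**2
          x[3:] += accel_world * dt
          G = np.vstack([np.eye(3) * 0.5 * dt**2, np.eye(3) * dt])
          Q = G @ G.T * accel_noise**2
          return x, F @ P @ F.T + Q

      def update_with_vision(x, P, z_pos, meas_noise=0.02):
          """Correct the state with a position fix from the visual front end."""
          H = np.hstack([np.eye(3), np.zeros((3, 3))])
          R = np.eye(3) * meas_noise**2
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z_pos - H @ x)
          P = (np.eye(6) - K @ H) @ P
          return x, P

      # 100 Hz IMU samples between two 10 Hz camera frames, then a visual fix.
      for _ in range(10):
          x, P = propagate(x, P, accel_world=np.array([0.1, 0.0, 0.0]), dt=0.01)
      x, P = update_with_vision(x, P, z_pos=np.array([0.005, 0.0, 0.0]))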

  6. Autonomous Aerial Refueling Ground Test Demonstration—A Sensor-in-the-Loop, Non-Tracking Method

    PubMed Central

    Chen, Chao-I; Koseluk, Robert; Buchanan, Chase; Duerner, Andrew; Jeppesen, Brian; Laux, Hunter

    2015-01-01

    Autonomous aerial refueling (AAR) is an essential capability that allows an unmanned aerial vehicle (UAV) to extend its airborne duration without increasing the size of the aircraft. This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue style autonomous aerial refueling tasks by combining sensitivity adjustments of a 3D Flash LIDAR camera with computer vision based image-processing techniques. The method overcomes the inherent ambiguity issues encountered when reconstructing 3D information from traditional 2D images by taking advantage of ready-to-use 3D point cloud data from the camera, followed by well-established computer vision techniques. These techniques include curve fitting algorithms and outlier removal with the random sample consensus (RANSAC) algorithm to reliably estimate the drogue center in 3D space, as well as to establish the relative position between the probe and the drogue. To demonstrate the feasibility of the proposed method on a real system, a ground navigation robot was designed and fabricated. Results presented in the paper show that, using images acquired from a 3D Flash LIDAR camera as real-time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between the robot and the target autonomously. PMID:25970254
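
    A minimal sketch of the RANSAC-based curve fitting mentioned above, assuming candidate rim points have already been segmented from the 3D Flash LIDAR point cloud; the plane projection, thresholds and function names are illustrative assumptions, not the authors' implementation:

      import numpy as np

      def circle_from_3pts(p):
          """Center and radius of the circle through three 2-D points."""
          (x1, y1), (x2, y2), (x3, y3) = p
          A = np.array([[x2 - x1, y2 - y1], [x3 - x1, y3 - y1]])
          b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                              x3**2 - x1**2 + y3**2 - y1**2])
          c = np.linalg.solve(A, b)
          return c, np.linalg.norm(c - [x1, y1])

      def ransac_drogue_center(points_3d, iters=200, tol=0.02, rng=np.random.default_rng(0)):
          """Estimate the drogue rim center from noisy 3-D rim points with outliers."""
          # Project the point cloud onto its best-fit plane (via PCA).
          mean = points_3d.mean(axis=0)
          _, _, Vt = np.linalg.svd(points_3d - mean)
          basis = Vt[:2]                       # in-plane axes
          pts2d = (points_3d - mean) @ basis.T
          best_inliers, best = None, None
          for _ in range(iters):
              sample = pts2d[rng.choice(len(pts2d), 3, replace=False)]
              try:
                  c, r = circle_from_3pts(sample)
              except np.linalg.LinAlgError:
                  continue                     # degenerate (collinear) sample
              resid = np.abs(np.linalg.norm(pts2d - c, axis=1) - r)
              inliers = resid < tol
              if best_inliers is None or inliers.sum() > best_inliers.sum():
                  best_inliers, best = inliers, (c, r)
          c, r = best
          return mean + c @ basis, r           # center back in 3-D coordinates, rim radius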

  7. Position estimation and driving of an autonomous vehicle by monocular vision

    NASA Astrophysics Data System (ADS)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real-time for target recognition provided autonomous control of a scale model electric truck. The two-wheel drive truck was modified as an autonomous rover test-bed for vision based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.

  8. A systematic investigation of navigation impairment in chronic stroke patients: Evidence for three distinct types.

    PubMed

    Claessen, Michiel H G; Visser-Meily, Johanna M A; Meilinger, Tobias; Postma, Albert; de Rooij, Nicolien K; van der Ham, Ineke J M

    2017-08-01

    In a recent systematic review, Claessen and van der Ham (2017) have analyzed the types of navigation impairment in the single-case study literature. Three dissociable types related to landmarks, locations, and paths were identified. This recent model as well as previous models of navigation impairment have never been verified in a systematic manner. The aim of the current study was thus to investigate the prevalence of landmark-based, location-based, and path-based navigation impairment in a large sample of stroke patients. Navigation ability of 77 stroke patients in the chronic phase and 60 healthy participants was comprehensively evaluated using the Virtual Tübingen test, which contains twelve subtasks addressing various aspects of knowledge about landmarks, locations, and paths based on a newly learned virtual route. Participants also filled out the Wayfinding Questionnaire to allow for making a distinction between stroke patients with and without significant subjective navigation-related complaints. Analysis of responses on the Wayfinding Questionnaire indicated that 33 of the 77 participating stroke patients had significant navigation-related complaints. An examination of their performance on the Virtual Tübingen test established objective evidence for navigation impairment in 27 patients. Both landmark-based and path-based navigation impairment occurred in isolation, while location-based navigation impairment was only found along with the other two types. The current study provides the first empirical support for the distinction between landmark-based, location-based, and path-based navigation impairment. Future research relying on other assessment instruments of navigation ability might be helpful to further validate this distinction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Charting the Visual Space of Insect Eyes - Delineating the Guidance, Navigation and Control of Insect Flight by Their Optical Sensor

    DTIC Science & Technology

    2014-06-01

    B. Beetle wing colors Whereas most insect wings are rather thin and flexible chitinous structures, in beetles this holds for only one wing pair ... symbols). The black line is the dispersion curve for insect chitin. D. Insect photoreceptors Insect vision starts with the absorption of light by the ... BD (2012) Sexual dichromatism of the damselfly Calopteryx japonica caused by a melanin-chitin multilayer in the male wing veins. PLoS ONE 7: e49743

  10. Non-GPS Navigation Using Vision-Aiding and Active Radio Range Measurements

    DTIC Science & Technology

    2011-03-01

    describes unbalanced compliance of the gyroscope's float assembly along the input and spin axes. The remaining three error sources are the scale-factor ... Department of Defense, or the United States Government. This material is declared a work of the U.S. Government and is not subject to copyright ... work. You are my Rock. Next, I'd like to thank my advisor for challenging me and giving me the opportunity to do my best. I would also like to

  11. Modulation and Coding for NASA's New Space Communications Architecture

    NASA Technical Reports Server (NTRS)

    Deutsch, Leslie J.; Stocklin, Frank J.; Rush, John J.

    2008-01-01

    With the release in 2006 of NASA's Space Communications and Navigation Architecture, the agency defined its vision for the future in these areas. The results reported in this paper help define the myriad communications links included in this architecture through the year 2030. While these results represent the work of multiple NASA Centers and some of the best experts in the Agency, this is only a first step toward developing international telecommunication link standards that will take the world into the next era of space exploration.

  12. Inertial Navigation System Aiding Using Vision

    DTIC Science & Technology

    2013-03-01

    ... $\Omega^a_{ab} p^a + C^b_a \frac{d}{dt}(p^a) + \frac{d}{dt}(r^b_{ba})$ (2.11); $v^b = \frac{d}{dt}(r^b_{ba}) + C^b_a(\Omega^a_{ab} p^a + v^a)$ (2.12), where $\frac{d}{dt}(r^b_{ba})$ accounts for the relative velocity between the a-frame and b-frame, and $C^b_a \Omega^a_{ab} p^a$ is the instantaneous velocity of $p$ ... frame. Taking another time derivative of Eq. 2.12 results in $\frac{d}{dt}(v^b) \triangleq a^b = \frac{d^2}{dt^2} r^b_{ba} + \frac{d}{dt}\left[ C^b_a (\Omega^a_{ab} p^a + v^a) \right]$ (2.13) $= \ddot{r}^b_{ba} + \dot{C}^b_a \ldots$

  13. Integrated long-range UAV/UGV collaborative target tracking

    NASA Astrophysics Data System (ADS)

    Moseley, Mark B.; Grocholsky, Benjamin P.; Cheung, Carol; Singh, Sanjiv

    2009-05-01

    Coordinated operations between unmanned air and ground assets allow leveraging of multi-domain sensing and increase opportunities for improving line-of-sight communications. While numerous military missions would benefit from coordinated UAV-UGV operations, foundational capabilities that integrate stove-piped tactical systems and share available sensor data are required and not yet available. iRobot, AeroVironment, and Carnegie Mellon University are working together, partially SBIR-funded through ARDEC's small unit network lethality initiative, to develop collaborative capabilities for surveillance, targeting, and improved communications based on PackBot UGV and Raven UAV platforms. We integrate newly available technologies into computational, vision, and communications payloads and develop sensing algorithms to support vision-based target tracking. We first simulated, and then applied on real tactical platforms, an implementation of Decentralized Data Fusion, a novel technique for fusing track estimates from PackBot and Raven platforms for a moving target in an open environment. In addition, system integration with AeroVironment's Digital Data Link onto both air and ground platforms has extended our communications range for operating the PackBot as well as increased our video and data throughput. The system is brought together through a unified Operator Control Unit (OCU) for the PackBot and Raven that provides simultaneous waypoint navigation and traditional teleoperation. We also present several recent capability accomplishments toward PackBot-Raven coordinated operations, including single OCU display design and operation, early target track results, and Digital Data Link integration efforts, as well as our near-term capability goals.
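
    Decentralized Data Fusion schemes often have to combine track estimates whose cross-correlation is unknown; covariance intersection is one standard way to do that, sketched below for two 2-D position tracks. The use of covariance intersection here, and all numbers, are illustrative assumptions rather than the authors' exact algorithm:

      import numpy as np
      from scipy.optimize import minimize_scalar

      def covariance_intersection(x1, P1, x2, P2):
          """Fuse two estimates with unknown cross-correlation (covariance intersection)."""
          def fused_cov_det(w):
              info = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
              return np.linalg.det(np.linalg.inv(info))
          # Pick the weight that minimizes the fused covariance determinant.
          w = minimize_scalar(fused_cov_det, bounds=(0.0, 1.0), method="bounded").x
          info = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
          P = np.linalg.inv(info)
          x = P @ (w * np.linalg.inv(P1) @ x1 + (1.0 - w) * np.linalg.inv(P2) @ x2)
          return x, P

      # UGV track (good cross-range, poor down-range) and UAV track (the opposite).
      x_ugv, P_ugv = np.array([10.2, 4.9]), np.diag([4.0, 0.25])
      x_uav, P_uav = np.array([10.6, 5.3]), np.diag([0.25, 4.0])
      x_fused, P_fused = covariance_intersection(x_ugv, P_ugv, x_uav, P_uav)
      print(x_fused, np.diag(P_fused))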

  14. Integrated vision-based GNC for autonomous rendezvous and capture around Mars

    NASA Astrophysics Data System (ADS)

    Strippoli, L.; Novelli, G.; Gil Fernandez, J.; Colmenarejo, P.; Le Peuvedic, C.; Lanza, P.; Ankersen, F.

    2015-06-01

    Integrated GNC (iGNC) is an activity aimed at designing, developing and validating the GNC for autonomously performing the rendezvous and capture phase of the Mars sample return mission as defined during the Mars sample return Orbiter (MSRO) ESA study. The validation cycle includes testing in an end-to-end simulator, in a real-time avionics-representative test bench and, finally, in a dynamic HW-in-the-loop test bench for assessing the feasibility, performance and figures of merit of the baseline approach defined during the MSRO study, for both nominal and contingency scenarios. The on-board software (OBSW) is tailored to work with the sensors, actuators and orbits baseline proposed in MSRO. The whole rendezvous is based on optical navigation, aided by RF-Doppler during the search and first orbit determination of the orbiting sample. The simulated rendezvous phase also includes the non-linear orbit synchronization, based on a dedicated non-linear guidance algorithm robust to Mars ascent vehicle (MAV) injection accuracy or MAV failures resulting in elliptic target orbits. The search phase is very demanding for the image processing (IP) due to the very high visual magnitude of the target with respect to the stellar background, and the attitude GNC requires very high pointing stability to fulfil IP constraints. A trade-off of innovative, autonomous navigation filters indicates the unscented Kalman filter (UKF) as the approach that provides the best results in terms of robustness, response to non-linearities and performance compatible with the computational load. At short range, an optimized IP based on a convex hull algorithm has been developed in order to guarantee LoS and range measurements from hundreds of metres down to capture.

  15. Heads up and camera down: a vision-based tracking modality for mobile mixed reality.

    PubMed

    DiVerdi, Stephen; Höllerer, Tobias

    2008-01-01

    Anywhere Augmentation pursues the goal of lowering the initial investment of time and money necessary to participate in mixed reality work, bridging the gap between researchers in the field and regular computer users. Our paper contributes to this goal by introducing the GroundCam, a cheap tracking modality with no significant setup necessary. By itself, the GroundCam provides high frequency, high resolution relative position information similar to an inertial navigation system, but with significantly less drift. We present the design and implementation of the GroundCam, analyze the impact of several design and run-time factors on tracking accuracy, and consider the implications of extending our GroundCam to different hardware configurations. Motivated by the performance analysis, we developed a hybrid tracker that couples the GroundCam with a wide area tracking modality via a complementary Kalman filter, resulting in a powerful base for indoor and outdoor mobile mixed reality work. To conclude, the performance of the hybrid tracker and its utility within mixed reality applications is discussed.
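
    The hybrid tracker above uses a complementary Kalman filter; the first-order complementary filter below is a simplified stand-in that conveys the same high-pass/low-pass intuition, blending a drifting high-rate relative stream with a noisy absolute one. Signal models and constants are illustrative assumptions:

      import numpy as np

      def complementary_fuse(groundcam_pos, widearea_pos, dt, tau=2.0):
          """Blend a high-rate, drifting position stream with a noisy absolute one.

          groundcam_pos -- high-frequency relative positions (drift at low frequency)
          widearea_pos  -- absolute positions resampled to the same timestamps (noisy, no drift)
          tau           -- crossover time constant in seconds
          First-order complementary filter: high-pass the relative stream, low-pass the
          absolute one, so their blend keeps the best of both frequency bands.
          """
          alpha = tau / (tau + dt)
          fused = np.empty_like(groundcam_pos)
          fused[0] = widearea_pos[0]
          for k in range(1, len(groundcam_pos)):
              delta = groundcam_pos[k] - groundcam_pos[k - 1]   # incremental relative motion
              fused[k] = alpha * (fused[k - 1] + delta) + (1.0 - alpha) * widearea_pos[k]
          return fused

      # Illustrative 1-D example: true path, drifting relative track, noisy absolute fixes.
      t = np.arange(0.0, 30.0, 0.05)
      truth = np.sin(0.3 * t)
      groundcam = truth + 0.02 * t           # slow drift
      widearea = truth + np.random.default_rng(1).normal(0.0, 0.3, t.size)
      fused = complementary_fuse(groundcam, widearea, dt=0.05)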

  16. Homography-based control scheme for mobile robots with nonholonomic and field-of-view constraints.

    PubMed

    López-Nicolás, Gonzalo; Gans, Nicholas R; Bhattacharya, Sourabh; Sagüés, Carlos; Guerrero, Josechu J; Hutchinson, Seth

    2010-08-01

    In this paper, we present a visual servo controller that effects optimal paths for a nonholonomic differential drive robot with field-of-view constraints imposed by the vision system. The control scheme relies on the computation of homographies between current and goal images, but unlike previous homography-based methods, it does not use the homography to compute estimates of pose parameters. Instead, the control laws are directly expressed in terms of individual entries in the homography matrix. In particular, we develop individual control laws for the three path classes that define the language of optimal paths: rotations, straight-line segments, and logarithmic spirals. These control laws, as well as the switching conditions that define how to sequence path segments, are defined in terms of the entries of homography matrices. The selection of the corresponding control law requires the homography decomposition before starting the navigation. We provide a controllability and stability analysis for our system and give experimental results.
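
    A minimal sketch of obtaining the homography entries such control laws act on, assuming matched feature points between the current and goal images are available; OpenCV's RANSAC-based findHomography is used for illustration and is not claimed to be the authors' estimator, and the pixel coordinates are synthetic:

      import numpy as np
      import cv2

      def homography_entries(current_pts, goal_pts):
          """Estimate H mapping goal-image points to current-image points and
          return its individual entries, which the switching control laws use."""
          H, inlier_mask = cv2.findHomography(goal_pts, current_pts, cv2.RANSAC, 3.0)
          H = H / H[2, 2]                      # fix the scale so entries are comparable
          h = {f"h{i+1}{j+1}": H[i, j] for i in range(3) for j in range(3)}
          return H, h

      # Illustrative matched pixel coordinates (in practice from feature matching/tracking).
      goal    = np.array([[100, 120], [400, 110], [410, 380], [ 90, 390], [250, 250]], np.float32)
      current = np.array([[130, 140], [420, 118], [405, 400], [105, 415], [265, 270]], np.float32)
      H, entries = homography_entries(current, goal)
      print(entries["h13"], entries["h33"])    # individual entries available to the controller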

  17. Three-dimensional simulation, surgical navigation and thoracoscopic lung resection

    PubMed Central

    Kanzaki, Masato; Kikkawa, Takuma; Sakamoto, Kei; Maeda, Hideyuki; Wachi, Naoko; Komine, Hiroshi; Oyama, Kunihiro; Murasugi, Masahide; Onuki, Takamasa

    2013-01-01

    This report describes a 3-dimensional (3-D) video-assisted thoracoscopic lung resection guided by a 3-D video navigation system with a patient-specific 3-D reconstructed pulmonary model obtained by preoperative simulation. A 78-year-old man was found on chest computed tomography to have a small solitary pulmonary nodule in the left upper lobe. Using a virtual 3-D pulmonary model, the tumor was found to involve two subsegments (S1 + 2c and S3a). Complete video-assisted thoracoscopic surgery bi-subsegmentectomy was selected in simulation and was performed with lymph node dissection. A 3-D digital vision system was used for 3-D thoracoscopic performance. Wearing 3-D glasses, the surgical team observed the patient's reconstructed 3-D model on 3-D liquid-crystal displays and compared the 3-D intraoperative field with the picture of the 3-D reconstructed pulmonary model. PMID:24964426

  18. SeaTouch: A Haptic and Auditory Maritime Environment for Non Visual Cognitive Mapping of Blind Sailors

    NASA Astrophysics Data System (ADS)

    Simonnet, Mathieu; Jacobson, Dan; Vieilledent, Stephane; Tisseau, Jacques

    Navigating consists of coordinating egocentric and allocentric spatial frames of reference. Virtual environments have afforded researchers in the spatial community with tools to investigate the learning of space. The issue of the transfer between virtual and real situations is not trivial. A central question is the role of frames of reference in mediating spatial knowledge transfer to external surroundings, as is the effect of different sensory modalities accessed in simulated and real worlds. This challenges the capacity of blind people to use virtual reality to explore a scene without graphics. The present experiment involves a haptic and auditory maritime virtual environment. In triangulation tasks, we measure systematic errors and preliminary results show an ability to learn configurational knowledge and to navigate through it without vision. Subjects appeared to take advantage of getting lost in an egocentric “haptic” view in the virtual environment to improve performances in the real environment.

  19. Regionalized Lunar South Pole Surface Navigation System Analysis

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.

    2008-01-01

    Apollo missions utilized Earth-based assets for navigation because the landings took place at lunar locations in constant view from the Earth. The new exploration campaign to the lunar south pole region will have limited Earth visibility, but the extent to which a navigation system comprised solely of Earth-based tracking stations will provide adequate navigation solutions in this region is unknown. This report presents a dilution-of-precision (DoP)-based, stationary surface navigation analysis of the performance of multiple lunar satellite constellations, Earth-based deep space network assets, and combinations thereof. Results show that kinematic and integrated solutions cannot be provided by the Earth-based deep space network stations. Also, the stationary surface navigation system needs to be operated either as a two-way navigation system or as a one-way navigation system with local terrain information, while the position solution is integrated over a short duration of time with navigation signals being provided by a lunar satellite constellation.
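
    A minimal sketch of the dilution-of-precision figure such an analysis rests on, computed from the line-of-sight geometry between a surface user and the visible navigation assets; the asset positions and local-frame assumption below are illustrative:

      import numpy as np

      def dilution_of_precision(user_pos, asset_positions):
          """Geometric/position/horizontal/vertical/time DOP from user-to-asset geometry."""
          rows = []
          for sat in asset_positions:
              los = sat - user_pos
              u = los / np.linalg.norm(los)
              rows.append(np.hstack([-u, 1.0]))        # unit line of sight plus clock term
          G = np.asarray(rows)
          Q = np.linalg.inv(G.T @ G)                   # needs >= 4 assets in view
          return dict(GDOP=np.sqrt(np.trace(Q)),
                      PDOP=np.sqrt(np.trace(Q[:3, :3])),
                      HDOP=np.sqrt(Q[0, 0] + Q[1, 1]),
                      VDOP=np.sqrt(Q[2, 2]),
                      TDOP=np.sqrt(Q[3, 3]))

      # Illustrative geometry: a surface user and four orbiting assets (km, local frame).
      user = np.array([0.0, 0.0, 0.0])
      assets = np.array([[ 800.0,  200.0, 1500.0],
                         [-600.0,  700.0, 1800.0],
                         [ 100.0, -900.0, 1200.0],
                         [-300.0, -400.0, 2500.0]])
      print(dilution_of_precision(user, assets))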

  20. Conceptual Design of a Communication-Based Deep Space Navigation Network

    NASA Technical Reports Server (NTRS)

    Anzalone, Evan J.; Chuang, C. H.

    2012-01-01

    As the need grows for increased autonomy and position knowledge accuracy to support missions beyond Earth orbit, engineers must push forward and develop more advanced navigation sensors and systems that operate independently of Earth-based analysis and processing. Several spacecraft are approaching this problem using inter-spacecraft radiometric tracking and onboard autonomous optical navigation methods. This paper proposes an alternative implementation to aid spacecraft position fixing. The proposed Network-Based Navigation technique takes advantage of the communication data being sent between spacecraft and between spacecraft and ground control to embed navigation information. The navigation system uses these packets to provide navigation estimates to an onboard navigation filter to augment traditional ground-based radiometric tracking techniques. As opposed to using digital signal measurements to capture information inherent in the transmitted signal itself, this method relies on the embedded navigation packet headers to calculate a navigation estimate. The method is heavily dependent on clock accuracy, and initial results show promising performance for a notional system.

  1. Vision-based sensing for autonomous in-flight refueling

    NASA Astrophysics Data System (ADS)

    Scott, D.; Toal, M.; Dale, J.

    2007-04-01

    A significant capability of unmanned airborne vehicles (UAVs) is that they can operate tirelessly and at maximum efficiency in comparison to their human pilot counterparts. However, a major limiting factor preventing ultra-long endurance missions is that they must land to refuel. Development effort has been directed at allowing UAVs to refuel automatically in the air using current refueling systems and procedures. The 'hose & drogue' refueling system was targeted as it is considered the more difficult case. Recent flight trials resulted in the first-ever fully autonomous airborne refueling operation. Development has gone into precision GPS-based navigation sensors to maneuver the aircraft into the station-keeping position and onwards to dock with the refueling drogue. However, in the terminal phases of docking, the GPS is operating at its accuracy limit, and disturbance factors on the flexible hose and basket are not predictable using an open-loop model. Hence there is significant uncertainty in the position of the refueling drogue relative to the aircraft, which in practical operation is insufficient to achieve a successful and safe docking. A solution is to augment the GPS-based system with a vision-based sensor component through the terminal phase to visually acquire and track the drogue in 3D space. The higher bandwidth and resolution of camera sensors give significantly better estimates of the state of the drogue position. Disturbances in the actual drogue position caused by subtle aircraft maneuvers and wind gusting can be visually tracked and compensated for, providing an accurate estimate. This paper discusses the issues involved in visually detecting a refueling drogue, selecting an optimum camera viewpoint, and acquiring and tracking the drogue throughout widely varying operating ranges and conditions.

  2. Road following for blindBike: an assistive bike navigation system for low vision persons

    NASA Astrophysics Data System (ADS)

    Grewe, Lynne; Overell, William

    2017-05-01

    Road Following is a critical component of blindBike, our assistive biking application for the visually impaired. This paper describes the overall blindBike system and goals, prominently featuring Road Following, the task of directing the user to follow the right side of the road. Unlike work commonly found in self-driving cars, this approach does not depend on lane-line markings. 2D computer vision techniques are explored to solve the Road Following problem. Statistical techniques, including the use of Gaussian Mixture Models, are employed. blindBike is developed as an Android application running on a smartphone device. Other sensors, including the gyroscope and GPS, are utilized. Both urban and suburban scenarios are tested and results are given. The successes and challenges faced by blindBike's Road Following module are presented along with future avenues of work.
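
    A minimal sketch of the Gaussian-mixture idea for separating road from non-road pixels, assuming a patch near the bottom-center of the frame can be treated as road to seed the model; scikit-learn is used for illustration and the threshold is arbitrary, so this is not blindBike's actual pipeline:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def road_mask(frame_rgb, threshold=-12.0):
          """Label pixels as road/non-road from a color GMM fitted to a seed road patch."""
          h, w, _ = frame_rgb.shape
          # Assume the patch just ahead of the front wheel is road and use it as training data.
          seed = frame_rgb[int(0.85 * h):, int(0.4 * w):int(0.6 * w)].reshape(-1, 3)
          gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
          gmm.fit(seed.astype(float))
          scores = gmm.score_samples(frame_rgb.reshape(-1, 3).astype(float))
          return (scores > threshold).reshape(h, w)     # True where pixel color looks road-like

      # Illustrative use on a synthetic frame (in practice a smartphone camera image).
      frame = np.random.default_rng(0).integers(0, 255, size=(240, 320, 3), dtype=np.uint8)
      mask = road_mask(frame)
      print(mask.mean())     # fraction of pixels classified as road-like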

  3. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.

  4. Mobile Autonomous Humanoid Assistant

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.

    2004-01-01

    A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway(TradeMark) Robotic Mobility Platform yielding a dexterous, maneuverable humanoid perfect for aiding human co-workers in a range of environments. This system uses stereo vision to locate human team mates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.

  5. TUTORIAL: Development of a cortical visual neuroprosthesis for the blind: the relevance of neuroplasticity

    NASA Astrophysics Data System (ADS)

    Fernández, E.; Pelayo, F.; Romero, S.; Bongard, M.; Marin, C.; Alfaro, A.; Merabet, L.

    2005-12-01

    Clinical applications such as artificial vision require extraordinary, diverse, lengthy and intimate collaborations among basic scientists, engineers and clinicians. In this review, we present the state of research on a visual neuroprosthesis designed to interface with the occipital visual cortex as a means through which a limited, but useful, visual sense could be restored in profoundly blind individuals. We review the most important physiological principles regarding this neuroprosthetic approach and emphasize the role of neural plasticity in achieving the desired behavioral outcomes. While full restoration of fine, detailed vision with current technology is unlikely in the near future, the discrimination of shapes and the localization of objects should be possible, allowing blind subjects to navigate in an unfamiliar environment and perhaps even to read enlarged text. Continued research and development in neuroprosthesis technology will likely result in a substantial improvement in the quality of life of blind and visually impaired individuals.

  6. Manifold learning in machine vision and robotics

    NASA Astrophysics Data System (ADS)

    Bernstein, Alexander

    2017-02-01

    Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automating the extraction of patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks such as understanding and classification of image content, navigation of mobile autonomous robots in uncertain environments, robot manipulation in medical robotics and computer-assisted surgery, and others. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of the high-dimensional "observation space" and have a smaller intrinsic dimensionality. A generally accepted model of such data is the manifold model, according to which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; as a rule, real-world high-dimensional data obtained from "natural" sources meet this model. The use of manifold learning techniques in machine vision and robotics, which discover a low-dimensional structure in high-dimensional data and result in effective algorithms for solving a large number of various subject-oriented tasks, is the subject of the conference plenary talk, some topics of which are covered in this paper.
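
    A minimal sketch of the manifold-learning step described above, using the standard Swiss-roll example and scikit-learn's Isomap; the dataset and estimator choice are illustrative and not tied to the talk:

      from sklearn.datasets import make_swiss_roll
      from sklearn.manifold import Isomap

      # High-dimensional-looking data that actually lies on a 2-D manifold embedded in 3-D.
      X, color = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

      # Discover the low-dimensional structure: a 2-D embedding that unrolls the surface.
      embedding = Isomap(n_neighbors=12, n_components=2).fit_transform(X)
      print(X.shape, "->", embedding.shape)     # (1500, 3) -> (1500, 2)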

  7. A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.

    2009-01-01

    The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black and white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured to mount multiple vehicles, and act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black and white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black and white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.

  8. Foot clearance and variability in mono- and multifocal intraocular lens users during stair navigation.

    PubMed

    Renz, Erik; Hackney, Madeleine; Hall, Courtney

    2016-01-01

    Intraocular lenses (IOLs) provide distance and near refraction and are becoming the standard for cataract surgery. Multifocal glasses increase variability of toe clearance in older adults navigating stairs and increase fall risk; however, little is known about the biomechanics of stair navigation in individuals with multifocal IOLs. This study compared clearance while ascending and descending stairs in individuals with monofocal versus multifocal IOLs. Eight participants with multifocal IOLs (4 men, 4 women; mean age = 66.5 yr, standard deviation [SD] = 6.26) and fifteen male participants with monofocal IOLs (mean age = 69.9 yr, SD = 6.9) underwent vision and mobility testing. Motion analysis recorded kinematic and custom software-calculated clearances in three-dimensional space. No significant differences were found between groups on minimum clearance or variability. Clearance differed for ascending versus descending stairs: the first step onto the stair had the greatest toe clearance during ascent, whereas the final step to the floor had the greatest heel clearance during descent. This preliminary study indicates that multifocal IOLs have similar biomechanic characteristics to monofocal IOLs. Given that step characteristics are related to fall risk, we can tentatively speculate that multifocal IOLs may carry no additional fall risk.

  9. Visual Place Learning in Drosophila melanogaster

    PubMed Central

    Ofstad, Tyler A.; Zuker, Charles S.; Reiser, Michael B.

    2011-01-01

    The ability of insects to learn and navigate to specific locations in the environment has fascinated naturalists for decades. While the impressive navigation abilities of ants, bees, wasps, and other insects clearly demonstrate that insects are capable of visual place learning [1–4], little is known about the underlying neural circuits that mediate these behaviors. Drosophila melanogaster is a powerful model organism for dissecting the neural circuitry underlying complex behaviors, from sensory perception to learning and memory. Flies can identify and remember visual features such as size, color, and contour orientation [5, 6]. However, the extent to which they use vision to recall specific locations remains unclear. Here we describe a visual place-learning platform and demonstrate that Drosophila are capable of forming and retaining visual place memories to guide selective navigation. By targeted genetic silencing of small subsets of cells in the Drosophila brain we show that neurons in the ellipsoid body, but not in the mushroom bodies, are necessary for visual place learning. Together, these studies reveal distinct neuroanatomical substrates for spatial versus non-spatial learning, and substantiate Drosophila as a powerful model for the study of spatial memories. PMID:21654803

  10. Scene Segmentation For Autonomous Robotic Navigation Using Sequential Laser Projected Structured Light

    NASA Astrophysics Data System (ADS)

    Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.

    1987-01-01

    Vision systems for mobile robots or autonomous vehicles navigating in an unknown terrain environment must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and background is the three dimensional texture of these two regions. Typical methods of textural image segmentation are very computationally intensive, often lack the required robustness, and are incapable of sensing the three dimensional texture of various regions of the scene. A method is presented where scanned laser projected lines of structured light, viewed by a stereoscopically located single video camera, resulted in an image in which the three dimensional characteristics of the scene were represented by the discontinuity of the projected lines. This image was conducive to processing with simple regional operators to classify regions as pathway or background. Design of some operators and application methods, and demonstration on sample images are presented. This method provides rapid and robust scene segmentation capability that has been implemented on a microcomputer in near real time, and should result in higher speed and more reliable robotic or autonomous navigation in unstructured environments.

  11. Real-time Terrain Relative Navigation Test Results from a Relevant Environment for Mars Landing

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew E.; Cheng, Yang; Montgomery, James; Trawny, Nikolas; Tweddle, Brent; Zheng, Jason

    2015-01-01

    Terrain Relative Navigation (TRN) is an on-board GN&C function that generates a position estimate of a spacecraft relative to a map of a planetary surface. When coupled with a divert, the position estimate enables access to more challenging landing sites through pin-point landing or large hazard avoidance. The Lander Vision System (LVS) is a smart sensor system that performs terrain relative navigation by matching descent camera imagery to a map of the landing site and then fusing this with inertial measurements to obtain high rate map relative position, velocity and attitude estimates. A prototype of the LVS was recently tested in a helicopter field test over Mars analog terrain at altitudes representative of Mars Entry Descent and Landing conditions. TRN ran in real-time on the LVS during the flights without human intervention or tuning. The system was able to compute estimates accurate to 40m (3 sigma) in 10 seconds on a flight like processing system. This paper describes the Mars operational test space definition, how the field test was designed to cover that operational envelope, the resulting TRN performance across the envelope and an assessment of test space coverage.
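
    A minimal sketch of the image-to-map matching at the heart of terrain relative navigation, using normalized cross-correlation of a descent-image patch against the reference map; OpenCV's matchTemplate stands in for the LVS's actual matcher, and all data below are synthetic:

      import numpy as np
      import cv2

      def locate_in_map(map_image, descent_patch):
          """Return the map pixel where the descent-camera patch correlates best."""
          result = cv2.matchTemplate(map_image, descent_patch, cv2.TM_CCOEFF_NORMED)
          _, max_val, _, max_loc = cv2.minMaxLoc(result)
          col, row = max_loc                              # top-left corner of the best match
          return row, col, max_val

      # Illustrative data: a synthetic map and a patch cut from it with added noise.
      rng = np.random.default_rng(0)
      map_img = rng.random((512, 512)).astype(np.float32)
      patch = map_img[200:264, 300:364] + rng.normal(0, 0.05, (64, 64))
      row, col, score = locate_in_map(map_img, patch.astype(np.float32))
      print(row, col, round(score, 3))                    # expected near (200, 300)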

  12. Physics-based simulations of aerial attacks by peregrine falcons reveal that stooping at high speed maximizes catch success against agile prey

    PubMed Central

    Hildenbrandt, Hanno

    2018-01-01

    The peregrine falcon Falco peregrinus is renowned for attacking its prey from high altitude in a fast controlled dive called a stoop. Many other raptors employ a similar mode of attack, but the functional benefits of stooping remain obscure. Here we investigate whether, when, and why stooping promotes catch success, using a three-dimensional, agent-based modeling approach to simulate attacks of falcons on aerial prey. We simulate avian flapping and gliding flight using an analytical quasi-steady model of the aerodynamic forces and moments, parametrized by empirical measurements of flight morphology. The model-birds’ flight control inputs are commanded by their guidance system, comprising a phenomenological model of its vision, guidance, and control. To intercept its prey, model-falcons use the same guidance law as missiles (pure proportional navigation); this assumption is corroborated by empirical data on peregrine falcons hunting lures. We parametrically vary the falcon’s starting position relative to its prey, together with the feedback gain of its guidance loop, under differing assumptions regarding its errors and delay in vision and control, and for three different patterns of prey motion. We find that, when the prey maneuvers erratically, high-altitude stoops increase catch success compared to low-altitude attacks, but only if the falcon’s guidance law is appropriately tuned, and only given a high degree of precision in vision and control. Remarkably, the optimal tuning of the guidance law in our simulations coincides closely with what has been observed empirically in peregrines. High-altitude stoops are shown to be beneficial because their high airspeed enables production of higher aerodynamic forces for maneuvering, and facilitates higher roll agility as the wings are tucked, each of which is essential to catching maneuvering prey at realistic response delays. PMID:29649207
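
    Pure proportional navigation, the guidance law attributed to the model falcons, commands a turn rate proportional to the rotation rate of the line of sight to the prey. A minimal planar sketch follows; the gain N, speeds, and time step are illustrative assumptions, not the values fitted in the study.

      import numpy as np

      def pn_turn_rate(p_pred, v_pred, p_prey, v_prey, N=3.0):
          """Pure proportional navigation in the plane.

          Commanded turn rate = N * (rate of rotation of the line of sight),
          where N is the navigation gain (illustrative value).
          """
          r = p_prey - p_pred                      # line-of-sight vector
          v_rel = v_prey - v_pred
          # LOS angular rate in 2D: z-component of (r x v_rel) / |r|^2
          los_rate = (r[0] * v_rel[1] - r[1] * v_rel[0]) / np.dot(r, r)
          return N * los_rate                      # commanded turn rate [rad/s]

      def step(p_pred, heading, speed, p_prey, v_prey, dt=0.01):
          """One integration step of the pursuer's heading under the PN command."""
          v_pred = speed * np.array([np.cos(heading), np.sin(heading)])
          heading += pn_turn_rate(p_pred, v_pred, p_prey, v_prey) * dt
          p_pred = p_pred + v_pred * dt
          return p_pred, heading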

  13. How Lovebirds Maneuver Rapidly Using Super-Fast Head Saccades and Image Feature Stabilization

    PubMed Central

    Kress, Daniel; van Bokhorst, Evelien; Lentink, David

    2015-01-01

    Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow) that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take-off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkable stereotypical gaze behavior with peak saccadic head turns up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlying behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical flow based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones. PMID:26107413

  14. A Novel Augmented Reality Navigation System for Endoscopic Sinus and Skull Base Surgery: A Feasibility Study

    PubMed Central

    Li, Liang; Yang, Jian; Chu, Yakui; Wu, Wenbo; Xue, Jin; Liang, Ping; Chen, Lei

    2016-01-01

    Objective To verify the reliability and clinical feasibility of a self-developed navigation system based on an augmented reality technique for endoscopic sinus and skull base surgery. Materials and Methods In this study we performed a head phantom and cadaver experiment to determine the display effect and accuracy of our navigational system. We compared cadaver head-based simulated operations, the target registration error, operation time, and National Aeronautics and Space Administration Task Load Index scores of our navigation system to conventional navigation systems. Results The navigation system developed in this study has a novel display mode capable of fusing endoscopic images to three-dimensional (3-D) virtual images. In the cadaver head experiment, the target registration error was 1.28 ± 0.45 mm, which met the accepted standards of a navigation system used for nasal endoscopic surgery. Compared with conventional navigation systems, the new system was more effective in terms of operation time and the mental workload of surgeons, which is especially important for less experienced surgeons. Conclusion The self-developed augmented reality navigation system for endoscopic sinus and skull base surgery appears to have advantages that outweigh those of conventional navigation systems. We conclude that this navigational system will provide rhinologists with more intuitive and more detailed imaging information, thus reducing the judgment time and mental workload of surgeons when performing complex sinus and skull base surgeries. Ultimately, this new navigational system has potential to increase the quality of surgeries. In addition, the augmented reality navigational system could be of interest to junior doctors being trained in endoscopic techniques because it could speed up their learning. However, it should be noted that the navigation system serves as an adjunct to a surgeon’s skills and knowledge, not as a substitute. PMID:26757365

  15. A Novel Augmented Reality Navigation System for Endoscopic Sinus and Skull Base Surgery: A Feasibility Study.

    PubMed

    Li, Liang; Yang, Jian; Chu, Yakui; Wu, Wenbo; Xue, Jin; Liang, Ping; Chen, Lei

    2016-01-01

    To verify the reliability and clinical feasibility of a self-developed navigation system based on an augmented reality technique for endoscopic sinus and skull base surgery. In this study we performed a head phantom and cadaver experiment to determine the display effect and accuracy of our navigational system. We compared cadaver head-based simulated operations, the target registration error, operation time, and National Aeronautics and Space Administration Task Load Index scores of our navigation system to conventional navigation systems. The navigation system developed in this study has a novel display mode capable of fusing endoscopic images to three-dimensional (3-D) virtual images. In the cadaver head experiment, the target registration error was 1.28 ± 0.45 mm, which met the accepted standards of a navigation system used for nasal endoscopic surgery. Compared with conventional navigation systems, the new system was more effective in terms of operation time and the mental workload of surgeons, which is especially important for less experienced surgeons. The self-developed augmented reality navigation system for endoscopic sinus and skull base surgery appears to have advantages that outweigh those of conventional navigation systems. We conclude that this navigational system will provide rhinologists with more intuitive and more detailed imaging information, thus reducing the judgment time and mental workload of surgeons when performing complex sinus and skull base surgeries. Ultimately, this new navigational system has potential to increase the quality of surgeries. In addition, the augmented reality navigational system could be of interest to junior doctors being trained in endoscopic techniques because it could speed up their learning. However, it should be noted that the navigation system serves as an adjunct to a surgeon's skills and knowledge, not as a substitute.

  16. Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.

    PubMed

    Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James

    2016-03-21

    Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.
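
    The sensitivity benefit of summation can be illustrated with a toy example: summing a noisy signal over blocks of neighbouring receptors and successive time samples raises the signal-to-noise ratio roughly with the square root of the number of pooled samples, at the cost of resolution. The sketch below is a statistical caricature of that trade-off, not a model of the hawkmoth circuitry; all numbers are assumed.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy receptor array: a faint constant signal buried in noise, sampled over time.
      signal = 2.0                                 # assumed signal level per sample
      noise_sd = 5.0                               # assumed noise standard deviation
      frames = signal + rng.normal(0.0, noise_sd, size=(100, 256))   # time x space

      def snr(x):
          return x.mean() / x.std()

      # Spatial summation: pool blocks of 8 neighbouring receptors.
      spatial = frames.reshape(100, 32, 8).sum(axis=2)
      # Temporal summation: additionally pool 10 successive time samples.
      spatiotemporal = spatial.reshape(10, 10, 32).sum(axis=1)

      print(snr(frames), snr(spatial), snr(spatiotemporal))
      # SNR grows roughly with the square root of the number of pooled samples,
      # at the cost of spatial and temporal resolution.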

  17. Insect Responses to Linearly Polarized Reflections: Orphan Behaviors Without Neural Circuits

    PubMed Central

    Heinloth, Tanja; Uhlhorn, Juliane; Wernet, Mathias F.

    2018-01-01

    The e-vector orientation of linearly polarized light represents an important visual stimulus for many insects. Especially the detection of polarized skylight by many navigating insect species is known to improve their orientation skills. While great progress has been made towards describing both the anatomy and function of neural circuit elements mediating behaviors related to navigation, relatively little is known about how insects perceive non-celestial polarized light stimuli, like reflections off water, leaves, or shiny body surfaces. Work on different species suggests that these behaviors are not mediated by the “Dorsal Rim Area” (DRA), a specialized region in the dorsal periphery of the adult compound eye, where ommatidia contain highly polarization-sensitive photoreceptor cells whose receptive fields point towards the sky. So far, only few cases of polarization-sensitive photoreceptors have been described in the ventral periphery of the insect retina. Furthermore, both the structure and function of those neural circuits connecting to these photoreceptor inputs remain largely uncharacterized. Here we review the known data on non-celestial polarization vision from different insect species (dragonflies, butterflies, beetles, bugs and flies) and present three well-characterized examples for functionally specialized non-DRA detectors from different insects that seem perfectly suited for mediating such behaviors. Finally, using recent advances from circuit dissection in Drosophila melanogaster, we discuss what types of potential candidate neurons could be involved in forming the underlying neural circuitry mediating non-celestial polarization vision. PMID:29615868

  18. Large-Scale Patterns in a Minimal Cognitive Flocking Model: Incidental Leaders, Nematic Patterns, and Aggregates

    NASA Astrophysics Data System (ADS)

    Barberis, Lucas; Peruani, Fernando

    2016-12-01

    We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit—due to the VC that breaks Newton's third law—various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving—locally polar—files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.
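
    The model's update rule is simple enough to sketch: each particle moves at constant speed and turns toward the mean position of the neighbours that fall inside its forward vision cone, with no velocity alignment. The following is an illustrative reimplementation with assumed parameter values (particle count, box size, gains), not the authors' code.

      import numpy as np

      rng = np.random.default_rng(1)

      N, L = 200, 50.0                             # particles, box size (assumed)
      speed, R, half_angle = 0.5, 5.0, np.pi / 2   # speed, range, vision-cone half-angle
      k, noise = 0.1, 0.05                         # turning gain, angular noise (assumed)

      pos = rng.uniform(0, L, size=(N, 2))
      theta = rng.uniform(-np.pi, np.pi, size=N)

      def step(pos, theta):
          heading = np.stack([np.cos(theta), np.sin(theta)], axis=1)
          new_theta = theta.copy()
          for i in range(N):
              d = pos - pos[i]
              d -= L * np.round(d / L)             # periodic boundaries
              dist = np.linalg.norm(d, axis=1)
              with np.errstate(invalid="ignore", divide="ignore"):
                  cosang = (d @ heading[i]) / dist
              # neighbours inside the vision cone: close enough and in front
              inside = (dist > 0) & (dist < R) & (cosang > np.cos(half_angle))
              if inside.any():
                  target = np.arctan2(d[inside, 1].mean(), d[inside, 0].mean())
                  dtheta = (target - theta[i] + np.pi) % (2 * np.pi) - np.pi
                  new_theta[i] += k * dtheta       # turn toward neighbours' mean position
          new_theta += noise * rng.normal(size=N)
          step_vec = np.stack([np.cos(new_theta), np.sin(new_theta)], axis=1)
          return (pos + speed * step_vec) % L, new_theta

      for _ in range(100):
          pos, theta = step(pos, theta)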

  19. Large-Scale Patterns in a Minimal Cognitive Flocking Model: Incidental Leaders, Nematic Patterns, and Aggregates.

    PubMed

    Barberis, Lucas; Peruani, Fernando

    2016-12-09

    We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit-due to the VC that breaks Newton's third law-various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving-locally polar-files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.

  20. Evaluation of the attentional capacities and working memory of early and late blind persons.

    PubMed

    Pigeon, Caroline; Marin-Lamellet, Claude

    2015-02-01

    Although attentional processes and working memory seem to be significantly involved in the daily activities (particularly during navigating) of persons who are blind and who use these abilities to compensate for their lack of vision, few studies have investigated these mechanisms in this population. The aim of this study is to evaluate the selective, sustained and divided attention, attentional inhibition and switching and working memory of blind persons. Early blind, late blind and sighted participants completed neuropsychological tests that were designed or adapted to be achievable in the absence of vision. The results revealed that the early blind participants outperformed the sighted ones in selective, sustained and divided attention and working memory tests, and the late blind participants outperformed the sighted participants in selective, sustained and divided attention. However, no differences were found between the blind groups and the sighted group in the attentional inhibition and switching tests. Furthermore, no differences were found between the early and late blind participants in this set of tests. These results suggest that early and late blind persons can compensate for the lack of vision by an enhancement of the attentional and working memory capacities. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. 3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging

    NASA Astrophysics Data System (ADS)

    Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak

    2017-10-01

    Three dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as: object surveillance for security uses, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video as conventionally performed (such as background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. The object detection is performed by analyzing static 3D image data obtained through computational integral imaging. With regard to previous works that used integral imaging data in such a scenario, the proposed method performs the 3D tracking of objects without prior information about the objects in the scene, and it is found to be efficient under severe noise conditions.

  2. Human Exploration and Avionic Technology Challenges

    NASA Technical Reports Server (NTRS)

    Benjamin, Andrew L.

    2005-01-01

    For this workshop, I will identify critical avionic gaps, enabling technologies, high-payoff investment opportunities, promising capabilities, and space applications for human lunar and Mars exploration. Key technology disciplines encompass fault tolerance, miniaturized instrumentation sensors, MEMS-based guidance, navigation, and controls, surface communication networks, and rendezvous and docking. Furthermore, I will share bottom-up strategic planning relevant to manned mission-driven needs. Blending research expertise, facilities, and personnel with internal NASA capabilities is vital to stimulating collaborative technology solutions that achieve NASA's grand vision. Retaining JSC expertise in unique and critical areas is paramount to our long-term success. Civil servants will maintain key roles in setting the technology agenda, ensuring quality results, and integrating technologies into avionic systems and manned missions. Finally, I will present to NASA, academia, and the aerospace community some ongoing and future advanced avionic technology programs and activities that are relevant to our mission goals and objectives.

  3. A systematic review on US-based community health navigator (CHN) interventions for cancer screening promotion--comparing community- versus clinic-based navigator models.

    PubMed

    Hou, Su-I; Roberson, Kiersten

    2015-03-01

    This study synthesized lessons learned from US-based community and clinic health navigator (CHN) interventions on cancer screening promotion to identify characteristics of models and approaches for addressing cancer disparities. The combination terms "cancer screening" and "community health workers or navigators" or "patient navigators" were used in searching Medline, CINAHL, and PsycInfo. A total of 27 articles published during January 2005∼April 2014 were included. Two CHN models were identified: community-based (15 studies) and clinic/hospital-based (12 studies). While both models used the term "navigators," most community-based programs referred to them as community health workers/navigators/advisors, whereas clinic-based programs often called them patient navigators. Most community-based CHN interventions targeted specific racial/ethnic minority or rural groups, while clinic-based programs mostly targeted urban low income or mixed ethnic groups. Most community-based CHN programs conducted outreach to members of community networks, while clinic-based programs commonly worked with pre-identified in-service clients. Overall, regardless of model type, CHNs had similar roles and responsibilities, and interventions demonstrated effective outcomes. Our review identified characteristics of CHN interventions with attention to different settings. Lessons learned have implications for the dissemination and implementation of CHN interventions for cancer screening promotion across settings and target groups.

  4. Acceptance of a community-based navigator program for cancer control among urban African Americans.

    PubMed

    Halbert, Chanita Hughes; Briggs, Vanessa; Bowman, Marjorie; Bryant, Brenda; Bryant, Debbie Chatman; Delmoor, Ernestine; Ferguson, Monica; Ford, Marvella E; Johnson, Jerry C; Purnell, Joseph; Rogers, Rodney; Weathers, Benita

    2014-02-01

    Patient navigation is now a standard component of cancer care in many oncology facilities, but a fundamental question for navigator programs, especially in medically underserved populations, is whether or not individuals will use this service. In this study, we evaluated acceptance of a community-based navigator program for cancer control and identified factors having significant independent associations with navigation acceptance in an urban sample of African Americans. Participants were African American men and women ages 50-75 who were residents in an urban metropolitan city who were referred for navigation. Of 240 participants, 76% completed navigation. Age and perceived risk of developing cancer had a significant independent association with navigation acceptance. Participants who believed that they were at high risk for developing cancer had a lower likelihood of completing navigation compared with those who believed that they had a low risk for developing this disease. The likelihood of completing navigation increased with increases in age. None of the socioeconomic factors or health care variables had a significant association with navigation acceptance. There are few barriers to using community-based navigation for cancer control among urban African Americans. Continued efforts are needed to develop and implement community-based programs for cancer control that are easy to use and address the needs of medically underserved populations.

  5. Acceptance of a community-based navigator program for cancer control among urban African Americans

    PubMed Central

    Halbert, Chanita Hughes; Briggs, Vanessa; Bowman, Marjorie; Bryant, Brenda; Bryant, Debbie Chatman; Delmoor, Ernestine; Ferguson, Monica; Ford, Marvella E.; Johnson, Jerry C.; Purnell, Joseph; Rogers, Rodney; Weathers, Benita

    2014-01-01

    Patient navigation is now a standard component of cancer care in many oncology facilities, but a fundamental question for navigator programs, especially in medically underserved populations, is whether or not individuals will use this service. In this study, we evaluated acceptance of a community-based navigator program for cancer control and identified factors having significant independent associations with navigation acceptance in an urban sample of African Americans. Participants were African American men and women ages 50–75 who were residents in an urban metropolitan city who were referred for navigation. Of 240 participants, 76% completed navigation. Age and perceived risk of developing cancer had a significant independent association with navigation acceptance. Participants who believed that they were at high risk for developing cancer had a lower likelihood of completing navigation compared with those who believed that they had a low risk for developing this disease. The likelihood of completing navigation increased with increases in age. None of the socioeconomic factors or health care variables had a significant association with navigation acceptance. There are few barriers to using community-based navigation for cancer control among urban African Americans. Continued efforts are needed to develop and implement community-based programs for cancer control that are easy to use and address the needs of medically underserved populations. PMID:24173501

  6. 33 CFR 334.1215 - Port Gardner, Everett Naval Base, naval restricted area, Everett, Washington.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 3 2011-07-01 2011-07-01 false Port Gardner, Everett Naval Base, naval restricted area, Everett, Washington. 334.1215 Section 334.1215 Navigation and Navigable Waters... REGULATIONS § 334.1215 Port Gardner, Everett Naval Base, naval restricted area, Everett, Washington. (a) The...

  7. 33 CFR 165.1120 - Security Zone; Naval Amphibious Base, San Diego, CA.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Security Zone; Naval Amphibious Base, San Diego, CA. 165.1120 Section 165.1120 Navigation and Navigable Waters COAST GUARD, DEPARTMENT... § 165.1120 Security Zone; Naval Amphibious Base, San Diego, CA. (a) Location. The following area is a...

  8. 33 CFR 165.1120 - Security Zone; Naval Amphibious Base, San Diego, CA.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Security Zone; Naval Amphibious Base, San Diego, CA. 165.1120 Section 165.1120 Navigation and Navigable Waters COAST GUARD, DEPARTMENT... § 165.1120 Security Zone; Naval Amphibious Base, San Diego, CA. (a) Location. The following area is a...

  9. Benchmarking neuromorphic vision: lessons learnt from computer vision

    PubMed Central

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision. PMID:26528120

  10. Human factors research on performance-based navigation instrument procedures for NextGEN

    DOT National Transportation Integrated Search

    2012-10-14

    Area navigation (RNAV) and required navigation performance (RNP) are key components of performance-based navigation (PBN). Instrument procedures that use RNAV and RNP can have more flexible and precise paths than conventional routes that are defined ...

  11. The effects of navigator distortion and noise level on interleaved EPI DWI reconstruction: a comparison between image- and k-space-based method.

    PubMed

    Dai, Erpeng; Zhang, Zhe; Ma, Xiaodong; Dong, Zijing; Li, Xuesong; Xiong, Yuhui; Yuan, Chun; Guo, Hua

    2018-03-23

    To study the effects of 2D navigator distortion and noise level on interleaved EPI (iEPI) DWI reconstruction, using either the image- or k-space-based method. The 2D navigator acquisition was adjusted by reducing its echo spacing in the readout direction and undersampling in the phase encoding direction. A POCS-based reconstruction using image-space sampling function (IRIS) algorithm (POCSIRIS) was developed to reduce the impact of navigator distortion. POCSIRIS was then compared with the original IRIS algorithm and a SPIRiT-based k-space algorithm, under different navigator distortion and noise levels. Reducing the navigator distortion can improve the reconstruction of iEPI DWI. The proposed POCSIRIS and SPIRiT-based algorithms are more tolerable to different navigator distortion levels, compared to the original IRIS algorithm. SPIRiT may be hindered by low SNR of the navigator. Multi-shot iEPI DWI reconstruction can be improved by reducing the 2D navigator distortion. Different reconstruction methods show variable sensitivity to navigator distortion or noise levels. Furthermore, the findings can be valuable in applications such as simultaneous multi-slice accelerated iEPI DWI and multi-slab diffusion imaging. © 2018 International Society for Magnetic Resonance in Medicine.

  12. Mapped Landmark Algorithm for Precision Landing

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew; Ansar, Adnan; Matthies, Larry

    2007-01-01

    A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel images. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
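
    The FFT Map Matching sub-algorithm is, at its core, template correlation computed in the frequency domain via the convolution theorem. A stripped-down sketch of that idea is given below using plain cross-correlation on zero-mean images; the flight algorithm additionally performs the normalization, feature selection, and homography handling described above.

      import numpy as np

      def fft_match(map_img, template):
          """Locate a descent-image template in a larger map by FFT correlation.

          Both inputs are 2-D float arrays; returns the (row, col) of the
          template's top-left corner in the map at the correlation peak.
          """
          m = map_img - map_img.mean()
          t = template - template.mean()
          # Cross-correlation via the convolution theorem:
          # IFFT( FFT(map) * conj(FFT(zero-padded template)) )
          corr = np.fft.ifft2(np.fft.fft2(m) * np.conj(np.fft.fft2(t, s=m.shape))).real
          return np.unravel_index(np.argmax(corr), corr.shape)

      # Toy usage: cut a patch out of a synthetic map and recover its location.
      rng = np.random.default_rng(0)
      map_img = rng.random((256, 256))
      template = map_img[100:140, 60:110]
      print(fft_match(map_img, template))          # expected: (100, 60)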

  13. Merge Fuzzy Visual Servoing and GPS-Based Planning to Obtain a Proper Navigation Behavior for a Small Crop-Inspection Robot.

    PubMed

    Bengochea-Guevara, José M; Conesa-Muñoz, Jesus; Andújar, Dionisio; Ribeiro, Angela

    2016-02-24

    The concept of precision agriculture, which proposes farming management adapted to crop variability, has emerged in recent years. To effectively implement precision agriculture, data must be gathered from the field in an automated manner at minimal cost. In this study, a small autonomous field inspection vehicle was developed to minimise the impact of the scouting on the crop and soil compaction. The proposed approach integrates a camera with a GPS receiver to obtain a set of basic behaviours required of an autonomous mobile robot to inspect a crop field with full coverage. A path planner considered the field contour and the crop type to determine the best inspection route. An image-processing method capable of extracting the central crop row under uncontrolled lighting conditions in real time from images acquired with a reflex camera positioned on the front of the robot was developed. Two fuzzy controllers were also designed and developed to achieve vision-guided navigation. A method for detecting the end of a crop row using camera-acquired images was developed. In addition, manoeuvres necessary for the robot to change rows were established. These manoeuvres enabled the robot to autonomously cover the entire crop by following a previously established plan and without stepping on the crop row, which is an essential behaviour for covering crops such as maize without damaging them.
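
    A common way to extract a crop row from RGB imagery under varying illumination, and one consistent in spirit with the pipeline described here, is to segment vegetation with an excess-green index and locate the dominant column of vegetation pixels. The sketch below illustrates that generic approach; it is not the paper's specific image-processing method, and the threshold rule is an assumption.

      import numpy as np

      def crop_row_offset(rgb):
          """Estimate the lateral offset of the dominant crop row in an RGB image.

          rgb: H x W x 3 float array in [0, 1]. Returns the offset (in pixels) of
          the row's centre column from the image centre; negative means the row
          lies left of centre. Generic excess-green segmentation for illustration.
          """
          r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
          exg = 2.0 * g - r - b                    # excess-green vegetation index
          mask = exg > exg.mean() + exg.std()      # simple adaptive threshold (assumed)
          column_votes = mask.sum(axis=0)          # vegetation pixels per column
          if column_votes.sum() == 0:
              return 0.0                           # no vegetation detected
          centre = (column_votes * np.arange(mask.shape[1])).sum() / column_votes.sum()
          return centre - mask.shape[1] / 2.0

      # The offset (and its rate of change) would then serve as the input
      # variables of a fuzzy steering controller of the kind described above.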

  14. Merge Fuzzy Visual Servoing and GPS-Based Planning to Obtain a Proper Navigation Behavior for a Small Crop-Inspection Robot

    PubMed Central

    Bengochea-Guevara, José M.; Conesa-Muñoz, Jesus; Andújar, Dionisio; Ribeiro, Angela

    2016-01-01

    The concept of precision agriculture, which proposes farming management adapted to crop variability, has emerged in recent years. To effectively implement precision agriculture, data must be gathered from the field in an automated manner at minimal cost. In this study, a small autonomous field inspection vehicle was developed to minimise the impact of the scouting on the crop and soil compaction. The proposed approach integrates a camera with a GPS receiver to obtain a set of basic behaviours required of an autonomous mobile robot to inspect a crop field with full coverage. A path planner considered the field contour and the crop type to determine the best inspection route. An image-processing method capable of extracting the central crop row under uncontrolled lighting conditions in real time from images acquired with a reflex camera positioned on the front of the robot was developed. Two fuzzy controllers were also designed and developed to achieve vision-guided navigation. A method for detecting the end of a crop row using camera-acquired images was developed. In addition, manoeuvres necessary for the robot to change rows were established. These manoeuvres enabled the robot to autonomously cover the entire crop by following a previously established plan and without stepping on the crop row, which is an essential behaviour for covering crops such as maize without damaging them. PMID:26927102

  15. 33 CFR 334.370 - Chesapeake Bay, Lynnhaven Roads; danger zones, U.S. Naval Amphibious Base.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Chesapeake Bay, Lynnhaven Roads; danger zones, U.S. Naval Amphibious Base. 334.370 Section 334.370 Navigation and Navigable Waters CORPS... REGULATIONS § 334.370 Chesapeake Bay, Lynnhaven Roads; danger zones, U.S. Naval Amphibious Base. (a...

  16. 33 CFR 157.304 - Shore-based reception facility: standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Shore-based reception facility: standards. 157.304 Section 157.304 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... CARRYING OIL IN BULK Exemption From § 157.10a or § 157.10c § 157.304 Shore-based reception facility...

  17. 33 CFR 157.304 - Shore-based reception facility: standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Shore-based reception facility: standards. 157.304 Section 157.304 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... CARRYING OIL IN BULK Exemption From § 157.10a or § 157.10c § 157.304 Shore-based reception facility...

  18. A GPU-accelerated cortical neural network model for visually guided robot navigation.

    PubMed

    Beyeler, Michael; Oros, Nicolas; Dutt, Nikil; Krichmar, Jeffrey L

    2015-12-01

    Humans and other terrestrial animals use vision to traverse novel cluttered environments with apparent ease. On one hand, although much is known about the behavioral dynamics of steering in humans, it remains unclear how relevant perceptual variables might be represented in the brain. On the other hand, although a wealth of data exists about the neural circuitry that is concerned with the perception of self-motion variables such as the current direction of travel, little research has been devoted to investigating how this neural circuitry may relate to active steering control. Here we present a cortical neural network model for visually guided navigation that has been embodied on a physical robot exploring a real-world environment. The model includes a rate based motion energy model for area V1, and a spiking neural network model for cortical area MT. The model generates a cortical representation of optic flow, determines the position of objects based on motion discontinuities, and combines these signals with the representation of a goal location to produce motor commands that successfully steer the robot around obstacles toward the goal. The model produces robot trajectories that closely match human behavioral data. This study demonstrates how neural signals in a model of cortical area MT might provide sufficient motion information to steer a physical robot on human-like paths around obstacles in a real-world environment, and exemplifies the importance of embodiment, as behavior is deeply coupled not only with the underlying model of brain function, but also with the anatomical constraints of the physical body it controls. Copyright © 2015 Elsevier Ltd. All rights reserved.
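
    A drastically reduced version of steering from optic flow, loosely related to the behaviour this model reproduces, is the classic flow-balance strategy: turn away from the visual hemifield with the larger average flow magnitude (nearer obstacles generate faster flow) while biasing toward the goal. The sketch below shows only that generic strategy with assumed gains, not the cortical model itself.

      import numpy as np

      def steer_command(flow, goal_bearing, k_obstacle=1.0, k_goal=0.5):
          """Combine flow-based obstacle avoidance with goal seeking.

          flow: H x W x 2 array of optic-flow vectors (from any flow estimator).
          goal_bearing: bearing to the goal in radians, 0 = straight ahead.
          Returns a turn-rate command (positive = turn left). Gains are assumed.
          """
          mag = np.linalg.norm(flow, axis=2)
          half = mag.shape[1] // 2
          left, right = mag[:, :half].mean(), mag[:, half:].mean()
          # Turn away from the side with more flow (closer obstacles), toward the goal.
          avoid = (right - left) / (right + left + 1e-9)
          return k_obstacle * avoid + k_goal * goal_bearing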

  19. VLSI chips for vision-based vehicle guidance

    NASA Astrophysics Data System (ADS)

    Masaki, Ichiro

    1994-02-01

    Sensor-based vehicle guidance systems are gathering rapidly increasing interest because of their potential for increasing safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews the research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.

  20. 33 CFR 165.776 - Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico 165.776 Section 165.776 Navigation and Navigable Waters COAST... Guard District § 165.776 Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico (a...

  1. 33 CFR 165.776 - Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico 165.776 Section 165.776 Navigation and Navigable Waters COAST... Guard District § 165.776 Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico (a...

  2. 33 CFR 165.776 - Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico. 165.776 Section 165.776 Navigation and Navigable Waters COAST... Guard District § 165.776 Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico. (a...

  3. 33 CFR 165.776 - Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico. 165.776 Section 165.776 Navigation and Navigable Waters COAST... Guard District § 165.776 Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico. (a...

  4. 33 CFR 334.900 - Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 3 2011-07-01 2011-07-01 false Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area. 334.900 Section 334.900 Navigation and Navigable Waters... REGULATIONS § 334.900 Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area. (a) The...

  5. 33 CFR 334.900 - Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 3 2013-07-01 2013-07-01 false Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area. 334.900 Section 334.900 Navigation and Navigable Waters... REGULATIONS § 334.900 Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area. (a) The...

  6. 33 CFR 334.900 - Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 3 2014-07-01 2014-07-01 false Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area. 334.900 Section 334.900 Navigation and Navigable Waters... REGULATIONS § 334.900 Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area. (a) The...

  7. 33 CFR 334.900 - Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 3 2012-07-01 2012-07-01 false Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area. 334.900 Section 334.900 Navigation and Navigable Waters... REGULATIONS § 334.900 Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area. (a) The...

  8. 33 CFR 334.900 - Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area. 334.900 Section 334.900 Navigation and Navigable Waters... REGULATIONS § 334.900 Pacific Ocean, U.S. Marine Corps Base, Camp Pendleton, Calif.; restricted area. (a) The...

  9. Fuzzy Behavior Modulation with Threshold Activation for Autonomous Vehicle Navigation

    NASA Technical Reports Server (NTRS)

    Tunstel, Edward

    2000-01-01

    This paper describes fuzzy logic techniques used in a hierarchical behavior-based architecture for robot navigation. An architectural feature for threshold activation of fuzzy-behaviors is emphasized, which is potentially useful for tuning navigation performance in real world applications. The target application is autonomous local navigation of a small planetary rover. Threshold activation of low-level navigation behaviors is the primary focus. A preliminary assessment of its impact on local navigation performance is provided based on computer simulations.
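
    The threshold-activation idea can be sketched as follows: each low-level behavior reports a fuzzy degree of applicability together with a steering recommendation, and only behaviors whose applicability exceeds the activation threshold contribute to the weighted blend. The membership shapes, behaviors, and numbers below are illustrative assumptions, not the rover implementation.

      def triangular(x, a, b, c):
          """Triangular fuzzy membership function on [a, c] peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def blend_behaviors(sensors, threshold=0.2):
          """Weighted blend of behavior outputs, gated by an activation threshold."""
          # Each behavior: (degree of applicability, recommended turn rate)
          behaviors = [
              # avoid-obstacle: more applicable the closer the nearest obstacle
              (triangular(sensors["obstacle_dist"], 0.0, 0.0, 2.0), -0.8),
              # seek-goal: more applicable the larger the heading error
              (triangular(abs(sensors["goal_error"]), 0.0, 1.0, 2.0), 0.5),
              # cruise: applicable when the path ahead is clear
              (triangular(sensors["obstacle_dist"], 1.0, 3.0, 3.0), 0.0),
          ]
          active = [(w, u) for w, u in behaviors if w >= threshold]   # threshold gate
          if not active:
              return 0.0
          total = sum(w for w, _ in active)
          return sum(w * u for w, u in active) / total                # weighted blend

      print(blend_behaviors({"obstacle_dist": 0.8, "goal_error": 0.6}))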

  10. Cognitive object recognition system (CORS)

    NASA Astrophysics Data System (ADS)

    Raju, Chaitanya; Varadarajan, Karthik Mahesh; Krishnamurthi, Niyant; Xu, Shuli; Biederman, Irving; Kelley, Troy

    2010-04-01

    We have developed a framework, Cognitive Object Recognition System (CORS), inspired by current neurocomputational models and psychophysical research in which multiple recognition algorithms (shape based geometric primitives, 'geons,' and non-geometric feature-based algorithms) are integrated to provide a comprehensive solution to object recognition and landmarking. Objects are defined as a combination of geons, corresponding to their simple parts, and the relations among the parts. However, those objects that are not easily decomposable into geons, such as bushes and trees, are recognized by CORS using "feature-based" algorithms. The unique interaction between these algorithms is a novel approach that combines the effectiveness of both algorithms and takes us closer to a generalized approach to object recognition. CORS allows recognition of objects through a larger range of poses using geometric primitives and performs well under heavy occlusion - about 35% of object surface is sufficient. Furthermore, geon composition of an object allows image understanding and reasoning even with novel objects. With reliable landmarking capability, the system improves vision-based robot navigation in GPS-denied environments. Feasibility of the CORS system was demonstrated with real stereo images captured from a Pioneer robot. The system can currently identify doors, door handles, staircases, trashcans and other relevant landmarks in the indoor environment.

  11. Using ontologies to model human navigation behavior in information networks: A study based on Wikipedia.

    PubMed

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis; Nyulas, Csongor; Tudorache, Tania; Noy, Natalya F; Musen, Mark A

    The need to examine the behavior of different user groups is a fundamental requirement when building information systems. In this paper, we present Ontology-based Decentralized Search (OBDS), a novel method to model the navigation behavior of users equipped with different types of background knowledge. Ontology-based Decentralized Search combines decentralized search, an established method for navigation in social networks, and ontologies to model navigation behavior in information networks. The method uses ontologies as an explicit representation of background knowledge to inform the navigation process and guide it towards navigation targets. By using different ontologies, users equipped with different types of background knowledge can be represented. We demonstrate our method using four biomedical ontologies and their associated Wikipedia articles. We compare our simulation results with base line approaches and with results obtained from a user study. We find that our method produces click paths that have properties similar to those originating from human navigators. The results suggest that our method can be used to model human navigation behavior in systems that are based on information networks, such as Wikipedia. This paper makes the following contributions: (i) To the best of our knowledge, this is the first work to demonstrate the utility of ontologies in modeling human navigation and (ii) it yields new insights and understanding about the mechanisms of human navigation in information networks.
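
    The navigation model itself is a greedy decentralized search: from each article, the simulated user follows the link whose target looks closest to the navigation target, with closeness judged in the background-knowledge ontology. In the sketch below a simple tree-distance heuristic stands in for the paper's ontology machinery; the data structures are assumptions for illustration.

      def ontology_distance(a, b, parents):
          """Distance between two concepts as path length through their ancestors.

          parents: dict mapping a concept to its parent (a tree-shaped ontology).
          """
          def ancestors(x):
              chain = [x]
              while x in parents:
                  x = parents[x]
                  chain.append(x)
              return chain
          pa, pb = ancestors(a), ancestors(b)
          common = next((c for c in pa if c in pb), None)
          if common is None:
              return float("inf")
          return pa.index(common) + pb.index(common)

      def decentralized_search(start, target, links, parents, max_hops=50):
          """Greedy navigation: always follow the link nearest the target in the ontology."""
          node, path = start, [start]
          for _ in range(max_hops):
              if node == target:
                  return path
              neighbours = [n for n in links.get(node, []) if n not in path]
              if not neighbours:
                  return None                      # dead end
              node = min(neighbours, key=lambda n: ontology_distance(n, target, parents))
              path.append(node)
          return None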

  12. Using ontologies to model human navigation behavior in information networks: A study based on Wikipedia

    PubMed Central

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis; Nyulas, Csongor; Tudorache, Tania; Noy, Natalya F.; Musen, Mark A.

    2015-01-01

    The need to examine the behavior of different user groups is a fundamental requirement when building information systems. In this paper, we present Ontology-based Decentralized Search (OBDS), a novel method to model the navigation behavior of users equipped with different types of background knowledge. Ontology-based Decentralized Search combines decentralized search, an established method for navigation in social networks, and ontologies to model navigation behavior in information networks. The method uses ontologies as an explicit representation of background knowledge to inform the navigation process and guide it towards navigation targets. By using different ontologies, users equipped with different types of background knowledge can be represented. We demonstrate our method using four biomedical ontologies and their associated Wikipedia articles. We compare our simulation results with base line approaches and with results obtained from a user study. We find that our method produces click paths that have properties similar to those originating from human navigators. The results suggest that our method can be used to model human navigation behavior in systems that are based on information networks, such as Wikipedia. This paper makes the following contributions: (i) To the best of our knowledge, this is the first work to demonstrate the utility of ontologies in modeling human navigation and (ii) it yields new insights and understanding about the mechanisms of human navigation in information networks. PMID:26568745

  13. Deployment Effects of Marine Renewable Energy Technologies: Wave Energy Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirko Previsic

    2010-06-17

    Given proper care in siting, design, deployment, operation and maintenance, wave energy conversion could become one of the more environmentally benign sources of electricity generation. In order to accelerate the adoption of these emerging hydrokinetic and marine energy technologies, navigational and environmental concerns must be identified and addressed. All developing hydrokinetic projects involve a wide variety of stakeholders. One of the key issues that site developers face as they engage with this range of stakeholders is that, due to a lack of technical certainty, many of the possible conflicts (e.g., shipping and fishing) and environmental issues are not well understood. In September 2008, RE Vision Consulting, LLC was selected by the Department of Energy (DoE) to apply a scenario-based assessment to the emerging hydrokinetic technology sector in order to evaluate the potential impact of these technologies on the marine environment and navigation constraints. The project’s scope of work includes the establishment of baseline scenarios for wave and tidal power conversion at potential future deployment sites. The scenarios capture variations in technical approaches and deployment scales to properly identify and characterize environmental effects and navigational effects. The goal of the project is to provide all stakeholders with an improved understanding of the potential range of technical attributes and potential effects of these emerging technologies and focus all stakeholders on the critical issues that need to be addressed. By identifying and addressing navigational and environmental concerns in the early stages of the industry’s development, serious mistakes that could potentially derail industry-wide development can be avoided. This groundwork will also help in streamlining siting and associated permitting processes, which are considered key hurdles for the industry’s development in the U.S. today. RE Vision is coordinating its efforts with two other project teams funded by DoE which are focused on regulatory issues (Pacific Energy Ventures) and navigational issues (PCCI). The results of this study are structured into three reports: (1) Wave power scenario description (2) Tidal power scenario description (3) Framework for Identifying Key Environmental Concerns. This is the first report in the sequence and describes the results of conceptual feasibility studies of wave power plants deployed in Humboldt County, California and Oahu, Hawaii. These two sites contain many of the same competing stakeholder interactions identified at other wave power sites in the U.S. and serve as representative case studies. Wave power remains at an early stage of development. As such, a wide range of different technologies are being pursued by different manufacturers. In order to properly characterize potential effects, it is useful to characterize the range of technologies that could be deployed at the site of interest. An industry survey informed the process of selecting representative wave power devices. The selection criteria require that devices are at an advanced stage of development to reduce technical uncertainties, and that enough data are available from the manufacturers to inform the conceptual design process of this study. Further, an attempt is made to cover the range of different technologies under development to capture variations in potential environmental effects. Table 1 summarizes the selected wave power technologies.
    A number of other developers are also at an advanced stage of development, but are not directly mentioned here. Many environmental effects will largely scale with the size of the wave power plant. In many cases, the effects of a single device may not be measurable, while larger scale device arrays may have cumulative impacts that differ significantly from smaller scale deployments. In order to characterize these effects, scenarios are established at three deployment scales which nominally represent (1) a small pilot deployment, (2) a small commercial deployment, and (3) a large commercial scale plant. It is important to understand that the purpose of this study was to establish baseline scenarios based on basic device data that was provided to us by the manufacturer for illustrative purposes only.

  14. Screen Miniatures as Icons for Backward Navigation in Content-Based Software.

    ERIC Educational Resources Information Center

    Boling, Elizabeth; Ma, Guoping; Tao, Chia-Wen; Askun, Cengiz; Green, Tim; Frick, Theodore; Schaumburg, Heike

    Users of content-based software programs, including hypertexts and instructional multimedia, rely on the navigation functions provided by the designers of those programs. Typical navigation schemes use abstract symbols (arrows) to label basic navigational functions like moving forward or backward through screen displays. In a previous study, the…

  15. 33 CFR 334.740 - Weekley Bayou, an arm of Boggy Bayou, Fla., at Eglin Air Force Base; restricted area.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Weekley Bayou, an arm of Boggy Bayou, Fla., at Eglin Air Force Base; restricted area. 334.740 Section 334.740 Navigation and Navigable... REGULATIONS § 334.740 Weekley Bayou, an arm of Boggy Bayou, Fla., at Eglin Air Force Base; restricted area. (a...

  16. 33 CFR 334.740 - Weekley Bayou, an arm of Boggy Bayou, Fla., at Eglin Air Force Base; restricted area.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 3 2011-07-01 2011-07-01 false Weekley Bayou, an arm of Boggy Bayou, Fla., at Eglin Air Force Base; restricted area. 334.740 Section 334.740 Navigation and Navigable... REGULATIONS § 334.740 Weekley Bayou, an arm of Boggy Bayou, Fla., at Eglin Air Force Base; restricted area. (a...

  17. A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application

    PubMed Central

    Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang

    2018-01-01

    Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing study of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with the KF based correction scheme are susceptible to measurement errors, which would decrease the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF) based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with the unified reference ellipsoid Earth model to improve the navigation accuracy in middle-high latitude regions for marine application. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with the traditional algorithms, the proposed navigation algorithm can effectively improve the navigation performance in middle-high latitude regions by the unified Earth models and the ARKF based hybrid-correction scheme. PMID:29373549
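
    The robust part of the scheme can be illustrated with a simplified adaptive measurement update: when the normalized innovation is implausibly large, the measurement covariance is inflated before the standard Kalman update, limiting the influence of a corrupted DVL measurement. This is a generic sketch of that idea, not the paper's exact ARKF formulation; the gating constant is an assumption.

      import numpy as np

      def adaptive_robust_update(x, P, z, H, R, gate=3.0):
          """Kalman measurement update with a simple innovation-based robust factor."""
          y = z - H @ x                            # innovation
          S = H @ P @ H.T + R
          # Normalized innovation squared (Mahalanobis distance of the innovation)
          nis = float(y @ np.linalg.inv(S) @ y)
          dof = len(y)
          if nis > gate * dof:
              # Inflate R so the suspect measurement is down-weighted, then recompute S.
              R = R * (nis / (gate * dof))
              S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ y
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P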

  18. Machine Vision for Relative Spacecraft Navigation During Approach to Docking

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong; Baker, Kenneth

    2011-01-01

    This paper describes a machine vision system for relative spacecraft navigation during the terminal phase of approach to docking that: 1) matches high-contrast image features of the target vehicle, as seen by a camera that is bore-sighted to the docking adapter on the chase vehicle, to the corresponding features in a 3D model of the docking adapter on the target vehicle and 2) is robust to on-orbit lighting. An implementation is provided for the case of the Space Shuttle Orbiter docking to the International Space Station (ISS), with quantitative test results using a full-scale, medium-fidelity mock-up of the ISS docking adapter mounted on a 6-DOF motion platform at the NASA Marshall Space Flight Center Flight Robotics Laboratory and qualitative test results using recorded video from the Orbiter Docking System Camera (ODSC) during multiple Orbiter-to-ISS docking missions. The Natural Feature Image Registration (NFIR) system consists of two modules: 1) Tracking, which tracks the target object from image to image and estimates the position and orientation (pose) of the docking camera relative to the target object, and 2) Acquisition, which recognizes the target object if it is in the docking camera's field of view and provides an approximate pose that is used to initialize tracking. Detected image edges are matched to the 3D model edges whose predicted location, based on the pose estimate and its first time derivative from the previous frame, is closest to the detected edge. Mismatches are eliminated using a rigid-motion constraint. The remaining 2D-image-to-3D-model matches are used to make a least-squares estimate of the change in relative pose from the previous image to the current image. The changes in position and in attitude are used as data for two Kalman filters whose outputs are smoothed estimates of position and velocity plus attitude and attitude rate, which are then used to predict the location of the 3D model features in the next image.
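
    As a rough illustration of the smoothing step described above, the following constant-velocity Kalman filter smooths per-frame relative position estimates; the state layout, noise levels, and frame rate are assumptions made for illustration and do not reproduce the NFIR implementation:

        import numpy as np

        def smooth_relative_position(meas, dt=1 / 30.0, q=1e-3, r=1e-2):
            """Constant-velocity Kalman smoothing of per-frame relative position,
            in the spirit of the position/velocity filter described above.

            State: [position, velocity] per axis; `meas` is an (N, 3) array of
            per-frame relative position estimates. Noise levels are illustrative.
            """
            F = np.block([[np.eye(3), dt * np.eye(3)],
                          [np.zeros((3, 3)), np.eye(3)]])
            H = np.hstack([np.eye(3), np.zeros((3, 3))])
            Q = q * np.eye(6)
            R = r * np.eye(3)
            x = np.hstack([meas[0], np.zeros(3)])
            P = np.eye(6)
            out = []
            for z in meas:
                x = F @ x                              # predict
                P = F @ P @ F.T + Q
                S = H @ P @ H.T + R                    # update
                K = P @ H.T @ np.linalg.inv(S)
                x = x + K @ (z - H @ x)
                P = (np.eye(6) - K @ H) @ P
                out.append(x.copy())
            return np.array(out)   # smoothed [position, velocity] per frame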

  19. Technologies Render Views of Earth for Virtual Navigation

    NASA Technical Reports Server (NTRS)

    2012-01-01

    On a December night in 1995, 159 passengers and crewmembers died when American Airlines Flight 965 flew into the side of a mountain while en route to Cali, Colombia. A key factor in the tragedy: the pilots had lost situational awareness in the dark, unfamiliar terrain. They had no idea the plane was approaching a mountain until the ground proximity warning system sounded an alarm only seconds before impact. The accident was of the kind most common at the time: CFIT, or controlled flight into terrain, says Trey Arthur, research aerospace engineer in the Crew Systems and Aviation Operations Branch at NASA's Langley Research Center. In situations such as bad weather, fog, or nighttime flights, pilots would rely on airspeed, altitude, and other readings to get an accurate sense of location. Miscalculations and rapidly changing conditions could contribute to a fully functioning, in-control airplane flying into the ground. To improve aviation safety by enhancing pilots' situational awareness even in poor visibility, NASA began exploring the possibilities of synthetic vision: creating a graphical display of the outside terrain on a screen inside the cockpit. How do you display a mountain in the cockpit? You have to have a graphics-powered computer, a terrain database you can render, and an accurate navigation solution, says Arthur. In the mid-1990s, developing GPS technology offered a means for determining an aircraft's position in space with high accuracy, Arthur explains. As the necessary technologies to enable synthetic vision emerged, NASA turned to an industry partner to develop the terrain graphical engine and database for creating the virtual rendering of the outside environment.

  20. Guidance and Navigation Requirements for Unmanned Flyby and Swingby Missions to the Outer Planets. Volume 3; Low Thrust Missions, Phase B

    NASA Technical Reports Server (NTRS)

    1970-01-01

    The guidance and navigation requirements for unmanned missions to the outer planets, assuming constant low-thrust ion propulsion, are discussed. The navigational capability of the ground-based Deep Space Network is compared to the improvements in navigational capability brought about by the addition of guidance- and navigation-related onboard sensors. Relevant onboard sensors include: (1) the optical onboard navigation sensor, (2) the attitude reference sensors, and (3) highly sensitive accelerometers. The totally ground-based system and the combined ground-based and onboard sensor system are compared by means of the estimated errors in the target planet ephemeris and in the spacecraft position with respect to the planet.

  1. Obstacle Detection Algorithms for Aircraft Navigation: Performance Characterization of Obstacle Detection Algorithms for Aircraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Coraor, Lee

    2000-01-01

    The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.

  2. [The history of optical signals for traffic regulation].

    PubMed

    Draeger, J; Harsch, V

    2008-04-01

    For signal transmission in traffic today, different optical, acoustic, and other physical or technical means are used to convey information. The different kinds of traffic (water navigation, road and rail, and, later, air transport) made traffic regulation necessary early on. This regulation, from its very beginning in ancient times, relied on optical signals; nowadays, these remain the most important method. From the very start, minimum requirements for the navigator's vision, color discrimination, dark adaptation, and even visual field were needed. For historical reasons, it was in seafaring medicine that these requirements first developed. Besides the development of the different signals, methods for checking the requirements were soon developed. National and international requirements have varied widely. Only within the last 50 years has international cooperation led to the acceptance of general standards for the different traffic modes. This article discusses the technical development of optical signals for the different kinds of traffic, from ancient times to the present, and explains the development of minimum requirements for the different visual functions.

  3. Wireless physiological monitoring and ocular tracking: 3D calibration in a fully-immersive virtual health care environment.

    PubMed

    Zhang, Lelin; Chi, Yu Mike; Edelstein, Eve; Schulze, Jurgen; Gramann, Klaus; Velasquez, Alvaro; Cauwenberghs, Gert; Macagno, Eduardo

    2010-01-01

    Wireless physiological/neurological monitoring in virtual reality (VR) offers a unique opportunity for unobtrusively quantifying human responses to precisely controlled and readily modulated VR representations of health care environments. Here we present such a wireless, light-weight head-mounted system for measuring electrooculogram (EOG) and electroencephalogram (EEG) activity in human subjects interacting with and navigating in the Calit2 StarCAVE, a five-sided immersive 3-D visualization VR environment. The system can be easily expanded to include other measurements, such as cardiac activity and galvanic skin responses. We demonstrate the capacity of the system to track focus of gaze in 3-D and report a novel calibration procedure for estimating eye movements from responses to the presentation of a set of dynamic visual cues in the StarCAVE. We discuss cyber and clinical applications that include a 3-D cursor for visual navigation in VR interactive environments, and the monitoring of neurological and ocular dysfunction in vision/attention disorders.

  4. Terrain discovery and navigation of a multi-articulated linear robot using map-seeking circuits

    NASA Astrophysics Data System (ADS)

    Snider, Ross K.; Arathorn, David W.

    2006-05-01

    A significant challenge in robotics is providing a robot with the ability to sense its environment and then autonomously move while accommodating obstacles. The DARPA Grand Challenge, one of the most visible examples, set the goal of driving a vehicle autonomously for over a hundred miles avoiding obstacles along a predetermined path. Map-Seeking Circuits have shown their biomimetic capability in both vision and inverse kinematics and here we demonstrate their potential usefulness for intelligent exploration of unknown terrain using a multi-articulated linear robot. A robot that could handle any degree of terrain complexity would be useful for exploring inaccessible crowded spaces such as rubble piles in emergency situations, patrolling/intelligence gathering in tough terrain, tunnel exploration, and possibly even planetary exploration. Here we simulate autonomous exploratory navigation by an interaction of terrain discovery using the multi-articulated linear robot to build a local terrain map and exploitation of that growing terrain map to solve the propulsion problem of the robot.

  5. Area navigation and required navigation performance procedures and depictions

    DOT National Transportation Integrated Search

    2012-09-30

    Area navigation (RNAV) and required navigation performance (RNP) procedures are fundamental to the implementation of a performance based navigation (PBN) system, which is a key enabling technology for the Next Generation Air Transportation System (Ne...

  6. The remarkable visual capacities of nocturnal insects: vision at the limits with small eyes and tiny brains

    PubMed Central

    2017-01-01

    Nocturnal insects have evolved remarkable visual capacities, despite small eyes and tiny brains. They can see colour, control flight and land, react to faint movements in their environment, navigate using dim celestial cues and find their way home after a long and tortuous foraging trip using learned visual landmarks. These impressive visual abilities occur at light levels when only a trickle of photons are being absorbed by each photoreceptor, begging the question of how the visual system nonetheless generates the reliable signals needed to steer behaviour. In this review, I attempt to provide an answer to this question. Part of the answer lies in their compound eyes, which maximize light capture. Part lies in the slow responses and high gains of their photoreceptors, which improve the reliability of visual signals. And a very large part lies in the spatial and temporal summation of these signals in the optic lobe, a strategy that substantially enhances contrast sensitivity in dim light and allows nocturnal insects to see a brighter world, albeit a slower and coarser one. What is abundantly clear, however, is that during their evolution insects have overcome several serious potential visual limitations, endowing them with truly extraordinary night vision. This article is part of the themed issue ‘Vision in dim light’. PMID:28193808
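
    A small numerical illustration (not taken from the review) of why summation helps in dim light: photon arrivals are approximately Poisson, so pooling the signals of N neighbouring receptors or N time steps improves the signal-to-noise ratio by roughly the square root of N, at the cost of spatial or temporal resolution:

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative only: with Poisson photon noise, the SNR of a pooled
        # signal grows as sqrt(pooled count), here shown by simulation.
        mean_photons = 2.0   # photons per receptor per integration time (dim light)
        for n_pooled in (1, 4, 16, 64):
            samples = rng.poisson(mean_photons, size=(100000, n_pooled)).sum(axis=1)
            snr = samples.mean() / samples.std()
            print(f"pooling {n_pooled:3d} receptors -> SNR ~ {snr:.2f}")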

  7. A Standardized Obstacle Course for Assessment of Visual Function in Ultra Low Vision and Artificial Vision

    PubMed Central

    Nau, Amy Catherine; Pintar, Christine; Fisher, Christopher; Jeong, Jong-Hyeon; Jeong, KwonHo

    2014-01-01

    We describe an indoor, portable, standardized course that can be used to evaluate obstacle avoidance in persons who have ultralow vision. Six sighted controls and 36 completely blind but otherwise healthy adult male (n=29) and female (n=13) subjects (age range 19-85 years) were enrolled in one of three studies involving testing of the BrainPort sensory substitution device. Subjects were asked to navigate the course prior to, and after, BrainPort training. They completed a total of 837 course runs in two different locations. Means and standard deviations were calculated across control types, courses, lights, and visits. We used a linear mixed effects model to compare different categories in the PPWS (percent preferred walking speed) and error percent data to show that the course iterations were properly designed. The course is relatively inexpensive, simple to administer, and has been shown to be a feasible way to test mobility function. Data analysis demonstrates that, for the outcome of percent error as well as for percentage preferred walking speed, each of the three courses is different and that, within each level, each of the three iterations is equivalent. This allows for randomization of the courses during administration. Abbreviations: preferred walking speed (PWS); course speed (CS); percentage preferred walking speed (PPWS).
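
    For reference, percentage preferred walking speed is simply the course speed expressed as a fraction of the subject's preferred walking speed. The sketch below assumes course speed is computed as course length divided by completion time, which is a common convention rather than a detail confirmed by the abstract:

        def percent_preferred_walking_speed(course_time_s, course_length_m, pws_m_per_s):
            """PPWS = (course speed / preferred walking speed) * 100.

            Argument names and units are illustrative; the study's exact
            measurement protocol is not reproduced here.
            """
            course_speed = course_length_m / course_time_s   # CS in m/s
            return 100.0 * course_speed / pws_m_per_s

        # Example: a 20 m course walked in 40 s by a subject whose PWS is 1.2 m/s
        print(percent_preferred_walking_speed(40.0, 20.0, 1.2))   # ~41.7 %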

  8. Juno Mission Simulation

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Weidner, Richard J.

    2008-01-01

    The Juno spacecraft is planned to launch in August of 2012 and would arrive at Jupiter four years later. The spacecraft would spend more than one year orbiting the planet and investigating the existence of an ice-rock core; determining the amount of global water and ammonia present in the atmosphere; studying convection and deep-wind profiles in the atmosphere; investigating the origin of the Jovian magnetic field; and exploring the polar magnetosphere. Juno mission management is responsible for mission and navigation design, mission operation planning, and ground-data-system development. In order to ensure successful mission management from initial checkout to final de-orbit, it is critical to share a common vision of all mission operation phases with the rest of the project teams. Two major challenges are 1) how to develop a shared vision that can be appreciated by all of the project teams of diverse disciplines and expertise, and 2) how to continuously evolve a shared vision as the project lifecycle progresses from formulation phase to operation phase. The Juno mission simulation team addresses these challenges by developing agile and progressive mission models, operation simulations, and real-time visualization products. This paper presents mission simulation visualization network (MSVN) technology that has enabled a comprehensive mission simulation suite (MSVN-Juno) for the Juno project.

  9. Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space

    PubMed Central

    Chen, Min; Hashimoto, Koichi

    2017-01-01

    Motivated by biological interests in analyzing the navigation behaviors of flying animals, we attempt to build a system that measures their motion states. To do this, in this paper we build a vision system to detect unknown fast-moving objects within a given space and calculate their motion parameters, represented by positions and poses. We propose a novel method to detect reliable interest points in images of moving objects, which can hardly be detected by general-purpose interest point detectors. 3D points reconstructed from these interest points are then grouped and maintained for detected objects according to a careful schedule that accounts for appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the system's capability to detect multiple unknown moving objects and estimate their positions and poses.

  10. Autonomous Robotic Inspection in Tunnels

    NASA Astrophysics Data System (ADS)

    Protopapadakis, E.; Stentoumis, C.; Doulamis, N.; Doulamis, A.; Loupos, K.; Makantasis, K.; Kopsiaftis, G.; Amditis, A.

    2016-06-01

    In this paper, an automatic robotic inspector for tunnel assessment is presented. The proposed platform is able to autonomously navigate within civil infrastructures, grab stereo images and process/analyse them in order to identify defect types. First, cracks are detected via deep learning approaches. Then, a detailed 3D model of the cracked area is created using photogrammetric methods. Finally, a laser profiling of the tunnel's lining is performed for a narrow region close to the detected crack, allowing potential deformations to be deduced. The robotic platform consists of an autonomous mobile vehicle and a crane arm, guided by the computer-vision-based crack detector, carrying ultrasound sensors, the stereo cameras and the laser scanner. Visual inspection is based on convolutional neural networks, which support the creation of high-level discriminative features for complex non-linear pattern classification. Real-time 3D information is then accurately calculated, and the crack position and orientation are passed to the robotic platform. The entire system has been evaluated in railway and road tunnels, i.e., on the Egnatia Highway and in London Underground infrastructure.

  11. Door detection in images based on learning by components

    NASA Astrophysics Data System (ADS)

    Cicirelli, Grazia; D'Orazio, Tiziana; Ancona, Nicola

    2001-10-01

    In this paper we present a vision-based technique for detecting targets in the environment which have to be reached by an autonomous mobile robot during its navigation task. The targets the robot has to reach are the doors of our office building. Color and shape information are used as identifying features for detecting the principal components of the door. In images the door can appear at different dimensions depending on the attitude of the robot with respect to the door; therefore, the door is detected by detecting its most significant components in the image. Positive and negative examples, in the form of image patterns, are manually selected from real images for training two neural classifiers to recognize the single components. Each classifier is realized by a feed-forward neural network with one hidden layer and a sigmoid activation function. Moreover, for selecting negative examples relevant to the problem at hand, a bootstrap technique has been used during the training process. Finally, the detection system has been applied to several real test images to evaluate its performance.
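
    The classifier type named above (a one-hidden-layer feed-forward network with sigmoid activations) can be sketched as follows; the layer sizes, learning rate, and squared-error training rule are illustrative assumptions, not the authors' exact configuration:

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        class PatchClassifier:
            """One-hidden-layer feed-forward network with sigmoid activations,
            trained by plain gradient descent on labelled image patches
            (positive = door component, negative = background)."""

            def __init__(self, n_in, n_hidden=16, lr=0.1, seed=0):
                rng = np.random.default_rng(seed)
                self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
                self.b1 = np.zeros(n_hidden)
                self.W2 = rng.normal(0, 0.1, (n_hidden, 1))
                self.b2 = np.zeros(1)
                self.lr = lr

            def forward(self, X):
                self.h = sigmoid(X @ self.W1 + self.b1)   # hidden activations
                return sigmoid(self.h @ self.W2 + self.b2)

            def train_step(self, X, y):
                """One gradient-descent step on the batch squared error."""
                p = self.forward(X)                        # (N, 1) predictions
                err = p - y.reshape(-1, 1)
                dz2 = err * p * (1 - p)                    # output-layer delta
                dz1 = (dz2 @ self.W2.T) * self.h * (1 - self.h)
                self.W2 -= self.lr * self.h.T @ dz2 / len(X)
                self.b2 -= self.lr * dz2.mean(axis=0)
                self.W1 -= self.lr * X.T @ dz1 / len(X)
                self.b1 -= self.lr * dz1.mean(axis=0)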

  12. 33 CFR 334.560 - Banana River at Patrick Air Force Base, Fla.; restricted area.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 3 2014-07-01 2014-07-01 false Banana River at Patrick Air Force Base, Fla.; restricted area. 334.560 Section 334.560 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE DANGER ZONE AND RESTRICTED AREA REGULATIONS § 334.560 Banana...

  13. Research on the error model of airborne celestial/inertial integrated navigation system

    NASA Astrophysics Data System (ADS)

    Zheng, Xiaoqiang; Deng, Xiaoguo; Yang, Xiaoxu; Dong, Qiang

    2015-02-01

    The celestial navigation subsystem of an airborne celestial/inertial integrated navigation system periodically corrects the positioning error and heading drift of the inertial navigation system, by which the inertial navigation system can greatly improve the accuracy of long-endurance navigation. Thus the accuracy of the airborne celestial navigation subsystem directly determines the accuracy of the integrated navigation system over long flight times. By building a mathematical model of the airborne celestial navigation system based on the inertial navigation system and using linear coordinate transformations, we establish the error transfer equation for the positioning algorithm of the airborne celestial system. Based on these, we build the positioning error model of the celestial navigation. Using this positioning error model, we then analyze and simulate in MATLAB the positioning error caused by errors of the star tracking platform. Finally, the positioning error model is verified with star observations obtained from an optical measurement device on a test range whose location is known. The analysis and simulation results show that the level accuracy and north accuracy of the tracking platform are important factors limiting the positioning accuracy of airborne celestial navigation systems, and that the positioning error has an approximately linear relationship with the level error and north error of the tracking platform. The verification errors are within 1000 m, which shows that the model is correct.
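
    The reported approximately linear relationship is consistent with simple small-angle error propagation: an angular error of the star-tracking platform shifts the celestial fix by roughly the Earth radius times that angle. The sketch below is an illustrative back-of-the-envelope calculation, not the paper's error model:

        import math

        R_EARTH_M = 6_371_000.0   # mean Earth radius

        def position_error_from_platform_error(level_err_arcsec, north_err_arcsec):
            """Small-angle propagation of platform level/north errors to a
            celestial-fix position error (illustrative approximation only)."""
            level_err_rad = math.radians(level_err_arcsec / 3600.0)
            north_err_rad = math.radians(north_err_arcsec / 3600.0)
            return R_EARTH_M * math.hypot(level_err_rad, north_err_rad)

        # Example: 20 arcsec level error and 10 arcsec north error -> ~690 m,
        # the same order as the sub-1000 m verification errors quoted above.
        print(f"{position_error_from_platform_error(20.0, 10.0):.0f} m")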

  14. Biosonar navigation above water II: exploiting mirror images.

    PubMed

    Genzel, Daria; Hoffmann, Susanne; Prosch, Selina; Firzlaff, Uwe; Wiegrebe, Lutz

    2015-02-15

    As in vision, acoustic signals can be reflected by a smooth surface creating an acoustic mirror image. Water bodies represent the only naturally occurring horizontal and acoustically smooth surfaces. Echolocating bats flying over smooth water bodies encounter echo-acoustic mirror images of objects above the surface. Here, we combined an electrophysiological approach with a behavioral experimental paradigm to investigate whether bats can exploit echo-acoustic mirror images for navigation and how these mirrorlike echo-acoustic cues are encoded in their auditory cortex. In an obstacle-avoidance task where the obstacles could only be detected via their echo-acoustic mirror images, most bats spontaneously exploited these cues for navigation. Sonar ensonifications along the bats' flight path revealed conspicuous changes of the reflection patterns with slightly increased target strengths at relatively long echo delays corresponding to the longer acoustic paths from the mirrored obstacles. Recordings of cortical spatiotemporal response maps (STRMs) describe the tuning of a unit across the dimensions of elevation and time. The majority of cortical single and multiunits showed a special spatiotemporal pattern of excitatory areas in their STRM indicating a preference for echoes with (relative to the setup dimensions) long delays and, interestingly, from low elevations. This neural preference could effectively encode a reflection pattern as it would be perceived by an echolocating bat detecting an object mirrored from below. The current study provides both behavioral and neurophysiological evidence that echo-acoustic mirror images can be exploited by bats for obstacle avoidance. This capability effectively supports echo-acoustic navigation in highly cluttered natural habitats. Copyright © 2015 the American Physiological Society.

  15. Performance Characterization of Obstacle Detection Algorithms for Aircraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Coraor, Lee; Gandhi, Tarak; Hartman, Kerry; Yang, Mau-Tsuen

    2000-01-01

    The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design.

  16. Proposal for continued research in intelligent machines at the Center for Engineering Systems Advanced Research (CESAR) for FY 1988 to FY 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weisbin, C.R.

    1987-03-01

    This document reviews research accomplishments achieved by the staff of the Center for Engineering Systems Advanced Research (CESAR) during the fiscal years 1984 through 1987. The manuscript also describes future CESAR objectives for the 1988-1991 planning horizon, and beyond. As much as possible, the basic research goals are derived from perceived Department of Energy (DOE) needs for increased safety, productivity, and competitiveness in the United States energy producing and consuming facilities. Research areas covered include the HERMIES-II Robot, autonomous robot navigation, hypercube computers, machine vision, and manipulators.

  17. Taking a Concept to Commercialization: Designing Relevant Tests to Address Safety.

    PubMed

    Ferrara, Lisa A

    2016-04-01

    Taking a product from concept to commercialization requires careful navigation of the regulatory pathway through a series of steps: (A) moving the idea through proof of concept and beyond; (B) evaluating new technologies that may provide added value to the idea; (C) designing appropriate test strategies and protocols; and (D) evaluating and mitigating risks. Moving an idea from the napkin stage of development to the final product requires a team effort. When finished, the product rarely resembles the original design, but careful steps throughout the product life cycle ensure that the product meets the vision.

  18. Navigating Declining Budgets, Political Hurdles: A New Vision for the Future of Geoscience

    NASA Astrophysics Data System (ADS)

    Gagosian, Robert B.

    2013-06-01

    The Oklahoma tornadoes, Superstorm Sandy, the Tohoku tsunami, and the Deepwater Horizon oil spill are just a few examples of oceanic, atmospheric, and other Earth system disasters in the past 3 years that together claimed thousands of lives and caused hundreds of billions of dollars of damage. Basic and applied research in the geosciences was essential in supporting the early warnings and forecasts used to protect lives when these natural disasters struck, and in assessing risks and helping society adapt and recover afterward.

  19. Error Characterization of Vision-Aided Navigation Systems

    DTIC Science & Technology

    2013-03-01

    [The source entry contains only figure residue from the original report; the recoverable captions are "Normalized Histogram and Gaussian fit, E Pos Err, i = 560" and "Figure 4.18: Normalized Down Position …", with an axis labelled "Error (m)".]

  20. Combined CT-based and image-free navigation systems in TKA reduces postoperative outliers of rotational alignment of the tibial component.

    PubMed

    Mitsuhashi, Shota; Akamatsu, Yasushi; Kobayashi, Hideo; Kusayama, Yoshihiro; Kumagai, Ken; Saito, Tomoyuki

    2018-02-01

    Rotational malpositioning of the tibial component can lead to poor functional outcome in TKA. Although various surgical techniques have been proposed, precise rotational placement of the tibial component was difficult to accomplish even with the use of a navigation system. The purpose of this study was to assess whether combined CT-based and image-free navigation systems accurately replicate the rotational alignment of the tibial component that was preoperatively planned on CT, compared with the conventional method. We compared the number of outliers for rotational alignment of the tibial component using combined CT-based and image-free navigation systems (navigated group) with those of the conventional method (conventional group). Seventy-two TKAs were performed between May 2012 and December 2014. In the navigated group, the anteroposterior axis was prepared using the CT-based navigation system and the tibial component was positioned under control of the navigation. In the conventional group, the tibial component was placed with reference to the Akagi line, which was determined visually. Fisher's exact probability test was performed to evaluate the results. There was a significant difference between the two groups with regard to the number of outliers: 3 outliers in the navigated group compared with 12 outliers in the conventional group (P < 0.01). We concluded that combined CT-based and image-free navigation systems decreased the number of rotational outliers of the tibial component and helped replicate the accurate rotational alignment of the tibial component that was preoperatively planned.
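
    The group comparison reported above can be reproduced in outline with Fisher's exact test on a 2x2 table of outliers versus non-outliers; equal group sizes of 36 are assumed here purely for illustration, since the abstract states only that 72 TKAs were performed in total:

        from scipy.stats import fisher_exact

        # Assumed 2x2 table: rows = navigated / conventional groups,
        # columns = outliers / non-outliers (36 knees per group assumed).
        table = [[3, 36 - 3],      # navigated: 3 rotational outliers
                 [12, 36 - 12]]    # conventional: 12 rotational outliers
        odds_ratio, p_value = fisher_exact(table)
        print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")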
