Sample records for multi-UAV visual navigation

  1. Survey of computer vision technology for UAV navigation

    NASA Astrophysics Data System (ADS)

    Xie, Bo; Fan, Xiang; Li, Sijian

    2017-11-01

    Navigation based on computer vision technology, which is characterized by strong independence, high precision and immunity to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep-space probes and underwater robots, which further stimulates research on integrated navigation algorithms based on computer vision. In China, with the development of many types of UAV and the third phase of the lunar exploration project under way, there has been significant progress in the study of visual navigation. The paper reviews the development of vision-based navigation in UAV research and concludes that visual navigation is mainly applied to three aspects. (1) Acquisition of UAV navigation parameters. Parameters including UAV attitude, position and velocity can be obtained from the relationship between sensor images and the carrier's attitude, the relationship between instantaneous matching images and reference images, and the relationship between the carrier's velocity and the characteristics of sequential images. (2) Autonomous obstacle avoidance. There are many ways to achieve obstacle avoidance in UAV navigation; the methods based on computer vision, including feature matching, template matching and image-frame analysis, are mainly introduced. (3) Target tracking and positioning. Using the acquired images, the position is calculated with the optical flow method, the MeanShift and CamShift algorithms, Kalman filtering and particle filtering. The paper then describes three kinds of mainstream visual systems. (1) High-speed visual systems, which use a parallel structure so that image detection and processing are carried out at high speed; such systems are applied to rapid-response tasks. (2) Distributed-network visual systems, in which several discrete image sensors at different locations transmit image data to a node processor to increase the sampling rate. (3) Visual systems combined with observers, which combine image sensors with external observers to compensate for the limitations of the visual equipment. To some degree, these systems overcome the shortcomings of early visual systems, namely low frequency, low processing efficiency and strong noise. Finally, the difficulties of vision-based navigation in practical applications are briefly discussed: (1) because of the huge image-processing workload, the real-time performance of the system is poor; (2) because of the large environmental impact, the anti-interference ability of the system is poor; (3) because the system works only in particular environments, its adaptability is poor.
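
    As a concrete illustration of one of the target-tracking techniques named in this survey (Kalman filtering on image measurements), the sketch below runs a constant-velocity Kalman filter on synthetic pixel detections. The model, noise levels and numbers are illustrative assumptions, not taken from any of the surveyed systems.

    ```python
    import numpy as np

    # Minimal constant-velocity Kalman filter tracking a target's pixel position.
    # State x = [u, v, du, dv]; measurements are noisy pixel coordinates [u, v].
    # All models and noise levels are illustrative assumptions.
    dt = 1.0
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)      # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)      # only the pixel position is observed
    Q = 0.01 * np.eye(4)                           # process noise (tuning parameter)
    R = 4.0 * np.eye(2)                            # measurement noise [pixels^2]

    x = np.array([0.0, 0.0, 1.0, 0.5])             # initial state guess
    P = 10.0 * np.eye(4)                           # initial covariance

    rng = np.random.default_rng(0)
    true_pos = np.array([0.0, 0.0])
    true_vel = np.array([1.2, 0.4])                # synthetic truth [pixels/frame]

    for k in range(30):
        # simulate one noisy detection of the target centroid
        true_pos = true_pos + true_vel * dt
        z = true_pos + rng.normal(0.0, 2.0, size=2)

        # predict
        x = F @ x
        P = F @ P @ F.T + Q

        # update with the measurement
        y = z - H @ x                              # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P

    print("estimated position:", x[:2], "estimated velocity:", x[2:])
    ```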

  2. An Unmanned Aerial Vehicle Cluster Network Cruise System for Monitor

    NASA Astrophysics Data System (ADS)

    Jiang, Jirong; Tao, Jinpeng; Xin, Guipeng

    2018-06-01

    The existing maritime cruising system mainly uses manned motorboats to monitor the quality of coastal water and to patrol and maintain navigation-aiding facilities, which leads to high energy consumption, a small monitoring cruise range, insufficient information control and low visualization. In recent years, the application of UAS in the maritime field has alleviated these problems to some extent. The cluster-based unmanned network monitoring cruise system designed in this project uses a floating, self-powered launching platform for small UAVs as a carrier, applies the idea of clustering, and combines the strong controllability of multi-rotor UAVs with their capability to carry customized modules, constituting an unmanned, visualized and normalized monitoring cruise network that realizes maritime cruising, maintenance of navigation aids and monitoring of coastal water quality.

  3. Closing the Gap Between Research and Field Applications for Multi-UAV Cooperative Missions

    DTIC Science & Technology

    2013-09-01

    IMU Inertial Measurement Units INCOSE International Council on Systems Engineering ISR Intelligence Surveillance and Reconnaissance ISTAR...light-weight and low-cost inertial measurement units (IMUs) are widely adopted for navigation of small-scale UAVs. Low-cost IMUs are characterized...by high measurement noise and large measurement biases. Hence pure inertial navigation using low-cost IMUs drifts rapidly. In practice, inertial

  4. Searching Lost People with UAVs: The System and Results of the CLOSE-SEARCH Project

    NASA Astrophysics Data System (ADS)

    Molina, P.; Colomina, I.; Vitoria, T.; Silva, P. F.; Skaloud, J.; Kornus, W.; Prades, R.; Aguilera, C.

    2012-07-01

    This paper will introduce the goals, concept and results of the project named CLOSE-SEARCH, which stands for 'Accurate and safe EGNOS-SoL Navigation for UAV-based low-cost Search-And-Rescue (SAR) operations'. The main goal is to integrate a medium-size, helicopter-type Unmanned Aerial Vehicle (UAV), a thermal imaging sensor and an EGNOS-based multi-sensor navigation system, including an Autonomous Integrity Monitoring (AIM) capability, to support search operations in difficult-to-access areas and/or night operations. The focus of the paper is three-fold. Firstly, the operational and technical challenges of the proposed approach are discussed, such as the ultra-safe multi-sensor navigation system, the use of combined thermal and optical vision (infrared plus visible) for person recognition and Beyond-Line-Of-Sight communications, among others. Secondly, the implementation of the integrity concept for UAV platforms is discussed herein through the AIM approach. Based on the potential of the geodetic quality analysis and on the use of the European EGNOS system as a navigation performance starting point, AIM approaches integrity from the precision standpoint; that is, the derivation of Horizontal and Vertical Protection Levels (HPLs, VPLs) from a realistic precision estimation of the position parameters is performed and compared to predefined Alert Limits (ALs). Finally, some results from the project test campaigns are described to report on particular project achievements. Together with actual Search-and-Rescue teams, the system was operated in realistic, user-chosen test scenarios. In this context, and especially focusing on the EGNOS-based UAV navigation, the AIM capability and also the RGB/thermal imaging subsystem, a summary of the results is presented.
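
    The AIM idea summarized above, deriving Horizontal and Vertical Protection Levels from a realistic precision (covariance) estimate and comparing them to Alert Limits, can be sketched in a few lines. The covariance, scale factors and alert limits below are illustrative assumptions, not values from the CLOSE-SEARCH project.

    ```python
    import numpy as np

    # Illustrative sketch (not the project's actual AIM implementation): derive a
    # Horizontal and a Vertical Protection Level from an estimated position
    # covariance and compare them to predefined Alert Limits.
    P_enu = np.array([[1.2, 0.3, 0.0],      # assumed 3x3 position covariance [m^2]
                      [0.3, 0.9, 0.0],      # in East/North/Up axes
                      [0.0, 0.0, 4.0]])

    k_h, k_v = 6.0, 5.33                    # assumed scale factors for the target
                                            # integrity risk (placeholder values)

    # HPL from the largest semi-axis of the horizontal error ellipse,
    # VPL from the vertical standard deviation.
    eigvals = np.linalg.eigvalsh(P_enu[:2, :2])
    HPL = k_h * np.sqrt(eigvals.max())
    VPL = k_v * np.sqrt(P_enu[2, 2])

    HAL, VAL = 10.0, 15.0                   # assumed Horizontal/Vertical Alert Limits [m]
    available = (HPL <= HAL) and (VPL <= VAL)
    print(f"HPL={HPL:.1f} m, VPL={VPL:.1f} m, navigation available: {available}")
    ```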

  5. A kind of graded sub-pixel motion estimation algorithm combining time-domain characteristics with frequency-domain phase correlation

    NASA Astrophysics Data System (ADS)

    Xie, Bing; Duan, Zhemin; Chen, Yu

    2017-11-01

    Scene-matching navigation can help a UAV achieve autonomous navigation and other missions. However, the multi-frame aerial images taken by a UAV in a complex flight environment are easily affected by jitter, noise and exposure, which leads to image blur, deformation and other issues and results in a lower detection rate for targets in the regions of interest. Aiming at this problem, we propose a graded sub-pixel motion estimation algorithm that combines time-domain characteristics with frequency-domain phase correlation. Experimental results prove the validity and accuracy of the proposed algorithm.
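
    The frequency-domain stage of such a scheme can be illustrated with classical phase correlation, which recovers the integer-pixel translation between two frames from the normalized cross-power spectrum. This is only a generic sketch of that one ingredient; the paper's graded time-domain and sub-pixel refinement steps are not reproduced.

    ```python
    import numpy as np

    def phase_correlation_shift(img_a, img_b):
        """Estimate the integer-pixel translation of img_b relative to img_a via
        frequency-domain phase correlation (generic sketch only)."""
        Fa = np.fft.fft2(img_a)
        Fb = np.fft.fft2(img_b)
        cross_power = np.conj(Fa) * Fb
        cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
        corr = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap indices above half the image size to negative shifts.
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

    # Synthetic test: shift an image by (5, -3) pixels and recover the offset.
    rng = np.random.default_rng(1)
    a = rng.random((128, 128))
    b = np.roll(a, shift=(5, -3), axis=(0, 1))
    print(phase_correlation_shift(a, b))   # expected (5, -3)
    ```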

  6. SqueezePoseNet: Image Based Pose Regression with Small Convolutional Neural Networks for Real Time UAS Navigation

    NASA Astrophysics Data System (ADS)

    Müller, M. S.; Urban, S.; Jutzi, B.

    2017-08-01

    The number of unmanned aerial vehicles (UAVs) is increasing since low-cost airborne systems are available for a wide range of users. The outdoor navigation of such vehicles is mostly based on global navigation satellite system (GNSS) methods to obtain the vehicle's trajectory. The drawbacks of satellite-based navigation are failures caused by occlusions and multi-path interference. Besides this, local image-based solutions like Simultaneous Localization and Mapping (SLAM) and Visual Odometry (VO) can be used, for example, to support the GNSS solution by closing trajectory gaps, but they are computationally expensive. However, if the trajectory estimation is interrupted or not available, a re-localization is mandatory. In this paper we provide a novel method for GNSS-free and fast image-based pose regression in a known area by utilizing a small convolutional neural network (CNN). With on-board processing in mind, we employ a lightweight CNN called SqueezeNet and use transfer learning to adapt the network to pose regression. Our experiments show promising results for GNSS-free and fast localization.
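
    A minimal sketch of the transfer-learning idea is shown below, using torchvision's stock SqueezeNet as a stand-in and replacing its classification head with a 7-value regression head (translation plus quaternion). The head design, loss weighting and dummy data are assumptions for illustration, not the authors' SqueezePoseNet.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # Illustrative sketch of adapting a small CNN to pose regression via transfer
    # learning. The 1000-class classifier head is replaced by a 7-value regression
    # head (3 translation + 4 quaternion components); not the paper's exact network.
    model = models.squeezenet1_1()                    # pretrained weights would be loaded here
    model.classifier = nn.Sequential(                 # regression head (no ReLU: outputs may be negative)
        nn.Dropout(p=0.5),
        nn.Conv2d(512, 7, kernel_size=1),
        nn.AdaptiveAvgPool2d((1, 1)),
    )

    def pose_loss(pred, t_gt, q_gt, beta=250.0):
        """Weighted translation + rotation loss; beta is an assumed hyper-parameter,
        following common pose-regression practice."""
        t_pred, q_pred = pred[:, :3], pred[:, 3:]
        q_pred = q_pred / q_pred.norm(dim=1, keepdim=True)          # unit quaternion
        return nn.functional.mse_loss(t_pred, t_gt) + beta * nn.functional.mse_loss(q_pred, q_gt)

    # One dummy training step on random data, just to show the shapes involved.
    images = torch.randn(4, 3, 224, 224)
    t_gt = torch.randn(4, 3)
    q_gt = nn.functional.normalize(torch.randn(4, 4), dim=1)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    pred = model(images)                              # shape (4, 7)
    loss = pose_loss(pred, t_gt, q_gt)
    loss.backward()
    optimizer.step()
    print("dummy loss:", float(loss))
    ```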

  7. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems.

    PubMed

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-12-17

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
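
    The core geometric idea of DGPS/Vision attitude determination, aligning baseline vectors known in the navigation frame (from differential GPS) with their directions observed in the body frame (from visual tracking), can be sketched with the SVD solution of Wahba's problem. The vectors below are synthetic, and the paper's EKF integration is not shown.

    ```python
    import numpy as np

    def attitude_from_vector_pairs(v_body, v_ned):
        """Return R such that v_body_i ~= R @ v_ned_i for each unit-vector pair
        (SVD solution of Wahba's problem)."""
        B = sum(np.outer(b, n) for b, n in zip(v_body, v_ned))
        U, _, Vt = np.linalg.svd(B)
        d = np.sign(np.linalg.det(U @ Vt))          # enforce a proper rotation
        return U @ np.diag([1.0, 1.0, d]) @ Vt

    # Synthetic truth: chief yawed 30 deg relative to NED.
    yaw = np.deg2rad(30.0)
    R_true = np.array([[np.cos(yaw),  np.sin(yaw), 0.0],
                       [-np.sin(yaw), np.cos(yaw), 0.0],
                       [0.0,          0.0,         1.0]])

    rng = np.random.default_rng(5)
    baselines_ned = [np.array([10.0, 2.0, -1.0]), np.array([-3.0, 8.0, 0.5])]  # from DGPS
    unit = lambda v: v / np.linalg.norm(v)
    v_ned = [unit(b) for b in baselines_ned]
    v_body = [unit(R_true @ b + rng.normal(0, 0.01, 3)) for b in baselines_ned]  # from vision

    R_est = attitude_from_vector_pairs(v_body, v_ned)
    print("yaw estimate [deg]:", np.rad2deg(np.arctan2(R_est[0, 1], R_est[0, 0])))
    ```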

  8. Cruise Missile Penaid Nonproliferation: Hindering the Spread of Countermeasures Against Cruise Missile Defenses

    DTIC Science & Technology

    2014-01-01

    this report treats cruise missile penaids and UAV penaids, sometimes called “self-protection” (see La Franchi, 2004), interchangeably. 8 Cruise...Penaid Export Controls 41 2. Anti-Jam Equipment MTCR Item 11.A.3.b.3 (Avionics): Current text: “Receiving equipment for Global Navigation Satellite...subsystems beyond those for global navigation satellite systems to all sensor, navigation, and communications systems, and add “including multi-mode

  9. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments †

    PubMed Central

    Guerra, Edmundo

    2018-01-01

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially noticeable when compared with other related visual SLAM configurations. In order to improve the observability properties, some measurements of the relative distance between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimation of the aerial vehicles flying in formation. PMID:29701722

  10. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments.

    PubMed

    Trujillo, Juan-Carlos; Munguia, Rodrigo; Guerra, Edmundo; Grau, Antoni

    2018-04-26

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially noticeable when compared with other related visual SLAM configurations. In order to improve the observability properties, some measurements of the relative distance between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimation of the aerial vehicles flying in formation.

  11. The UAV take-off and landing system used for small areas of mobile vehicles

    NASA Astrophysics Data System (ADS)

    Ren, Tian-Yu; Duanmu, Qing-Duo; Wu, Bo-Qi

    2018-03-01

    This work aims to realize a UAV formation cluster system despite the faults and insufficiency of current GPS/Beidou integrated navigation in strong jamming environments. Compass failures and navigation-system errors normally force the use of large landing sites and prevent landing in small, fast-moving recovery areas. To address this, a composite UAV flight control system is formed from a strapdown inertial system and an all-optical system, realizing photoelectric composite strapdown inertial coupling, together with a compound communication mechanism based on laser and microwave telemetry links. The all-optical strapdown inertial and visual navigation system resolves the take-off and landing deviations caused by electromagnetic interference, and the all-optical bidirectional data link performs two-way position correction between the landing site and the aircraft, thus achieving the accurate recovery of a UAV formation cluster in a small, moving area that traditional navigation systems cannot handle. The result is an efficient UAV group take-off and landing system suitable for many tasks: it realizes reliable continuous navigation under complex electromagnetic interference, enables intelligent flight and take-off/landing relative to fast-moving, small recovery sites in such environments, improves the safe operation rate of the UAV and guarantees the operational safety of the aircraft, and has important social value for the application prospects of such aircraft.

  12. Quaternion-Based Unscented Kalman Filter for Accurate Indoor Heading Estimation Using Wearable Multi-Sensor System

    PubMed Central

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-01-01

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, its accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, which includes one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the waist of the pedestrian and on the quadrotor UAV, respectively, compared to the reference path. PMID:25961384
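
    The magnetometer/accelerometer part of such a heading system can be sketched as a tilt-compensated heading computation: the accelerometer's gravity estimate defines the horizontal plane onto which the magnetic field and the body forward axis are projected. This is only the measurement side under a quasi-static assumption; the paper's quaternion UKF fusion with the gyroscopes is not reproduced.

    ```python
    import numpy as np

    def tilt_compensated_heading(acc, mag):
        """Heading [deg] of the body x-axis w.r.t. magnetic north.
        Assumed conventions: z-up body axes, accelerometer at rest reads +g on the
        'up' axis, and the magnetic field has a non-zero horizontal component."""
        up = acc / np.linalg.norm(acc)           # gravity direction defines 'up'
        east = np.cross(mag, up)                 # horizontal east direction in body axes
        east /= np.linalg.norm(east)
        north = np.cross(up, east)               # horizontal north direction in body axes
        fwd = np.array([1.0, 0.0, 0.0])          # body forward axis
        heading = np.degrees(np.arctan2(fwd @ east, fwd @ north))
        return heading % 360.0

    # Example: level sensor facing magnetic north; the field has a northward
    # horizontal component and a downward vertical component (synthetic values).
    acc = np.array([0.0, 0.0, 9.81])
    mag = np.array([0.22, 0.0, -0.41])
    print(tilt_compensated_heading(acc, mag))    # ~0 degrees
    ```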

  13. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    PubMed Central

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-01-01

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information. PMID:27999318

  14. Enabling UAV Navigation with Sensor and Environmental Uncertainty in Cluttered and GPS-Denied Environments

    PubMed Central

    Vanegas, Fernando; Gonzalez, Felipe

    2016-01-01

    Unmanned Aerial Vehicles (UAV) can navigate with low risk in obstacle-free environments using ground control stations that plan a series of GPS waypoints as a path to follow. This GPS waypoint navigation does however become dangerous in environments where the GPS signal is faulty or is only present in some places and when the airspace is filled with obstacles. UAV navigation then becomes challenging because the UAV uses other sensors, which in turn generate uncertainty about its localisation and motion systems, especially if the UAV is a low cost platform. Additional uncertainty affects the mission when the UAV goal location is only partially known and can only be discovered by exploring and detecting a target. This navigation problem is established in this research as a Partially-Observable Markov Decision Process (POMDP), so as to produce a policy that maps a set of motion commands to belief states and observations. The policy is calculated and updated on-line while flying with a newly-developed system for UAV Uncertainty-Based Navigation (UBNAV), to navigate in cluttered and GPS-denied environments using observations and executing motion commands instead of waypoints. Experimental results in both simulation and real flight tests show that the UAV finds a path on-line to a region where it can explore and detect a target without colliding with obstacles. UBNAV provides a new method and an enabling technology for scientists to implement and test UAV navigation missions with uncertainty where targets must be detected using on-line POMDP in real flight scenarios. PMID:27171096
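
    The belief update at the heart of such a POMDP formulation can be sketched as a discrete Bayes filter: the belief is pushed through the action (transition) model and reweighted by the observation likelihood. The grid, models and probabilities below are illustrative, not those of UBNAV.

    ```python
    import numpy as np

    # Minimal belief update for a tiny discrete POMDP: 5 cells along a corridor,
    # one "forward" action with uncertain outcome, and a binary "target seen"
    # observation whose likelihood peaks in the last cell. All numbers are made up.
    n_states = 5
    T = np.array([[0.1, 0.9, 0.0, 0.0, 0.0],      # T[s, s'] = P(s' | s, action="forward")
                  [0.0, 0.1, 0.9, 0.0, 0.0],
                  [0.0, 0.0, 0.1, 0.9, 0.0],
                  [0.0, 0.0, 0.0, 0.1, 0.9],
                  [0.0, 0.0, 0.0, 0.0, 1.0]])
    p_seen = np.array([0.01, 0.01, 0.05, 0.30, 0.90])   # P(z="seen" | s)

    def belief_update(belief, z_seen):
        predicted = belief @ T                           # prediction through the action model
        likelihood = p_seen if z_seen else (1.0 - p_seen)
        posterior = predicted * likelihood               # Bayes correction
        return posterior / posterior.sum()               # normalise

    belief = np.full(n_states, 1.0 / n_states)           # start fully uncertain
    for z in [False, False, True]:                       # observations after 3 "forward" actions
        belief = belief_update(belief, z)
        print(np.round(belief, 3))
    ```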

  15. Enabling UAV Navigation with Sensor and Environmental Uncertainty in Cluttered and GPS-Denied Environments.

    PubMed

    Vanegas, Fernando; Gonzalez, Felipe

    2016-05-10

    Unmanned Aerial Vehicles (UAV) can navigate with low risk in obstacle-free environments using ground control stations that plan a series of GPS waypoints as a path to follow. This GPS waypoint navigation does however become dangerous in environments where the GPS signal is faulty or is only present in some places and when the airspace is filled with obstacles. UAV navigation then becomes challenging because the UAV uses other sensors, which in turn generate uncertainty about its localisation and motion systems, especially if the UAV is a low cost platform. Additional uncertainty affects the mission when the UAV goal location is only partially known and can only be discovered by exploring and detecting a target. This navigation problem is established in this research as a Partially-Observable Markov Decision Process (POMDP), so as to produce a policy that maps a set of motion commands to belief states and observations. The policy is calculated and updated on-line while flying with a newly-developed system for UAV Uncertainty-Based Navigation (UBNAV), to navigate in cluttered and GPS-denied environments using observations and executing motion commands instead of waypoints. Experimental results in both simulation and real flight tests show that the UAV finds a path on-line to a region where it can explore and detect a target without colliding with obstacles. UBNAV provides a new method and an enabling technology for scientists to implement and test UAV navigation missions with uncertainty where targets must be detected using on-line POMDP in real flight scenarios.

  16. Development of Cloud-Based UAV Monitoring and Management System

    PubMed Central

    Itkin, Mason; Kim, Mihui; Park, Younghee

    2016-01-01

    Unmanned aerial vehicles (UAVs) are an emerging technology with the potential to revolutionize commercial industries and the public domain outside of the military. UAVs would be able to speed up rescue and recovery operations from natural disasters and can be used for autonomous delivery systems (e.g., Amazon Prime Air). An increase in the number of active UAV systems in dense urban areas is attributed to an influx of UAV hobbyists and commercial multi-UAV systems. As airspace for UAV flight becomes more limited, it is important to monitor and manage many UAV systems using modern collision avoidance techniques. In this paper, we propose a cloud-based web application that provides real-time flight monitoring and management for UAVs. For each connected UAV, detailed UAV sensor readings from the accelerometer, GPS sensor, ultrasonic sensor and visual position cameras are provided along with status reports from the smaller internal components of UAVs (i.e., motor and battery). The dynamic map overlay visualizes active flight paths and current UAV locations, allowing the user to monitor all aircraft easily. Our system detects and prevents potential collisions by automatically adjusting UAV flight paths and then alerting users to the change. We develop our proposed system and demonstrate its feasibility and performance through simulation. PMID:27854267
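
    The collision-detection step of such a monitoring service can be sketched with a closest-point-of-approach test between two constant-velocity tracks; the re-planning and alerting described in the paper are not shown, and all positions, velocities and thresholds below are made up.

    ```python
    import numpy as np

    def closest_point_of_approach(p1, v1, p2, v2):
        """Return (t_cpa, distance_at_cpa) for two constant-velocity tracks."""
        dp = p1 - p2
        dv = v1 - v2
        denom = dv @ dv
        t_cpa = 0.0 if denom < 1e-9 else max(0.0, -(dp @ dv) / denom)
        d_cpa = np.linalg.norm(dp + dv * t_cpa)
        return t_cpa, d_cpa

    # Two UAVs on straight-line segments (synthetic positions [m] and velocities [m/s]).
    p1, v1 = np.array([0.0, 0.0, 50.0]), np.array([10.0, 0.0, 0.0])
    p2, v2 = np.array([200.0, 30.0, 52.0]), np.array([-9.0, -1.5, 0.0])

    t_cpa, d_cpa = closest_point_of_approach(p1, v1, p2, v2)
    MIN_SEPARATION = 20.0                           # assumed separation threshold [m]
    if d_cpa < MIN_SEPARATION:
        print(f"potential conflict in {t_cpa:.1f} s (min distance {d_cpa:.1f} m)")
    else:
        print(f"no conflict: closest approach {d_cpa:.1f} m at t={t_cpa:.1f} s")
    ```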

  17. Development of Cloud-Based UAV Monitoring and Management System.

    PubMed

    Itkin, Mason; Kim, Mihui; Park, Younghee

    2016-11-15

    Unmanned aerial vehicles (UAVs) are an emerging technology with the potential to revolutionize commercial industries and the public domain outside of the military. UAVs would be able to speed up rescue and recovery operations from natural disasters and can be used for autonomous delivery systems (e.g., Amazon Prime Air). An increase in the number of active UAV systems in dense urban areas is attributed to an influx of UAV hobbyists and commercial multi-UAV systems. As airspace for UAV flight becomes more limited, it is important to monitor and manage many UAV systems using modern collision avoidance techniques. In this paper, we propose a cloud-based web application that provides real-time flight monitoring and management for UAVs. For each connected UAV, detailed UAV sensor readings from the accelerometer, GPS sensor, ultrasonic sensor and visual position cameras are provided along with status reports from the smaller internal components of UAVs (i.e., motor and battery). The dynamic map overlay visualizes active flight paths and current UAV locations, allowing the user to monitor all aircraft easily. Our system detects and prevents potential collisions by automatically adjusting UAV flight paths and then alerting users to the change. We develop our proposed system and demonstrate its feasibility and performance through simulation.

  18. Integrating a High Resolution Optically Pumped Magnetometer with a Multi-Rotor UAV towards 3-D Magnetic Gradiometry

    NASA Astrophysics Data System (ADS)

    Braun, A.; Walter, C. A.; Parvar, K.

    2016-12-01

    Current platforms for collecting magnetic data comprise traditional airborne surveys, which give dense coverage but low resolution, and terrestrial surveys, which give high resolution but low coverage. Both platforms leave a critical observation gap between the ground surface and approximately 100 m above ground elevation, which can be navigated efficiently by new technologies such as Unmanned Aerial Vehicles (UAVs). Specifically, multi-rotor UAV platforms provide the ability to sense the magnetic field as a full 3-D tensor, which increases the quality of the data collected over other current platform types. Payload requirements and target requirements must be balanced to fully exploit the 3-D magnetic tensor. This study outlines the integration of a GEM Systems Cesium Vapour UAV Magnetometer, a Lightware SF-11 Laser Altimeter and a uBlox EVK-7P GPS module with a DJI s900 Multi Rotor UAV. The cesium magnetometer is suspended beneath the UAV platform by a cable of varying length. A set of surveys was carried out to optimize the sensor orientation, the sensor cable length beneath the UAV and the data collection methods of the GEM Systems Cesium Vapour UAV Magnetometer when mounted on the DJI s900. The target for these surveys is a 12 inch steam pipeline located approximately 2 feet below the ground surface. A systematic variation of cable length, sensor orientation and inclination was conducted. The data collected from the UAV magnetometer were compared to a terrestrial survey conducted with the GEM GST-19 Proton Precession Magnetometer at the same elevation, which also served as a reference station. This allowed a cross-examination between the UAV system and a proven industry standard for magnetic field data collection. The surveys resulted in optimizing the above parameters by minimizing instrument error and ensuring reliable data acquisition. The results demonstrate that an optimized UAV magnetometer survey can yield industry-standard measurements.

  19. Research on fast algorithm of small UAV navigation in non-linear matrix reductionism method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Fang, Jiancheng; Sheng, Wei; Cao, Juanjuan

    2008-10-01

    The low Reynolds numbers of small UAVs result in unfavorable aerodynamic conditions for controlled flight, and when operated near the ground a small UAV is seriously affected by low-frequency interference caused by atmospheric disturbances. The GNC system therefore needs high-frequency attitude estimation and control to keep the UAV steady. As the dimensions of small UAVs shrink, their GNC systems increasingly adopt embedded design technology to achieve compactness, light weight and low power consumption; at the same time, the computational capability of the GNC system is limited to a certain extent. A high-speed navigation algorithm is therefore an urgent requirement for the GNC system. Aiming at this requirement, a non-linear matrix reduction approach is adopted in this paper to create a new high-speed navigation algorithm, which holds the radii of the meridian circle and the prime vertical circle constant and linearizes the position-matrix formulae of the navigation equations. Compared with the normal navigation algorithm, this high-speed navigation algorithm reduces the computational load by 17.3%. Within the small UAV's mission radius (20 km), the position error is less than 0.13 m. The results of semi-physical experiments and small-UAV autopilot testing show that this algorithm can realize high-frequency attitude estimation and control, and that it properly rejects the low-frequency interference caused by atmospheric disturbances.
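
    The simplification described above can be illustrated by freezing the meridian and prime-vertical radii of curvature at a nominal latitude and reusing them in the latitude/longitude rate equations. The Earth constants are WGS-84; the mission latitude, altitude and velocity are assumed example values.

    ```python
    import numpy as np

    # Illustrative sketch: within a small mission radius the meridian radius M and
    # prime-vertical radius N are frozen at their values for the nominal operating
    # latitude instead of being recomputed every navigation cycle.
    a = 6378137.0                    # WGS-84 semi-major axis [m]
    e2 = 6.69437999014e-3            # WGS-84 first eccentricity squared

    def radii(lat):
        s2 = np.sin(lat) ** 2
        M = a * (1 - e2) / (1 - e2 * s2) ** 1.5     # meridian radius of curvature
        N = a / np.sqrt(1 - e2 * s2)                # prime-vertical radius of curvature
        return M, N

    lat0 = np.deg2rad(34.0)          # nominal latitude of the mission area (assumed)
    h = 500.0                        # altitude [m] (assumed)
    M0, N0 = radii(lat0)             # frozen once, reused every cycle

    def llh_rates(v_ned, lat, M, N):
        """Latitude/longitude/height rates from NED velocity."""
        vn, ve, vd = v_ned
        return np.array([vn / (M + h), ve / ((N + h) * np.cos(lat)), -vd])

    v_ned = np.array([30.0, 10.0, 0.0])             # example velocity [m/s]
    lat = lat0 + np.deg2rad(0.1)                    # a point elsewhere in the mission area
    exact = llh_rates(v_ned, lat, *radii(lat))
    fast = llh_rates(v_ned, lat, M0, N0)
    print("rate difference [rad/s]:", exact - fast)  # negligible over a ~20 km radius
    ```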

  20. Autonomous agricultural remote sensing systems with high spatial and temporal resolutions

    NASA Astrophysics Data System (ADS)

    Xiang, Haitao

    In this research, two novel agricultural remote sensing (RS) systems, a Stand-alone Infield Crop Monitor RS System (SICMRS) and an autonomous Unmanned Aerial Vehicle (UAV) based RS system, have been studied. A high-resolution digital color and multi-spectral camera was used as the image sensor for the SICMRS system. An artificially intelligent (AI) controller based on an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) was developed. Morrow Plots corn field RS images in the 2004 and 2006 growing seasons were collected by the SICMRS system. The field site contained 8 subplots (9.14 m x 9.14 m) that were planted with corn, and three different fertilizer treatments were used among those subplots. The raw RS images were geometrically corrected, resampled to 10 cm resolution, had the soil background removed, and were calibrated to real reflectance. The RS images from the two growing seasons were studied and 10 different vegetation indices were derived from each day's image. The results of the image processing demonstrated that the vegetation indices have temporal effects: to achieve high-quality RS data, one has to use the right indices and capture the images at the right time in the growing season. The maximum variations in the image data set occur within the V6-V10 stages, which indicates that these stages are the best period to identify the spatial variability caused by nutrient stress in the corn field. The derived vegetation indices were also used to build yield prediction models via linear regression. All of the yield prediction models were evaluated by comparing their R2-values, and the best index model from each day's image was picked based on the highest R2-value. It was shown that the green normalized difference vegetation index (GNDVI) based model is more sensitive for yield prediction than the other index-based models. During the VT-R4 stages, the GNDVI-based models were able to explain more than 95% of the potential corn yield consistently for both seasons; the VT-R4 stages are the best period of time to estimate the corn yield. The SICMRS system is only suitable for RS research at a fixed location. In order to provide more flexibility in RS image collection, a novel UAV-based system has been studied. The UAV-based agricultural RS system used a light helicopter platform equipped with a multi-spectral camera. The UAV control system consisted of an on-board subsystem and a ground station subsystem. For the on-board subsystem, an Extended Kalman Filter (EKF) based UAV navigation system was designed and implemented. The navigation system, using low-cost inertial sensors, a magnetometer, GPS and a single-board computer, was capable of providing continuous estimates of UAV position and attitude at 50 Hz using sensor fusion techniques. The ground station subsystem was designed as an interface between a human operator and the UAV to implement mission planning, flight command activation, and real-time flight monitoring. The navigation system is controlled by the ground station and is able to navigate the UAV in the air to reach the predefined waypoints and trigger the multi-spectral camera, so that the aerial images at each point can be captured automatically. The developed UAV RS system can provide maximum flexibility in crop field RS image collection. It is essential to perform geometric correction and geocoding before an aerial image can be used for precision farming.
    An automatic (no Ground Control Point (GCP) needed) UAV image georeferencing algorithm was developed. This algorithm performs automatic image correction and georeferencing based on the real-time navigation data and a camera lens distortion model. The accuracy of the georeferencing algorithm was better than 90 cm according to a series of tests. The accuracy that was achieved indicates not only that the position solution is good, but also that the attitude error is extremely small. Waypoint planning for the UAV flight was investigated; it suggested that a 16.5% forward overlap and a 15% lateral overlap are required to avoid missing the desired mapping area when the UAV flies above 45 m with a 4.5 mm lens. A whole-field mosaic image can be generated from the individual image georeferencing information. A 0.569 m mosaic error has been achieved, and this accuracy is sufficient for many of the intended precision agricultural applications. With careful interpretation, the UAV images are an excellent source of high spatial and temporal resolution data for precision agricultural applications. (Abstract shortened by UMI.)
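
    The index-based yield modelling described above can be sketched as a GNDVI computation followed by a simple linear regression against observed yields. The reflectances and yields below are synthetic placeholders, not the dissertation's Morrow Plots data.

    ```python
    import numpy as np

    def gndvi(nir, green):
        """Green normalized difference vegetation index from band reflectances."""
        return (nir - green) / (nir + green + 1e-9)

    # Synthetic plot-level mean reflectances and observed yields [t/ha].
    nir   = np.array([0.52, 0.48, 0.43, 0.40, 0.36, 0.31, 0.28, 0.24])
    green = np.array([0.10, 0.11, 0.11, 0.12, 0.12, 0.13, 0.13, 0.14])
    yield_obs = np.array([11.8, 11.1, 10.2, 9.5, 8.6, 7.4, 6.9, 5.8])

    x = gndvi(nir, green)
    slope, intercept = np.polyfit(x, yield_obs, deg=1)       # linear regression
    pred = slope * x + intercept
    ss_res = np.sum((yield_obs - pred) ** 2)
    ss_tot = np.sum((yield_obs - yield_obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    print(f"yield ~ {slope:.1f} * GNDVI + {intercept:.1f},  R^2 = {r2:.3f}")
    ```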

  1. Adaptation of Dubins Paths for UAV Ground Obstacle Avoidance When Using a Low Cost On-Board GNSS Sensor.

    PubMed

    Kikutis, Ramūnas; Stankūnas, Jonas; Rudinskas, Darius; Masiulionis, Tadas

    2017-09-28

    Current research on Unmanned Aerial Vehicles (UAVs) shows a lot of interest in autonomous UAV navigation. This interest is mainly driven by the necessity to meet the rules and restrictions for small UAV flights that are issued by various international and national legal organizations. In order to lower these restrictions, new levels of automation and flight safety must be reached. In this paper, a new method for ground obstacle avoidance derived by using UAV navigation based on the Dubins paths algorithm is presented. The accuracy of the proposed method has been tested, and research results have been obtained by using Software-in-the-Loop (SITL) simulation and real UAV flights, with the measurements done with a low cost Global Navigation Satellite System (GNSS) sensor. All tests were carried out in a three-dimensional space, but the height accuracy was not assessed. The GNSS navigation data for the ground obstacle avoidance algorithm is evaluated statistically.
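
    One building block of Dubins-path navigation can be sketched as follows: the length of the RSR word (right turn, straight segment, right turn) between two planar poses for a fixed minimum turn radius. A complete planner evaluates all six Dubins words and chooses the shortest; the paper's obstacle-avoidance adaptation is not reproduced, and the poses and radius below are examples.

    ```python
    import numpy as np

    def dubins_rsr_length(start, goal, r):
        """start/goal = (x, y, heading in rad, CCW from +x); total RSR path length."""
        x0, y0, th0 = start
        x1, y1, th1 = goal
        # Centres of the right-turn circles lie to the vehicle's right.
        c0 = np.array([x0 + r * np.sin(th0), y0 - r * np.cos(th0)])
        c1 = np.array([x1 + r * np.sin(th1), y1 - r * np.cos(th1)])
        d = np.linalg.norm(c1 - c0)                      # straight-segment length
        psi = np.arctan2(c1[1] - c0[1], c1[0] - c0[0])   # heading of the straight leg
        arc0 = (th0 - psi) % (2 * np.pi)                 # clockwise arc on first circle
        arc1 = (psi - th1) % (2 * np.pi)                 # clockwise arc on second circle
        return r * (arc0 + arc1) + d

    start = (0.0, 0.0, np.deg2rad(90.0))     # take-off point, heading north
    goal = (120.0, 40.0, np.deg2rad(0.0))    # waypoint approached heading east
    print(f"RSR length: {dubins_rsr_length(start, goal, r=25.0):.1f} m")
    ```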

  2. Adaptation of Dubins Paths for UAV Ground Obstacle Avoidance When Using a Low Cost On-Board GNSS Sensor

    PubMed Central

    Kikutis, Ramūnas; Stankūnas, Jonas; Rudinskas, Darius; Masiulionis, Tadas

    2017-01-01

    Current research on Unmanned Aerial Vehicles (UAVs) shows a lot of interest in autonomous UAV navigation. This interest is mainly driven by the necessity to meet the rules and restrictions for small UAV flights that are issued by various international and national legal organizations. In order to lower these restrictions, new levels of automation and flight safety must be reached. In this paper, a new method for ground obstacle avoidance derived by using UAV navigation based on the Dubins paths algorithm is presented. The accuracy of the proposed method has been tested, and research results have been obtained by using Software-in-the-Loop (SITL) simulation and real UAV flights, with the measurements done with a low cost Global Navigation Satellite System (GNSS) sensor. All tests were carried out in a three-dimensional space, but the height accuracy was not assessed. The GNSS navigation data for the ground obstacle avoidance algorithm is evaluated statistically. PMID:28956839

  3. Autonomous Navigation of Small UAVs Based on Vehicle Dynamic Model

    NASA Astrophysics Data System (ADS)

    Khaghani, M.; Skaloud, J.

    2016-03-01

    This paper presents a novel approach to autonomous navigation for small UAVs, in which the vehicle dynamic model (VDM) serves as the main process model within the navigation filter. The proposed method significantly increases the accuracy and reliability of autonomous navigation, especially for small UAVs with low-cost IMUs on-board. This is achieved with no extra sensor added to the conventional INS/GNSS setup. This improvement is of special interest in case of GNSS outages, where inertial coasting drifts very quickly. In the proposed architecture, the solution to VDM equations provides the estimate of position, velocity, and attitude, which is updated within the navigation filter based on available observations, such as IMU data or GNSS measurements. The VDM is also fed with the control input to the UAV, which is available within the control/autopilot system. The filter is capable of estimating wind velocity and dynamic model parameters, in addition to navigation states and IMU sensor errors. Monte Carlo simulations reveal major improvements in navigation accuracy compared to conventional INS/GNSS navigation system during the autonomous phase, when satellite signals are not available due to physical obstruction or electromagnetic interference for example. In case of GNSS outages of a few minutes, position and attitude accuracy experiences improvements of orders of magnitude compared to inertial coasting. It means that during such scenario, the position-velocity-attitude (PVA) determination is sufficiently accurate to navigate the UAV to a home position without any signal that depends on vehicle environment.

  4. Implementation of Autonomous Navigation and Mapping using a Laser Line Scanner on a Tactical Unmanned Aerial Vehicle

    DTIC Science & Technology

    2011-12-01

    study new multi-agent algorithms to avoid collision and obstacles. Others, including Hanford et al. [2], have tried to build low-cost experimental...2007. [2] S. D. Hanford, L. N. Long, and J. F. Horn, “A Small Semi-Autonomous Rotary-Wing Unmanned Air Vehicle (UAV),” 2003 AIAA Atmospheric

  5. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach

    PubMed Central

    Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei

    2017-01-01

    One of the greatest challenges for fixed-wing unmanned aerial vehicles (UAVs) is safe landing. To address this, an on-ground deployed visual approach is developed in this paper. This approach is particularly suitable for landing within global navigation satellite system (GNSS)-denied environments. As for applications, the deployed guidance system makes full use of the ground computing resource and feeds back the aircraft’s real-time localization to its on-board autopilot. Under such circumstances, a separate long baseline stereo architecture is proposed to provide an extendable baseline and a wide-angle field of view (FOV), in contrast to traditional fixed-baseline schemes. Furthermore, an accuracy evaluation of the new type of architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach. PMID:28629189
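
    The triangulation at the core of a long-baseline stereo ground system can be sketched as intersecting two bearing rays from known camera positions and taking the midpoint of their closest points. Calibration, detection and tracking from the paper are not shown; the geometry below is synthetic.

    ```python
    import numpy as np

    def triangulate_two_rays(o1, d1, o2, d2):
        """Rays: p = o + t*d with unit directions d. Returns the midpoint of the
        segment connecting the closest points of the two rays."""
        w = o1 - o2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w, d2 @ w
        denom = a * c - b * b                      # ~0 only for (near-)parallel rays
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
        return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

    target = np.array([60.0, 15.0, 30.0])          # true aircraft position [m]
    o1, o2 = np.array([0.0, 0.0, 1.5]), np.array([40.0, 0.0, 1.5])   # camera positions
    unit = lambda v: v / np.linalg.norm(v)
    d1, d2 = unit(target - o1), unit(target - o2)  # ideal (noise-free) bearings
    print(triangulate_two_rays(o1, d1, o2, d2))    # ~[60, 15, 30]
    ```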

  6. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach.

    PubMed

    Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei

    2017-06-19

    One of the greatest challenges for fixed-wing unmanned aerial vehicles (UAVs) is safe landing. To address this, an on-ground deployed visual approach is developed in this paper. This approach is particularly suitable for landing within global navigation satellite system (GNSS)-denied environments. As for applications, the deployed guidance system makes full use of the ground computing resource and feeds back the aircraft's real-time localization to its on-board autopilot. Under such circumstances, a separate long baseline stereo architecture is proposed to provide an extendable baseline and a wide-angle field of view (FOV), in contrast to traditional fixed-baseline schemes. Furthermore, an accuracy evaluation of the new type of architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach.

  7. Vision and Control for UAVs: A Survey of General Methods and of Inexpensive Platforms for Infrastructure Inspection

    PubMed Central

    Máthé, Koppány; Buşoniu, Lucian

    2015-01-01

    Unmanned aerial vehicles (UAVs) have gained significant attention in recent years. Low-cost platforms using inexpensive sensor payloads have been shown to provide satisfactory flight and navigation capabilities. In this report, we survey vision and control methods that can be applied to low-cost UAVs, and we list some popular inexpensive platforms and application fields where they are useful. We also highlight the sensor suites used where this information is available. We overview, among others, feature detection and tracking, optical flow and visual servoing, low-level stabilization and high-level planning methods. We then list popular low-cost UAVs, selecting mainly quadrotors. We discuss applications, restricting our focus to the field of infrastructure inspection. Finally, as an example, we formulate two use-cases for railway inspection, a less explored application field, and illustrate the usage of the vision and control techniques reviewed by selecting appropriate ones to tackle these use-cases. To select vision methods, we run a thorough set of experimental evaluations. PMID:26121608

  8. Feasibility of Using Synthetic Aperture Radar to Aid UAV Navigation

    PubMed Central

    Nitti, Davide O.; Bovenga, Fabio; Chiaradia, Maria T.; Greco, Mario; Pinelli, Gianpaolo

    2015-01-01

    This study explores the potential of Synthetic Aperture Radar (SAR) to aid Unmanned Aerial Vehicle (UAV) navigation when Inertial Navigation System (INS) measurements are not accurate enough to eliminate drifts from a planned trajectory. This problem can affect medium-altitude long-endurance (MALE) UAV class, which permits heavy and wide payloads (as required by SAR) and flights for thousands of kilometres accumulating large drifts. The basic idea is to infer position and attitude of an aerial platform by inspecting both amplitude and phase of SAR images acquired onboard. For the amplitude-based approach, the system navigation corrections are obtained by matching the actual coordinates of ground landmarks with those automatically extracted from the SAR image. When the use of SAR amplitude is unfeasible, the phase content can be exploited through SAR interferometry by using a reference Digital Terrain Model (DTM). A feasibility analysis was carried out to derive system requirements by exploring both radiometric and geometric parameters of the acquisition setting. We showed that MALE UAV, specific commercial navigation sensors and SAR systems, typical landmark position accuracy and classes, and available DTMs lead to estimated UAV coordinates with errors bounded within ±12 m, thus making feasible the proposed SAR-based backup system. PMID:26225977
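
    The amplitude-based correction idea can be sketched as estimating a 2-D rigid transform (rotation plus translation) between landmark coordinates predicted from the drifted navigation solution and their known ground coordinates, using a least-squares (Procrustes/SVD) fit. The landmark coordinates below are synthetic, and the SAR processing itself is not reproduced.

    ```python
    import numpy as np

    def rigid_transform_2d(src, dst):
        """Find R (2x2) and t (2,) minimising sum ||R @ src_i + t - dst_i||^2."""
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        H = src_c.T @ dst_c
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                       # keep it a proper rotation
            Vt[1, :] *= -1
            R = Vt.T @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    # Landmark coordinates predicted from the (drifted) navigation solution vs.
    # their true ground coordinates (synthetic values, ~1 deg rotation + offset).
    predicted = np.array([[100.0, 200.0], [450.0, 180.0], [300.0, 520.0]])
    truth = predicted @ np.array([[0.9998, -0.0175],
                                  [0.0175,  0.9998]]).T + np.array([8.0, -5.0])

    R, t = rigid_transform_2d(predicted, truth)
    print("heading correction [deg]:", np.degrees(np.arctan2(R[1, 0], R[0, 0])))
    print("position correction [m]:", t)
    ```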

  9. Feasibility of Using Synthetic Aperture Radar to Aid UAV Navigation.

    PubMed

    Nitti, Davide O; Bovenga, Fabio; Chiaradia, Maria T; Greco, Mario; Pinelli, Gianpaolo

    2015-07-28

    This study explores the potential of Synthetic Aperture Radar (SAR) to aid Unmanned Aerial Vehicle (UAV) navigation when Inertial Navigation System (INS) measurements are not accurate enough to eliminate drifts from a planned trajectory. This problem can affect medium-altitude long-endurance (MALE) UAV class, which permits heavy and wide payloads (as required by SAR) and flights for thousands of kilometres accumulating large drifts. The basic idea is to infer position and attitude of an aerial platform by inspecting both amplitude and phase of SAR images acquired onboard. For the amplitude-based approach, the system navigation corrections are obtained by matching the actual coordinates of ground landmarks with those automatically extracted from the SAR image. When the use of SAR amplitude is unfeasible, the phase content can be exploited through SAR interferometry by using a reference Digital Terrain Model (DTM). A feasibility analysis was carried out to derive system requirements by exploring both radiometric and geometric parameters of the acquisition setting. We showed that MALE UAV, specific commercial navigation sensors and SAR systems, typical landmark position accuracy and classes, and available DTMs lead to estimated UAV coordinates with errors bounded within ±12 m, thus making feasible the proposed SAR-based backup system.

  10. EGNOS-Based Multi-Sensor Accurate and Reliable Navigation in Search-And-Rescue Missions with UAVs

    NASA Astrophysics Data System (ADS)

    Molina, P.; Colomina, I.; Vitoria, T.; Silva, P. F.; Stebler, Y.; Skaloud, J.; Kornus, W.; Prades, R.

    2011-09-01

    This paper will introduce and describe the goals, concept and overall approach of the European 7th Framework Programme's project named CLOSE-SEARCH, which stands for 'Accurate and safe EGNOS-SoL Navigation for UAV-based low-cost SAR operations'. The goal of CLOSE-SEARCH is to integrate in a helicopter-type unmanned aerial vehicle a thermal imaging sensor and a multi-sensor navigation system (based on the use of a Barometric Altimeter (BA), a Magnetometer (MAGN), a Redundant Inertial Navigation System (RINS) and an EGNOS-enabled GNSS receiver) with an Autonomous Integrity Monitoring (AIM) capability, to support the search component of Search-And-Rescue operations in remote, difficult-to-access areas and/or in time-critical situations. The proposed integration will result in a hardware and software prototype that will demonstrate an end-to-end functionality, that is, to fly in patterns over a (possibly inaccessible) region of interest during day or night, also under adverse weather conditions, and to locate disaster survivors or lost people there through the detection of body heat. This paper will identify the technical challenges of the proposed approach, from navigating with a BA/MAGN/RINS/GNSS-EGNOS-based integrated system to the interpretation of thermal images for person identification. Moreover, the AIM approach will be described together with the proposed integrity requirements. Finally, this paper will show some results obtained in the project during the first test campaign performed in November 2010. On that day, a prototype was flown in three different missions to assess its high-level performance and to observe some fundamental mission parameters, such as the optimal flying height and flying speed to enable body recognition. The second test campaign is scheduled for the end of 2011.

  11. A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults.

    PubMed

    Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto

    2017-09-29

    The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuron Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to the classic UAV sensor fault detection algorithms, based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantages of this algorithm are that it allows real-time model-free residual analysis from Kalman Filter (KF) estimates and the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.
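
    A classical baseline for this kind of residual analysis is a chi-square gate on the Kalman filter innovation, sketched below with synthetic numbers; the paper's contribution is to train an ANFIS on such residuals online, which is not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def innovation_test(z, z_pred, S, alpha=0.01):
        """Return (normalised innovation squared, fault_flag) for one measurement."""
        nu = z - z_pred
        nis = float(nu @ np.linalg.inv(S) @ nu)          # normalised innovation squared
        threshold = chi2.ppf(1.0 - alpha, df=len(z))     # gate for the chosen false-alarm rate
        return nis, nis > threshold

    # Synthetic example: predicted GNSS position and its innovation covariance.
    S = np.diag([4.0, 4.0, 9.0])                         # predicted innovation covariance [m^2]
    z_pred = np.array([120.0, -45.0, 200.0])             # predicted measurement [m]

    z_ok = z_pred + np.array([1.5, -2.0, 3.0])           # healthy measurement
    z_bad = z_pred + np.array([25.0, 0.0, 0.0])          # jump fault on one axis

    for z in (z_ok, z_bad):
        nis, fault = innovation_test(z, z_pred, S)
        print(f"NIS = {nis:6.1f}  ->  {'FAULT' if fault else 'ok'}")
    ```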

  12. UAV Inspection of Electrical Transmission Infrastructure with Path Conformance Autonomy and Lidar-Based Geofences NASA Report on UTM Reference Mission Flights at Southern Company Flights November 2016

    NASA Technical Reports Server (NTRS)

    Moore, Andrew J.; Schubert, Matthew; Rymer, Nicholas; Balachandran, Swee; Consiglio, Maria; Munoz, Cesar; Smith, Joshua; Lewis, Dexter; Schneider, Paul

    2017-01-01

    Flights at low altitudes in close proximity to electrical transmission infrastructure present serious navigational challenges: GPS and radio communication quality is variable and yet tight position control is needed to measure defects while avoiding collisions with ground structures. To advance unmanned aerial vehicle (UAV) navigation technology while accomplishing a task with economic and societal benefit, a high voltage electrical infrastructure inspection reference mission was designed. An integrated air-ground platform was developed for this mission and tested in two days of experimental flights to determine whether navigational augmentation was needed to successfully conduct a controlled inspection experiment. The airborne component of the platform was a multirotor UAV built from commercial off-the-shelf hardware and software, and the ground component was a commercial laptop running open source software. A compact ultraviolet sensor mounted on the UAV can locate 'hot spots' (potential failure points in the electric grid), so long as the UAV flight path adequately samples the airspace near the power grid structures. To improve navigation, the platform was supplemented with two navigation technologies: lidar-to-polyhedron preflight processing for obstacle demarcation and inspection distance planning, and trajectory management software to enforce inspection standoff distance. Both navigation technologies were essential to obtaining useful results from the hot spot sensor in this obstacle-rich, low-altitude airspace. Because the electrical grid extends into crowded airspaces, the UAV position was tracked with NASA unmanned aerial system traffic management (UTM) technology. The following results were obtained: (1) Inspection of high-voltage electrical transmission infrastructure to locate 'hot spots' of ultraviolet emission requires navigation methods that are not broadly available and are not needed at higher altitude flights above ground structures. (2) The sensing capability of a novel airborne UV detector was verified with a standard ground-based instrument. Flights with this sensor showed that UAV measurement operations and recording methods are viable. With improved sensor range, UAVs equipped with compact UV sensors could serve as the detection elements in a self-diagnosing power grid. (3) Simplification of rich lidar maps to polyhedral obstacle maps reduces data volume by orders of magnitude, so that computation with the resultant maps in real time is possible. This enables real-time obstacle avoidance autonomy. Stable navigation may be feasible in the GPS-deprived environment near transmission lines by a UAV that senses ground structures and compares them to these simplified maps. (4) A new, formally verified path conformance software system that runs onboard a UAV was demonstrated in flight for the first time. It successfully maneuvered the aircraft after a sudden lateral perturbation that models a gust of wind, and processed lidar-derived polyhedral obstacle maps in real time. (5) Tracking of the UAV in the national airspace using the NASA UTM technology was a key safety component of this reference mission, since the flights were conducted beneath the landing approach to a heavily used runway. Comparison to autopilot tracking showed that UTM tracking accurately records the UAV position throughout the flight path.
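
    The 'lidar-to-polyhedron' reduction can be illustrated, under the assumption of a convex-hull simplification as a stand-in for the report's (unspecified) polyhedron construction, by collapsing a dense synthetic point cloud to its hull and testing candidate waypoints against a standoff distance.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull, Delaunay

    # Illustrative sketch: a dense point cloud of a ground structure is reduced to
    # its convex hull, and candidate inspection waypoints are checked against a
    # standoff distance from that polyhedron. The cloud below is synthetic.
    rng = np.random.default_rng(2)
    tower_points = rng.normal(0.0, 1.0, size=(5000, 3)) * [2.0, 2.0, 8.0]   # fake lidar returns

    hull = ConvexHull(tower_points)
    print(f"{len(tower_points)} points reduced to {len(hull.vertices)} hull vertices")

    # Delaunay triangulation of the hull vertices gives fast inside/outside tests.
    hull_tri = Delaunay(tower_points[hull.vertices])

    def violates_standoff(waypoint, standoff):
        """True if the waypoint is inside the (approximately inflated) obstacle
        polyhedron; inflation is approximated by testing a small set of offsets."""
        offsets = standoff * np.array([[0, 0, 0], [1, 0, 0], [-1, 0, 0],
                                       [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])
        return bool(np.any(hull_tri.find_simplex(waypoint + offsets) >= 0))

    print(violates_standoff(np.array([1.0, 0.5, 2.0]), standoff=3.0))   # near the structure
    print(violates_standoff(np.array([30.0, 0.0, 5.0]), standoff=3.0))  # well clear
    ```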

  13. Latency Determination and Compensation in Real-Time GNSS/INS Integrated Navigation Systems

    NASA Astrophysics Data System (ADS)

    Solomon, P. D.; Wang, J.; Rizos, C.

    2011-09-01

    Unmanned Aerial Vehicle (UAV) technology is now commonplace in many defence and civilian environments. However, the high cost of owning and operating a sophisticated UAV has slowed their adoption in many commercial markets. Universities and research groups are actively experimenting with UAVs to further develop the technology, particularly for automated flying operations. The two main UAV platforms used are fixed-wing and helicopter. Helicopter-based UAVs offer many attractive features over fixed-wing UAVs, including vertical take-off, the ability to loiter, and highly dynamic flight. However, the control and navigation of helicopters are significantly more demanding than those of fixed-wing UAVs and as such require a high-bandwidth real-time Position, Velocity, Attitude (PVA) navigation system. In practical Real-Time Navigation Systems (RTNS) there are delays in the processing of the GNSS data prior to the fusion of the GNSS data with the INS measurements. This latency must be compensated for, otherwise it degrades the solution of the navigation filter. This paper investigates the effect of latency in the arrival time of the GNSS data in an RTNS. Several test drives and flights were conducted with a low-cost RTNS and compared with a high-quality GNSS/INS solution. A technique for the real-time, automated and accurate estimation of the GNSS latency in low-cost systems was developed and tested. The latency estimates were then verified through cross-correlation with the time-stamped measurements from the reference system. A delayed-measurement Extended Kalman Filter was then used to allow the real-time fusing of the delayed measurements, and a final system was developed for on-the-fly measurement and compensation of GNSS latency in an RTNS.
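
    The cross-correlation step for latency estimation can be sketched by correlating a signal that appears in both data streams at different lags and taking the lag of maximum correlation; the delayed-measurement EKF that consumes this estimate is not shown. The rates, latency and signals below are synthetic.

    ```python
    import numpy as np

    fs = 50.0                                   # navigation filter rate [Hz] (assumed)
    t = np.arange(0, 20, 1 / fs)
    true_latency_s = 0.12                       # synthetic GNSS delay to be recovered

    # A common dynamic signature (here a made-up velocity signal) observed by both
    # the INS stream (noisy, undelayed) and the GNSS stream (delayed copy).
    signal = np.sin(2 * np.pi * 0.4 * t) + 0.5 * np.sin(2 * np.pi * 1.3 * t)
    ins_vel = signal + np.random.default_rng(3).normal(0, 0.05, t.size)
    gnss_vel = np.interp(t - true_latency_s, t, signal)

    def estimate_latency(ref, delayed, fs):
        ref = ref - ref.mean()
        delayed = delayed - delayed.mean()
        corr = np.correlate(delayed, ref, mode="full")
        lag = np.argmax(corr) - (len(ref) - 1)   # in samples; positive = 'delayed' lags 'ref'
        return lag / fs

    print(f"estimated GNSS latency: {estimate_latency(ins_vel, gnss_vel, fs) * 1000:.0f} ms")
    ```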

  14. Visual navigation of the UAVs on the basis of 3D natural landmarks

    NASA Astrophysics Data System (ADS)

    Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry

    2015-12-01

    This work considers the tracking of a UAV (unmanned aerial vehicle) on the basis of onboard observations of natural landmarks, including azimuth and elevation angles. It is assumed that the UAV's cameras are able to capture the angular position of reference points and to measure the angles of the sight line. Such measurements involve the real position of the UAV in implicit form, and therefore nonlinear filters such as the Extended Kalman Filter (EKF) must be used in order to exploit these measurements for UAV control. Recently it was shown that a modified pseudomeasurement method may be used to control a UAV on the basis of observations of reference points assigned along the UAV path in advance. However, the use of such a set of points requires a cumbersome recognition procedure and a huge volume of on-board memory. Natural landmarks serving as reference points, which can be determined on-line, significantly reduce the on-board memory and the computational burden. The principal difference of this work is the use of 3D reference-point coordinates, which permits the position of the UAV to be determined more precisely and thereby allows it to be guided along the path with higher accuracy, which is extremely important for the successful performance of autonomous missions. The article suggests a new RANSAC for ISOMETRY algorithm and the use of recently developed estimation and control algorithms for tracking a given reference path under external perturbations and noisy angular measurements.

  15. An Application of UAV Attitude Estimation Using a Low-Cost Inertial Navigation System

    NASA Technical Reports Server (NTRS)

    Eure, Kenneth W.; Quach, Cuong Chi; Vazquez, Sixto L.; Hogge, Edward F.; Hill, Boyd L.

    2013-01-01

    Unmanned Aerial Vehicles (UAVs) are playing an increasing role in aviation. Various methods exist for the computation of UAV attitude based on low-cost microelectromechanical systems (MEMS) and Global Positioning System (GPS) receivers. There has been a recent increase in UAV autonomy as sensors are becoming more compact and onboard processing power has increased significantly. Correct UAV attitude estimation will play a critical role in navigation and separation assurance as UAVs share airspace with civil air traffic. This paper describes attitude estimation derived by post-processing data from a small low-cost Inertial Navigation System (INS) recorded during the flight of a subscale commercial off-the-shelf (COTS) UAV. Two discrete-time attitude estimation schemes are presented here in detail. The first is an adaptation of the Kalman Filter to accommodate nonlinear systems, the Extended Kalman Filter (EKF). The EKF returns quaternion estimates of the UAV attitude based on MEMS gyro, magnetometer, accelerometer, and pitot tube inputs. The second scheme is the complementary filter, which is a simpler algorithm that splits the sensor frequency spectrum based on noise characteristics. The necessity to correct both filters for gravity measurement errors during turning maneuvers is demonstrated. It is shown that the proposed algorithms may be used to estimate UAV attitude. The effects of vibration on sensor measurements are discussed. Heuristic tuning comments pertaining to sensor filtering and gain selection to achieve acceptable performance during flight are given. Comparisons of attitude estimation performance are made between the EKF and the complementary filter.
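
    For readers unfamiliar with the second scheme mentioned above, the following is a minimal complementary-filter sketch for roll and pitch (not the flight code from the paper): integrated gyro rates supply the high-frequency band and accelerometer tilt angles the low-frequency band. The crossover constant, axis conventions and the assumption of unaccelerated flight are illustrative; as the abstract notes, the accelerometer tilt is biased during turns and needs a gravity/centripetal correction.

```python
import numpy as np

def complementary_filter(gyro_rp, accel, dt, tau=0.5):
    """Roll/pitch estimation: blend integrated gyro rates (high-frequency band)
    with accelerometer tilt angles (low-frequency band).

    gyro_rp : (N, 2) roll and pitch rates [rad/s]
    accel   : (N, 3) specific force in the body frame [m/s^2]; reads ~(0, 0, +g) when level
    """
    alpha = tau / (tau + dt)                 # crossover set by the time constant tau
    roll = pitch = 0.0
    out = np.zeros((len(accel), 2))
    for k, (w, a) in enumerate(zip(gyro_rp, accel)):
        # Tilt from gravity, valid only in (nearly) unaccelerated flight; during turns
        # this term must be corrected, e.g. using airspeed and yaw rate.
        acc_roll = np.arctan2(a[1], a[2])
        acc_pitch = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        roll = alpha * (roll + w[0] * dt) + (1.0 - alpha) * acc_roll
        pitch = alpha * (pitch + w[1] * dt) + (1.0 - alpha) * acc_pitch
        out[k] = roll, pitch
    return out

# Level, unaccelerated flight with slowly drifting gyros (synthetic data).
dt, N = 0.01, 500
gyro = np.tile([0.02, -0.01], (N, 1))
accel = np.tile([0.0, 0.0, 9.81], (N, 1))
print("final roll/pitch [deg]:", np.degrees(complementary_filter(gyro, accel, dt)[-1]))
```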

  16. Co-Registration of DSMs Generated by UAV and Terrestrial Laser Scanning Systems

    NASA Astrophysics Data System (ADS)

    Ancil Persad, Ravi; Armenakis, Costas

    2016-06-01

    An approach for the co-registration of Digital Surface Models (DSMs) derived from Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) is proposed. Specifically, a wavelet-based feature descriptor for matching surface keypoints on the 2.5D DSMs is developed. DSMs are useful in a wide scope of applications such as 3D building modelling and reconstruction, cultural heritage, urban and environmental planning, aircraft navigation/path routing, accident and crime scene reconstruction, mining, as well as topographic map revision and change detection. For these applications, it is not uncommon to need to automatically align multi-temporal DSMs which may have been acquired from multiple sensors with different specifications over a period of time, and which may have various overlaps. Terrestrial laser scanners usually capture urban facades in an accurate manner; however, this is not the case for building roof structures. On the other hand, vertical photography from UAVs can capture the roofs. Therefore, the automatic fusion of UAV- and laser-scanning-based DSMs is addressed here as it serves various geospatial applications.

  17. Development and Validation of a Controlled Virtual Environment for Guidance, Navigation and Control of Quadrotor UAV

    DTIC Science & Technology

    2013-09-01

    ...Width Modulation; QuarC - Quanser Real-time Control; RC - Remote Controlled; RPV - Remotely Piloted Vehicles; SLAM - Simultaneous Localization and Mapping; UAV - ...development of the following systems: 1. Navigation (GPS, Lidar, etc.); 2. Communication (Datalink); 3. Ground Control Station (GUI, software programming...

  18. Development of a GPS/INS/MAG navigation system and waypoint navigator for a VTOL UAV

    NASA Astrophysics Data System (ADS)

    Meister, Oliver; Mönikes, Ralf; Wendel, Jan; Frietsch, Natalie; Schlaile, Christian; Trommer, Gert F.

    2007-04-01

    Unmanned aerial vehicles (UAVs) can be used for versatile surveillance and reconnaissance missions. If a UAV is capable of flying automatically on a predefined path, the range of possible applications is widened significantly. This paper addresses the development of an integrated GPS/INS/MAG navigation system and a waypoint navigator for a small vertical take-off and landing (VTOL) unmanned four-rotor helicopter with a take-off weight below 1 kg. The core of the navigation system consists of low-cost inertial sensors which are continuously aided with GPS, magnetometer compass, and barometric height information. Due to the fact that the yaw angle becomes unobservable during hovering flight, the integration with a magnetic compass is mandatory. This integration must be robust with respect to errors caused by the terrestrial magnetic field deviation and interference from surrounding electronic devices as well as ferrous metals. The described integration concept with a Kalman filter overcomes the problem that erroneous magnetic measurements lead to an attitude error in the roll and pitch axes. The algorithm provides long-term stable navigation information even during GPS outages, which is mandatory for the flight control of the UAV. In the second part of the paper the guidance algorithms are discussed in detail. These algorithms allow the UAV to operate in a semi-autonomous position hold mode as well as in a completely autonomous waypoint mode. In the position hold mode the helicopter maintains its position regardless of wind disturbances, which eases the pilot's job during hold-and-stare missions. The autonomous waypoint navigator enables flight beyond the range of vision and beyond the range of the radio link. Flight test results of the implemented modes of operation are shown.

  19. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    PubMed

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.

  20. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision

    PubMed Central

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P.

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks. PMID:28900394
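
    The entropy criterion described above can be illustrated with a few lines of code. The sketch below computes the Shannon entropy of a grayscale frame from its intensity histogram; the toy images and the 8-bit, 256-bin assumptions are illustrative and the snippet is not taken from the authors' system.

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (bits) of an 8-bit grayscale image from its histogram.
    Low entropy suggests a single dominant object/background; high entropy
    suggests a cluttered scene containing several objects."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Two toy frames: a nearly uniform scene versus cluttered random texture.
flat = np.full((120, 160), 128, dtype=np.uint8)
clutter = np.random.default_rng(0).integers(0, 256, (120, 160), dtype=np.uint8)
print("flat frame entropy:    %.2f bits" % image_entropy(flat))
print("clutter frame entropy: %.2f bits" % image_entropy(clutter))
```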

  1. UAV-guided navigation for ground robot tele-operation in a military reconnaissance environment.

    PubMed

    Chen, Jessie Y C

    2010-08-01

    A military reconnaissance environment was simulated to examine the performance of ground robotics operators who were instructed to utilise streaming video from an unmanned aerial vehicle (UAV) to navigate their ground robot to the locations of the targets. The effects of participants' spatial ability on their performance and workload were also investigated. Results showed that participants' overall performance (speed and accuracy) was better when they had access to images from larger UAVs with fixed orientations, compared with the other UAV conditions (baseline with no UAV, micro air vehicle, and UAV with orbiting views). Participants experienced the highest workload when the UAV was orbiting. Those individuals with higher spatial ability performed significantly better and reported less workload than those with lower spatial ability. The results of the current study will further the understanding of ground robot operators' target search performance based on streaming video from UAVs. The results will also facilitate the implementation of ground/air robots in military environments and will be useful to the future military system design and training community.

  2. Precision Time Protocol-Based Trilateration for Planetary Navigation

    NASA Technical Reports Server (NTRS)

    Murdock, Ron

    2015-01-01

    Progeny Systems Corporation has developed a high-fidelity, field-scalable, non-Global Positioning System (GPS) navigation system that offers precision localization over communications channels. The system is bidirectional, providing position information to both base and mobile units. It is the first-ever wireless use of the Institute of Electrical and Electronics Engineers (IEEE) Precision Time Protocol (PTP) in a bidirectional trilateration navigation system. The innovation provides a precise and reliable navigation capability to support traverse-path planning systems and other mapping applications, and it establishes a core infrastructure for long-term lunar and planetary occupation. Mature technologies are integrated to provide navigation capability and to support data and voice communications on the same network. On Earth, the innovation is particularly well suited for use in unmanned aerial vehicles (UAVs), as it offers a non-GPS precision navigation and location service for use in GPS-denied environments. Its bidirectional capability provides real-time location data to the UAV operator and to the UAV. This approach optimizes assisted GPS techniques and can be used to determine the presence of GPS degradation, spoofing, or jamming.
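
    The record does not include the trilateration math, but a position fix from several time-of-flight ranges can be sketched as a linear least-squares problem. The anchor layout and ranges below are synthetic, and the conversion from PTP time transfer to range (range = c x one-way delay) is only indicated in a comment.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Linear least-squares 2D position fix from >= 3 anchor positions and measured
    ranges, obtained by differencing the squared-range equations against anchor 0."""
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three base stations; ranges would come from two-way time transfer as
# range = c * one_way_delay. The values below are synthetic.
anchors = [(0.0, 0.0), (500.0, 0.0), (0.0, 400.0)]
truth = np.array([180.0, 240.0])
ranges = [np.hypot(*(truth - np.array(a))) for a in anchors]
print(trilaterate(anchors, ranges))   # ~ [180, 240]
```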

  3. Lidar on small UAV for 3D mapping

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. Michael; Larsson, Håkan

    2014-10-01

    Small UAVs (Unmanned Aerial Vehicles) are currently in an explosive technical development phase. The performance of UAV-system components such as inertial navigation sensors, propulsion, control processors and algorithms is gradually improving. Simultaneously, lidar technologies are continuously developing in terms of reliability and accuracy, as well as speed of data collection, storage and processing. The lidar development towards miniature systems with high data rates has, together with recent UAV development, great potential for new three-dimensional (3D) mapping capabilities. Compared to lidar mapping from manned full-size aircraft, a small unmanned aircraft can be cost efficient over small areas and more flexible for deployment. An advantage of high-resolution lidar compared to 3D mapping from passive (multi-angle) photogrammetry is the ability to penetrate through vegetation and detect partially obscured targets. Another advantage is the ability to obtain 3D data over the whole survey area, without the limited performance of passive photogrammetry in low-contrast areas. The purpose of our work is to demonstrate 3D lidar mapping capability from a small multirotor UAV. We present the first experimental results and the mechanical and electrical integration of the Velodyne HDL-32E lidar on a six-rotor aircraft with a total weight of 7 kg. The rotating lidar is mounted at an angle of 20 degrees from the horizontal plane, giving a vertical field-of-view of 10-50 degrees below the horizon in the aircraft forward direction. For absolute positioning of the 3D data, accurate positioning and orientation of the lidar sensor is of high importance. We evaluate the lidar data position accuracy both based on inertial navigation system (INS) data, and on INS data combined with lidar data. The INS sensors consist of accelerometers, gyroscopes, GPS, magnetometers, and a pressure sensor for altimetry. The lidar range resolution and accuracy are documented, as well as the capability for target surface reflectivity estimation based on measurements on calibration standards. Initial results of the general mapping capability, including detection through partly obscured environments, are demonstrated through field data collection and analysis.
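
    As a companion to the description above, the sketch below shows a common georeferencing chain for a strapdown lidar: each return is rotated from the sensor frame into the body frame (using the mounting rotation and lever arm) and then into the local navigation frame using the INS pose. The Euler convention, mounting values and sample numbers are assumptions, not the authors' calibration.

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Body-to-navigation rotation matrix from Euler angles (Z-Y-X convention assumed)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def georeference(points_lidar, R_body_lidar, lever_arm, ins_pose):
    """Map lidar returns into the local navigation frame:
    p_nav = R_nav_body @ (R_body_lidar @ p_lidar + lever_arm) + position."""
    position, (roll, pitch, yaw) = ins_pose
    R_nav_body = rot_zyx(yaw, pitch, roll)
    p_body = points_lidar @ R_body_lidar.T + lever_arm
    return p_body @ R_nav_body.T + position

# Lidar tilted 20 degrees about the body y-axis, offset 0.10 m from the IMU (assumed values).
R_body_lidar = rot_zyx(0.0, np.deg2rad(20.0), 0.0)
lever_arm = np.array([0.0, 0.0, 0.10])
scan = np.array([[12.0, 0.5, 0.0], [11.8, -0.4, 0.2]])        # metres, lidar frame
pose = (np.array([100.0, 50.0, -30.0]), (0.02, -0.01, np.deg2rad(45.0)))
print(georeference(scan, R_body_lidar, lever_arm, pose))
```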

  4. Diverse Planning for UAV Control and Remote Sensing

    PubMed Central

    Tožička, Jan; Komenda, Antonín

    2016-01-01

    Unmanned aerial vehicles (UAVs) are suited to various remote sensing missions, such as measuring air quality. The conventional method of UAV control is by human operators. Such an approach is limited by the ability of the operators to cooperate while controlling larger fleets of UAVs in a shared area. The remedy for this is to increase the autonomy of the UAVs in planning their trajectories by considering other UAVs and their plans. To provide such an improvement in autonomy, we need better algorithms for generating alternative trajectory variants that the UAV coordination algorithms can utilize. In this article, we define a novel family of multi-UAV sensing problems, solving the allocation of a huge number of tasks (tens of thousands) to a group of configurable UAVs carrying sensors of non-zero weight (comprising the air quality measurement as well), together with two baseline solvers. To solve the problem efficiently, we use an algorithm for diverse trajectory generation and integrate it with a solver for the multi-UAV coordination problem. Finally, we experimentally evaluate the multi-UAV sensing problem solver. The evaluation is done on synthetic and real-world-inspired benchmarks in a multi-UAV simulator. Results show that diverse planning is a valuable method for remote sensing applications containing multiple UAVs. PMID:28009831

  5. Diverse Planning for UAV Control and Remote Sensing.

    PubMed

    Tožička, Jan; Komenda, Antonín

    2016-12-21

    Unmanned aerial vehicles (UAVs) are suited to various remote sensing missions, such as measuring air quality. The conventional method of UAV control is by human operators. Such an approach is limited by the ability of the operators to cooperate while controlling larger fleets of UAVs in a shared area. The remedy for this is to increase the autonomy of the UAVs in planning their trajectories by considering other UAVs and their plans. To provide such an improvement in autonomy, we need better algorithms for generating alternative trajectory variants that the UAV coordination algorithms can utilize. In this article, we define a novel family of multi-UAV sensing problems, solving the allocation of a huge number of tasks (tens of thousands) to a group of configurable UAVs carrying sensors of non-zero weight (comprising the air quality measurement as well), together with two baseline solvers. To solve the problem efficiently, we use an algorithm for diverse trajectory generation and integrate it with a solver for the multi-UAV coordination problem. Finally, we experimentally evaluate the multi-UAV sensing problem solver. The evaluation is done on synthetic and real-world-inspired benchmarks in a multi-UAV simulator. Results show that diverse planning is a valuable method for remote sensing applications containing multiple UAVs.
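
    The records above (this entry and the preceding duplicate) describe a diverse-planning solver that is not reproduced here. For orientation only, the snippet below implements a naive greedy baseline for allocating many sensing tasks to a few capacity-limited UAVs; it stands in for the kind of baseline solver the abstract mentions and is not the authors' method.

```python
import numpy as np

def greedy_allocate(task_xy, uav_xy, capacity):
    """Baseline allocation: repeatedly give the nearest unassigned task to the UAV
    whose current route end is closest, until each UAV reaches its task capacity."""
    task_xy = np.asarray(task_xy, float)
    routes = [[] for _ in uav_xy]
    tips = [np.asarray(p, float) for p in uav_xy]   # current route endpoints
    unassigned = set(range(len(task_xy)))
    while unassigned:
        best = None
        for u, tip in enumerate(tips):
            if len(routes[u]) >= capacity:
                continue
            t = min(unassigned, key=lambda i: np.linalg.norm(task_xy[i] - tip))
            dist = np.linalg.norm(task_xy[t] - tip)
            if best is None or dist < best[2]:
                best = (u, t, dist)
        if best is None:                            # every UAV is full
            break
        u, t, _ = best
        routes[u].append(t)
        tips[u] = task_xy[t]
        unassigned.remove(t)
    return routes

# 200 synthetic sensing waypoints shared by three UAVs with a capacity of 80 tasks each.
tasks = np.random.default_rng(0).uniform(0.0, 1000.0, (200, 2))
uavs = [(0.0, 0.0), (1000.0, 0.0), (500.0, 1000.0)]
print([len(r) for r in greedy_allocate(tasks, uavs, capacity=80)])
```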

  6. Multi-GNSS Receiver for Aerospace Navigation and Positioning Applications

    NASA Astrophysics Data System (ADS)

    Peres, T. R.; Silva, J. S.; Silva, P. F.; Carona, D.; Serrador, A.; Palhinha, F.; Pereira, R.; Véstias, M.

    2014-03-01

    The upcoming Galileo system opens a wide range of new opportunities in the Global Navigation Satellite System (GNSS) market. However, the characteristics of the future GNSS signals require the development of new GNSS receivers. In the frame of the REAGE project, DEIMOS and ISEL have developed a GNSS receiver targeted at aerospace applications, supporting current and future GPS L1 and Galileo E1 signals, based on commercial (or, at most, industrial) grade components. Although the REAGE project aimed at space applications, the REAGE receiver is also applicable to many terrestrial applications (ground or airborne), such as georeferencing and Unmanned Aerial Vehicle (UAV) navigation. This paper presents the architecture and features of the REAGE receiver, as well as some results of the validation campaign with GPS L1 and Galileo E1 signals.

  7. Performance Characteristic MEMS-Based IMUs for UAVs Navigation

    NASA Astrophysics Data System (ADS)

    Mohamed, H. A.; Hansen, J. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, A. B.

    2015-08-01

    Accurate 3D reconstruction has become essential for non-traditional mapping applications such as urban planning, mining industry, environmental monitoring, navigation, surveillance, pipeline inspection, infrastructure monitoring, landslide hazard analysis, indoor localization, and military simulation. The needs of these applications cannot be satisfied by traditional mapping, which is based on dedicated data acquisition systems designed for mapping purposes. Recent advances in hardware and software development have made it possible to conduct accurate 3D mapping without using costly and high-end data acquisition systems. Low-cost digital cameras, laser scanners, and navigation systems can provide accurate mapping if they are properly integrated at the hardware and software levels. Unmanned Aerial Vehicles (UAVs) are emerging as a mobile mapping platform that can provide additional economical and practical advantages. However, such economical and practical requirements need navigation systems that can provide uninterrupted navigation solution. Hence, testing the performance characteristics of Micro-Electro-Mechanical Systems (MEMS) or low cost navigation sensors for various UAV applications is important research. This work focuses on studying the performance characteristics under different manoeuvres using inertial measurements integrated with single point positioning, Real-Time-Kinematic (RTK), and additional navigational aiding sensors. Furthermore, the performance of the inertial sensors is tested during Global Positioning System (GPS) signal outage.

  8. Multi-Unmanned Aerial Vehicle (UAV) Cooperative Fault Detection Employing Differential Global Positioning (DGPS), Inertial and Vision Sensors.

    PubMed

    Heredia, Guillermo; Caballero, Fernando; Maza, Iván; Merino, Luis; Viguria, Antidio; Ollero, Aníbal

    2009-01-01

    This paper presents a method to increase the reliability of Unmanned Aerial Vehicle (UAV) sensor Fault Detection and Identification (FDI) in a multi-UAV context. Differential Global Positioning System (DGPS) and inertial sensors are used for sensor FDI in each UAV. The method uses additional position estimations that augment the individual UAV FDI system. These additional estimations are obtained using images of the same planar scene taken from two different UAVs. Since the accuracy and noise level of the estimation depend on several factors, dynamic replanning of the multi-UAV team can be used to obtain a better estimation in case of faults caused by slow-growing errors of absolute position estimation that cannot be detected by using local FDI in the UAVs. Experimental results with data from two real UAVs are also presented.

  9. Multi-UAV Collaborative Sensor Management for UAV Team Survivability

    DTIC Science & Technology

    2006-08-01

    Multi-UAV Collaborative Sensor Management for UAV Team Survivability. Craig Stoneking, Phil DiBona, and Adria Hughes, Lockheed Martin Advanced... Command, Aviation Applied Technology Directorate. REFERENCES: [1] DiBona, P., Belov, N., Pawlowski, A. (2006). "Plan-Driven Fusion: Shaping the...

  10. A Sensor Aided H.264/AVC Video Encoder for Aerial Video Sequences with In-the-Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high-resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In this way the computational load, and hence the power consumption, is moved to the ground, leaving on board only the task of storing data. Such an approach is important in the case of small multi-rotorcraft UAVs because of their low endurance due to short battery life. Images can be stored on board with either still-image or video data compression. Still-image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms which fail when the motion vectors are significantly long and when the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low-complexity image analysis can still be performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system in order to maximize the encoder performance. Experiments are performed on both simulated and real-world video sequences.
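
    The metadata-only global motion estimate mentioned above can be sketched with a pinhole model for a nadir-looking camera: the expected image translation between frames follows from the UAV displacement, heading and height above ground. The axis conventions (image u along body-right, v along body-forward) and the numbers below are assumptions, not the authors' encoder model.

```python
import numpy as np

def predicted_pixel_shift(delta_ned, yaw, height_agl, focal_px):
    """Predict the dominant image translation between two frames of a nadir-looking
    camera from navigation metadata alone (no pixel analysis).

    delta_ned  : UAV displacement (north, east, down) between the frames [m]
    yaw        : heading [rad]
    height_agl : height above ground [m]
    focal_px   : focal length expressed in pixels
    """
    dn, de, _ = delta_ned
    dx_body = np.cos(yaw) * dn + np.sin(yaw) * de       # forward displacement
    dy_body = -np.sin(yaw) * dn + np.cos(yaw) * de      # rightward displacement
    scale = focal_px / height_agl
    # Ground features appear to move opposite to the platform motion.
    return -dy_body * scale, -dx_body * scale           # (du, dv) in pixels

# 5 m/s ground speed, 4 Hz frame rate, 120 m above ground, 1200 px focal length (assumed).
print(predicted_pixel_shift((1.25, 0.0, 0.0), np.deg2rad(30.0), 120.0, 1200.0))
```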

  11. Towards Autonomous Modular UAV Missions: The Detection, Geo-Location and Landing Paradigm

    PubMed Central

    Kyristsis, Sarantis; Antonopoulos, Angelos; Chanialakis, Theofilos; Stefanakis, Emmanouel; Linardos, Christos; Tripolitsiotis, Achilles; Partsinevelos, Panagiotis

    2016-01-01

    Nowadays, various unmanned aerial vehicle (UAV) applications are becoming increasingly demanding since they require real-time, autonomous and intelligent functions. Towards this end, in the present study, a fully autonomous UAV scenario is implemented, including the tasks of area scanning, target recognition, geo-location, monitoring, following and finally landing on a high-speed moving platform. The underlying methodology includes AprilTag target identification through Graphics Processing Unit (GPU) parallelized processing, image processing and several optimized location and approach algorithms employing gimbal movement, Global Navigation Satellite System (GNSS) readings and UAV navigation. For the experimentation, a commercial and a custom-made quad-copter prototype were used, representing a high- and a low-computational embedded platform alternative, respectively. Along with the successful targeting and following procedures, it is shown that the landing approach can be successfully performed even under high platform speeds. PMID:27827883

  12. Towards Autonomous Modular UAV Missions: The Detection, Geo-Location and Landing Paradigm.

    PubMed

    Kyristsis, Sarantis; Antonopoulos, Angelos; Chanialakis, Theofilos; Stefanakis, Emmanouel; Linardos, Christos; Tripolitsiotis, Achilles; Partsinevelos, Panagiotis

    2016-11-03

    Nowadays, various unmanned aerial vehicle (UAV) applications are becoming increasingly demanding since they require real-time, autonomous and intelligent functions. Towards this end, in the present study, a fully autonomous UAV scenario is implemented, including the tasks of area scanning, target recognition, geo-location, monitoring, following and finally landing on a high-speed moving platform. The underlying methodology includes AprilTag target identification through Graphics Processing Unit (GPU) parallelized processing, image processing and several optimized location and approach algorithms employing gimbal movement, Global Navigation Satellite System (GNSS) readings and UAV navigation. For the experimentation, a commercial and a custom-made quad-copter prototype were used, representing a high- and a low-computational embedded platform alternative, respectively. Along with the successful targeting and following procedures, it is shown that the landing approach can be successfully performed even under high platform speeds.

  13. UAV State Estimation Modeling Techniques in AHRS

    NASA Astrophysics Data System (ADS)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operation. Estimation of the correct state improves navigation accuracy and allows the flight mission to be completed safely. One of the sensor configurations used for UAV state estimation is the Attitude and Heading Reference System (AHRS) with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques in estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.

  14. GPS navigation algorithms for Autonomous Airborne Refueling of Unmanned Air Vehicles

    NASA Astrophysics Data System (ADS)

    Khanafseh, Samer Mahmoud

    Unmanned Air Vehicles (UAVs) have recently generated great interest because of their potential to perform hazardous missions without risking loss of life. If autonomous airborne refueling is possible for UAVs, mission range and endurance will be greatly enhanced. However, concerns about UAV-tanker proximity, dynamic mobility and safety demand that the relative navigation system meets stringent requirements on accuracy, integrity, and continuity. In response, this research focuses on developing high-performance GPS-based navigation architectures for Autonomous Airborne Refueling (AAR) of UAVs. The AAR mission is unique because of the potentially severe sky blockage introduced by the tanker. To address this issue, a high-fidelity dynamic sky blockage model was developed and experimentally validated. In addition, robust carrier phase differential GPS navigation algorithms were derived, including a new method for high-integrity reacquisition of carrier cycle ambiguities for recently-blocked satellites. In order to evaluate navigation performance, world-wide global availability and sensitivity covariance analyses were conducted. The new navigation algorithms were shown to be sufficient for turn-free scenarios, but improvement in performance was necessary to meet the difficult requirements for a general refueling mission with banked turns. Therefore, several innovative methods were pursued to enhance navigation performance. First, a new theoretical approach was developed to quantify the position-domain integrity risk in cycle ambiguity resolution problems. A mechanism to implement this method with partially-fixed cycle ambiguity vectors was derived, and it was used to define tight upper bounds on AAR navigation integrity risk. A second method, where a new algorithm for optimal fusion of measurements from multiple antennas was developed, was used to improve satellite coverage in poor visibility environments such as in AAR. Finally, methods for using data-link extracted measurements as an additional inter-vehicle ranging measurement were also introduced. The algorithms and methods developed in this work are generally applicable to realize high-performance GPS-based navigation in partially obstructed environments. Navigation performance for AAR was quantified through covariance analysis, and it was shown that the stringent navigation requirements for this application are achievable. Finally, a real-time implementation of the algorithms was developed and successfully validated in autopiloted flight tests.

  15. Adaptive pattern for autonomous UAV guidance

    NASA Astrophysics Data System (ADS)

    Sung, Chen-Ko; Segor, Florian

    2013-09-01

    The research done at Fraunhofer IOSB in Karlsruhe within the AMFIS project focuses on a mobile system to support rescue forces in accidents or disasters. The system consists of a ground control station which has the capability to communicate with a large number of heterogeneous sensors and sensor carriers and provides several open interfaces to allow easy integration of additional sensors into the system. Within this research we focus mainly on UAVs such as VTOL (vertical take-off and landing) systems because of their ease of use and their high maneuverability. To increase the positioning capability of the UAV, different onboard processing chains of image exploitation for real-time detection of patterns on the ground, and the interfacing technology for controlling the UAV from the payload during flight, were examined. The earlier proposed static ground pattern was extended by an adaptive component which provides an additional visual communication channel to the aircraft. For this purpose different components were conceived to transfer additional information using changeable patterns on the ground. The adaptive ground pattern and its application suitability had to be tested under external influences. Besides the adaptive ground pattern, the onboard processing chains and the adaptations to the demands of changing patterns are introduced in this paper. The tracking of the guiding points, the UAV navigation and the conversion of the guiding point positions from the images to real-world coordinates in video sequences, as well as the limits of use and the possibilities of an adaptable pattern, are examined.

  16. Autonomous unmanned air vehicles (UAV) techniques

    NASA Astrophysics Data System (ADS)

    Hsu, Ming-Kai; Lee, Ting N.

    2007-04-01

    UAVs (Unmanned Air Vehicles) have great potential in different civilian applications, such as oil pipeline surveillance, precision farming, forest fire fighting (yearly), search and rescue, border patrol, etc. The related UAV industries can generate billions of dollars each year. However, the roadblock to adopting UAVs is that their operation is against FAA (Federal Aviation Administration) and ATC (Air Traffic Control) regulations. In this paper, we have reviewed the latest technologies and research on UAV navigation and obstacle avoidance. We have proposed a system design of Jittering Mosaic Image Processing (JMIP) with stereo vision and optical flow to fulfill the functionalities of autonomous UAVs.

  17. Improving geolocation and spatial accuracies with the modular integrated avionics group (MIAG)

    NASA Astrophysics Data System (ADS)

    Johnson, Einar; Souter, Keith

    1996-05-01

    The modular integrated avionics group (MIAG) is a single unit approach to combining position, inertial and baro-altitude/air data sensors to provide optimized navigation, guidance and control performance. Lear Astronics Corporation is currently working within the navigation community to upgrade existing MIAG performance with precise GPS positioning mechanization tightly integrated with inertial, baro and other sensors. Among the immediate benefits are the following: (1) accurate target location in dynamic conditions; (2) autonomous launch and recovery using airborne avionics only; (3) precise flight path guidance; and (4) improved aircraft and payload stability information. This paper will focus on the impact of using the MIAG with its multimode navigation accuracies on the UAV targeting mission. Gimbaled electro-optical sensors mounted on a UAV can be used to determine ground coordinates of a target at the center of the field of view by a series of vector rotation and scaling computations. The accuracy of the computed target coordinates is dependent on knowing the UAV position and the UAV-to-target offset computation. Astronics performed a series of simulations to evaluate the effects that the improved angular and position data available from the MIAG have on target coordinate accuracy.
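
    The vector rotation and scaling computation referred to above reduces, in the simplest flat-terrain case, to intersecting the sensor line of sight with a ground plane. The sketch below assumes the azimuth and depression angles have already been composed from aircraft attitude and gimbal angles, and it ignores Earth curvature and terrain relief, so it is only an illustration of the geometry, not the MIAG computation.

```python
import numpy as np

def geolocate_flat_terrain(uav_ned, az, depression, ground_down=0.0):
    """Intersect the sensor line of sight with a flat ground plane.

    uav_ned    : UAV position (north, east, down) in a local frame [m]
    az         : line-of-sight azimuth measured from north [rad]
    depression : angle of the line of sight below the horizon [rad]
    """
    n0, e0, d0 = uav_ned
    height = ground_down - d0                   # height above the ground plane
    if depression <= 0.0:
        raise ValueError("line of sight does not intersect the ground")
    ground_range = height / np.tan(depression)
    return np.array([n0 + ground_range * np.cos(az),
                     e0 + ground_range * np.sin(az),
                     ground_down])

# UAV at 800 m above the plane, looking 30 degrees below the horizon toward the north-east.
print(geolocate_flat_terrain((0.0, 0.0, -800.0), np.deg2rad(45.0), np.deg2rad(30.0)))
```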

  18. Development of a Rotary Wing Unmanned Aerial Vehicle (UAV) Simulation Model

    DTIC Science & Technology

    2014-03-01

    Autopilot software comparison (features / language / URL): DIY UAV - 2 DOF proportional controller, Kalman filtering - C - http://autopilot.sourceforge.net; Paparazzi - 3 DOF...proportional controller, basic navigation - OCaml - http://paparazzi.enac.fr; JSBSim - basic control system blockset, sample autopilot...

  19. Colour-based Object Detection and Tracking for Autonomous Quadrotor UAV

    NASA Astrophysics Data System (ADS)

    Kadouf, Hani Hunud A.; Mohd Mustafah, Yasir

    2013-12-01

    With robotics becoming a fundamental aspect of modern society, further research and its consequent applications are ever increasing. Aerial robotics, in particular, covers applications such as surveillance in hostile military zones or search and rescue operations in disaster-stricken areas, where ground navigation is impossible. The increased visual capacity of UAVs (Unmanned Air Vehicles) is also applicable in the support of ground vehicles, to provide supplies for emergency assistance, for scouting purposes, or to extend communication beyond insurmountable land or water barriers. The quadrotor, which is a small UAV, has its lift generated by four rotors and can be controlled by altering the speeds of its motors relative to each other. The four rotors allow for a higher payload than single- or dual-rotor UAVs, which makes it safer and more suitable to carry camera and transmitter equipment. An onboard camera is used to capture and transmit images of the quadrotor's First Person View (FPV) while in flight, in real time, wirelessly to a base station. The aim of this research is to develop an autonomous quadrotor platform capable of transmitting real-time video signals to a base station for processing. The result from the image analysis will be used as feedback in the quadrotor positioning control. To validate the system, the algorithm should have the capacity to make the quadrotor identify, track or hover above stationary or moving objects.
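
    A minimal version of the colour-based detection step described above can be written with OpenCV: threshold the frame in HSV space and take the centroid of the matched blob as the pixel error fed back to the position controller. The colour bounds, the synthetic frame and the offset computation are assumptions for illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def find_colour_target(frame_bgr, lower_hsv, upper_hsv):
    """Return the (x, y) centroid of the colour-matched region, or None if absent."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

# Synthetic FPV frame with a green marker; the pixel offset from the image centre
# would be fed back to the position controller as an error signal.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[100:140, 150:200] = (0, 255, 0)                      # BGR green patch
centroid = find_colour_target(frame, (40, 80, 80), (80, 255, 255))
if centroid is not None:
    cx, cy = centroid
    print("centroid:", centroid, "offset from centre:", (cx - 160, cy - 120))
```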

  20. Demonstrating Acquisition of Real-Time Thermal Data Over Fires Utilizing UAVs

    NASA Technical Reports Server (NTRS)

    Ambrosia, Vincent G.; Wegener, Steven S.; Brass, James A.; Buechel, Sally W.; Peterson, David L. (Technical Monitor)

    2002-01-01

    A disaster mitigation demonstration, designed to integrate remote-piloted aerial platforms, a thermal infrared imaging payload, over-the-horizon (OTH) data telemetry and advanced image geo-rectification technologies, was initiated in 2001. Project FiRE incorporates the use of a remotely piloted Uninhabited Aerial Vehicle (UAV), thermal imagery, and over-the-horizon satellite data telemetry to provide geo-corrected data over a controlled burn to a fire management community in near real time. The experiment demonstrated the use of a thermal multi-spectral scanner, integrated on a large-payload-capacity UAV, distributing data over the horizon via satellite communication telemetry equipment, and precision geo-rectification of the resultant data on the ground for distribution to the Internet. The use of the UAV allowed remotely piloted flight (thereby reducing the potential for loss of human life during hazardous missions) and the ability to "linger and stare" over the fire for extended periods of time (beyond the capabilities of human-pilot endurance). Improved bit-rate telemetry capabilities increased the amount, structure, and information content of the image data relayed to the ground. The integration of precision navigation instrumentation allowed improved accuracy in geo-rectification of the resultant imagery, easing data ingestion and overlay in a GIS framework. We focus on these technological advances and demonstrate how these emerging technologies can be readily integrated to support disaster mitigation and monitoring strategies regionally and nationally.

  1. Unmanned aerial vehicle observations of water surface elevation and bathymetry in the cenotes and lagoons of the Yucatan Peninsula, Mexico

    NASA Astrophysics Data System (ADS)

    Bandini, Filippo; Lopez-Tamayo, Alejandro; Merediz-Alonso, Gonzalo; Olesen, Daniel; Jakobsen, Jakob; Wang, Sheng; Garcia, Monica; Bauer-Gottwein, Peter

    2018-04-01

    Observations of water surface elevation (WSE) and bathymetry of the lagoons and cenotes of the Yucatán Peninsula (YP) in southeast Mexico are of hydrogeological interest. Observations of WSE (orthometric water height above mean sea level, amsl) are required to inform hydrological models, to estimate hydraulic gradients and groundwater flow directions. Measurements of bathymetry and water depth (elevation of the water surface above the bed of the water body) improve current knowledge on how lagoons and cenotes connect through the complicated submerged cave systems and the diffuse flow in the rock matrix. A novel approach is described that uses unmanned aerial vehicles (UAVs) to monitor WSE and bathymetry of the inland water bodies on the YP. UAV-borne WSE observations were retrieved using a radar and a global navigation satellite system on-board a multi-copter platform. Water depth was measured using a tethered floating sonar controlled by the UAV. This sonar provides depth measurements also in deep and turbid water. Bathymetry (wet-bed elevation amsl) can be computed by subtracting water depth from WSE. Accuracy of the WSE measurements is better than 5-7 cm and accuracy of the water depth measurements is estimated to be 3.8% of the actual water depth. The technology provided accurate measurements of WSE and bathymetry in both wetlands (lagoons) and cenotes. UAV-borne technology is shown to be a more flexible and lower cost alternative to manned aircrafts. UAVs allow monitoring of remote areas located in the jungle of the YP, which are difficult to access by human operators.
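
    The bathymetry computation described above is a simple difference, wet-bed elevation = WSE - depth; the sketch below also propagates the quoted accuracies (about 7 cm for WSE and 3.8% of depth for the sonar) as a rough quadrature sum. The sample values are invented and the error combination is an assumption, not the authors' error budget.

```python
import numpy as np

# Wet-bed elevation (bathymetry, amsl) = water surface elevation - measured depth.
wse_amsl = np.array([4.82, 4.79, 4.81])        # radar + GNSS WSE, metres amsl (example values)
depth = np.array([1.6, 7.3, 24.5])             # sonar depth below the surface, metres
bed_elevation = wse_amsl - depth

# Rough 1-sigma uncertainty: ~7 cm WSE accuracy combined with 3.8% of depth in quadrature.
sigma = np.sqrt(0.07 ** 2 + (0.038 * depth) ** 2)
for b, s in zip(bed_elevation, sigma):
    print(f"bed elevation: {b:7.2f} m amsl  (+/- {s:.2f} m)")
```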

  2. Exploitation of Self Organization in UAV Swarms for Optimization in Combat Environments

    DTIC Science & Technology

    2008-03-01

    ...behaviors and entangled hierarchy into the Swarmfare [59] UAV simulation environment to include these models. • Validate this new model's success through... The hierarchy of control emerges from the entangled hierarchy of the state relations at the simulation, swarm and rule/behaviors levels... (Figure and appendix titles: Abstract Model Types (AMT); SO Abstract Model Type Table; Simulators Comparison - MATLAB, Multi UAV, MultiUAV)

  3. Visual signature reduction of unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Zhong, Z. W.; Ma, Z. X.; Jayawijayaningtiyas; Ngoh, J. H. H.

    2016-10-01

    With the emergence of unmanned aerial vehicles (UAVs) in multiple tactical defence missions, there was a need for an efficient visual signature suppression system for stealthier operation. One of our studies experimentally investigated the visual signature reduction of UAVs achieved through an active camouflage system. A prototype was constructed with newly developed operating software, Cloak, to provide active camouflage to the UAV model. The reduction of the visual signature was analysed. Tests of the devices mounted on UAVs were conducted in another study. A series of experiments involved testing of the concept as well as the prototype. The experiments were conducted both in the laboratory and under normal environmental conditions. Results showed certain degrees of blending with the sky to create a camouflage effect. A mini-UAV made mostly of transparent plastic was also designed and fabricated. Because of the transparency of the plastic material, the visibility of this UAV in the air is very low, and therefore the UAV is difficult to detect. After re-designs and tests, a practical system to reduce the visibility of UAVs viewed by human observers from the ground was eventually developed. The system was evaluated during various outdoor tests. The scene target-to-background lightness contrast and the scene target-to-background colour contrast of the adaptive control system prototype were smaller than 10% at a stand-off viewing distance of 20-50 m.

  4. Detection and Learning of Unexpected Behaviors of Systems of Dynamical Systems by Using the Q2 Abstractions

    DTIC Science & Technology

    2017-11-01

    (List-of-figures excerpt) Finite State Machine; Main Ontological Concepts for Representing Structure of a Multi-Agent...; NetLogo Simulation of persistent surveillance of circular plume by 4 UAVs; Flocking Emergent Behaviors in Multi-UAV... (Region) - Undesirable Group Formation; Two UAVs Moving in...

  5. Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor.

    PubMed

    Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung

    2017-08-30

    Unmanned aerial vehicles (UAVs), which are commonly known as drones, have proved to be useful not only on battlefields, where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers in the navigation and control loop, which allows for smart GPS features of drone navigation. However, there are problems if the drones operate in heterogeneous areas with no GPS signal, so it is important to perform research into the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments.

  6. Real-Time Multi-Target Localization from Unmanned Aerial Vehicles

    PubMed Central

    Wang, Xuan; Liu, Jinghong; Zhou, Qianfei

    2016-01-01

    In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multiple targets are calculated using the homogeneous coordinate transformation. On this basis, two methods which can improve the accuracy of the multi-target localization are proposed: (1) a real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude was 1140 m. The multi-target localization results are within the range of allowable error. After applying the lens distortion correction method to a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, the CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions. PMID:28029145

  7. Real-Time Multi-Target Localization from Unmanned Aerial Vehicles.

    PubMed

    Wang, Xuan; Liu, Jinghong; Zhou, Qianfei

    2016-12-25

    In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multiple targets are calculated using the homogeneous coordinate transformation. On this basis, two methods which can improve the accuracy of the multi-target localization are proposed: (1) a real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude was 1140 m. The multi-target localization results are within the range of allowable error. After applying the lens distortion correction method to a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, the CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions.
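
    The RLS filtering step named above (here and in the preceding duplicate record) can be illustrated with a generic recursive-least-squares estimator applied to repeated geolocation fixes of a stationary target. The forgetting factor, the identity observation model and the synthetic scatter are assumptions; the authors' filter is driven by UAV dead reckoning and is not reproduced here.

```python
import numpy as np

class RecursiveLeastSquares:
    """Recursive least squares with forgetting factor lam, estimating parameters w
    in the model y = H w + noise. Used here to smooth repeated geodetic fixes of a
    stationary target (w = constant 2D position, H = identity)."""

    def __init__(self, n, lam=0.98, p0=1e3):
        self.w = np.zeros(n)
        self.P = p0 * np.eye(n)
        self.lam = lam

    def update(self, H, y):
        H, y = np.atleast_2d(H), np.atleast_1d(y)
        S = H @ self.P @ H.T + self.lam * np.eye(len(y))
        K = self.P @ H.T @ np.linalg.inv(S)
        self.w = self.w + K @ (y - H @ self.w)
        self.P = (self.P - K @ H @ self.P) / self.lam
        return self.w

# Noisy per-frame geolocation fixes of one stationary target (synthetic, metres).
rng = np.random.default_rng(0)
truth = np.array([1250.0, -340.0])
rls = RecursiveLeastSquares(n=2)
for _ in range(60):
    fix = truth + rng.normal(0.0, 25.0, 2)      # single-frame scatter
    est = rls.update(np.eye(2), fix)
print("filtered estimate:", np.round(est, 1), " truth:", truth)
```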

  8. Modeling and Optimization of Multiple Unmanned Aerial Vehicles System Architecture Alternatives

    PubMed Central

    Wang, Weiping; He, Lei

    2014-01-01

    Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although only to a limited extent. Confronted with different types of tasks, multiple UAVs usually need to be coordinated. This can be abstracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. Then the corresponding optimization problem is formulated, and an efficient genetic algorithm with a refined crossover operator (GA-RX) is proposed to accomplish the architecting process iteratively in the rest of this paper. The availability and effectiveness of the overall method are validated using two simulations based on two different scenarios. PMID:25140328

  9. Migration strategies for service-enabling ground control stations for unmanned systems

    NASA Astrophysics Data System (ADS)

    Kroculick, Joseph B.

    2011-06-01

    Future unmanned systems will be integrated into the Global Information Grid (GIG) and support net-centric data sharing, where information in a domain is exposed to a wide variety of GIG stakeholders that can make use of the information provided. Adopting a Service-Oriented Architecture (SOA) approach to package reusable UAV control station functionality into common control services provides a number of benefits, including enabling dynamic plug-and-play of components depending on changing mission requirements, supporting information sharing across the enterprise, and integrating information from authoritative sources such as mission planners with the UAV control station's data model. It also allows the wider enterprise community to use the services provided by unmanned systems and improves data quality to support more effective decision-making. We explore current challenges in migrating UAV control systems that manage multiple types of vehicles to a Service-Oriented Architecture (SOA). Service-oriented analysis involves reviewing legacy systems and determining which components can be made into a service. Existing UAV control stations provide audio/visual, navigation, and vehicle health and status information that are useful to C4I systems. However, many were designed to be closed systems with proprietary software and hardware implementations, message formats, and specific mission requirements. An architecture analysis can be performed that reviews legacy systems and determines which components can be made into a service. A phased SOA adoption approach can then be developed that improves system interoperability.

  10. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case studies have been conducted using a variety of point densities, terrain types and building densities. The results have been encouraging. More work is required for better processing of, for example, forested areas, buildings with sides that are not at right angles or are not straight, and single trees that impinge on buildings. Further work may also be required to ensure that the buildings extracted are of fully cartographic quality. A first version will be included in production software later in 2011. In addition to the standard geospatial applications and the UAV navigation, the results have a further advantage: since LiDAR data tends to be accurately georeferenced, the building models extracted can be used to refine image metadata whenever the same buildings appear in imagery for which the GPS/IMU values are poorer than those for the LiDAR.

  11. Mission Specification and Control for Unmanned Aerial and Ground Vehicles for Indoor Target Discovery and Tracking

    DTIC Science & Technology

    2010-01-01

    ...open garage leading to the building interior. The UAV is positioned north of a potential ingress to the building. As the mission begins, the UAV...camera; the difficulty in detecting and navigating around obstacles using this non-stereo camera necessitated a precomputed map of all obstacles and...

  12. Vision-Based Target Finding and Inspection of a Ground Target Using a Multirotor UAV System.

    PubMed

    Hinas, Ajmal; Roberts, Jonathan M; Gonzalez, Felipe

    2017-12-17

    In this paper, a system that uses a target detection and navigation algorithm and a multirotor Unmanned Aerial Vehicle (UAV) for finding a ground target and inspecting it closely is presented. The system can also be used for accurate and safe delivery of payloads or spot-spraying applications in site-specific crop management. A downward-looking camera attached to the multirotor is used to find the target on the ground. The UAV descends to the target and hovers above it for a few seconds to inspect it. A high-level decision algorithm based on an OODA (observe, orient, decide, and act) loop was developed as a solution to address the problem. Navigation of the UAV was achieved by continuously sending local position messages to the autopilot via MAVROS. The proposed system performed hovering above the target in three different stages: locate, descend, and hover. The system was tested in multiple trials, in simulations and outdoor tests, from heights of 10 m to 40 m. Results show that the system is highly reliable and robust to sensor errors, drift, and external disturbance.
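
    The locate/descend/hover flow described above can be sketched as a small decision loop in the spirit of the OODA cycle. The gains, thresholds, frame convention (z positive down) and scanning behaviour below are illustrative assumptions, not the authors' controller; in the real system the resulting setpoints would be sent to the autopilot as local position/velocity messages.

```python
import enum

class Stage(enum.Enum):
    LOCATE = 0
    DESCEND = 1
    HOVER = 2

def step(stage, target_visible, offset_px, altitude, hover_time,
         centred_px=25, hover_alt=3.0, inspect_s=5.0):
    """One decision cycle: returns the next stage and a velocity setpoint
    (vx, vy, vz) in a body-level frame with z positive down."""
    gain = 0.002                                          # pixel error -> m/s (assumed)
    vx, vy = -gain * offset_px[1], gain * offset_px[0]    # drive the target to the image centre
    if stage is Stage.LOCATE:
        if not target_visible:
            return Stage.LOCATE, (1.0, 0.0, 0.0)          # keep scanning forward
        return Stage.DESCEND, (vx, vy, 0.0)
    if stage is Stage.DESCEND:
        if not target_visible:
            return Stage.LOCATE, (0.0, 0.0, 0.0)          # lost the target, re-acquire
        centred = max(abs(offset_px[0]), abs(offset_px[1])) < centred_px
        if altitude <= hover_alt and centred:
            return Stage.HOVER, (0.0, 0.0, 0.0)
        return Stage.DESCEND, (vx, vy, 0.5 if centred else 0.0)
    # HOVER: hold above the target until the inspection time has elapsed.
    if hover_time >= inspect_s:
        return Stage.HOVER, (0.0, 0.0, 0.0)               # a mission manager would take over here
    return Stage.HOVER, (vx, vy, 0.0)

print(step(Stage.LOCATE, target_visible=True, offset_px=(40, -10),
           altitude=30.0, hover_time=0.0))
```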

  13. Spacecraft Guidance, Navigation, and Control Visualization Tool

    NASA Technical Reports Server (NTRS)

    Mandic, Milan; Acikmese, Behcet; Blackmore, Lars

    2011-01-01

    G-View is a 3D visualization tool for supporting spacecraft guidance, navigation, and control (GN&C) simulations relevant to small-body exploration and sampling (see figure). The tool is developed in MATLAB using Virtual Reality Toolbox and provides users with the ability to visualize the behavior of their simulations, regardless of which programming language (or machine) is used to generate simulation results. The only requirement is that multi-body simulation data is generated and placed in the proper format before applying G-View.

  14. Distributed subterranean exploration and mapping with teams of UAVs

    NASA Astrophysics Data System (ADS)

    Rogers, John G.; Sherrill, Ryan E.; Schang, Arthur; Meadows, Shava L.; Cox, Eric P.; Byrne, Brendan; Baran, David G.; Curtis, J. Willard; Brink, Kevin M.

    2017-05-01

    Teams of small autonomous UAVs can be used to map and explore unknown environments which are inaccessible to teams of human operators in humanitarian assistance and disaster relief efforts (HA/DR). In addition to HA/DR applications, teams of small autonomous UAVs can enhance Warfighter capabilities and provide operational stand-off for military operations such as cordon and search, counter-WMD, and other intelligence, surveillance, and reconnaissance (ISR) operations. This paper will present a hardware platform and software architecture to enable distributed teams of heterogeneous UAVs to navigate, explore, and coordinate their activities to accomplish a search task in a previously unknown environment.

  15. Tracking, aiming, and hitting the UAV with ordinary assault rifle

    NASA Astrophysics Data System (ADS)

    Racek, František; Baláž, Teodor; Krejčí, Jaroslav; Procházka, Stanislav; Macko, Martin

    2017-10-01

    The usage of small unmanned aerial vehicles (UAVs) is increasing significantly nowadays. They are being used as carriers of military spy and reconnaissance devices (taking photos, live video streaming and so on), or as carriers of potentially dangerous cargo (intended for destruction and killing). Both ways of utilizing the UAV create the necessity to disable it. From the military point of view, to disable the UAV means to bring it down with the weapon of an ordinary soldier, that is, the assault rifle. This task can be challenging for the soldier because he needs to visually detect and identify the target, track the target visually and aim at the target. The final success of the soldier's mission depends not only on the said visual tasks, but also on the properties of the weapon and ammunition. The paper deals with possible methods for predicting the probability of hitting UAV targets.

  16. Adaptive UAV Attitude Estimation Employing Unscented Kalman Filter, FOAM and Low-Cost MEMS Sensors

    PubMed Central

    de Marina, Héctor García; Espinosa, Felipe; Santos, Carlos

    2012-01-01

    Navigation employing low-cost MicroElectroMechanical Systems (MEMS) sensors in Unmanned Aerial Vehicles (UAVs) is an emerging challenge. One important part of this navigation is the correct estimation of the attitude angles. Most of the existing algorithms handle the sensor readings in a fixed way, leading to large errors in certain mission stages such as take-off and aerobatic maneuvers. This paper presents an adaptive method to estimate these angles using off-the-shelf components. The paper introduces an Attitude Heading Reference System (AHRS) based on the Unscented Kalman Filter (UKF) using the Fast Optimal Attitude Matrix (FOAM) algorithm as the observation model. The performance of the method is assessed through simulations. Moreover, field experiments are presented using a real fixed-wing UAV. The proposed low-cost solution, implemented in a microcontroller, shows satisfactory real-time performance. PMID:23012559

  17. Visualizing Dynamic Weather and Ocean Data in Google Earth

    NASA Astrophysics Data System (ADS)

    Castello, C.; Giencke, P.

    2008-12-01

    Katrina. Climate change. Rising sea levels. Low lake levels. These headliners, and countless others like them, underscore the need to better understand our changing oceans and lakes. Over the past decade, efforts such as the Global Ocean Observing System (GOOS) have added to this understanding through the creation of interoperable ocean observing systems. These systems, including buoy networks, gliders, UAVs, etc., have resulted in a dramatic increase in the amount of Earth observation data available to the public. Unfortunately, these data tend to be difficult for mass consumption, owing to large file sizes, incompatible formats, and/or a dearth of user-friendly visualization software. Google Earth offers a flexible way to visualize Earth observation data. Marrying high-resolution orthoimagery, user-friendly query and navigation tools, and the power of OGC's KML standard, Google Earth can make observation data universally understandable and accessible. This presentation will feature examples of meteorological and oceanographic data visualized using KML and Google Earth, along with tools and tips for integrating other such environmental datasets.

  18. A LiDAR and IMU Integrated Indoor Navigation System for UAVs and Its Application in Real-Time Pipeline Classification

    PubMed Central

    Kumar, G. Ajay; Patil, Ashok Kumar; Patil, Rekha; Park, Seong Sill; Chai, Young Ho

    2017-01-01

    Mapping the environment of a vehicle and localizing a vehicle within that unknown environment are complex issues. Although many approaches based on various types of sensory inputs and computational concepts have been successfully utilized for ground robot localization, there is difficulty in localizing an unmanned aerial vehicle (UAV) due to variation in altitude and motion dynamics. This paper proposes a robust and efficient indoor mapping and localization solution for a UAV integrated with low-cost Light Detection and Ranging (LiDAR) and Inertial Measurement Unit (IMU) sensors. Considering the advantage of the typical geometric structure of indoor environments, the planar position of UAVs can be efficiently calculated from a point-to-point scan matching algorithm using measurements from a horizontally scanning primary LiDAR. The altitude of the UAV with respect to the floor can be estimated accurately using a vertically scanning secondary LiDAR scanner, which is mounted orthogonally to the primary LiDAR. Furthermore, a Kalman filter is used to derive the 3D position by fusing primary and secondary LiDAR data. Additionally, this work presents a novel method for its application in the real-time classification of a pipeline in an indoor map by integrating the proposed navigation approach. Classification of the pipeline is based on the pipe radius estimation considering the region of interest (ROI) and the typical angle. The ROI is selected by finding the nearest neighbors of the selected seed point in the pipeline point cloud, and the typical angle is estimated with the directional histogram. Experimental results are provided to determine the feasibility of the proposed navigation system and its integration with real-time application in industrial plant engineering. PMID:28574474
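
    The altitude estimation step lends itself to a compact illustration. The sketch below is a generic 1-D Kalman filter of the kind that could fuse the secondary (vertically scanning) LiDAR height readings with a constant-velocity prediction; the noise values, time step and helper name are placeholders, not the parameters used in the paper.

```python
import numpy as np

def kalman_altitude(z_meas, dt=0.05, q=0.01, r=0.04):
    """z_meas: sequence of secondary-LiDAR height readings (m)."""
    x = np.array([z_meas[0], 0.0])            # state: [height, vertical speed]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity model
    H = np.array([[1.0, 0.0]])                # LiDAR observes height only
    Q = q * np.eye(2)
    R = np.array([[r]])
    out = []
    for z in z_meas:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        y = np.array([z]) - H @ x             # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return out

print(kalman_altitude([1.00, 1.02, 0.98, 1.05, 1.50, 1.52]))
```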

  19. A Three-Dimensional Simulation and Visualization System for UAV Photogrammetry

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Qu, Y.; Cui, T.

    2017-08-01

    Nowadays UAVs are widely used for large-scale surveying and mapping. Compared with manned aircraft, UAVs are more cost-effective and responsive. However, UAVs are usually more sensitive to wind conditions, which greatly influence their positions and orientations. The flight height of a UAV is relatively low, and the relief of the terrain may result in serious occlusions. Moreover, the observations acquired by the Position and Orientation System (POS) are usually less accurate than those acquired in manned aerial photogrammetry. All of these factors introduce uncertainties into UAV photogrammetry. To investigate these uncertainties, a three-dimensional simulation and visualization system has been developed. The system is demonstrated with flight plan evaluation, image matching, POS-supported direct georeferencing, and ortho-mosaicing. Experimental results show that the presented system is effective for flight plan evaluation. The generated image pairs are accurate and false matches can be effectively filtered. The presented system dynamically visualizes the results of direct georeferencing in three dimensions, which is informative and effective for real-time target tracking and positioning. The dynamically generated orthomosaic can be used in emergency applications. The presented system has also been used for teaching the theory and applications of UAV photogrammetry.

  20. Multi-Mode Estimation for Small Fixed Wing Unmanned Aerial Vehicle Localization Based on a Linear Matrix Inequality Approach

    PubMed Central

    Elzoghby, Mostafa; Li, Fu; Arafa, Ibrahim. I.; Arif, Usman

    2017-01-01

    Information fusion from multiple sensors ensures the accuracy and robustness of a navigation system, especially in the absence of global positioning system (GPS) data, which are degraded in many cases. A way to deal with multi-mode estimation for a small fixed-wing unmanned aerial vehicle (UAV) localization framework is proposed, which relies on a Luenberger observer-based linear matrix inequality (LMI) approach. The proposed estimation technique relies on the interaction between multiple measurement modes and a continuous observer. The state estimation is performed in a switching environment between multiple active sensors to exploit the available information as much as possible, especially in GPS-denied environments. A Luenberger observer-based projection is implemented as a continuous observer to optimize the estimation performance. The observer gain may be chosen by solving a Lyapunov equation by means of an LMI algorithm. Convergence is achieved by utilizing the LMI, based on Lyapunov stability, which keeps the dynamic estimation error bounded through the selection of the observer gain matrix (L). Simulation results are presented for a small fixed-wing UAV localization problem. The results obtained using the proposed approach are compared with a single-mode Extended Kalman Filter (EKF), and demonstrate the viability of the proposed strategy. PMID:28420214
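
    As a rough illustration of the observer design problem (not the paper's LMI formulation), the snippet below picks a Luenberger gain L by pole placement with SciPy so that A - LC is stable; the two-state model and pole locations are toy assumptions.

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [0.0, -0.1]])        # toy kinematic model: position and velocity
C = np.array([[1.0, 0.0]])         # only position is measured in this mode

# Observer poles chosen (arbitrarily) to be stable and faster than the plant;
# pole placement stands in here for the paper's Lyapunov/LMI-based gain selection.
L = place_poles(A.T, C.T, [-2.0, -3.0]).gain_matrix.T
print("Observer gain L:\n", L)
print("Eigenvalues of A - L C:", np.linalg.eigvals(A - L @ C))
```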

  1. Navigation Constellation Design Using a Multi-Objective Genetic Algorithm

    DTIC Science & Technology

    2015-03-26

    programs. This specific tool not only offers high fidelity simulations, but it also offers the visual aid provided by STK. The ability to ... MATLAB and STK. STK is a program that allows users to model, analyze, and visualize space systems. Users can create objects such as satellites and ... position dilution of precision (PDOP) and system cost. This thesis utilized Satellite Tool Kit (STK) to calculate PDOP values of navigation

  2. Using Unmanned Aerial Vehicle (UAV) for spatio-temporal monitoring of soil erosion and roughness in Chania, Crete, Greece

    NASA Astrophysics Data System (ADS)

    Alexakis, Dimitrios; Seiradakis, Kostas; Tsanis, Ioannis

    2016-04-01

    This article presents a remote sensing approach for spatio-temporal monitoring of both soil erosion and roughness using an Unmanned Aerial Vehicle (UAV). Soil erosion by water is commonly known as one of the main reasons for land degradation. Gully erosion causes considerable soil loss and soil degradation. Furthermore, quantification of soil roughness (irregularities of the soil surface due to soil texture) is important because it affects surface storage and infiltration. Soil roughness is one of the characteristics most susceptible to variation in time and space and depends on different parameters such as cultivation practices and soil aggregation. A UAV equipped with a digital camera was employed to monitor soil in terms of erosion and roughness in two different study areas in Chania, Crete, Greece. The UAV followed pre-planned flight paths computed by the relevant flight planning software. The photogrammetric image processing enabled the development of sophisticated Digital Terrain Models (DTMs) and ortho-image mosaics with very high, sub-decimeter resolution. The DTMs were developed using photogrammetric processing of more than 500 images acquired with the UAV from different heights above ground level. As the geomorphic formations can be observed from above using UAVs, shadowing effects do not generally occur and the generated point clouds have very homogeneous and high point densities. The DTMs generated from the UAV were compared in terms of vertical absolute accuracy with a Global Navigation Satellite System (GNSS) survey. The developed data products were used for quantifying gully erosion and soil roughness in 3D as well as for the analysis of the surrounding areas. The significant elevation changes from multi-temporal UAV elevation data were used for estimating soil loss and sediment delivery over time without installing sediment traps. Concerning roughness, statistical indicators of surface elevation point measurements were estimated and various parameters such as the standard deviation of the DTM, the deviation of residuals and the standard deviation of prominence were calculated directly from the extracted DTM. Sophisticated statistical filters and elevation indices were developed to quantify both soil erosion and roughness. The applied methodology for monitoring both soil erosion and roughness provides an optimal way of reducing the existing gap between the field scale and the satellite scale. Keywords: UAV, soil, erosion, roughness, DTM
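
    The core DTM-differencing step can be sketched as follows; the cell size, change threshold and synthetic elevations are illustrative assumptions, not the study's values.

```python
import numpy as np

def erosion_volume(dtm_t0, dtm_t1, cell_size=0.05, min_change=0.02):
    """Volume (m^3) lost where the surface dropped by more than min_change metres."""
    dz = dtm_t1 - dtm_t0                      # negative values = surface lowering
    lowered = np.where(dz < -min_change, -dz, 0.0)
    return float(lowered.sum() * cell_size**2)

# Tiny synthetic example: a 3x3 patch where one cell lost 0.10 m of soil
t0 = np.zeros((3, 3))
t1 = t0.copy()
t1[1, 1] = -0.10
print(erosion_volume(t0, t1))                 # 0.10 m over one 5 cm x 5 cm cell
```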

  3. Formation Flight of Multiple UAVs via Onboard Sensor Information Sharing.

    PubMed

    Park, Chulwoo; Cho, Namhoon; Lee, Kyunghyun; Kim, Youdan

    2015-07-17

    To monitor large areas or simultaneously measure multiple points, multiple unmanned aerial vehicles (UAVs) must be flown in formation. To perform such flights, sensor information generated by each UAV should be shared via communications. Although a variety of studies have focused on the algorithms for formation flight, these studies have mainly demonstrated the performance of formation flight using numerical simulations or ground robots, which do not reflect the dynamic characteristics of UAVs. In this study, an onboard sensor information sharing system and formation flight algorithms for multiple UAVs are proposed. The communication delays of radiofrequency (RF) telemetry are analyzed to enable the implementation of the onboard sensor information sharing system. Using the sensor information sharing, the formation guidance law for multiple UAVs, which includes both a circular and close formation, is designed. The hardware system, which includes avionics and an airframe, is constructed for the proposed multi-UAV platform. A numerical simulation is performed to demonstrate the performance of the formation flight guidance and control system for multiple UAVs. Finally, a flight test is conducted to verify the proposed algorithm for the multi-UAV system.

  4. Formation Flight of Multiple UAVs via Onboard Sensor Information Sharing

    PubMed Central

    Park, Chulwoo; Cho, Namhoon; Lee, Kyunghyun; Kim, Youdan

    2015-01-01

    To monitor large areas or simultaneously measure multiple points, multiple unmanned aerial vehicles (UAVs) must be flown in formation. To perform such flights, sensor information generated by each UAV should be shared via communications. Although a variety of studies have focused on the algorithms for formation flight, these studies have mainly demonstrated the performance of formation flight using numerical simulations or ground robots, which do not reflect the dynamic characteristics of UAVs. In this study, an onboard sensor information sharing system and formation flight algorithms for multiple UAVs are proposed. The communication delays of radiofrequency (RF) telemetry are analyzed to enable the implementation of the onboard sensor information sharing system. Using the sensor information sharing, the formation guidance law for multiple UAVs, which includes both a circular and close formation, is designed. The hardware system, which includes avionics and an airframe, is constructed for the proposed multi-UAV platform. A numerical simulation is performed to demonstrate the performance of the formation flight guidance and control system for multiple UAVs. Finally, a flight test is conducted to verify the proposed algorithm for the multi-UAV system. PMID:26193281
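
    A hedged sketch of a circular-formation slot assignment of the kind the abstract describes is shown below; the radius, slot ordering and guidance rule are arbitrary assumptions rather than the paper's guidance law.

```python
import math

def circular_slot(leader_xy, leader_heading, idx, n_uavs, radius=30.0):
    """Desired (x, y) for follower idx in an n_uavs circular formation around the leader."""
    angle = leader_heading + 2.0 * math.pi * idx / n_uavs
    return (leader_xy[0] + radius * math.cos(angle),
            leader_xy[1] + radius * math.sin(angle))

def heading_to(own_xy, slot_xy):
    """Guidance heading command (rad) toward the assigned slot."""
    return math.atan2(slot_xy[1] - own_xy[1], slot_xy[0] - own_xy[0])

# Follower 1 of 4 computes its slot from the leader position shared over RF telemetry
slot = circular_slot((0.0, 0.0), 0.0, idx=1, n_uavs=4)
print(slot, heading_to((100.0, 50.0), slot))
```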

  5. Real-time target tracking and locating system for UAV

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Tang, Linbo; Fu, Huiquan; Li, Maowen

    2017-07-01

    In order to achieve real-time target tracking and locating for a UAV, a reliable processing system is built on an embedded platform. Firstly, the video image is acquired in real time by the electro-optical system on the UAV. When the target information is known, the KCF tracking algorithm is adopted to track the target. Then the servo is controlled to rotate with the target; when the target is in the center of the image, the laser ranging module is activated to obtain the distance between the UAV and the target. Finally, the range measurement is combined with the UAV flight parameters obtained from the BeiDou navigation system in a target location algorithm that calculates the geodetic coordinates of the target. The results show that the system provides stable real-time target tracking and positioning.
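
    The final geolocation step can be illustrated with a simplified flat-Earth calculation; the local ENU frame, angle conventions and values below are assumptions for illustration rather than the paper's exact algorithm, which works with BeiDou-derived geodetic coordinates.

```python
import math

def locate_target(uav_enu, yaw, pitch, laser_range):
    """uav_enu: (east, north, up) of the UAV in metres.
    yaw: camera azimuth from north (rad); pitch: depression angle below the horizon (rad)."""
    horiz = laser_range * math.cos(pitch)              # horizontal component of the range
    east  = uav_enu[0] + horiz * math.sin(yaw)
    north = uav_enu[1] + horiz * math.cos(yaw)
    up    = uav_enu[2] - laser_range * math.sin(pitch) # target is below the UAV
    return east, north, up

# UAV 120 m up, camera pointing 30 degrees below the horizon toward the north-east
print(locate_target((0.0, 0.0, 120.0), math.radians(45), math.radians(30), 200.0))
```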

  6. Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor

    PubMed Central

    Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung

    2017-01-01

    Unmanned aerial vehicles (UAVs), which are commonly known as drones, have proved to be useful not only on battlefields where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers in the navigation and control loop, which enables smart GPS features of drone navigation. However, problems arise if the drones operate in areas with no GPS signal, so it is important to research the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments. PMID:28867775

  7. Unmanned air vehicle: autonomous takeoff and landing

    NASA Astrophysics Data System (ADS)

    Lim, K. L.; Gitano-Briggs, Horizon Walker

    2010-03-01

    UAVs are increasing in popularity and sophistication due to the demonstrated performance which cannot be attained by manned aircraft [1]. These developments have been made possible by the development of sensors, instrumentation, telemetry and controls during the last few decades. UAVs are now common in areas such as aerial observation and as communication relays [3]. Most UAVs, however, are still flown by a human pilot via remote control from a ground station. Even the existing autonomous UAVs often require a human pilot to handle the most difficult tasks of takeoff and landing (TOL) [2]. This is mainly because the navigation of the airplane requires observation, constant situational assessment and hours of experience from the pilot himself [4]. Therefore, an autonomous takeoff and landing system (TLS) for UAVs using a few practical design rules with various sensors, instrumentation, etc. has been developed. This paper details the design and modeling of the UAV TLS. The model indicates that the UAV's TLS shows promising stability.

  8. Unmanned air vehicle: autonomous takeoff and landing

    NASA Astrophysics Data System (ADS)

    Lim, K. L.; Gitano-Briggs, Horizon Walker

    2009-12-01

    UAVs are increasing in popularity and sophistication due to the demonstrated performance which cannot be attained by manned aircraft [1]. These developments have been made possible by the development of sensors, instrumentation, telemetry and controls during the last few decades. UAVs are now common in areas such as aerial observation and as communication relays [3]. Most UAVs, however, are still flown by a human pilot via remote control from a ground station. Even the existing autonomous UAVs often require a human pilot to handle the most difficult tasks of takeoff and landing (TOL) [2]. This is mainly because the navigation of the airplane requires observation, constant situational assessment and hours of experience from the pilot himself [4]. Therefore, an autonomous takeoff and landing system (TLS) for UAVs using a few practical design rules with various sensors, instrumentation, etc. has been developed. This paper details the design and modeling of the UAV TLS. The model indicates that the UAV's TLS shows promising stability.

  9. Laser- and Multi-Spectral Monitoring of Natural Objects from UAVs

    NASA Astrophysics Data System (ADS)

    Reiterer, Alexander; Frey, Simon; Koch, Barbara; Stemmler, Simon; Weinacker, Holger; Hoffmann, Annemarie; Weiler, Markus; Hergarten, Stefan

    2016-04-01

    The paper describes the research, development and evaluation of a lightweight sensor system for UAVs. The system is composed of three main components: (1) a laser scanning module, (2) a multi-spectral camera system, and (3) a processing/storage unit. All three components are newly developed. Besides measurement precision and frequency, the low weight has been one of the most challenging requirements. The current system has a total weight of about 2.5 kg and is designed as a self-contained unit (incl. storage and battery units). The main features of the system are: laser-based multi-echo 3D measurement at a wavelength of 905 nm (fully eye-safe), measurement range up to 200 m, measurement frequency of 40 kHz, scanning frequency of 16 Hz, and relative distance accuracy of 10 mm. The system is equipped with both GNSS and IMU. Alternatively, a multi-visual-odometry system has been integrated to estimate the trajectory of the UAV from image features (based on this system, a calculation of 3D coordinates without GNSS is possible). The integrated multi-spectral camera system is based on conventional CMOS image chips equipped with a special set of band-pass interference filters with a full width at half maximum (FWHM) of 50 nm. Good results for calculating the normalized difference vegetation index (NDVI) and the wide dynamic range vegetation index (WDRVI) have been achieved using the band-pass interference filter set with a FWHM of 50 nm and exposure times between 5.000 μs and 7.000 μs. The system is currently used for monitoring of natural objects and surfaces, such as forests, as well as for geo-risk analysis (landslides). By measuring 3D geometric and multi-spectral information, a reliable monitoring and interpretation of the data set is possible. The paper gives an overview of the development steps, the system, the evaluation and first results.

  10. UAV visual signature suppression via adaptive materials

    NASA Astrophysics Data System (ADS)

    Barrett, Ron; Melkert, Joris

    2005-05-01

    Visual signature suppression (VSS) methods for several classes of aircraft from WWII on are examined and historically summarized. This study shows that for some classes of uninhabited aerial vehicles (UAVs), primary mission threats do not stem from infrared or radar signatures, but from the amount that an aircraft visually stands out against the sky. The paper shows that such visual mismatch can often jeopardize mission success and/or induce the destruction of the entire aircraft. A psycho-physioptical study was conducted to establish the definition and benchmarks of a Visual Cross Section (VCS) for airborne objects. This study was centered on combining the effects of size, shape, color and luminosity or effective illuminance (EI) of a given aircraft to arrive at a VCS. A series of tests was conducted with a 6.6 ft (2 m) UAV which was fitted with optically adaptive electroluminescent sheets at altitudes of up to 1000 ft (300 m). It was shown that with proper tailoring of the color and luminosity, the VCS of the aircraft dropped from more than 4,200 cm2 to less than 1.8 cm2 at 100 m (the observed lower limit of the 20-20 human eye in this study). In layperson's terms, this indicated that the UAV essentially "disappeared". This study concludes with an assessment of the weight and volume impact of such a Visual Suppression System (VSS) on the UAV, showing that VCS levels on this class of UAV can be suppressed to below 1.8 cm2 for aircraft gross weight penalties of only 9.8%.

  11. Research on the attitude of small UAV based on MEMS devices

    NASA Astrophysics Data System (ADS)

    Shi, Xiaojie; Lu, Libin; Jin, Guodong; Tan, Lining

    2017-05-01

    This paper mainly introduces the research principles and implementation method of a small-UAV attitude and heading system based on MEMS devices. The Gauss-Newton method, based on least squares, is used to calibrate the MEMS accelerometer and gyroscope. The accuracy of the attitude is improved by using modified complementary filtering to correct the attitude angle error. The experimental data show that the attitude system designed in this paper meets the attitude accuracy requirements of small UAVs while remaining compact and low cost.
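
    A minimal complementary-filter step of the kind described is sketched below; the gain alpha, sample time and the complementary_pitch helper are illustrative assumptions, not the paper's tuning.

```python
import math

def complementary_pitch(pitch, gyro_rate_y, ax, az, dt=0.01, alpha=0.98):
    """One filter step for the pitch angle (rad)."""
    pitch_gyro = pitch + gyro_rate_y * dt          # integrate the gyro (accurate short-term, drifts)
    pitch_acc = math.atan2(-ax, az)                # tilt from gravity (noisy, but drift-free)
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_acc

p = 0.0
for _ in range(100):                               # level flight with a small gyro bias
    p = complementary_pitch(p, gyro_rate_y=0.002, ax=0.0, az=9.81)
print(p)                                           # stays close to zero despite the bias
```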

  12. Near Real Time Structural Health Monitoring with Multiple Sensors in a Cloud Environment

    NASA Astrophysics Data System (ADS)

    Bock, Y.; Todd, M.; Kuester, F.; Goldberg, D.; Lo, E.; Maher, R.

    2017-12-01

    A repeated near real time 3-D digital surrogate representation of critical engineered structures can be used to provide actionable data on subtle time-varying displacements in support of disaster resiliency. We describe a damage monitoring system of optimally-integrated complementary sensors, including Global Navigation Satellite Systems (GNSS), Micro-Electro-Mechanical Systems (MEMS) accelerometers coupled with the GNSS (seismogeodesy), light multi-rotor Unmanned Aerial Vehicles (UAVs) equipped with high-resolution digital cameras and GNSS/IMU, and ground-based Light Detection and Ranging (LIDAR). The seismogeodetic system provides point measurements of static and dynamic displacements and seismic velocities of the structure. The GNSS ties the UAV and LIDAR imagery to an absolute reference frame with respect to survey stations in the vicinity of the structure to isolate the building response to ground motions. The GNSS/IMU can also estimate the trajectory of the UAV with respect to the absolute reference frame. With these constraints, multiple UAVs and LIDAR images can provide 4-D displacements of thousands of points on the structure. The UAV systematically circumnavigates the target structure, collecting high-resolution image data, while the ground LIDAR scans the structure from different perspectives to create a detailed baseline 3-D reference model. UAV- and LIDAR-based imaging can subsequently be repeated after extreme events, or after long time intervals, to assess before and after conditions. The unique challenge is that disaster environments are often highly dynamic, resulting in rapidly evolving, spatio-temporal data assets with the need for near real time access to the available data and the tools to translate these data into decisions. The seismogeodetic analysis has already been demonstrated in the NASA AIST Managed Cloud Environment (AMCE) designed to manage large NASA Earth Observation data projects on Amazon Web Services (AWS). The Cloud provides distinct advantages in terms of extensive storage and computing resources required for processing UAV and LIDAR imagery. Furthermore, it avoids single points of failure and allows for remote operations during emergencies, when near real time access to structures may be limited.

  13. Automated ortho-rectification of UAV-based hyperspectral data over an agricultural field using frame RGB imagery

    DOE PAGES

    Habib, Ayman; Han, Youkyung; Xiong, Weifeng; ...

    2016-09-24

    Low-cost Unmanned Airborne Vehicles (UAVs) equipped with consumer-grade imaging systems have emerged as a potential remote sensing platform that could satisfy the needs of a wide range of civilian applications. Among these applications, UAV-based agricultural mapping and monitoring have attracted significant attention from both the research and professional communities. The interest in UAV-based remote sensing for agricultural management is motivated by the need to maximize crop yield. Remote sensing-based crop yield prediction and estimation are primarily based on imaging systems with different spectral coverage and resolution (e.g., RGB and hyperspectral imaging systems). Due to the data volume, RGB imaging is based on frame cameras, while hyperspectral sensors are primarily push-broom scanners. To cope with the limited endurance and payload constraints of low-cost UAVs, the agricultural research and professional communities have to rely on consumer-grade and light-weight sensors. However, the geometric fidelity of derived information from push-broom hyperspectral scanners is quite sensitive to the available position and orientation established through a direct geo-referencing unit onboard the imaging platform (i.e., an integrated Global Navigation Satellite System (GNSS) and Inertial Navigation System (INS)). This paper presents an automated framework for the integration of frame RGB images, push-broom hyperspectral scanner data and consumer-grade GNSS/INS navigation data for accurate geometric rectification of the hyperspectral scenes. The approach relies on utilizing the navigation data, together with a modified Speeded-Up Robust Feature (SURF) detector and descriptor, for automating the identification of conjugate features in the RGB and hyperspectral imagery. The SURF modification takes into consideration the available direct geo-referencing information to improve the reliability of the matching procedure in the presence of repetitive texture within a mechanized agricultural field. Identified features are then used to improve the geometric fidelity of the previously ortho-rectified hyperspectral data. Lastly, experimental results from two real datasets show that the geometric rectification of the hyperspectral data was improved by almost one order of magnitude.
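
    The feature-matching idea can be illustrated with freely available tools; the sketch below uses ORB as a stand-in for the paper's modified SURF (SURF itself is patented and often unavailable in stock OpenCV builds), and the file names are placeholders.

```python
import cv2

# Placeholder inputs: an RGB frame and a single hyperspectral band rendered as grey images
rgb = cv2.imread("rgb_frame.png", cv2.IMREAD_GRAYSCALE)
hyp = cv2.imread("hyperspectral_band.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(rgb, None)
kp2, des2 = orb.detectAndCompute(hyp, None)

# Ratio test to suppress ambiguous matches, which matters with repetitive crop-row texture
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(f"{len(good)} candidate conjugate points")
```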

  14. Automated ortho-rectification of UAV-based hyperspectral data over an agricultural field using frame RGB imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Ayman; Han, Youkyung; Xiong, Weifeng

    Low-cost Unmanned Airborne Vehicles (UAVs) equipped with consumer-grade imaging systems have emerged as a potential remote sensing platform that could satisfy the needs of a wide range of civilian applications. Among these applications, UAV-based agricultural mapping and monitoring have attracted significant attention from both the research and professional communities. The interest in UAV-based remote sensing for agricultural management is motivated by the need to maximize crop yield. Remote sensing-based crop yield prediction and estimation are primarily based on imaging systems with different spectral coverage and resolution (e.g., RGB and hyperspectral imaging systems). Due to the data volume, RGB imaging is based on frame cameras, while hyperspectral sensors are primarily push-broom scanners. To cope with the limited endurance and payload constraints of low-cost UAVs, the agricultural research and professional communities have to rely on consumer-grade and light-weight sensors. However, the geometric fidelity of derived information from push-broom hyperspectral scanners is quite sensitive to the available position and orientation established through a direct geo-referencing unit onboard the imaging platform (i.e., an integrated Global Navigation Satellite System (GNSS) and Inertial Navigation System (INS)). This paper presents an automated framework for the integration of frame RGB images, push-broom hyperspectral scanner data and consumer-grade GNSS/INS navigation data for accurate geometric rectification of the hyperspectral scenes. The approach relies on utilizing the navigation data, together with a modified Speeded-Up Robust Feature (SURF) detector and descriptor, for automating the identification of conjugate features in the RGB and hyperspectral imagery. The SURF modification takes into consideration the available direct geo-referencing information to improve the reliability of the matching procedure in the presence of repetitive texture within a mechanized agricultural field. Identified features are then used to improve the geometric fidelity of the previously ortho-rectified hyperspectral data. Lastly, experimental results from two real datasets show that the geometric rectification of the hyperspectral data was improved by almost one order of magnitude.

  15. Comparison of a Fixed-Wing and Multi-Rotor Uav for Environmental Mapping Applications: a Case Study

    NASA Astrophysics Data System (ADS)

    Boon, M. A.; Drijfhout, A. P.; Tesfamichael, S.

    2017-08-01

    The advent and evolution of Unmanned Aerial Vehicles (UAVs) and photogrammetric techniques has provided the possibility for on-demand high-resolution environmental mapping. Orthoimages and three-dimensional products such as Digital Surface Models (DSMs) are derived from the UAV imagery, which are amongst the most important spatial information tools for environmental planning. The two main types of UAVs in the commercial market are fixed-wing and multi-rotor. Both have their advantages and disadvantages, including their suitability for certain applications. Fixed-wing UAVs normally have longer flight endurance capabilities, while multi-rotors can provide stable image capturing and easy vertical take-off and landing. Therefore, the objective of this study is to assess the performance of a fixed-wing versus a multi-rotor UAV for environmental mapping applications by conducting a specific case study. The aerial mapping of the Cors-Air model aircraft field, which includes a wetland ecosystem, was undertaken on the same day with a Skywalker fixed-wing UAV and a Raven X8 multi-rotor UAV equipped with similar sensor specifications (digital RGB camera) under the same weather conditions. We compared the derived datasets by applying the DTMs for basic environmental mapping purposes such as slope and contour mapping, and by utilising the orthoimages for identification of anthropogenic disturbances. The ground spatial resolution obtained was slightly higher for the multi-rotor, probably due to a slower flight speed and more images. The overall precision of the data was noticeably lower for the fixed-wing. In contrast, orthoimages derived from the two systems showed small variations. The multi-rotor imagery provided a better representation of vegetation, although the fixed-wing data were sufficient for the identification of environmental factors such as anthropogenic disturbances. Differences were also observed when utilising the respective DTMs for mapping the wetland slope and contours, including the representation of hydrological features within the wetland. Factors such as cost, maintenance and flight time are in favour of the Skywalker fixed-wing. The multi-rotor, on the other hand, is more favourable in terms of data accuracy, including for precision environmental planning purposes, although the quality of the fixed-wing data is satisfactory for most environmental mapping applications.

  16. Assessment of an Onboard EO Sensor to Enable Detect-and-Sense Capability for UAVs Operating in a Cluttered Environment

    DTIC Science & Technology

    2017-09-01

    via visual sensors onboard the UAV. Both the hardware and software architecture design are discussed at length. Then, a series of tests that were conducted ... and representing the change in time. (1) Horn and Schunck (1981) further simplified this equation by taking the Taylor series

  17. [Small unmanned aerial vehicles for low-altitude remote sensing and its application progress in ecology].

    PubMed

    Sun, Zhong Yu; Chen, Yan Qiao; Yang, Long; Tang, Guang Liang; Yuan, Shao Xiong; Lin, Zhi Wen

    2017-02-01

    Low-altitude unmanned aerial vehicle (UAV) remote sensing systems overcome the deficiencies of spaceborne and airborne remote sensing systems in resolution, revisit period, cloud cover and cost, which provides a novel method for ecological research at the mesoscale. This study introduced the composition of UAV remote sensing systems and reviewed their applications in species, population, community and ecosystem ecology research. Challenges and opportunities of UAV ecology were identified to direct future research. Promising research areas of UAV ecology include the establishment of species morphology and spectral characteristic databases, automatic species identification, the revelation of relationships between spectral indices and plant physiological processes, three-dimensional monitoring of ecosystems, and the integration of remote sensing data from multiple sources and multiple scales. With the development of UAV platforms, data transformation and sensors, UAV remote sensing technology will find wide application in ecological research.

  18. Automation reliability in unmanned aerial vehicle control: a reliance-compliance model of automation dependence in high workload.

    PubMed

    Dixon, Stephen R; Wickens, Christopher D

    2006-01-01

    Two experiments were conducted in which participants navigated a simulated unmanned aerial vehicle (UAV) through a series of mission legs while searching for targets and monitoring system parameters. The goal of the study was to highlight the qualitatively different effects of automation false alarms and misses as they relate to operator compliance and reliance, respectively. Background data suggest that automation false alarms cause reduced compliance, whereas misses cause reduced reliance. In two studies, 32 and 24 participants, including some licensed pilots, performed in-lab UAV simulations that presented the visual world and collected dependent measures. Results indicated that with the low-reliability aids, false alarms correlated with poorer performance in the system failure task, whereas misses correlated with poorer performance in the concurrent tasks. Compliance and reliance do appear to be affected by false alarms and misses, respectively, and are relatively independent of each other. Practical implications are that automated aids must be fairly reliable to provide global benefits and that false alarms and misses have qualitatively different effects on performance.

  19. Runway Detection From Map, Video and Aircraft Navigational Data

    DTIC Science & Technology

    2016-03-01

    Master's thesis by Jose R. Espinosa Gloria, March 2016. Thesis Advisor: Roberto Cristi; Co-Advisor: Oleg ... Title and subtitle: Runway Detection from Map, Video and Aircraft Navigational Data ... Mexican Navy, unmanned aerial vehicles (UAV) have been equipped with daylight and infrared cameras. Processing the video information obtained from these

  20. Positional Quality Assessment of Orthophotos Obtained from Sensors Onboard Multi-Rotor UAV Platforms

    PubMed Central

    Mesas-Carrascosa, Francisco Javier; Rumbao, Inmaculada Clavero; Berrocal, Juan Alberto Barrera; Porras, Alfonso García-Ferrer

    2014-01-01

    In this study we explored the positional quality of orthophotos obtained by an unmanned aerial vehicle (UAV). A multi-rotor UAV was used to obtain images using a vertically mounted digital camera. The flight was processed taking into account the photogrammetry workflow: perform the aerial triangulation, generate a digital surface model, orthorectify individual images and finally obtain a mosaic image or final orthophoto. The UAV orthophotos were assessed with various spatial quality tests used by national mapping agencies (NMAs). Results showed that the orthophotos satisfactorily passed the spatial quality tests and are therefore a useful tool for NMAs in their production flowchart. PMID:25587877
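
    A typical horizontal positional-accuracy check of the kind applied by NMAs can be sketched as an RMSE computation against surveyed check points; the coordinates below are fabricated, and the NSSDA 95% factor of 1.7308 is quoted as the commonly used value rather than taken from this paper.

```python
import numpy as np

# Orthophoto coordinates of check points versus their surveyed (reference) coordinates (m)
ortho_xy = np.array([[100.02, 200.01], [150.97, 249.98], [199.99, 300.05]])
check_xy = np.array([[100.00, 200.00], [151.00, 250.00], [200.00, 300.00]])

d = ortho_xy - check_xy
rmse_r = np.sqrt(np.mean(np.sum(d**2, axis=1)))   # radial (horizontal) RMSE
print("RMSEr:", rmse_r, "NSSDA accuracy at 95%:", 1.7308 * rmse_r)
```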

  1. Positional quality assessment of orthophotos obtained from sensors onboard multi-rotor UAV platforms.

    PubMed

    Mesas-Carrascosa, Francisco Javier; Rumbao, Inmaculada Clavero; Berrocal, Juan Alberto Barrera; Porras, Alfonso García-Ferrer

    2014-11-26

    In this study we explored the positional quality of orthophotos obtained by an unmanned aerial vehicle (UAV). A multi-rotor UAV was used to obtain images using a vertically mounted digital camera. The flight was processed taking into account the photogrammetry workflow: perform the aerial triangulation, generate a digital surface model, orthorectify individual images and finally obtain a mosaic image or final orthophoto. The UAV orthophotos were assessed with various spatial quality tests used by national mapping agencies (NMAs). Results showed that the orthophotos satisfactorily passed the spatial quality tests and are therefore a useful tool for NMAs in their production flowchart.

  2. Robust all-source positioning of UAVs based on belief propagation

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Gao, Wenyun; Wang, Jiabo

    2013-12-01

    For unmanned air vehicles (UAVs) to survive hostile operational environments, it is always preferable to utilize all available wireless positioning sources to fuse a robust position. While belief propagation is a well-established method for all-source data fusion, it is not an easy job to handle all the mathematics therein. In this work, a comprehensive mathematical framework for belief propagation-based all-source positioning of UAVs is developed, taking into account wireless sources including Global Navigation Satellite System (GNSS) space vehicles, peer UAVs, ground control stations, and signals of opportunity. Based on this mathematical framework, a positioning algorithm named Belief propagation-based Opportunistic Positioning of UAVs (BOPU) is proposed, with an unscented particle filter for Bayesian approximation. The robustness of the proposed BOPU is evaluated in a fictitious scenario in which a group of formation-flying UAVs encounters GNSS countermeasures en route. Four different configurations of measurement availability are simulated. The results show that the performance of BOPU varies only slightly with different measurement availability.
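
    As a rough illustration of the all-source idea (using a plain bootstrap particle filter rather than the paper's BOPU with an unscented particle filter), the sketch below estimates a 2-D position from noisy ranges to whatever sources are available; all anchors, noise levels and the true position are fabricated.

```python
import numpy as np

rng = np.random.default_rng(0)
anchors = {"ground_station": np.array([0.0, 0.0]),
           "peer_uav": np.array([400.0, 100.0]),
           "signal_of_opportunity": np.array([150.0, 450.0])}
true_pos = np.array([250.0, 180.0])

particles = rng.uniform(0, 500, size=(2000, 2))       # uniform prior over the operating area
weights = np.ones(len(particles)) / len(particles)

for name, anchor in anchors.items():                  # one update per available source
    z = np.linalg.norm(true_pos - anchor) + rng.normal(0, 5.0)        # noisy range measurement
    pred = np.linalg.norm(particles - anchor, axis=1)
    weights *= np.exp(-0.5 * ((z - pred) / 5.0) ** 2)                  # Gaussian likelihood
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)   # resample
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))

print("estimate:", particles.mean(axis=0), "truth:", true_pos)
```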

  3. State estimation for autonomous flight in cluttered environments

    NASA Astrophysics Data System (ADS)

    Langelaan, Jacob Willem

    Safe, autonomous operation in complex, cluttered environments is a critical challenge facing autonomous mobile systems. The research described in this dissertation was motivated by a particularly difficult example of autonomous mobility: flight of a small Unmanned Aerial Vehicle (UAV) through a forest. In cluttered environments (such as forests or natural and urban canyons) signals from navigation beacons such as GPS may frequently be occluded. Direct measurements of vehicle position are therefore unavailable, and information required for flight control, obstacle avoidance, and navigation must be obtained using only on-board sensors. However, payload limitations of small UAVs restrict both the mass and physical dimensions of sensors that can be carried. This dissertation describes the development and proof-of-concept demonstration of a navigation system that uses only a low-cost inertial measurement unit and a monocular camera. Microelectromechanical inertial measurement units are well suited to small UAV applications and provide measurements of acceleration and angular rate. However, they do not provide information about nearby obstacles (needed for collision avoidance) and their noise and bias characteristics lead to unbounded growth in computed position. A monocular camera can provide bearings to nearby obstacles and landmarks. These bearings can be used both to enable obstacle avoidance and to aid navigation. Presented here is a solution to the problem of estimating vehicle state (position, orientation and velocity) as well as positions of obstacles in the environment using only inertial measurements and bearings to obstacles. This is a highly nonlinear estimation problem, and standard estimation techniques such as the Extended Kalman Filter are prone to divergence in this application. In this dissertation a Sigma Point Kalman Filter is implemented, resulting in an estimator which is able to cope with the significant nonlinearities in the system equations and uncertainty in state estimates while remaining tractable for real-time operation. In addition, the issues of data association and landmark initialization are addressed. Estimator performance is examined through Monte Carlo simulations in both two and three dimensions for scenarios involving UAV flight in cluttered environments. Hardware tests and simulations demonstrate navigation through an obstacle-strewn environment by a small Unmanned Ground Vehicle.
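
    The sigma-point construction at the heart of a Sigma Point (unscented) Kalman Filter can be sketched compactly; the scaling parameters below are common textbook defaults and the three-state example is a toy, not the dissertation's estimator.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-1, kappa=0.0):
    """Generate the 2n+1 sigma points and their mean weights for state (mean, cov)."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)           # matrix square root of the scaled covariance
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wm[0] = lam / (n + lam)
    return np.array(pts), wm

# Each sigma point would be pushed through the nonlinear bearing model h(x) and the
# weighted results recombined into a predicted measurement and covariance, avoiding
# the Jacobians that make the EKF fragile for this bearing-only problem.
pts, wm = sigma_points(np.array([0.0, 0.0, 5.0]), np.eye(3) * 0.5)
print(pts.shape, wm.sum())                            # (7, 3); the mean weights sum to 1
```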

  4. Mission control of multiple unmanned aerial vehicles: a workload analysis.

    PubMed

    Dixon, Stephen R; Wickens, Christopher D; Chang, Dervon

    2005-01-01

    With unmanned aerial vehicles (UAVs), 36 licensed pilots flew both single-UAV and dual-UAV simulated military missions. Pilots were required to navigate each UAV through a series of mission legs in one of the following three conditions: a baseline condition, an auditory autoalert condition, and an autopilot condition. Pilots were responsible for (a) mission completion, (b) target search, and (c) systems monitoring. Results revealed that both the autoalert and the autopilot automation improved overall performance by reducing task interference and alleviating workload. The autoalert system benefited performance both in the automated task and mission completion task, whereas the autopilot system benefited performance in the automated task, the mission completion task, and the target search task. Practical implications for the study include the suggestion that reliable automation can help alleviate task interference and reduce workload, thereby allowing pilots to better handle concurrent tasks during single- and multiple-UAV flight control.

  5. Precise visual navigation using multi-stereo vision and landmark matching

    NASA Astrophysics Data System (ADS)

    Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh

    2007-04-01

    Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques which greatly reduce the long-term drift and also improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene. This helps to increase the pose estimation accuracy as well as reduce failure situations. Secondly, a global landmark matching technique is used to recognize previously visited locations during navigation. Using the matched landmarks, a pose correction technique is used to eliminate the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1~5% localization error) over long-distance navigation both indoors and outdoors. Real-world experiments on a human-worn system show that the location can be estimated to within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.

  6. Hitchhiking Robots: A Collaborative Approach for Efficient Multi-Robot Navigation in Indoor Environments

    PubMed Central

    Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori

    2017-01-01

    Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system which is the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation like path planning, localization, obstacle avoidance, and map update by completely relying on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. The driver robot, in the proposed system performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system is robust to recover from ‘driver-lost’ scenario which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments considering factors like service-time and task priority with different start and goal configurations of the driver and hitchhiker robots. We also discuss the admissible characteristics of the hitchhiker, when hitchhiking should be allowed and when not, through experimental results. PMID:28809803

  7. Hitchhiking Robots: A Collaborative Approach for Efficient Multi-Robot Navigation in Indoor Environments.

    PubMed

    Ravankar, Abhijeet; Ravankar, Ankit A; Kobayashi, Yukinori; Emaru, Takanori

    2017-08-15

    Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system which is the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation like path planning, localization, obstacle avoidance, and map update by completely relying on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. The driver robot, in the proposed system performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system is robust to recover from `driver-lost' scenario which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments considering factors like service-time and task priority with different start and goal configurations of the driver and hitchhiker robots. We also discuss the admissible characteristics of the hitchhiker, when hitchhiking should be allowed and when not, through experimental results.

  8. Multi-Criteria GIS Analyses with the Use of UAVs for the Needs of Spatial Planning

    NASA Astrophysics Data System (ADS)

    Zawieska, D.; Markiewicz, J.; Turek, A.; Bakuła, K.; Kowalczyk, M.; Kurczyński, Z.; Ostrowski, W.; Podlasiak, P.

    2016-06-01

    Utilization of Unmanned Aerial Vehicles (UAVs) in agriculture, forestry, or other environmental contexts has recently become common. However, in the case of spatial planning, the role of UAVs still seems to be underestimated. At present, municipal development departments use UAVs mainly for promotional purposes (films, folders, brochures, etc.). The use of UAVs for spatial management provides results, first of all, in the form of savings in human resources and time; however, it is increasingly also connected with financial savings (given the decreasing cost of UAVs and photogrammetric software). The research presented here relates to the possibilities of using UAVs to update planning documents and, in particular, to update the study of conditions and directions of spatial management and the preparation of local plans for physical management. Based on photographs acquired with a resolution of 3 cm, a point cloud is generated, as well as 3D models and a true orthophotomap. These data allow multi-criteria spatial analyses. Additionally, directions of development and changes in physical management are analysed for the given area.

  9. Accuracy evaluation of 3D lidar data from small UAV

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav

    2015-10-01

    A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
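
    The local surface smoothness measure can be illustrated by fitting a plane to returns from a nominally flat surface and reporting the RMS of the out-of-plane residuals; the synthetic 2 cm noise below is only for illustration.

```python
import numpy as np

def plane_rms(points):
    """points: (N, 3) array of lidar returns from a nominally planar surface."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                                   # direction of least variance = plane normal
    residuals = centered @ normal                     # signed distances to the fitted plane
    return float(np.sqrt(np.mean(residuals**2)))

rng = np.random.default_rng(1)
xy = rng.uniform(0, 5, size=(500, 2))
z = 0.02 * rng.standard_normal(500)                   # 2 cm of simulated sensor noise
print(plane_rms(np.column_stack([xy, z])))            # close to 0.02 m
```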

  10. Armed and Dangerous? UAVs and U.S. Security

    DTIC Science & Technology

    2014-01-01

    MEMS] as inertial navigation units [INUs]. This technology is widely used in commercial products, such as toy helicopters and Wii controllers. The ... aircraft? In conclusion, both the MTCR and Wassenaar Arrangement provide the United States with the flexibility and controls to be able to balance its ... security and nonproliferation goals with respect to armed UAVs. Perhaps more problematic is whether the government interagency can strike a balance

  11. An Augmented Virtuality Display for Improving UAV Usability

    DTIC Science & Technology

    2005-01-01

    cockpit. For a more universally-understood metaphor, we have turned to virtual environments of the type represented in video games. Many of the ... people who have the need to fly UAVs (such as military personnel) have experience with playing video games. They are skilled in navigating virtual ... Another aspect of tailoring the interface to those with video game experience is to use familiar controls. Microsoft has developed a popular and

  12. Capabilities Assessment and Employment Recommendations for Full Motion Video Optical Navigation Exploitation (FMV-ONE)

    DTIC Science & Technology

    2015-06-01

    GEOINT: geospatial intelligence; GFC: ground force commander; GPS: global positioning system; GUI: graphical user interface; HA/DR: humanitarian ... transport stream; UAS: unmanned aerial system (see UAV); UAV: unmanned aerial vehicle (see UAS); VM: virtual machine; VMU: Marine Unmanned Aerial Vehicle ... Unmanned Air Systems (UASs). Current programs promise to dramatically increase the number of FMV feeds in the near future. However, there are too

  13. Get-in-the-Zone (GITZ) Transition Display Format for Changing Camera Views in Multi-UAV Operations

    DTIC Science & Technology

    2008-12-01

    the multi-UAV operator will switch between dynamic and static missions, each potentially involving very different scenario environments and task ... another. Inspired by cinematography techniques that help audiences maintain spatial understanding of a scene across discrete film cuts, use of a

  14. Introducing a Low-Cost Mini-UAV for Thermal- and Multispectral-Imaging

    NASA Astrophysics Data System (ADS)

    Bendig, J.; Bolten, A.; Bareth, G.

    2012-07-01

    The trend to minimize electronic devices also applies to Unmanned Airborne Vehicles (UAVs) as well as to sensor technologies and imaging devices. Consequently, it is not surprising that UAVs are already part of our daily life and the current pace of development will increase civil applications. A well-known and already widespread example is the so-called flying video game based on Parrot's AR.Drone, which is remotely controlled by an iPod, iPhone, or iPad (http://ardrone.parrot.com). The latter can be considered as a low-weight and low-cost Mini-UAV. In this contribution a Mini-UAV is considered to weigh less than 5 kg and to be able to carry 0.2 kg to 1.5 kg of sensor payload. While up to now Mini-UAVs like Parrot's AR.Drone are mainly equipped with RGB cameras for videotaping or imaging, the development of such carrier systems is clearly also moving towards multi-sensor platforms like the ones introduced for larger UAVs (5 to 20 kg) by Jaakkolla et al. (2010) for forestry applications or by Berni et al. (2009) for agricultural applications. The problem when designing a Mini-UAV for multi-sensor imaging is the limitation of payload to 1.5 kg and a total weight of the whole system below 5 kg. Consequently, the Mini-UAV without sensors but including the navigation system and GPS sensors must weigh less than 3.5 kg. A Mini-UAV system with these characteristics is HiSystems' MK-Okto (www.mikrokopter.de). Total weight including battery but without sensors is less than 2.5 kg. The payload of an MK-Okto is approx. 1 kg and its maximum speed is around 30 km/h. The MK-Okto can be operated up to a wind speed of less than 19 km/h, which corresponds to Beaufort scale number 3 for wind speed. In our study, the MK-Okto is equipped with a handheld low-weight NEC F30IS thermal imaging system. The F30IS, which was developed for veterinary applications, covers 8 to 13 μm, weighs only 300 g, and captures the temperature range between -20 °C and 100 °C. Flying at a height of 100 m, the camera's image covers an area of approx. 50 by 40 m. The sensor's resolution is 160 x 120 pixels and the field of view is 28° (H) x 21° (V). According to the producer, absolute accuracy for temperature is ±1 °C and the thermal sensitivity is >0.1 K. Additionally, the MK-Okto is equipped with Tetracam's Mini MCA. The Mini MCA in our study is a four-band multispectral imaging system. Total weight is 700 g and the spectral characteristics can be modified by filters between 400 and 1000 nm. In this study, three bands with a width of 10 nm (green: 550 nm, red: 671 nm, NIR1: 800 nm) and one band of 20 nm width (NIR2: 950 nm) have been used. Even though the MK-Okto is able to carry both sensors at the same time, the imaging systems were used separately for this contribution. First results of a combined thermal and multispectral MK-Okto campaign in 2011 are presented and evaluated for a sugarbeet field experiment examining pathogens and drought stress.
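
    The two vegetation indices mentioned can be computed per pixel from the red (671 nm) and NIR (800 nm) bands as sketched below; the WDRVI weighting coefficient a = 0.2 is a commonly used value and not necessarily the one used in this study.

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)          # small epsilon avoids division by zero

def wdrvi(nir, red, a=0.2):
    return (a * nir - red) / (a * nir + red + 1e-9)  # NDVI with a down-weighted NIR band

red = np.array([[0.08, 0.10], [0.30, 0.05]])          # toy reflectance values
nir = np.array([[0.45, 0.50], [0.32, 0.55]])
print(ndvi(nir, red))
print(wdrvi(nir, red))
```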

  15. IR radiation characteristics and operating range research for a quad-rotor unmanned aircraft vehicle.

    PubMed

    Gong, Mali; Guo, Rui; He, Sifeng; Wang, Wei

    2016-11-01

    The security threats caused by multi-rotor unmanned aircraft vehicles (UAVs) are serious, especially in public places. To detect and control multi-rotor UAVs, knowledge of their IR characteristics is necessary. The IR characteristics of a typical commercial quad-rotor UAV are investigated in this paper through thermal imaging with an IR camera. Combining the 3D geometry and IR images of the UAV, a 3D IR characteristics model is established so that the radiant power from different views can be obtained. An estimate of the operating range at which the UAV can be detected is calculated theoretically using the signal-to-noise ratio as the criterion. Field experiments were carried out with an uncooled IR camera at an ambient temperature of 12°C against a uniform background. For the front view, the operating range is about 150 m, which is close to the simulation result of 170 m.
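
    The paper estimates detection range from a signal-to-noise-ratio criterion. The sketch below illustrates that kind of calculation with a deliberately simplified point-source model (inverse-square spreading plus exponential atmospheric extinction); every numeric value is a placeholder chosen only so the toy model lands near the ~170 m figure quoted above, and none of it reflects the paper's actual radiometric model.

```python
import numpy as np

def detection_range(J_w_sr, nei_w_m2, ext_km, snr_req=5.0, r=np.arange(10, 2000, 1.0)):
    """Estimate the range at which SNR drops below snr_req.

    Simplified point-source model (illustrative only):
      E(R)   = J * exp(-ext * R) / R^2   (irradiance at the camera aperture)
      SNR(R) = E(R) / NEI
    J_w_sr   : source radiant intensity [W/sr] (hypothetical value)
    nei_w_m2 : noise-equivalent irradiance of the camera [W/m^2] (hypothetical value)
    ext_km   : atmospheric extinction coefficient [1/km] (hypothetical value)
    """
    E = J_w_sr * np.exp(-ext_km * r / 1000.0) / r**2
    snr = E / nei_w_m2
    ok = r[snr >= snr_req]
    return ok.max() if ok.size else 0.0

# All numbers below are placeholders, not values from the paper.
print(detection_range(J_w_sr=0.05, nei_w_m2=3.3e-7, ext_km=0.2))   # ~170 m
```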

  16. Optimization of a Turboprop UAV for Maximum Loiter and Specific Power Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Dinc, Ali

    2016-09-01

    In this study, an original code was developed for the optimization of selected parameters of a turboprop engine for an unmanned aerial vehicle (UAV) by employing an elitist genetic algorithm. First, preliminary sizing of a UAV and its turboprop engine was done by the code for a given mission profile. Secondly, single- and multi-objective optimization were performed for selected engine parameters to maximize the loiter duration of the UAV, the specific power of the engine, or both. In single-objective optimization, as the first case, UAV loiter time was improved by 17.5% from baseline within the given boundaries or constraints on compressor pressure ratio and burner exit temperature. In the second case, specific power was enhanced by 12.3% from baseline. In the multi-objective optimization case, where the previous two objectives are considered together, loiter time and specific power were increased by 14.2% and 9.7% from baseline, respectively, for the same constraints.
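
    The optimizer itself is a standard elitist genetic algorithm over bounded engine parameters. The sketch below shows the general mechanics (elitism, uniform crossover, Gaussian mutation, bound clipping) with a toy fitness function standing in for the paper's engine and mission model; the bounds and objective are illustrative assumptions, not the study's actual constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Design variables and bounds (illustrative, not the paper's actual constraints):
# x[0] = compressor pressure ratio, x[1] = burner exit temperature [K]
BOUNDS = np.array([[5.0, 15.0], [1100.0, 1500.0]])

def fitness(x):
    """Toy surrogate for loiter time -- stands in for the engine/mission model."""
    pr, tet = x
    return -(pr - 12.0) ** 2 - ((tet - 1400.0) / 50.0) ** 2

def elitist_ga(pop_size=40, gens=100, elite=2, mut_sigma=0.05):
    dim = BOUNDS.shape[0]
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, dim))
    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in pop])
        pop = pop[np.argsort(fit)[::-1]]                     # sort best-first
        new_pop = [pop[i].copy() for i in range(elite)]      # elitism: keep the best
        while len(new_pop) < pop_size:
            a, b = pop[rng.integers(0, pop_size // 2, size=2)]        # parents from top half
            child = np.where(rng.random(dim) < 0.5, a, b)             # uniform crossover
            child = child + rng.normal(0, mut_sigma * (BOUNDS[:, 1] - BOUNDS[:, 0]))  # mutation
            new_pop.append(np.clip(child, BOUNDS[:, 0], BOUNDS[:, 1]))
        pop = np.array(new_pop)
    best = max(pop, key=fitness)
    return best, fitness(best)

print(elitist_ga())
```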

  17. Rapid melting dynamics of an alpine glacier with repeated UAV photogrammetry

    NASA Astrophysics Data System (ADS)

    Rossini, Micol; Di Mauro, Biagio; Garzonio, Roberto; Baccolo, Giovanni; Cavallini, Giuseppe; Mattavelli, Matteo; De Amicis, Mattia; Colombo, Roberto

    2018-03-01

    Glacial retreat is a major problem in the Alps, especially over the past 40 years. Unmanned aerial vehicles (UAVs) can provide an unparalleled opportunity to track the spatiotemporal variations in rapidly changing glacial morphological features related to glacial dynamics. The objective of this study is to evaluate the potential of commercial UAV platforms to detect the evolution of the surface topography and morphology of an alpine glacier over a short time scale through the repeated acquisition of high-resolution photogrammetric data. Two high-resolution UAV surveys were performed on the ablation region of the Morteratsch Glacier (Swiss Alps) in July and September 2016. First, structure-from-motion (SfM) techniques were applied to create orthophotos and digital surface models (DSMs) of the glacial surface from multi-view UAV acquisitions. The geometric accuracy of DSMs and orthophotos was checked using differential global navigation satellite system (dGNSS) ground measurements, and an accuracy of approximately 17 cm was achieved for both models. High-resolution orthophotos and DSMs made it possible to provide a detailed characterization of rapidly changing glacial environments. Comparing the data from the first and the second campaigns, the evolution of the lower part of the glacier in response to summer ablation was evaluated. Two distinct processes were revealed and accurately quantified: an average lowering of the surface, with a mean ice thinning of 4 m, and an average horizontal displacement of 3 m due to flowing ice. These data were validated through a comparison of different algorithms and approaches, which clearly showed the consistency of the results. The melt rate spatial patterns were then compared to the glacial brightness and roughness maps derived from the September UAV acquisition. The results showed that the DSM differences describing the glacial melt rates were inversely related to the glacial brightness. In contrast, a positive but weaker relationship existed between the DSM differences and glacial roughness. This research demonstrates that UAV photogrammetry allows the qualitative and quantitative appreciation of the complex evolution of retreating glaciers at a centimetre scale spatial resolution. Such performance allows the detection of seasonal changes in the surface topography, which are related to summer ablation and span from the processes affecting the entire glacier to those that are more local.
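
    The reported surface lowering comes from differencing the two co-registered DSMs. A minimal sketch of that step, using toy 3x3 grids in place of the July and September models, is given below; the function and values are illustrative only.

```python
import numpy as np

def glacier_change(dsm_first, dsm_second):
    """Difference two co-registered DSM grids (same extent and cell size)
    and summarise the surface elevation change."""
    diff = dsm_second - dsm_first              # negative values = surface lowering
    valid = ~np.isnan(diff)
    mean_change = np.nanmean(diff)
    lowered_fraction = np.mean(diff[valid] < 0)
    return diff, mean_change, lowered_fraction

# Toy 3x3 grids standing in for the July and September DSMs (elevations in metres)
july = np.array([[2410.0, 2411.5, 2412.0],
                 [2408.2, 2409.0, 2410.1],
                 [2406.5, 2407.3, 2408.8]])
september = july - 4.0   # uniform 4 m thinning, mirroring the mean reported above
_, mean_dh, frac = glacier_change(july, september)
print(f"mean elevation change {mean_dh:+.1f} m; {frac:.0%} of valid cells lowered")
```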

  18. Automated UAV-based video exploitation using service oriented architecture framework

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.

  19. Stereo Correspondence Using Moment Invariants

    NASA Astrophysics Data System (ADS)

    Premaratne, Prashan; Safaei, Farzad

    Autonomous navigation is seen as a vital tool in harnessing the enormous potential of Unmanned Aerial Vehicles (UAVs) and small robotic vehicles for both military and civilian use. Even though laser-based scanning solutions for Simultaneous Localization And Mapping (SLAM) are considered the most reliable for depth estimation, they are not feasible for use in UAVs and small land-based vehicles due to their physical size and weight. Stereovision is considered the best approach for any autonomous navigation solution, as stereo rigs are lightweight and inexpensive. However, stereoscopy, which estimates depth information through pairs of stereo images, can still be computationally expensive and unreliable. This is mainly because some of the algorithms used in successful stereovision solutions have computational requirements that cannot be met by small robotic vehicles. In our research, we implement a feature-based stereovision solution using moment invariants as a metric to find corresponding regions in image pairs, which reduces the computational complexity and improves the accuracy of the disparity measures; this is significant for use in UAVs and in small robotic vehicles.
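
    As a rough illustration of the matching metric, the sketch below compares regions by the distance between their Hu moment invariants (a common choice of moment invariants, used here as an assumption since the abstract does not name a specific set).

```python
import cv2
import numpy as np

def hu_signature(patch):
    """Seven Hu moment invariants of a grayscale patch, log-scaled for numerical stability."""
    hu = cv2.HuMoments(cv2.moments(patch)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def best_match(reference_patch, candidate_patches):
    """Index of the candidate region whose Hu signature is closest (L2) to the reference."""
    ref_sig = hu_signature(reference_patch)
    dists = [np.linalg.norm(ref_sig - hu_signature(p)) for p in candidate_patches]
    return int(np.argmin(dists))

# Synthetic example: a square region, a translated copy of it, and an elongated rectangle.
ref = np.zeros((64, 64), np.float32); ref[20:36, 24:40] = 1.0        # 16 x 16 square
shifted = np.zeros_like(ref); shifted[30:46, 8:24] = 1.0             # same square, translated
stretched = np.zeros_like(ref); stretched[28:36, 8:48] = 1.0         # 8 x 40 rectangle
print(best_match(ref, [stretched, shifted]))   # -> 1: Hu moments are translation invariant
```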

  20. Improving Operational Effectiveness of Tactical Long Endurance Unmanned Aerial Systems (TALEUAS) by Utilizing Solar Power

    DTIC Science & Technology

    2014-06-01

    ...discretized map, and use the map to optimally solve the navigation task. The optimal navigation solution utilizes the well-known travelling salesman problem...

  1. Development of a Micro-UAV Hyperspectral Imaging Platform for Assessing Hydrogeological Hazards

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Alabsi, M.

    2015-12-01

    Worsening global climate change has significantly affected the proportion of water supplied to agriculture. Therefore, one of the 21st Century Grand Challenges faced by the global population is securing water for food. However, soil-water behavior in an agricultural environment is complex; among other factors, one key property we recognize is water repellence, or hydrophobicity, which affects many hydrogeological and hazardous conditions such as excessive water infiltration, runoff, and soil erosion. Under a US-Israel research program funded by USDA and BARD in Israel, we have proposed the development of a novel micro-unmanned aerial vehicle (micro-UAV, or drone) based hyperspectral imaging platform for identifying and assessing soil repellence at low altitudes with enhanced flexibility, much reduced cost, and, ultimately, ease of use. This aerial imaging system consists of a generic micro-UAV, a hyperspectral sensor aided by GPS/IMU, on-board computing units, and a ground station. The target benefits of this system include: (1) programmable waypoint navigation and robotic control for multi-view imaging; (2) the ability to perform two- or three-dimensional scene reconstruction for complex terrains; and (3) fusion with other sensors to realize real-time diagnosis (e.g., humidity and solar irradiation, which may affect soil-water sensing). In this talk we present our methodology and processes for the integration of hyperspectral imaging, on-board sensing and computing, and hyperspectral data modeling, together with preliminary field demonstration and verification of the developed prototype.

  2. Integrated long-range UAV/UGV collaborative target tracking

    NASA Astrophysics Data System (ADS)

    Moseley, Mark B.; Grocholsky, Benjamin P.; Cheung, Carol; Singh, Sanjiv

    2009-05-01

    Coordinated operations between unmanned air and ground assets allow leveraging of multi-domain sensing and increase opportunities for improving line of sight communications. While numerous military missions would benefit from coordinated UAV-UGV operations, foundational capabilities that integrate stove-piped tactical systems and share available sensor data are required and not yet available. iRobot, AeroVironment, and Carnegie Mellon University are working together, partially SBIR-funded through ARDEC's small unit network lethality initiative, to develop collaborative capabilities for surveillance, targeting, and improved communications based on PackBot UGV and Raven UAV platforms. We integrate newly available technologies into computational, vision, and communications payloads and develop sensing algorithms to support vision-based target tracking. We first simulated and then applied onto real tactical platforms an implementation of Decentralized Data Fusion, a novel technique for fusing track estimates from PackBot and Raven platforms for a moving target in an open environment. In addition, system integration with AeroVironment's Digital Data Link onto both air and ground platforms has extended our capabilities in communications range to operate the PackBot as well as in increased video and data throughput. The system is brought together through a unified Operator Control Unit (OCU) for the PackBot and Raven that provides simultaneous waypoint navigation and traditional teleoperation. We also present several recent capability accomplishments toward PackBot-Raven coordinated operations, including single OCU display design and operation, early target track results, and Digital Data Link integration efforts, as well as our near-term capability goals.
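
    One common building block of decentralized data fusion is fusing two track estimates whose cross-correlation is unknown, for example by covariance intersection. The sketch below shows that step for hypothetical PackBot and Raven position tracks; it is our illustration, not the project's actual DDF implementation.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w=None):
    """Fuse two track estimates with unknown cross-correlation.

    x1, x2 : state estimates (e.g. 2D target position)
    P1, P2 : their covariance matrices
    w      : CI weight in [0, 1]; if None, chosen to minimise the trace of the fused covariance
    """
    if w is None:
        ws = np.linspace(0.0, 1.0, 101)
        w = min(ws, key=lambda a: np.trace(np.linalg.inv(
            a * np.linalg.inv(P1) + (1.0 - a) * np.linalg.inv(P2))))
    P_inv = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
    P = np.linalg.inv(P_inv)
    x = P @ (w * np.linalg.inv(P1) @ x1 + (1.0 - w) * np.linalg.inv(P2) @ x2)
    return x, P

# Hypothetical UGV track (accurate east, poor north) and UAV track (poor east, accurate north)
x_ugv, P_ugv = np.array([10.2, 5.1]), np.diag([0.5, 4.0])
x_uav, P_uav = np.array([10.8, 4.7]), np.diag([4.0, 0.5])
x_fused, P_fused = covariance_intersection(x_ugv, P_ugv, x_uav, P_uav)
print(x_fused, np.diag(P_fused))
```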

  3. UAV Cooperation Architectures for Persistent Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, R S; Kent, C A; Jones, E D

    2003-03-20

    With the number of small, inexpensive Unmanned Air Vehicles (UAVs) increasing, it is feasible to build multi-UAV sensing networks. In particular, by using UAVs in conjunction with unattended ground sensors, a degree of persistent sensing can be achieved. With proper UAV cooperation algorithms, sensing is maintained even though exceptional events, e.g., the loss of a UAV, have occurred. In this paper a cooperation technique that allows multiple UAVs to perform coordinated, persistent sensing with unattended ground sensors over a wide area is described. The technique automatically adapts the UAV paths so that, on average, the amount of time that any sensor has to wait for a UAV revisit is minimized. We also describe the Simulation, Tactical Operations and Mission Planning (STOMP) software architecture. This architecture is designed to help simulate and operate distributed sensor networks where multiple UAVs are used to collect data.

  4. Efficient Multi-Concept Visual Classifier Adaptation in Changing Environments

    DTIC Science & Technology

    2016-09-01

    yet to be discussed in existing supervised multi-concept visual perception systems used in robotics applications. Annotation of images is...

  5. Real-time UAV trajectory generation using feature points matching between video image sequences

    NASA Astrophysics Data System (ADS)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectories using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
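
    A minimal version of the matching-and-pose step can be put together with OpenCV. The sketch below substitutes ORB features and OpenCV's built-in RANSAC for the paper's SURF and Preemptive RANSAC (SURF is patent-encumbered in stock OpenCV builds), but follows the same outline: match features, reject mismatches, and recover the relative rotation and translation from the essential matrix. Camera intrinsics and file names are placeholders.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate the relative rotation/translation between two video frames.

    Stand-in for the paper's pipeline: ORB instead of SURF, OpenCV RANSAC
    instead of Preemptive RANSAC, but the same overall steps.
    """
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Essential-matrix estimation with RANSAC separates inliers from mismatches
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t   # translation is up to scale for a monocular camera

# Usage sketch (paths and intrinsics are placeholders):
# K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
# R, t = relative_pose(cv2.imread("frame_000.png", 0), cv2.imread("frame_001.png", 0), K)
```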

  16. Configuration and Specifications of an Unmanned Aerial Vehicle for Precision Agriculture

    NASA Astrophysics Data System (ADS)

    Erena, M.; Montesinos, S.; Portillo, D.; Alvarez, J.; Marin, C.; Fernandez, L.; Henarejos, J. M.; Ruiz, L. A.

    2016-06-01

    Unmanned Aerial Vehicles (UAVs) with multispectral sensors are increasingly attractive in geosciences for data capture and map updating at high spatial and temporal resolutions. These autonomously-flying systems can be equipped with different sensors, such as a six-band multispectral camera (Tetracam mini-MCA-6), GPS Ublox M8N, and MEMS gyroscopes, and miniaturized sensor systems for navigation, positioning, and mapping purposes. These systems can be used for data collection in precision viticulture. In this study, the efficiency of a light UAV system for data collection, processing, and map updating in small areas is evaluated, generating correlations between classification maps derived from remote sensing and production maps. Based on the comparison of the indices derived from UAVs incorporating infrared sensors with those obtained by satellites (Sentinel 2A and Landsat 8), UAVs show promise for the characterization of vineyard plots with high spatial variability, despite the low vegetative coverage of these crops. Consequently, a procedure for zoning map production based on UAV/UV images could provide important information for farmers.

  7. From large-eddy simulation to multi-UAVs sampling of shallow cumulus clouds

    NASA Astrophysics Data System (ADS)

    Lamraoui, Fayçal; Roberts, Greg; Burnet, Frédéric

    2016-04-01

    In-situ sampling of clouds that can provide simultaneous measurements at spatio-temporal resolutions sufficient to capture 3D small-scale physical processes continues to present challenges. This project (SKYSCANNER) aims to bring together cloud sampling strategies using a swarm of unmanned aerial vehicles (UAVs) based on large-eddy simulation (LES). Multi-UAV field campaigns with a sampling strategy tailored to individual clouds and cloud fields will significantly improve the understanding of unresolved cloud physical processes. An extensive set of LES experiments for case studies from the ARM-SGP site has been performed using the MesoNH model at high resolutions, down to 10 m. The simulations carried out led to the establishment of a macroscopic model that quantifies the interrelationship between the micro- and macrophysical properties of shallow convective clouds. Both the geometry and the evolution of individual clouds are critical to multi-UAV cloud sampling and path planning. The preliminary findings of the current project reveal several linear relationships that associate many cloud geometric parameters with cloud-related meteorological variables. In addition, the horizontal wind speed has a proportional impact on cloud number concentration as well as on triggering and prolonging the occurrence of cumulus clouds. In the framework of the joint collaboration involving a multidisciplinary team (including institutes specializing in aviation, robotics and atmospheric science), this model will be a reference point for multi-UAV sampling strategies and path planning.

  8. Educational Process Navigator as Means of Creation of Individual Educational Path of a Student

    ERIC Educational Resources Information Center

    Khuziakhmetov, Anvar N.; Sytina, Nadezhda S.

    2016-01-01

    The rationale for the problem stated in the article lies in the search for new alternative models of individual educational paths for students in the continuous multi-level education system, based on navigators of the educational process that serve as a visual matrix of individual educational space. The purpose of the article is to develop the…

  9. A proposed UAV for indoor patient care.

    PubMed

    Todd, Catherine; Watfa, Mohamed; El Mouden, Yassine; Sahir, Sana; Ali, Afrah; Niavarani, Ali; Lutfi, Aoun; Copiaco, Abigail; Agarwal, Vaibhavi; Afsari, Kiyan; Johnathon, Chris; Okafor, Onyeka; Ayad, Marina

    2015-09-10

    Indoor flight, obstacle avoidance and client-server communication of an Unmanned Aerial Vehicle (UAV) raise several unique research challenges. This paper examines current methods and associated technologies adopted in the literature toward autonomous UAV flight, for consideration in a proposed system for indoor healthcare administration with a quadcopter. We introduce Healthbuddy, a unique research initiative toward overcoming challenges associated with indoor navigation, collision detection and avoidance, stability, wireless drone-server communications and automated decision support for patient care in a GPS-denied environment. To address the identified research deficits, a drone-based solution is presented. The solution is preliminary, as we continue to develop and refine the suggested algorithms and hardware system to achieve the research objectives.

  10. Modeling and simulation of dynamic ant colony's labor division for task allocation of UAV swarm

    NASA Astrophysics Data System (ADS)

    Wu, Husheng; Li, Hao; Xiao, Renbin; Liu, Jie

    2018-02-01

    The problem of unmanned aerial vehicle (UAV) task allocation not only has intrinsic complexity, being highly nonlinear, dynamic, highly adversarial and multi-modal, but is also highly applicable in various multi-agent systems, which has made it increasingly attractive in recent years. In this paper, based on the classic fixed response threshold model (FRTM), under the idea of "problem centered + evolutionary solution" and in a bottom-up way, a new dynamic environmental stimulus, response threshold and transition probability are designed, and a dynamic ant colony's labor division (DACLD) model is proposed. DACLD allows a swarm of agents with a relatively low level of intelligence to perform complex tasks, and has the characteristics of a distributed framework, multiple tasks with execution order, multi-state behavior, adaptive response thresholds and multi-individual response. With the proposed model, numerical simulations are performed to illustrate the effectiveness of the distributed task allocation scheme in two situations of UAV swarm combat (dynamic task allocation with a certain number of enemy targets, and task re-allocation due to unexpected threats). Results show that our model can obtain both the heterogeneous UAVs' real-time positions and states at the same time, and has a high degree of self-organization, flexibility and real-time response to dynamic environments.
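
    For orientation, the classic fixed response threshold rule that DACLD builds on assigns each agent a probability of engaging a task that grows with the task's stimulus and shrinks with the agent's threshold. The sketch below implements only that baseline rule; the dynamic stimuli, thresholds and transition probabilities of the DACLD model itself are not reproduced, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def engage_probability(stimulus, threshold, n=2):
    """Classic fixed-response-threshold rule: P = s^n / (s^n + theta^n)."""
    return stimulus**n / (stimulus**n + threshold**n)

def allocate(stimuli, thresholds):
    """One decision round: each UAV independently decides which tasks to respond to.

    stimuli    : task stimulus intensities, shape (n_tasks,)
    thresholds : per-UAV, per-task response thresholds, shape (n_uavs, n_tasks)
    returns a boolean engagement matrix, shape (n_uavs, n_tasks)
    """
    p = engage_probability(stimuli[None, :], thresholds)
    return rng.random(p.shape) < p

stimuli = np.array([0.9, 0.3, 0.6])                 # e.g. three target areas
thresholds = rng.uniform(0.2, 0.8, size=(5, 3))     # five heterogeneous UAVs
print(allocate(stimuli, thresholds).astype(int))
```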

  11. Vision-based sensing for autonomous in-flight refueling

    NASA Astrophysics Data System (ADS)

    Scott, D.; Toal, M.; Dale, J.

    2007-04-01

    A significant capability of unmanned airborne vehicles (UAVs) is that they can operate tirelessly and at maximum efficiency in comparison to their human pilot counterparts. However, a major limiting factor preventing ultra-long endurance missions is that they must land to refuel. Development effort has been directed at allowing UAVs to refuel automatically in the air using current refueling systems and procedures. The 'hose & drogue' refueling system was targeted as it is considered the more difficult case. Recent flight trials resulted in the first-ever fully autonomous airborne refueling operation. Development has gone into precision GPS-based navigation sensors to maneuver the aircraft into the station-keeping position and onward to dock with the refueling drogue. However, in the terminal phases of docking, the GPS is operating at its performance limit, and disturbance factors on the flexible hose and basket are not predictable using an open-loop model. Hence there is significant uncertainty in the position of the refueling drogue relative to the aircraft, which in practical operation is insufficient to achieve a successful and safe docking. A solution is to augment the GPS-based system with a vision-based sensor component through the terminal phase to visually acquire and track the drogue in 3D space. The higher bandwidth and resolution of camera sensors give significantly better estimates of the state of the drogue position. Disturbances in the actual drogue position caused by subtle aircraft maneuvers and wind gusts can be visually tracked and compensated for, providing an accurate estimate. This paper discusses the issues involved in visually detecting a refueling drogue, selecting an optimum camera viewpoint, and acquiring and tracking the drogue throughout a widely varying operating range and conditions.

  12. Feasibility Study for an Autonomous UAV-Magnetometer System -- Final Report on SERDP SEED 1509:2206

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roelof Versteeg; Mark McKay; Matt Anderson

    2007-09-01

    Large areas across the United States are potentially contaminated with UXO, with some ranges encompassing tens to hundreds of thousands of acres. Technologies are needed which will allow for cost-effective wide-area scanning with 1) near 100% coverage and 2) near 100% detection of subsurface ordnance or features indicative of subsurface ordnance. The current approach to wide-area scanning is a multi-level one, in which medium-altitude fixed-wing optical imaging is used for an initial site assessment. This assessment is followed by low-altitude manned helicopter based magnetometry, followed by surface investigations using either towed geophysical sensor arrays or man-portable sensors. In order to be effective for small UXO detection, the sensing altitude for magnetic site investigations needs to be on the order of 1-3 meters. These altitude requirements mean that manned helicopter surveys will generally only be feasible in large, open and relatively flat terrains. While such surveys are effective in mapping large areas relatively fast, there are substantial mobilization/demobilization, staffing and equipment costs associated with them (resulting in costs of approximately $100-$150/acre). Surface towed arrays provide high-resolution maps but have other limitations, e.g. in their ability to navigate rough terrain effectively. Thus, other systems are needed allowing for effective data collection. A UAV (Unmanned Aerial Vehicle) magnetometer platform is an obvious alternative. The motivation behind such a system is that it would be safer for the operators, cheaper in initial and O&M costs, and more effective in terms of site characterization. However, while UAV data acquisition from fixed-wing platforms at large (> 200 feet) stand-off distances is relatively straightforward, a host of challenges exists for low stand-off distance (~ 6 feet) UAV geophysical data acquisition. The objective of SERDP SEED 1509:2006 was to identify the primary challenges associated with a low stand-off distance autonomous UAV magnetometer platform and to investigate whether these challenges can be resolved such that a successful UAV magnetometer platform can be constructed. The primary challenges which were identified and investigated include: 1. The feasibility of assembling a payload package which integrates magnetometers, accurate positioning systems (DGPS, height above ground measurement), obstacle avoidance systems, power infrastructure, communications and data storage as well as auxiliary flight controls. 2. The availability of commercial UAV platforms with autonomous flight capability which can accommodate this payload package. 3. The feasibility of integrating obstacle avoidance controls in the UAV platform control. 4. The feasibility of collecting high quality magnetic data in the vicinity of a UAV.

  13. Use of multi-temporal UAV-derived imagery for estimating individual tree growth in Pinus pinea stands

    Treesearch

    Juan Guerra-Hernández; Eduardo González-Ferreiro; Vicente Monleon; Sonia Faias; Margarida Tomé; Ramón Díaz-Varela

    2017-01-01

    High spatial resolution imagery provided by unmanned aerial vehicles (UAVs) can yield accurate and efficient estimation of tree dimensions and canopy structural variables at the local scale. We flew a low-cost, lightweight UAV over an experimental Pinus pinea L. plantation (290 trees distributed over 16 ha with different fertirrigation treatments)...

  14. The use of unmanned aerial vehicle imagery in intertidal monitoring

    NASA Astrophysics Data System (ADS)

    Konar, Brenda; Iken, Katrin

    2018-01-01

    Intertidal monitoring projects are often limited in their practicality because traditional methods such as visual surveys or removal of biota are often limited in the spatial extent for which data can be collected. Here, we used imagery from a small unmanned aerial vehicle (sUAV) to test their potential use in rocky intertidal and intertidal seagrass surveys in the northern Gulf of Alaska. Images captured by the sUAV in the high, mid and low intertidal strata on a rocky beach and within a seagrass bed were compared to data derived concurrently from observer visual surveys and to images taken by observers on the ground. Observer visual data always resulted in the highest taxon richness, but when observer data were aggregated to the lower taxonomic resolution obtained by the sUAV images, overall community composition was mostly similar between the two methods. Ground camera images and sUAV images yielded mostly comparable community composition despite the typically higher taxonomic resolution obtained by the ground camera. We conclude that monitoring goals or research questions that can be answered on a relatively coarse taxonomic level can benefit from an sUAV-based approach because it allows much larger spatial coverage within the time constraints of a low tide interval than is possible by observers on the ground. We demonstrated this large-scale applicability by using sUAV images to develop maps that show the distribution patterns and patchiness of seagrass.

  15. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes

    PubMed Central

    Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-yung

    2016-01-01

    Wilderness search and rescue entails performing a wide-range of work in complex environments and large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency. PMID:27792156
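
    A minimal form of camera-based target positioning is to intersect the viewing ray of a detected pixel with a flat ground plane, given the camera pose from the UAV's navigation solution. The sketch below shows that flat-terrain simplification; it is not the paper's full pipeline, and the intrinsics, mounting rotation and pose values are assumptions.

```python
import numpy as np

def pixel_to_ground(u, v, K, R_cam_to_ned, cam_pos_ned, ground_z=0.0):
    """Project an image pixel onto a flat ground plane.

    Simplified flat-terrain geolocation:
    K            : 3x3 camera intrinsic matrix
    R_cam_to_ned : rotation from camera frame to local NED frame (UAV attitude + gimbal)
    cam_pos_ned  : camera position in NED, z positive down (so z = -altitude)
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])    # viewing ray in the camera frame
    ray_ned = R_cam_to_ned @ ray_cam
    s = (ground_z - cam_pos_ned[2]) / ray_ned[2]          # scale to reach the ground plane
    return cam_pos_ned + s * ray_ned

# Nadir-looking example: camera at 100 m altitude, optical axis pointing straight down
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)   # hypothetical mounting rotation
print(pixel_to_ground(700, 300, K, R, np.array([0.0, 0.0, -100.0])))   # -> [6, 6, 0] (m, NED)
```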

  16. Medium Altitude Endurance Unmanned Air Vehicle

    NASA Astrophysics Data System (ADS)

    Ernst, Larry L.

    1994-10-01

    The medium altitude endurance unmanned air vehicle (MAE UAV) program (formerly the tactical endurance TE UAV) is a new effort initiated by the Department of Defense to develop a ground launched UAV that can fly out 500 miles, remain on station for 24 hours, and return. It will transmit high resolution optical, infrared, and synthetic aperture radar (SAR) images of well-defended target areas through satellite links. It will provide near-real-time, releasable, low cost/low risk surveillance, targeting and damage assessment complementary to that of satellites and manned aircraft. The paper describes specific objectives of the MAE UAV program (deliverables and schedule) and the program's unique position as one of several programs to streamline the acquisition process under the cognizance of the newly established Airborne Reconnaissance Office. I discuss the system requirements and operational concept and describe the technical capabilities and characteristics of the major subsystems (airframe, propulsion, navigation, sensors, communication links, ground station, etc.) in some detail.

  17. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes.

    PubMed

    Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-Yung

    2016-10-25

    Wilderness search and rescue entails performing a wide-range of work in complex environments and large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency.

  18. On-board computational efficiency in real time UAV embedded terrain reconstruction

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Agadakos, Ioannis; Athanasiou, Vasilis; Papaefstathiou, Ioannis; Mertikas, Stylianos; Kyritsis, Sarantis; Tripolitsiotis, Achilles; Zervos, Panagiotis

    2014-05-01

    In the last few years, there has been a surge of applications for object recognition, interpretation and mapping using unmanned aerial vehicles (UAVs). Specifications for constructing these UAVs are highly diverse, with contradictory characteristics including cost-efficiency, carrying weight, flight time, mapping precision, real-time processing capabilities, etc. In this work, a hexacopter UAV is employed for near real-time terrain mapping. The main challenge addressed is to retain a low-cost flying platform with real-time processing capabilities. The UAV weight limitation, which affects the overall flight time, makes the selection of the on-board processing components particularly critical. On the other hand, surface reconstruction, as a computationally demanding task, calls for a highly capable processing unit on board. To merge these two contradicting aspects along with customized development, a System on a Chip (SoC) integrated circuit is proposed as a low-power, low-cost processor, which natively supports camera sensors and positioning and navigation systems. Modern SoCs, such as the OMAP3530 or Zynq, are classified as heterogeneous devices and provide a versatile platform, allowing access to both general-purpose processors, such as the ARM11, as well as specialized processors, such as a digital signal processor and a field-programmable gate array. A UAV equipped with the proposed embedded processors allows on-board terrain reconstruction using stereo vision in near real time. Furthermore, depending on the frame rate required, additional image processing may take place concurrently, such as image rectification and object detection. Lastly, the on-board positioning and navigation (e.g., GNSS) chip may further improve the quality of the generated map. The resulting terrain maps are compared to ground-truth geodetic measurements in order to assess the accuracy limitations of the overall process. It is shown that with our proposed novel system there is much potential for computational efficiency on board and within optimized time constraints.
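
    The core of stereo-vision terrain reconstruction is computing a disparity map and converting it to depth. The sketch below shows a plain CPU version of that step with OpenCV's block matcher; the real system offloads this kind of work to the SoC's specialized processors, and the focal length, baseline and file names here are placeholders.

```python
import cv2
import numpy as np

def disparity_map(left_gray, right_gray, num_disp=64, block=15):
    """Block-matching disparity, the core step of stereo terrain reconstruction."""
    bm = cv2.StereoBM_create(numDisparities=num_disp, blockSize=block)
    disp = bm.compute(left_gray, right_gray).astype(np.float32) / 16.0  # fixed-point -> pixels
    disp[disp <= 0] = np.nan                       # mask invalid matches
    return disp

def depth_from_disparity(disp_px, focal_px, baseline_m):
    """Convert disparity (pixels) to metric depth for a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disp_px

# Usage sketch (file names are placeholders for a rectified on-board stereo pair):
# left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
# right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
# depth = depth_from_disparity(disparity_map(left, right), focal_px=900.0, baseline_m=0.20)
```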

  19. Real-time single-frequency GPS/MEMS-IMU attitude determination of lightweight UAVs.

    PubMed

    Eling, Christian; Klingbeil, Lasse; Kuhlmann, Heiner

    2015-10-16

    In this paper, a newly-developed direct georeferencing system for the guidance, navigation and control of lightweight unmanned aerial vehicles (UAVs), having a weight limit of 5 kg and a size limit of 1.5 m, and for UAV-based surveying and remote sensing applications is presented. The system is intended to provide highly accurate positions and attitudes (better than 5 cm and 0.5°) in real time, using lightweight components. The main focus of this paper is on the attitude determination with the system. This attitude determination is based on an onboard single-frequency GPS baseline, MEMS (micro-electro-mechanical systems) inertial sensor readings, magnetic field observations and a 3D position measurement. All of this information is integrated in a sixteen-state error space Kalman filter. Special attention in the algorithm development is paid to the carrier phase ambiguity resolution of the single-frequency GPS baseline observations. We aim at a reliable and instantaneous ambiguity resolution, since the system is used in urban areas, where frequent losses of the GPS signal lock occur and the GPS measurement conditions are challenging. Flight tests and a comparison to a navigation-grade inertial navigation system illustrate the performance of the developed system in dynamic situations. Evaluations show that the accuracies of the system are 0.05° for the roll and the pitch angle and 0.2° for the yaw angle. The ambiguities of the single-frequency GPS baseline can be resolved instantaneously in more than 90% of the cases.

  20. The application of micro UAV in construction project

    NASA Astrophysics Data System (ADS)

    Kaamin, Masiri; Razali, Siti Nooraiin Mohd; Ahmad, Nor Farah Atiqah; Bukari, Saifullizan Mohd; Ngadiman, Norhayati; Kadir, Aslila Abd; Hamid, Nor Baizura

    2017-10-01

    Every outstanding construction project relies on effective construction management, which allows the project to be implemented according to plan. Every construction project must record the progress of works, which is usually done by the site engineer; documenting the progress of works is one of the requirements of construction management, and a progress report necessarily includes visual images as evidence. The conventional method of photographing a construction site is to use a common digital camera, which has several drawbacks compared to a Micro Unmanned Aerial Vehicle (UAV). Moreover, site engineers face ongoing issues with the limited ability to monitor high reach points and to view the construction site as a whole. The purpose of this paper is to provide a concise review of Micro UAV technology for monitoring progress on a construction site through a visualization approach. The aim of this study is to replace the conventional method of photographing the construction site with a Micro UAV, which can portray the whole view of the building, especially at high reach points, produce better images, videos and 3D models, and facilitate the site engineer's monitoring of works in progress. The Micro UAV was flown around the building construction according to Ground Control Points (GCPs) to capture images and record videos. The images taken by the Micro UAV were processed to generate a 3D model and were analysed to visualize the building construction, monitor construction progress, and provide immediate, reliable data for project estimation. It has been shown that, by using a Micro UAV, better images and videos give a better overview of the construction site and allow defects on high reach building structures to be monitored. In addition, with a Micro UAV the construction site progress is tracked more efficiently and kept on schedule.

  1. INSIGHT: RFID and Bluetooth enabled automated space for the blind and visually impaired.

    PubMed

    Ganz, Aura; Gandhi, Siddhesh Rajan; Wilson, Carole; Mullett, Gary

    2010-01-01

    In this paper we introduce INSIGHT, an indoor location tracking and navigation system that helps the blind and visually impaired to easily navigate to their chosen destination in a public building. INSIGHT makes use of RFID and Bluetooth technology deployed within the building to locate and track the users. A PDA-based user device interacts with the INSIGHT server and provides the user with navigation instructions in audio form. The proposed system provides multi-resolution localization of the users, facilitating the provision of accurate navigation instructions when the user is in the vicinity of the RFID tags, and accommodates a PANIC button which provides navigation instructions when the user is anywhere in the building. Moreover, the system will continuously monitor the zone in which the user walks. This will enable the system to identify if the user is located in the wrong zone of the building, which may not lead to the desired destination.

  2. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments †

    PubMed Central

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G.

    2017-01-01

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. PMID:29099790

  3. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments.

    PubMed

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G

    2017-11-03

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.

  4. a Fast Approach for Stitching of Aerial Images

    NASA Astrophysics Data System (ADS)

    Moussa, A.; El-Sheimy, N.

    2016-06-01

    The last few years have witnessed an increasing volume of aerial image data because of the extensive improvements of Unmanned Aerial Vehicles (UAVs). These newly developed UAVs have led to a wide variety of applications. A fast assessment of the achieved coverage and overlap of the images acquired during a UAV flight mission is of great help in saving the time and cost of the subsequent steps, and a fast automatic stitching of the acquired images supports this visual assessment during the flight mission. This paper proposes an automatic image stitching approach that creates a single overview stitched image from the images acquired during a UAV flight mission, along with a coverage image that represents the count of overlaps between the acquired images. The main challenge of such a task is the huge number of images typically involved in such scenarios: a short flight mission with an image acquisition frequency of one second can capture hundreds to thousands of images. The main focus of the proposed approach is to reduce the processing time of the image stitching procedure by exploiting the initial knowledge about the image positions provided by the navigation sensors. The proposed approach also avoids solving for all the transformation parameters of all the photos together, to save the long computation time expected if all the parameters were considered simultaneously. After extracting the points of interest of all the involved images using the Scale-Invariant Feature Transform (SIFT) algorithm, the proposed approach uses the initial image coordinates to build an incremental constrained Delaunay triangulation that represents the neighborhood of each image. This triangulation makes it possible to match only neighboring images and therefore reduces the time-consuming feature matching step. The estimated relative orientation between the matched images is used to find a candidate seed image for the stitching process. The pre-estimated transformation parameters of the images are employed successively, in a growing fashion, to create the stitched image and the coverage image. The proposed approach is implemented and tested using images acquired through a UAV flight mission, and the achieved results are presented and discussed.
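
    The neighbourhood idea can be illustrated with an ordinary Delaunay triangulation of the approximate image positions (the paper uses an incremental constrained triangulation; scipy's standard Delaunay is used here as a simplification). Each triangulation edge becomes a candidate pair for feature matching, so distant, non-overlapping images are never compared.

```python
import numpy as np
from scipy.spatial import Delaunay

def neighbour_pairs(image_positions):
    """Candidate image pairs for feature matching, taken from a Delaunay triangulation
    of the navigation-sensor positions, so only neighbouring images are matched."""
    tri = Delaunay(np.asarray(image_positions))
    pairs = set()
    for simplex in tri.simplices:           # each simplex is a triangle of image indices
        for i in range(3):
            for j in range(i + 1, 3):
                pairs.add(tuple(sorted((simplex[i], simplex[j]))))
    return sorted(pairs)

# Hypothetical approximate image centres (easting, northing) from the UAV's GNSS log
positions = [(0, 0), (30, 2), (61, -1), (2, 25), (33, 27), (60, 26)]
print(neighbour_pairs(positions))
# Only these pairs go through SIFT matching; all-against-all comparison is avoided.
```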

  5. Multiagent pursuit-evasion games: Algorithms and experiments

    NASA Astrophysics Data System (ADS)

    Kim, Hyounjin

    Deployment of intelligent agents has been made possible through advances in control software, microprocessors, sensor/actuator technology, communication technology, and artificial intelligence. Intelligent agents now play important roles in many applications where human operation is too dangerous or inefficient. There is little doubt that the world of the future will be filled with intelligent robotic agents employed to autonomously perform tasks, or embedded in systems all around us, extending our capabilities to perceive, reason and act, and replacing human efforts. There are numerous real-world applications in which a single autonomous agent is not suitable and multiple agents are required. However, after years of active research in multi-agent systems, current technology is still far from achieving many of these real-world applications. Here, we consider the problem of deploying a team of unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to pursue a second team of UGV evaders while concurrently building a map in an unknown environment. This pursuit-evasion game encompasses many of the challenging issues that arise in operations using intelligent multi-agent systems. We cast the problem in a probabilistic game-theoretic framework and consider two computationally feasible pursuit policies: greedy and global-max. We also formulate this probabilistic pursuit-evasion game as a partially observable Markov decision process and employ a policy search algorithm to obtain a good pursuit policy from a restricted class of policies. The estimated value of this policy is guaranteed to be uniformly close to the optimal value in the given policy class under mild conditions. To implement this scenario on real UAVs and UGVs, we propose a distributed hierarchical hybrid system architecture which emphasizes the autonomy of each agent yet allows for coordinated team efforts. We then describe our implementation on a fleet of UGVs and UAVs, detailing components such as high-level pursuit policy computation, inter-agent communication, navigation, sensing, and regulation. We present both simulation and experimental results on real pursuit-evasion games between our fleet of UAVs and UGVs and evaluate the pursuit policies, relating expected capture times to the speed and intelligence of the evaders and the sensing capabilities of the pursuers. The architecture and algorithms described in this dissertation are general enough to be applied to many real-world applications.

  6. UAV formation control design with obstacle avoidance in dynamic three-dimensional environment.

    PubMed

    Chang, Kai; Xia, Yuanqing; Huang, Kaoli

    2016-01-01

    This paper considers the artificial potential field method combined with rotational vectors for the general problem of multi-unmanned aerial vehicle (UAV) systems tracking a moving target in a dynamic three-dimensional environment. An attractive potential field is generated between the leader and the target; it drives the leader to track the target based on their relative position. The other UAVs in the formation are controlled to follow the leader by the attractive control force. A repulsive force acts among the UAVs to avoid collisions and to distribute the UAVs evenly on the spherical surface whose center is the leader UAV. Specific orders or positions of the UAVs are not required. Obstacle-avoidance trajectories can be obtained through two kinds of potential field with rotation vectors. Every UAV can choose the optimal trajectory to avoid an obstacle and reconfigure the formation after passing it. Simulation studies are presented to demonstrate the effectiveness of the proposed method.
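
    The attract/repel mechanics can be sketched with the standard artificial potential field forces below; this simplified version omits the paper's rotational vectors and spherical-surface distribution, and the gains, safety radius and positions are illustrative assumptions.

```python
import numpy as np

def attractive_force(pos, target, gain=1.0):
    """Pulls the leader toward the target (or a follower toward the leader)."""
    return gain * (target - pos)

def repulsive_force(pos, others, safe_radius=5.0, gain=20.0):
    """Pushes a UAV away from neighbours that enter its safety radius."""
    f = np.zeros(3)
    for o in others:
        d = pos - o
        dist = np.linalg.norm(d)
        if 1e-6 < dist < safe_radius:
            # classic repulsive-potential gradient: grows sharply as the distance shrinks
            f += gain * (1.0 / dist - 1.0 / safe_radius) * d / dist**2
    return f

# One follower UAV reacting to the leader and to two nearby formation members
follower = np.array([0.0, 0.0, 10.0])
leader = np.array([8.0, 3.0, 12.0])
neighbours = [np.array([1.5, 0.5, 10.0]), np.array([-20.0, 0.0, 10.0])]  # second one is far away
cmd = attractive_force(follower, leader) + repulsive_force(follower, neighbours)
print(cmd)   # command direction for this control step
```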

  7. On a Fundamental Evaluation of a UAV Equipped with a Multichannel Laser Scanner

    NASA Astrophysics Data System (ADS)

    Nakano, K.; Suzuki, H.; Omori, K.; Hayakawa, K.; Kurodai, M.

    2018-05-01

    Unmanned aerial vehicles (UAVs), which have been widely used in various fields such as archaeology, agriculture, mining, and construction, can acquire high-resolution images at the millimetre scale. It is possible to obtain realistic 3D models using high-overlap images and 3D reconstruction software based on computer vision technologies such as Structure from Motion and Multi-View Stereo. However, it remains difficult to obtain key points from surfaces with limited texture, such as new asphalt or concrete, or from areas like forests that may be concealed by vegetation. A promising method for conducting aerial surveys in such conditions is the use of UAVs equipped with laser scanners. We conducted a fundamental performance evaluation of the Velodyne VLP-16 multi-channel laser scanner mounted on a DJI Matrice 600 Pro UAV at a construction site. Here, we present our findings with respect to both the geometric and radiometric aspects of the acquired data.

  8. Validation of Inertial and Optical Navigation Techniques for Space Applications with UAVS

    NASA Astrophysics Data System (ADS)

    Montaño, J.; Wis, M.; Pulido, J. A.; Latorre, A.; Molina, P.; Fernández, E.; Angelats, E.; Colomina, I.

    2015-09-01

    PERIGEO is an R&D project, funded by the INNPRONTA 2011-2014 programme of the Spanish CDTI, which aims to investigate the use of UAV technologies and processes for the validation of space-oriented technologies. For this purpose, among different space missions and technologies, a set of activities for absolute and relative navigation is being carried out to address the attitude and position estimation problem from a temporal image sequence acquired by a visible-spectrum camera and/or a Light Detection and Ranging (LIDAR) sensor. The process is covered entirely: sensor measurements and data acquisition (images, LIDAR ranges and angles), data pre-processing (calibration and co-registration of camera and LIDAR data), feature and landmark extraction from the images, and image/LIDAR-based state estimation. In addition to the image processing area, a classical navigation system based on inertial sensors is also included in the research. The reason for combining both approaches is to retain navigation capability in environments or missions where a radio beacon or reference signal such as the GNSS satellites is not available (for example, an atmospheric flight at Titan). The rationale behind the combination of these systems is that they complement each other. The INS is capable of providing accurate position, velocity and full attitude estimates at high data rates; however, it needs an absolute reference observation to compensate for the time-accumulative errors caused by inertial sensor inaccuracies. On the other hand, imaging observables can provide absolute and relative position and attitude estimates; however, they require the sensor head to point toward the ground (something that may not be possible if the carrying platform is maneuvering) to provide accurate estimates, and they cannot deliver the update rates of some hundreds of Hz that an INS can. This mutual complementarity has been observed in PERIGEO, and because of this the two are combined into one system. The inertial navigation system implemented in PERIGEO is based on a classical loosely coupled INS/GNSS approach that is very similar to the implementation of the INS/imaging navigation system mentioned above. The activities envisaged in PERIGEO cover algorithm development and validation and technology testing on UAVs under representative conditions. Past activities have covered the design and development of the algorithms and systems. This paper presents the most recent activities and results in the area of image processing for robust estimation within PERIGEO, which are related to the definition of the hardware platforms (including sensors) and their integration on UAVs. Results for the tests performed during the flight campaigns in representative outdoor environments will also be presented (at the time of the full paper submission the tests will be performed), as well as analyzed, together with a roadmap definition for future developments.

  9. Visualization of Air Particle Dynamics in an Engine Inertial Particle Separator

    NASA Astrophysics Data System (ADS)

    Wolf, Jason; Zhang, Wei

    2015-11-01

    Unmanned Aerial Vehicles (UAVs) are regularly deployed around the world in support of military, civilian and humanitarian efforts. Due to their unique mission profiles, these advanced UAVs utilize various internal combustion engines, which consume large quantities of air. Operating these UAVs in areas with high concentrations of sand and dust can be hazardous to the engines, especially during takeoff and landing. In such events, engine intake filters quickly become saturated and clogged with dust particles, causing a substantial decrease in the UAVs' engine performance and service life. Development of an Engine Air Particle Separator (EAPS) with high particle separation efficiency is necessary for maintaining satisfactory performance of the UAVs. Inertial Particle Separators (IPS) have been one common effective method but they experience complex internal particle-laden flows that are challenging to understand and model. This research employs an IPS test rig to simulate dust particle separation under different flow conditions. Soda lime glass spheres with a mean diameter of 35-45 microns are used in experiments as a surrogate for airborne particulates encountered during flight. We will present measurements of turbulent flow and particle dynamics using flow visualization techniques to understand the multiphase fluid dynamics in the IPS device. This knowledge can contribute to design better performing IPS systems for UAVs. Cleveland State University, Cleveland, Ohio, 44115.

  10. Consensus-based distributed estimation in multi-agent systems with time delay

    NASA Astrophysics Data System (ADS)

    Abdelmawgoud, Ahmed

    In recent years, research in the field of cooperative control of swarms of robots, especially Unmanned Aerial Vehicles (UAVs), has advanced due to the increase in UAV applications. The ability to track targets using UAVs has a wide range of applications, not only civilian but also military. For civilian applications, UAVs can perform tasks including, but not limited to, mapping an unknown area, weather forecasting, land survey, and search and rescue missions. On the other hand, for military personnel, a UAV can track and locate a variety of objects, including the movement of enemy vehicles. Consensus problems arise in a number of applications including coordination of UAVs, information processing in wireless sensor networks, and distributed multi-agent optimization. We consider widely studied consensus algorithms for processing data sensed by different sensors in wireless sensor networks of dynamic agents. Every agent in the network forms a weighted average of its own estimate of some state with the values received from its neighboring agents. We introduce a novel consensus-based distributed estimation algorithm designed to reach consensus under time-delay constraints. The performance of the proposed algorithm was observed in a scenario where a swarm of UAVs measures the location of a maneuvering ground target. We assume that each UAV computes its state prediction and shares it with its neighbors only; however, the shared information reaches different agents with varying time delays. The entire group of UAVs must reach a consensus on the target state. Different scenarios were also simulated to examine the effectiveness and performance of the proposed algorithm in terms of overall estimation error, disagreement between delayed and non-delayed agents, and time to reach consensus for each contributing parameter.
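
    The weighted-averaging step described above can be written as a simple discrete-time consensus iteration in which each agent combines its own value with (possibly delayed) neighbour values. The sketch below is a generic delayed-consensus illustration, not the dissertation's algorithm; the weight matrix and delay pattern are made up for the example.

```python
import numpy as np

def delayed_consensus(x0, W, delays, steps=200):
    """Discrete-time consensus where each agent uses delayed neighbour values.

    x0     : initial local estimates (e.g. each UAV's target-position estimate), shape (n,)
    W      : row-stochastic weight matrix; W[i, j] > 0 only if j is a neighbour of i
    delays : integer delay (in steps) on the information agent i receives from agent j
    """
    n = len(x0)
    history = [np.array(x0, float)]
    for k in range(steps):
        x_new = np.zeros(n)
        for i in range(n):
            for j in range(n):
                kd = max(0, k - delays[i][j])          # use the delayed value from agent j
                x_new[i] += W[i, j] * history[kd][j]
        history.append(x_new)
    return history[-1]

# Four UAVs on a ring, each averaging itself with its two neighbours under small delays
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
delays = [[0, 2, 0, 1], [2, 0, 1, 0], [0, 1, 0, 3], [1, 0, 3, 0]]
print(delayed_consensus([10.0, 12.5, 9.0, 11.0], W, delays))   # all agents approach a common value
```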

  11. Possibilities of Use of UAVs for Technical Inspection of Buildings and Constructions

    NASA Astrophysics Data System (ADS)

    Banaszek, Anna; Banaszek, Sebastian; Cellmer, Anna

    2017-12-01

    In recent years, Unmanned Aerial Vehicles (UAVs) have been used in various sectors of the economy, driven by the development of new technologies for acquiring and processing geospatial data. The paper presents the results of experiments using a UAV, equipped with a high-resolution digital camera, for a visual assessment of the technical condition of a building roof and for the inventory of energy infrastructure and its surroundings. The usefulness of digital images obtained from the UAV platform is demonstrated with concrete examples. The use of UAVs offers new opportunities in the area of technical inspection due to the detail and accuracy of the data, low operating costs and fast data acquisition.

  12. Real-Time Single-Frequency GPS/MEMS-IMU Attitude Determination of Lightweight UAVs

    PubMed Central

    Eling, Christian; Klingbeil, Lasse; Kuhlmann, Heiner

    2015-01-01

    In this paper, a newly-developed direct georeferencing system for the guidance, navigation and control of lightweight unmanned aerial vehicles (UAVs), having a weight limit of 5 kg and a size limit of 1.5 m, and for UAV-based surveying and remote sensing applications is presented. The system is intended to provide highly accurate positions and attitudes (better than 5 cm and 0.5°) in real time, using lightweight components. The main focus of this paper is on the attitude determination with the system. This attitude determination is based on an onboard single-frequency GPS baseline, MEMS (micro-electro-mechanical systems) inertial sensor readings, magnetic field observations and a 3D position measurement. All of this information is integrated in a sixteen-state error space Kalman filter. Special attention in the algorithm development is paid to the carrier phase ambiguity resolution of the single-frequency GPS baseline observations. We aim at a reliable and instantaneous ambiguity resolution, since the system is used in urban areas, where frequent losses of the GPS signal lock occur and the GPS measurement conditions are challenging. Flight tests and a comparison to a navigation-grade inertial navigation system illustrate the performance of the developed system in dynamic situations. Evaluations show that the accuracies of the system are 0.05° for the roll and the pitch angle and 0.2° for the yaw angle. The ambiguities of the single-frequency GPS baseline can be resolved instantaneously in more than 90% of the cases. PMID:26501281
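
    For illustration only, a small sketch of how heading and pitch can be read off a single GPS antenna baseline expressed in local NED coordinates; the paper's full sixteen-state error-space filter, which also fuses MEMS inertial and magnetic data, is not reproduced here.

    ```python
    import numpy as np

    def heading_pitch_from_baseline(baseline_ned):
        """Heading and pitch (rad) of a GPS antenna baseline given in NED coordinates.

        A single baseline cannot observe roll; the paper's filter adds MEMS inertial
        and magnetic measurements for the full attitude, which is omitted here.
        """
        n, e, d = baseline_ned
        heading = np.arctan2(e, n)                    # 0 = North, positive toward East
        pitch = np.arctan2(-d, np.hypot(n, e))        # positive = front antenna above rear
        return heading, pitch

    # baseline of ~0.9 m pointing roughly north-east and slightly upward
    print(np.degrees(heading_pitch_from_baseline(np.array([0.6, 0.6, -0.1]))))
    ```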

  13. Precise Target Geolocation Based on Integration of Thermal Video Imagery and RTK GPS in UAVs

    NASA Astrophysics Data System (ADS)

    Hosseinpoor, H. R.; Samadzadegan, F.; Dadras Javan, F.

    2015-12-01

    There is an increasingly wide range of uses for Unmanned Aerial Vehicles (UAVs), from surveillance and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors, such as a C/A-code GPS receiver and a low-cost IMU, allowing a positioning accuracy of 5 to 10 meters. This low accuracy means the data cannot be used in applications that require cm-level precision. This paper presents a precise process for the geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data are filtered using a linear Kalman filter, which provides a smoothed estimate of target location and target velocity. The accurate geolocation of targets during image acquisition is conducted via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters obtained from the on-board IMU and RTK GPS sensors and Kalman filtering, together with interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process.
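
    A minimal sketch of the kind of linear, constant-velocity Kalman filter used to smooth the geolocated target track; the state layout and noise levels are assumptions, not the paper's tuning.

    ```python
    import numpy as np

    def kalman_cv_step(x, P, z, dt, q=0.5, r=1.0):
        """One predict/update cycle of a 2-D constant-velocity Kalman filter.

        x : state [px, py, vx, vy], P : 4x4 covariance, z : measured [px, py].
        q, r : illustrative process/measurement noise levels (not from the paper).
        """
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], dtype=float)
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)
        Q = q * np.eye(4)
        R = r * np.eye(2)

        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the geolocated image measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P
    ```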

  14. Pushbroom Stereo for High-Speed Navigation in Cluttered Environments

    DTIC Science & Technology

    2014-09-01

    inertial measurement sensors such as Achtelik et al.'s implementation of PTAM (parallel tracking and mapping) [15] with a barometric altimeter, stable flights...in indoor and outdoor environments are possible [1]. With a full vision-aided inertial navigation system (VINS), Li et al. have shown remarkable...avoidance on small UAVs. Stereo systems suffer from a similar speed issue, with most modern systems running at or below 30 Hz [8], [27]. Honegger et

  15. Prototyping a GNSS-Based Passive Radar for UAVs: An Instrument to Classify the Water Content Feature of Lands

    PubMed Central

    Troglia Gamba, Micaela; Marucco, Gianluca; Pini, Marco; Ugazio, Sabrina; Falletti, Emanuela; Lo Presti, Letizia

    2015-01-01

    Global Navigation Satellite Systems (GNSS) broadcast signals for positioning and navigation, which can be also employed for remote sensing applications. Indeed, the satellites of any GNSS can be seen as synchronized sources of electromagnetic radiation, and specific processing of the signals reflected back from the ground can be used to estimate the geophysical properties of the Earth’s surface. Several experiments have successfully demonstrated GNSS-reflectometry (GNSS-R), whereas new applications are continuously emerging and are presently under development, either from static or dynamic platforms. GNSS-R can be implemented at a low cost, primarily if small devices are mounted on-board unmanned aerial vehicles (UAVs), which today can be equipped with several types of sensors for environmental monitoring. So far, many instruments for GNSS-R have followed the GNSS bistatic radar architecture and consisted of custom GNSS receivers, often requiring a personal computer and bulky systems to store large amounts of data. This paper presents the development of a GNSS-based sensor for UAVs and small manned aircraft, used to classify lands according to their soil water content. The paper provides details on the design of the major hardware and software components, as well as the description of the results obtained through field tests. PMID:26569242

  16. Prototyping a GNSS-Based Passive Radar for UAVs: An Instrument to Classify the Water Content Feature of Lands.

    PubMed

    Gamba, Micaela Troglia; Marucco, Gianluca; Pini, Marco; Ugazio, Sabrina; Falletti, Emanuela; Lo Presti, Letizia

    2015-11-10

    Global Navigation Satellite Systems (GNSS) broadcast signals for positioning and navigation, which can be also employed for remote sensing applications. Indeed, the satellites of any GNSS can be seen as synchronized sources of electromagnetic radiation, and specific processing of the signals reflected back from the ground can be used to estimate the geophysical properties of the Earth's surface. Several experiments have successfully demonstrated GNSS-reflectometry (GNSS-R), whereas new applications are continuously emerging and are presently under development, either from static or dynamic platforms. GNSS-R can be implemented at a low cost, primarily if small devices are mounted on-board unmanned aerial vehicles (UAVs), which today can be equipped with several types of sensors for environmental monitoring. So far, many instruments for GNSS-R have followed the GNSS bistatic radar architecture and consisted of custom GNSS receivers, often requiring a personal computer and bulky systems to store large amounts of data. This paper presents the development of a GNSS-based sensor for UAVs and small manned aircraft, used to classify lands according to their soil water content. The paper provides details on the design of the major hardware and software components, as well as the description of the results obtained through field tests.

  17. A Multi-Purpose Simulation Environment for UAV Research

    DTIC Science & Technology

    2003-05-01

    Unmanned aerial vehicles (UAVs) are playing an important role in today's military initiatives. UAVs have proven to be invaluable in...battlefield commanders. Integration of new technologies necessitates simulation prior to fielding new systems in order to avoid costly errors. The unique...

  18. Budget UAV Systems for the Prospection of Small- and Medium-Scale Archaeological Sites

    NASA Astrophysics Data System (ADS)

    Ostrowski, W.; Hanus, K.

    2016-06-01

    One of the popular uses of UAVs in photogrammetry is providing archaeological documentation. A wide range of low-cost (consumer-grade) UAVs, together with the popularity of user-friendly photogrammetric software capable of producing satisfying results, facilitates the process of preparing documentation for small archaeological sites. However, using solutions of this kind is much more problematic for larger areas. The limited possibilities of autonomous flight make it significantly harder to obtain data for areas too large to be covered during a single mission. Moreover, the platforms used are sometimes not equipped with telemetry systems, which makes navigating and guaranteeing a consistent data quality across separate flights difficult. The simplest solution is using a better UAV; however, the cost of such devices often exceeds the financial capabilities of archaeological expeditions. The aim of this article is to present a methodology for obtaining data for medium-scale areas using only a basic UAV. The proposed methodology assumes the use of a simple multirotor without any flight planning system or telemetry. Navigation of the platform is based solely on live-view images sent from the camera attached to the UAV. The presented survey was carried out using a simple GoPro camera which, from the perspective of photogrammetric use, was not the optimal configuration due to its fisheye lens geometry. Another limitation is the actual operational range of the UAV, which for cheaper systems rarely exceeds 1 kilometre and is in fact often much smaller. Therefore, the surveyed area must be divided into sub-blocks corresponding to the range of the drone. This is inconvenient, since the blocks must overlap so that they can later be merged during processing, which increases both the length of the required flights and the computing power necessary to process a greater number of images. These issues make prospection highly inconvenient, but not impossible. Our paper presents our experience through two case studies: surveys conducted in Nepal under the aegis of UNESCO, and work carried out as part of a Polish archaeological expedition in Cyprus, both of which prove that the proposed methodology allows obtaining satisfying results. The article is an important voice in the ongoing debate between commercial and academic archaeologists on the balance between the required standards of archaeological work and the economic capabilities of archaeological missions.

  19. Path planning and Ground Control Station simulator for UAV

    NASA Astrophysics Data System (ADS)

    Ajami, A.; Balmat, J.; Gauthier, J.-P.; Maillot, T.

    In this paper we present a Universal and Interoperable Ground Control Station (UIGCS) simulator for fixed- and rotary-wing Unmanned Aerial Vehicles (UAVs) and all types of payloads. One of the major constraints is to operate and manage multiple legacy and future UAVs, taking into account compliance with the NATO Combined/Joint Services Operational Environment (STANAG 4586). Another purpose of the station is to give the UAV a certain degree of autonomy via autonomous planning/replanning strategies. The paper is organized as follows. In Section 2, we describe the non-linear models of the fixed- and rotary-wing UAVs used in the simulator. In Section 3, we describe the simulator architecture, which is based upon interacting modules programmed independently. This simulator is linked with an open-source flight simulator to simulate the video stream and the moving target in 3D. To conclude this part, we briefly address the connection between the Matlab/Simulink software (used to model the UAV's dynamics) and the simulation of the virtual environment. Section 5 deals with the control of the UAV's flight path. The control system is divided into four distinct hierarchical layers: flight path, navigation controller, autopilot, and flight control surface controller. In Section 6, we focus on trajectory planning/replanning for fixed-wing UAVs. Indeed, one of the goals of this work is to increase the autonomy of the UAV. We propose two types of algorithms, based upon 1) tangent methods and 2) an original Lyapunov-type method. These algorithms allow the UAV either to join a fixed pattern or to track a moving target. Finally, Section 7 presents simulation results obtained with our simulator for a rather complicated mission scenario.
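
    The tangent and Lyapunov-type planners are not detailed in the abstract, so the following sketch only illustrates where a guidance law of this kind sits in the flight-path layer, using plain pursuit of a moving target with assumed gains.

    ```python
    import math

    def heading_command(uav_xy, target_xy, current_heading, k_p=1.0, max_rate=0.5):
        """Turn-rate command (rad/s) steering a fixed-wing UAV toward a moving target.

        This is plain pursuit guidance, not the paper's tangent or Lyapunov planners.
        """
        dx = target_xy[0] - uav_xy[0]
        dy = target_xy[1] - uav_xy[1]
        desired = math.atan2(dy, dx)
        error = (desired - current_heading + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
        return max(-max_rate, min(max_rate, k_p * error))
    ```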

  20. Aerial images visual localization on a vector map using color-texture segmentation

    NASA Astrophysics Data System (ADS)

    Kunina, I. A.; Teplyakov, L. M.; Gladkov, A. P.; Khanipov, T. M.; Nikolaev, D. P.

    2018-04-01

    In this paper we study the problem of combining UAV-acquired optical data with a coastal vector map in the absence of satellite navigation data. The method is based on representing the territory as a set of segments produced by color-texture image segmentation. We then find the geometric transform which gives the best match between these segments and the land and water areas of the georeferenced vector map. We compute a transform consisting of an arbitrary shift relative to the vector map together with bounded rotation and scaling. These parameters are estimated using the RANSAC algorithm, which matches the segment contours with the contours of the land and water areas of the vector map. To implement this matching we suggest computing shape descriptors robust to rotation and scaling. We performed numerical experiments demonstrating the practical applicability of the proposed method.
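
    A minimal sketch of the RANSAC stage under the stated constraints: hypothesise a similarity transform (shift plus bounded rotation and scale) from two tentative contour correspondences and keep the hypothesis with the most inliers; the thresholds and bounds are illustrative.

    ```python
    import numpy as np

    def fit_similarity(p, q):
        """Scale, rotation, translation mapping two source points p to two targets q."""
        dp, dq = p[1] - p[0], q[1] - q[0]
        s = np.linalg.norm(dq) / np.linalg.norm(dp)
        ang = np.arctan2(dq[1], dq[0]) - np.arctan2(dp[1], dp[0])
        R = s * np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
        t = q[0] - R @ p[0]
        return R, t, s, ang

    def ransac_similarity(src, dst, iters=2000, tol=5.0, s_lim=(0.8, 1.25), a_lim=0.35):
        """src, dst: (N, 2) arrays of tentative segment-contour correspondences."""
        rng = np.random.default_rng(0)
        best, best_inliers = None, 0
        for _ in range(iters):
            idx = rng.choice(len(src), 2, replace=False)
            R, t, s, ang = fit_similarity(src[idx], dst[idx])
            if not (s_lim[0] <= s <= s_lim[1]) or abs(ang) > a_lim:
                continue                                  # enforce bounded rotation/scale
            err = np.linalg.norm(src @ R.T + t - dst, axis=1)
            inliers = int((err < tol).sum())
            if inliers > best_inliers:
                best, best_inliers = (R, t), inliers
        return best, best_inliers
    ```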

  1. Evaluation of low- and medium-cost IMUs for airborne gravimetry with UAVs

    NASA Astrophysics Data System (ADS)

    Deurloo, R. A.; Bastos, M. L.; Geng, Y.; Yan, W.

    2011-12-01

    The use of Unmanned Aerial Vehicles (UAVs) has increased in a large number of fields and is proving to be a good alternative to aerial surveys with traditional (manned) aircraft. In the scope of the PITVANT (Projecto de Investigação e Tecnologia em Veículos Aéreos Não-Tripulados) project, a research project funded by the Portuguese Ministry of Defence that aims at the development and demonstration of tools and technologies for UAVs, the Astronomical Observatory of the Faculty of Sciences of the University of Porto is investigating the use of UAVs for regional airborne gravimetry. The goal is to implement a so-called strapdown gravimetry system, based on the integrated use of GNSS and a low- to medium-cost IMU (Inertial Measurement Unit), that can be set up on board the UAVs developed within PITVANT. Two basic approaches exist in strapdown GNSS/IMU gravimetry: to compute gravity disturbances directly from the combination of GNSS-derived accelerations with accelerations measured by the IMU (the accelerometry approach), or to estimate the gravity disturbances as part of an inertial navigation solution using an (extended) Kalman filter (the inertial navigation approach). Because of the limitations of low- to medium-cost inertial systems, the latter approach was used here. This method has proven effective in previous studies with this type of GNSS/IMU system. To define the final system architecture, the performance of several different inertial systems was recently tested during an airborne survey with a regular aircraft, a CASA C212 of the Portuguese Air Force (PAF). Among the systems on board were a medium-cost Litton LN-200 and a low-cost Crossbow AHRS440, combined with a single GNSS receiver. Different Kalman filter configurations and GNSS processing options were investigated for each system. The main goal was to assess the limits of the integrated GNSS/IMU systems in sensing the gravity field (scalar gravimetry) and to evaluate their use and effectiveness on UAVs. The results of this analysis are presented here.
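
    To make the accelerometry idea concrete, a deliberately simplified sketch: with an up-positive specific force and a GNSS-derived vertical acceleration, gravity is roughly their difference, and the disturbance is the departure from normal gravity. Eötvös, lever-arm, attitude and filtering effects, which dominate real processing, are ignored here.

    ```python
    def gravity_disturbance(f_up, h_ddot, normal_gravity):
        """Very crude scalar gravity disturbance (m/s^2).

        f_up           : upward specific force from the (levelled) IMU
        h_ddot         : vertical acceleration differentiated from GNSS heights
        normal_gravity : normal gravity at the flight position

        In free fall f_up = 0 and h_ddot = -g, so f_up - h_ddot recovers g.
        Eotvos and attitude corrections are omitted in this sketch.
        """
        g = f_up - h_ddot
        return g - normal_gravity
    ```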

  2. Multi-temporal UAV based data for mapping crop type and structure in smallholder dominated Tanzanian agricultural landscape

    NASA Astrophysics Data System (ADS)

    Nagol, J. R.; Chung, C.; Dempewolf, J.; Maurice, S.; Mbungu, W.; Tumbo, S.

    2015-12-01

    Timely mapping and monitoring of crops like maize, an important food security crop in Tanzania, can facilitate a timely response by government and non-government organizations to food shortage or surplus conditions. Small UAVs can play an important role in linking spaceborne remote sensing data and ground-based measurements to improve the calibration and validation of satellite-based estimates of in-season crop metrics. In Tanzania much of the growing season is often obscured by clouds. UAV data, if collected within a stratified statistical sampling framework, can also be used directly, in lieu of spaceborne data, to infer mid-season yield estimates at regional scales. Here we present an object-based approach to estimating crop metrics such as crop type, area, and height using multi-temporal UAV imagery. The methods were tested at three 1 km² plots in the Kilosa, Njombe, and Same districts of Tanzania. At these sites both ground-based and UAV-based data were collected on a monthly time-step during the 2015 growing season. A SenseFly eBee drone with RGB and NIR-R-G cameras was used to collect the data. Crop type classification accuracies above 85% were readily achieved.

  3. Bridge Crack Detection Using Multi-Rotary UAV and Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Rau, J. Y.; Hsiao, K. W.; Jhan, J. P.; Wang, S. H.; Fang, W. C.; Wang, J. L.

    2017-08-01

    Bridges are important infrastructure for human life; thus, bridge safety monitoring and maintenance are important issues for the government. Conventionally, bridge inspections were conducted by in-situ visual examination. This procedure sometimes requires an under-bridge inspection vehicle or personnel climbing underneath the bridge, so its cost and risk are high and it is labor-intensive and time-consuming. In particular, its documentation procedure is subjective and lacks 3D spatial information. In order to cope with these challenges, this paper proposes the use of a multi-rotor UAV equipped with a SONY A7R2 high-resolution digital camera, a 50 mm fixed-focal-length lens and a 135-degree up-down rotating gimbal. The target bridge contains three spans with a total length of 60 meters, a width of 20 meters and a height of 8 meters above the water level. In total, about 10,000 images were taken, some of which were acquired from the ground using a hand-held pole 2-8 meters long. The images were processed with Agisoft PhotoScan Pro to obtain exterior and interior orientation parameters. A local coordinate system was defined using 12 ground control points measured by a total station. After triangulation and camera self-calibration, the RMS error of the control points is less than 3 cm. A 3D CAD model describing the bridge surface geometry was manually measured in PhotoScan Pro; it is composed of planar polygons and is used for retrieving the related UAV images. Additionally, a photorealistic 3D model can be produced for 3D visualization. In order to detect cracks on the bridge surface, we utilize object-based image analysis (OBIA) to segment the images into objects. We then derive several object features, such as density, area/bounding-box ratio, length/width ratio and length, and set up a classification rule set to distinguish cracks. Further, we apply semi-global matching (SGM) to obtain 3D crack information, and based on the image scale we can calculate the width of a crack object. For spalling volume calculation, we also apply SGM to obtain dense surface geometry. Assuming the background is a planar surface, we can fit a planar function and convert the surface geometry into a DSM; the height of a spalling area will then be lower than the plane and its value will be negative. We can thus apply several image processing techniques to segment the spalling area and calculate the spalling volume as well. For bridge inspection and UAV image management within the laboratory, we developed a graphical user interface. Its major functions include automatic crack detection using OBIA, crack editing (i.e., deleting and adding cracks), crack attributing, 3D crack visualization, spalling area/volume calculation and bridge defect documentation.
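
    A small sketch of the image-scale step mentioned above: a crack width measured in pixels converts to millimetres through the ground sample distance; the focal length and pixel pitch below are assumptions, not the survey's calibration.

    ```python
    def crack_width_mm(width_px, distance_m, focal_length_mm=50.0, pixel_pitch_um=4.5):
        """Physical crack width from its pixel width and the camera-to-surface distance.

        Ground sample distance (mm/pixel) = distance * pixel pitch / focal length.
        """
        gsd_mm = (distance_m * 1000.0) * (pixel_pitch_um / 1000.0) / focal_length_mm
        return width_px * gsd_mm

    # e.g. a 3-pixel-wide crack imaged from 8 m with the assumed 50 mm lens
    print(round(crack_width_mm(3, 8.0), 2), "mm")
    ```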

  4. On the prospects of cross-calibrating the Cherenkov Telescope Array with an airborne calibration platform

    NASA Astrophysics Data System (ADS)

    Brown, Anthony M.

    2018-01-01

    Recent advances in unmanned aerial vehicle (UAV) technology have made UAVs an attractive possibility as an airborne calibration platform for astronomical facilities. This is especially true for arrays of telescopes spread over a large area such as the Cherenkov Telescope Array (CTA). In this paper, the feasibility of using UAVs to calibrate CTA is investigated. Assuming a UAV at 1 km altitude above CTA, operating on astronomically clear nights with stratified, low atmospheric dust content, appropriate thermal protection for the calibration light source and an onboard photodiode to monitor its absolute light intensity, inter-calibration of CTA's telescopes of the same size class is found to be achievable with a 6 - 8 % uncertainty. For cross-calibration of different telescope size classes, a systematic uncertainty of 8 - 10 % is found to be achievable. Importantly, equipping the UAV with a multi-wavelength calibration light source affords us the ability to monitor the wavelength-dependent degradation of CTA telescopes' optical system, allowing us not only to maintain this 6 - 10 % uncertainty after the first few years of telescope deployment, but also to accurately account for the effect of multi-wavelength degradation on the cross-calibration of CTA by other techniques, namely with images of air showers and local muons. A UAV-based system thus provides CTA with several independent and complementary methods of cross-calibrating the optical throughput of individual telescopes. Furthermore, housing environmental sensors on the UAV system allows us not only to minimise the systematic uncertainty associated with the atmospheric transmission of the calibration signal, but also to map the dust content above CTA and to monitor the temperature, humidity and pressure profiles of the first kilometre of atmosphere above CTA with each UAV flight.

  5. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems.

    PubMed

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-03-01

    One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers' visual and manual distractions with 'infotainment' technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual-manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox 'one-shot' voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory-vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers' interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation.

  6. Multi-Sensor Fusion and Enhancement for Object Detection

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-Ur

    2005-01-01

    This was a quick effort to investigate the ability to detect changes along the flight path of an unmanned airborne vehicle (UAV) over time. Video was acquired by the UAV during several passes over the same terrain. Concurrently, GPS data and UAV attitude data were also acquired. The purpose of the research was to use information from all of these sources to detect whether any change had occurred in the terrain encompassed by the flight path.

  7. Visual tracking for multi-modality computer-assisted image guidance

    NASA Astrophysics Data System (ADS)

    Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp

    2017-03-01

    With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.

  8. Cybersecurity for aerospace autonomous systems

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    High-profile breaches have occurred across numerous information systems. One area where attacks are particularly problematic is autonomous control systems. This paper considers the aerospace information system, focusing on elements that interact with autonomous control systems (e.g., onboard UAVs). It discusses the trust placed in the autonomous systems and supporting systems (e.g., navigational aids) and how this trust can be validated. Approaches to remotely detecting UAV compromise, without relying on the onboard software (which runs on a potentially compromised system) as part of the process, are discussed. How different levels of autonomy (task-based, goal-based, mission-based) impact this remote characterization is considered.

  9. The optimal design of UAV wing structure

    NASA Astrophysics Data System (ADS)

    Długosz, Adam; Klimek, Wiktor

    2018-01-01

    The paper presents an optimal design of a UAV wing made of composite materials. The aim of the optimization is to improve strength and stiffness together with a reduction of the weight of the structure. Three different types of functionals, which depend on stress, stiffness and the total mass, are defined. The paper presents an application of an in-house implementation of an evolutionary multi-objective algorithm to the optimization of the UAV wing structure. Values of the functionals are calculated on the basis of results obtained from numerical simulations. A numerical FEM model consisting of different composite materials is created. The adequacy of the numerical model is verified against results obtained from an experiment performed on a tensile testing machine. Examples of multi-objective optimization by means of a Pareto-optimal set of solutions are presented.
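
    A minimal sketch of the Pareto-dominance test behind any multi-objective evolutionary algorithm of this kind; the example objective vectors (stress, compliance, mass, all minimised) are illustrative.

    ```python
    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (all objectives minimised)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(designs):
        """Keep the non-dominated designs from a list of (stress, compliance, mass) tuples."""
        return [d for d in designs
                if not any(dominates(other, d) for other in designs if other is not d)]

    candidates = [(210.0, 0.012, 3.1), (195.0, 0.015, 3.4), (230.0, 0.011, 2.9), (240.0, 0.016, 3.6)]
    print(pareto_front(candidates))   # the last design is dominated and drops out
    ```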

  10. Multi-Objective Algorithm for Blood Supply via Unmanned Aerial Vehicles to the Wounded in an Emergency Situation.

    PubMed

    Wen, Tingxi; Zhang, Zhongnan; Wong, Kelvin K L

    2016-01-01

    Unmanned aerial vehicles (UAVs) have been widely used in many industries. In the medical environment, especially in emergency situations, UAVs play an important role, for example in the fast and efficient supply of medicines and blood. In this paper, we study the problem of multi-objective blood supply by UAVs in such emergency situations. This is a complex problem that includes modeling the temperature of the supplied blood during transportation, scheduling the UAVs and planning their routes when multiple sites request blood, and respecting the limited carrying capacity. Most importantly, we need to study the blood's temperature change due to the external environment, the heating agent (or refrigerant) and the transportation time, and propose an optimal method for calculating the mixing proportion of blood and appendage under different circumstances and delivery conditions. Then, by introducing the idea of a transportation appendage into the traditional Capacitated Vehicle Routing Problem (CVRP), a new problem is formulated according to the factors of distance and weight. Algorithmically, we use a combination of a decomposition-based multi-objective evolutionary algorithm and a local search method and perform a series of experiments on a public CVRP dataset. Compared with traditional techniques, our algorithm obtains better optimization results and time performance.

  11. GPU-based multi-volume ray casting within VTK for medical applications.

    PubMed

    Bozorgi, Mohammadmehdi; Lindseth, Frank

    2015-03-01

    Multi-volume visualization is important for displaying relevant information in multimodal or multitemporal medical imaging studies. The main objective of the current study was to develop an efficient GPU-based multi-volume ray caster (MVRC) and validate the proposed visualization system in the context of image-guided surgical navigation. Ray casting can produce high-quality 2D images from 3D volume data, but the method is computationally demanding, especially when multiple volumes are involved, so a parallel GPU version has been implemented. In the proposed MVRC, imaginary rays are sent through the volumes (one ray for each pixel in the view), and at equal and short intervals along the rays, samples are collected from each volume. Samples from all the volumes are composited using front-to-back α-blending. Since all the rays can be processed simultaneously, the MVRC was implemented in parallel on the GPU to achieve acceptable interactive frame rates. The method is fully integrated within the Visualization Toolkit (VTK) pipeline, with the ability to apply different operations (e.g., transformations, clipping, and cropping) to each volume separately. The implemented method is cross-platform (Windows, Linux and Mac OS X) and runs on different graphics cards (NVIDIA and AMD). The speed of the MVRC was tested with one to five volumes of varying sizes: 128³, 256³, and 512³ voxels. A Tesla C2070 GPU was used, and the output image size was 600 × 600 pixels. The original VTK single-volume ray caster and the MVRC were compared when rendering only one volume. The multi-volume rendering system achieved an interactive frame rate (> 15 fps) when rendering five small volumes (128³ voxels), four medium-sized volumes (256³ voxels), and two large volumes (512³ voxels). When rendering single volumes, the frame rate of the MVRC was comparable to the original VTK ray caster for small and medium-sized datasets but was approximately 3 frames per second slower for large datasets. The MVRC was successfully integrated in an existing surgical navigation system and was shown to be clinically useful during an ultrasound-guided neurosurgical tumor resection. A GPU-based MVRC for VTK is a useful tool in medical visualization. The proposed multi-volume GPU-based ray caster for VTK provided high-quality images at reasonable frame rates. The MVRC was effective when used in a neurosurgical navigation application.
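
    A minimal sketch of the front-to-back α-blending applied to the samples gathered along one ray; sampling, transfer functions and the GPU parallelism are omitted, and the interface is an assumption rather than the MVRC's API.

    ```python
    def composite_ray(samples):
        """Front-to-back alpha compositing of samples collected along one ray.

        samples : iterable of (rgb, alpha) tuples already ordered front to back and
                  interleaved from all volumes at each step, as in the MVRC.
        """
        out_rgb = [0.0, 0.0, 0.0]
        out_a = 0.0
        for rgb, a in samples:
            w = (1.0 - out_a) * a                 # remaining transparency times sample opacity
            out_rgb = [c + w * s for c, s in zip(out_rgb, rgb)]
            out_a += w
            if out_a >= 0.99:                     # early ray termination
                break
        return out_rgb, out_a
    ```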

  12. GaN-based THz advanced quantum cascade lasers for manned and unmanned systems

    NASA Astrophysics Data System (ADS)

    Anwar, A. F. M.; Manzur, Tariq; Lefebvre, Kevin R.; Carapezza, Edward M.

    2009-09-01

    In recent years the use of Unmanned Autonomous Vehicles (UAVs) has seen a wider range of applications. However, their applications are restricted due to, among other factors, (a) advanced integrated sensing and processing electronics and (b) limited energy storage or on-board energy generation. The availability of a wide variety of sensing elements operating at room temperature provides a great degree of flexibility with an extended application domain. Though sensors responding to a variable spectrum of input excitations ranging from (a) chemical, (b) biological, (c) atmospheric, (d) magnetic and (e) visual/IR imaging have been implemented in UAVs, THz technology has not been adopted due to the absence of systems operating at room temperature. The integration of multi-phenomenological onboard sensors on small and miniature unmanned air vehicles will dramatically impact the detection and processing of challenging targets, such as humans carrying weapons or wearing suicide bomb vests. Unmanned air vehicles have the potential of flying over crowds of people and quickly discriminating non-threat humans from threat humans. The state of the art in small and miniature UAVs has progressed to vehicles of less than 1 pound in weight but with payloads of only a fraction of a pound. Uncooled IR sensors, such as amorphous silicon and vanadium oxide microbolometers with MRTs of less than 70 mK and requiring less than 250 mW of power, are available for integration into small UAVs. These sensors are responsive only up to approximately 14 microns and do not compare favorably with THz imaging systems for remotely detecting and classifying concealed weapons and bombs. In the following we propose the use of THz GaN-based QCLs operating at room temperature as a possible alternative.

  13. Precise Target Geolocation and Tracking Based on UAV Video Imagery

    NASA Astrophysics Data System (ADS)

    Hosseinpoor, H. R.; Samadzadegan, F.; Dadrasjavan, F.

    2016-06-01

    There is an increasingly wide range of applications for Unmanned Aerial Vehicles (UAVs), from monitoring and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors, such as a C/A-code GPS receiver and a low-cost IMU, allowing a positioning accuracy of 5 to 10 meters. This low accuracy is insufficient for applications that require cm-level precision. This paper presents a precise process for the geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data are filtered using an extended Kalman filter, which provides a smoothed estimate of target location and target velocity. The accurate geolocation of targets during image acquisition is conducted via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters obtained from the on-board IMU and RTK GPS sensors and Kalman filtering, together with interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process. Compared with ordinary code-based GPS, the results of this study indicate that RTK observations with the proposed method improve target geolocation accuracy by more than a factor of ten.

  14. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems

    PubMed Central

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-01-01

    Abstract One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers’ visual and manual distractions with ‘infotainment’ technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual–manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox ‘one-shot’ voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory–vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers’ interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation. PMID:26269281

  15. Towards "DRONE-BORNE" Disaster Management: Future Application Scenarios

    NASA Astrophysics Data System (ADS)

    Tanzi, Tullio Joseph; Chandra, Madhu; Isnard, Jean; Camara, Daniel; Sebastien, Olivier; Harivelo, Fanilo

    2016-06-01

    Information plays a key role in crisis management and relief efforts for natural disaster scenarios. Given their flight properties, UAVs (Unmanned Aerial Vehicles) provide new and interesting perspectives on data gathering for disaster management. A new generation of UAVs may help to improve situational awareness and information assessment. Among the advantages UAVs may bring to the disaster management field, we can highlight the gain in terms of time and human resources, as they can free rescue teams from time-consuming data collection tasks and assist search operations with more insightful and precise guidance thanks to advanced sensing capabilities. However, in order to be useful, UAVs need to overcome two main challenges. The first is to achieve a sufficient level of autonomy, both in terms of navigation and interpretation of the sensed data. The second major challenge relates to the reliability of the UAV with respect to accidental (safety) or malicious (security) risks. This paper first discusses the potential of UAVs to assist in different humanitarian relief scenarios, as well as possible issues in such situations. Based on recent experiments, we discuss the inherent advantages of autonomous flight operations, both for lone flights and for formation flights. The question of autonomy is then addressed, and a secure embedded architecture and its specific hardware capabilities are sketched out. We finally present a typical use case based on the new detection and observation abilities that UAVs can bring to rescue teams. Although this approach still has limits that have to be addressed, both technically and operationally, it seems to be a very promising one for enhancing disaster management efforts.

  16. A Light-Weight Laser Scanner for UAV Applications

    NASA Astrophysics Data System (ADS)

    Tommaselli, A. M. G.; Torres, F. M.

    2016-06-01

    Unmanned Aerial Vehicles (UAVs) have been recognized as a tool for geospatial data acquisition due to their flexibility and favourable cost-benefit ratio. The practical use of laser scanning devices on board UAVs is also developing, with new experimental and commercial systems. This paper describes a light-weight laser scanning system composed of an IbeoLux scanner, a Novatel Span-IGM-S1 inertial navigation system and a Raspberry Pi portable computer, which records data from both devices, all carried by an octocopter UAV. The performance of this light-weight system was assessed both for accuracy and for point density, using Ground Control Points (GCPs) as reference. Two flights were performed with the octocopter carrying the equipment. In the first trial, the flight height was 100 m, with six strips over a parking area. The second trial was carried out over an urban park with some buildings and artificial targets serving as reference ground control points; here a flight height of 70 m was chosen to improve target response. Accuracy was assessed based on control points whose coordinates were measured in the field. Results showed that the vertical accuracy of this prototype is around 30 cm, which is acceptable for forest applications, and this accuracy can be improved through further refinements in direct georeferencing and in the system calibration.

  17. Wetland Assessment Using Unmanned Aerial Vehicle (UAV) Photogrammetry

    NASA Astrophysics Data System (ADS)

    Boon, M. A.; Greenfield, R.; Tesfamichael, S.

    2016-06-01

    The use of Unmanned Aerial Vehicle (UAV) photogrammetry is a valuable tool to enhance our understanding of wetlands. Accurate planning derived from this technological advancement allows for more effective management and conservation of wetland areas. This paper presents the results of a study that investigated the use of UAV photogrammetry as a tool to enhance the assessment of wetland ecosystems. The UAV images were collected during a single flight of 2½ hours over a 100 ha area at the Kameelzynkraal farm, Gauteng Province, South Africa. An AKS Y-6 MKII multi-rotor UAV and a digital camera on a motion-compensated gimbal mount were utilised for the survey. Twenty ground control points (GCPs) were surveyed using a Trimble GPS to achieve geometric precision and georeferencing accuracy. Structure-from-Motion (SfM) computer vision techniques were used to derive ultra-high-resolution point clouds, orthophotos and 3D models from the multi-view photos. Based on the 20 GCPs, the overall geometric accuracy of the data was 0.018 m, the vertical root mean squared error (RMSE) was 0.0025 m, and the overall root mean square reprojection error was 0.18 pixel. The UAV products were then edited and subsequently analysed and interpreted, and key attributes were extracted using a selection of tools and software applications to enhance the wetland assessment. The results exceeded our expectations and provided a valuable and accurate enhancement to the wetland delineation, classification and health assessment, which would have been difficult to achieve even with detailed field studies.

  18. Which Modality Is Best for Presenting Navigation Instructions?

    DTIC Science & Technology

    2013-08-07

    example, getting directions about where to find merchandise in a large multi-story department store (e.g., Macy's or Harrods). In other, more serious...Casali, 2008) the visual modality can be used instead, with pilots reading the messages as written commands. A third possibility also involves visual

  19. Multi-UAV Routing for Area Coverage and Remote Sensing with Minimum Time

    PubMed Central

    Avellar, Gustavo S. C.; Pereira, Guilherme A. S.; Pimenta, Luciano C. A.; Iscold, Paulo

    2015-01-01

    This paper presents a solution for the problem of minimum time coverage of ground areas using a group of unmanned air vehicles (UAVs) equipped with image sensors. The solution is divided into two parts: (i) the task modeling as a graph whose vertices are geographic coordinates determined in such a way that a single UAV would cover the area in minimum time; and (ii) the solution of a mixed integer linear programming problem, formulated according to the graph variables defined in the first part, to route the team of UAVs over the area. The main contribution of the proposed methodology, when compared with the traditional vehicle routing problem’s (VRP) solutions, is the fact that our method solves some practical problems only encountered during the execution of the task with actual UAVs. In this line, one of the main contributions of the paper is that the number of UAVs used to cover the area is automatically selected by solving the optimization problem. The number of UAVs is influenced by the vehicles’ maximum flight time and by the setup time, which is the time needed to prepare and launch a UAV. To illustrate the methodology, the paper presents experimental results obtained with two hand-launched, fixed-wing UAVs. PMID:26540055

  20. Multi-UAV Routing for Area Coverage and Remote Sensing with Minimum Time.

    PubMed

    Avellar, Gustavo S C; Pereira, Guilherme A S; Pimenta, Luciano C A; Iscold, Paulo

    2015-11-02

    This paper presents a solution for the problem of minimum time coverage of ground areas using a group of unmanned air vehicles (UAVs) equipped with image sensors. The solution is divided into two parts: (i) the task modeling as a graph whose vertices are geographic coordinates determined in such a way that a single UAV would cover the area in minimum time; and (ii) the solution of a mixed integer linear programming problem, formulated according to the graph variables defined in the first part, to route the team of UAVs over the area. The main contribution of the proposed methodology, when compared with the traditional vehicle routing problem's (VRP) solutions, is the fact that our method solves some practical problems only encountered during the execution of the task with actual UAVs. In this line, one of the main contributions of the paper is that the number of UAVs used to cover the area is automatically selected by solving the optimization problem. The number of UAVs is influenced by the vehicles' maximum flight time and by the setup time, which is the time needed to prepare and launch a UAV. To illustrate the methodology, the paper presents experimental results obtained with two hand-launched, fixed-wing UAVs.
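
    The abstract does not reproduce the mixed integer program, so the sketch below only illustrates the trade-off it encodes: each additional UAV shortens the flight time per vehicle but adds setup time, subject to the endurance limit. The even-split mission-time model and all numbers are assumptions.

    ```python
    import math

    def choose_num_uavs(total_flight_time, setup_time, max_flight_time, max_uavs=10):
        """Pick the fleet size minimising a simple mission-time model (not the paper's MILP).

        Mission time ~ k * setup_time + total_flight_time / k, assuming the coverage
        routes split evenly and UAVs are launched one after another.
        """
        best_k, best_t = None, math.inf
        for k in range(1, max_uavs + 1):
            per_uav = total_flight_time / k
            if per_uav > max_flight_time:
                continue                          # endurance constraint violated
            t = k * setup_time + per_uav
            if t < best_t:
                best_k, best_t = k, t
        return best_k, best_t

    print(choose_num_uavs(total_flight_time=90.0, setup_time=8.0, max_flight_time=40.0))
    ```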

  1. Multi-sensor field trials for detection and tracking of multiple small unmanned aerial vehicles flying at low altitude

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Hengy, Sebastien; Hommes, Alexander; Kloeppel, Frank; Shoykhetbrod, Alex; Geibig, Thomas; Johannes, Winfried; Naz, Pierre; Christnacher, Frank

    2017-05-01

    Small unmanned aerial vehicles (UAVs) flying at low altitude are becoming more and more of a serious threat in civilian and military scenarios. In the recent past, numerous incidents have been reported in which small UAVs were flying in security areas, leading to serious danger to public safety or privacy. The detection and tracking of small UAVs is a widely discussed topic. Detection is especially challenging for small UAVs flying at low altitude in urban environments or near background structures, and when multiple UAVs must be detected at the same time. Field trials were carried out to investigate the detection and tracking of multiple UAVs flying at low altitude with state-of-the-art detection technologies. Here, we present results achieved using a heterogeneous sensor network consisting of acoustic antennas, small frequency-modulated continuous-wave (FMCW) RADAR systems and optical sensors. While acoustics, RADAR and LiDAR were applied to monitor a wide azimuthal area (360°) and to simultaneously track multiple UAVs, optical sensors were used for sequential identification with a very narrow field of view.

  2. Scientific Visualization of Radio Astronomy Data using Gesture Interaction

    NASA Astrophysics Data System (ADS)

    Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.

    2015-09-01

    MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well-explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration, which would in turn enable certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (the Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A heuristic evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations, although it is limited by users needing to remember the required gestures. In comparison, touch-based gesture navigation is typically more familiar to users, as these gestures were engineered from standard multi-touch actions. Future work will include a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.

  3. Construction and Testing of Broadband High Impedance Ground Planes (HIGPS) for Surface Mount Antennas

    DTIC Science & Technology

    2008-03-01

    Conductor; PMC: Perfect Magnetic Conductor; RF: Radio Frequency; RH: Right-Handed; SNG: Single Negative; TACAN: Tactical Air Navigation; UAV: Unmanned Aerial...negative (SNG) and double-negative (DNG) materials, and their fascinating properties have driven the interest in MTMs (Engheta and Ziolkowski, 2006

  4. An UAV scheduling and planning method for post-disaster survey

    NASA Astrophysics Data System (ADS)

    Li, G. Q.; Zhou, X. G.; Yin, J.; Xiao, Q. Y.

    2014-11-01

    Every year, extreme climate and special geological environments lead to frequent natural disasters, e.g., earthquakes and floods. These disasters often bring serious casualties and enormous economic losses. Post-disaster surveying is very important for disaster relief and assessment. Because Unmanned Aerial Vehicle (UAV) remote sensing offers high efficiency, high precision, high flexibility and low cost, it has been widely used in emergency surveying in recent years. Since the UAVs used in emergency surveying cannot simply stop and wait for a disaster to happen, they are usually scattered over many locations when a disaster occurs. In order to improve emergency surveying efficiency, the UAVs must be tracked and an emergency surveying task assigned to each selected UAV. Therefore, a UAV tracking and scheduling method for post-disaster surveys is presented in this paper. In this method, the Global Positioning System (GPS) and the GSM network are used to track the UAVs. An emergency-tracking UAV information database is built in advance by registration; the database includes at least the ID and the communication number of each UAV. When a catastrophe happens, the real-time locations of all UAVs in the database are first obtained using the emergency tracking method; then the travel time from each UAV to the disaster region is calculated, based on the UAVs' real-time locations and the road network, using a nearest-services analysis algorithm. The disaster region is subdivided into several emergency surveying regions based on the DEM, the area, and the population distribution map, and the emergency surveying regions are assigned to the appropriate UAVs according to a shortest-travel-time rule. The UAV tracking and scheduling prototype is implemented using SQL Server 2008, the ArcEngine 10.1 SDK, Visual Studio 2010 C#, Android, an SMS modem, and the Google Maps API.
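
    A minimal sketch of the assignment rule described above: once tracking returns each UAV's position, every surveying sub-region goes to an available UAV with the shortest estimated travel time; the road-network nearest-services analysis is replaced here by a precomputed travel-time table.

    ```python
    def assign_regions(travel_time, uav_ids, region_ids):
        """Greedy shortest-travel-time assignment of surveying regions to UAVs.

        travel_time[(uav, region)] : estimated travel time, e.g. from a road-network
        nearest-services analysis (here just a precomputed dictionary).
        """
        assignment, busy = {}, set()
        for region in region_ids:
            free = [u for u in uav_ids if u not in busy] or list(uav_ids)
            best = min(free, key=lambda u: travel_time[(u, region)])
            assignment[region] = best
            busy.add(best)
        return assignment

    times = {("UAV-1", "R1"): 25, ("UAV-2", "R1"): 40,
             ("UAV-1", "R2"): 55, ("UAV-2", "R2"): 30}
    print(assign_regions(times, ["UAV-1", "UAV-2"], ["R1", "R2"]))
    # {'R1': 'UAV-1', 'R2': 'UAV-2'}
    ```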

  5. Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.

    PubMed

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-03-19

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model that considers the wavelength of the light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results.
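
    The restoration step behind any transmission-map dehazer inverts the standard haze model I = J·t + A·(1 − t); the sketch below assumes the transmission map and airlight are already available from the wavelength-adaptive, class-dependent estimation, which is not shown.

    ```python
    import numpy as np

    def dehaze(image, transmission, airlight, t_min=0.1):
        """Invert the haze model I = J*t + A*(1 - t) for each pixel.

        image        : (H, W, 3) float array in [0, 1]
        transmission : (H, W) map from the context-adaptive estimation step
        airlight     : length-3 atmospheric light vector
        """
        t = np.clip(transmission, t_min, 1.0)[..., None]   # avoid division blow-up
        J = (image - airlight) / t + airlight
        return np.clip(J, 0.0, 1.0)
    ```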

  6. Using Multiattribute Utility Copulas in Support of UAV Search and Destroy Operations

    DTIC Science & Technology

    2012-03-01

    1, ..., n. (2.3) where a_i = a(1 − l_i) and b_i = 1 − a_i = a·l_i + b. This implies the same mathematical properties of a strictly increasing cumulative...and DTMC-defined target movement. Abdelhafiz et al. [6] present several instances of the multi-objective UAV mission planning problem where the

  7. Assessing very high resolution UAV imagery for monitoring forest health during a simulated disease outbreak

    NASA Astrophysics Data System (ADS)

    Dash, Jonathan P.; Watt, Michael S.; Pearse, Grant D.; Heaphy, Marie; Dungey, Heidi S.

    2017-09-01

    Research into remote sensing tools for monitoring physiological stress caused by biotic and abiotic factors is critical for maintaining healthy and highly-productive plantation forests. Significant research has focussed on assessing forest health using remotely sensed data from satellites and manned aircraft. Unmanned aerial vehicles (UAVs) may provide new tools for improved forest health monitoring by providing data with very high temporal and spatial resolutions. These platforms also pose unique challenges and methods for health assessments must be validated before use. In this research, we simulated a disease outbreak in mature Pinus radiata D. Don trees using targeted application of herbicide. The objective was to acquire a time-series simulated disease expression dataset to develop methods for monitoring physiological stress from a UAV platform. Time-series multi-spectral imagery was acquired using a UAV flown over a trial at regular intervals. Traditional field-based health assessments of crown health (density) and needle health (discolouration) were carried out simultaneously by experienced forest health experts. Our results showed that multi-spectral imagery collected from a UAV is useful for identifying physiological stress in mature plantation trees even during the early stages of tree stress. We found that physiological stress could be detected earliest in data from the red edge and near infra-red bands. In contrast to previous findings, red edge data did not offer earlier detection of physiological stress than the near infra-red data. A non-parametric approach was used to model physiological stress based on spectral indices and was found to provide good classification accuracy (weighted kappa = 0.694). This model can be used to map physiological stress based on high-resolution multi-spectral data.
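
    A small sketch of the band-ratio indices commonly derived from such red-edge/NIR multi-spectral imagery (NDVI and a red-edge analogue); the band values and the stress threshold are assumptions, not the study's fitted non-parametric model.

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalised difference vegetation index, computed per pixel."""
        return (nir - red) / (nir + red + 1e-9)

    def ndre(nir, red_edge):
        """Red-edge analogue of NDVI, often used for early stress detection."""
        return (nir - red_edge) / (nir + red_edge + 1e-9)

    # toy reflectance rasters; a drop in NDRE may flag physiologically stressed crowns
    nir = np.array([[0.45, 0.40], [0.20, 0.42]])
    re = np.array([[0.25, 0.24], [0.18, 0.26]])
    stressed = ndre(nir, re) < 0.2      # illustrative threshold only
    print(stressed)
    ```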

  8. High-Fidelity Computational Aerodynamics of Multi-Rotor Unmanned Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Ventura Diaz, Patricia; Yoon, Seokkwan

    2018-01-01

    High-fidelity Computational Fluid Dynamics (CFD) simulations have been carried out for several multi-rotor Unmanned Aerial Vehicles (UAVs). Three vehicles have been studied: the classic quadcopter DJI Phantom 3, an unconventional quadcopter specialized for forward flight, the SUI Endurance, and an innovative concept for Urban Air Mobility (UAM), the Elytron 4S UAV. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, and a hybrid turbulence model. The DJI Phantom 3 is simulated with different rotors and with both a simplified airframe and the real airframe including landing gear and a camera. The effects of weather are studied for the DJI Phantom 3 quadcopter in hover. The SUI Endurance original design is compared in forward flight to a new configuration conceived by the authors, the hybrid configuration, which gives a large improvement in forward thrust. The Elytron 4S UAV is simulated in helicopter mode and in airplane mode. Understanding the complex flows in multi-rotor vehicles will help design quieter, safer, and more efficient future drones and UAM vehicles.

  9. A Multi-Disciplinary Approach to Remote Sensing through Low-Cost UAVs.

    PubMed

    Calvario, Gabriela; Sierra, Basilio; Alarcón, Teresa E; Hernandez, Carmen; Dalmau, Oscar

    2017-06-16

    The use of Unmanned Aerial Vehicles (UAVs) based on remote sensing has generated low cost monitoring, since the data can be acquired quickly and easily. This paper reports the experience related to agave crop analysis with a low cost UAV. The data were processed by traditional photogrammetric flow and data extraction techniques were applied to extract new layers and separate the agave plants from weeds and other elements of the environment. Our proposal combines elements of photogrammetry, computer vision, data mining, geomatics and computer science. This fusion leads to very interesting results in agave control. This paper aims to demonstrate the potential of UAV monitoring in agave crops and the importance of information processing with reliable data flow.

  10. A Multi-Disciplinary Approach to Remote Sensing through Low-Cost UAVs

    PubMed Central

    Calvario, Gabriela; Sierra, Basilio; Alarcón, Teresa E.; Hernandez, Carmen; Dalmau, Oscar

    2017-01-01

    The use of Unmanned Aerial Vehicles (UAVs) based on remote sensing has generated low cost monitoring, since the data can be acquired quickly and easily. This paper reports the experience related to agave crop analysis with a low cost UAV. The data were processed by traditional photogrammetric flow and data extraction techniques were applied to extract new layers and separate the agave plants from weeds and other elements of the environment. Our proposal combines elements of photogrammetry, computer vision, data mining, geomatics and computer science. This fusion leads to very interesting results in agave control. This paper aims to demonstrate the potential of UAV monitoring in agave crops and the importance of information processing with reliable data flow. PMID:28621740

  11. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing.

    PubMed

    Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han

    2017-09-07

    Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in the UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%.
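
    The abstract explains that the working distance from the ultrasonic sensor is what allows pixel measurements to be converted to physical crack widths, but gives no formula. The following is a minimal pinhole-camera sketch of that conversion; the focal length and pixel pitch are hypothetical example values, and the paper's hybrid binarization step is not reproduced.

```python
def crack_width_mm(width_px, working_distance_mm,
                   focal_length_mm=16.0, pixel_pitch_mm=0.0039):
    """Pinhole model: one pixel covers (Z * pixel_pitch / f) mm on the surface."""
    mm_per_px = working_distance_mm * pixel_pitch_mm / focal_length_mm
    return width_px * mm_per_px

# e.g. a crack 3 pixels wide imaged from 500 mm away
print(round(crack_width_mm(3, 500.0), 3), "mm")
```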

  12. Imaging of earthquake faults using small UAVs as a pathfinder for air and space observations

    USGS Publications Warehouse

    Donnellan, Andrea; Green, Joseph; Ansar, Adnan; Aletky, Joseph; Glasscoe, Margaret; Ben-Zion, Yehuda; Arrowsmith, J. Ramón; DeLong, Stephen B.

    2017-01-01

    Large earthquakes cause billions of dollars in damage and extensive loss of life and property. Geodetic and topographic imaging provide measurements of transient and long-term crustal deformation needed to monitor fault zones and understand earthquakes. Earthquake-induced strain and rupture characteristics are expressed in topographic features imprinted on the landscapes of fault zones. Small UAVs provide an efficient and flexible means to collect multi-angle imagery to reconstruct fine scale fault zone topography and provide surrogate data to determine requirements for and to simulate future platforms for air- and space-based multi-angle imaging.

  13. Real-time Collision Avoidance and Path Optimizer for Semi-autonomous UAVs.

    NASA Astrophysics Data System (ADS)

    Hawary, A. F.; Razak, N. A.

    2018-05-01

    Whilst the UAV offers a potentially cheaper and more localized observation platform than current satellite or land-based approaches, it requires an advanced path planner to reveal its true potential, particularly in real-time missions. Manual control by a human operator has a limited line of sight and is prone to errors due to carelessness and fatigue. A good alternative is to equip the UAV with semi-autonomous capabilities that allow it to navigate a pre-planned route in real time. In this paper, we propose an easy and practical path optimizer based on the classical Travelling Salesman Problem, which adopts a brute-force search method to re-optimize the route in the event of collisions detected by a range-finder sensor. The former utilizes a Simple Genetic Algorithm and the latter a Nearest Neighbour algorithm; both are combined to optimize the route and avoid collisions at once. Although many researchers have proposed various path-planning algorithms, we find that they are difficult to integrate on a basic UAV model and often lack a real-time collision-detection optimizer. Therefore, we explore the practical benefit of this approach using on-board Arduino and Ardupilot controllers by manually emulating the motion of an actual UAV model prior to testing at the flying site. The results showed that the range-finder sensor provides real-time data to the algorithm to find a collision-free path and eventually optimize the route successfully.
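
    The record above combines a Genetic Algorithm route optimizer with a Nearest Neighbour re-planner triggered by the range-finder. As a hedged illustration of only the latter component, the sketch below builds a greedy nearest-neighbour tour over 2-D waypoints; the GA, the sensor loop and the collision logic are not reproduced.

```python
import math

def nearest_neighbour_tour(waypoints, start=0):
    """Greedy tour: repeatedly fly to the closest unvisited waypoint."""
    unvisited = set(range(len(waypoints))) - {start}
    tour, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(waypoints[current], waypoints[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return tour

print(nearest_neighbour_tour([(0, 0), (5, 1), (1, 4), (6, 5)]))
```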

  14. Multi-Agent Task Negotiation Among UAVs to Defend Against Swarm Attacks

    DTIC Science & Technology

    2012-03-01

    are based on economic models [39]. Auction methods of task coordination also attempt to deal with agents dealing with noisy, dynamic environments ... August 2006. [34] M. Alighanbari, "Robust and decentralized task assignment algorithms for UAVs," Ph.D. dissertation, Massachusetts Institute of Technology ... (table-of-contents fragment: Implicit Coordination; 2.4 Decentralized Algorithm B - Market-Based; 2.5 Decentralized ...)

  15. Technical Report: Unmanned Helicopter Solution for Survey-Grade Lidar and Hyperspectral Mapping

    NASA Astrophysics Data System (ADS)

    Kaňuk, Ján; Gallay, Michal; Eck, Christoph; Zgraggen, Carlo; Dvorný, Eduard

    2018-05-01

    The recent development of light-weight unmanned airborne vehicles (UAVs) and the miniaturization of sensors provide new possibilities for remote sensing and high-resolution mapping. Mini-UAV platforms are emerging, but powerful UAV platforms of higher payload capacity are required to carry the sensors for survey-grade mapping. In this paper, we demonstrate a technological solution and the application of two different payloads for highly accurate and detailed mapping. The unmanned airborne system (UAS) comprises a Scout B1-100 autonomously operating UAV helicopter powered by a gasoline two-stroke engine with a maximum take-off weight of 75 kg. The UAV can carry up to 18 kg of customized payload. Our technological solution comprises two types of payload, completely independent of the platform. The first payload contains a VUX-1 laser scanner (Riegl, Austria) and a Sony A6000 E-Mount photo camera. The second payload integrates a hyperspectral push-broom scanner, the AISA Kestrel 10 (Specim, Finland). The two payloads need to be alternated if mapping with both is required. Both payloads include an inertial navigation system xNAV550 (Oxford Technical Solutions Ltd., United Kingdom), a separate data link, and a power supply unit. This configuration achieved high accuracy in flight-line post-processing in two test missions, with standard deviations of 0.02 m (XY) and 0.025 m (Z). The intended application of the UAS was high-resolution mapping and monitoring of landscape dynamics (landslides, erosion, flooding, or crop growth). The legal regulations for such UAV applications in Switzerland and Slovakia are also discussed.

  16. Design and application of BIM based digital sand table for construction management

    NASA Astrophysics Data System (ADS)

    Fuquan, JI; Jianqiang, LI; Weijia, LIU

    2018-05-01

    This paper explores the design and application of a BIM-based digital sand table for construction management. Considering the demands and features of construction management plans for bridge and tunnel engineering, the key functional features of the digital sand table should include three-dimensional GIS, model navigation, virtual simulation, information layers, and data exchange. These involve the technologies of 3D visualization and 4D virtual simulation in BIM, the breakdown structure of the BIM model and project data, multi-dimensional information layers, and multi-source data acquisition and interaction. Overall, the digital sand table is a visual and virtual integrated engineering information terminal under a unified data standard system. Its applications include visual construction schemes, virtual construction schedules, and construction monitoring. Finally, the applicability of several basic software packages to the digital sand table is analyzed.

  17. Using probabilistic model as feature descriptor on a smartphone device for autonomous navigation of unmanned ground vehicles

    NASA Astrophysics Data System (ADS)

    Desai, Alok; Lee, Dah-Jye

    2013-12-01

    There has been significant research on the development of feature descriptors in the past few years, but most of it does not emphasize real-time applications. This paper presents the development of an affine-invariant feature descriptor for low-resource applications such as UAVs and UGVs that are equipped with an embedded system with a small microprocessor, a field programmable gate array (FPGA), or a smartphone device. UAVs and UGVs have proven suitable for many promising applications such as unknown environment exploration and search and rescue operations. These applications require on-board image processing for obstacle detection, avoidance, and navigation. All these real-time vision applications require a camera to grab images and match features using a feature descriptor. A good feature descriptor will uniquely describe a feature point, thus allowing it to be correctly identified and matched with its corresponding feature point in another image. A few feature description algorithms are available for resource-limited systems, but they either require too many of the device's resources or simplify the algorithm too much, resulting in reduced performance. This research is aimed at meeting the needs of these systems without sacrificing accuracy. This paper introduces a new feature descriptor called PRObabilistic model (PRO) for UGV navigation applications. It is a compact and efficient binary descriptor that is hardware-friendly and easy to implement.
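
    The abstract positions PRO as a compact binary descriptor for resource-limited hardware but does not specify its bit layout. The sketch below shows only the generic operation such descriptors are built for, brute-force Hamming-distance matching, with descriptors represented as Python integers; the bit width, threshold and values are illustrative, not the paper's.

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits (popcount of XOR)."""
    return bin(a ^ b).count("1")

def match(descs_a, descs_b, max_dist=40):
    """Brute-force nearest-neighbour matching of binary descriptors stored as ints."""
    matches = []
    for i, da in enumerate(descs_a):
        j, d = min(((j, hamming(da, db)) for j, db in enumerate(descs_b)),
                   key=lambda t: t[1])
        if d <= max_dist:
            matches.append((i, j, d))
    return matches

print(match([0b101100, 0b111000], [0b101110, 0b000111]))
```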

  18. Use of the RoboFlag synthetic task environment to investigate workload and stress responses in UAV operation.

    PubMed

    Guznov, Svyatoslav; Matthews, Gerald; Funke, Gregory; Dukes, Allen

    2011-09-01

    Use of unmanned aerial vehicles (UAVs) is an increasingly important element of military missions. However, controlling UAVs may impose high stress and workload on the operator. This study evaluated the use of the RoboFlag simulated environment as a means for profiling multiple dimensions of stress and workload response to a task requiring control of multiple vehicles (robots). It tested the effects of two workload manipulations, environmental uncertainty (i.e., UAV's visual view area) and maneuverability, in 64 participants. The findings confirmed that the task produced substantial workload and elevated distress. Dissociations between the stress and performance effects of the manipulations confirmed the utility of a multivariate approach to assessment. Contrary to expectations, distress and some aspects of workload were highest in the low-uncertainty condition, suggesting that overload of information may be an issue for UAV interface designers. The strengths and limitations of RoboFlag as a methodology for investigating stress and workload responses are discussed.

  19. Wavelength-Adaptive Dehazing Using Histogram Merging-Based Classification for UAV Images

    PubMed Central

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-01-01

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results. PMID:25808767

  20. Computational analysis of unmanned aerial vehicle (UAV)

    NASA Astrophysics Data System (ADS)

    Abudarag, Sakhr; Yagoub, Rashid; Elfatih, Hassan; Filipovic, Zoran

    2017-01-01

    A computational analysis has been performed to verify the aerodynamic properties of an Unmanned Aerial Vehicle (UAV). The UAV-SUST has been designed and fabricated at the Department of Aeronautical Engineering at Sudan University of Science and Technology in order to meet the specifications required for surveillance and reconnaissance missions. It is classified as a medium-range and medium-endurance UAV. A commercial CFD solver is used to simulate the steady and unsteady aerodynamic characteristics of the entire UAV. In addition to the Lift Coefficient (CL), Drag Coefficient (CD), Pitching Moment Coefficient (CM) and Yawing Moment Coefficient (CN), the pressure and velocity contours are illustrated. The aerodynamic parameters showed very good agreement with the design considerations at angles of attack ranging from zero to 26 degrees. Moreover, the visualization of the velocity field and static pressure contours indicated satisfactory agreement with the proposed design. Turbulence is predicted with the k-ω SST turbulence model within the computational fluid dynamics code.

  1. Design and implementation of atmospheric multi-parameter sensor for UAVs

    NASA Astrophysics Data System (ADS)

    Yu, F.; Zhao, Y.; Chen, G.; Liu, Y.; Han, Y.

    2017-12-01

    With the rapid development of industry and the increase of cars in developing countries, air pollutants have caused a series of environmental issues such as haze and smog. However, air pollution is a process of surface-to-air mass exchange, and various kinds of atmospheric factors, such as temperature and humidity, have a close association with aerosol concentration. Vertical distributions of aerosol in a region provide an important clue to reveal the exchange mechanism between the atmospheric boundary layer and the troposphere. Among the various kinds of flying platforms, unmanned aerial vehicles (UAVs) show more advantages in vertical measurements of aerosol owing to their flexibility and low cost. However, only a few sensors can be mounted on UAVs because of the limited size and power requirements. Here, a light-weight, low-power atmospheric multi-parameter sensor (AMPS) is proposed that can be mounted on several kinds of UAV platforms. The AMPS integrates multiple sensors, namely a laser aerosol particle sensor, a temperature probe, a humidity probe and a pressure probe, in order to simultaneously sample the vertical distribution characteristics of aerosol particle concentration, temperature, relative humidity and atmospheric pressure. The data from the sensors are synchronized by a proposed communication mechanism based on GPS. Several kinds of housing are designed to accommodate the different payload requirements of UAVs in size and weight. The experiments were carried out with the AMPS mounted on three kinds of flying platforms. The results show that the power consumption is less than 1.3 W, with relatively high accuracy in temperature (±0.1 °C), relative humidity (±0.8% RH), PM2.5 (<20%) and PM10 (<20%). Vertical profiles of PM2.5 and PM10 concentrations were observed simultaneously by the AMPS three times every day over five days. The results revealed a significant correlation between the aerosol particle concentration and atmospheric parameters. With its low cost and flexibility, the AMPS for UAVs provides an effective way to explore the properties of aerosol vertical distribution and to monitor air pollutants flexibly.
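
    The record above states only that the sensor streams are synchronized by a proposed communication mechanism based on GPS, without describing it. The snippet below is therefore just a hypothetical illustration of one simple scheme, snapping each locally timestamped sample to the nearest GPS epoch; it is not the authors' mechanism.

```python
import bisect

def align_to_gps(gps_times, samples):
    """gps_times: sorted GPS epochs (s); samples: list of (sensor_time_s, value).
    Assign each sample to the nearest GPS timestamp."""
    aligned = []
    for t, value in samples:
        i = bisect.bisect_left(gps_times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(gps_times)]
        j = min(candidates, key=lambda k: abs(gps_times[k] - t))
        aligned.append((gps_times[j], value))
    return aligned

print(align_to_gps([0.0, 1.0, 2.0], [(0.12, 35.1), (1.49, 36.0), (1.51, 36.2)]))
```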

  2. UAV telemetry communications using ZigBee protocol

    NASA Astrophysics Data System (ADS)

    Nasution, T. H.; Siregar, I.; Yasir, M.

    2017-10-01

    Wireless communication has been widely used in various fields or disciplines such as agriculture, health, engineering, military, and aerospace so as to support the work in those fields. The communication technology is typically used for controlling devices and for data monitoring. One development of wireless communication is the widely used telemetry system, which is used to reach areas that cannot be reached by humans using a UAV (Unmanned Aerial Vehicle) or unmanned aircraft. In this paper we discuss the design of a telemetry system in a UAV using the ZigBee protocol. Tests showed that the system works well, updating the visualization display without pauses at 20 data packets per second with a maximum data length of 120 characters.

  3. Atmospheric radiation measurement unmanned aerospace vehicle (ARM-UAV) program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolton, W.R.

    1996-11-01

    ARM-UAV is part of the multi-agency U.S. Global Change Research Program and is addressing the largest source of uncertainty in predicting climatic response: the interaction of clouds and the sun's energy in the Earth's atmosphere. An important aspect of the program is the use of unmanned aerospace vehicles (UAVs) as the primary airborne platform. The ARM-UAV Program has completed two major flight series: The first series conducted in April, 1994, using an existing UAV (the General Atomics Gnat 750) consisted of eight highly successful flights at the DOE climate site in Oklahoma. The second series conducted in September/October, 1995, using two piloted aircraft (Egrett and Twin Otter), featured simultaneous measurements above and below clouds and in clear sky. Additional flight series are planned to continue study of the cloudy and clear sky energy budget in the Spring and Fall of 1996 over the DOE climate site in Oklahoma. 3 refs., 4 figs., 1 tab.

  4. Correlated-Data Fusion and Cooperative Aiding in GNSS-Stressed or Denied Environments

    NASA Astrophysics Data System (ADS)

    Mokhtarzadeh, Hamid

    A growing number of applications require continuous and reliable estimates of position, velocity, and orientation. Price requirements alone disqualify most traditional navigation or tactical-grade sensors and thus navigation systems based on automotive or consumer-grade sensors aided by Global Navigation Satellite Systems (GNSS), like the Global Positioning System (GPS), have gained popularity. The heavy reliance on GPS in these navigation systems is a point of concern and has created interest in alternative or back-up navigation systems to enable robust navigation through GPS-denied or stressed environments. This work takes advantage of current trends for increased sensing capabilities coupled with multilayer connectivity to propose a cooperative navigation-based aiding system as a means to limit dead reckoning error growth in the absence of absolute measurements like GPS. Each vehicle carries a dead reckoning navigation system which is aided by relative measurements, like range, to neighboring vehicles together with information sharing. Detailed architectures and concepts of operation are described for three specific applications: commercial aviation, Unmanned Aerial Vehicles (UAVs), and automotive applications. Both centralized and decentralized implementations of cooperative navigation-based aiding systems are described. The centralized system is based on a single Extended Kalman Filter (EKF). A decentralized implementation suited for applications with very limited communication bandwidth is discussed in detail. The presence of unknown correlation between the a priori state and measurement errors makes the standard Kalman filter unsuitable. Two existing estimators for handling this unknown correlation are Covariance Intersection (CI) and Bounded Covariance Inflation (BCInf) filters. A CI-based decentralized estimator suitable for decentralized cooperative navigation implementation is proposed. A unified derivation is presented for the Kalman filter, CI filter, and BCInf filter measurement update equations. Furthermore, characteristics important to the proper implementation of CI and BCInf in practice are discussed. A new covariance normalization step is proposed as necessary to properly apply CI or BCInf. Lastly, both centralized and decentralized implementations of cooperative aiding are analyzed and evaluated using experimental data in the three applications. In the commercial aviation study aircraft are simulated to use their Automatic Dependent Surveillance - Broadcast (ADS-B) and Traffic Collision Avoidance System (TCAS) systems to cooperatively aid their on board INS during a 60 min GPS outage in the national airspace. An availability study of cooperative navigation as proposed in this work around representative United States airports is performed. Availabilities between 70-100% were common at major airports like LGA and MSP in a 30 nmi radius around the airport during morning to evening hours. A GPS-denied navigation system for small UAVs based on cooperative information sharing is described. Experimentally collected flight data from 7 small UAV flights are played-back to evaluate the performance of the navigation system. The results show that the most effective of the architectures can lead to 5+ minutes of navigation without GPS maintaining position errors less than 200 m (1-sigma). The automotive case study considers 15 minutes of automotive traffic (2,000 + vehicles) driving through a half-mile stretch of highway without access to GPS. 
Automotive radar coupled with the Dedicated Short Range Communication (DSRC) protocol is used to implement cooperative aiding of a low-cost 2-D INS on board each vehicle. The centralized system achieves an order-of-magnitude reduction in uncertainty by aggressively aiding the INS on board each vehicle. The proposed CI-based decentralized estimator is demonstrated to be conservative and to maintain consistency. A quantitative analysis of bandwidth requirements shows that the proposed decentralized estimator falls comfortably within modern connectivity capabilities. A naive implementation of the high-performance centralized estimator is also achievable, but it was demonstrated to be burdensome, nearing the bandwidth limits.
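
    The thesis summarized above relies on Covariance Intersection (CI) to fuse estimates whose cross-correlation is unknown. As a minimal sketch of that standard update (not of the thesis' full decentralized filter), the code below fuses two Gaussian estimates using P^-1 = w*P1^-1 + (1-w)*P2^-1 and picks w by a simple trace-minimizing grid search, one common but not unique criterion.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two estimates with unknown cross-correlation:
       P^-1 = w*P1^-1 + (1-w)*P2^-1,  x = P*(w*P1^-1*x1 + (1-w)*P2^-1*x2)."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    best_x, best_P = None, None
    for w in np.linspace(0.0, 1.0, n_grid):
        P = np.linalg.inv(w * P1i + (1.0 - w) * P2i)
        if best_P is None or np.trace(P) < np.trace(best_P):
            best_x = P @ (w * P1i @ x1 + (1.0 - w) * P2i @ x2)
            best_P = P
    return best_x, best_P

x, P = covariance_intersection(np.array([0.0, 0.0]), np.eye(2),
                               np.array([1.0, 0.5]), 2.0 * np.eye(2))
```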

  5. Supervisory Control of Unmanned Vehicles

    DTIC Science & Technology

    2010-04-01

    than-ideal video quality (Chen et al., 2007; Chen and Thropp, 2007). Simpson et al. (2004) proposed using a spatial audio display to augment UAV ... operator's SA and discussed its utility for each of the three SA levels. They recommended that both visual and spatial audio information should be ... presented concurrently. They also suggested that presenting the audio information spatially may enhance UAV operator's sense of presence (i.e

  6. Cross Validation on the Equality of Uav-Based and Contour-Based Dems

    NASA Astrophysics Data System (ADS)

    Ma, R.; Xu, Z.; Wu, L.; Liu, S.

    2018-04-01

    Unmanned Aerial Vehicles (UAVs) have been widely used for Digital Elevation Model (DEM) generation in geographic applications. This paper proposes a novel framework for generating DEMs from UAV images. It starts with the generation of point clouds by image matching, where the flight control data are used as a reference for searching for the corresponding images, leading to significant time savings. In addition, a set of ground control points (GCPs) obtained from field surveying is used to transform the point clouds to the user's coordinate system. Following that, we use a multi-feature based supervised classification method for discriminating non-ground points from ground ones. In the end, we generate the DEM by constructing triangular irregular networks and rasterizing. The experiments are conducted in the east of Jilin province in China, which has suffered from soil erosion for several years. The quality of the UAV-based DEM (UAV-DEM) is compared with that generated from contour interpolation (Contour-DEM). The comparison shows the higher resolution, as well as higher accuracy, of UAV-DEMs, which contain more geographic information. In addition, the RMSE errors of the UAV-DEMs generated from point clouds with and without GCPs are ±0.5 m and ±20 m, respectively.
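
    The framework above generates the DEM from classified ground points by building a triangular irregular network and rasterizing it; the abstract gives no implementation detail. The sketch below reproduces only that last step in a generic way, using SciPy's Delaunay-based linear interpolator as a stand-in for the TIN; the ground/non-ground classifier and the GCP transformation are not reproduced.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def rasterize_dem(ground_xyz, cell_size=0.5):
    """Build a regular DEM grid from ground points via Delaunay (TIN) interpolation."""
    xyz = np.asarray(ground_xyz, dtype=float)
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    xi = np.arange(x.min(), x.max(), cell_size)
    yi = np.arange(y.min(), y.max(), cell_size)
    gx, gy = np.meshgrid(xi, yi)
    interp = LinearNDInterpolator(np.c_[x, y], z)  # triangulates, then interpolates
    return interp(gx, gy)                          # NaN outside the convex hull
```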

  7. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing

    PubMed Central

    Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu

    2017-01-01

    Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in the UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%. PMID:28880254

  8. Multi-Agent Cooperative Target Search

    PubMed Central

    Hu, Jinwen; Xie, Lihua; Xu, Jun; Xu, Zhao

    2014-01-01

    This paper addresses a vision-based cooperative search for multiple mobile ground targets by a group of unmanned aerial vehicles (UAVs) with limited sensing and communication capabilities. The airborne camera on each UAV has a limited field of view and its target discriminability varies as a function of altitude. First, by dividing the whole surveillance region into cells, a probability map can be formed for each UAV indicating the probability of target existence within each cell. Then, we propose a distributed probability map updating model which includes the fusion of measurement information, information sharing among neighboring agents, information decay and transmission due to environmental changes such as the target movement. Furthermore, we formulate the target search problem as a multi-agent cooperative coverage control problem by optimizing the collective coverage area and the detection performance. The proposed map updating model and the cooperative control scheme are distributed, i.e., assuming that each agent only communicates with its neighbors within its communication range. Finally, the effectiveness of the proposed algorithms is illustrated by simulation. PMID:24865884
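
    The abstract describes per-cell probability maps that are updated with sensor measurements, shared information and decay, without giving the update equations. The sketch below shows only the single-cell Bayesian measurement update that such maps typically use; the detection and false-alarm rates are hypothetical, and the fusion, sharing and decay terms of the paper are omitted.

```python
def update_cell(prior, detected, p_d=0.85, p_fa=0.05):
    """Bayes update of P(target in cell) from one look at the cell.

    p_d  : probability of detection given a target is present
    p_fa : probability of a false alarm given no target is present
    """
    if detected:
        num = p_d * prior
        den = p_d * prior + p_fa * (1.0 - prior)
    else:
        num = (1.0 - p_d) * prior
        den = (1.0 - p_d) * prior + (1.0 - p_fa) * (1.0 - prior)
    return num / den

p = 0.2
for observation in (True, False, True):
    p = update_cell(p, observation)
print(round(p, 3))
```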

  9. Mini-Uav LIDAR for Power Line Inspection

    NASA Astrophysics Data System (ADS)

    Teng, G. E.; Zhou, M.; Li, C. R.; Wu, H. H.; Li, W.; Meng, F. R.; Zhou, C. C.; Ma, L.

    2017-09-01

    Light detection and ranging (LIDAR) systems based on unmanned aerial vehicles (UAVs) have recently advanced rapidly, and portable, flexible mini-UAV-borne laser scanners have become a hot research field, especially for complex terrain surveys in mountains and other areas. This study proposes a power line inspection solution based on the mini-UAV-borne LIDAR system AOEagle, developed by the Academy of Opto-Electronics, Chinese Academy of Sciences, which is mounted on a multi-rotor unmanned aerial vehicle for complex terrain surveys and validated with real tests. Furthermore, the point cloud data were explored to validate their applicability for power line inspection, in terms of corridor and power-line point clouds, deformation detection of power towers, etc. The feasibility and advantages of AOEagle have been demonstrated by promising results based on real measured data in the field of power line inspection.

  10. Development of an Unmanned Aircraft System and Cyberinfrastructure for Environmental Science Research

    NASA Astrophysics Data System (ADS)

    Brady, J. J.; Tweedie, C. E.; Escapita, I. J.

    2009-12-01

    There is a fundamental need to improve capacities for monitoring environmental change using remote sensing technologies. Recently, researchers have begun using Unmanned Aerial Vehicles (UAVs) to expand and improve upon remote sensing capabilities. Limitations to most non-military and relatively small-scale Unmanned Aircraft Systems (UASs) include a need to develop more reliable communications between ground and aircraft, tools to optimize flight control, real time data processing, and visually ascertaining the quantity of data collected while in air. Here we present a prototype software system that has enhanced communication between ground and the vehicle, can synthesize near real time data acquired from sensors on board, can log operation data during flights, and can visually demonstrate the amount and quality of data for a sampling area. This software has the capacity to greatly improve the utilization of UAS in the environmental sciences. The software system is being designed for use on a paraglider UAV that has a suite of sensors suitable for characterizing the footprints of eddy covariance towers situated in the Chihuahuan Desert and in the Arctic. Sensors on board relay operational flight data (airspeed, ground speed, latitude, longitude, pitch, yaw, roll, acceleration, and video) as well as a suite of customized sensors. Additional sensors can be added to an on board laptop or a CR1000 data logger thereby allowing data from these sensors to be visualized in the prototype software. This poster will describe the development, use and customization of our UAS and multimedia will be available during AGU to illustrate the system in use. (Figure captions: UAV on workbench in the lab; UAV in flight.)

  11. Towards a New Architecture for Autonomous Data Collection

    NASA Astrophysics Data System (ADS)

    Tanzi, T. J.; Roudier, Y.; Apvrille, L.

    2015-08-01

    A new generation of UAVs is coming that will help improve the situational awareness and assessment necessary to ensure quality data collection, especially in difficult conditions like natural disasters. Operators should be relieved from time-consuming data collection tasks as much as possible and at the same time, UAVs should assist data collection operations through a more insightful and automated guidance thanks to advanced sensing capabilities. In order to achieve this vision, two challenges must be addressed though. The first one is to achieve a sufficient autonomy, both in terms of navigation and of interpretation of the data sensed. The second one relates to the reliability of the UAV with respect to accidental (safety) or even malicious (security) risks. This however requires the design and development of new embedded architectures for drones to be more autonomous, while mitigating the harm they may potentially cause. We claim that the increased complexity and flexibility of such platforms requires resorting to modelling, simulation, or formal verification techniques in order to validate such critical aspects of the platform. This paper first discusses the potential and challenges faced by autonomous UAVs for data acquisition. The design of a flexible and adaptable embedded UAV architecture is then addressed. Finally, the need for validating the properties of the platform is discussed. Our approach is sketched and illustrated with the example of a lightweight drone performing 3D reconstructions out of the combination of 2D image acquisition and a specific motion control.

  12. Development of an Effective System Identification and Control Capability for Quad-copter UAVs

    NASA Astrophysics Data System (ADS)

    Wei, Wei

    In recent years, with the promise of extensive commercial applications, the popularity of Unmanned Aerial Vehicles (UAVs) has dramatically increased, as witnessed by publications and mushrooming research and educational programs. Over the years, multi-copter aircraft have been chosen as a viable configuration for small-scale VTOL UAVs in the form of quad-copters, hexa-copters and octo-copters. Compared to the single main rotor configuration such as the conventional helicopter, multi-copter airframes require a simpler feedback control system and fewer mechanical parts. These characteristics make these UAV platforms, such as the quad-copter which is the main emphasis in this dissertation, rugged and competitive candidates for many applications in both military and civil areas. Because of its configuration and relative size, the small-scale quad-copter UAV system is inherently very unstable. In order to develop an effective control system through simulation techniques, obtaining an accurate dynamic model of a given quad-copter is imperative. Moreover, given the anticipated stringent safety requirements, fault tolerance will be a crucial component of UAV certification. Accurate dynamic modeling and control of this class of UAV is an enabling technology and is imperative for future commercial applications. In this work, the dynamic model of a quad-copter system in hover flight was identified using frequency-domain system identification techniques. A new and unique experimental system, data acquisition and processing procedure was developed catering specifically to the class of electric powered multi-copter UAV systems. The Comprehensive Identification from FrEquency Responses (CIFER) software package, developed by the US Army Aviation Development Directorate (AFDD), was utilized along with flight tests to develop dynamic models of the quad-copter system. A new set of flight tests was conducted and the predictive capability of the dynamic models was successfully validated. A PID controller and two fuzzy logic controllers were developed based on the validated dynamic models. The controller performances were evaluated and compared in both the simulation environment and flight testing. Flight controllers were optimized to comply with the US Aeronautical Design Standard Performance Specification Handling Quality Requirements for Military Rotorcraft (ADS-33E-PRF). Results showed a substantial improvement for the developed controllers when compared to the nominal, hand-tuned controllers. The scope of this research involves experimental system hardware and software development, flight instrumentation, flight testing, dynamics modeling, system identification, dynamic model validation, control system modeling using PID and fuzzy logic, analysis of handling qualities, flight control optimization and validation. Both closed-loop and open-loop dynamics of the quad-copter system were analyzed. A cost-effective and high-quality system identification procedure was applied, and the results were proven in simulations as well as in flight tests.
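
    The dissertation above tunes PID and fuzzy logic controllers against the identified frequency-domain models; neither the gains nor the control structure are given in the abstract. The snippet below is only a generic discrete PID loop of the kind such a tuning process would start from, with purely illustrative gains.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# attitude-hold example with illustrative gains (not from the dissertation)
controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
command = controller.update(setpoint=0.0, measurement=0.05)
```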

  13. Methods and Apparatus for Autonomous Robotic Control

    NASA Technical Reports Server (NTRS)

    Gorshechnikov, Anatoly (Inventor); Livitz, Gennady (Inventor); Versace, Massimiliano (Inventor); Palma, Jesse (Inventor)

    2017-01-01

    Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated processing, with little interactions between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.

  14. Design of Smart Multi-Functional Integrated Aviation Photoelectric Payload

    NASA Astrophysics Data System (ADS)

    Zhang, X.

    2018-04-01

    To coordinate with small UAVs on reconnaissance missions, we have developed a smart multi-functional integrated aviation photoelectric payload. The payload weighs only 1 kg and has a two-axis stabilized platform with a visible-light task payload, an infrared task payload, laser pointers and a video tracker. The photoelectric payload can complete reconnaissance tasks above the target area (in both the visible and infrared bands). Because it is light-weight, small, full-featured and highly integrated, the constraints on the UAV platform carrying the payload are greatly reduced, which makes the payload suitable for a wider range of applications. Users of this smart multi-functional integrated aviation photoelectric payload can therefore better pinpoint ground targets, calibrate artillery, assess strike damage, and support customs officials and other tasks.

  15. Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs.

    PubMed

    Shen, Jieliang; Su, Yan; Liang, Qing; Zhu, Xinhua

    2018-01-13

    The establishment of the Aircraft Dynamic Model (ADM) constitutes the prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model cannot be readily obtained, especially for small-scale fixed-wing UAVs. In this paper, a procedure for computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are first calculated through a semi-empirical method based on aerodynamics, rather than wind tunnel tests or fluid dynamics software analysis. Secondly, the residuals of each derivative are proposed to be identified or estimated further via an Extended Kalman Filter (EKF), with observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. For a small-scale propeller-driven fixed-wing aircraft, the airborne sensors are chosen and the models of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results confirm the soundness of the semi-empirical method and show the improved accuracy of the ADM after compensation of the parameters.
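
    The paper estimates residuals of semi-empirically computed derivatives with an EKF driven by attitude and velocity observations; the full filter is well beyond the abstract. As a deliberately simplified stand-in, the sketch below estimates a single scalar residual modelled as a random walk with a one-dimensional (linear) Kalman filter; the regressor, noise levels and "true" value are synthetic.

```python
import numpy as np

def estimate_residual(regressors, measurements, q=1e-4, r=2.5e-3):
    """1-D Kalman filter for an unknown residual theta (random-walk model),
    observed through y_k = h_k * theta + noise (r is the noise variance)."""
    theta, P = 0.0, 1.0
    for h, y in zip(regressors, measurements):
        P += q                           # predict: random-walk process noise
        K = P * h / (h * h * P + r)      # Kalman gain
        theta += K * (y - h * theta)     # measurement update
        P *= (1.0 - K * h)
    return theta

# synthetic check: true residual 0.3 observed through a slowly varying regressor
h = np.linspace(0.5, 1.5, 200)
y = 0.3 * h + 0.05 * np.random.randn(200)
print(round(estimate_residual(h, y), 3))
```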

  16. Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs

    PubMed Central

    Shen, Jieliang; Su, Yan; Liang, Qing; Zhu, Xinhua

    2018-01-01

    The establishment of the Aircraft Dynamic Model (ADM) constitutes the prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model cannot be readily obtained, especially for small-scale fixed-wing UAVs. In this paper, a procedure for computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are first calculated through a semi-empirical method based on aerodynamics, rather than wind tunnel tests or fluid dynamics software analysis. Secondly, the residuals of each derivative are proposed to be identified or estimated further via an Extended Kalman Filter (EKF), with observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. For a small-scale propeller-driven fixed-wing aircraft, the airborne sensors are chosen and the models of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results confirm the soundness of the semi-empirical method and show the improved accuracy of the ADM after compensation of the parameters. PMID:29342856

  17. DDDAMS-based Urban Surveillance and Crowd Control via UAVs and UGVs

    DTIC Science & Technology

    2015-12-04

    for crowd dynamics modeling by incorporating multi-resolution data, where a grid-based method is used to model crowd motion with UAVs' low-resolution ... information and more computationally intensive (and time-consuming). Given that the deployment of fidelity selection results in simulation faces computational ... (Table 1: Parameters for UAV and UGV for their detection.)

  18. A new stratospheric sounding platform based on unmanned aerial vehicle (UAV) droppable from meteorological balloon

    NASA Astrophysics Data System (ADS)

    Efremov, Denis; Khaykin, Sergey; Lykov, Alexey; Berezhko, Yaroslav; Lunin, Aleksey

    High-resolution measurements of climate-relevant trace gases and aerosols in the upper troposphere and stratosphere (UTS) have been and remain technically challenging. The high cost of measurements onboard airborne platforms or heavy stratospheric balloons results in a lack of accurate information on the vertical distribution of atmospheric constituents. Whereas light-weight instruments carried by meteorological balloons are becoming progressively available, their usage is constrained by the cost of the equipment or the recovery operations. The evolving need for cost-efficient observations for UTS process studies has led to the development of small airborne platforms, unmanned aerial vehicles (UAVs), capable of carrying small sensors for in-situ measurements. We present a new UAV-based stratospheric sounding platform capable of carrying a scientific payload of up to 2 kg. The airborne platform comprises a latex meteorological balloon and a detachable flying-wing UAV with an internal measurement controller. The UAV is launched on a balloon to stratospheric altitudes up to 20 km, where it can be automatically released by the autopilot or by a remote command sent from ground control. Having been released from the balloon, the UAV glides down and returns to the launch position. The autopilot, using a 3-axis gyro, accelerometer, barometer, compass and GPS navigation, provides flight stabilization and an optimal return trajectory. Backup manual control is provided for emergencies. During the flight the onboard measurement controller stores the data in internal memory and transmits current flight parameters to the ground station via telemetry. Precise operation of the flight control systems ensures safe landing at the launch point. A series of field tests of the detachable stratospheric UAV has been conducted. The scientific payload included the following instruments involved in different flights: a) stratospheric Lyman-alpha hygrometer (FLASH); b) backscatter sonde; c) electrochemical ozone sonde; d) optical CO2 sensor; e) radioactivity sensor; f) solar radiation sensor. In addition, each payload included a temperature sensor, a barometric sensor and a GPS receiver. Design features of the measurement systems onboard the UAV and flight results are presented. Possible applications for atmospheric studies and for validation of remote ground-based and space-borne observations are discussed.

  19. Advanced Doppler radar physiological sensing technique for drone detection

    NASA Astrophysics Data System (ADS)

    Yoon, Ji Hwan; Xu, Hao; Garcia Carrillo, Luis R.

    2017-05-01

    A 24 GHz medium-range human detecting sensor, using the Doppler Radar Physiological Sensing (DRPS) technique, which can also detect unmanned aerial vehicles (UAVs or drones), is currently under development for potential rescue and anti-drone applications. DRPS systems are specifically designed to remotely monitor small movements of non-metallic human tissues such as cardiopulmonary activity and respiration. Once optimized, the unique capabilities of DRPS could be used to detect UAVs. Initial measurements have shown that DRPS technology is able to detect moving and stationary humans, as well as largely non-metallic multi-rotor drone helicopters. Further data processing will incorporate pattern recognition to detect multiple signatures (motor vibration and hovering patterns) of UAVs.

  20. Improved estimation of leaf area index and leaf chlorophyll content of a potato crop using multi-angle spectral data - potential of unmanned aerial vehicle imagery

    NASA Astrophysics Data System (ADS)

    Roosjen, Peter P. J.; Brede, Benjamin; Suomalainen, Juha M.; Bartholomeus, Harm M.; Kooistra, Lammert; Clevers, Jan G. P. W.

    2018-04-01

    In addition to single-angle reflectance data, multi-angular observations can be used as an additional information source for the retrieval of properties of an observed target surface. In this paper, we studied the potential of multi-angular reflectance data for the improvement of leaf area index (LAI) and leaf chlorophyll content (LCC) estimation by numerical inversion of the PROSAIL model. The potential for improvement of LAI and LCC was evaluated for both measured data and simulated data. The measured data was collected on 19 July 2016 by a frame-camera mounted on an unmanned aerial vehicle (UAV) over a potato field, where eight experimental plots of 30 × 30 m were designed with different fertilization levels. Dozens of viewing angles, covering the hemisphere up to around 30° from nadir, were obtained by a large forward and sideways overlap of collected images. Simultaneously to the UAV flight, in situ measurements of LAI and LCC were performed. Inversion of the PROSAIL model was done based on nadir data and based on multi-angular data collected by the UAV. Inversion based on the multi-angular data performed slightly better than inversion based on nadir data, indicated by the decrease in RMSE from 0.70 to 0.65 m2/m2 for the estimation of LAI, and from 17.35 to 17.29 μg/cm2 for the estimation of LCC, when nadir data were used and when multi-angular data were used, respectively. In addition to inversions based on measured data, we simulated several datasets at different multi-angular configurations and compared the accuracy of the inversions of these datasets with the inversion based on data simulated at nadir position. In general, the results based on simulated (synthetic) data indicated that when more viewing angles, more well distributed viewing angles, and viewing angles up to larger zenith angles were available for inversion, the most accurate estimations were obtained. Interestingly, when using spectra simulated at multi-angular sampling configurations as were captured by the UAV platform (view zenith angles up to 30°), already a huge improvement could be obtained when compared to solely using spectra simulated at nadir position. The results of this study show that the estimation of LAI and LCC by numerical inversion of the PROSAIL model can be improved when multi-angular observations are introduced. However, for the potato crop, PROSAIL inversion for measured data only showed moderate accuracy and slight improvements.
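
    The study above retrieves LAI and LCC by numerically inverting PROSAIL; the optimizer is not described in the abstract. The sketch below illustrates only the generic look-up-table style of inversion (pick the candidate parameter pair whose simulated spectrum minimizes RMSE against the observation); `simulate_spectrum` is a placeholder for a canopy radiative transfer model such as PROSAIL and is replaced here by a toy two-band function. Multi-angular data could be handled in the same way by stacking the spectra from all viewing angles into a single observation vector.

```python
import numpy as np

def invert_lut(observed, candidates, simulate_spectrum):
    """Return the (LAI, LCC) pair whose simulated spectrum best matches the
    observation in the RMSE sense. simulate_spectrum(lai, lcc) is assumed to
    wrap a radiative transfer model; it is NOT implemented here."""
    best, best_rmse = None, np.inf
    for lai, lcc in candidates:
        sim = simulate_spectrum(lai, lcc)
        rmse = float(np.sqrt(np.mean((observed - sim) ** 2)))
        if rmse < best_rmse:
            best, best_rmse = (lai, lcc), rmse
    return best, best_rmse

# toy demonstration with a dummy two-band "model" standing in for PROSAIL
dummy = lambda lai, lcc: np.array([0.05 + 0.01 * lai, 0.30 + 0.002 * lcc])
grid = [(lai, lcc) for lai in np.arange(0.5, 6.5, 0.5) for lcc in np.arange(10, 70, 5)]
print(invert_lut(np.array([0.09, 0.37]), grid, dummy))
```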

  1. Seamless positioning and navigation by using geo-referenced images and multi-sensor data.

    PubMed

    Li, Xun; Wang, Jinling; Li, Tao

    2013-07-12

    Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas have remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, within which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and are insufficient in many aspects. Therefore, in this paper, we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly uses visual sensor input matched against geo-referenced images for image-based position resolution, and also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve the position accuracy for areas where the GPS signal is negatively affected (such as in urban canyons), and it also provides excellent position accuracy for indoor environments.
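
    The system above matches camera input against geo-referenced images to recover a six-degree-of-freedom pose; the abstract does not state which pose solver is used. The sketch below shows one conventional final step, OpenCV's Perspective-n-Point solver applied to already-established 2-D/3-D correspondences; the intrinsics, correspondences and the choice of solver are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import cv2

def estimate_pose(object_points, image_points, camera_matrix):
    """Camera pose from 2-D/3-D correspondences (e.g. against a geo-referenced model)."""
    dist_coeffs = np.zeros(5)  # assume a pre-rectified (undistorted) image
    ok, rvec, tvec = cv2.solvePnP(object_points.astype(np.float64),
                                  image_points.astype(np.float64),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)     # rotation matrix, world -> camera
    camera_position = -R.T @ tvec  # camera centre in world coordinates
    return R, tvec, camera_position
```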

  2. Seamless Positioning and Navigation by Using Geo-Referenced Images and Multi-Sensor Data

    PubMed Central

    Li, Xun; Wang, Jinling; Li, Tao

    2013-01-01

    Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas have remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, within which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and are insufficient in many aspects. Therefore, in this paper, we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly uses visual sensor input matched against geo-referenced images for image-based position resolution, and also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve the position accuracy for areas where the GPS signal is negatively affected (such as in urban canyons), and it also provides excellent position accuracy for indoor environments. PMID:23857267

  3. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning

    NASA Astrophysics Data System (ADS)

    Fernandez Galarreta, J.; Kerle, N.; Gerke, M.

    2015-06-01

    Structural damage assessment is critical after disasters but remains a challenge. Many studies have explored the potential of remote sensing data, but limitations of vertical data persist. Oblique imagery has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity. This paper addresses damage assessment based on multi-perspective, overlapping, very high resolution oblique images obtained with unmanned aerial vehicles (UAVs). 3-D point-cloud assessment for the entire building is combined with detailed object-based image analysis (OBIA) of façades and roofs. This research focuses not on automatic damage assessment, but on creating a methodology that supports the often ambiguous classification of intermediate damage levels, aiming at producing comprehensive per-building damage scores. We identify completely damaged structures in the 3-D point cloud, and for all other cases provide the OBIA-based damage indicators to be used as auxiliary information by damage analysts. The results demonstrate the usability of the 3-D point-cloud data to identify major damage features. Also the UAV-derived and OBIA-processed oblique images are shown to be a suitable basis for the identification of detailed damage features on façades and roofs. Finally, we also demonstrate the possibility of aggregating the multi-perspective damage information at building level.

  4. Multi-Objective Algorithm for Blood Supply via Unmanned Aerial Vehicles to the Wounded in an Emergency Situation

    PubMed Central

    Wen, Tingxi; Zhang, Zhongnan; Wong, Kelvin K. L.

    2016-01-01

    Unmanned aerial vehicle (UAV) has been widely used in many industries. In the medical environment, especially in some emergency situations, UAVs play an important role such as the supply of medicines and blood with speed and efficiency. In this paper, we study the problem of multi-objective blood supply by UAVs in such emergency situations. This is a complex problem that includes maintenance of the supply blood’s temperature model during transportation, the UAVs’ scheduling and routes’ planning in case of multiple sites requesting blood, and limited carrying capacity. Most importantly, we need to study the blood’s temperature change due to the external environment, the heating agent (or refrigerant) and time factor during transportation, and propose an optimal method for calculating the mixing proportion of blood and appendage in different circumstances and delivery conditions. Then, by introducing the idea of transportation appendage into the traditional Capacitated Vehicle Routing Problem (CVRP), this new problem is proposed according to the factors of distance and weight. Algorithmically, we use the combination of decomposition-based multi-objective evolutionary algorithm and local search method to perform a series of experiments on the CVRP public dataset. By comparing our technique with the traditional ones, our algorithm can obtain better optimization results and time performance. PMID:27163361

  5. Multi-objective four-dimensional vehicle motion planning in large dynamic environments.

    PubMed

    Wu, Paul P-Y; Campbell, Duncan; Merz, Torsten

    2011-06-01

    This paper presents Multi-Step A∗ (MSA∗), a search algorithm based on A∗ for multi-objective 4-D vehicle motion planning (three spatial and one time dimensions). The research is principally motivated by the need for offline and online motion planning for autonomous unmanned aerial vehicles (UAVs). For UAVs operating in large dynamic uncertain 4-D environments, the motion plan consists of a sequence of connected linear tracks (or trajectory segments). The track angle and velocity are important parameters that are often restricted by assumptions and a grid geometry in conventional motion planners. Many existing planners also fail to incorporate multiple decision criteria and constraints such as wind, fuel, dynamic obstacles, and the rules of the air. It is shown that MSA∗ finds a cost optimal solution using variable length, angle, and velocity trajectory segments. These segments are approximated with a grid-based cell sequence that provides an inherent tolerance to uncertainty. The computational efficiency is achieved by using variable successor operators to create a multiresolution memory-efficient lattice sampling structure. The simulation studies on the UAV flight planning problem show that MSA∗ meets the time constraints of online replanning and finds paths of equivalent cost but in a quarter of the time (on average) of a vector neighborhood-based A∗.
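
    To make the search idea concrete, here is a generic A* skeleton in which the `successors` callback stands in for MSA*'s variable successor operators (variable-length, variable-angle, variable-velocity track expansions). It is a minimal sketch, not the MSA* algorithm itself; optimality holds only if `heuristic` never overestimates the remaining cost.

    ```python
    import heapq
    import itertools

    def a_star(start, goal, successors, heuristic):
        """Generic A*: successors(state) yields (next_state, step_cost) pairs."""
        tie = itertools.count()                      # avoids comparing states on cost ties
        open_set = [(heuristic(start, goal), next(tie), 0.0, start, None)]
        came_from, g_best = {}, {start: 0.0}
        while open_set:
            _, _, g, state, parent = heapq.heappop(open_set)
            if state in came_from:
                continue
            came_from[state] = parent
            if state == goal:                        # reconstruct the path back to start
                path = [state]
                while came_from[path[-1]] is not None:
                    path.append(came_from[path[-1]])
                return list(reversed(path))
            for nxt, cost in successors(state):
                ng = g + cost
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    heapq.heappush(open_set,
                                   (ng + heuristic(nxt, goal), next(tie), ng, nxt, state))
        return None
    ```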

  6. Critical infrastructure monitoring using UAV imagery

    NASA Astrophysics Data System (ADS)

    Maltezos, Evangelos; Skitsas, Michael; Charalambous, Elisavet; Koutras, Nikolaos; Bliziotis, Dimitris; Themistocleous, Kyriacos

    2016-08-01

    The constant technological evolution in Computer Vision has enabled the development of new techniques which, in conjunction with Unmanned Aerial Vehicles (UAVs), can extract high-quality photogrammetric products for several applications. Dense Image Matching (DIM) is a Computer Vision technique that can generate a dense 3D point cloud of an area or object. The use of UAV systems and DIM techniques is not only a flexible and attractive solution for producing accurate, high-quality photogrammetric results but also a major contribution to cost-effectiveness. In this context, this study aims to highlight the benefits of using UAVs for critical infrastructure monitoring by applying DIM. A Multi-View Stereo (MVS) approach using multiple images (RGB digital aerial and oblique images), to fully cover the area of interest, is implemented. The application area is an Olympic venue in Attica, Greece, covering about 400 acres. The results of our study indicate that the UAV+DIM approach, which provides a 3D point cloud and an orthomosaic, responds very well to the increasing demand for accurate and cost-effective applications.

  7. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper, a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which does not require prior camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms in an ordered pipeline, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map significantly reduces the running time of feature matching by limiting the combinations of images to be matched. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangulated irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
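
    A minimal sketch of the image-topology idea: only image pairs whose flight-log positions lie within a baseline threshold are scheduled for feature matching. The image record layout and the threshold value are assumptions for illustration, not the authors' data structures.

    ```python
    import math
    from itertools import combinations

    def candidate_pairs(images, max_baseline_m=60.0):
        """Build an image 'topology map': only images whose flight-log positions
        are closer than max_baseline_m are scheduled for feature matching.
        Each image is a dict with 'id' and 'pos' = (east, north, up) in metres."""
        pairs = []
        for a, b in combinations(images, 2):
            if math.dist(a["pos"], b["pos"]) <= max_baseline_m:
                pairs.append((a["id"], b["id"]))
        return pairs

    # Example: three photos, only the two neighbouring ones get matched.
    imgs = [{"id": 0, "pos": (0, 0, 50)},
            {"id": 1, "pos": (40, 5, 50)},
            {"id": 2, "pos": (500, 0, 50)}]
    print(candidate_pairs(imgs))   # [(0, 1)]
    ```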

  8. Preliminary Study on Earthquake Surface Rupture Extraction from Uav Images

    NASA Astrophysics Data System (ADS)

    Yuan, X.; Wang, X.; Ding, X.; Wu, X.; Dou, A.; Wang, S.

    2018-04-01

    Because of their low cost, light weight, and ability to photograph beneath cloud cover, UAVs have been widely used in seismic geomorphology research in recent years. Earthquake surface rupture is a typical seismic tectonic geomorphology that reflects the dynamic and kinematic characteristics of crustal movement. The quick identification of earthquake surface rupture is of great significance for understanding the mechanism of earthquake occurrence and the distribution and scale of disasters. Using an integrated differential UAV platform, image series with accurate position and orientation (POS) data were acquired around the former urban area (Qushan town) of Beichuan County, an area severely stricken by the 2008 Wenchuan Ms 8.0 earthquake. Based on multi-view 3D reconstruction techniques, a high-resolution DSM and DOM are obtained from the differential UAV images. Through the shaded-relief map and aspect map derived from the DSM, the earthquake surface rupture is extracted and analyzed. The results show that the surface rupture can still be identified from the UAV images even though considerable time has elapsed since the earthquake; its middle segment is characterized by vertical movement caused by compressional deformation on the fault planes.
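
    A short NumPy sketch of how shaded-relief and aspect maps could be derived from a gridded DSM, the terrain products used above for rupture extraction. The illumination parameters and grid conventions are assumptions; production tools (GIS packages) would normally be used instead.

    ```python
    import numpy as np

    def hillshade_and_aspect(dsm, cellsize, azimuth_deg=315.0, altitude_deg=45.0):
        """Shaded-relief and aspect maps from a DSM grid (rows run north to south)."""
        dzdy, dzdx = np.gradient(dsm, cellsize)       # elevation gradients
        slope = np.arctan(np.hypot(dzdx, dzdy))       # slope angle (rad)
        aspect = np.arctan2(-dzdx, dzdy)              # downslope direction (rad)
        az = np.radians(360.0 - azimuth_deg + 90.0)   # sun azimuth in math convention
        alt = np.radians(altitude_deg)
        shade = (np.sin(alt) * np.cos(slope)
                 + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
        return np.clip(shade, 0.0, 1.0), np.degrees(aspect) % 360.0
    ```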

  9. The View from a Few Hundred Feet : A New Transparent and Integrated Workflow for UAV-collected Data

    NASA Astrophysics Data System (ADS)

    Peterson, F. S.; Barbieri, L.; Wyngaard, J.

    2015-12-01

    Unmanned Aerial Vehicles (UAVs) allow scientists and civilians to monitor earth and atmospheric conditions in remote locations. To keep up with the rapid evolution of UAV technology, data workflows must also be flexible, integrated, and introspective. Here, we present our data workflow for a project to assess the feasibility of detecting threshold levels of methane, carbon dioxide, and other aerosols by mounting consumer-grade gas analysis sensors on UAVs. In particular, we highlight our use of Project Jupyter, a set of open-source software tools and documentation designed for developing "collaborative narratives" around scientific workflows. By embracing the GitHub-backed, multi-language systems available in Project Jupyter, we enable interaction and exploratory computation while simultaneously embracing distributed version control. Additionally, the transparency of this method builds trust with civilians and decision-makers and leverages collaboration and communication to resolve problems. The goal of this presentation is to provide a generic data workflow for scientific inquiries involving UAVs and to invite the participation of the AGU community in its improvement and curation.
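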

  10. Analysis of Landslide Kinematics using Multi-temporal UAV Imagery, La Honda, California

    NASA Astrophysics Data System (ADS)

    Carey, J.; Pickering, A.; Prentice, C. S.; Pinter, N.; DeLong, S.

    2017-12-01

    High-resolution topographic data are vital to studies of earth-surface processes. The combination of unmanned aerial vehicle (UAV) photography and structure-from-motion (SfM) digital photogrammetry provides a quickly deployable and cost-effective method for monitoring geomorphic change and landscape evolution. We acquired imagery of an active landslide in La Honda, California using a GPS-enabled quadcopter UAV with a 12.4 megapixel camera. Deep-seated landslides were previously documented in this region during the winter of 1997-98, with movement recurring and the landslide expanding during the winters of 2004-05 and 2005-06. This study documents the kinematics of a new and separate landslide immediately adjacent to the previous ones, throughout the winter of 2016-17. The roughly triangular-shaped, deep-seated landslide covers an area of approximately 10,000 m2. The area is underlain by SW dipping late Miocene to Pliocene sandstones and mudstones. A 3 m high head scarp stretches along the northeast portion of the slide for approximately 100 m. Internally, the direction of movement is towards the southwest, with two prominent NW-SE striking extensional grabens and numerous tension cracks across the landslide body. Here we calculate displaced landslide volumes and surface displacements from multi-temporal UAV surveys. Photogrammetric reconstruction of UAV/SfM-derived point clouds allowed the creation of six digital elevation models (DEMs) with spatial resolutions ranging from 3 to 15 cm per pixel. We derived displacement magnitude, direction and rate by comparing multiple generations of DEMs and orthophotos, and estimated displaced volumes by differencing subsequent DEMs. We then correlated displacements with total rainfall and rainfall intensity measurements. Detailed geomorphic maps identify major landslide features, documenting dominant surface processes. Additionally, we compare the accuracy of the UAV/SfM-derived DEM with a DEM sourced from a synchronous terrestrial lidar survey. Conservative measurements yield 5.4 m of maximum horizontal displacement across the central portion of the slide. This study demonstrates the ability of the UAV/SfM workflow to map and monitor active mass-wasting processes in regions where landslides pose a direct threat to the surrounding community.
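
    The volume estimates above rest on DEM differencing; a minimal NumPy sketch of that step is shown below. The noise floor and variable names are assumptions, and real workflows would also propagate co-registration uncertainty.

    ```python
    import numpy as np

    def dem_difference_volume(dem_before, dem_after, cellsize_m, noise_floor_m=0.05):
        """Difference two co-registered DEM grids and estimate moved volumes.
        Changes smaller than noise_floor_m are treated as measurement noise."""
        dod = dem_after - dem_before                      # DEM of difference
        dod = np.where(np.abs(dod) < noise_floor_m, 0.0, dod)
        cell_area = cellsize_m ** 2
        deposited = float(np.nansum(np.where(dod > 0, dod, 0.0)) * cell_area)
        evacuated = float(-np.nansum(np.where(dod < 0, dod, 0.0)) * cell_area)
        return dod, deposited, evacuated
    ```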

  11. Benchmarking real-time RGBD odometry for light-duty UAVs

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Sahawneh, Laith R.; Brink, Kevin M.

    2016-06-01

    This article describes the theoretical and implementation challenges associated with generating 3D odometry estimates (delta-pose) from RGBD sensor data in real-time to facilitate navigation in cluttered indoor environments. The underlying odometry algorithm applies to general 6DoF motion; however, the computational platforms, trajectories, and scene content are motivated by their intended use on indoor, light-duty UAVs. Discussion outlines the overall software pipeline for sensor processing and details how algorithm choices for the underlying feature detection and correspondence computation impact the real-time performance and accuracy of the estimated odometry and associated covariance. This article also explores the consistency of odometry covariance estimates and the correlation between successive odometry estimates. The analysis is intended to provide users information needed to better leverage RGBD odometry within the constraints of their systems.
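
    For orientation, here is a rough sketch of a single RGBD delta-pose step: ORB matching on the RGB frames, back-projection of the matches to 3D with the depth image, then a Kabsch/Umeyama rigid fit. It is not the authors' pipeline; the intrinsics are parameters you must supply, both frames are assumed to yield descriptors, and no outlier rejection (e.g., RANSAC) is included for brevity.

    ```python
    import cv2
    import numpy as np

    def rgbd_delta_pose(rgb0, depth0, rgb1, depth1, fx, fy, cx, cy):
        """Estimate the rigid motion between two RGBD frames (illustrative only)."""
        orb = cv2.ORB_create(nfeatures=800)
        k0, d0 = orb.detectAndCompute(rgb0, None)
        k1, d1 = orb.detectAndCompute(rgb1, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d0, d1)

        def backproject(kp, depth):
            u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
            z = float(depth[v, u])
            return None if z <= 0 else np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

        p0, p1 = [], []
        for m in matches:
            a = backproject(k0[m.queryIdx], depth0)
            b = backproject(k1[m.trainIdx], depth1)
            if a is not None and b is not None:
                p0.append(a); p1.append(b)
        if len(p0) < 3:
            return None, None
        p0, p1 = np.array(p0), np.array(p1)

        # Kabsch/Umeyama rigid fit: find R, t minimising ||R @ p0 + t - p1||
        c0, c1 = p0.mean(0), p1.mean(0)
        U, _, Vt = np.linalg.svd((p0 - c0).T @ (p1 - c1))
        S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T
        t = c1 - R @ c0
        return R, t
    ```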

  12. Multisensor Equipped Uav/ugv for Automated Exploration

    NASA Astrophysics Data System (ADS)

    Batzdorfer, S.; Bobbe, M.; Becker, M.; Harms, H.; Bestmann, U.

    2017-08-01

    The usage of unmanned systems for exploring disaster scenarios has become more and more important in recent times as a supporting system for action forces. These systems have to offer a well-balanced relationship between the quality of support and additional workload. Therefore, within the joint research project ANKommEn - German acronym for Automated Navigation and Communication for Exploration - a system for the exploration of disaster scenarios has been built up using multiple UAVs and UGVs controlled via a central ground station. The ground station serves as the user interface for defining missions and tasks conducted by the unmanned systems, which are equipped with different environmental sensors like cameras - RGB as well as IR - or LiDAR. Depending on the exploration task, results in the form of pictures, 2D stitched orthophotos or LiDAR point clouds will be transmitted via datalinks and displayed online at the ground station, or will be processed shortly after a mission, e.g. by 3D photogrammetry. For mission planning and execution, UAV/UGV monitoring and georeferencing of environmental sensor data, reliable positioning and attitude information is required. This is gathered using an integrated GNSS/IMU positioning system. In order to increase the availability of positioning information in GNSS-challenging scenarios, a GNSS multi-constellation based approach is used, amongst others. The present paper focuses on the overall system design, including the ground station and the sensor setups on the UAVs and UGVs, the underlying positioning techniques, as well as 2D and 3D exploration based on an RGB camera mounted on board the UAV and its evaluation based on real-world field tests.

  13. Solid images generated from UAVs to analyze areas affected by rock falls

    NASA Astrophysics Data System (ADS)

    Giordan, Daniele; Manconi, Andrea; Allasia, Paolo; Baldo, Marco

    2015-04-01

    The study of areas affected by rock falls is usually based on the recognition of the principal joint families and the localization of potentially unstable sectors. This requires the acquisition of field data, although the areas are often barely accessible and field inspections can be very dangerous. For this reason, remote sensing systems can be considered a suitable alternative. Recently, Unmanned Aerial Vehicles (UAVs) have been proposed as platforms to acquire the necessary information. Indeed, mini UAVs (in particular in the multi-rotor configuration) provide the versatility to acquire, from different points of view, a large number of high-resolution optical images, which can be used to generate high-resolution digital models of the study area. Considering the recent development of powerful user-friendly software and algorithms to process images acquired from UAVs, there is now a need to establish robust methodologies and best-practice guidelines for the correct use of 3D models generated in the context of rock fall scenarios. In this work, we show how multi-rotor UAVs can be used to survey areas affected by rock falls during real emergencies. We present two examples of application located in northwestern Italy: the San Germano rock fall (Piemonte region) and the Moneglia rock fall (Liguria region). We acquired data from both terrestrial LiDAR and UAV, in order to compare digital elevation models generated with different remote sensing approaches. We evaluate the volume of the rock falls, identify the potentially unstable areas, and recognize the main joint families. Although the use of UAVs for this purpose is not yet widespread, this approach is probably the best solution for the structural investigation of large rock walls. We propose a methodology that jointly considers the Structure from Motion (SfM) approach for the generation of 3D solid images, and a geotechnical analysis for the identification of joint families and potential failure planes.

  14. Auditory decision aiding in supervisory control of multiple unmanned aerial vehicles.

    PubMed

    Donmez, Birsen; Cummings, M L; Graham, Hudson D

    2009-10-01

    This article is an investigation of the effectiveness of sonifications, which are continuous auditory alerts mapped to the state of a monitored task, in supporting unmanned aerial vehicle (UAV) supervisory control. UAV supervisory control requires monitoring a UAV across multiple tasks (e.g., course maintenance) via a predominantly visual display, which currently is supported with discrete auditory alerts. Sonification has been shown to enhance monitoring performance in domains such as anesthesiology by allowing an operator to immediately determine an entity's (e.g., patient) current and projected states, and is a promising alternative to discrete alerts in UAV control. However, minimal research compares sonification to discrete alerts, and no research assesses the effectiveness of sonification for monitoring multiple entities (e.g., multiple UAVs). The authors conducted an experiment with 39 military personnel, using a simulated setup. Participants controlled single and multiple UAVs and received sonifications or discrete alerts based on UAV course deviations and late target arrivals. Regardless of the number of UAVs supervised, the course deviation sonification resulted in reactions to course deviations that were 1.9 s faster, a 19% enhancement, compared with discrete alerts. However, course deviation sonifications interfered with the effectiveness of discrete late arrival alerts in general and with operator responses to late arrivals when supervising multiple vehicles. Sonifications can outperform discrete alerts when designed to aid operators to predict future states of monitored tasks. However, sonifications may mask other auditory alerts and interfere with other monitoring tasks that require divided attention. This research has implications for supervisory control display design.

  15. Remotely Piloted Aircraft Systems (RPAS) for high resolution topography and monitoring: civil protection purposes on hydrogeological contexts

    NASA Astrophysics Data System (ADS)

    Bertacchini, Eleonora; Castagnetti, Cristina; Corsini, Alessandro; De Cono, Stefano

    2014-10-01

    The proposed work concerns the analysis of Remotely Piloted Aircraft Systems (RPAS), also known as drones, UAV (Unmanned Aerial Vehicle) or UAS (Unmanned Aerial System), in hydrogeological contexts for civil protection purposes, underlining the advantages of using a flexible and relatively low-cost system. The capabilities of a photogrammetric RPAS multi-sensor platform were examined in terms of mapping, creation of orthophotos, 3D model generation, data integration into a 3D GIS (Geographic Information System) and validation through independent techniques such as GNSS (Global Navigation Satellite System). The RPAS used (an OktoXL multirotor by Mikrokopter) was equipped with a GPS (Global Positioning System) receiver, digital cameras for photos and videos, an inertial navigation system, a radio device for communication and telemetry, etc. This innovative way of viewing and understanding the environment showed great potential for studying the territory and, due to its characteristics, could be well integrated with aircraft surveys. However, such characteristics seem to favour local applications for rigorous and accurate analysis, while the system remains a means of expeditious investigation for more extended areas. In line with civil protection purposes, the experimentation was carried out by simulating operational protocols, for example for inspection, surveillance, monitoring, land mapping and georeferencing methods (with or without Ground Control Points - GCP) based on high-resolution topography (2D and 3D information).

  16. Error compensation of single-antenna attitude determination using GNSS for Low-dynamic applications

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Yu, Chao; Cai, Miaomiao

    2017-04-01

    The GNSS-based single-antenna pseudo-attitude determination method has attracted more and more attention in the field of high-dynamic navigation due to its low cost, low system complexity, and absence of temporally accumulated errors. Related research indicates that this method can be an important complement or even an alternative to traditional sensors for general accuracy requirements (such as small UAV navigation). The application of the single-antenna attitude determination method to low-dynamic carriers has only just started. Different from the traditional multi-antenna attitude measurement technique, the pseudo-attitude determination method calculates the rotation angle of the carrier trajectory relative to the earth. Thus it inevitably contains some deviations compared with the real attitude angles. In low-dynamic applications, these deviations are particularly noticeable and may not be ignored. The causes of the deviations can be roughly classified into three categories: the measurement error, the offset error, and the lateral error. Empirical correction strategies for the former two errors have been proposed in previous studies, but they lack theoretical support. In this paper, we provide a quantitative description of the three types of errors and discuss the related error compensation methods. Vehicle and shipborne experiments were carried out to verify the feasibility of the proposed correction methods. Keywords: Error compensation; Single-antenna; GNSS; Attitude determination; Low-dynamic
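
    The basic pseudo-attitude idea is to read heading and flight-path angle off the GNSS velocity vector; a minimal sketch is shown below, with a speed gate because the trajectory direction becomes meaningless at low dynamics (the regime whose errors the paper sets out to compensate). The threshold and the ENU convention are assumptions; none of the paper's corrections are reproduced.

    ```python
    import math

    def pseudo_attitude(v_east, v_north, v_up, min_speed_mps=1.0):
        """Heading and flight-path (pitch) angles from a GNSS ENU velocity vector."""
        ground_speed = math.hypot(v_east, v_north)
        if math.hypot(ground_speed, v_up) < min_speed_mps:
            return None                              # too slow: direction is noise
        heading_deg = math.degrees(math.atan2(v_east, v_north)) % 360.0
        pitch_deg = math.degrees(math.atan2(v_up, ground_speed))
        return heading_deg, pitch_deg

    print(pseudo_attitude(3.0, 4.0, 0.5))   # roughly (36.9, 5.7) degrees
    ```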

  17. Rapid Extraction of Landslide and Spatial Distribution Analysis after Jiuzhaigou Ms7.0 Earthquake Based on Uav Images

    NASA Astrophysics Data System (ADS)

    Jiao, Q. S.; Luo, Y.; Shen, W. H.; Li, Q.; Wang, X.

    2018-04-01

    The Jiuzhaigou earthquake triggered widespread slope collapses and numerous landslides in the Jiuzhaigou scenic area and along surrounding roads, causing road blockages and serious ecological damage. Due to the urgency of the rescue, the authors carried an unmanned aerial vehicle (UAV) into the disaster area as early as August 9 to obtain aerial images near the epicenter. After summarizing the characteristics of earthquake-induced landslides in the aerial images, landslide image objects were obtained by multi-scale segmentation using an object-oriented analysis method, and the feature rule set at each level was built automatically with the SEaTH (Separability and Thresholds) algorithm to enable rapid landslide extraction. Compared with visual interpretation, the object-oriented automatic landslide extraction method achieved an accuracy of 94.3 %. The spatial distribution of the landslides showed a significant positive correlation with slope and relief, a negative correlation with roughness, and no obvious correlation with aspect; the probable reason for the lack of a relationship with aspect is that the study area lies too far from the seismogenic fault. This work provided technical support for earthquake field emergency response, earthquake landslide prediction and disaster loss assessment.

  18. Three Dimensional Reconstruction of Large Cultural Heritage Objects Based on Uav Video and Tls Data

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Wu, T. H.; Shen, Y.; Wu, L.

    2016-06-01

    This paper investigates the synergetic use of an unmanned aerial vehicle (UAV) and a terrestrial laser scanner (TLS) in the 3D reconstruction of cultural heritage objects. Rather than capturing still images, the UAV, which carries a consumer digital camera, is used to collect dynamic video to overcome its limited endurance. Then, a 3D point cloud is generated from the video image sequences using automated structure-from-motion (SfM) and patch-based multi-view stereo (PMVS) methods. The TLS is used to collect information that is beyond the reach of the UAV imaging, e.g., parts of the building facades. A coarse-to-fine method is introduced to integrate the two sets of point clouds, from UAV image reconstruction and TLS scanning, into a complete 3D reconstruction. For increased reliability, a variant of the ICP algorithm using locally invariant terrain regions is introduced into the combined registration. The experimental study is conducted on Tulou cultural heritage buildings in Fujian province, China, focusing on one of the Tulou clusters built several hundred years ago. Results show a digital 3D model of the Tulou cluster with complete coverage and textural information. This paper demonstrates the usability of the proposed method for efficient 3D reconstruction of heritage objects based on UAV video and TLS data.
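
    For reference, a generic point-to-point ICP loop (nearest-neighbour matching with a SciPy KD-tree plus a Kabsch fit per iteration) is sketched below. It is not the paper's terrain-invariant-region variant; real pipelines add a coarse alignment and outlier rejection before this fine step.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iterations=30, tol=1e-6):
        """Point-to-point ICP aligning `source` (N,3) to `target` (M,3)."""
        src = source.copy()
        tree = cKDTree(target)
        prev_err = np.inf
        R_total, t_total = np.eye(3), np.zeros(3)
        for _ in range(iterations):
            dist, idx = tree.query(src)                 # nearest-neighbour pairs
            matched = target[idx]
            c_s, c_t = src.mean(0), matched.mean(0)
            U, _, Vt = np.linalg.svd((src - c_s).T @ (matched - c_t))
            S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ S @ U.T                          # best rotation for this pairing
            t = c_t - R @ c_s
            src = src @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
            err = float(dist.mean())
            if abs(prev_err - err) < tol:
                break
            prev_err = err
        return R_total, t_total, src
    ```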

  19. Unmanned Aerial Vehicles Produce High-Resolution Seasonally-Relevant Imagery for Classifying Wetland Vegetation

    NASA Astrophysics Data System (ADS)

    Marcaccio, J. V.; Markle, C. E.; Chow-Fraser, P.

    2015-08-01

    With recent advances in technology, personal aerial imagery acquired with unmanned aerial vehicles (UAVs) has transformed the way ecologists can map seasonal changes in wetland habitat. Here, we use a multi-rotor (consumer quad-copter, the DJI Phantom 2 Vision+) UAV to acquire a high-resolution (< 8 cm) composite photo of a coastal wetland in summer 2014. Using validation data collected in the field, we determine if a UAV image and SWOOP (Southwestern Ontario Orthoimagery Project) image (collected in spring 2010) differ in their classification of dominant vegetation type and percent cover of three plant classes: submerged aquatic vegetation, floating aquatic vegetation, and emergent vegetation. The UAV imagery was more accurate than available SWOOP imagery for mapping percent cover of submergent and floating vegetation categories, but both were able to accurately determine the dominant vegetation type and percent cover of emergent vegetation. Our results underscore the value and potential for affordable UAVs (complete quad-copter system < 3,000 CAD) to revolutionize the way ecologists obtain imagery and conduct field research. In Canada, new UAV regulations make this an easy and affordable way to obtain multiple high-resolution images of small (< 1.0 km2) wetlands, or portions of larger wetlands throughout a year.

  20. A fast and mobile system for registration of low-altitude visual and thermal aerial images using multiple small-scale UAVs

    NASA Astrophysics Data System (ADS)

    Yahyanejad, Saeed; Rinner, Bernhard

    2015-06-01

    The use of multiple small-scale UAVs to support first responders in disaster management has become popular because of their speed and low deployment costs. We exploit such UAVs to perform real-time monitoring of target areas by fusing individual images captured from heterogeneous aerial sensors. Many approaches have already been presented to register images from homogeneous sensors. These methods have demonstrated robustness against scale, rotation and illumination variations and can also cope with limited overlap among individual images. In this paper we focus on thermal and visual image registration and propose different methods to improve the quality of interspectral registration for the purpose of real-time monitoring and mobile mapping. Images captured by low-altitude UAVs represent a very challenging scenario for interspectral registration due to the strong variations in overlap, scale, rotation, point of view and structure of such scenes. Furthermore, these small-scale UAVs have limited processing and communication power. The contributions of this paper include (i) the introduction of a feature descriptor for robustly identifying corresponding regions of images in different spectrums, (ii) the registration of image mosaics, and (iii) the registration of depth maps. We evaluated the first method using a test data set consisting of 84 image pairs. In all instances our approach combined with SIFT or SURF feature-based registration was superior to the standard versions. Although we focus mainly on aerial imagery, our evaluation shows that the presented approach would also be beneficial in other scenarios such as surveillance and human detection. Furthermore, we demonstrated the advantages of the other two methods in case of multiple image pairs.

  1. Anisotropy of Human Horizontal and Vertical Navigation in Real Space: Behavioral and PET Correlates.

    PubMed

    Zwergal, Andreas; Schöberl, Florian; Xiong, Guoming; Pradhan, Cauchy; Covic, Aleksandar; Werner, Philipp; Trapp, Christoph; Bartenstein, Peter; la Fougère, Christian; Jahn, Klaus; Dieterich, Marianne; Brandt, Thomas

    2016-10-17

    Spatial orientation was tested during a horizontal and vertical real navigation task in humans. Video tracking of eye movements was used to analyse the behavioral strategy and combined with simultaneous measurements of brain activation and metabolism ([18F]-FDG-PET). Spatial navigation performance was significantly better during horizontal navigation. Horizontal navigation was predominantly visually and landmark-guided. PET measurements indicated that glucose metabolism increased in the right hippocampus, bilateral retrosplenial cortex, and pontine tegmentum during horizontal navigation. In contrast, vertical navigation was less reliant on visual and landmark information. In PET, vertical navigation activated the bilateral hippocampus and insula. Direct comparison revealed a relative activation in the pontine tegmentum and visual cortical areas during horizontal navigation and in the flocculus, insula, and anterior cingulate cortex during vertical navigation. In conclusion, these data indicate a functional anisotropy of human 3D-navigation in favor of the horizontal plane. There are common brain areas for both forms of navigation (hippocampus) as well as unique areas such as the retrosplenial cortex, visual cortex (horizontal navigation), flocculus, and vestibular multisensory cortex (vertical navigation). Visually guided landmark recognition seems to be more important for horizontal navigation, while distance estimation based on vestibular input might be more relevant for vertical navigation. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Web-based Visual Analytics for Extreme Scale Climate Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Evans, Katherine J; Harney, John F

    In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.

  3. SU-E-T-154: Establishment and Implement of 3D Image Guided Brachytherapy Planning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, S; Zhao, S; Chen, Y

    2014-06-01

    Purpose: The inability to observe the dose distribution intuitively is a limitation of existing 2D pre-implantation dose planning. Meanwhile, a navigation module is essential to improve the accuracy and efficiency of the implantation. Hence, a 3D Image Guided Brachytherapy Planning System that conducts dose planning and intra-operative navigation based on 3D multi-organ reconstruction has been developed. Methods: Multiple organs, including the tumor, are reconstructed in one sweep of all the segmented images using the multi-organ reconstruction method. The reconstructed organ group establishes a three-dimensional visualized operative environment. The 3D dose maps of the three-dimensional conformal localized dose planning are calculated with a Monte Carlo method, while the corresponding isodose lines and isodose surfaces are displayed in a stereo view. The real-time intra-operative navigation is based on an electromagnetic tracking system (ETS) and the fusion between MRI and ultrasound images. Applying the least squares method, the coordinate registration between the 3D models and the patient is realized by the ETS, which is calibrated by a laser tracker. The system is validated by working on eight patients with prostate cancer. The navigation has passed the precision measurement in the laboratory. Results: The traditional marching cubes (MC) method reconstructs one organ at a time and assembles the results together. Compared to MC, the presented multi-organ reconstruction method is superior in preserving the integrity and connectivity of the reconstructed organs. The 3D conformal localized dose planning, realizing the 'exfoliation display' of different isodose surfaces, helps ensure that the dose distribution encompasses the nidus while avoiding injury to healthy tissue. During navigation, surgeons can observe instrument coordinates in real time using the ETS. After calibration, the needle position error is less than 2.5 mm according to the experiments. Conclusion: The speed and quality of 3D reconstruction, the efficiency of dose planning and the accuracy of navigation can all be improved simultaneously.

  4. Exploring Transformations in Caribbean Indigenous Social Networks through Visibility Studies: the Case of Late Pre-Colonial Landscapes in East-Guadeloupe (French West Indies).

    PubMed

    Brughmans, Tom; de Waal, Maaike S; Hofman, Corinne L; Brandes, Ulrik

    2018-01-01

    This paper presents a study of the visual properties of natural and Amerindian cultural landscapes in late pre-colonial East-Guadeloupe and of how these visual properties affected social interactions. Through a review of descriptive and formal visibility studies in Caribbean archaeology, it reveals that the ability of visual properties to affect past human behaviour is frequently evoked but the more complex of these hypotheses are rarely studied formally. To explore such complex hypotheses, the current study applies a range of techniques: total viewsheds, cumulative viewsheds, visual neighbourhood configurations and visibility networks. Experiments were performed to explore the control of seascapes, the functioning of hypothetical smoke signalling networks, the correlation of these visual properties with stylistic similarities of material culture found at sites and the change of visual properties over time. The results of these experiments suggest that only few sites in Eastern Guadeloupe are located in areas that are particularly suitable to visually control possible sea routes for short- and long-distance exchange; that visual control over sea areas was not a factor of importance for the existence of micro-style areas; that during the early phase of the Late Ceramic Age networks per landmass are connected and dense and that they incorporate all sites, a structure that would allow hypothetical smoke signalling networks; and that the visual properties of locations of the late sites Morne Souffleur and Morne Cybèle-1 were not ideal for defensive purposes. These results led us to propose a multi-scalar hypothesis for how lines of sight between settlements in the Lesser Antilles could have structured past human behaviour: short-distance visibility networks represent the structuring of navigation and communication within landmasses, whereas the landmasses themselves served as focal points for regional navigation and interaction. We conclude by emphasising that since our archaeological theories about visual properties usually take a multi-scalar landscape perspective, there is a need for this perspective to be reflected in our formal visibility methods as is made possible by the methods used in this paper.
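
    The viewshed and visibility-network analyses above all rest on a single primitive: a line-of-sight test between two locations on a DEM. A simplified sketch of that primitive is given below (nearest-neighbour elevation sampling, no earth curvature or refraction); the observer and target heights are assumed values.

    ```python
    import numpy as np

    def line_of_sight(dem, a, b, observer_h=1.6, target_h=1.6):
        """True if grid cell `a` can see grid cell `b` over a DEM ((row, col) indices).
        Samples terrain along the straight line and compares it to the sight line."""
        (r0, c0), (r1, c1) = a, b
        n = int(max(abs(r1 - r0), abs(c1 - c0)))
        if n == 0:
            return True
        rows = np.linspace(r0, r1, n + 1)
        cols = np.linspace(c0, c1, n + 1)
        terrain = dem[np.round(rows).astype(int), np.round(cols).astype(int)]
        sight = np.linspace(terrain[0] + observer_h, terrain[-1] + target_h, n + 1)
        return bool(np.all(terrain[1:-1] <= sight[1:-1]))
    ```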

  5. Visual navigation in adolescents with early periventricular lesions: knowing where, but not getting there.

    PubMed

    Pavlova, Marina; Sokolov, Alexander; Krägeloh-Mann, Ingeborg

    2007-02-01

    Visual navigation in familiar and unfamiliar surroundings is an essential ingredient of adaptive daily life behavior. Recent brain imaging work helps to recognize that establishing connectivity between brain regions is of importance for successful navigation. Here, we ask whether the ability to navigate is impaired in adolescents who were born premature and suffer congenital bilateral periventricular brain damage that might affect the pathways interconnecting subcortical structures with cortex. Performance on a set of visual labyrinth tasks was significantly worse in patients with periventricular leukomalacia (PVL) as compared with premature-born controls without lesions and term-born adolescents. The ability for visual navigation inversely relates to the severity of motor disability, leg-dominated bilateral spastic cerebral palsy. This agrees with the view that navigation ability substantially improves with practice and might be compromised in individuals with restrictions in active spatial exploration. Visual navigation is negatively linked to the volumetric extent of lesions over the right parietal and frontal periventricular regions. Whereas impairments of visual processing of point-light biological motion are associated in patients with PVL with bilateral parietal periventricular lesions, navigation ability is specifically linked to the frontal lesions in the right hemisphere. We suggest that more anterior periventricular lesions impair the interrelations between the right hippocampus and cortical areas leading to disintegration of neural networks engaged in visual navigation. For the first time, we show that the severity of right frontal periventricular damage and leg-dominated motor disorders can serve as independent predictors of the visual navigation disability.

  6. Polar Cooperative Navigation Algorithm for Multi-Unmanned Underwater Vehicles Considering Communication Delays.

    PubMed

    Yan, Zheping; Wang, Lu; Wang, Tongda; Yang, Zewen; Chen, Tao; Xu, Jian

    2018-03-30

    To solve the navigation accuracy problems of multi-Unmanned Underwater Vehicles (multi-UUVs) in the polar region, a polar cooperative navigation algorithm for multi-UUVs considering communication delays is proposed in this paper. UUVs are important pieces of equipment in ocean engineering for marine development. For UUVs to complete missions, precise navigation is necessary. It is difficult for UUVs to establish true headings because of the rapid convergence of Earth meridians and the severe polar environment. Based on the polar grid navigation algorithm, UUV navigation in the polar region can be accomplished with the Strapdown Inertial Navigation System (SINS) in the grid frame. To save costs, a leader-follower type of system is introduced in this paper. The leader UUV helps the follower UUVs to achieve high navigation accuracy. Follower UUVs correct their own states based on the information sent by the leader UUV and the relative position measured by ultra-short baseline (USBL) acoustic positioning. The underwater acoustic communication delay is quantized by the model. In this paper, considering underwater acoustic communication delay, the conventional adaptive Kalman filter (AKF) is modified to adapt to polar cooperative navigation. The results demonstrate that the polar cooperative navigation algorithm for multi-UUVs that considers communication delays can effectively navigate the sailing of multi-UUVs in the polar region.
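
    As a toy illustration of handling a delayed relative-position measurement, the sketch below re-runs a 1-D constant-velocity Kalman filter from the stored state at the measurement epoch and then re-propagates to the present. This is a simplification for intuition only, not the paper's modified adaptive Kalman filter: the dynamics are hypothetical and intermediate measurements between the epoch and now are ignored.

    ```python
    import numpy as np

    # 1-D constant-velocity filter standing in for a follower UUV state.
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    Q = np.diag([0.01, 0.01])                      # process noise
    H = np.array([[1.0, 0.0]])                     # position is measured
    R = np.array([[0.5]])                          # measurement noise

    def predict(x, P):
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z):
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P

    def fuse_delayed(history, z, delay_steps):
        """Re-run from the stored (x, P) at the measurement epoch, apply the
        update there, then re-propagate to the current time (intermediate
        measurements are skipped in this toy version)."""
        x, P = history[-1 - delay_steps]
        x, P = update(x, P, z)
        for _ in range(delay_steps):
            x, P = predict(x, P)
        return x, P
    ```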

  7. Polar Cooperative Navigation Algorithm for Multi-Unmanned Underwater Vehicles Considering Communication Delays

    PubMed Central

    Yan, Zheping; Wang, Lu; Wang, Tongda; Yang, Zewen; Chen, Tao; Xu, Jian

    2018-01-01

    To solve the navigation accuracy problems of multi-Unmanned Underwater Vehicles (multi-UUVs) in the polar region, a polar cooperative navigation algorithm for multi-UUVs considering communication delays is proposed in this paper. UUVs are important pieces of equipment in ocean engineering for marine development. For UUVs to complete missions, precise navigation is necessary. It is difficult for UUVs to establish true headings because of the rapid convergence of Earth meridians and the severe polar environment. Based on the polar grid navigation algorithm, UUV navigation in the polar region can be accomplished with the Strapdown Inertial Navigation System (SINS) in the grid frame. To save costs, a leader-follower type of system is introduced in this paper. The leader UUV helps the follower UUVs to achieve high navigation accuracy. Follower UUVs correct their own states based on the information sent by the leader UUV and the relative position measured by ultra-short baseline (USBL) acoustic positioning. The underwater acoustic communication delay is quantized by the model. In this paper, considering underwater acoustic communication delay, the conventional adaptive Kalman filter (AKF) is modified to adapt to polar cooperative navigation. The results demonstrate that the polar cooperative navigation algorithm for multi-UUVs that considers communication delays can effectively navigate the sailing of multi-UUVs in the polar region. PMID:29601537

  8. Guidance and Navigation Software Architecture Design for the Autonomous Multi-Agent Physically Interacting Spacecraft (AMPHIS) Test Bed

    DTIC Science & Technology

    2006-12-01

    Guidance and Navigation Software Architecture Design for the Autonomous Multi-Agent Physically Interacting Spacecraft (AMPHIS) Test Bed, by Blake D. Eikenberry, Engineer Degree thesis. Approved for public release; distribution is unlimited.

  9. Sitting in the Pilot's Seat; Optimizing Human-Systems Interfaces for Unmanned Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Queen, Steven M.; Sanner, Kurt Gregory

    2011-01-01

    One of the pilot-machine interfaces (the forward viewing camera display) for an Unmanned Aerial Vehicle called the DROID (Dryden Remotely Operated Integrated Drone) will be analyzed for optimization. The goal is to create a visual display for the pilot that resembles an out-the-window view as closely as possible. There are currently no standard guidelines for designing pilot-machine interfaces for UAVs. Typically, UAV camera views have a narrow field, which limits the situational awareness (SA) of the pilot. Also, at this time, pilot-UAV interfaces often use displays that have a diagonal length of around 20". Using a small display may result in a distorted and disproportional view for UAV pilots. Making use of a larger display and a camera lens with a wider field of view may minimize the occurrences of pilot error associated with the inability to see "out the window" as in a manned airplane. It is predicted that the pilot will have a less distorted view of the DROID's surroundings, quicker response times and more stable vehicle control. If the experimental results validate this concept, other UAV pilot-machine interfaces will be improved with this design methodology.

  10. On decentralized adaptive full-order sliding mode control of multiple UAVs.

    PubMed

    Xiang, Xianbo; Liu, Chao; Su, Housheng; Zhang, Qin

    2017-11-01

    In this study, a novel decentralized adaptive full-order sliding mode control framework is proposed for the robust synchronized formation motion of multiple unmanned aerial vehicles (UAVs) subject to system uncertainty. First, a full-order sliding mode surface in a decentralized manner is designed to incorporate both the individual position tracking error and the synchronized formation error while the UAV group is engaged in building a certain desired geometric pattern in three dimensional space. Second, a decentralized virtual plant controller is constructed which allows the embedded low-pass filter to attain the chattering free property of the sliding mode controller. In addition, robust adaptive technique is integrated in the decentralized chattering free sliding control design in order to handle unknown bounded uncertainties, without requirements for assuming a priori knowledge of bounds on the system uncertainties as stated in conventional chattering free control methods. Subsequently, system robustness as well as stability of the decentralized full-order sliding mode control of multiple UAVs is synthesized. Numerical simulation results illustrate the effectiveness of the proposed control framework to achieve robust 3D formation flight of the multi-UAV system. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
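
    To make the sliding-surface idea concrete, the sketch below gives a conventional boundary-layer sliding-mode law for a single double-integrator axis. It is only an illustration of the basic mechanism; the paper's controller is full-order, adaptive and decentralized across the formation, none of which is reproduced here, and the gains are arbitrary assumptions.

    ```python
    import numpy as np

    def smc_accel(pos_err, vel_err, lam=1.0, k=2.0, phi=0.1):
        """Boundary-layer sliding-mode acceleration command for one axis of a
        double integrator: s = vel_err + lam * pos_err; the saturation of s/phi
        replaces sign(s) to soften chattering."""
        s = vel_err + lam * pos_err
        return -lam * vel_err - k * np.clip(s / phi, -1.0, 1.0)

    # Example: 2 m position error, -0.5 m/s velocity error on one axis.
    print(smc_accel(2.0, -0.5))
    ```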

  11. Sense and avoid technology for Global Hawk and Predator UAVs

    NASA Astrophysics Data System (ADS)

    McCalmont, John F.; Utt, James; Deschenes, Michael; Taylor, Michael J.

    2005-05-01

    The Sensors Directorate at the Air Force Research Laboratory (AFRL), along with Defense Research Associates, Inc. (DRA), conducted a flight demonstration of technology that could potentially satisfy the Federal Aviation Administration's (FAA) requirement for Unmanned Aerial Vehicles (UAVs) to sense and avoid local air traffic sufficient to provide an "...equivalent level of safety, comparable to see-and-avoid requirements for manned aircraft". This FAA requirement must be satisfied for autonomous UAV operation within the national airspace. The real-time on-board system passively detects approaching aircraft, both cooperative and non-cooperative, using imaging sensors operating in the visible/near infrared band and a passive moving target indicator algorithm. Detection range requirements for RQ-4 and MQ-9 UAVs were determined based on analysis of flight geometries, avoidance maneuver timelines, system latencies and human pilot performance. Flight data and UAV operating parameters were provided by the system program offices, prime contractors, and flight-test personnel. Flight demonstrations were conducted using a surrogate UAV (Aero Commander) and an intruder aircraft (Beech Bonanza). The system demonstrated target detection ranges out to 3 nautical miles in nose-to-nose scenarios and marginal visual meteorological conditions (VMC). This paper describes the sense-and-avoid requirements definition process and the system concept (sensors, algorithms, processor, and flight test results) that has demonstrated the potential to satisfy the FAA sense-and-avoid requirements.
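
    The detection-range requirement described above is essentially closing speed multiplied by the total time budget from first detection to completed avoidance. The arithmetic sketch below illustrates that derivation; the speeds and time allowances are illustrative numbers, not the program-office values used in the study.

    ```python
    def required_detection_range_nm(own_speed_kt, intruder_speed_kt,
                                    latency_s, decision_s, maneuver_s):
        """Detection range needed for a head-on encounter: closing speed times
        the time budget from first detection to completed avoidance."""
        closing_speed_kt = own_speed_kt + intruder_speed_kt        # nose-to-nose
        budget_hr = (latency_s + decision_s + maneuver_s) / 3600.0
        return closing_speed_kt * budget_hr                        # nautical miles

    # Illustrative only: 120 kt UAV vs 180 kt intruder, 12.5 s total budget.
    print(round(required_detection_range_nm(120, 180, 2.5, 5.0, 5.0), 2))  # ~1.04 nm
    ```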

  12. A method for using unmanned aerial vehicles for emergency investigation of single geo-hazards and sample applications of this method

    NASA Astrophysics Data System (ADS)

    Huang, Haifeng; Long, Jingjing; Yi, Wu; Yi, Qinglin; Zhang, Guodong; Lei, Bangjun

    2017-11-01

    In recent years, unmanned aerial vehicles (UAVs) have become widely used in emergency investigations of major natural hazards over large areas; however, UAVs are less commonly employed to investigate single geo-hazards. Based on a number of successful investigations in the Three Gorges Reservoir area, China, a complete UAV-based method for performing emergency investigations of single geo-hazards is described. First, a customized UAV system that consists of a multi-rotor UAV subsystem, an aerial photography subsystem, a ground control subsystem and a ground surveillance subsystem is described in detail. The implementation process, which includes four steps, i.e., indoor preparation, site investigation, on-site fast processing and application, and indoor comprehensive processing and application, is then elaborated, and two investigation schemes, automatic and manual, that are used in the site investigation step are put forward. Moreover, some key techniques and methods - e.g., the layout and measurement of ground control points (GCPs), route planning, flight control and image collection, and the Structure from Motion (SfM) photogrammetry processing - are explained. Finally, three applications are given. Experience has shown that using UAVs for emergency investigation of single geo-hazards greatly reduces the time, intensity and risks associated with on-site work and provides valuable, high-accuracy, high-resolution information that supports emergency responses.

  13. UAV field demonstration of social media enabled tactical data link

    NASA Astrophysics Data System (ADS)

    Olson, Christopher C.; Xu, Da; Martin, Sean R.; Castelli, Jonathan C.; Newman, Andrew J.

    2015-05-01

    This paper addresses the problem of enabling Command and Control (C2) and data exfiltration functions for missions using small, unmanned, airborne surveillance and reconnaissance platforms. The authors demonstrated the feasibility of using existing commercial wireless networks as the data transmission infrastructure to support Unmanned Aerial Vehicle (UAV) autonomy functions such as transmission of commands, imagery, metadata, and multi-vehicle coordination messages. The authors developed and integrated a C2 Android application for ground users with a common smart phone, a C2 and data exfiltration Android application deployed on-board the UAVs, and a web server with database to disseminate the collected data to distributed users using standard web browsers. The authors performed a mission-relevant field test and demonstration in which operators commanded a UAV from an Android device to search and loiter; and remote users viewed imagery, video, and metadata via web server to identify and track a vehicle on the ground. Social media served as the tactical data link for all command messages, images, videos, and metadata during the field demonstration. Imagery, video, and metadata were transmitted from the UAV to the web server via multiple Twitter, Flickr, Facebook, YouTube, and similar media accounts. The web server reassembled images and video with corresponding metadata for distributed users. The UAV autopilot communicated with the on-board Android device via on-board Bluetooth network.

  14. Comparison of Unmanned Aerial Vehicle Technology Versus Standard Practice in Identification of Hazards at a Mass Casualty Incident Scenario by Primary Care Paramedic Students.

    PubMed

    Jain, Trevor; Sibley, Aaron; Stryhn, Henrik; Hubloue, Ives

    2018-01-31

    Introduction: The proliferation of unmanned aerial vehicles (UAVs) has the potential to change the situational awareness of incident commanders, allowing greater scene safety. The aim of this study was to compare UAV technology to standard practice (SP) in hazard identification during a simulated multi-vehicle motor collision (MVC) in terms of time to identification, accuracy and the order of hazard identification. A prospective observational cohort study was conducted with 21 students randomized into a UAV or SP group, based on an MVC scenario with 7 hazards. The UAV group remained at the UAV ground station while the SP group approached the scene. After identifying the hazards, the time and order were recorded. The mean time (SD, range) to identify the hazards was 3 minutes 41 seconds (1 minute 37 seconds, 1 minute 48 seconds-6 minutes 51 seconds) and 2 minutes 43 seconds (55 seconds, 1 minute 43 seconds-4 minutes 38 seconds) in the UAV and SP groups respectively, corresponding to a mean difference of 58 seconds (P=0.11). A non-parametric permutation test showed a significant (P=0.04) difference in identification order. Both groups had 100% accuracy in hazard identification, with no statistical difference in time for hazard identification. A difference was found in the identification order of hazards. (Disaster Med Public Health Preparedness. 2018;page 1 of 4).
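
    For readers unfamiliar with the non-parametric test mentioned above, here is a generic two-sided permutation test on the difference of group means. The input times are placeholders, not the study's data, and the study's own test was applied to identification order rather than to these values.

    ```python
    import numpy as np

    def permutation_test(group_a, group_b, n_perm=10000, seed=0):
        """Two-sided permutation test on the difference of group means."""
        rng = np.random.default_rng(seed)
        a, b = np.asarray(group_a, float), np.asarray(group_b, float)
        observed = abs(a.mean() - b.mean())
        pooled = np.concatenate([a, b])
        count = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)                      # random relabelling of groups
            diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
            count += diff >= observed
        return (count + 1) / (n_perm + 1)

    # Placeholder times-to-identify (seconds).
    print(permutation_test([221, 180, 250, 195], [163, 150, 172, 140]))
    ```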

  15. Multiple UAV Cooperation for Wildfire Monitoring

    NASA Astrophysics Data System (ADS)

    Lin, Zhongjie

    Wildfires have been a major factor in the development and management of the world's forests. An accurate assessment of wildfire status is imperative for fire management. This thesis is dedicated to the topic of utilizing multiple unmanned aerial vehicles (UAVs) to cooperatively monitor a large-scale wildfire. This is achieved through estimation of the wildfire spreading situation based on on-line measurements and a cooperation strategy that ensures efficiency. First, based on an understanding of the physical characteristics of wildfire propagation behavior, a wildfire model and a Kalman filter-based method are proposed to estimate the wildfire rate of spread and the fire front contour profile. With the large volume of on-line measurements from the UAVs' on-board sensors, the proposed method allows a wildfire monitoring mission to benefit from on-line information updating, increased flexibility, and accurate estimation. An independent wildfire simulator is utilized to verify the effectiveness of the proposed method. Second, based on the filter analysis, the wildfire spreading situation and the vehicle dynamics, the influence of different UAV cooperation strategies on the overall mission performance is studied. The multi-UAV cooperation problem is formulated in a distributed network. A consensus-based method is proposed to help address the problem. The optimal cooperation strategy of the UAVs is obtained through mathematical analysis. The derived optimal cooperation strategy is then tested in an independent fire simulation environment to verify its effectiveness.
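
    A minimal sketch of one discrete consensus iteration over a UAV communication graph, where each vehicle nudges its local fire-front estimate toward its neighbours'. The line-graph topology, gain and scalar estimates are hypothetical; the thesis's consensus-based method is considerably richer than this averaging step.

    ```python
    import numpy as np

    def consensus_step(estimates, adjacency, epsilon=0.2):
        """One iteration of x_i <- x_i + eps * sum_j a_ij (x_j - x_i).
        `estimates` is (n_uav, n_states); `adjacency` is a symmetric 0/1 matrix.
        Convergence requires epsilon < 1 / max_degree."""
        x = np.asarray(estimates, float)
        A = np.asarray(adjacency, float)
        degree = A.sum(axis=1, keepdims=True)
        return x + epsilon * (A @ x - degree * x)

    # Three UAVs on a line graph, each holding a scalar rate-of-spread estimate.
    x = np.array([[0.30], [0.42], [0.36]])
    A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
    for _ in range(20):
        x = consensus_step(x, A)
    print(x.ravel())   # the estimates converge toward a common value (~0.36)
    ```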

  16. Cubesats and drones: bridging the spatio-temporal divide for enhanced earth observation

    NASA Astrophysics Data System (ADS)

    McCabe, M. F.; Aragon, B.; Parkes, S. D.; Mascaro, J.; Houborg, R.

    2017-12-01

    In just the last few years, a range of advances in remote sensing technologies have enabled an unprecedented opportunity in earth observation. Parallel developments in cubesats and unmanned aerial vehicles (UAVs) have overcome one of the outstanding challenges in observing the land surface: the provision of timely retrievals at a spatial resolution that is sufficiently detailed to make field-level decisions. Planet cubesats have revolutionized observing capacity through their objective of near daily global retrieval. These nano-satellite systems provide high resolution (approx. 3 m) retrievals in red-green-blue and near-infrared wavelengths, offering capacity to develop vegetation metrics for both hydrological and precision agricultural applications. Apart from satellite based advances, nearer to earth technology is being exploited for a range of observation needs. UAVs provide an adaptable platform from which a variety of sensing systems can be deployed. Combinations of optical, thermal, multi- and hyper-spectral systems allow for the estimation of a range of land surface variables, including vegetation structure, vegetation health, land surface temperature and evaporation. Here we explore some of these exciting developments in the context of agricultural hydrology, providing examples of cubesat and UAV imagery that has been used to inform upon crop health and water use. An investigation of the spatial and temporal advantage of these complementary systems is undertaken, with examples of multi-day high-resolution vegetation dynamics from cubesats presented alongside diurnal-cycle responses derived from multiple within-day UAV flights.
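
    The red and near-infrared bands mentioned above are what make standard vegetation metrics possible; a minimal NDVI sketch is given below as a generic example (array inputs assumed, not tied to any particular cubesat or UAV product).

    ```python
    import numpy as np

    def ndvi(red, nir):
        """Normalised Difference Vegetation Index from red and near-infrared bands."""
        red = np.asarray(red, float)
        nir = np.asarray(nir, float)
        denom = nir + red
        return np.where(denom > 0, (nir - red) / denom, 0.0)

    red = np.array([[0.10, 0.20]])
    nir = np.array([[0.50, 0.25]])
    print(ndvi(red, nir))   # [[0.667 0.111]]
    ```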

  17. A Low-Cost EEG System-Based Hybrid Brain-Computer Interface for Humanoid Robot Navigation and Recognition

    PubMed Central

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady state visually evoked potential (SSVEP), and event related de-synchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes if the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP and ERD-based BCIs to navigate and explore with the robot, and P300-based BCI to allow the surrogate robot recognize their favorites. Using several evaluation metrics, the performances of five subjects navigating the robot were quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work presents an important implication for the future work that a hybridization of simple BCI protocols provide extended controllability to carry out complicated tasks even with a low-cost system. PMID:24023953

  18. A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition.

    PubMed

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady state visually evoked potential (SSVEP), and event related de-synchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes if the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and a P300-based BCI to allow the surrogate robot to recognize their favorites. Using several evaluation metrics, the performances of five subjects navigating the robot were quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work presents an important implication for future work: a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system.

  19. Designing and Testing a UAV Mapping System for Agricultural Field Surveying

    PubMed Central

    Skovsen, Søren

    2017-01-01

    A Light Detection and Ranging (LiDAR) sensor mounted on an Unmanned Aerial Vehicle (UAV) can map the overflown environment in point clouds. Mapped canopy heights allow for the estimation of crop biomass in agriculture. The work presented in this paper contributes to sensory UAV setup design for mapping and textural analysis of agricultural fields. LiDAR data are combined with data from Global Navigation Satellite System (GNSS) and Inertial Measurement Unit (IMU) sensors to conduct environment mapping for point clouds. The proposed method facilitates LiDAR recordings in an experimental winter wheat field. Crop height estimates ranging from 0.35–0.58 m are correlated to the applied nitrogen treatments of 0–300 kg N ha⁻¹. The LiDAR point clouds are recorded, mapped, and analysed using the functionalities of the Robot Operating System (ROS) and the Point Cloud Library (PCL). Crop volume estimation is based on a voxel grid with a spatial resolution of 0.04 × 0.04 × 0.001 m. Two different flight patterns are evaluated at an altitude of 6 m to determine the impacts of the mapped LiDAR measurements on crop volume estimations. PMID:29168783

  20. Vision Based Obstacle Detection in Uav Imaging

    NASA Astrophysics Data System (ADS)

    Badrloo, S.; Varshosaz, M.

    2017-08-01

    Detecting and preventing collisions with obstacles is crucial in UAV navigation and control. Most common obstacle detection techniques are currently sensor-based. Small UAVs are not able to carry obstacle detection sensors such as radar; therefore, vision-based methods are considered, which can be divided into stereo-based and mono-based techniques. Mono-based methods are classified into two groups: foreground-background separation, and brain-inspired methods. Brain-inspired methods are highly efficient in obstacle detection; hence, this research aims to detect obstacles using brain-inspired techniques, which detect an obstacle from its apparent enlargement as it is approached. Recent research in this field has concentrated on matching SIFT points, together with the SIFT size-ratio factor and the area ratio of convex hulls, in two consecutive frames to detect obstacles. That method is not able to distinguish between near and far obstacles or obstacles in complex environments, and it is sensitive to wrongly matched points. To solve the above-mentioned problems, this research calculates the distance ratio (dist-ratio) of matched points; each point is then examined to distinguish between far and close obstacles. The results demonstrated the high efficiency of the proposed method in complex environments.
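
    The expansion cue that this family of methods relies on can be illustrated with a small sketch: if the pairwise spread of matched keypoints grows between consecutive frames, the imaged object is likely approaching. The threshold, the centroid-based distance measure and the use of OpenCV SIFT matching below are illustrative assumptions, not the paper's exact dist-ratio procedure.

    ```python
    # Sketch of the expansion cue used by ratio-based monocular obstacle detection:
    # if distances between matched keypoints grow between consecutive frames, the
    # object is getting closer. Threshold values are illustrative only.
    import cv2
    import numpy as np

    def dist_ratio_expansion(prev_gray, curr_gray, ratio_thresh=1.15):
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(prev_gray, None)
        kp2, des2 = sift.detectAndCompute(curr_gray, None)
        if des1 is None or des2 is None:
            return False
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
        if len(matches) < 8:
            return False
        p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        p2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        # Compare distances of each matched point to the centroid in both frames.
        d1 = np.linalg.norm(p1 - p1.mean(axis=0), axis=1) + 1e-6
        d2 = np.linalg.norm(p2 - p2.mean(axis=0), axis=1) + 1e-6
        median_ratio = np.median(d2 / d1)
        return median_ratio > ratio_thresh   # True: region is expanding (approaching)
    ```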

  1. Designing and Testing a UAV Mapping System for Agricultural Field Surveying.

    PubMed

    Christiansen, Martin Peter; Laursen, Morten Stigaard; Jørgensen, Rasmus Nyholm; Skovsen, Søren; Gislum, René

    2017-11-23

    A Light Detection and Ranging (LiDAR) sensor mounted on an Unmanned Aerial Vehicle (UAV) can map the overflown environment in point clouds. Mapped canopy heights allow for the estimation of crop biomass in agriculture. The work presented in this paper contributes to sensory UAV setup design for mapping and textural analysis of agricultural fields. LiDAR data are combined with data from Global Navigation Satellite System (GNSS) and Inertial Measurement Unit (IMU) sensors to conduct environment mapping for point clouds. The proposed method facilitates LiDAR recordings in an experimental winter wheat field. Crop height estimates ranging from 0.35-0.58 m are correlated to the applied nitrogen treatments of 0-300 kg N ha⁻¹. The LiDAR point clouds are recorded, mapped, and analysed using the functionalities of the Robot Operating System (ROS) and the Point Cloud Library (PCL). Crop volume estimation is based on a voxel grid with a spatial resolution of 0.04 × 0.04 × 0.001 m. Two different flight patterns are evaluated at an altitude of 6 m to determine the impacts of the mapped LiDAR measurements on crop volume estimations.
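
    As an illustration of the volume-estimation step described above (not the authors' ROS/PCL pipeline), the following sketch counts occupied voxels of a point cloud on a fixed grid with the 0.04 × 0.04 × 0.001 m resolution mentioned in the abstract; the synthetic canopy and the ground threshold are placeholders.

    ```python
    # Illustrative sketch: estimate crop volume from a LiDAR point cloud by counting
    # occupied voxels on a fixed grid (a dense cloud is assumed; sparse sampling
    # under-counts occupied voxels).
    import numpy as np

    def crop_volume(points, voxel=(0.04, 0.04, 0.001), ground_z=0.0):
        """points: (N, 3) array of x, y, z in metres, z measured above ground level.
        Returns occupied volume in cubic metres."""
        pts = points[points[:, 2] > ground_z]          # discard ground returns
        idx = np.floor(pts / np.asarray(voxel)).astype(np.int64)
        occupied = np.unique(idx, axis=0).shape[0]     # number of distinct voxels
        return occupied * voxel[0] * voxel[1] * voxel[2]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        canopy = rng.uniform([0, 0, 0.0], [1, 1, 0.5], size=(50000, 3))  # synthetic canopy
        print(f"estimated volume: {crop_volume(canopy):.3f} m^3")
    ```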

  2. DNAism: exploring genomic datasets on the web with Horizon Charts.

    PubMed

    Rio Deiros, David; Gibbs, Richard A; Rogers, Jeffrey

    2016-01-27

    Computational biologists daily face the need to explore massive amounts of genomic data. New visualization techniques can help researchers navigate and understand these large datasets. Horizon Charts are a relatively new visualization method that, under the right circumstances, maximizes data density without losing graphical perception. Horizon Charts have been successfully applied to understand multi-metric time series data. We have adapted an existing JavaScript library (Cubism) that implements Horizon Charts for the time series domain so that it works effectively with genomic datasets. We call this new library DNAism. Horizon Charts can be an effective visual tool to explore complex and large genomic datasets. Researchers can use our library to leverage these techniques to extract additional insights from their own datasets.
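
    DNAism and Cubism are JavaScript libraries; purely as a language-neutral illustration of the horizon-chart idea they build on (folding a series into overlaid bands of increasing opacity to pack more dynamic range into a short strip), here is a small sketch using numpy and matplotlib. Band count, colour and the synthetic coverage signal are arbitrary choices.

    ```python
    # Illustration of the horizon-chart idea: a signal is sliced into bands that are
    # overlaid with increasing opacity. Parameters below are arbitrary.
    import numpy as np
    import matplotlib.pyplot as plt

    def horizon_chart(ax, x, y, n_bands=3, color="steelblue"):
        band = y.max() / n_bands if y.max() > 0 else 1.0
        for i in range(n_bands):
            layer = np.clip(y - i * band, 0, band)          # fold the i-th band down
            ax.fill_between(x, 0, layer, color=color, alpha=(i + 1) / n_bands, linewidth=0)
        ax.set_ylim(0, band)
        ax.set_yticks([])

    if __name__ == "__main__":
        x = np.linspace(0, 10, 500)
        coverage = np.abs(np.sin(x) * 50 + np.random.default_rng(1).normal(0, 5, x.size))
        fig, ax = plt.subplots(figsize=(8, 1))
        horizon_chart(ax, x, coverage)
        plt.show()
    ```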

  3. Spatial cell firing during virtual navigation of open arenas by head-restrained mice.

    PubMed

    Chen, Guifen; King, John Andrew; Lu, Yi; Cacucci, Francesca; Burgess, Neil

    2018-06-18

    We present a mouse virtual reality (VR) system which restrains head movements to horizontal rotations, compatible with multi-photon imaging. This system allows expression of the spatial navigation and neuronal firing patterns characteristic of real open arenas (R). Comparing VR to R: place and grid, but not head-direction, cell firing had broader spatial tuning; place, but not grid, cell firing was more directional; theta frequency increased less with running speed; whereas increases in firing rates with running speed and place and grid cells' theta phase precession were similar. These results suggest that the omni-directional place cell firing in R may require local cues unavailable in VR, and that the scale of grid and place cell firing patterns, and theta frequency, reflect translational motion inferred from both virtual (visual and proprioceptive) and real (vestibular translation and extra-maze) cues. By contrast, firing rates and theta phase precession appear to reflect visual and proprioceptive cues alone. © 2018, Chen et al.

  4. When mental fatigue may be characterized by Event Related Potential (P300) during virtual wheelchair navigation.

    PubMed

    Lamti, Hachem A; Gorce, Philippe; Ben Khelifa, Mohamed Moncef; Alimi, Adel M

    2016-12-01

    The goal of this study is to investigate the influence of mental fatigue on event-related potential P300 features (maximum peak, minimum amplitude, latency and period) during virtual wheelchair navigation. For this purpose, an experimental environment was set up based on customizable environmental parameters (luminosity, number of obstacles and obstacle velocities). A correlation study between P300 and fatigue ratings was conducted. Finally, the best-correlated features supplied three classification algorithms: a Multi-Layer Perceptron (MLP), Linear Discriminant Analysis and a Support Vector Machine. The results showed that the maximum-peak feature over visual and temporal regions, as well as the period feature over frontal, fronto-central and visual regions, were correlated with mental fatigue levels. On the other hand, the minimum amplitude and latency features did not show any correlation. Among the classification techniques, the MLP showed the best performance, although the differences between classification techniques are minimal. These findings can help in designing suitable mental-fatigue-based wheelchair control.

  5. Networking Multiple Autonomous Air and Ocean Vehicles for Oceanographic Research and Monitoring

    NASA Astrophysics Data System (ADS)

    McGillivary, P. A.; Borges de Sousa, J.; Rajan, K.

    2013-12-01

    Autonomous underwater and surface vessels (AUVs and ASVs) are coming into wider use as components of oceanographic research, including ocean observing systems. Unmanned airborne vehicles (UAVs) are now available at modest cost, allowing multiple UAVs to be deployed with multiple AUVs and ASVs. For optimal use, good communication and coordination among vehicles is essential. We report on the use of multiple AUVs networked in communication with multiple UAVs. The UAVs are augmented by inferential reasoning software developed at MBARI that allows UAVs to recognize oceanographic fronts and change their navigation and control. This in turn allows UAVs to automatically map frontal features, as well as to direct AUVs and ASVs to proceed to such features and conduct sampling via onboard sensors to provide validation for airborne mapping. ASVs can also act as data nodes for communication between UAVs and AUVs, as well as collecting data from onboard sensors, while AUVs can sample the water column vertically. This allows more accurate estimation of phytoplankton biomass and productivity, and can be used in conjunction with UAV sampling to determine air-sea flux of gases (e.g. CO2, CH4, DMS) affecting carbon budgets and atmospheric composition. In particular, we describe tests in July 2013 conducted off Sesimbra, Portugal in conjunction with the Portuguese Navy by the University of Porto and MBARI with the goal of tracking large fish in the upper water column with coordinated air/surface/underwater measurements. A thermal gradient was observed in the infrared by a low flying UAV, which was used to dispatch an AUV to obtain ground truth to demonstrate the event-response capabilities using such autonomous platforms. Additional field studies in the future will facilitate integration of multiple unmanned systems into research vessel operations. The strength of hardware and software tools described in this study is to permit fundamental oceanographic measurements of both ocean and atmosphere over temporal and spatial scales that have previously been problematic. The methods demonstrated are particularly suited to the study of oceanographic fronts and for tracking and mapping oil spills or plankton blooms. With the networked coordination of multiple autonomous systems, individual components may be changed out while ocean observations continue, allowing coarse to fine spatial studies of hydrographic features over temporal dimensions that would otherwise be difficult, including diurnal and tidal periods. Constraints on these methods currently involve coordination of data archiving systems into shipboard operating systems, familiarization of oceanographers with these methods, and existing nearshore airspace use constraints on UAVs. An important outcome of these efforts is to understand the methodology for using multiple heterogeneous autonomous vehicles for targeted science exploration.

  6. a Redundant Gnss-Ins Low-Cost Uav Navigation Solution for Professional Applications

    NASA Astrophysics Data System (ADS)

    Navarro, J.; Parés, M. E.; Colomina, I.; Bianchi, G.; Pluchino, S.; Baddour, R.; Consoli, A.; Ayadi, J.; Gameiro, A.; Sekkas, O.; Tsetsos, V.; Gatsos, T.; Navoni, R.

    2015-08-01

    This paper presents the current results of the FP7 GINSEC project. Its goal is to build a pre-commercial prototype of a low-cost, accurate and reliable system for the professional UAV market. Low-cost, in this context, stands for the use of sensors in the most affordable segment of the market, especially MEMS IMUs and GNSS receivers. Reliability applies to the ability of the autopilot to cope with situations where unfavourable GNSS reception conditions or strong electromagnetic fields make the computation of the position and/or attitude of the UAV difficult. Professional and accurate mean that, at least using post-processing techniques such as PPP, it will be possible to reach cm-level precisions that open the door to a range of applications demanding high levels of quality in positioning, such as precision agriculture or mapping. To achieve such a goal, a rigorous sensor error modelling approach, the use of redundant IMUs and a dual-GNSS receiver setup, together with close-coupling techniques and an extended Kalman filter with self-analysis capabilities, have been used. Although the project is not yet complete, the results obtained up to now prove the feasibility of the aforementioned goal, especially in those aspects related to position determination. Research work is still ongoing to estimate the heading using a dual-GNSS receiver setup; preliminary results prove the validity of this approach for relatively long baselines, although positive results are also expected when these are shorter than 1 m - which is a necessary requisite for small-sized UAVs.

  7. UAV Data Exchange Test Bed for At-Sea and Ashore Information Systems

    DTIC Science & Technology

    2014-12-02

    [Table-of-contents excerpt from the report: 3.2 Visualization using NASA World Wind; 3.3 Visualization using Quantum GIS; Data Server and the Global Positioning Warehouse; 4.1 Naval Position Repository Installation; 4.2-4.4 Data Exchange between CSD and NPR; 5 Maritime Tactical Command and Control; 5.1 Global Command ...]

  8. Multi-objective control for cooperative payload transport with rotorcraft UAVs.

    PubMed

    Gimenez, Javier; Gandolfo, Daniel C; Salinas, Lucio R; Rosales, Claudio; Carelli, Ricardo

    2018-06-01

    A novel kinematic formation controller based on null-space theory is proposed to transport a cable-suspended payload with two rotorcraft UAVs, considering collision avoidance, wind perturbations, and proper distribution of the load weight. An accurate 6-DoF nonlinear dynamic model of a helicopter and models for flexible cables and payload are included to test the proposal in a realistic scenario. System stability is demonstrated using Lyapunov theory and several simulation results show the good performance of the approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
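
    The core of null-space-based behavioural control can be sketched compactly: the secondary-task velocity is projected into the null space of the primary-task Jacobian so it cannot disturb the primary task. The toy Jacobians, gains and task errors below are assumptions for illustration and do not reproduce the paper's payload-transport controller.

    ```python
    # Minimal sketch of null-space-based task composition (not the paper's full
    # controller): the secondary-task velocity is projected into the null space of
    # the primary-task Jacobian so it cannot interfere with the primary task.
    import numpy as np

    def nsb_velocity(J1, sigma1_err, J2, sigma2_err, gain1=1.0, gain2=1.0):
        """J1, J2: task Jacobians; sigma*_err: task errors. Returns commanded velocity."""
        J1_pinv = np.linalg.pinv(J1)
        J2_pinv = np.linalg.pinv(J2)
        v1 = J1_pinv @ (gain1 * sigma1_err)                 # primary task (e.g. obstacle avoidance)
        N1 = np.eye(J1.shape[1]) - J1_pinv @ J1             # null-space projector of task 1
        v2 = N1 @ (J2_pinv @ (gain2 * sigma2_err))          # secondary task (e.g. formation keeping)
        return v1 + v2

    if __name__ == "__main__":
        J1 = np.array([[1.0, 0.0, 0.0]])        # toy 1-D primary task on a 3-DoF vehicle
        J2 = np.eye(3)                          # toy secondary task on all DoF
        v = nsb_velocity(J1, np.array([0.2]), J2, np.array([0.0, 0.1, -0.1]))
        print(v)
    ```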

  9. Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model

    PubMed Central

    Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal

    2016-01-01

    In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomous tracking and chasing a moving target. The first main component of this novel algorithm is a combined global matching and local tracking approach: the algorithm initially finds feature correspondences using an improved binary descriptor developed for global feature matching and an iterative Lucas–Kanade optical flow algorithm for local feature tracking. The second main module is the use of an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, a hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third proposed module is a heuristic local outlier factor (to the best of our knowledge, it is utilized for the first time to deal with outlier features in a visual tracking application), which further maximizes the representation of the target object; here we formulate outlier feature detection as a binary classification problem with the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor with 640 × 512 image resolution and compares favorably with the most popular state-of-the-art trackers in terms of robustness, efficiency and accuracy. PMID:27589769
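
    A brief sketch of the local-tracking ingredient alone (pyramidal Lucas–Kanade flow with a forward–backward consistency check) is given below; the global binary-descriptor matching, local geometric filter and local outlier factor of the full tracker are not reproduced, and the window size and thresholds are assumed values.

    ```python
    # Sketch of the local-tracking ingredient only: pyramidal Lucas-Kanade optical
    # flow with a forward-backward consistency check to reject unreliable points.
    import cv2
    import numpy as np

    def track_points(prev_gray, curr_gray, prev_pts, fb_thresh=1.0):
        """prev_pts: (N, 1, 2) float32 points. Returns (tracked_prev, tracked_curr)."""
        lk = dict(winSize=(21, 21), maxLevel=3,
                  criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
        nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None, **lk)
        back, st2, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, nxt, None, **lk)
        fb_err = np.linalg.norm(prev_pts - back, axis=2).ravel()
        good = (st.ravel() == 1) & (st2.ravel() == 1) & (fb_err < fb_thresh)
        return prev_pts[good], nxt[good]
    ```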

  10. Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle

    DOE PAGES

    Hruska, Ryan; Mitchell, Jessica; Anderson, Matthew; ...

    2012-09-17

    During the summer of 2010, an Unmanned Aerial Vehicle (UAV) hyperspectral in-flight calibration and characterization experiment of the Resonon PIKA II imaging spectrometer was conducted at the U.S. Department of Energy's Idaho National Laboratory (INL) UAV Research Park. The purpose of the experiment was to validate the radiometric calibration of the spectrometer and determine the georegistration accuracy achievable from the on-board global positioning system (GPS) and inertial navigation sensors (INS) under operational conditions. In order for low-cost hyperspectral systems to compete with larger systems flown on manned aircraft, they must be able to collect data suitable for quantitative scientific analysis. The results of the in-flight calibration experiment indicate an absolute average agreement of 96.3%, 93.7% and 85.7% for calibration tarps of 56%, 24%, and 2.5% reflectivity, respectively. The achieved planimetric accuracy was 4.6 meters (based on RMSE).

  11. Geometric processing workflow for vertical and oblique hyperspectral frame images collected using UAV

    NASA Astrophysics Data System (ADS)

    Markelin, L.; Honkavaara, E.; Näsi, R.; Nurminen, K.; Hakala, T.

    2014-08-01

    Remote sensing based on unmanned airborne vehicles (UAVs) is a rapidly developing field of technology. UAVs enable accurate, flexible, low-cost and multiangular measurements of 3D geometric, radiometric, and temporal properties of land and vegetation using various sensors. In this paper we present a geometric processing chain for a multiangular measurement system that is designed for measuring object directional reflectance characteristics in a wavelength range of 400-900 nm. The technique is based on a novel, lightweight spectral camera designed for UAV use. The multiangular measurement is conducted by collecting vertical and oblique area-format spectral images. End products of the geometric processing are image exterior orientations, 3D point clouds and digital surface models (DSM). These data are needed for the radiometric processing chain that produces reflectance image mosaics and multiangular bidirectional reflectance factor (BRF) observations. The geometric processing workflow consists of the following three steps: (1) determining approximate image orientations using Visual Structure from Motion (VisualSFM) software, (2) calculating improved orientations and sensor calibration using a method based on self-calibrating bundle block adjustment (standard photogrammetric software; this step is optional), and finally (3) creating dense 3D point clouds and DSMs using Photogrammetric Surface Reconstruction from Imagery (SURE) software, which is based on a semi-global matching algorithm and is capable of providing a point density corresponding to the pixel size of the image. We have tested the geometric processing workflow over various targets, including test fields, agricultural fields, lakes and complex 3D structures like forests.

  12. Design and implementation for integrated UAV multi-spectral inspection system

    NASA Astrophysics Data System (ADS)

    Zhu, X.; Li, X.; Yan, F.

    2018-04-01

    In order to improve the working efficiency of transmission line inspection and reduce the labour intensity of inspectors, this paper presents an Unmanned Aerial Vehicle (UAV) inspection system architecture for transmission line inspection. A light-duty design for the different inspection equipment and processing terminals is completed. The paper presents a reference design for the information-processing terminal that supports access by inspection and interactive equipment, and all performance indicators of the inspection information processing are obtained through tests. Practical application shows that the UAV inspection system supports access and management of different types of mainstream fault-detection equipment, and can independently diagnose the detected information to generate inspection reports in line with industry norms, meeting the requirements for fast, timely, and efficient power line inspection work.

  13. Skeletal camera network embedded structure-from-motion for 3D scene reconstruction from UAV images

    NASA Astrophysics Data System (ADS)

    Xu, Zhihua; Wu, Lixin; Gerke, Markus; Wang, Ran; Yang, Huachao

    2016-11-01

    Structure-from-Motion (SfM) techniques have been widely used for 3D scene reconstruction from multi-view images. However, due to the large computational costs of SfM methods, there is a major challenge in processing highly overlapping images, e.g. images from unmanned aerial vehicles (UAV). This paper embeds a novel skeletal camera network (SCN) into SfM to enable efficient 3D scene reconstruction from a large set of UAV images. First, the flight control data are used within a weighted graph to construct a topologically connected camera network (TCN) to determine the spatial connections between UAV images. Second, the TCN is refined using a novel hierarchical degree bounded maximum spanning tree to generate an SCN, which contains a subset of edges from the TCN and ensures that each image is involved in at least a 3-view configuration. Third, the SCN is embedded into the SfM to produce a novel SCN-SfM method, which allows performing tie-point matching only for the actually connected image pairs. The proposed method was applied in three experiments with images from two fixed-wing UAVs and an octocopter UAV, respectively. In addition, the SCN-SfM method was compared to three other methods for image connectivity determination. The comparison shows a significant reduction in the number of matched images if our method is used, which leads to lower computational costs. At the same time, the achieved scene completeness and geometric accuracy are comparable.
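
    The network-thinning idea can be illustrated with a few lines of code: build a weighted image-connectivity graph and keep a maximum spanning tree as the backbone of a skeletal network. The sketch below uses networkx and toy overlap scores; the paper's hierarchical degree-bounded variant and its 3-view guarantee are not reproduced.

    ```python
    # Illustration of the network-thinning idea: build a weighted image-connectivity
    # graph (TCN) and keep a maximum spanning tree as the backbone of a skeletal
    # network. The degree-bounded refinement from the paper is not reproduced here.
    import networkx as nx

    def skeletal_backbone(pairwise_overlap):
        """pairwise_overlap: dict {(img_i, img_j): overlap_score} from flight-control data."""
        tcn = nx.Graph()
        for (i, j), w in pairwise_overlap.items():
            tcn.add_edge(i, j, weight=w)
        return nx.maximum_spanning_tree(tcn, weight="weight")

    if __name__ == "__main__":
        overlaps = {("a", "b"): 0.9, ("b", "c"): 0.8, ("a", "c"): 0.4, ("c", "d"): 0.7}
        scn = skeletal_backbone(overlaps)
        print(sorted(scn.edges(data="weight")))
    ```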

  14. Design of a reconfigurable liquid hydrogen fuel tank for use in the Genii unmanned aerial vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adam, Patrick; Leachman, Jacob

    2014-01-29

    Long endurance flight, on the order of days, is a leading flight performance characteristic for Unmanned Aerial Vehicles (UAVs). Liquid hydrogen (LH2) is well suited to providing multi-day flight times with a specific energy 2.8 times that of conventional kerosene based fuels. However, no such system of LH2 storage, delivery, and use is currently available for commercial UAVs. In this paper, we develop a light weight LH2 dewar for integration and testing in the proton exchange membrane (PEM) fuel cell powered, student designed and constructed, Genii UAV. The fuel tank design is general for scaling to suit various UAV platforms. A cylindrical vacuum-jacketed design with removable end caps was chosen to incorporate various fuel level gauging, pressurizing, and slosh mitigation systems. Heat and mechanical loadings were modeled to compare with experimental results. Mass performance of the fuel tank is characterized by the fraction of liquid hydrogen to full tank mass, and the insulation performance was characterized by effective thermal conductivity and boil-off rate.

  15. Design of a reconfigurable liquid hydrogen fuel tank for use in the Genii unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Adam, Patrick; Leachman, Jacob

    2014-01-01

    Long endurance flight, on the order of days, is a leading flight performance characteristic for Unmanned Aerial Vehicles (UAVs). Liquid hydrogen (LH2) is well suited to providing multi-day flight times with a specific energy 2.8 times that of conventional kerosene based fuels. However, no such system of LH2 storage, delivery, and use is currently available for commercial UAVs. In this paper, we develop a light weight LH2 dewar for integration and testing in the proton exchange membrane (PEM) fuel cell powered, student designed and constructed, Genii UAV. The fuel tank design is general for scaling to suit various UAV platforms. A cylindrical vacuum-jacketed design with removable end caps was chosen to incorporate various fuel level gauging, pressurizing, and slosh mitigation systems. Heat and mechanical loadings were modeled to compare with experimental results. Mass performance of the fuel tank is characterized by the fraction of liquid hydrogen to full tank mass, and the insulation performance was characterized by effective thermal conductivity and boil-off rate.

  16. Integration of UAV and ground-based Structure from Motion with Multi-View Stereo photogrammetry and hydrological data to quantify hillslope gully erosion processes in tropical savanna

    NASA Astrophysics Data System (ADS)

    Koci, J.; Jarihani, B.; Sidle, R. C.; Wilkinson, S. N.; Bartley, R.

    2017-12-01

    Structure from Motion with Multi-View Stereo (SfM-MVS) photogrammetry provides a cost-effective method of rapidly acquiring high resolution (sub-meter) topographic data, but is rarely used in hydrogeomorphic investigations of gully erosion. This study integrates high resolution topographic and land cover data derived from an unmanned aerial vehicle (UAV) and ground-based SfM-MVS photogrammetry, with rainfall and gully discharge data, to elucidate hydrogeomorphic processes driving hillslope gully erosion. The study is located within a small (13 km2) dry-tropical savanna catchment within the Burdekin River Basin, northeast Australia, which is a major contributor of sediments and nutrients to the Great Barrier Reef World Heritage Area. A pre-wet season UAV survey covered an entire hillslope gully system (0.715 km2), and is used to derive topography, ground cover and hydrological flow pathways in the gully contributing area. Ground-based surveys of a single active gully (650 m2) within the broader hillslope are compared between pre- and post-wet season conditions to quantify gully geomorphic change. Rainfall, recorded near the head of the gully, is related to gully discharge during sporadic storm events. The study provides valuable insights into the relationships among hydrological flow pathways, ground cover, rainfall and runoff, and spatial patterns of gully morphologic change. We demonstrate how UAV and ground-based SfM-MVS photogrammetry can be used to improve hydrogeomorphic process understanding and aid in the modelling and management of hillslope gully systems.

  17. Visual Navigation in Nocturnal Insects.

    PubMed

    Warrant, Eric; Dacke, Marie

    2016-05-01

    Despite their tiny eyes and brains, nocturnal insects have evolved a remarkable capacity to visually navigate at night. Whereas some use moonlight or the stars as celestial compass cues to maintain a straight-line course, others use visual landmarks to navigate to and from their nest. These impressive abilities rely on highly sensitive compound eyes and specialized visual processing strategies in the brain. ©2016 Int. Union Physiol. Sci./Am. Physiol. Soc.

  18. Visual map and instruction-based bicycle navigation: a comparison of effects on behaviour.

    PubMed

    de Waard, Dick; Westerhuis, Frank; Joling, Danielle; Weiland, Stella; Stadtbäumer, Ronja; Kaltofen, Leonie

    2017-09-01

    Cycling with a classic paper map was compared with navigating with a moving map displayed on a smartphone, and with auditory and visual turn-by-turn route guidance. Spatial skills were found to be related to navigation performance, though only when navigating from a paper or electronic map, not with turn-by-turn (instruction-based) navigation. While navigating, cyclists fixated on the devices that present visual information 25% of the time. Navigating from a paper map required the most mental effort, and both young and older cyclists preferred electronic over paper map navigation. In particular, a dedicated turn-by-turn guidance device was favoured. Visual maps are particularly useful for cyclists with higher spatial skills. Turn-by-turn information is used by all cyclists, and it is useful to make these directions available in all devices. Practitioner Summary: Electronic navigation devices are preferred over a paper map. People with lower spatial skills benefit most from turn-by-turn guidance information, presented either auditorily or on a dedicated device. People with higher spatial skills perform well with all devices. It is advised to keep in mind that all users benefit from turn-by-turn information when developing a navigation device for cyclists.

  19. Vision-Based Detection and Distance Estimation of Micro Unmanned Aerial Vehicles

    PubMed Central

    Gökçe, Fatih; Üçoluk, Göktürk; Şahin, Erol; Kalkan, Sinan

    2015-01-01

    Detection and distance estimation of micro unmanned aerial vehicles (mUAVs) is crucial for (i) the detection of intruder mUAVs in protected environments; (ii) sense and avoid purposes on mUAVs or on other aerial vehicles and (iii) multi-mUAV control scenarios, such as environmental monitoring, surveillance and exploration. In this article, we evaluate vision algorithms as alternatives for detection and distance estimation of mUAVs, since other sensing modalities entail certain limitations on the environment or on the distance. For this purpose, we test Haar-like features, histogram of gradients (HOG) and local binary patterns (LBP) using cascades of boosted classifiers. Cascaded boosted classifiers allow fast processing by performing detection tests at multiple stages, where only candidates passing earlier simple stages are processed at the subsequent, more complex stages. We also integrate a distance estimation method with our system utilizing geometric cues with support vector regressors. We evaluated each method on indoor and outdoor videos that are collected in a systematic way and also on videos having motion blur. Our experiments show that, using boosted cascaded classifiers with LBP, near real-time detection and distance estimation of mUAVs are possible in about 60 ms indoors (1032×778 resolution) and 150 ms outdoors (1280×720 resolution) per frame, with a detection rate of 0.96 F-score. However, the cascaded classifiers using Haar-like features lead to better distance estimation since they can position the bounding boxes on mUAVs more accurately. On the other hand, our time analysis yields that the cascaded classifiers using HOG train and run faster than the other algorithms. PMID:26393599
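
    The distance-estimation stage can be sketched as a regression from a geometric cue, here the detected bounding-box height in pixels, to metric distance using a support vector regressor. The synthetic training data and hyperparameters below are placeholders, not the calibration used in the paper.

    ```python
    # Sketch of the distance-estimation stage: regress target distance from a
    # geometric cue (the detected bounding-box height in pixels) with a support
    # vector regressor. Training data below are synthetic placeholders.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(42)
    true_dist = rng.uniform(2.0, 25.0, 300)                     # metres (synthetic)
    box_height = 900.0 / true_dist + rng.normal(0, 2.0, 300)    # inverse-size cue + noise

    model = SVR(kernel="rbf", C=100.0, epsilon=0.1)
    model.fit(box_height.reshape(-1, 1), true_dist)

    test_heights = np.array([[45.0], [90.0], [300.0]])
    print(model.predict(test_heights))   # estimated distances in metres
    ```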

  20. Preliminary clinical trial in percutaneous nephrolithotomy using a real-time navigation system for percutaneous kidney access

    NASA Astrophysics Data System (ADS)

    Rodrigues, Pedro L.; Moreira, António H. J.; Rodrigues, Nuno F.; Pinho, A. C. M.; Fonseca, Jaime C.; Lima, Estevão.; Vilaça, João. L.

    2014-03-01

    Background: Precise needle puncture of renal calyces is a challenging and essential step for successful percutaneous nephrolithotomy. This work tests and evaluates, through a clinical trial, a real-time navigation system to plan and guide percutaneous kidney puncture. Methods: A novel system, entitled i3DPuncture, was developed to aid surgeons in establishing the desired puncture site and the best virtual puncture trajectory, by gathering and processing data from a tracked needle with optical passive markers. In order to navigate and superimpose the needle onto a preoperative volume, the patient, 3D image data and tracker system were previously registered intraoperatively using seven points that were strategically chosen based on rigid bone structures and the nearby kidney area. In addition, relevant anatomical structures for surgical navigation were automatically segmented using a multi-organ segmentation algorithm that clusters volumes based on statistical properties and a minimum description length criterion. For each cluster, a rendering transfer function enhanced the visualization of different organs and surrounding tissues. Results: One puncture attempt was sufficient to achieve a successful kidney puncture. The puncture took 265 seconds, and 32 seconds were necessary to plan the puncture trajectory. The virtual puncture path was followed correctly until the needle tip reached the desired kidney calyx. Conclusions: This new solution provided spatial information regarding the needle inside the body and the possibility to visualize surrounding organs. It may offer a promising and innovative solution for percutaneous punctures.
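
    Point-based rigid registration from a handful of fiducials, such as the seven points mentioned above, is commonly solved in closed form with an SVD (Kabsch/Horn) fit; the sketch below illustrates that generic approach and is not claimed to be the i3DPuncture implementation.

    ```python
    # Sketch of point-based rigid registration (least-squares, Kabsch/SVD), one
    # common way to align tracker space to image space from a small set of
    # corresponding fiducials. Illustrative only.
    import numpy as np

    def rigid_register(src, dst):
        """src, dst: (N, 3) corresponding points. Returns rotation R and translation t
        such that R @ src_i + t ~= dst_i in the least-squares sense."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        fiducials = rng.uniform(-50, 50, (7, 3))                # tracker-space points (mm)
        R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
        image_pts = fiducials @ R_true.T + np.array([5.0, -2.0, 10.0])
        R, t = rigid_register(fiducials, image_pts)
        print(np.allclose(R, R_true), np.round(t, 3))
    ```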

  1. Small Whiskbroom Imager for atmospheric compositioN monitorinG (SWING) from an Unmanned Aerial Vehicle (UAV): Results from the 2014 AROMAT campaign

    NASA Astrophysics Data System (ADS)

    Merlaud, Alexis; Tack, Frederik; Constantin, Daniel; Fayt, Caroline; Maes, Jeroen; Mingireanu, Florin; Mocanu, Ionut; Georgescu, Lucian; Van Roozendael, Michel

    2015-04-01

    The Small Whiskbroom Imager for atmospheric compositioN monitorinG (SWING) is an instrument dedicated to atmospheric trace gas retrieval from an Unmanned Aerial Vehicle (UAV). The payload is based on a compact visible spectrometer and a scanning mirror to collect scattered sunlight. Its weight, size, and power consumption are, respectively, 920 g, 27 x 12 x 12 cm3, and 6 W. The custom-built 2.5 m flying wing UAV is electrically powered, has a typical airspeed of 100 km/h, and can operate at a maximum altitude of 3 km. Both the payload and the UAV were developed in the framework of a collaboration between the Belgian Institute for Space Aeronomy (BIRA-IASB) and the Dunarea de Jos University of Galati, Romania. We present here SWING-UAV test flights dedicated to NO2 measurements and performed in Romania on 10 and 11 September 2014, during the Airborne ROmanian Measurements of Aerosols and Trace gases (AROMAT) campaign. The UAV performed 5 flights in the vicinity of the large thermal power station of Turceni (44.67° N, 23.4° E). The UAV was operated in visual range during the campaign, up to 900 m AGL, downwind of the plant and crossing its exhaust plume. The spectra recorded in flight are analyzed with the Differential Optical Absorption Spectroscopy (DOAS) method. The retrieved NO2 Differential Slant Column Densities (DSCDs) are up to 1.5e17 molec/cm2 and reveal the horizontal gradients around the plant. The DSCDs are converted to vertical columns and compared with coincident car-based DOAS measurements. We also present the near-future perspective of the SWING-UAV observation system, which includes flights in 2015 above the Black Sea to quantify ship emissions, the addition of SO2 as a target species, and autopilot flights at higher altitudes to cover a typical satellite pixel extent (10x10 km2).
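
    The core DOAS retrieval step can be sketched as a linear least-squares fit: the measured differential optical depth ln(I0/I) is modelled as a trace-gas cross-section scaled by the slant column plus a low-order polynomial for broadband effects. All inputs in the sketch below (cross-section shape, wavelength grid, reference spectrum) are synthetic placeholders.

    ```python
    # Sketch of the core DOAS step: model the measured differential optical depth
    # ln(I0/I) as cross_section * DSCD plus a low-order polynomial (broadband
    # terms), and solve by linear least squares. All inputs here are synthetic.
    import numpy as np

    def doas_fit(wavelength, intensity, reference, cross_section, poly_order=3):
        """Returns the fitted differential slant column density (molecules/cm^2)."""
        optical_depth = np.log(reference / intensity)
        # Design matrix: trace-gas cross-section + polynomial in wavelength.
        x = (wavelength - wavelength.mean()) / np.ptp(wavelength)
        A = np.column_stack([cross_section, np.vander(x, poly_order + 1)])
        coeffs, *_ = np.linalg.lstsq(A, optical_depth, rcond=None)
        return coeffs[0]

    if __name__ == "__main__":
        wl = np.linspace(425.0, 490.0, 500)                       # nm
        xs = 1e-19 * (1 + np.sin(wl / 2.0))                       # fake cross-section, cm^2
        true_dscd = 1.5e17                                        # molec/cm^2
        I0 = np.exp(-0.001 * (wl - 450.0) ** 2 / 100.0) + 1.0     # smooth reference
        I = I0 * np.exp(-xs * true_dscd)
        print(f"{doas_fit(wl, I, I0, xs):.3e}")                   # ~1.5e17
    ```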

  2. Spread Spectrum Applications in Unmanned Aerial Vehicles

    DTIC Science & Technology

    1994-06-01

    Specter can be launched from the ground or from F/A-18 and F-16 aircraft. The Specter carries the Advanced Tactical Air Reconnaissance System (ATARS) ... the transition should be easy. While ATARS is Specter's designated payload, it can carry other payloads weighing up to 400 pounds: electronic ... implement a 650 km UAV. The combination of ATARS digital imagery and a real-time data link, together with the Specter's ability to fly low, fast, navigate ...

  3. Intent of Study on the Use of a Dual Airborne Laser Scanner (ALS) in Conjunction with a Tactical Grade Inertial Measurement Unit (IMU) for Unmanned Aerial Vehicle (UAV) Navigation and Mapping in Unknown, Non-Global Positioning System (GPS), Environments

    DTIC Science & Technology

    2006-08-05

    [Report documentation form excerpt: sponsor/monitor is the AF Office of Scientific Research, 875 N. Randolph St., Room 3112] ... the process is often time-consuming and expensive. As the IMU market is experiencing a migration trend towards Micro Electro-Mechanical Systems (MEMS ...

  4. Global Versus Reactive Navigation for Joint UAV-UGV Missions in a Cluttered Environment

    DTIC Science & Technology

    2012-06-01

    ... spaces. The vehicle uses a two-wheel differential drive system with a third omnidirectional caster for balance. This uncomplicated system saves ... wheels, two differential drive wheels and one omni-directional caster wheel. The vehicle changes the direction of its movement by altering the speed of ... [List-of-figures excerpt: Virtual Speed Versus Time; Figure 23: Heading and Yaw Rate Versus Time; Figure 24: Individual Wheel Speeds Versus Time]

  5. Method for the visualization of landform by mapping using low altitude UAV application

    NASA Astrophysics Data System (ADS)

    Sharan Kumar, N.; Ashraf Mohamad Ismail, Mohd; Sukor, Nur Sabahiah Abdul; Cheang, William

    2018-05-01

    Unmanned Aerial Vehicles (UAVs) and digital photogrammetry are evolving rapidly in mapping technology, and the significance of and need for digital landform mapping are growing over the years. In this study, a mapping workflow is applied to obtain two different input datasets: the orthophoto and the DSM. A fine flying technique is used to capture Low Altitude Aerial Photography (LAAP). A low-altitude UAV (drone) with a fixed advanced camera was used for imagery, while digital photogrammetric processing using PhotoScan was applied for cartographic data collection. Data processing through photogrammetry and orthomosaic generation are the main applications. High image quality is essential for the effectiveness and quality of the usual mapping outputs, such as the 3D model, Digital Elevation Model (DEM), Digital Surface Model (DSM) and orthoimages. The accuracy of Ground Control Points (GCPs), flight altitude and camera resolution are essential for a good-quality DEM and orthophoto.

  6. Aerial Images from AN Uav System: 3d Modeling and Tree Species Classification in a Park Area

    NASA Astrophysics Data System (ADS)

    Gini, R.; Passoni, D.; Pinto, L.; Sona, G.

    2012-07-01

    The use of aerial imagery acquired by Unmanned Aerial Vehicles (UAVs) is scheduled within the FoGLIE project (Fruition of Goods Landscape in Interactive Environment): it starts from the need to enhance the natural, artistic and cultural heritage, to make it more usable by employing mobile audiovisual 3D reconstruction systems, and to improve monitoring procedures by using new media to integrate the fruition phase with the preservation one. The pilot project focuses on a test area, Parco Adda Nord, which encloses various types of goods (small buildings, agricultural fields and different tree species and bushes). Multispectral high-resolution images were taken by two digital compact cameras: a Pentax Optio A40 for RGB photos and a Sigma DP1 modified to acquire the NIR band. Then, some tests were performed to analyze the quality of the UAV images for both photogrammetric and photo-interpretation purposes, to validate the vector-sensor system and the image block geometry, and to study the feasibility of tree species classification. Many pre-signalized control points were surveyed with GPS to allow accuracy analysis. Aerial Triangulations (ATs) were carried out with commercial photogrammetric software, Leica Photogrammetry Suite (LPS) and PhotoModeler, with manual or automatic selection of tie points, to pick out the pros and cons of each package in managing non-conventional aerial imagery as well as the differences in the modeling approach. Further analyses were done on the differences between the EO parameters and the corresponding data coming from the on-board UAV navigation system.

  7. Development and Evaluation of a UAV-Photogrammetry System for Precise 3D Environmental Modeling.

    PubMed

    Shahbazi, Mozhdeh; Sohn, Gunho; Théau, Jérôme; Menard, Patrick

    2015-10-30

    The specific requirements of UAV-photogrammetry necessitate particular solutions for system development, which have mostly been ignored or not assessed adequately in recent studies. Accordingly, this paper presents the methodological and experimental aspects of correctly implementing a UAV-photogrammetry system. The hardware of the system consists of an electric-powered helicopter, a high-resolution digital camera and an inertial navigation system. The software of the system includes the in-house programs specifically designed for camera calibration, platform calibration, system integration, on-board data acquisition, flight planning and on-the-job self-calibration. The detailed features of the system are discussed, and solutions are proposed in order to enhance the system and its photogrammetric outputs. The developed system is extensively tested for precise modeling of the challenging environment of an open-pit gravel mine. The accuracy of the results is evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy are assessed. The experiments demonstrated that 1.55 m horizontal and 3.16 m vertical absolute modeling accuracy could be achieved via direct geo-referencing, which was improved to 0.4 cm and 1.7 cm after indirect geo-referencing.

  8. Development and Evaluation of a UAV-Photogrammetry System for Precise 3D Environmental Modeling

    PubMed Central

    Shahbazi, Mozhdeh; Sohn, Gunho; Théau, Jérôme; Menard, Patrick

    2015-01-01

    The specific requirements of UAV-photogrammetry necessitate particular solutions for system development, which have mostly been ignored or not assessed adequately in recent studies. Accordingly, this paper presents the methodological and experimental aspects of correctly implementing a UAV-photogrammetry system. The hardware of the system consists of an electric-powered helicopter, a high-resolution digital camera and an inertial navigation system. The software of the system includes the in-house programs specifically designed for camera calibration, platform calibration, system integration, on-board data acquisition, flight planning and on-the-job self-calibration. The detailed features of the system are discussed, and solutions are proposed in order to enhance the system and its photogrammetric outputs. The developed system is extensively tested for precise modeling of the challenging environment of an open-pit gravel mine. The accuracy of the results is evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy are assessed. The experiments demonstrated that 1.55 m horizontal and 3.16 m vertical absolute modeling accuracy could be achieved via direct geo-referencing, which was improved to 0.4 cm and 1.7 cm after indirect geo-referencing. PMID:26528976

  9. Towards a New Generation of Time-Series Visualization Tools in the ESA Heliophysics Science Archives

    NASA Astrophysics Data System (ADS)

    Perez, H.; Martinez, B.; Cook, J. P.; Herment, D.; Fernandez, M.; De Teodoro, P.; Arnaud, M.; Middleton, H. R.; Osuna, P.; Arviset, C.

    2017-12-01

    During the last decades, a varied set of Heliophysics missions have allowed the scientific community to gain better knowledge of the solar atmosphere and activity. The remote sensing images of missions such as SOHO have paved the way for Helio-based spatial data visualization software such as JHelioViewer/Helioviewer. On the other hand, the huge amount of in-situ measurements provided by other missions such as Cluster provides a wide base for plot visualization software whose reach is still far from being fully exploited. The Heliophysics Science Archives within the ESAC Science Data Center (ESDC) already provide a first generation of tools for time-series visualization focusing on each mission's needs: visualization of quicklook plots, cross-calibration time series, pre-generated/on-demand multi-plot stacks (Cluster), basic plot zoom in/out options (Ulysses) and easy navigation through the plots in time (Ulysses, Cluster, ISS-Solaces). However, needs evolve, and scientists involved in new missions require multi-variable plotting, interactive synchronization of heat-map stacks and axis variable selection, among other improvements. The new Heliophysics archives (such as Solar Orbiter) and the evolution of existing ones (Cluster) intend to address these new challenges. This paper provides an overview of the different approaches for visualizing time series followed within the ESA Heliophysics Archives and their foreseen evolution.

  10. An investigation into multi-dimensional prediction models to estimate the pose error of a quadcopter in a CSP plant setting

    NASA Astrophysics Data System (ADS)

    Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann

    2016-05-01

    The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.
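
    As an illustration of what such a multi-dimensional error model looks like in code, the sketch below trains a generic regressor to map a six-component pose vector to an expected measurement error on synthetic data; the regressor choice and the error-generating function are placeholders, not the models evaluated in the paper.

    ```python
    # Sketch of a multi-dimensional error model: learn a mapping from a measured
    # pose vector to the expected CV measurement error. The regressor and the
    # synthetic data are placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    poses = rng.uniform(-1.0, 1.0, (2000, 6))                 # x, y, z, roll, pitch, yaw (normalised)
    # Synthetic ground truth: error grows with distance and attitude angle.
    pose_error = (0.05 + 0.2 * np.linalg.norm(poses[:, :3], axis=1)
                  + 0.1 * np.abs(poses[:, 3:]).sum(axis=1) + rng.normal(0, 0.01, 2000))

    X_train, X_test, y_train, y_test = train_test_split(poses, pose_error, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(f"validation R^2: {model.score(X_test, y_test):.3f}")
    ```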

  11. Mapping Gnss Restricted Environments with a Drone Tandem and Indirect Position Control

    NASA Astrophysics Data System (ADS)

    Cledat, E.; Cucci, D. A.

    2017-08-01

    The problem of autonomously mapping highly cluttered environments, such as urban and natural canyons, is intractable with the current UAV technology. The reason lies in the absence or unreliability of GNSS signals due to partial sky occlusion or multi-path effects. High quality carrier-phase observations are also required in efficient mapping paradigms, such as Assisted Aerial Triangulation, to achieve high ground accuracy without the need of dense networks of ground control points. In this work we consider a drone tandem in which the first drone flies outside the canyon, where GNSS constellation is ideal, visually tracks the second drone and provides an indirect position control for it. This enables both autonomous guidance and accurate mapping of GNSS restricted environments without the need of ground control points. We address the technical feasibility of this concept considering preliminary real-world experiments in comparable conditions and we perform a mapping accuracy prediction based on a simulation scenario.

  12. Design and control of a vertical takeoff and landing fixed-wing unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Malang, Yasir

    With the goal of extending capabilities of multi-rotor unmanned aerial vehicles (UAVs) for wetland conservation missions, a novel hybrid aircraft design consisting of four tilting rotors and a fixed wing is designed and built. The tilting rotors and nonlinear aerodynamic effects introduce a control challenge for autonomous flight, and the research focus is to develop and validate an autonomous transition flight controller. The overall controller structure consists of separate cascaded Proportional Integral Derivative (PID) controllers whose gains are scheduled according to the rotors' tilt angle. A control mechanism effectiveness factor is used to mix the multi-rotor and fixed-wing control actuators during transition. A nonlinear flight dynamics model is created and transition stability is shown through MATLAB simulations, which proves gain-scheduled control is a good fit for tilt-rotor aircraft. Experiments carried out using the prototype UAV validate simulation results for VTOL and tilted-rotor flight.
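
    The gain-scheduling and actuator-mixing ideas can be sketched briefly: PID gains are interpolated over the rotor tilt angle, and a tilt-dependent effectiveness factor blends rotor and elevon commands. The gain tables, schedule points and blending law below are illustrative placeholders, not the aircraft's tuned values.

    ```python
    # Sketch of gain scheduling by tilt angle with a control-effectiveness blend
    # between multi-rotor and fixed-wing actuators. All values are placeholders.
    import numpy as np

    TILT_POINTS = np.array([0.0, 45.0, 90.0])            # deg: hover -> transition -> cruise
    KP_TABLE = np.array([4.0, 3.0, 1.5])                 # example pitch-rate P gains
    KI_TABLE = np.array([0.8, 0.5, 0.2])
    KD_TABLE = np.array([0.15, 0.10, 0.05])

    def scheduled_gains(tilt_deg):
        """Linearly interpolate PID gains over the rotor tilt angle."""
        return (np.interp(tilt_deg, TILT_POINTS, KP_TABLE),
                np.interp(tilt_deg, TILT_POINTS, KI_TABLE),
                np.interp(tilt_deg, TILT_POINTS, KD_TABLE))

    def mix_actuators(u_pid, tilt_deg):
        """Blend the PID output between rotor thrust-differential and elevon deflection."""
        eff = tilt_deg / 90.0                            # 0: pure multi-rotor, 1: pure fixed-wing
        return {"rotor_cmd": (1.0 - eff) * u_pid, "elevon_cmd": eff * u_pid}

    if __name__ == "__main__":
        kp, ki, kd = scheduled_gains(30.0)
        print(kp, ki, kd, mix_actuators(u_pid=0.4, tilt_deg=30.0))
    ```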

  13. Monitoring of rock glacier dynamics by multi-temporal UAV images

    NASA Astrophysics Data System (ADS)

    Morra di Cella, Umberto; Pogliotti, Paolo; Diotri, Fabrizio; Cremonese, Edoardo; Filippa, Gianluca; Galvagno, Marta

    2015-04-01

    In recent years, several steps forward have been made in understanding rock glacier dynamics, mainly because of their potential evolution into rapid mass movement phenomena. Monitoring the surface movement of creeping mountain permafrost is important for understanding the potential effect of ongoing climate change on such landforms. This study presents the reconstruction of two years of surface movements and DEM changes obtained by multi-temporal analysis of UAV images (provided by a SenseFly Swinglet CAM drone). The movement rates obtained by photogrammetry are compared to those obtained by repeated differential GNSS campaigns on almost fifty points distributed over the rock glacier. Results reveal very good agreement between the velocities obtained by the two methods, as well as between vertical displacements at fixed points. Strengths, weaknesses and practical considerations of this method are discussed. Such a method is very promising, mainly for remote regions with difficult access.

  14. UAV and SfM in Detailed Geomorphological Mapping of Granite Tors: An Example of Starościńskie Skały (Sudetes, SW Poland)

    NASA Astrophysics Data System (ADS)

    Kasprzak, Marek; Jancewicz, Kacper; Michniewicz, Aleksandra

    2017-11-01

    The paper presents an example of using photographs taken by unmanned aerial vehicles (UAV) and processed using the structure from motion (SfM) procedure in a geomorphological study of rock relief. Subject to analysis is a small rock city in the West Sudetes (SW Poland), known as Starościńskie Skały and developed in coarse granite bedrock. The aims of this paper were, first, to compare UAV/SfM-derived data with the cartographic image based on traditional geomorphological field-mapping methods and the digital elevation model derived from airborne laser scanning (ALS), and second, to test whether the proposed combination of UAV and SfM methods may be helpful in recognizing the detailed structure of granite tors. As a result of the conducted UAV flights and digital image post-processing in AgiSoft software, it was possible to obtain datasets (dense point cloud, texture model, orthophotomap, bare-ground-type digital terrain model—DTM) which allowed the surface of the study area to be visualized in detail. In consequence, it was possible to distinguish even very small forms of rock surface microrelief: joints, aplite veins, rills and karren, weathering pits, etc., which are otherwise difficult to map and measure. The study also includes an evaluation of the particular datasets concerning microtopography and discusses the clear advantages of using the UAV/SfM-based DTM in geomorphic studies of tors and rock cities, even those located within forest, as in the presented case study.

  15. A navigation system for the visually impaired using colored navigation lines and RFID tags.

    PubMed

    Seto, Tatsuya

    2009-01-01

    In this paper, we describe a navigation system we developed that supports independent walking by the visually impaired in indoor spaces. The instrument consists of a navigation system and a map information system, both installed on a white cane. The navigation system can follow a colored navigation line set on the floor. A color sensor installed on the tip of the white cane senses the colored navigation line, and the system informs the visually impaired user by vibration that he/she is walking along the navigation line. The color recognition system is controlled by a one-chip microprocessor and can discriminate 6 colored navigation lines. RFID tags and a receiver for these tags are used in the map information system; the RFID tag receiver is also installed on the white cane. The receiver reads the tag information and announces map information to the user by pre-recorded MP3 voice. Three normal subjects, blindfolded with an eye mask, were tested with this system. All of them were able to walk along the navigation line, and the performance of the map information system was good. Therefore, our system will be extremely valuable in supporting the activities of the visually impaired.

  16. Development of Hardware-in-the-Loop Simulation Based on Gazebo and Pixhawk for Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Nguyen, Khoa Dang; Ha, Cheolkeun

    2018-04-01

    Hardware-in-the-loop simulation (HILS) is well known as an effective approach in the design of unmanned aerial vehicle (UAV) systems, enabling engineers to test the control algorithm on a hardware board with a UAV model in software. The performance of HILS is determined by the performance of the control algorithm, the developed model, and the signal transfer between hardware and software; the result of HILS is degraded if any signal cannot be transferred to the correct destination. Therefore, this paper aims to develop middleware software to secure communications in an HILS system for testing the operation of a quad-rotor UAV. In our HILS, the Gazebo software is used to generate a nonlinear six-degrees-of-freedom (6DOF) model, sensor model, and 3D visualization for the quad-rotor UAV. Meanwhile, the flight control algorithm is designed and implemented on the Pixhawk hardware. New middleware software, referred to as the control application software (CAS), is proposed to ensure the connection and data transfer between Gazebo and Pixhawk using a multithreaded structure in Qt Creator. The CAS provides a graphical user interface (GUI), allowing the user to monitor the status of packet transfer, issue flight control commands and tune parameters in real time for the quad-rotor UAV. Numerical implementations have been performed to prove the effectiveness of the middleware software CAS suggested in this paper.
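
    The packet-forwarding role of such middleware can be illustrated, in Python rather than the paper's Qt/C++ CAS, by two threads that relay UDP datagrams between the simulator side and the flight-controller side; the port numbers are placeholders.

    ```python
    # Minimal illustration of the packet-forwarding role of HILS middleware: two
    # threads relay UDP datagrams between the simulator (e.g. Gazebo sensor output)
    # and the flight controller (e.g. Pixhawk). Port numbers are placeholders.
    import socket
    import threading

    def relay(listen_port, forward_addr):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("127.0.0.1", listen_port))
        out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            packet, _ = sock.recvfrom(4096)
            out.sendto(packet, forward_addr)      # could also log/monitor the packet here

    if __name__ == "__main__":
        # Simulator -> flight controller and flight controller -> simulator, in parallel.
        threading.Thread(target=relay, args=(14560, ("127.0.0.1", 14561)), daemon=True).start()
        threading.Thread(target=relay, args=(14562, ("127.0.0.1", 14563)), daemon=True).start()
        threading.Event().wait()                  # keep the main thread alive
    ```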

  17. Using Unmanned Aerial Vehicles (UAVs) to Modeling Tornado Impacts

    NASA Astrophysics Data System (ADS)

    Wagner, M.; Doe, R. K.

    2017-12-01

    Using Unmanned Aerial Vehicles (UAVs) to assess storm damage is a useful research tool. Benefits include their ability to access remote or impassable areas post-storm, identify unknown damage and assist with more detailed site investigations and rescue efforts. Technological advancement of UAVs means that they can capture high-resolution images, often at an affordable price. These images can be used to create 3D environments to better interpret and delineate damage over large areas that would be difficult to cover in ground surveys. This research presents the results of a rapid-response site investigation of the 29 April 2017 Canton, Texas, USA, tornado using low-cost UAVs. This was a multiple, high-impact tornado event measuring EF4 at maximum. Rural farmland was chosen as a challenging location to test both equipment and methodology. Such locations provide multiple impacts at a variety of scales, including structural and vegetation damage and even animal fatalities. The 3D impact models allow for a more comprehensive study prior to clean-up. The results show previously unseen damage and better quantify damage impacts at the local level. 3D digital track swaths were created, allowing for a more accurate track-width determination. These results demonstrate how effective the use of low-cost UAVs can be for rapid-response storm damage assessments, the high quality of data they can achieve, and how they can help us better visualize tornado site investigations.

  18. UAV Monitoring for Enviromental Management in Galapagos Islands

    NASA Astrophysics Data System (ADS)

    Ballari, D.; Orellana, D.; Acosta, E.; Espinoza, A.; Morocho, V.

    2016-06-01

    In the Galapagos Islands, where 97% of the territory is protected and ecosystem dynamics are highly vulnerable, timely and accurate information is key for decision making. An appropriate monitoring system must meet two key requirements: on the one hand, capturing information on a systematic and regular basis, and on the other hand, quickly gathering information on demand for specific purposes. The lack of such a system for geographic information limits the ability of the Galapagos Islands' institutions to evaluate and act upon environmental threats such as invasive species spread and vegetation degradation. In this context, the use of UAVs (unmanned aerial vehicles) for capturing georeferenced images is a promising technology for environmental monitoring and management. This paper explores the potential of UAV images for monitoring degradation of littoral vegetation in Puerto Villamil (Isabela Island, Galapagos, Ecuador). Imagery was captured using two camera types: Red Green Blue (RGB) and Infrared Red Green (NIR). First, vegetation presence was identified through NDVI. Second, object-based classification was carried out for characterization of vegetation vigor. The results demonstrate the feasibility of UAV technology for baseline studies and monitoring of the amount and vigor of littoral vegetation in the Galapagos Islands. It is also shown that UAV images are not only useful for visual interpretation and object delineation, but also for timely production of thematic information for environmental management.
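
    For reference, the NDVI step mentioned above reduces to a per-pixel band ratio; a minimal sketch (with toy reflectance values and an arbitrary 0.3 threshold, both assumptions) is:

    ```python
    # Minimal NDVI sketch with numpy: (NIR - Red) / (NIR + Red), the index used here
    # to separate vegetated from non-vegetated pixels. Band arrays are assumed to be
    # co-registered reflectance rasters of identical shape.
    import numpy as np

    def ndvi(nir, red, eps=1e-6):
        nir = nir.astype(np.float64)
        red = red.astype(np.float64)
        return (nir - red) / (nir + red + eps)

    # Example: a vegetation mask from an arbitrary threshold (0.3 is illustrative).
    nir = np.array([[0.45, 0.10], [0.50, 0.08]])
    red = np.array([[0.08, 0.09], [0.07, 0.10]])
    mask = ndvi(nir, red) > 0.3
    print(mask)   # [[ True False] [ True False]]
    ```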

  19. Health monitoring of unmanned aerial vehicle based on optical fiber sensor array

    NASA Astrophysics Data System (ADS)

    Luo, Yuxiang; Shen, Jingshi; Shao, Fei; Guo, Chunhui; Yang, Ning; Zhang, Jiande

    2017-10-01

    The unmanned aerial vehicle (UAV) in flight must cope with a complicated environment, and in particular withstand harsh weather conditions such as extremes of temperature and pressure. Compared with conventional sensors, the fiber Bragg grating (FBG) sensor has the advantages of small size, light weight, high reliability, high precision, immunity to electromagnetic interference, long life span, moisture resistance and good resistance to corrosion. It is easily embedded in composite structural components of UAVs. In this paper, over 1000 FBG sensors are distributed regularly over a wide area of the UAV body, combining wavelength division multiplexing (WDM), time division multiplexing (TDM) and a multichannel parallel architecture. WDM has the advantage of high spatial resolution, while TDM has the advantage of large capacity and wide range, so it is worthwhile to build a sensor network by combining these multiplexing technologies. For the signal demodulation of the FBG sensor array, WDM works by means of wavelength-scanning light sources and an F-P etalon, while TDM adopts optical time-domain reflectometry. To demodulate efficiently, the most suitable number of multiplexed sensors for a given reflectivity is obtained by curve fitting. Due to the regular array arrangement of FBG sensors on the UAV, the health state of the UAV can be presented as a 3D visualization, which helps operators grasp the health status rapidly and perform a real-time health evaluation.
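
    As a rough companion to the demodulation discussion, the sketch below converts a measured Bragg wavelength shift into strain using the standard first-order FBG response model; the sensitivity coefficients are typical textbook values, not the calibration used in this work.

    ```python
    # Illustrative sketch of the standard first-order FBG model
    #   d(lambda) / lambda0 = k_eps * strain + k_T * dT,
    # used to convert a measured Bragg wavelength shift into strain once a known
    # temperature change has been removed. Coefficients are typical values only.
    K_EPS = 0.78e-6   # per microstrain (i.e., 1 - p_e, with p_e ~ 0.22)
    K_T = 6.7e-6      # per degree Celsius (thermo-optic + thermal expansion)

    def strain_from_shift(lambda0_nm, dlambda_nm, dT_celsius=0.0):
        """Return strain in microstrain, after removing a known temperature change."""
        total = dlambda_nm / lambda0_nm
        return (total - K_T * dT_celsius) / K_EPS

    # A 1550 nm grating shifted by +0.12 nm with a 5 degC temperature rise:
    print(round(strain_from_shift(1550.0, 0.12, dT_celsius=5.0), 1))  # ~56.3 microstrain
    ```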

  20. Shigaraki UAV-Radar Experiment (ShUREX): overview of the campaign with some preliminary results

    NASA Astrophysics Data System (ADS)

    Kantha, Lakshmi; Lawrence, Dale; Luce, Hubert; Hashiguchi, Hiroyuki; Tsuda, Toshitaka; Wilson, Richard; Mixa, Tyler; Yabuki, Masanori

    2017-12-01

    The Shigaraki unmanned aerial vehicle (UAV)-Radar Experiment (ShUREX) is an international (USA-Japan-France) observational campaign, whose overarching goal is to demonstrate the utility of small, lightweight, inexpensive, autonomous UAVs in probing and monitoring the lower troposphere and to promote synergistic use of UAVs and very high frequency (VHF) radars. The 2-week campaign lasting from June 1 to June 14, 2015, was carried out at the Middle and Upper Atmosphere (MU) Observatory in Shigaraki, Japan. During the campaign, the DataHawk UAV, developed at the University of Colorado, Boulder, and equipped with high-frequency response cold wire and pitot tube sensors (as well as an iMET radiosonde), was flown near and over the VHF-band MU radar. Measurements in the atmospheric column in the immediate vicinity of the radar were obtained. Simultaneous and continuous operation of the radar in range imaging mode enabled fine-scale structures in the atmosphere to be visualized by the radar. It also permitted the UAV to be commanded to sample interesting structures, guided in near real time by the radar images. This overview provides a description of the ShUREX campaign and some interesting but preliminary results of the very first simultaneous and intensive probing of turbulent structures by UAVs and the MU radar. The campaign demonstrated the validity and utility of the radar range imaging technique in obtaining very high vertical resolution (~20 m) images of echo power in the atmospheric column, which display evolving fine-scale atmospheric structures in unprecedented detail. The campaign also permitted, for the very first time, the evaluation of the consistency between turbulent kinetic energy dissipation rates in turbulent structures inferred from the spectral broadening of the backscattered radar signal and direct, in situ measurements by the high-frequency response velocity sensor on the UAV. The data also enabled other turbulence parameters, such as the temperature structure function parameter C_T^2 and the refractive index structure function parameter C_n^2, to be measured by sensors on the UAV, along with the radar-inferred refractive index structure function parameter C_{n,radar}^2. The comprehensive dataset collected during the campaign (from the radar, the UAV, the boundary layer lidar, the ceilometer, and radiosondes) is expected to help obtain a better understanding of turbulent atmospheric structures, as well as arrive at a better interpretation of the radar data.

  1. Tablet—next generation sequence assembly visualization

    PubMed Central

    Milne, Iain; Bayer, Micha; Cardle, Linda; Shaw, Paul; Stephen, Gordon; Wright, Frank; Marshall, David

    2010-01-01

    Summary: Tablet is a lightweight, high-performance graphical viewer for next-generation sequence assemblies and alignments. Supporting a range of input assembly formats, Tablet provides high-quality visualizations showing data in packed or stacked views, allowing instant access and navigation to any region of interest, and whole contig overviews and data summaries. Tablet is both multi-core aware and memory efficient, allowing it to handle assemblies containing millions of reads, even on a 32-bit desktop machine. Availability: Tablet is freely available for Microsoft Windows, Apple Mac OS X, Linux and Solaris. Fully bundled installers can be downloaded from http://bioinf.scri.ac.uk/tablet in 32- and 64-bit versions. Contact: tablet@scri.ac.uk PMID:19965881

  2. Architectural Heritage Documentation by Using Low Cost Uav with Fisheye Lens: Otag-I Humayun in Istanbul as a Case Study

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Özerdem, Ö. Z.

    2017-11-01

    The digital documentation of architectural heritage is important for monitoring, preserving and managing, as well as for 3D BIM modelling and time-space VR (virtual reality) applications. Unmanned aerial vehicles (UAVs) have been widely used in these applications thanks to rapid technological developments that enable high resolution images with ground resolutions at the millimeter level. Moreover, it has become possible to produce highly accurate 3D point clouds with structure from motion (SfM) and multi-view stereo (MVS), and to obtain a surface reconstruction of a realistic 3D architectural heritage model by using high-overlap images and 3D modeling software such as ContextCapture, Pix4Dmapper and PhotoScan. In this study, the digital documentation of Otag-i Humayun (the Ottoman Sultan's summer palace), located in Davutpaşa, Istanbul, Turkey, is carried out using a low cost UAV. Data collection was performed with a low cost 3DR Solo UAV carrying a GoPro Hero 4 with a fisheye lens. The data processing was accomplished by using the commercial Pix4D software. Dense point clouds, a true orthophoto and a 3D solid model of Otag-i Humayun were produced as results. A quality check of the produced point clouds has been performed. The results obtained from Otag-i Humayun in Istanbul prove that a low cost UAV with a fisheye lens can be successfully used for architectural heritage documentation.

  3. Visual Navigation during Colony Emigration by the Ant Temnothorax rugatulus

    PubMed Central

    Bowens, Sean R.; Glatt, Daniel P.; Pratt, Stephen C.

    2013-01-01

    Many ants rely on both visual cues and self-generated chemical signals for navigation, but their relative importance varies across species and context. We evaluated the roles of both modalities during colony emigration by Temnothorax rugatulus. Colonies were induced to move from an old nest in the center of an arena to a new nest at the arena edge. In the midst of the emigration the arena floor was rotated 60°around the old nest entrance, thus displacing any substrate-bound odor cues while leaving visual cues unchanged. This manipulation had no effect on orientation, suggesting little influence of substrate cues on navigation. When this rotation was accompanied by the blocking of most visual cues, the ants became highly disoriented, suggesting that they did not fall back on substrate cues even when deprived of visual information. Finally, when the substrate was left in place but the visual surround was rotated, the ants' subsequent headings were strongly rotated in the same direction, showing a clear role for visual navigation. Combined with earlier studies, these results suggest that chemical signals deposited by Temnothorax ants serve more for marking of familiar territory than for orientation. The ants instead navigate visually, showing the importance of this modality even for species with small eyes and coarse visual acuity. PMID:23671713

  4. A biomimetic, energy-harvesting, obstacle-avoiding, path-planning algorithm for UAVs

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Snorri

    This dissertation presents two new approaches to energy harvesting for Unmanned Aerial Vehicles (UAV). One method is based on the Potential Flow Method (PFM); the other seeds a wind-field map based on updraft peak analysis and then applies a variant of the Bellman-Ford algorithm to find the minimum-cost path. Both methods are enhanced by taking into account the performance characteristics of the aircraft using advanced performance theory. The combined approach yields five possible trajectories, from which the one with the minimum energy cost is selected. The dissertation concludes by using the developed theory and modeling tools to simulate the flight paths of two small Unmanned Aerial Vehicles (sUAV) in the 500 kg and 250 kg class. The results show that, in mountainous regions, substantial energy can be recovered, depending on topography and wind characteristics. For the examples presented, as much as 50% of the energy was recovered for a complex, multi-heading, multi-altitude, 170 km mission in an average wind speed of 9 m/s. The algorithms constitute a Generic Intelligent Control Algorithm (GICA) for autonomous unmanned aerial vehicles that enables extraction of atmospheric energy while completing a mission trajectory. At the same time, the algorithm automatically adjusts the flight path in order to avoid obstacles, in a fashion not unlike what one would expect from living organisms, such as birds and insects. This multi-disciplinary approach renders the method biomimetic, i.e., it constitutes a synthetic system that “mimics the formation and function of biological mechanisms and processes.”
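
    The Bellman-Ford variant is not spelled out in the abstract, but the underlying relaxation over a wind-field graph can be sketched as follows; the node names and edge costs (net energy per leg, negative where an updraft returns energy) are invented for illustration.

    ```python
    # Compact Bellman-Ford sketch for the minimum-energy-cost path idea described
    # above. Graph, node names and edge costs are hypothetical; negative costs model
    # legs where harvested updraft energy exceeds the energy spent. (A full
    # implementation would also check for negative cycles.)
    def bellman_ford(edges, nodes, source):
        """edges: list of (u, v, cost). Returns (cost-to-reach, predecessor) dicts."""
        dist = {n: float("inf") for n in nodes}
        pred = {n: None for n in nodes}
        dist[source] = 0.0
        for _ in range(len(nodes) - 1):          # relax all edges |V|-1 times
            for u, v, cost in edges:
                if dist[u] + cost < dist[v]:
                    dist[v] = dist[u] + cost
                    pred[v] = u
        return dist, pred

    nodes = ["A", "B", "C", "D"]
    edges = [("A", "B", 5.0), ("A", "C", 2.0), ("C", "B", -1.5),  # updraft leg returns energy
             ("B", "D", 3.0), ("C", "D", 6.0)]
    dist, pred = bellman_ford(edges, nodes, "A")
    print(dist["D"])   # 3.5, via A -> C -> B -> D
    ```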

  5. Navigation, Guidance and Control For the CICADA Expendable Micro Air Vehicle

    DTIC Science & Technology

    2015-01-01

    aircraft, as shown in Figure 5a. A Tempest UAV mothership was used as the host platform for the CICADA vehicles. Figure 5b shows how two CICADAs were...mounted on wing pylon drop mechanisms located on each wing of the Tempest . The Tempest was needed to carry the CICADAs back within range of the recovery...carried the Tempest and CICADA combination to a maximum altitude of 57,000 feet above sea-level. At that point, Tempest was released from the balloon and

  6. Security Engineering Pilot

    DTIC Science & Technology

    2013-02-28

    needed to detect and isolate the compromised component • Prevent a cyber attack exploit from reading enough information to form a coherent data set...Analysis Signal Copy Selected Sub-Bands • Gimbaled, Stabilized EO/IR Camera Ball • High Precision GPS & INS (eventual swarm capable inter-UAV coherent ... LIDAR , HSI, Chem-Bio • Multi-Platform Distributed Sensor Experiments (eg, MIMO) • Autonomous & Collaborative Multi-Platform Control • Space for

  7. Position Accuracy Analysis of a Robust Vision-Based Navigation

    NASA Astrophysics Data System (ADS)

    Gaglione, S.; Del Pizzo, S.; Troisi, S.; Angrisano, A.

    2018-05-01

    Using images to determine camera position and attitude is a consolidated method that is widespread for applications such as UAV navigation. In harsh environments, where GNSS can be degraded or denied, image-based positioning is a possible candidate for an integrated or alternative system. In this paper, such a method is investigated using a system based on a single camera and 3D maps. A robust estimation method is proposed in order to limit the effect of blunders or noisy measurements on the position solution. The proposed approach is tested using images collected in an urban canyon, where GNSS positioning is very inaccurate. A photogrammetric survey was previously performed to build the 3D model of the tested area. The position accuracy analysis is performed and the effect of the proposed robust method is validated.
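
    The paper's robust estimator is not specified in this abstract, so the sketch below only illustrates the general idea with an off-the-shelf RANSAC-robustified PnP solve in OpenCV: a gross outlier among the 2D-3D correspondences is rejected rather than dragging the position solution. All points, intrinsics and the injected blunder are synthetic.

    ```python
    # General illustration (not the authors' exact estimator): recover camera
    # position/attitude from 2D-3D correspondences with RANSAC-robustified PnP,
    # so blunders and noisy matches have limited influence on the solution.
    import numpy as np
    import cv2

    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    object_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                           [0.5, 0.5, 1.0], [1, 0, 1], [0, 1, 1.5], [1.5, 1, 0.5]],
                          dtype=np.float64)

    # Synthesize observations from a known pose, then corrupt one match (a "blunder").
    rvec_true = np.array([[0.1], [-0.2], [0.05]])
    tvec_true = np.array([[0.3], [-0.1], [4.0]])
    img_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)
    img_pts = img_pts.reshape(-1, 2)
    img_pts[2] += 40.0                      # gross outlier

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, img_pts, K, None,
                                                 reprojectionError=3.0)
    print(ok, tvec.ravel())                 # close to tvec_true; the blunder is rejected
    print("inliers:", inliers.ravel())
    ```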

  8. Inertial aided cycle slip detection and identification for integrated PPP GPS and INS.

    PubMed

    Du, Shuang; Gao, Yang

    2012-10-25

    The recently developed integrated Precise Point Positioning (PPP) GPS/INS system can be useful to many applications, such as UAV navigation systems, land vehicle/machine automation and mobile mapping systems. Since carrier phase measurements are the primary observables in PPP GPS, cycle slips, which often occur due to high dynamics, signal obstructions and low satellite elevation, must be detected and repaired in order to ensure the navigation performance. In this research, a new algorithm for cycle slip detection and identification has been developed. With aiding from the INS, the proposed method jointly uses wide-lane (WL) and extra-wide-lane (EWL) phase combinations to uniquely determine cycle slips in the L1 and L2 frequencies. To verify the efficiency of the algorithm, both tactical-grade and consumer-grade IMUs are tested using a real dataset collected from two field tests. The results indicate that the proposed algorithm can efficiently detect and identify cycle slips and subsequently improve the navigation performance of the integrated system.
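
    As background for the combinations mentioned above, the standard (non-INS-aided) wide-lane check can be sketched as below: the Melbourne-Wübbena combination yields a wide-lane ambiguity estimate that should stay constant between epochs unless a slip occurs. The paper's contribution additionally uses INS prediction and the EWL combination to pin slips on L1 and L2 uniquely; the code is only the textbook building block.

    ```python
    # Hedged illustration of the classical wide-lane (Melbourne-Wubbena) cycle-slip
    # check, not the INS-aided method of the paper. Phases are in cycles, codes in
    # meters; a jump in the wide-lane ambiguity between epochs flags a slip.
    C = 299792458.0
    F1, F2 = 1575.42e6, 1227.60e6          # GPS L1/L2 frequencies (Hz)
    LAMBDA_WL = C / (F1 - F2)              # ~0.862 m wide-lane wavelength

    def widelane_ambiguity(phi1_cyc, phi2_cyc, p1_m, p2_m):
        """Melbourne-Wubbena wide-lane ambiguity estimate (in cycles)."""
        narrowlane_code = (F1 * p1_m + F2 * p2_m) / (F1 + F2)
        return (phi1_cyc - phi2_cyc) - narrowlane_code / LAMBDA_WL

    def detect_slip(prev_ambiguity, curr_ambiguity, threshold=1.0):
        """Flag a slip if the wide-lane ambiguity jumps by more than `threshold` cycles."""
        return abs(curr_ambiguity - prev_ambiguity) > threshold

    # Example: small noise-level change vs. a sudden jump of ~2 cycles.
    print(detect_slip(5.2, 5.3), detect_slip(5.2, 7.4))   # False True
    ```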

  9. Towards a More Efficient Detection of Earthquake Induced FAÇADE Damages Using Oblique Uav Imagery

    NASA Astrophysics Data System (ADS)

    Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G.

    2017-08-01

    Urban search and rescue (USaR) teams require a fast and thorough building damage assessment to focus their rescue efforts accordingly. Unmanned aerial vehicles (UAV) are able to capture relevant data in a short time frame and survey otherwise inaccessible areas after a disaster, and have thus been identified as useful when coupled with RGB cameras for façade damage detection. Existing literature focuses on the extraction of 3D and/or image features as cues for damage. However, little attention has been given to the efficiency of the proposed methods, which hinders their use in an urban search and rescue context. The framework proposed in this paper aims at a more efficient façade damage detection using UAV multi-view imagery. This was achieved by directing all damage classification computations only to the image regions containing the façades, hence discarding the irrelevant areas of the acquired images and consequently reducing the time needed for this task. To accomplish this, a three-step approach is proposed: i) building extraction from the sparse point cloud computed from the nadir images collected in an initial flight; ii) use of the latter as a proxy for façade location in the oblique images captured in subsequent flights; and iii) selection of the façade image regions to be fed to a damage classification routine. The results show that the proposed framework successfully reduces the extracted façade image regions to be assessed for damage six-fold, hence increasing the efficiency of subsequent damage detection routines. The framework was tested on a set of UAV multi-view images over a neighborhood of the city of L'Aquila, Italy, affected in 2009 by an earthquake.

  10. Autonomous UAV-Based Mapping of Large-Scale Urban Firefights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snarski, S; Scheibner, K F; Shaw, S

    2006-03-09

    This paper describes experimental results from a live-fire data collect designed to demonstrate the ability of IR and acoustic sensing systems to detect and map high-volume gunfire events from tactical UAVs. The data collect supports an exploratory study of the FightSight concept in which an autonomous UAV-based sensor exploitation and decision support capability is being proposed to provide dynamic situational awareness for large-scale battalion-level firefights in cluttered urban environments. FightSight integrates IR imagery, acoustic data, and 3D scene context data with prior time information in a multi-level, multi-step probabilistic-based fusion process to reliably locate and map the array of urban firing events and firepower movements and trends associated with the evolving urban battlefield situation. Described here are sensor results from live-fire experiments involving simultaneous firing of multiple sub/super-sonic weapons (2-AK47, 2-M16, 1 Beretta, 1 Mortar, 1 rocket) with high optical and acoustic clutter at ranges up to 400m. Sensor-shooter-target configurations and clutter were designed to simulate UAV sensing conditions for a high-intensity firefight in an urban environment. Sensor systems evaluated were an IR bullet tracking system by Lawrence Livermore National Laboratory (LLNL) and an acoustic gunshot detection system by Planning Systems, Inc. (PSI). The results demonstrate convincingly the ability of the LLNL and PSI sensor systems to accurately detect, separate, and localize multiple shooters and the associated shot directions during a high-intensity firefight (77 rounds in 5 sec) in a high acoustic and optical clutter environment with no false alarms. Preliminary fusion processing was also examined that demonstrated an ability to distinguish co-located shooters (shooter density), range to <0.5 m accuracy at 400m, and weapon type.

  11. A Visual-Cue-Dependent Memory Circuit for Place Navigation.

    PubMed

    Qin, Han; Fu, Ling; Hu, Bo; Liao, Xiang; Lu, Jian; He, Wenjing; Liang, Shanshan; Zhang, Kuan; Li, Ruijie; Yao, Jiwei; Yan, Junan; Chen, Hao; Jia, Hongbo; Zott, Benedikt; Konnerth, Arthur; Chen, Xiaowei

    2018-06-05

    The ability to remember and to navigate to safe places is necessary for survival. Place navigation is known to involve medial entorhinal cortex (MEC)-hippocampal connections. However, learning-dependent changes in neuronal activity in the distinct circuits remain unknown. Here, by using optic fiber photometry in freely behaving mice, we discovered the experience-dependent induction of a persistent-task-associated (PTA) activity. This PTA activity critically depends on learned visual cues and builds up selectively in the MEC layer II-dentate gyrus, but not in the MEC layer III-CA1 pathway, and its optogenetic suppression disrupts navigation to the target location. The findings suggest that the visual system, the MEC layer II, and the dentate gyrus are essential hubs of a memory circuit for visually guided navigation. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Graph theoretic framework based cooperative control and estimation of multiple UAVs for target tracking

    NASA Astrophysics Data System (ADS)

    Ahmed, Mousumi

    Designing the control technique for nonlinear dynamic systems is a significant challenge. Approaches to designing a nonlinear controller are studied, and an extensive study of backstepping-based techniques is performed in this research with the purpose of tracking a moving target autonomously. Our main motivation is to explore controllers for cooperative and coordinating unmanned vehicles in a target tracking application. To start with, a general theoretical framework for target tracking is studied and a controller in a three dimensional environment for a single UAV is designed. This research is primarily focused on finding a generalized method which can be applied to track almost any reference trajectory. The backstepping technique is employed to derive the controller for a simplified UAV kinematic model. This controller can compute three autopilot modes, i.e., velocity, ground heading (or course angle), and flight path angle, for tracking the unmanned vehicle. Numerical implementation is performed in MATLAB with the assumption of having perfect and full state information of the target to investigate the accuracy of the proposed controller. This controller is then frozen for the multi-vehicle problem. Distributed or decentralized cooperative control is discussed in the context of multi-agent systems. A consensus-based cooperative control is studied; such consensus-based control problems can be viewed through the concepts of algebraic graph theory. The communication structure between the UAVs is represented by a dynamic graph, where the UAVs are represented by the nodes and the communication links are represented by the edges. The previously designed controller is augmented so that the group reaches consensus based on its communication. A theoretical development of the controller for the cooperative group of UAVs is presented and the simulation results for different communication topologies are shown. This research also investigates the cases where the communication topology switches to a different topology at particular time instants. Lyapunov analysis is performed to show stability in all cases. Another important aspect of this dissertation research is to implement the controller for the case where perfect or full state information is not available. This necessitates the design of an estimator to estimate the system state. A nonlinear estimator, the Extended Kalman Filter (EKF), is first developed for target tracking with a single UAV. The uncertainties involved with the measurement model and dynamics model are considered as zero mean Gaussian noises with known covariances. Measurements of the full state of the target are not available; only the range, elevation, and azimuth angle are available from an onboard seeker sensor. A separate EKF is designed to estimate the UAV's own state, where the state measurement is available through on-board sensors. The controller computes the three control commands based on the estimated states of the target and its own states. Estimation-based control laws are also implemented for colored measurement noise, and the controller performance is shown in the simulation results. The estimation-based control approach is then extended to the cooperative target tracking case. The target information is available to the network and a separate estimator is used to estimate the target states. All of the UAVs in the network apply the same control law; the only difference is that each UAV updates the commands according to its connections. The simulation is performed for both fixed and time-varying communication topologies. Monte Carlo simulations are also performed with different noise samples to investigate the performance of the estimator. The proposed technique is shown to be simple and robust in noisy environments.
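
    A minimal discrete-time consensus update conveys the graph-based idea used for the cooperative case: each UAV nudges its information state toward its neighbours', and a connected topology drives the team to agreement. The topology, gain and states below are illustrative, not taken from the dissertation.

    ```python
    # Minimal discrete-time consensus sketch: x[i] <- x[i] + gain * sum_j a_ij * (x[j] - x[i]).
    # With a connected graph and a suitable gain, all states converge to the average.
    import numpy as np

    def consensus_step(x, adjacency, gain=0.2):
        x_new = x.copy()
        for i in range(len(x)):
            x_new[i] += gain * sum(adjacency[i][j] * (x[j] - x[i]) for j in range(len(x)))
        return x_new

    # Ring topology among four UAVs sharing, e.g., an estimated target coordinate.
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    x = np.array([10.0, 4.0, -2.0, 6.0])
    for _ in range(50):
        x = consensus_step(x, A)
    print(np.round(x, 3))   # all entries approach the average, 4.5
    ```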

  13. The development of a white cane which navigates the visually impaired.

    PubMed

    Shiizu, Yuriko; Hirahara, Yoshiaki; Yanashima, Kenji; Magatani, Kazushige

    2007-01-01

    In this paper, we describe a navigation system we developed to support independent walking by the visually impaired in indoor spaces. The system is composed of colored navigation lines, RFID tags and an intelligent white cane. In our system, colored marking tapes, called navigation lines, are laid along the walking route, and RFID tags are placed on these lines at each landmark point. The intelligent white cane can sense the color of the navigation line and receive tag information. By vibration of the white cane, the system informs the visually impaired user that he/she is walking along the navigation line. At each landmark point, the system also announces area information via pre-recorded voice. Ten sighted subjects who were blindfolded with an eye mask tested the system. All of them were able to walk along the navigation line, and the performance of the area information system was good. Therefore, we conclude that our system will be extremely valuable in supporting the activities of the visually impaired.

  14. Self-motivated visual scanning predicts flexible navigation in a virtual environment.

    PubMed

    Ploran, Elisabeth J; Bevitt, Jacob; Oshiro, Jaris; Parasuraman, Raja; Thompson, James C

    2014-01-01

    The ability to navigate flexibly (e.g., reorienting oneself based on distal landmarks to reach a learned target from a new position) may rely on visual scanning during both initial experiences with the environment and subsequent test trials. Reliance on visual scanning during navigation harkens back to the concept of vicarious trial and error, a description of the side-to-side head movements made by rats as they explore previously traversed sections of a maze in an attempt to find a reward. In the current study, we examined if visual scanning predicted the extent to which participants would navigate to a learned location in a virtual environment defined by its position relative to distal landmarks. Our results demonstrated a significant positive relationship between the amount of visual scanning and participant accuracy in identifying the trained target location from a new starting position as long as the landmarks within the environment remain consistent with the period of original learning. Our findings indicate that active visual scanning of the environment is a deliberative attentional strategy that supports the formation of spatial representations for flexible navigation.

  15. Visual navigation using edge curve matching for pinpoint planetary landing

    NASA Astrophysics Data System (ADS)

    Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei

    2018-05-01

    Pinpoint landing is challenging for future Mars and asteroid exploration missions. Vision-based navigation schemes based on feature detection and matching are practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and rely on poor-performance measurements, which poses great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of craters tracked across two sequential images are used to determine the relative attitude and position of the lander through a normalized method. Then, to address the error accumulation of relative navigation, a method is developed that integrates the crater-based relative navigation with a crater-based absolute navigation method, which identifies craters using a georeferenced database for continuous estimation of the absolute states. In addition, expressions for the relative state estimation bias are derived. Novel necessary and sufficient observability criteria based on error analysis are provided to improve the navigation performance, and they hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.
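
    The method above works on crater edge curves, which is beyond a short snippet; as a point-feature analogue only, the sketch below recovers the relative attitude and (scale-free) position between two descent images from matched points via an essential-matrix estimate in OpenCV, using synthetic data.

    ```python
    # Point-feature analogue (not the paper's curve-based method): estimate the
    # relative pose between two descent images from matched features via an
    # essential matrix with RANSAC. All geometry below is synthetic.
    import numpy as np
    import cv2

    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])

    rng = np.random.default_rng(0)
    X = np.column_stack([rng.uniform(-1, 1, 40),
                         rng.uniform(-1, 1, 40),
                         rng.uniform(4, 8, 40)])            # 3D "surface" points

    def project(points, R, t):
        cam = (R @ points.T + t).T
        uv = (K @ cam.T).T
        return uv[:, :2] / uv[:, 2:3]

    R_true, _ = cv2.Rodrigues(np.array([[0.02], [0.05], [-0.01]]))   # small attitude change
    t_true = np.array([[0.4], [0.0], [-0.2]])                        # descent + lateral motion

    pts1 = project(X, np.eye(3), np.zeros((3, 1)))
    pts2 = project(X, R_true, t_true)

    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    print(np.round(R, 3))          # close to R_true
    print(t.ravel())               # parallel to t_true, but only up to scale
    ```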

  16. Application of Sensor Fusion to Improve Uav Image Classification

    NASA Astrophysics Data System (ADS)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.
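
    The study's fusion algorithm is not detailed in the abstract; as one classic illustration of Pan/MS fusion, a Brovey-style sketch is shown below with toy, co-registered arrays.

    ```python
    # A small sketch of one classic Pan/colour fusion scheme (Brovey transform);
    # the study's own fusion method may differ. Arrays are toy data assumed to be
    # co-registered and already resampled to the Pan grid.
    import numpy as np

    def brovey_fuse(pan, bands, eps=1e-6):
        """Scale each band by pan / mean(bands); pan and bands share the same grid."""
        bands = bands.astype(np.float64)
        intensity = bands.mean(axis=0) + eps
        return bands * (pan / intensity)

    pan = np.full((4, 4), 0.6)
    rgb = np.stack([np.full((4, 4), 0.2), np.full((4, 4), 0.3), np.full((4, 4), 0.4)])
    fused = brovey_fuse(pan, rgb)
    print(fused[:, 0, 0])   # [0.4 0.6 0.8] - spectral ratios preserved, Pan detail injected
    ```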

  17. Vision for navigation: What can we learn from ants?

    PubMed

    Graham, Paul; Philippides, Andrew

    2017-09-01

    The visual systems of all animals are used to provide information that can guide behaviour. In some cases insects demonstrate particularly impressive visually-guided behaviour and then we might reasonably ask how the low-resolution vision and limited neural resources of insects are tuned to particular behavioural strategies. Such questions are of interest to both biologists and to engineers seeking to emulate insect-level performance with lightweight hardware. One behaviour that insects share with many animals is the use of learnt visual information for navigation. Desert ants, in particular, are expert visual navigators. Across their foraging life, ants can learn long idiosyncratic foraging routes. What's more, these routes are learnt quickly and the visual cues that define them can be implemented for guidance independently of other social or personal information. Here we review the style of visual navigation in solitary foraging ants and consider the physiological mechanisms that underpin it. Our perspective is to consider that robust navigation comes from the optimal interaction between behavioural strategy, visual mechanisms and neural hardware. We consider each of these in turn, highlighting the value of ant-like mechanisms in biomimetic endeavours. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  18. 33 CFR 175.130 - Visual distress signals accepted.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Visual distress signals accepted... (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.130 Visual distress signals accepted. (a) Any of the following signals, when carried in the number required, can be used to meet the...

  19. 33 CFR 175.130 - Visual distress signals accepted.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Visual distress signals accepted... (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.130 Visual distress signals accepted. (a) Any of the following signals, when carried in the number required, can be used to meet the...

  20. 33 CFR 175.130 - Visual distress signals accepted.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Visual distress signals accepted... (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.130 Visual distress signals accepted. (a) Any of the following signals, when carried in the number required, can be used to meet the...

  1. INS/GNSS Integration for Aerobatic Flight Applications and Aircraft Motion Surveying.

    PubMed

    V Hinüber, Edgar L; Reimer, Christian; Schneider, Tim; Stock, Michael

    2017-04-26

    This paper presents field tests of challenging flight applications obtained with a new family of lightweight low-power INS/GNSS (inertial navigation system/global navigation satellite system) solutions based on MEMS (micro-electro-mechanical sensor) machined sensors, being used for UAV (unmanned aerial vehicle) navigation and control as well as for aircraft motion dynamics analysis and trajectory surveying. One key is a 42+ state extended Kalman-filter-based powerful data fusion, which also allows the estimation and correction of parameters that are typically affected by sensor aging, especially when applying MEMS-based inertial sensors, and which is not yet deeply considered in the literature. The paper presents the general system architecture, which allows iMAR Navigation to integrate all classes of inertial sensors and GNSS receivers, from very-low-cost MEMS and high performance MEMS over FOG (fiber optical gyro) and RLG (ring laser gyro) up to HRG (hemispherical resonator gyro) technology, and presents detailed flight test results obtained under extreme flight conditions. As a real-world example, the aerobatic maneuvers of the World Champion 2016 (Red Bull Air Race) are presented. Short consideration is also given to surveying applications, where the ultimate performance of the same data fusion, but applied to gravimetric surveying, is discussed.

  2. INS/GNSS Integration for Aerobatic Flight Applications and Aircraft Motion Surveying

    PubMed Central

    v. Hinüber, Edgar L.; Reimer, Christian; Schneider, Tim; Stock, Michael

    2017-01-01

    This paper presents field tests of challenging flight applications obtained with a new family of lightweight low-power INS/GNSS (inertial navigation system/global navigation satellite system) solutions based on MEMS (micro-electro-mechanical sensor) machined sensors, being used for UAV (unmanned aerial vehicle) navigation and control as well as for aircraft motion dynamics analysis and trajectory surveying. One key is a 42+ state extended Kalman-filter-based powerful data fusion, which also allows the estimation and correction of parameters that are typically affected by sensor aging, especially when applying MEMS-based inertial sensors, and which is not yet deeply considered in the literature. The paper presents the general system architecture, which allows iMAR Navigation to integrate all classes of inertial sensors and GNSS receivers, from very-low-cost MEMS and high performance MEMS over FOG (fiber optical gyro) and RLG (ring laser gyro) up to HRG (hemispherical resonator gyro) technology, and presents detailed flight test results obtained under extreme flight conditions. As a real-world example, the aerobatic maneuvers of the World Champion 2016 (Red Bull Air Race) are presented. Short consideration is also given to surveying applications, where the ultimate performance of the same data fusion, but applied to gravimetric surveying, is discussed. PMID:28445417
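
    The 42+ state filter is far beyond a short snippet, but the predict/update skeleton common to such INS/GNSS fusion can be sketched in a toy 1D constant-velocity form; all matrices below are illustrative assumptions, not the system described above.

    ```python
    # Toy Kalman predict/update cycle illustrating the generic structure behind
    # INS/GNSS fusion filters (here: 1D position/velocity state updated by position
    # fixes). The real filter carries 42+ states including sensor-error/aging terms.
    import numpy as np

    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (position, velocity)
    Q = np.diag([1e-4, 1e-3])                  # process noise
    H = np.array([[1.0, 0.0]])                 # GNSS measures position only
    R = np.array([[0.25]])                     # GNSS position variance (m^2)

    x = np.array([[0.0], [0.0]])
    P = np.eye(2)

    def predict(x, P):
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z):
        y = z - H @ x                          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        return x + K @ y, (np.eye(2) - K @ H) @ P

    for z in ([1.0], [2.1], [3.0]):            # simulated position fixes
        x, P = predict(x, P)
        x, P = update(x, P, np.array([z]))
    print(x.ravel())
    ```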

  3. Bio-inspired UAV routing, source localization, and acoustic signature classification for persistent surveillance

    NASA Astrophysics Data System (ADS)

    Burman, Jerry; Hespanha, Joao; Madhow, Upamanyu; Pham, Tien

    2011-06-01

    A team consisting of Teledyne Scientific Company, the University of California at Santa Barbara and the Army Research Laboratory* is developing technologies in support of automated data exfiltration from heterogeneous battlefield sensor networks to enhance situational awareness for dismounts and command echelons. Unmanned aerial vehicles (UAV) provide an effective means to autonomously collect data from a sparse network of unattended ground sensors (UGSs) that cannot communicate with each other. UAVs are used to reduce the system reaction time by generating autonomous collection routes that are data-driven. Bio-inspired techniques for search provide a novel strategy to detect, capture and fuse data. A fast and accurate method has been developed to localize an event by fusing data from a sparse number of UGSs. This technique uses a bio-inspired algorithm based on chemotaxis or the motion of bacteria seeking nutrients in their environment. A unique acoustic event classification algorithm was also developed based on using swarm optimization. Additional studies addressed the problem of routing multiple UAVs, optimally placing sensors in the field and locating the source of gunfire at helicopters. A field test was conducted in November of 2009 at Camp Roberts, CA. The field test results showed that a system controlled by bio-inspired software algorithms can autonomously detect and locate the source of an acoustic event with very high accuracy and visually verify the event. In nine independent test runs of a UAV, the system autonomously located the position of an explosion nine times with an average accuracy of 3 meters. The time required to perform source localization using the UAV was on the order of a few minutes based on UAV flight times. In June 2011, additional field tests of the system will be performed and will include multiple acoustic events, optimal sensor placement based on acoustic phenomenology and the use of the International Technology Alliance (ITA) Sensor Network Fabric (IBM).
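
    The chemotaxis analogy can be made concrete with a toy run-and-tumble search over a smooth intensity field: keep the current heading while the sensed value improves, pick a random new heading when it worsens. The field model and parameters below are invented; the actual system fuses sparse UGS acoustic data rather than sampling a dense field.

    ```python
    # Hedged sketch of the bacterial run-and-tumble intuition behind chemotaxis-based
    # source localization: advance while the sensed intensity improves, otherwise
    # "tumble" to a random new heading. Source location and field model are made up.
    import math
    import random

    SOURCE = (40.0, -25.0)

    def intensity(x, y):
        """Toy acoustic intensity: inverse-square-like falloff from the source."""
        d2 = (x - SOURCE[0]) ** 2 + (y - SOURCE[1]) ** 2
        return 1.0 / (1.0 + d2)

    def chemotaxis_search(x=0.0, y=0.0, step=1.0, iters=2000, seed=1):
        rng = random.Random(seed)
        heading = rng.uniform(0, 2 * math.pi)
        best = intensity(x, y)
        for _ in range(iters):
            nx, ny = x + step * math.cos(heading), y + step * math.sin(heading)
            if intensity(nx, ny) >= best:        # "run": keep going while improving
                x, y, best = nx, ny, intensity(nx, ny)
            else:                                # "tumble": try a new random heading
                heading = rng.uniform(0, 2 * math.pi)
        return x, y

    print(chemotaxis_search())   # ends near (40, -25), within roughly one step length
    ```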

  4. Morphological and structural changes at the Merapi lava dome monitored using Unmanned Aerial Vehicles (UAVs)

    NASA Astrophysics Data System (ADS)

    Darmawan, H.; Walter, T. R.; Brotopuspito, K. S.; Subandriyo, S.; Nandaka, M. A.

    2017-12-01

    Six gas-driven explosions between 2012 and 2014 changed the morphology and structure of the Merapi lava dome. The explosions mostly occurred during the rainy season and caused NW-SE elongated open fissures that dissected the lava dome. In this study, we conducted UAV photogrammetry before and after the explosions to investigate the morphological and structural changes and to assess the quality of the UAV photogrammetry. The first UAV photogrammetry campaign was conducted on 26 April 2012. After the explosions, we conducted a Terrestrial Laser Scanning (TLS) survey on 18 September 2014 and repeated the UAV photogrammetry on 6 October 2015. We applied the Structure from Motion (SfM) algorithm to reconstruct 3D SfM point clouds and photomosaics from the 2012 and 2015 UAV images. Topographic changes were analyzed by calculating the height difference between the 2012 and 2015 SfM point clouds, while structural changes were investigated by visual comparison of the 2012 and 2015 photomosaics. Moreover, a quality assessment of the UAV photogrammetry results was performed by comparing the 3D SfM point clouds to the TLS dataset. The results show that the 2012 and 2015 SfM point clouds differ by 0.19 and 0.57 m, respectively, from the TLS point cloud. Furthermore, the topographic and structural changes reveal that the 2012-14 explosions were controlled by pre-existing structures. The volume of the 2012-14 explosions is 26,400 ± 1,320 m3 DRE. In addition, we find a structurally delineated unstable block at the southern front of the dome which could collapse in the future. We conclude that the 2012-14 explosions occurred due to interaction between magma intrusion and rain water and were facilitated by pre-existing structures. The unstable block potentially leads to a rock avalanche hazard. Our drone photogrammetry results are very promising, and we therefore recommend using drones for topographic mapping at lava-dome-building volcanoes.

  5. Multi-Criteria Analysis of Uavs Regulations in 6 Countries Using the Analytical Hierarchical Process and Expert Knowledge

    NASA Astrophysics Data System (ADS)

    Morales, A. C.; Paez, D.; Arango, C.

    2015-08-01

    To analyze the current situation of Colombian regulation, it is necessary to compare some specific aspects with the legislation used in other countries where UAV regulation dates back many years. This study focuses on evaluating the options for making the Colombian regulation effective without closing off opportunities for research and development growth, while still guaranteeing the privacy rights of the population. Results from our study are currently being used in the development of the Colombian regulation, and they have proven useful for instigating informed debates and identifying areas where specific needs are to be addressed in Colombia.
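
    For readers unfamiliar with the Analytical Hierarchical Process step implied by the title, the core computation, criterion weights from a pairwise-comparison matrix via its principal eigenvector plus a consistency check, can be sketched as below; the example matrix and criteria are invented.

    ```python
    # AHP sketch: derive criterion weights from a pairwise-comparison matrix via its
    # principal eigenvector and check Saaty's consistency ratio. The 3x3 example
    # (e.g. safety / privacy / innovation) is purely illustrative.
    import numpy as np

    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's random indices

    def ahp_weights(A):
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()
        n = A.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)            # consistency index
        cr = ci / RI[n] if RI.get(n, 0) > 0 else 0.0    # consistency ratio
        return w, cr

    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    w, cr = ahp_weights(A)
    print(np.round(w, 3), round(cr, 3))   # weights ~[0.65, 0.23, 0.12], CR well below 0.1
    ```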

  6. Neural-network-based navigation and control of unmanned aerial vehicles for detecting unintended emissions

    NASA Astrophysics Data System (ADS)

    Zargarzadeh, H.; Nodland, David; Thotla, V.; Jagannathan, S.; Agarwal, S.

    2012-06-01

    Unmanned Aerial Vehicles (UAVs) are versatile aircraft with many applications, including the potential for use to detect unintended electromagnetic emissions from electronic devices. A particular area of recent interest has been helicopter unmanned aerial vehicles. Because of the nature of these helicopters' dynamics, high-performance controller design for them presents a challenge. This paper introduces an optimal controller design via output feedback control for trajectory tracking of a helicopter UAV using a neural network (NN). The output-feedback control system utilizes the backstepping methodology, employing kinematic, virtual, and dynamic controllers and an observer. Optimal tracking is accomplished with a single NN utilized for cost function approximation. The controller positions the helicopter, which is equipped with an antenna, such that the antenna can detect unintended emissions. The overall closed-loop system stability with the proposed controller is demonstrated by using Lyapunov analysis. Finally, results are provided to demonstrate the effectiveness of the proposed control design for positioning the helicopter for unintended emissions detection.

  7. A simple algorithm for distance estimation without radar and stereo vision based on the bionic principle of bee eyes

    NASA Astrophysics Data System (ADS)

    Khamukhin, A. A.

    2017-02-01

    Simple navigation algorithms are needed for small autonomous unmanned aerial vehicles (UAVs). These algorithms can be implemented in a small microprocessor with low power consumption, which helps to reduce the weight of the UAV's computing equipment and to increase the flight range. The proposed algorithm uses only the number of opaque channels (ommatidia in bees) through which a target can be seen when the observer moves from location 1 to location 2 toward the target. The distance estimate is given relative to the distance between locations 1 and 2. A simple scheme of an appositional compound eye is proposed to develop the calculation formula. The distance estimation error analysis shows that the error decreases with an increase in the total number of opaque channels, up to a certain limit. An acceptable error of about 2% is achieved with an angle of view from 3 to 10° when the total number of opaque channels is 21600.
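
    Under a small-angle reading of this idea (not necessarily the paper's exact formula), the remaining distance after the move follows directly from the two channel counts, as sketched below.

    ```python
    # Small-angle sketch of the channel-counting idea: if a target of fixed width
    # spans n1 channels at location 1 and n2 > n1 channels after advancing a
    # baseline b toward it, then angle ~ n * delta and distance ~ width / angle,
    # so the remaining distance is roughly b * n1 / (n2 - n1), i.e. relative to b.
    def relative_distance(n1, n2):
        """Distance from location 2 to the target, in units of the baseline b."""
        if n2 <= n1:
            raise ValueError("target must appear larger after moving toward it")
        return n1 / (n2 - n1)

    # Target seen through 12 channels, then 15 after moving one baseline closer:
    print(relative_distance(12, 15))   # 4.0 baselines still to go
    ```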

  8. 33 CFR 175.110 - Visual distress signals required.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Visual distress signals required... (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.110 Visual distress signals... signals selected from the list in § 175.130 or the alternatives in § 175.135, in the number required, are...

  9. 33 CFR 175.110 - Visual distress signals required.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Visual distress signals required... (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.110 Visual distress signals... signals selected from the list in § 175.130 or the alternatives in § 175.135, in the number required, are...

  10. 33 CFR 175.110 - Visual distress signals required.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Visual distress signals required... (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.110 Visual distress signals... signals selected from the list in § 175.130 or the alternatives in § 175.135, in the number required, are...

  11. 33 CFR 175.110 - Visual distress signals required.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Visual distress signals required... (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.110 Visual distress signals... signals selected from the list in § 175.130 or the alternatives in § 175.135, in the number required, are...

  12. Assessing the Reliability and the Accuracy of Attitude Extracted from Visual Odometry for LIDAR Data Georeferencing

    NASA Astrophysics Data System (ADS)

    Leroux, B.; Cali, J.; Verdun, J.; Morel, L.; He, H.

    2017-08-01

    Airborne LiDAR systems require the use of Direct Georeferencing (DG) in order to compute the coordinates of the surveyed points in the mapping frame. A UAV platform is no exception to this need, but its payload has to be lighter than that installed onboard manned aircraft, so the manufacturer needs to find an alternative to heavy sensors and navigation systems. For the georeferencing of these data, a possible solution is to replace the Inertial Measurement Unit (IMU) with a camera and record the optical flow. The different frames are then processed photogrammetrically so as to extract the External Orientation Parameters (EOP) and, therefore, the path of the camera. The major advantages of this method, called Visual Odometry (VO), are its low cost, the absence of IMU-induced drift, and the option of using Ground Control Points (GCPs), as in airborne photogrammetric surveys. In this paper we present a test bench designed to assess the reliability and accuracy of the attitude estimated from VO outputs. The test bench consists of a trolley which carries a GNSS receiver, an IMU sensor and a camera. The LiDAR is replaced by a tacheometer in order to survey control points that are already known. We have also developed a methodology, applied to this test bench, for the calibration of the external parameters and the computation of the surveyed point coordinates. Several tests have revealed a difference of about 2-3 centimeters between the measured control point coordinates and the known values.

  13. The effect of extended sensory range via the EyeCane sensory substitution device on the characteristics of visionless virtual navigation.

    PubMed

    Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel Robert; Namer-Furstenberg, Rinat; Amedi, Amir

    2014-01-01

    Mobility training programs for helping the blind navigate through unknown places with a White-Cane significantly improve their mobility. However, what is the effect of new assistive technologies, offering more information to the blind user, on the underlying premises of these programs, such as navigation patterns? We developed the virtual-EyeCane, a minimalistic sensory substitution device translating single-point distance into auditory cues identical to those of the EyeCane in the real world. We compared performance in virtual environments when using the virtual-EyeCane, a virtual-White-Cane, no device and visual navigation. We show that the characteristics of virtual-EyeCane navigation differ from navigation with a virtual-White-Cane or no device, that virtual-EyeCane users complete more levels successfully and take shorter paths with fewer collisions than these groups, and we demonstrate the relative similarity of virtual-EyeCane and visual navigation patterns. This suggests that additional distance information indeed changes navigation patterns from virtual-White-Cane use and brings them closer to visual navigation.

  14. Multi-Objective and Multidisciplinary Design Optimisation (MDO) of UAV Systems using Hierarchical Asynchronous Parallel Evolutionary Algorithms

    DTIC Science & Technology

    2007-09-17

    been proposed; these include a combination of variable fidelity models, parallelisation strategies and hybridisation techniques (Coello, Veldhuizen et...Coello et al (Coello, Veldhuizen et al. 2002). 4.4.2 HIERARCHICAL POPULATION TOPOLOGY A hierarchical population topology, when integrated into...to hybrid parallel Multi-Objective Evolutionary Algorithms (pMOEA) (Cantu-Paz 2000; Veldhuizen , Zydallis et al. 2003); it uses a master slave

  15. Challenges in Unmanned Aerial Vehicle Photogrammetry for Archaeological Mapping at High Elevations

    NASA Astrophysics Data System (ADS)

    Adams, J. A.; Wernke, S.

    2015-12-01

    Unmanned Aerial Vehicles (UAVs), especially multi-rotor vehicles, are becoming ubiquitous and their appeal for generating photogrammetry-based maps has grown. The options are many and costs have plummeted in the last five years; however, many challenges persist with their deployment. We mapped the archaeological site Mawchu Llacta, a settlement in the southern highlands of Peru (Figure 1). Mawchu Llacta is a planned colonial town built over a major Inka-era center in the high-elevation grasslands at ~4,000 m asl. The "general resettlement of Indians" was a massive forced resettlement program, for which very little local-level documentation exists. Mawchu Llacta's excellently preserved architecture, including >500 buildings and hundreds of walls spread across ~13 ha, posed significant mapping challenges. Many environmental factors impact UAV deployment. The air pressure at 4,100 m asl is dramatically lower than at sea level. Dry-season diurnal temperature differentials can vary from 7°C to 22°C daily. Hot-and-high conditions frequently occur from late morning to early afternoon. Reaching Mawchu Llacta requires hiking 4 km with 400 m of vertical gain over steep and rocky terrain. There is also no on-site power or secure storage; thus, the UAV must be packable. FAA regulations govern US UAV deployments, but regulations were less stringent in Peru. However, ITAR exemptions and Peruvian customs procedures were required. The Peruvian government has established an importation and approval process that entails leaving the UAV at customs while obtaining the necessary government approvals, both of which can be problematic. We have deployed the Aurora Flight Sciences Skate fixed-wing UAV, an in-house fixed-wing UAV based on the Skywalker X-5 flying wing, and a tethered 9 m3 capacity latex meteorological weather balloon. Development of an autonomous blimp/balloon has been ruled out. A 3DR Solo is being assessed for excavation mapping.

  16. Precise Positioning of Uavs - Dealing with Challenging Rtk-Gps Measurement Conditions during Automated Uav Flights

    NASA Astrophysics Data System (ADS)

    Zimmermann, F.; Eling, C.; Klingbeil, L.; Kuhlmann, H.

    2017-08-01

    For some years now, UAVs (unmanned aerial vehicles) have been commonly used for different mobile mapping applications, such as in the fields of surveying, mining or archeology. To improve the efficiency of these applications, automation of both the flight and the processing of the collected data is currently being pursued. One precondition for automated mapping with UAVs is that the georeferencing is performed directly with cm-accuracy or better. Usually, cm-accurate direct positioning of UAVs is based on an onboard multi-sensor system, which consists of an RTK-capable (real-time kinematic) GPS (global positioning system) receiver and additional sensors (e.g. inertial sensors). In this case, the absolute positioning accuracy essentially depends on the local GPS measurement conditions. Especially during mobile mapping applications in urban areas, these conditions can be very challenging, due to satellite shadowing, non-line-of-sight receptions, signal diffraction or multipath effects. In this paper, two straightforward and easy to implement strategies are described and analyzed, which improve the direct positioning accuracy for UAV-based mapping and surveying applications under challenging GPS measurement conditions. Based on a 3D model of the surrounding buildings and vegetation in the area of interest, a GPS geometry map is determined, which can be integrated into the flight planning process to avoid GPS-challenging environments as far as possible. If these challenging environments cannot be avoided, the GPS positioning solution is improved by using obstruction-adaptive elevation masks to mitigate systematic GPS errors in the RTK-GPS positioning. Simulations and results of field tests demonstrate the benefit of both strategies.
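
    The obstruction-adaptive elevation mask can be sketched as an azimuth-dependent cut-off derived from the building model: satellites below the local mask for their azimuth sector are excluded from the solution. The mask values and satellite list below are invented for illustration.

    ```python
    # Sketch of an obstruction-adaptive elevation mask: instead of one global cut-off
    # angle, each azimuth sector gets its own minimum elevation (derived elsewhere
    # from the 3D building/vegetation model), and only satellites above the local
    # mask are used. Mask values and satellites are invented.
    def usable_satellites(sats, mask, sector_width=30):
        """sats: list of (prn, azimuth_deg, elevation_deg). Keep those above the local mask."""
        kept = []
        for prn, az, el in sats:
            sector = int(az // sector_width) * sector_width
            if el > mask.get(sector, 10.0):       # fall back to a 10 deg global cut-off
                kept.append(prn)
        return kept

    mask = {0: 15, 30: 40, 60: 55, 90: 35, 120: 12, 150: 10,
            180: 10, 210: 25, 240: 50, 270: 45, 300: 20, 330: 15}
    sats = [("G05", 42, 35), ("G12", 75, 60), ("G17", 190, 18), ("G23", 250, 30)]
    print(usable_satellites(sats, mask))   # ['G12', 'G17'] - low satellites behind buildings dropped
    ```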

  17. The Direct Georeferencing Application and Performance Analysis of Uav Helicopter in Gcp-Free Area

    NASA Astrophysics Data System (ADS)

    Lo, C. F.; Tsai, M. L.; Chiang, K. W.; Chu, C. H.; Tsai, G. J.; Cheng, C. K.; El-Sheimy, N.; Ayman, H.

    2015-08-01

    Many disasters have occurred in recent years because of extreme weather changes, so facilitating applications such as environmental detection and monitoring has become very important. Therefore, the development of rapid, low cost systems for collecting near real-time spatial information is critical. Rapid spatial information collection has become an emerging trend for remote sensing and mapping applications. This study develops a Direct Georeferencing (DG) based Unmanned Aerial Vehicle (UAV) helicopter photogrammetric platform in which an Inertial Navigation System (INS)/Global Navigation Satellite System (GNSS) integrated Positioning and Orientation System (POS) is implemented to provide the DG capability of the platform. The performance verification indicates that the proposed platform can capture aerial images successfully. A flight test is performed to verify the positioning accuracy in DG mode without using Ground Control Points (GCP). The preliminary results illustrate that the horizontal DG positioning accuracies in the x and y axes are around 5 meters at a 100 meter flight height, and the positioning accuracy in the z axis is less than 10 meters. Such accuracy is good for near real-time disaster relief. The DG-ready function of the proposed platform guarantees mapping and positioning capability even in GCP-free environments, which is very important for rapid urgent response in disaster relief. Generally speaking, the data processing time for the DG module, including POS solution generation, interpolation, Exterior Orientation Parameters (EOP) generation, and feature point measurements, is less than 1 hour.

  18. Dhaksha, the Unmanned Aircraft System in its New Avatar-Automated Aerial Inspection of INDIA'S Tallest Tower

    NASA Astrophysics Data System (ADS)

    Kumar, K. S.; Rasheed, A. Mohamed; Krishna Kumar, R.; Giridharan, M.; Ganesh

    2013-08-01

    DHAKSHA, the unmanned aircraft system (UAS) developed after several years of research by the Division of Avionics, Department of Aerospace Engineering, MIT Campus of Anna University, recently proved its capabilities during the May 2012 technology demonstration called UAVforge, organised by the Defence Research Project Agency, Department of Defence, USA. Team Dhaksha, with its most stable design, outperformed all the other contestants in the world's toughest UAV challenge, competing against some of the best engineers from prestigious institutions across the globe, such as Middlesex University (UK), NTU and NUS (Singapore) and TU Delft (Netherlands), as well as other UAV industry participants. This has opened up an opportunity for Indian UAVs to establish a presence on the international scene as well. Following the above effort at the Fort Stewart military base in Georgia, USA, the Dhaksha team, with suitable payloads, deployed the UAV at a religious temple festival in Thiruvannamalai District in November 2012, providing the Tamil Nadu Police with instant aerial imagery services over a crowd of 10 lakh pilgrims, and in January 2013 investigated the structural strength of India's tallest structure, the 300 m RCC tower. The developed system consists of a custom-built rotary-wing model with on-board navigation, guidance and control (NGC) systems and a ground control station (GCS) for mission planning, remote access, manual overrides and imagery-related computations. The mission is to fulfill the competition requirements by using a UAS capable of providing a complete solution to the stated problem. In this work, the effort to produce multirotor unmanned aerial systems (UAS) for civilian applications at the MIT Avionics Laboratory is presented.

  19. Lightweight Hyperspectral Mapping System and a Novel Photogrammetric Processing Chain for UAV-based Sensing

    NASA Astrophysics Data System (ADS)

    Suomalainen, Juha; Franke, Jappe; Anders, Niels; Iqbal, Shahzad; Wenting, Philip; Becker, Rolf; Kooistra, Lammert

    2014-05-01

    We have developed a lightweight Hyperspectral Mapping System (HYMSY) and a novel processing chain for UAV-based mapping. The HYMSY consists of a custom pushbroom spectrometer (range 450-950 nm, FWHM 9 nm, ~20 lines/s, 328 pixels/line), a consumer camera (collecting a 16 MPix raw image every 2 seconds), a GPS-Inertial Navigation System (GPS-INS), and synchronization and data storage units. The weight of the system at take-off is 2.0 kg, allowing us to mount it on a relatively small octocopter. The novel processing chain exploits photogrammetry in the georectification of the hyperspectral data. In the first stage, the photos are processed in photogrammetric software, producing a high-resolution RGB orthomosaic, a Digital Surface Model (DSM), and the photogrammetric UAV/camera position and attitude at the moment of each photo. These photogrammetric camera positions are then used to enhance the internal accuracy of the GPS-INS data. The enhanced GPS-INS data are then used to project the hyperspectral data over the photogrammetric DSM, producing a georectified end product. The presented photogrammetric processing chain allows fully automated georectification of hyperspectral data using a compact GPS-INS unit while still producing, in UAV use, higher georeferencing accuracy than would be possible with the traditional processing method. During 2013, we operated HYMSY on 150+ octocopter flights at 60+ sites or days. On a typical flight we produced, for a 2-10 ha area, an RGB orthoimage mosaic at 1-5 cm resolution, a DSM at 5-10 cm resolution, and a hyperspectral datacube at 10-50 cm resolution. The targets have mostly been vegetated, including potatoes, wheat, sugar beets, onions, tulips, coral reefs, and heathlands. In this poster we present the Hyperspectral Mapping System and the photogrammetric processing chain with some of our first mapping results.

  20. Unmanned aerial vehicles for surveying marine fauna: assessing detection probability.

    PubMed

    Hodgson, Amanda; Peel, David; Kelly, Natalie

    2017-06-01

    Aerial surveys are conducted for various fauna to assess abundance, distribution, and habitat use over large spatial scales. They are traditionally conducted using light aircraft with observers recording sightings in real time. Unmanned Aerial Vehicles (UAVs) offer an alternative with many potential advantages, including eliminating human risk. To be effective, this emerging platform needs to provide detection rates of animals comparable to traditional methods. UAVs can also acquire new types of information, and these new data require a reevaluation of the traditional analyses used in aerial surveys, including estimating the probability of detecting animals. We conducted 17 replicate UAV surveys of humpback whales (Megaptera novaeangliae) while simultaneously obtaining a 'census' of the population from land-based observations, to assess UAV detection probability. The ScanEagle UAV, carrying a digital SLR camera, continuously captured images (with 75% overlap) along transects covering the visual range of land-based observers. We also used the ScanEagle to conduct focal follows of whale pods (n = 12, mean duration = 40 min), to assess a new method of estimating availability. A comparison of the whale detections from the UAV to the land-based census provided an estimated UAV detection probability of 0.33 (CV = 0.25; incorporating both availability and perception biases), which was not affected by environmental covariates (Beaufort sea state, glare, and cloud cover). According to our focal follows, the mean availability was 0.63 (CV = 0.37), with pods including mother/calf pairs having a higher availability (0.86, CV = 0.20) than those without (0.59, CV = 0.38). The follows also revealed (and provided a potential correction for) a downward bias in group size estimates from the UAV surveys, which resulted from asynchronous diving within whale pods and a relatively short observation window of 9 s. We have shown that UAVs are an effective alternative to traditional methods, providing a detection probability that is within the range of previous studies for our target species. We also describe a method of assessing availability bias that represents the spatial and temporal characteristics of a survey from the same perspective as the survey platform, is benign, and provides additional data on animal behavior. © 2017 by the Ecological Society of America.
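
    The detection-probability bookkeeping described above can be illustrated with a few lines of arithmetic; this is only a toy sketch built from the point estimates quoted in the abstract, and the counts and helper function are hypothetical.

    ```python
    # Toy illustration of the detection-probability arithmetic; counts are hypothetical
    # and chosen so the ratio matches the reported estimate of 0.33.
    uav_detections = 33
    land_census = 100

    p_detection = uav_detections / land_census       # combined availability + perception bias
    availability = 0.63                              # mean availability from the focal follows
    perception = p_detection / availability          # perception bias implied by the two estimates

    def corrected_abundance(raw_count, p_detect):
        """Scale a raw survey count by the estimated detection probability."""
        return raw_count / p_detect

    print(f"detection probability: {p_detection:.2f}")
    print(f"implied perception bias: {perception:.2f}")
    print(f"corrected abundance for a raw count of 40 whales: {corrected_abundance(40, p_detection):.0f}")
    ```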

  1. Modelling multi-rotor UAVs swarm deployment using virtual pheromones

    PubMed Central

    Pujol, Mar; Rizo, Ramón; Rizo, Carlos

    2018-01-01

    In this work, a swarm behaviour for multi-rotor Unmanned Aerial Vehicle (UAV) deployment will be presented. The main contribution of this behaviour is the use of a virtual device for quantitative sematectonic stigmergy, providing more adaptable behaviours in complex environments. It is a fault-tolerant, highly robust behaviour that does not require prior information about the area to be covered, or the assumption of any kind of information signals (GPS, mobile communication networks …), taking into account the specific features of UAVs. This behaviour will be oriented towards emergency tasks. Its main goal will be to cover an area of the environment and later create an ad-hoc communication network that can be used to establish communications inside this zone. Although there are several papers on robotic deployment, it is more difficult to find applications with UAV systems, mainly because of various problems that must be overcome, including limitations in available sensory and on-board processing capabilities and low flight endurance. In addition, behaviours designed for UAVs often have significant limitations on their ability to be used in real tasks, because they assume specific features that are not easily applicable in a general way. Firstly, in this article the characteristics of the simulation environment will be presented. Secondly, a microscopic model for deployment and creation of ad-hoc networks, which implicitly includes stigmergy features, will be shown. Then, the overall swarm behaviour will be modelled, providing a macroscopic model of this behaviour. This model can accurately predict the number of agents needed to cover an area as well as the time required for the deployment process. An experimental analysis through simulation will be carried out in order to verify our models. In this analysis, the influence of both the complexity of the environment and the stigmergy system will be discussed, given the data obtained in the simulation. In addition, the macroscopic and microscopic models will be compared, verifying the number of individuals predicted for each state against the simulation. PMID:29370203
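
    The stigmergic deployment idea can be sketched in a few lines: each agent deposits a virtual pheromone in the cell it occupies and moves towards the least-marked neighbouring cell, so the swarm spreads over the area without GPS or prior maps. This is a generic illustration of quantitative stigmergy, not the authors' microscopic or macroscopic model; the grid size, deposit and evaporation constants are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    grid = np.zeros((20, 20))            # virtual pheromone field over the area to cover
    agents = [(10, 10)] * 5              # all UAVs start from a common launch point
    DEPOSIT, EVAPORATION = 1.0, 0.01

    def neighbours(r, c, shape):
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < shape[0] and 0 <= nc < shape[1]:
                yield nr, nc

    for step in range(500):
        grid *= (1.0 - EVAPORATION)                      # pheromone slowly evaporates
        moved = []
        for r, c in agents:
            grid[r, c] += DEPOSIT                        # mark the visited cell
            cands = list(neighbours(r, c, grid.shape))
            levels = np.array([grid[n] for n in cands])
            best = np.flatnonzero(levels == levels.min())
            moved.append(cands[rng.choice(best)])        # move to the least-marked neighbour
        agents = moved

    print(f"cells visited at least once: {(grid > 0).sum()} / {grid.size}")
    ```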

  2. Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Knox, Lenora A.

    The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency has challenges, which include defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how to best integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture in which the functionality to complete a mission is disseminated across multiple UAVs (distributed) as opposed to being contained in a single UAV (monolithic). The case-study-based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient, based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.

  3. Minerva: An Integrated Geospatial/Temporal Toolset for Real-time Science Decision Making and Data Collection

    NASA Astrophysics Data System (ADS)

    Lees, D. S.; Cohen, T.; Deans, M. C.; Lim, D. S. S.; Marquez, J.; Heldmann, J. L.; Hoffman, J.; Norheim, J.; Vadhavk, N.

    2016-12-01

    Minerva integrates three capabilities that are critical to the success of NASA analogs. It combines NASA's Exploration Ground Data Systems (xGDS) and Playbook software, and MIT's Surface Exploration Traverse Analysis and Navigation Tool (SEXTANT). Together, they help to plan, optimize, and monitor traverses; schedule and track activity; assist with science decision-making and document sample and data collection. Pre-mission, Minerva supports planning with a priori map data (e.g., UAV and satellite imagery) and activity scheduling. During missions, xGDS records and broadcasts live data to a distributed team who take geolocated notes and catalogue samples. Playbook provides live schedule updates and multi-media chat. Post-mission, xGDS supports data search and visualization for replanning and analysis. NASA's BASALT (Biologic Analog Science Associated with Lava Terrains) and FINESSE (Field Investigations to Enable Solar System Science and Exploration) projects use Minerva to conduct field science under simulated Mars mission conditions including 5 and 15 minute one-way communication delays. During the recent BASALT-FINESSE mission, two field scientists (EVA team) executed traverses across volcanic terrain to characterize and sample basalts. They wore backpacks with communications and imaging capabilities, and carried field portable spectrometers. The Science Team was 40 km away in a simulated mission control center. The Science Team monitored imaging (video and still), spectral, voice, location and physiological data from the EVA team via the network from the field, under communication delays. Minerva provided the Science Team with a unified context of operations at the field site, so they could make meaningful remote contributions to the collection of 10's of geotagged samples. Minerva's mission architecture will be presented with technical details and capabilities. Through the development, testing and application of Minerva, we are defining requirements for the design of future capabilities to support human and human-robotic missions to deep space and Mars.

  4. Low-cost, quantitative assessment of highway bridges through the use of unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Ellenberg, Andrew; Kontsos, Antonios; Moon, Franklin; Bartoli, Ivan

    2016-04-01

    Many envision that in the near future the application of Unmanned Aerial Vehicles (UAVs) will impact the civil engineering industry. Use of UAVs is currently experiencing tremendous growth, primarily in military and homeland security applications. It is only a matter of time until UAVs will be widely accepted as platforms for implementing monitoring/surveillance and inspection in other fields. Most UAVs already have payloads as well as hardware/software capabilities to incorporate a number of non-contact remote sensors, such as high resolution cameras, multi-spectral imaging systems, and laser ranging systems (LIDARs). Of critical importance to realizing the potential of UAVs within the infrastructure realm is to establish how (and the extent to which) such information may be used to inform preservation and renewal decisions. Achieving this will depend both on our ability to quantify information from images (through, for example, optical metrology techniques) and to fuse data from the array of non-contact sensing systems. Through a series of applications to both laboratory-scale and field implementations on operating infrastructure, this paper will present and evaluate (through comparison with conventional approaches) various image processing and data fusion strategies tailored specifically for the assessment of highway bridges. Example scenarios that guided this study include the assessment of delaminations within reinforced concrete bridge decks, the quantification of the deterioration of steel coatings, assessment of the functionality of movement mechanisms, and the estimation of live load responses (inclusive of both strain and displacement).

  5. Efficient structure from motion for oblique UAV images based on maximal spanning tree expansion

    NASA Astrophysics Data System (ADS)

    Jiang, San; Jiang, Wanshou

    2017-10-01

    The primary contribution of this paper is an efficient Structure from Motion (SfM) solution for oblique unmanned aerial vehicle (UAV) images. First, an algorithm, considering spatial relationship constraints between image footprints, is designed for match pair selection with the assistance of UAV flight control data and oblique camera mounting angles. Second, a topological connection network (TCN), represented by an undirected weighted graph, is constructed from initial match pairs, which encodes the overlap areas and intersection angles into edge weights. Then, an algorithm, termed MST-Expansion, is proposed to extract the match graph from the TCN, where the TCN is first simplified by a maximum spanning tree (MST). By further analysis of the local structure in the MST, expansion operations are performed on the vertices of the MST for match graph enhancement, which is achieved by introducing critical connections in the expansion directions. Finally, guided by the match graph, an efficient SfM solution is proposed. Through extensive analysis and comparison, its performance is verified using three oblique UAV datasets captured with different multi-camera systems. Experimental results demonstrate that the efficiency of image matching is improved, with speedup ratios ranging from 19 to 35, and that competitive orientation accuracy is achieved from both relative bundle adjustment (BA) without Ground Control Points (GCPs) and absolute BA with GCPs. At the same time, images in the three datasets are successfully oriented. For the orientation of oblique UAV images, the proposed method can therefore be a more efficient solution.
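
    The maximum spanning tree step of the match-graph construction can be illustrated with networkx; the edge weights below are made-up overlap/intersection scores, and the expansion operations described in the paper are not reproduced here.

    ```python
    import networkx as nx

    # Toy topological connection network: nodes are images, edge weights combine
    # overlap area and intersection angle into a single score (values invented).
    tcn = nx.Graph()
    tcn.add_weighted_edges_from([
        ("img_1", "img_2", 0.82), ("img_1", "img_3", 0.40),
        ("img_2", "img_3", 0.75), ("img_2", "img_4", 0.55),
        ("img_3", "img_4", 0.90), ("img_3", "img_5", 0.35),
        ("img_4", "img_5", 0.65),
    ])

    # Simplify the TCN with a maximum spanning tree: keep the strongest connections
    # that still link every image; the paper then expands this tree with extra edges.
    mst = nx.maximum_spanning_tree(tcn, weight="weight")
    for a, b, w in sorted(mst.edges(data="weight")):
        print(f"match {a} <-> {b}  (score {w:.2f})")
    ```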

  6. Interactive dual-volume rendering visualization with real-time fusion and transfer function enhancement

    NASA Astrophysics Data System (ADS)

    Macready, Hugh; Kim, Jinman; Feng, David; Cai, Weidong

    2006-03-01

    Dual-modality imaging scanners combining functional PET and anatomical CT pose a challenge for volumetric visualization, which can be limited by high computational demand and expense. This study aims at providing physicians with multi-dimensional visualization tools to navigate and manipulate the data on a consumer PC. We have maximized the utilization of the pixel-shader architecture of low-cost graphics hardware and texture-based volume rendering to provide visualization tools with a high degree of interactivity. All the software was developed using OpenGL and Silicon Graphics Inc. Volumizer, and tested on a Pentium mobile CPU in a PC notebook with 64 MB of graphics memory. We render the individual modalities separately and perform real-time per-voxel fusion. We designed a novel "alpha-spike" transfer function to interactively identify structures of interest in the volume rendering of PET/CT. This works by assigning a non-linear opacity to the voxels, allowing the physician to selectively eliminate or reveal information from the PET/CT volumes. As the PET and CT are rendered independently, manipulations can be applied to the individual volumes; for instance, a transfer function can be applied to the CT to reveal the lung boundary while the fusion ratio between the CT and PET is adjusted to enhance the contrast of a tumour region, with the resulting manipulated data sets fused together in real time as the adjustments are made. In addition to conventional navigation and manipulation tools, such as scaling, LUT, volume slicing, and others, our strategy permits efficient visualization of PET/CT volume rendering, which can potentially aid in interpretation and diagnosis.
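
    A minimal sketch of what an "alpha-spike"-style opacity curve and per-voxel fusion could look like is given below; the functional form (a narrow opacity spike on a low base) and all parameter names are our assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def alpha_spike(intensity, centre, width, base=0.02, peak=0.9):
        """Non-linear opacity: nearly transparent everywhere except a narrow
        spike of opacity around the intensity range of interest."""
        spike = np.exp(-0.5 * ((intensity - centre) / width) ** 2)
        return base + (peak - base) * spike

    def fuse_voxels(ct, pet, fusion_ratio=0.5):
        """Simple per-voxel linear blend of the two independently rendered volumes."""
        return fusion_ratio * pet + (1.0 - fusion_ratio) * ct

    intensities = np.linspace(0.0, 1.0, 11)
    print(np.round(alpha_spike(intensities, centre=0.7, width=0.05), 3))

    ct = np.random.default_rng(1).random((4, 4, 4))      # stand-ins for rendered volumes
    pet = np.random.default_rng(2).random((4, 4, 4))
    print(fuse_voxels(ct, pet, fusion_ratio=0.7).shape)
    ```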

  7. Satellite Imagery Assisted Road-Based Visual Navigation System

    NASA Astrophysics Data System (ADS)

    Volkova, A.; Gibbens, P. W.

    2016-06-01

    There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel integral approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features in Google Earth* imagery to build a feature database. The same algorithm then detects features in the video stream of an on-board camera. On one level this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level it correlates the features with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes of the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery, and another provider can be used.
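
    The feature-database idea can be sketched with standard OpenCV components: detect features in a georeferenced satellite tile offline, detect features in an on-board camera frame online, and match the two sets. This is a generic illustration rather than the authors' algorithm, and the file names are hypothetical placeholders.

    ```python
    import cv2

    # Hypothetical inputs: a georeferenced satellite tile and one frame from the
    # on-board camera video stream.
    reference = cv2.imread("satellite_tile.png", cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread("onboard_frame.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_ref, des_ref = orb.detectAndCompute(reference, None)   # offline feature database
    kp_frm, des_frm = orb.detectAndCompute(frame, None)       # live features

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_frm), key=lambda m: m.distance)

    # Keep the strongest correspondences; in a navigation filter these would be
    # converted to geographic coordinates via the tile's georeferencing and fused
    # with the inertial solution.
    good = matches[:50]
    print(f"{len(good)} correspondences between the satellite tile and the camera frame")
    ```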

  8. Influence of visual clutter on the effect of navigated safety inspection: a case study on elevator installation.

    PubMed

    Liao, Pin-Chao; Sun, Xinlu; Liu, Mei; Shih, Yu-Nien

    2018-01-11

    Navigated safety inspection based on task-specific checklists can increase the hazard detection rate, theoretically with interference from scene complexity. Visual clutter, a proxy of scene complexity, can theoretically impair visual search performance, but its impact on the effectiveness of safety inspection remains to be explored for the optimization of navigated inspection. This research aims to explore whether the relationship between working memory and hazard detection rate is moderated by visual clutter. Based on a perceptive model of hazard detection, we: (a) developed a mathematical influence model for construction hazard detection; (b) designed an experiment to observe the hazard detection rate with adjusted working memory under different levels of visual clutter, while using an eye-tracking device to observe participants' visual search processes; and (c) utilized logistic regression to analyze the developed model under various levels of visual clutter. The effect of a strengthened working memory on the detection rate through increased search efficiency is more apparent under high visual clutter. This study confirms the role of visual clutter in construction-navigated inspections, thus serving as a foundation for the optimization of inspection planning.

  9. Volcanic Plume Measurements with UAV (Invited)

    NASA Astrophysics Data System (ADS)

    Shinohara, H.; Kaneko, T.; Ohminato, T.

    2013-12-01

    Volatiles in magmas are the driving force of volcanic eruptions, and quantification of volcanic gas flux and composition is important for volcano monitoring. Recently we developed a portable gas sensor system (Multi-GAS) to quantify the volcanic gas composition by measuring volcanic plumes, and we have obtained volcanic gas compositions of actively degassing volcanoes. As the Multi-GAS measures variations of volcanic gas component concentrations in the pumped air (volcanic plume), we need to bring the apparatus into the volcanic plume. Commonly the observer carries the apparatus to the summit crater, but such measurements are not possible when the risk of volcanic eruption is high or when the summit is difficult to approach due to topography, etc. In order to overcome these difficulties, volcanic plume measurements have been performed using manned and unmanned aerial vehicles. Volcanic plume measurements by manned aerial vehicles, however, are also not possible under high risk of eruption, and the strict regulation against modification of the aircraft, such as installing sampling pipes, also causes difficulty due to the high cost. Application of UAVs to volcanic plume measurements has a big advantage in avoiding these problems. The Multi-GAS consists of an IR CO2 and H2O gas analyzer, SO2-H2O chemical sensors and an H2 semiconductor sensor, and the total weight ranges from 3-6 kg including batteries. The necessary conditions of a UAV for volcanic plume measurements with the Multi-GAS are a payload larger than 3 kg, a maximum altitude higher than the plume height, and installation of the sampling pipe without contamination by the exhaust gases, as the exhaust gases contain high concentrations of H2, SO2 and CO2. Up to now, three different types of UAVs have been applied for the measurements: a kite-plane (Sky Remote) at Miyakejima operated by JMA, an unmanned airplane (Air Photo Service) at Shinmoedake, Kirishima volcano, and an unmanned helicopter (Yamaha) at Sakurajima volcano operated by ERI, University of Tokyo. In all cases we could estimate volcanic gas compositions, such as CO2/SO2 ratios, but we also found that it is necessary to improve the techniques to avoid contamination by the exhaust gases and to approach the more concentrated part of the plume. It was also revealed that the aerial measurements have the advantage of a stable background. The errors of the volcanic gas composition estimates are largely due to the large fluctuation of the atmospheric H2O and CO2 concentrations near the ground. The stable atmospheric background obtained by the UAV measurements enables accurate estimates of the volcanic gas compositions. One of the most successful measurements was that on May 18, 2011 at Shinmoedake, Kirishima volcano, during the repeating Vulcanian eruption stage. The major component composition was obtained as H2O = 97, CO2 = 1.5, SO2 = 0.2, H2S = 0.24, H2 = 0.006 mol%; the high CO2 content suggests a relatively deep source of the magma degassing, and the apparent equilibrium temperature, obtained as 400°C, indicates that the gas was cooled during ascent to the surface. Volcanic plume measurement with UAVs will become an important tool for volcano monitoring that provides important information to understand eruption processes.
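
    A common way to turn Multi-GAS time series into a molar ratio is to regress the background-corrected CO2 signal against SO2 and take the slope; the sketch below illustrates this on synthetic data and is not the authors' processing code.

    ```python
    import numpy as np

    # Synthetic plume traverse: SO2 (ppm) above background and CO2 (ppm) with a
    # 400 ppm atmospheric background and a "true" CO2/SO2 ratio of 7.5.
    rng = np.random.default_rng(3)
    so2 = np.abs(np.sin(np.linspace(0, np.pi, 200))) * 5.0 + rng.normal(0, 0.05, 200)
    co2_background = 400.0
    co2 = co2_background + 7.5 * so2 + rng.normal(0, 0.5, 200)

    # Subtract the background, then take the slope of CO2 excess versus SO2.
    co2_excess = co2 - co2_background
    slope, intercept = np.polyfit(so2, co2_excess, 1)
    print(f"estimated CO2/SO2 molar ratio: {slope:.2f}")
    ```

    The stable atmospheric background noted in the abstract matters precisely because the ratio estimate is only as good as this background subtraction.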

  10. Applicability of Deep-Learning Technology for Relative Object-Based Navigation

    DTIC Science & Technology

    2017-09-01

    …possible selections for navigating an unmanned ground vehicle (UGV) is through real-time visual odometry. To navigate in such an environment, the UGV needs to be able to detect, identify, and relate the static…

  11. Insect navigation: do ants live in the now?

    PubMed

    Graham, Paul; Mangan, Michael

    2015-03-01

    Visual navigation is a critical behaviour for many animals, and it has been particularly well studied in ants. Decades of ant navigation research have uncovered many ways in which efficient navigation can be implemented in small brains. For example, ants show us how visual information can drive navigation via procedural rather than map-like instructions. Two recent behavioural observations highlight interesting adaptive ways in which ants implement visual guidance. Firstly, it has been shown that the systematic nest searches of ants can be biased by recent experience of familiar scenes. Secondly, ants have been observed to show temporary periods of confusion when asked to repeat a route segment, even if that route segment is very familiar. Taken together, these results indicate that the navigational decisions of ants take into account their recent experiences as well as the currently perceived environment. © 2015. Published by The Company of Biologists Ltd.

  12. Automatic detection and counting of cattle in UAV imagery based on machine vision technology (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Rahnemoonfar, Maryam; Foster, Jamie; Starek, Michael J.

    2017-05-01

    Beef production is the main agricultural industry in Texas, and livestock are managed on pasture and rangeland, which are usually huge in size and not easily accessible by vehicles. The current research method for livestock location identification and counting is visual observation, which is very time consuming and costly. For animals on large tracts of land, manned aircraft may be necessary to count animals, which is noisy, disturbs the animals, and may introduce a source of error into counts. Such manual approaches are expensive, slow and labor intensive. In this paper we study the combination of a small unmanned aerial vehicle (sUAV) and machine vision technology as a valuable solution to manual animal surveying. A fixed-wing UAV fitted with GPS and a digital RGB camera for photogrammetry was flown at the Welder Wildlife Foundation in Sinton, TX. Over 600 acres were covered in four UAS flights, and the individual photographs were used to develop orthomosaic imagery. To detect animals in the UAV imagery, a fully automatic technique was developed based on the spatial and spectral characteristics of objects. This automatic technique can even detect small animals that are partially occluded by bushes. Experimental results, in comparison to ground truth, show the effectiveness of our algorithm.

  13. Extracting Objects for Aerial Manipulation on UAVs Using Low Cost Stereo Sensors

    PubMed Central

    Ramon Soria, Pablo; Bevec, Robert; Arrue, Begoña C.; Ude, Aleš; Ollero, Aníbal

    2016-01-01

    Giving unmanned aerial vehicles (UAVs) the possibility to manipulate objects vastly extends the range of possible applications. This applies to rotary wing UAVs in particular, where their capability of hovering enables a suitable position for in-flight manipulation. Their manipulation skills must be suitable for primarily natural, partially known environments, where UAVs mostly operate. We have developed an on-board object extraction method that calculates information necessary for autonomous grasping of objects, without the need to provide the model of the object’s shape. A local map of the work-zone is generated using depth information, where object candidates are extracted by detecting areas different to our floor model. Their image projections are then evaluated using support vector machine (SVM) classification to recognize specific objects or reject bad candidates. Our method builds a sparse cloud representation of each object and calculates the object’s centroid and the dominant axis. This information is then passed to a grasping module. Our method works under the assumption that objects are static and not clustered, have visual features and the floor shape of the work-zone area is known. We used low cost cameras for creating depth information that cause noisy point clouds, but our method has proved robust enough to process this data and return accurate results. PMID:27187413

  14. Extracting Objects for Aerial Manipulation on UAVs Using Low Cost Stereo Sensors.

    PubMed

    Ramon Soria, Pablo; Bevec, Robert; Arrue, Begoña C; Ude, Aleš; Ollero, Aníbal

    2016-05-14

    Giving unmanned aerial vehicles (UAVs) the possibility to manipulate objects vastly extends the range of possible applications. This applies to rotary wing UAVs in particular, where their capability of hovering enables a suitable position for in-flight manipulation. Their manipulation skills must be suitable for primarily natural, partially known environments, where UAVs mostly operate. We have developed an on-board object extraction method that calculates information necessary for autonomous grasping of objects, without the need to provide the model of the object's shape. A local map of the work-zone is generated using depth information, where object candidates are extracted by detecting areas different to our floor model. Their image projections are then evaluated using support vector machine (SVM) classification to recognize specific objects or reject bad candidates. Our method builds a sparse cloud representation of each object and calculates the object's centroid and the dominant axis. This information is then passed to a grasping module. Our method works under the assumption that objects are static and not clustered, have visual features and the floor shape of the work-zone area is known. We used low cost cameras for creating depth information that cause noisy point clouds, but our method has proved robust enough to process this data and return accurate results.
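
    The centroid and dominant axis passed to the grasping module can be computed from a sparse object point cloud with an eigen-decomposition of its covariance; the sketch below uses synthetic data and our own helper code, not the authors' implementation.

    ```python
    import numpy as np

    # Synthetic sparse point cloud of an elongated object lying roughly along x.
    rng = np.random.default_rng(4)
    points = rng.normal(0.0, [0.20, 0.03, 0.02], size=(300, 3)) + np.array([1.0, 0.5, 0.1])

    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    dominant_axis = eigvecs[:, -1]               # direction of largest variance

    print("centroid:", np.round(centroid, 3))
    print("dominant axis:", np.round(dominant_axis, 3))
    ```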

  15. a Uav-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (turkey)

    NASA Astrophysics Data System (ADS)

    Haubeck, K.; Prinz, T.

    2013-08-01

    The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and geo-objects using UAV-attached digital small-frame cameras. These monoscopic cameras offer the possibility of obtaining close-range aerial photographs but, when an accurate nadir-waypoint flight is not possible due to choppy or windy weather, they at the same time pose the problem that two single aerial images do not always meet the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from slightly different angles at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photo base of the applied stereo camera and the resulting base-to-height ratio, however, the accuracy of the DTM depends directly on the UAV flight altitude.
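
    The dependence of DTM accuracy on flight altitude follows from normal-case stereo error propagation, roughly sigma_Z ≈ Z² / (B · f) · sigma_px; the sketch below evaluates this relation for an assumed fixed stereo base and focal length (illustrative numbers, not the actual rig parameters).

    ```python
    def expected_depth_error(flight_height_m, base_m, focal_px, parallax_error_px=0.5):
        """Normal-case stereo error propagation: sigma_Z ~ Z**2 / (B * f) * sigma_px,
        with the focal length expressed in pixels."""
        return flight_height_m ** 2 / (base_m * focal_px) * parallax_error_px

    # Assumed rig: a ~0.2 m stereo base (limited by the UAV frame) and ~3000 px focal length.
    for h in (10, 20, 40, 80):
        err = expected_depth_error(h, base_m=0.2, focal_px=3000.0)
        print(f"flight height {h:3d} m  ->  expected height error ~{err:.2f} m")
    ```

    The quadratic growth of the error with flying height is what ties the DTM accuracy to the UAV flight altitude for a fixed photo base.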

  16. Development of Targeting UAVs Using Electric Helicopters and Yamaha RMAX

    DTIC Science & Technology

    2007-05-17

    …including the QNX real-time operating system. The video overlay board is useful to display the onboard camera's image with important information such as… Fully utilizing the built-in multi-processing architecture with inter-process synchronization and communication…

  17. Urban Convoy Escort Utilizing a Swarm of UAV’s

    DTIC Science & Technology

    2009-04-05

    …at USNA for future research projects. [Endnote fragments: J. Cheng, W. Cheng, Nagpal, "Robust and Self-repairing Formation Control…"; "…principles from natural multi-agent systems", Annals of Operations Research, 1997.]

  18. [Fractional vegetation cover of invasive Spartina alterniflora in coastal wetland using unmanned aerial vehicle (UAV) remote sensing].

    PubMed

    Zhou, Zai Ming; Yang, Yan Ming; Chen, Ben Qing

    2016-12-01

    The effective management and utilization of the resources and ecological environment of coastal wetlands require high-precision investigation and analysis of the fractional vegetation cover of the invasive species Spartina alterniflora. In this study, Sansha Bay was selected as the experimental region, and visible and multi-spectral images obtained by a low-altitude UAV in the region were used to monitor the fractional vegetation cover of S. alterniflora. Fractional vegetation cover parameters in the multi-spectral images were then estimated by an NDVI index model, and the accuracy was tested against the visible images as references. Results showed that the vegetation cover of S. alterniflora in the image area was mainly at the medium-high level (40%-60%) and high level (60%-80%). The root mean square error (RMSE) between the NDVI model estimates and the true values was 0.06, while the coefficient of determination R² was 0.92, indicating good consistency between the estimated and true values.
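
    The abstract only states that an "NDVI index model" was used; a commonly applied variant is the dimidiate pixel model, sketched below together with the RMSE and R² accuracy metrics. The endmember NDVI values and the synthetic data are our assumptions.

    ```python
    import numpy as np

    def ndvi(nir, red):
        return (nir - red) / (nir + red + 1e-9)

    def fractional_cover(ndvi_img, ndvi_soil=0.05, ndvi_veg=0.85):
        """Dimidiate pixel model: linear scaling of NDVI between bare-soil and
        full-vegetation endmembers, clipped to [0, 1]."""
        return np.clip((ndvi_img - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0)

    # Synthetic multispectral reflectances and a noisy stand-in for the
    # visible-image reference cover values used to check the accuracy.
    rng = np.random.default_rng(5)
    nir = rng.uniform(0.20, 0.60, 500)
    red = rng.uniform(0.03, 0.10, 500)
    est_fvc = fractional_cover(ndvi(nir, red))
    ref_fvc = np.clip(est_fvc + rng.normal(0, 0.06, 500), 0, 1)

    rmse = np.sqrt(np.mean((est_fvc - ref_fvc) ** 2))
    r2 = 1 - np.sum((est_fvc - ref_fvc) ** 2) / np.sum((ref_fvc - ref_fvc.mean()) ** 2)
    print(f"RMSE = {rmse:.2f}, R^2 = {r2:.2f}")
    ```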

  19. Multi-temporal UAV-borne LiDAR point clouds for vegetation analysis - a case study

    NASA Astrophysics Data System (ADS)

    Mandlburger, Gottfried; Wieser, Martin; Hollaus, Markus; Pfennigbauer, Martin; Riegl, Ursula

    2016-04-01

    In the recent past the introduction of compact and lightweight LiDAR (Light Detection And Ranging) sensors together with progress in UAV (Unmanned Aerial Vehicle) technology allowed the integration of laser scanners on remotely piloted multicopter, helicopter-type and even fixed-wing platforms. The multi-target capabilities of state-of-the-art time-of-flight full-waveform laser sensors operated from low flying UAV-platforms has enabled capturing of the entire 3D structure of semi-transparent objects like deciduous forests under leaf-off conditions in unprecedented density and completeness. For such environments it has already been demonstrated that UAV-borne laser scanning combines the advantages of terrestrial laser scanning (high point density, short range) and airborne laser scanning (bird's eye perspective, homogeneous point distribution). Especially the oblique looking capabilities of scanners with a large field of view (>180°) enable capturing of vegetation from different sides resulting in a constantly high point density also in the sub canopy domain. Whereas the findings stated above were drawn based on a case study carried out in February 2015 with the Riegl VUX-1UAV laser scanner system mounted on a Riegl RiCopter octocopter UAV-platform over an alluvial forest at the Pielach River (Lower Austria), the site was captured a second time with the same sensor system and mission parameters at the end of the vegetation period on October 28th, 2015. The main goal of this experiment was to assess the impact of the late autumn foliage on the achievable 3D point density. Especially the entire understory vegetation and certain tree species (e.g. willow) were still in full leaf whereas the bigger trees (poplar) where already partly defoliated. The comparison revealed that, although both campaigns featured virtually the same laser shot count, the ground point density dropped from 517 points/m2 in February (leaf-off) to 267 points/m2 end of October (leaf-on). The decrease of ca. 50% is compensated by an increase in the upper canopy area (>20 m a.g.l.; Feb: 348 points/m2, Oct: 757 points/m2, increase rate: 118%). The greater leaf area in October results in more laser echoes from the canopy but the density decrease on the ground is not entirely attributed to shadowing from the upper canopy as the point distribution is nearly constant in the medium (10-20 m) and lower (0-10 m) sub-canopy area. The lower density on the ground is rather caused by a densely foliated shrub layer (0.15-3 m; Feb: 178 points/m2, Oct: 259 points/m2, increase rate: 46%). A sharp ground point density drop could be observed in areas covered by an invasive weed species (Fallopia japonica) which keeps its extremely dense foliage till late in the year. In summary, the preliminary point density study has shown the potential of UAV-borne, multi-temporal LiDAR for characterization of seasonal vegetation changes in deciduous environments. It is remarkable that even under leaf-on conditions a very high terrain point density is achievable. Except for the dense shrub layer, the case study has shown a similar 3D point distribution in the sub-canopy area for leaf-off and leaf-on data acquisition.
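
    The per-layer densities quoted above amount to counting normalized point heights within vertical bins and dividing by the covered area; the sketch below shows this bookkeeping on synthetic data, with bin edges similar to those used in the abstract.

    ```python
    import numpy as np

    # Synthetic normalized point cloud: heights above ground (m) over a 50 m x 50 m plot.
    rng = np.random.default_rng(6)
    heights = np.concatenate([
        rng.uniform(0.0, 0.15, 650_000),     # ground and litter
        rng.uniform(0.15, 3.0, 600_000),     # shrub layer
        rng.uniform(10.0, 20.0, 400_000),    # mid canopy
        rng.uniform(20.0, 35.0, 1_900_000),  # upper canopy
    ])
    area_m2 = 50.0 * 50.0

    layers = {
        "ground (<0.15 m)":     (0.0, 0.15),
        "shrub (0.15-3 m)":     (0.15, 3.0),
        "mid canopy (10-20 m)": (10.0, 20.0),
        "upper canopy (>20 m)": (20.0, np.inf),
    }
    for name, (lo, hi) in layers.items():
        n = np.count_nonzero((heights >= lo) & (heights < hi))
        print(f"{name:22s}: {n / area_m2:6.0f} points/m2")
    ```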

  20. Multi-Vehicle Cooperative Control Research at the NASA Armstrong Flight Research Center, 2000-2014

    NASA Technical Reports Server (NTRS)

    Hanson, Curt

    2014-01-01

    A brief introductory overview of multi-vehicle cooperative control research conducted at the NASA Armstrong Flight Research Center from 2000 to 2014. Both flight research projects and paper studies are included. Since 2000, AFRC has been almost continuously pursuing research in the areas of formation flight for drag reduction and automated cooperative trajectories. An overview of results is given, including flight experiments conducted with the F/A-18 and the C-17. Other multi-vehicle cooperative research is discussed, including small UAV swarming projects and automated aerial refueling.

  1. Application of Vehicle Dynamic Modeling in Uavs for Precise Determination of Exterior Orientation

    NASA Astrophysics Data System (ADS)

    Khaghani, M.; Skaloud, J.

    2016-06-01

    Advances in unmanned aerial vehicle (UAV) and especially micro aerial vehicle (MAV) technology, together with the increasing quality and decreasing price of imaging devices, have resulted in the growing use of MAVs in photogrammetry. The practicality of MAV mapping is greatly enhanced by the ability to determine the parameters of exterior orientation (EO) with sufficient accuracy, in both the absolute and relative senses (change of attitude between successive images). While differential carrier-phase GNSS satisfies cm-level positioning accuracy, precise attitude determination is essential for both direct sensor orientation (DiSO) and integrated sensor orientation (ISO) in corridor mapping or in block-configuration imaging over surfaces with low texture. The limited cost, size, and weight of MAVs impose limitations on the quality of onboard navigation sensors and put emphasis on exploiting the full capacity of the available resources. Typically short flying times (10-30 minutes) also limit the possibility of estimating and/or correcting factors such as sensor misalignment and poor attitude initialization of the inertial navigation system (INS). This research aims at increasing the accuracy of attitude determination in both the absolute and relative senses with no extra sensors onboard. In comparison to the classical INS/GNSS setup, a novel approach to integrated state estimation is presented here, in which a vehicle dynamic model (VDM) is used as the main process model. Such a system benefits from the information available from the autopilot and the physical properties of the platform to enhance the determination of the trajectory and, consequently, the parameters of exterior orientation. The navigation system employs a differential carrier-phase GNSS receiver and a micro-electro-mechanical system (MEMS) grade inertial measurement unit (IMU), together with the MAV control input from the autopilot. Monte-Carlo simulation has been performed on trajectories for typical corridor mapping and block imaging. Results reveal a considerable reduction in attitude errors with respect to the conventional INS/GNSS system, in both the absolute and relative senses. This eventually translates into higher redundancy and accuracy for photogrammetry applications.

  2. An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas.

    PubMed

    Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso

    2016-11-01

    This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red-green-blue (RGB) sensor for the acquisition of the images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes of 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%-50% and 70%-40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of the orthomosaicked images processed with Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of the UAV images and the expected root mean square error of a UAV orthomosaic. The results indicate that a balance among all the proposed parameters is useful for optimizing mission planning and image processing, altitude above ground level (AGL) being the main parameter because of its influence on the root mean square error (RMSE).
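
    The ratio defined above relates the orthomosaic RMSE to the ground sample distance, which itself follows from GSD = pixel pitch × flight height / focal length; the sketch below evaluates this for an assumed camera and an assumed RMSE-to-GSD ratio (both illustrative, not the paper's values).

    ```python
    def ground_sample_distance(flight_height_m, focal_mm, pixel_pitch_um):
        """GSD (m/pixel) = pixel pitch * flight height / focal length."""
        return (pixel_pitch_um * 1e-6) * flight_height_m / (focal_mm * 1e-3)

    # Assumed sensor: 4.7 mm focal length and 1.55 um pixel pitch; assumed RMSE/GSD ratio of 2.
    for agl in (30, 40, 50, 60, 70, 80):
        gsd = ground_sample_distance(agl, focal_mm=4.7, pixel_pitch_um=1.55)
        expected_rmse = 2.0 * gsd
        print(f"AGL {agl:2d} m: GSD = {gsd * 100:.1f} cm, expected RMSE ~ {expected_rmse * 100:.1f} cm")
    ```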

  3. a Micro-Uav with the Capability of Direct Georeferencing

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Mabillard, R.; Skaloud, J.

    2013-08-01

    This paper presents the development of a low-cost UAV (Unmanned Aerial Vehicle) with direct georeferencing capability. The advantage of such a system lies in its high maneuverability and operational flexibility, as well as its capability to acquire image data without the need to establish ground control points (GCPs). Moreover, precise georeferencing offers an improvement in the final mapping accuracy when employing integrated sensor orientation. Such a mode of operation limits the number and distribution of GCPs, which in turn saves time in their signalization and surveying. Although UAV systems feature high flexibility and the capability of flying into areas that are inhospitable or inaccessible to humans, the lack of precision in on-board positioning and attitude estimation decreases the value of the captured imagery and limits their mode of operation to specific configurations and the need for ground reference. Within the scope of this study we show the potential of present technologies in the field of position and orientation determination on a small UAV. The hardware implementation and especially the non-trivial synchronization of all components are clarified. Thanks to the implementation of a multi-frequency, low-power GNSS receiver and its coupling with a redundant MEMS-IMU, we can attain the characteristics of much larger systems flown on large carriers while keeping the sensor size and weight suitable for MAV operations.

  4. An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas

    PubMed Central

    Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso

    2016-01-01

    This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red–green–blue (RGB) sensor for the acquisition of the images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes of 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%–50% and 70%–40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of the orthomosaicked images processed with Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of the UAV images and the expected root mean square error of a UAV orthomosaic. The results indicate that a balance among all the proposed parameters is useful for optimizing mission planning and image processing, altitude above ground level (AGL) being the main parameter because of its influence on the root mean square error (RMSE). PMID:27809293

  5. Integrated Reconfigurable Aperture, Digital Beam Forming, and Software GPS Receiver for UAV Navigation

    DTIC Science & Technology

    2007-12-11

    Implemented both carrier and code phase tracking loops for performance evaluation of a minimum-power beam-forming algorithm and a null-steering algorithm. [Figure captions: Fig. 5, schematics of a K-element antenna array spatial adaptive processor; Fig. 6, schematics of a K-element antenna array space-time adaptive processor.]

  6. Multispectral data processing from unmanned aerial vehicles: application in precision agriculture using different sensors and platforms

    NASA Astrophysics Data System (ADS)

    Piermattei, Livia; Bozzi, Carlo Alberto; Mancini, Adriano; Tassetti, Anna Nora; Karel, Wilfried; Pfeifer, Norbert

    2017-04-01

    Unmanned aerial vehicles (UAVs) in combination with consumer grade cameras have become standard tools for photogrammetric applications and surveying. The recent generation of multispectral, cost-efficient and lightweight cameras has fostered a breakthrough in the practical application of UAVs for precision agriculture. For this application, multispectral cameras typically use Green, Red, Red-Edge (RE) and Near Infrared (NIR) wavebands to capture both visible and invisible images of crops and vegetation. These bands are very effective for deriving characteristics like soil productivity, plant health and overall growth. However, the quality of results is affected by the sensor architecture, the spatial and spectral resolutions, the pattern of image collection, and the processing of the multispectral images. In particular, collecting data with multiple sensors requires an accurate spatial co-registration of the various UAV image datasets. Multispectral processed data in precision agriculture are mainly presented as orthorectified mosaics used to export information maps and vegetation indices. This work aims to investigate the acquisition parameters and processing approaches of this new type of image data in order to generate orthoimages using different sensors and UAV platforms. Within our experimental area we placed a grid of artificial targets, whose position was determined with differential global positioning system (dGPS) measurements. Targets were used as ground control points to georeference the images and as checkpoints to verify the accuracy of the georeferenced mosaics. The primary aim is to present a method for the spatial co-registration of visible, Red-Edge, and NIR image sets. To demonstrate the applicability and accuracy of our methodology, multi-sensor datasets were collected over the same area and approximately at the same time using the fixed-wing UAV senseFly "eBee". The images were acquired with the camera Canon S110 RGB, the multispectral cameras Canon S110 NIR and S110 RE and with the multi-camera system Parrot Sequoia, which is composed of single-band cameras (Green, Red, Red Edge, NIR and RGB). Imagery from each sensor was georeferenced and mosaicked with the commercial software Agisoft PhotoScan Pro and different approaches for image orientation were compared. To assess the overall spatial accuracy of each dataset the root mean square error was computed between check point coordinates measured with dGPS and coordinates retrieved from georeferenced image mosaics. Additionally, image datasets from different UAV platforms (i.e. DJI Phantom 4Pro, DJI Phantom 3 professional, and DJI Inspire 1 Pro) were acquired over the same area and the spatial accuracy of the orthoimages was evaluated.

  7. A navigation system for the visually impaired an intelligent white cane.

    PubMed

    Fukasawa, A Jin; Magatani, Kazusihge

    2012-01-01

    In this paper, we describe a navigation system that supports the independent walking of the visually impaired in indoor spaces. Our developed instrument consists of a navigation system and a map information system, both installed on a white cane. The navigation system can follow a colored navigation line that is set on the floor. In this system, a color sensor is installed on the tip of the white cane; this sensor senses the color of the navigation line, and the system informs the visually impaired user by vibration that he/she is walking along the navigation line. This color recognition system is controlled by a one-chip microprocessor. RFID tags and a receiver for these tags are used in the map information system. The RFID tags are set on the colored navigation line. An antenna for the RFID tags and a tag receiver are also installed on the white cane. The receiver receives the area information as a tag number and provides map information to the user by mp3-formatted pre-recorded voice. We also developed a direction identification technique; using this technique, we can detect a user's walking direction. A triaxial acceleration sensor is used in this system. Three normal subjects who were blindfolded with an eye mask were tested with our developed navigation system. All of them were able to walk along the navigation line perfectly. We consider the performance of the system to be good; therefore, our system will be extremely valuable in supporting the activities of the visually impaired.

  8. ProteoLens: a visual analytic tool for multi-scale database-driven biological network data mining.

    PubMed

    Huan, Tianxiao; Sivachenko, Andrey Y; Harrison, Scott H; Chen, Jake Y

    2008-08-12

    New systems biology studies require researchers to understand how interplay among myriads of biomolecular entities is orchestrated in order to achieve high-level cellular and physiological functions. Many software tools have been developed in the past decade to help researchers visually navigate large networks of biomolecular interactions with built-in template-based query capabilities. To further advance researchers' ability to interrogate global physiological states of cells through multi-scale visual network explorations, new visualization software tools still need to be developed to empower the analysis. A robust visual data analysis platform driven by database management systems to perform bi-directional data processing-to-visualizations with declarative querying capabilities is needed. We developed ProteoLens as a JAVA-based visual analytic software tool for creating, annotating and exploring multi-scale biological networks. It supports direct database connectivity to either Oracle or PostgreSQL database tables/views, on which SQL statements using both Data Definition Languages (DDL) and Data Manipulation languages (DML) may be specified. The robust query languages embedded directly within the visualization software help users to bring their network data into a visualization context for annotation and exploration. ProteoLens supports graph/network represented data in standard Graph Modeling Language (GML) formats, and this enables interoperation with a wide range of other visual layout tools. The architectural design of ProteoLens enables the de-coupling of complex network data visualization tasks into two distinct phases: 1) creating network data association rules, which are mapping rules between network node IDs or edge IDs and data attributes such as functional annotations, expression levels, scores, synonyms, descriptions etc; 2) applying network data association rules to build the network and perform the visual annotation of graph nodes and edges according to associated data values. We demonstrated the advantages of these new capabilities through three biological network visualization case studies: human disease association network, drug-target interaction network and protein-peptide mapping network. The architectural design of ProteoLens makes it suitable for bioinformatics expert data analysts who are experienced with relational database management to perform large-scale integrated network visual explorations. ProteoLens is a promising visual analytic platform that will facilitate knowledge discoveries in future network and systems biology studies.

  9. High Resolution Stratigraphic Mapping in Complex Terrain: A Comparison of Traditional Remote Sensing Techniques with Unmanned Aerial Vehicle - Structure from Motion Photogrammetry

    NASA Astrophysics Data System (ADS)

    Nesbit, P. R.; Hugenholtz, C.; Durkin, P.; Hubbard, S. M.; Kucharczyk, M.; Barchyn, T.

    2016-12-01

    Remote sensing and digital mapping have started to revolutionize geologic mapping in recent years as a result of their realized potential to provide high resolution 3D models of outcrops to assist with interpretation, visualization, and obtaining accurate measurements of inaccessible areas. However, in stratigraphic mapping applications in complex terrain, it is difficult to acquire information with sufficient detail at a wide spatial coverage with conventional techniques. We demonstrate the potential of a UAV and Structure from Motion (SfM) photogrammetric approach for improving 3D stratigraphic mapping applications within a complex badland topography. Our case study is performed in Dinosaur Provincial Park (Alberta, Canada), mapping late Cretaceous fluvial meander belt deposits of the Dinosaur Park formation amidst a succession of steeply sloping hills and abundant drainages - creating a challenge for stratigraphic mapping. The UAV-SfM dataset (2 cm spatial resolution) is compared directly with a combined satellite and aerial LiDAR dataset (30 cm spatial resolution) to reveal advantages and limitations of each dataset before presenting a unique workflow that utilizes the dense point cloud from the UAV-SfM dataset for analysis. The UAV-SfM dense point cloud minimizes distortion, preserves 3D structure, and records an RGB attribute - adding potential value in future studies. The proposed UAV-SfM workflow allows for high spatial resolution remote sensing of stratigraphy in complex topographic environments. This extended capability can add value to field observations and has the potential to be integrated with subsurface petroleum models.

  10. Some technical notes on using UAV-based remote sensing for post disaster assessment

    NASA Astrophysics Data System (ADS)

    Rokhmana, Catur Aries; Andaru, Ruli

    2017-07-01

    Indonesia is located in an area prone to disasters, where various kinds of natural disasters happen. In disaster management, geoinformation data are needed to evaluate the impact area. UAV (Unmanned Aerial Vehicle)-based remote sensing technology is a good choice to produce a high spatial resolution of less than 15 cm, while the resolution of current satellite imagery is still greater than 50 cm. This paper presents some technical notes that should be considered when using a UAV-based remote sensing system for rapid post-disaster assessment. The cases discussed are the 2013 Aceh earthquake, for assessing infrastructure damage; the 2014 Banjarnegara landslide, for assessing the impact; and the 2014 Kelud volcano eruption, for assessing the impact and calculating the volume of material. The UAV-based remote sensing system should be able to produce an orthophoto image that supports visual interpretation of individual damaged objects and of the changed situation. Meanwhile, the DEM (Digital Elevation Model) product can provide the terrain topography and volumetric calculations with an accuracy of 3-5 pixels, i.e. at the sub-meter level. The UAV platform should be able to work remotely and autonomously in dangerous areas with limited infrastructure. In mountainous or volcanic areas, an unconventional flight plan should be implemented. Unfortunately, not all impacts can be seen from above, such as wall cracks, some parcel boundaries, and many objects that are covered by other, higher objects. Previously existing geoinformation data are also needed in order to evaluate changes automatically.

  11. Uncertainty management for aerial vehicles: Coordination, deconfliction, and disturbance rejection

    NASA Astrophysics Data System (ADS)

    Panyakeow, Prachya

    The presented dissertation aims to develop control algorithms that deal with three types of uncertainty management. First, we examine the situation when unmanned aerial vehicles (UAVs) fly through uncertain environments that contain both stationary and moving obstacles. Moreover, a guarantee of collision avoidance is necessary when UAVs operate in close proximity to each other. Second, we look at the communication uncertainty within a network of cooperative UAVs and the efforts to establish and maintain connectivity throughout their entire missions. Third, we explore the scenario in which the aircraft flies through wind gusts. The introduction of an appropriate control scheme to actively alleviate the gust loads can result in weight reduction and consequently lower fuel cost. In the first part of this dissertation, we develop a deconfliction algorithm that guarantees collision avoidance between a pair of constant-speed unicycle-type UAVs as well as convergence to the desired destination for each UAV in the presence of static obstacles. We use a combination of navigation and swirling functions to direct the unicycle vehicles along the planned trajectories while avoiding inter-vehicle collisions. The main feature of our contribution is proposing means of designing a deconfliction algorithm for unicycle vehicles that more closely capture the dynamics of constant-speed UAVs as opposed to double-integrator models. Specifically, we consider the issue of UAV turn-rate constraints and proceed to explore the selection of key algorithmic parameters in order to minimize undesirable trajectories and overshoots induced by the avoidance algorithm. The avoidance and convergence analysis of the proposed algorithm is then performed for two cooperative UAVs, and simulation results are provided to support the viability of the proposed framework for more general mission scenarios. For the uncertainty of the UAV network, we provide two approaches to establish connectivity among a collection of UAVs that are initially scattered in space. The goal is to find the shortest trajectories that bring the UAVs to a connected formation in which they are within detection range of one another and headed in the same direction to maintain connectivity. The Pontryagin Minimum Principle (PMP) is utilized to determine the control law and path synthesis for the UAVs under the turn-rate constraints. We introduce an algorithm to search for the optimal solution when the final network topology is specified, followed by a nonlinear programming method in which the final configuration emerges from the optimization routine under the constraint that the final topology is connected. Each method has its own advantages based on the size of the cooperative network. For the uncertainty due to gust turbulence, we choose a model predictive control (MPC) technique to address gust load alleviation (GLA) for a flexible aircraft. MPC is a discrete method based on repeated online optimization that allows direct consideration of control actuator constraints in the feedback computation. Gust alleviation systems depend on how the structural flexibility of the aircraft affects its dynamics. Hence, we develop a six-degree-of-freedom flexible aircraft model that integrates rigid-body dynamics with structural deflection. A structural stick-and-beam model is utilized for the calculation of aeroelastic mode shapes and airframe loads. Another important feature of MPC for GLA design is the ability to include a preview of gust information ahead of the aircraft nose in the prediction process. This helps raise the prediction accuracy and consequently improves the load alleviation performance. Finally, the aircraft is modified by the addition of the flap array, a set of small trailing-edge flaps distributed along the entire span of the wings. These flaps are used in conjunction with the distributed spoilers. With control surfaces available closer to the wing root, the MPC with flap array can reduce the wing bending moment from different mode shapes and achieve better load alleviation performance than the original aircraft.
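
    To make the constant-speed unicycle guidance idea concrete, the following sketch (a simplified toy, not the dissertation's navigation-and-swirling algorithm; all parameters are invented) simulates a unicycle with a saturated turn rate that steers toward a goal while a swirling term nudges it around a single static obstacle.

```python
# Simplified illustration of constant-speed unicycle guidance with a bounded
# turn rate: attraction to the goal plus a swirling term near one static
# obstacle. A sketch of the idea only, not the dissertation's algorithm.
import numpy as np

V, W_MAX, DT = 1.0, 0.5, 0.05              # speed [m/s], turn-rate limit [rad/s], step [s]
goal = np.array([10.0, 0.0])
obstacle, r_influence = np.array([5.0, 0.2]), 2.0

x, y, psi = 0.0, 0.0, 0.0
for _ in range(500):
    pos = np.array([x, y])
    to_goal = goal - pos
    heading_des = np.arctan2(to_goal[1], to_goal[0])

    # Swirl: rotate the desired heading when close to the obstacle.
    d_obs = np.linalg.norm(pos - obstacle)
    if d_obs < r_influence:
        heading_des += (r_influence - d_obs) / r_influence * np.pi / 2

    # Turn-rate command, saturated to the vehicle constraint.
    err = np.arctan2(np.sin(heading_des - psi), np.cos(heading_des - psi))
    w = np.clip(2.0 * err, -W_MAX, W_MAX)

    # Unicycle kinematics at constant speed.
    x += V * np.cos(psi) * DT
    y += V * np.sin(psi) * DT
    psi += w * DT
    if np.linalg.norm(goal - pos) < 0.2:
        break

print(f"final position: ({x:.2f}, {y:.2f})")
```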

  12. [Impairment of safety in navigation caused by alcohol: impact on visual function].

    PubMed

    Grütters, G; Reichelt, J A; Ritz-Timme, S; Thome, M; Kaatsch, H J

    2003-05-01

    So far in Germany, no legally binding blood alcohol concentration limits exist that establish impairment of the ability to navigate. The aim of our interdisciplinary project was to obtain data in order to identify critical blood alcohol limits. In this context the visual system seems to be of decisive importance. Twenty-one professional skippers performed realistic navigational tasks in a sea traffic simulator, both sober and under the influence of alcohol. The following parameters were considered: visual acuity, stereopsis, color vision, and accommodation. Under the influence of alcohol (average blood alcohol concentration: 1.08 per mille), each skipper considered himself to be completely capable of navigating. While the simulations were running, all of the skippers made nautical mistakes or underestimated dangerous situations. Severe impairment of visual acuity or binocular function was not observed. Accommodation decreased by an average of 18% (p = 0.0001). In the color vision test, skippers made more mistakes (p = 0.017) and the time needed for the test was prolonged (p = 0.004). Changes in visual function as well as vegetative and psychological reactions could be the cause of the mistakes, and alcohol should therefore be regarded as a severe risk factor for safety in sea navigation.

  13. Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments

    ERIC Educational Resources Information Center

    Youngstrom, Isaac A.; Strowbridge, Ben W.

    2012-01-01

    Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…

  14. Online optimal obstacle avoidance for rotary-wing autonomous unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kang, Keeryun

    This thesis presents an integrated framework for online obstacle avoidance of rotary-wing unmanned aerial vehicles (UAVs), which can provide UAVs with an obstacle-field navigation capability in a partially or completely unknown obstacle-rich environment. The framework is composed of a LIDAR interface, local obstacle grid generation, a receding horizon (RH) trajectory optimizer, a global shortest-path search algorithm, and climb rate limit detection logic. The key feature of the framework is the use of optimization-based trajectory generation, in which the obstacle avoidance problem is formulated as a nonlinear trajectory optimization problem with state and input constraints over the finite range of the sensor. This local trajectory optimization is combined with a global path search algorithm which provides a useful initial guess to the nonlinear optimization solver. Optimization is the natural process of finding the best trajectory that is simultaneously dynamically feasible, safe within the vehicle's flight envelope, and collision-free. The optimal trajectory is continuously updated in real time by the numerical optimization solver, Nonlinear Trajectory Generation (NTG), a direct solver based on spline approximation of the trajectory for differentially flat systems. The overall approach of this thesis to finding the optimal trajectory is similar to model predictive control (MPC) or receding horizon control (RHC), except that this thesis follows a two-layer design; thus, the optimal solution works as a guidance command to be followed by the controller of the vehicle. The framework is implemented in a real-time simulation environment, the Georgia Tech UAV Simulation Tool (GUST), and integrated into the onboard software of the rotary-wing UAV test-bed at Georgia Tech. Initially, the 2D vertical avoidance capability over real obstacles was tested in flight. The flight test evaluations were extended to benchmark tests of 3D avoidance capability over virtual obstacles, and finally the framework was demonstrated on real obstacles located at the McKenna MOUT site in Fort Benning, Georgia. Simulations and flight test evaluations demonstrate the feasibility of the developed framework for UAV applications involving low-altitude flight in an urban area.
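
    The two-layer structure described above, an optimized local trajectory seeded by a global path, can be illustrated with a much simpler receding-horizon loop. The sketch below is an assumption-laden toy, not the NTG/GUST implementation: it evaluates a handful of candidate headings over a short horizon on an occupancy grid and commits only the first step of the best one before replanning.

```python
# Toy receding-horizon planner on an occupancy grid: evaluate candidate
# headings over a short horizon, pick the lowest-cost collision-free one,
# execute only its first step, then replan. Illustration only; this is not
# the NTG-based optimizer described in the thesis.
import numpy as np

grid = np.zeros((40, 40), dtype=bool)        # True = obstacle cell
grid[15:25, 18:20] = True                    # a wall-like obstacle
goal = np.array([35.0, 35.0])
pos, step, horizon = np.array([2.0, 2.0]), 1.0, 6

def collides(p):
    i, j = int(round(float(p[0]))), int(round(float(p[1])))
    return not (0 <= i < 40 and 0 <= j < 40) or grid[i, j]

for _ in range(200):
    best_cost, best_heading = np.inf, None
    for heading in np.linspace(0, 2 * np.pi, 16, endpoint=False):
        d = np.array([np.cos(heading), np.sin(heading)])
        path = [pos + d * step * k for k in range(1, horizon + 1)]
        if any(collides(p) for p in path):
            continue                          # reject colliding candidates
        cost = np.linalg.norm(path[-1] - goal)
        if cost < best_cost:
            best_cost, best_heading = cost, heading
    if best_heading is None:
        break                                 # no feasible candidate this step
    pos = pos + np.array([np.cos(best_heading), np.sin(best_heading)]) * step
    if np.linalg.norm(pos - goal) < 1.5:
        break

print("final position:", np.round(pos, 1))
```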

  15. The effects of navigator distortion and noise level on interleaved EPI DWI reconstruction: a comparison between image- and k-space-based method.

    PubMed

    Dai, Erpeng; Zhang, Zhe; Ma, Xiaodong; Dong, Zijing; Li, Xuesong; Xiong, Yuhui; Yuan, Chun; Guo, Hua

    2018-03-23

    To study the effects of 2D navigator distortion and noise level on interleaved EPI (iEPI) DWI reconstruction, using either the image- or k-space-based method. The 2D navigator acquisition was adjusted by reducing its echo spacing in the readout direction and undersampling in the phase encoding direction. A POCS-based reconstruction using the image-space sampling function (IRIS) algorithm (POCSIRIS) was developed to reduce the impact of navigator distortion. POCSIRIS was then compared with the original IRIS algorithm and a SPIRiT-based k-space algorithm under different navigator distortion and noise levels. Reducing the navigator distortion can improve the reconstruction of iEPI DWI. The proposed POCSIRIS and SPIRiT-based algorithms are more tolerant of different navigator distortion levels than the original IRIS algorithm. SPIRiT may be hindered by low SNR of the navigator. Multi-shot iEPI DWI reconstruction can be improved by reducing the 2D navigator distortion. Different reconstruction methods show variable sensitivity to navigator distortion or noise levels. Furthermore, the findings can be valuable in applications such as simultaneous multi-slice accelerated iEPI DWI and multi-slab diffusion imaging. © 2018 International Society for Magnetic Resonance in Medicine.

  16. Remote Sensing Systems to Detect and Analyze Oil Spills on the U.S. Outer Continental Shelf - A State of the Art Assessment

    DTIC Science & Technology

    2016-08-18

    multi-sensor remote sensing approach to describe the distribution of oil from the DWH spill. They used airborne and satellite, multi- and hyperspectral...Experimental Sensors e.g., Acoustic and Nuclear Magnetic Resonance (NMR) (Fingas and Brown, 2012; Puestow et al., 2013). These are further...ship, aerial - aircraft, aerostat or UAV, or satellite), among other classification criteria. A comprehensive review of sensor categories employed

  17. How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation.

    PubMed

    Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul

    2016-02-01

    The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
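
    The simple view matching strategies referred to above are often implemented as a rotational image difference function: a stored panoramic snapshot is compared with the current panorama at every candidate rotation, and the rotation with the smallest pixel-wise difference indicates the heading. The sketch below is a generic illustration of that idea on synthetic low-resolution panoramas, not the authors' simulation code.

```python
# Generic rotational image difference function (RIDF) sketch for panoramic
# view matching: the best heading minimises the pixel-wise difference between
# the current view and a stored snapshot. Synthetic data, illustration only.
import numpy as np

def ridf(current: np.ndarray, snapshot: np.ndarray) -> np.ndarray:
    """Root-mean-square difference for every horizontal (column) rotation."""
    n_cols = current.shape[1]
    return np.array([
        np.sqrt(np.mean((np.roll(current, shift, axis=1) - snapshot) ** 2))
        for shift in range(n_cols)
    ])

# Low-resolution panoramas: 10 pixels of elevation x 72 of azimuth (5 deg/px).
rng = np.random.default_rng(0)
snapshot = rng.random((10, 72))
current = np.roll(snapshot, -12, axis=1)      # same scene, viewed after a 60 deg turn

errors = ridf(current, snapshot)
best_shift = int(np.argmin(errors))
print(f"recovered rotation: {best_shift * 5} degrees")   # expect 60
```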

  18. Multi-Target Tracking for Swarm vs. Swarm UAV Systems

    DTIC Science & Technology

    2012-09-01

    Uhlmann, “Using covariance intersection for SLAM,” Robotics and Autonomous Systems, vol. 55, pp. 3–20, Jan. 2007. [10] R. B. G. Wolfgang Niehsen... Krause, J. Leskovec, and C. Guestrin, “Data association for topic intensity tracking,” Proceedings of the 23rd international conference on Machine

  19. Detection and Mapping of the Geomorphic Effects of Flooding Using UAV Photogrammetry

    NASA Astrophysics Data System (ADS)

    Langhammer, Jakub; Vacková, Tereza

    2018-04-01

    In this paper, we present a novel technique for the objective detection of the geomorphological effects of flooding in riverbeds and floodplains using imagery acquired by unmanned aerial vehicles (UAVs, also known as drones) equipped with a panchromatic camera. The proposed method is based on the fusion of the two key data products of UAV photogrammetry, the digital elevation model (DEM) and the orthoimage, as well as derived qualitative information, which together serve as the basis for object-based segmentation and the supervised classification of fluvial forms. The orthoimage is used to calculate textural features, enabling detection of the structural properties of the image area and supporting the differentiation of features with similar spectral responses but different surface structures. The DEM is used to derive a flood depth model and the terrain ruggedness index, supporting the detection of bank erosion. All the newly derived information layers are merged with the orthoimage to form a multi-band data set, which is used for object-based segmentation and the supervised classification of key fluvial forms resulting from flooding, i.e., fresh and old gravel accumulations, sand accumulations, and bank erosion. The method was tested on the effects of a snowmelt flood that occurred in December 2015 on a montane stream in the Sumava Mountains, Czech Republic, Central Europe. A multi-rotor UAV was used to collect images of a 1-km-long and 200-m-wide stretch of meandering stream with fresh traces of fluvial activity. The segmentation and classification showed that the fusion of 2D and 3D data with the derived qualitative layers significantly enhanced the reliability of fluvial form detection. The assessment accuracy for all of the detected classes exceeded 90%. The proposed technique proved its potential for application in rapid mapping and detection of the geomorphological effects of flooding.
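
    One of the DEM-derived layers mentioned above, the terrain ruggedness index (TRI), can be computed directly from the elevation grid. The sketch below is a generic implementation of the commonly used neighbourhood formulation (root of the summed squared elevation differences to the eight neighbours), not the authors' processing chain, and it runs on a synthetic DEM.

```python
# Generic terrain ruggedness index (TRI): square root of the summed squared
# elevation differences between a cell and its 8 neighbours. Illustration
# with a synthetic DEM, not the study's processing chain.
import numpy as np

def terrain_ruggedness_index(dem: np.ndarray) -> np.ndarray:
    padded = np.pad(dem, 1, mode="edge")
    sq_diff_sum = np.zeros_like(dem, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            neighbour = padded[1 + di: 1 + di + dem.shape[0],
                               1 + dj: 1 + dj + dem.shape[1]]
            sq_diff_sum += (dem - neighbour) ** 2
    return np.sqrt(sq_diff_sum)

dem = np.random.default_rng(1).normal(300.0, 2.0, (50, 50))   # synthetic DEM [m]
tri = terrain_ruggedness_index(dem)
print(f"mean TRI: {tri.mean():.2f} m")
```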

  20. Autonomous UAV-based mapping of large-scale urban firefights

    NASA Astrophysics Data System (ADS)

    Snarski, Stephen; Scheibner, Karl; Shaw, Scott; Roberts, Randy; LaRow, Andy; Breitfeller, Eric; Lupo, Jasper; Nielson, Darron; Judge, Bill; Forren, Jim

    2006-05-01

    This paper describes experimental results from a live-fire data collection designed to demonstrate the ability of IR and acoustic sensing systems to detect and map high-volume gunfire events from tactical UAVs. The data collection supports an exploratory study of the FightSight concept, in which an autonomous UAV-based sensor exploitation and decision support capability is proposed to provide dynamic situational awareness for large-scale battalion-level firefights in cluttered urban environments. FightSight integrates IR imagery, acoustic data, and 3D scene context data with prior time information in a multi-level, multi-step probabilistic fusion process to reliably locate and map the array of urban firing events and the firepower movements and trends associated with the evolving urban battlefield situation. Described here are sensor results from live-fire experiments involving simultaneous firing of multiple sub/supersonic weapons (2 AK47, 2 M16, 1 Beretta, 1 mortar, 1 rocket) with high optical and acoustic clutter at ranges up to 400 m. Sensor-shooter-target configurations and clutter were designed to simulate UAV sensing conditions for a high-intensity firefight in an urban environment. The sensor systems evaluated were an IR bullet tracking system by Lawrence Livermore National Laboratory (LLNL) and an acoustic gunshot detection system by Planning Systems, Inc. (PSI). The results demonstrate convincingly the ability of the LLNL and PSI sensor systems to accurately detect, separate, and localize multiple shooters and the associated shot directions during a high-intensity firefight (77 rounds in 5 s) in a high acoustic and optical clutter environment with very low false alarms. Preliminary fusion processing was also examined and demonstrated an ability to distinguish co-located shooters (shooter density), estimate range to <0.5 m accuracy at 400 m, and identify weapon type. The combined results of the high-intensity firefight data collection and a detailed systems study demonstrate the readiness of the FightSight concept for full system development and integration.

  1. Bioinspired polarization navigation sensor for autonomous munitions systems

    NASA Astrophysics Data System (ADS)

    Giakos, G. C.; Quang, T.; Farrahi, T.; Deshpande, A.; Narayan, C.; Shrestha, S.; Li, Y.; Agarwal, M.

    2013-05-01

    Small unmanned aerial vehicles (SUAVs), micro air vehicles (MAVs), Automated Target Recognition (ATR), and munitions guidance require extreme operational agility and robustness, requirements that can be partially addressed by efficient bioinspired imaging sensor designs capable of providing enhanced guidance, navigation and control (GNC) capabilities. Bioinspired imaging technology can prove useful either for long-distance surveillance of targets in a cluttered environment, or at close distances limited by space surroundings and obstructions. The purpose of this study is to explore the phenomenology of image formation by different insect eye architectures, which would directly benefit the areas of defense and security, in the following four distinct areas: a) fabrication of the bioinspired sensor, b) optical architecture, c) topology, and d) artificial intelligence. The outcome of this study indicates that bioinspired imaging can significantly impact the areas of defense and security through dedicated designs fitting different combat scenarios and applications.

  2. Towards distributed ATR using subjective logic combination rules with a swarm of UAVs

    NASA Astrophysics Data System (ADS)

    O'Hara, Stephen; Simon, Michael; Zhu, Qiuming

    2007-04-01

    In this paper, we present our initial findings demonstrating a cost-effective approach to Aided Target Recognition (ATR) employing a swarm of inexpensive Unmanned Aerial Vehicles (UAVs). We call our approach Distributed ATR (DATR). Our paper describes the utility of DATR for autonomous UAV operations, provides an overview of our methods, and presents the results of our initial simulation-based implementation and feasibility study. Our technology is aimed at small and micro UAVs whose platform restrictions allow only a modest-quality camera and limited on-board computational capabilities. It is understood that an inexpensive sensor coupled with limited processing capability would be challenged in delivering a high probability of detection (Pd) while maintaining a low probability of false alarm (Pfa). Our hypothesis is that an evidential reasoning approach to fusing the observations of multiple UAVs observing approximately the same scene can raise the Pd and lower the Pfa sufficiently to provide a cost-effective ATR capability. This capability can lead to practical implementations of autonomous, coordinated, multi-UAV operations. In our system, the live video feed from a UAV is processed by a lightweight real-time ATR algorithm. This algorithm provides a set of possible classifications for each detected object over a possibility space defined by a set of exemplars. The classifications for each frame within a short observation interval (a few seconds) are used to generate a belief statement. Our system considers how many frames in the observation interval support each potential classification. A definable function transforms the observational data into a belief value. The belief value, or opinion, represents the UAV's belief that an object of the particular class exists in the area covered during the observation interval. The opinion is submitted as evidence to an evidential reasoning system. Opinions from observations over the same spatial area will have similar index values in the evidence cache. The evidential reasoning system combines observations with similar spatial indexes, discounting older observations based upon a parameterized information aging function. We employ Subjective Logic operations in the discounting and combination of opinions. The result is the consensus opinion from all observations that an object of a given class exists in a given region.
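
    The opinion-combination step can be illustrated with the consensus (cumulative fusion) operator for two binomial opinions, each expressed as belief, disbelief and uncertainty summing to one. The sketch below is a generic implementation of that operator as commonly published in the Subjective Logic literature, not the DATR system's code, and the example opinion values are invented.

```python
# Generic sketch of the Subjective Logic consensus (cumulative fusion)
# operator for two binomial opinions (belief, disbelief, uncertainty).
# Not the DATR system's code; the example opinions are invented.
from typing import Tuple

Opinion = Tuple[float, float, float]   # (belief, disbelief, uncertainty), sums to 1

def consensus(a: Opinion, b: Opinion) -> Opinion:
    b_a, d_a, u_a = a
    b_b, d_b, u_b = b
    kappa = u_a + u_b - u_a * u_b      # assumes the two uncertainties are not both zero
    return ((b_a * u_b + b_b * u_a) / kappa,
            (d_a * u_b + d_b * u_a) / kappa,
            (u_a * u_b) / kappa)

# Two UAVs observe the same region; fusing their opinions reduces uncertainty.
uav_1 = (0.6, 0.1, 0.3)
uav_2 = (0.5, 0.2, 0.3)
print(consensus(uav_1, uav_2))         # fused belief rises, uncertainty shrinks
```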

  3. Can multi-slice or navigator-gated R2* MRI replace single-slice breath-hold acquisition for hepatic iron quantification?

    PubMed

    Loeffler, Ralf B; McCarville, M Beth; Wagstaff, Anne W; Smeltzer, Matthew P; Krafft, Axel J; Song, Ruitian; Hankins, Jane S; Hillenbrand, Claudia M

    2017-01-01

    Liver R2* values calculated from multi-gradient echo (mGRE) magnetic resonance images (MRI) are strongly correlated with hepatic iron concentration (HIC), as shown in several independently derived biopsy calibration studies. These calibrations were established for axial single-slice breath-hold imaging at the location of the portal vein. Scanning in multi-slice mode makes the exam more efficient, since whole-liver coverage can be achieved with two breath-holds and the optimal slice can be selected afterward. Navigator echoes remove the need for breath-holds and allow use in sedated patients. Our aim was to evaluate whether the existing biopsy calibrations can be applied to multi-slice and navigator-controlled mGRE imaging in children with hepatic iron overload, by testing whether there is a bias-free correlation between single-slice R2* and multi-slice or multi-slice navigator-controlled R2*. This study included MRI data from 71 patients with transfusional iron overload, who received an MRI exam to estimate HIC using gradient echo sequences. Patient scans contained 2 or 3 of the following imaging methods used for analysis: single-slice images (n = 71), multi-slice images (n = 69) and navigator-controlled images (n = 17). Small and large blood-corrected regions of interest were selected on axial images of the liver to obtain R2* values for all data sets. Bland-Altman and linear regression analyses were used to compare R2* values from single-slice images to those of multi-slice images and navigator-controlled images. Bland-Altman analysis showed that all imaging method comparisons were strongly associated with each other and had high correlation coefficients (0.98 ≤ r ≤ 1.00) with P-values ≤0.0001. Linear regression yielded slopes that were close to 1. We found that navigator-gated or breath-held multi-slice R2* MRI for HIC determination measures R2* values comparable to the biopsy-validated single-slice, single breath-hold scan. We conclude that these three R2* methods can be used interchangeably with existing R2*-HIC calibrations.
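
    The agreement analysis described above follows the standard Bland-Altman approach: report the mean difference (bias) between two methods and the 95% limits of agreement. The sketch below shows the calculation on invented R2* values; it is not the study's analysis script.

```python
# Generic Bland-Altman sketch: bias and 95% limits of agreement between two
# measurement methods (e.g. single-slice vs multi-slice R2*). The values are
# invented for illustration and are not data from the cited study.
import numpy as np

single_slice = np.array([55., 120., 310., 480., 95., 230., 405., 150.])  # R2* [1/s]
multi_slice = np.array([57., 118., 305., 490., 97., 228., 400., 155.])   # R2* [1/s]

diff = multi_slice - single_slice
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)          # half-width of the 95% limits of agreement

print(f"bias: {bias:.2f} 1/s, limits of agreement: "
      f"[{bias - loa:.2f}, {bias + loa:.2f}] 1/s")
print(f"Pearson r: {np.corrcoef(single_slice, multi_slice)[0, 1]:.4f}")
```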

  4. A UAV-Based Fog Collector Design for Fine-Scale Aerobiological Sampling

    NASA Technical Reports Server (NTRS)

    Gentry, Diana; Guarro, Marcello; Demachkie, Isabella Siham; Stumfall, Isabel; Dahlgren, Robert P.

    2017-01-01

    Airborne microbes are found throughout the troposphere and into the stratosphere. Knowing how the activity of airborne microorganisms can alter water, carbon, and other geochemical cycles is vital to a full understanding of local and global ecosystems. Just as on the land or in the ocean, atmospheric regions vary in habitability; the underlying geochemical, climatic, and ecological dynamics must be characterized at different scales to be effectively modeled. Most aerobiological studies have focused on a high level: 'How high are airborne microbes found?' and 'How far can they travel?' Most fog and cloud water studies collect from stationary ground stations (point) or along flight transects (1D). To complement and provide context for this data, we have designed a UAV-based modified fog and cloud water collector to retrieve 4D-resolved samples for biological and chemical analysis. Our design uses a passive impacting collector hanging from a rigid rod suspended between two multi-rotor UAVs. The suspension design reduces the effect of turbulence and potential for contamination from the UAV downwash. The UAVs are currently modeled in a leader-follower configuration, taking advantage of recent advances in modular UAVs, UAV swarming, and flight planning. The collector itself is a hydrophobic mesh. Materials including Tyvek, PTFE, nylon, and polypropylene monofilament fabricated via laser cutting, CNC knife, or 3D printing were characterized for droplet collection efficiency using a benchtop atomizer and particle counter. Because the meshes can be easily and inexpensively fabricated, a set can be pre-sterilized and brought to the field for 'hot swapping' to decrease cross-contamination between flight sessions or use as negative controls. An onboard sensor and logging system records the time and location of each sample; when combined with flight tracking data, the samples can be resolved into a 4D volumetric map of the fog bank. Collected samples can be returned to the lab for a variety of analyses. Based on a review of existing flight studies, we have identified ion chromatography, metagenomic sequencing, cell staining and quantification, and ATP quantification as high-priority assays for implementation. Support for specific toxicology assays, such as methylmercury quantification, is also planned.

  5. Unmanned Aerial Vehicle (UAV) associated DTM quality evaluation and hazard assessment

    NASA Astrophysics Data System (ADS)

    Huang, Mei-Jen; Chen, Shao-Der; Chao, Yu-Jui; Chiang, Yi-Lin; Chang, Kuo-Jen

    2014-05-01

    Due to its high seismicity and high annual rainfall, Taiwan experiences numerous landslides every year, with severe impacts across the island. For catastrophic landslides, key information, including the extent of the landslide, volume estimates and the subsequent evolution, is important when analyzing the triggering mechanism and for hazard assessment and mitigation. Thus, morphological analysis gives a general overview of a landslide and is considered one of the most fundamental sources of information. We integrate several technologies, in particular an Unmanned Aerial Vehicle (UAV) with a multi-spectral camera, to decipher the consequences, the potential hazard and the social impact. In recent years, remote sensing technology has improved rapidly, providing a wide range of imagery and essential, precious information. Benefiting from advances in informatics, remote sensing and electronics, UAV photogrammetry has improved significantly. The study integrates several methods: 1) remote-sensing images gathered by UAV and aerial photos taken in different periods; 2) in-situ field geologic investigation; 3) differential GPS, RTK GPS and ground LiDAR geoinformatics measurements in the field; 4) construction of DTMs before and after the landslide, as well as for subsequent periods, from UAV and aerial photos; and 5) discrete element modelling of the geomaterial composing the slope failure, for predicting earthquake-induced and rainfall-induced landslide displacement. First of all, we evaluate the Digital Terrain Model (DTM) derived from Microdrones MD4-1000 UAV air photos. The ground resolution of the DSM point cloud can be as high as 10 cm. By integrating 4 ground control points within an area of 56 hectares and comparing with the LiDAR DSM and field RTK-GPS surveying, the mean error is as low as 6 cm with a standard deviation of 17 cm. The quality of the UAV DSM can be as good as LiDAR data, and is ready for other applications. The data set provides not only geoinformatics and GIS data on the hazards, but also essential geomorphologic information for further studies and for hazard mitigation and planning.

  6. A UAV-Based Fog Collector Design for Fine-Scale Aerobiological Sampling

    NASA Astrophysics Data System (ADS)

    Gentry, D.; Guarro, M.; Demachkie, I. S.; Stumfall, I.; Dahlgren, R. P.

    2016-12-01

    Airborne microbes are found throughout the troposphere and into the stratosphere. Knowing how the activity of airborne microorganisms can alter water, carbon, and other geochemical cycles is vital to a full understanding of local and global ecosystems. Just as on the land or in the ocean, atmospheric regions vary in habitability; the underlying geochemical, climatic, and ecological dynamics must be characterized at different scales to be effectively modeled. Most aerobiological studies have focused on a high level: 'How high are airborne microbes found?' and 'How far can they travel?' Most fog and cloud water studies collect from stationary ground stations (point) or along flight transects (1D). To complement and provide context for this data, we have designed a UAV-based modified fog and cloud water collector to retrieve 4D-resolved samples for biological and chemical analysis. Our design uses a passive impacting collector hanging from a rigid rod suspended between two multi-rotor UAVs. The suspension design reduces the effect of turbulence and potential for contamination from the UAV downwash. The UAVs are currently modeled in a leader-follower configuration, taking advantage of recent advances in modular UAVs, UAV swarming, and flight planning. The collector itself is a hydrophobic mesh. Materials including Tyvek, PTFE, nylon, and polypropylene monofilament fabricated via laser cutting, CNC knife, or 3D printing were characterized for droplet collection efficiency using a benchtop atomizer and particle counter. Because the meshes can be easily and inexpensively fabricated, a set can be pre-sterilized and brought to the field for 'hot swapping' to decrease cross-contamination between flight sessions or use as negative controls. An onboard sensor and logging system records the time and location of each sample; when combined with flight tracking data, the samples can be resolved into a 4D volumetric map of the fog bank. Collected samples can be returned to the lab for a variety of analyses. Based on a review of existing flight studies, we have identified ion chromatography, metagenomic sequencing, cell staining and quantification, and ATP quantification as high-priority assays for implementation. Support for specific toxicology assays, such as methylmercury quantification, is also planned.

  7. MultiMap: A Tool to Automatically Extract and Analyse Spatial Microscopic Data From Large Stacks of Confocal Microscopy Images

    PubMed Central

    Varando, Gherardo; Benavides-Piccione, Ruth; Muñoz, Alberto; Kastanauskaite, Asta; Bielza, Concha; Larrañaga, Pedro; DeFelipe, Javier

    2018-01-01

    The development of 3D visualization and reconstruction methods to analyse microscopic structures at different levels of resolution is of great importance for defining brain microorganization and connectivity. MultiMap is a new tool that allows the visualization, 3D segmentation and quantification of fluorescent structures selectively in the neuropil from large stacks of confocal microscopy images. The major contribution of this tool is the possibility of easily navigating and creating regions of interest of any shape and size within a large brain area, which are then automatically 3D segmented and quantified to determine the density of puncta in the neuropil. As a proof of concept, we focused on the analysis of glutamatergic and GABAergic presynaptic axon terminals in the mouse hippocampal region to demonstrate its use as a tool to provide putative excitatory and inhibitory synaptic maps. The segmentation and quantification method has been validated against expert-labeled images of the mouse hippocampus and two benchmark datasets, obtaining results comparable to the expert detections. PMID:29875639

  8. MultiMap: A Tool to Automatically Extract and Analyse Spatial Microscopic Data From Large Stacks of Confocal Microscopy Images.

    PubMed

    Varando, Gherardo; Benavides-Piccione, Ruth; Muñoz, Alberto; Kastanauskaite, Asta; Bielza, Concha; Larrañaga, Pedro; DeFelipe, Javier

    2018-01-01

    The development of 3D visualization and reconstruction methods to analyse microscopic structures at different levels of resolution is of great importance for defining brain microorganization and connectivity. MultiMap is a new tool that allows the visualization, 3D segmentation and quantification of fluorescent structures selectively in the neuropil from large stacks of confocal microscopy images. The major contribution of this tool is the possibility of easily navigating and creating regions of interest of any shape and size within a large brain area, which are then automatically 3D segmented and quantified to determine the density of puncta in the neuropil. As a proof of concept, we focused on the analysis of glutamatergic and GABAergic presynaptic axon terminals in the mouse hippocampal region to demonstrate its use as a tool to provide putative excitatory and inhibitory synaptic maps. The segmentation and quantification method has been validated against expert-labeled images of the mouse hippocampus and two benchmark datasets, obtaining results comparable to the expert detections.

  9. Direct Georeferencing of Uav Data Based on Simple Building Structures

    NASA Astrophysics Data System (ADS)

    Tampubolon, W.; Reinhardt, W.

    2016-06-01

    Unmanned Aerial Vehicle (UAV) data acquisition is more flexible than the more complex traditional airborne data acquisition. This advantage positions UAV platforms as an alternative acquisition method in many applications, including Large Scale Topographical Mapping (LSTM). LSTM, i.e. mapping at a scale of 1:10,000 or larger, is one of a number of prominent priority tasks to be solved in an accelerated way, especially in developing countries such as Indonesia. As one component of fundamental geospatial data sets, large scale topographical maps are mandatory in order to enable detailed spatial planning. However, the accuracy of the products derived from UAV data is normally not sufficient for LSTM, as LSTM needs robust georeferencing, which requires additional costly efforts such as the incorporation of a sophisticated GPS Inertial Navigation System (INS) or Inertial Measurement Unit (IMU) on the platform and/or Ground Control Point (GCP) data on the ground. To reduce the costs and the weight on the UAV, alternative solutions have to be found. This paper outlines a direct georeferencing method for UAV data that derives image orientation parameters from simple building structures, and presents the results of an investigation of the achievable accuracy in an LSTM application. In this case, the image orientation determination has been performed through sequential images without any input from INS/IMU equipment. The simple building structures play a significant role in that their geometrical characteristics are exploited; examples are the orthogonality of the building's walls/rooftop and local knowledge of the building orientation in the field. In addition, we include the Structure from Motion (SfM) approach in order to reduce the number of GCPs required, especially for the absolute orientation. The SfM technique applied to the UAV data and simple building structures thus presents an effective tool for LSTM at low cost. Our results show that image orientation calculations from building structures essentially improve the accuracy of the direct georeferencing procedure, adjusted also by the GCPs. To obtain three-dimensional (3D) point clouds in a local coordinate system, an extraction procedure has been performed using Agisoft PhotoScan. Subsequently, a Digital Surface Model (DSM) generated from the acquired data is the main output for LSTM, which has to be assessed using standard field and conventional mapping workflows. For an appraisal, our DSM is compared directly with a similar DSM obtained by conventional airborne data acquisition using a Leica RCD-30 metric camera as well as a Trimble Phase One (P65+) camera. The comparison reveals that our approach can achieve meter-level accuracy in both the planimetric and vertical dimensions.

  10. Short-Term Memory Maintenance of Object Locations during Active Navigation: Which Working Memory Subsystem Is Essential?

    PubMed Central

    Baumann, Oliver; Skilleter, Ashley J.; Mattingley, Jason B.

    2011-01-01

    The goal of the present study was to examine the extent to which working memory supports the maintenance of object locations during active spatial navigation. Participants were required to navigate a virtual environment and to encode the location of a target object. In the subsequent maintenance period they performed one of three secondary tasks that were designed to selectively load visual, verbal or spatial working memory subsystems. Thereafter participants re-entered the environment and navigated back to the remembered location of the target. We found that while navigation performance in participants with high navigational ability was impaired only by the spatial secondary task, navigation performance in participants with poor navigational ability was impaired equally by spatial and verbal secondary tasks. The visual secondary task had no effect on navigation performance. Our results extend current knowledge by showing that the differential engagement of working memory subsystems is determined by navigational ability. PMID:21629686

  11. Ubiquitous UAVs: a cloud based framework for storing, accessing and processing huge amount of video footage in an efficient way

    NASA Astrophysics Data System (ADS)

    Efstathiou, Nectarios; Skitsas, Michael; Psaroudakis, Chrysostomos; Koutras, Nikolaos

    2017-09-01

    Nowadays, video surveillance cameras are used for the protection and monitoring of a huge number of facilities worldwide. An important element in such surveillance systems is the use of aerial video streams originating from onboard sensors located on Unmanned Aerial Vehicles (UAVs). Video surveillance using UAVs represents a vast amount of video to be transmitted, stored, analyzed and visualized in real time. As a result, the introduction and development of systems able to handle huge amounts of data become a necessity. In this paper, a new approach for the collection, transmission and storage of aerial videos and metadata is introduced. The objective of this work is twofold: first, the integration of the appropriate equipment in order to capture and transmit real-time video including metadata (i.e. position coordinates, target) from the UAV to the ground and, second, the utilization of the ADITESS Versatile Media Content Management System (VMCMS-GE) for storage of the video stream and the associated metadata. Beyond storage, VMCMS-GE provides other efficient management capabilities such as searching and processing of videos, along with video transcoding. For the evaluation and demonstration of the proposed framework, we execute a use case in which surveillance of critical infrastructure and detection of suspicious activities are performed. Transcoding of the collected video is a subject of this evaluation as well.

  12. UAV Research at NASA Langley: Towards Safe, Reliable, and Autonomous Operations

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.

    2016-01-01

    Unmanned Aerial Vehicles (UAVs) are fundamental components in several aspects of research at NASA Langley, such as flight dynamics, mission-driven airframe design, airspace integration demonstrations, atmospheric science projects, and more. In particular, NASA Langley Research Center (Langley) is using UAVs to develop and demonstrate innovative capabilities that meet the autonomy and robotics challenges that are anticipated in science, space exploration, and aeronautics. These capabilities will enable new NASA missions such as asteroid rendezvous and retrieval (ARRM), Mars exploration, in-situ resource utilization (ISRU), pollution measurements in historically inaccessible areas, and the integration of UAVs into our everyday lives, all missions of increasing complexity, distance, pace, and/or accessibility. Building on decades of NASA experience and success in the design, fabrication, and integration of robust and reliable automated systems for space and aeronautics, the Langley Autonomy Incubator seeks to bridge the gap between automation and autonomy by enabling safe autonomous operations via onboard sensing and perception systems in both data-rich and data-deprived environments. The Autonomy Incubator is focused on the challenge of mobility and manipulation in dynamic and unstructured environments by integrating technologies such as computer vision, visual odometry, real-time mapping, path planning, object detection and avoidance, object classification, adaptive control, sensor fusion, machine learning, and natural human-machine teaming. These technologies are implemented in an architectural framework developed in-house for easy integration and interoperability of cutting-edge hardware and software.

  13. Development of a bio-inspired UAV perching system

    NASA Astrophysics Data System (ADS)

    Xie, Pu

    Although technologies for unmanned aerial vehicles (UAVs), including micro air vehicles (MAVs), have advanced greatly in recent years, it is still very difficult for a UAV to perform challenging tasks such as perching on any desired spot reliably and agilely like a bird. Unlike UAVs, the biological control mechanisms of birds have been optimized through millions of years of evolution; hence, birds can perform many tasks requiring extreme maneuverability, such as perching or grasping, accurately and robustly. Therefore, we have good reason to learn from nature in order to significantly improve the capabilities of UAVs. The development of a UAV perching system is becoming feasible, especially given the many research contributions in ornithology that analyze birds' functionality. Meanwhile, technology is advancing rapidly in many engineering fields, such as airframes, propulsion, sensors, batteries, and micro-electromechanical systems (MEMS), and so UAV technology is also advancing rapidly. All of these research efforts in ornithology and the fast-growing technologies for UAV applications are motivating further interest and development in UAV perching and grasping research. During the last decade, research contributions on UAV perching and grasping were mainly based on fixed-wing, flapping-wing, and rotorcraft UAVs. However, most current research on UAV systems with perching and grasping capability focuses on either active (powered) grasping and perching or passive (unpowered) perching. Although birds do have both active and passive perching capabilities depending on their needs, there is no UAV perching system with both capabilities. In this project, we focused on filling this gap. Inspired by anatomical analysis of bird legs and feet, a novel perching system has been developed to implement the biomimetic action of both active grasping and passive perching. In addition, for developing a robust and autonomous perching system, the following objectives were included in this project. The statics model was derived through both quasi-static and analytical methods. The stable grasping condition and the grasping targets of the mechanical gripper were studied through static analysis. Furthermore, the contact behavior between each foot and the perched object was modeled and evaluated in SimMechanics based on a contact force model derived through the principle of virtual work. The kinematic model of the UAV perching system was formulated with Euler angles and quaternions. The propulsion model of the brushless motors was also introduced and calibrated. In addition, a flight dynamics model of the UAV system was developed for simulation-based analysis prior to developing a hardware prototype and flight experiments. A special inertial measurement unit (IMU) was designed which can indirectly calculate the angular acceleration from the angular velocity and linear acceleration readings. Moreover, a commercial-off-the-shelf (COTS) autopilot, the APM 2.6, was selected for the autonomous flight control of the quadrotor. The APM 2.6 is a complete open-source autopilot system, which allows the user to turn any fixed-wing, rotary-wing or multi-rotor vehicle into a fully autonomous vehicle capable of performing programmed GPS missions with pre-programmed waypoints. In addition, algorithms for inverted pendulum control and autonomous perching control were introduced. A proportional-integral-derivative (PID) controller was used with the simplified UAV-perching inverted-pendulum model for horizontal balance, and the performance of the controller was verified through both simulation and experiment. In addition, to achieve autonomous perching, guidance and control algorithms were developed for the UAV perching system. For guidance, the desired flight trajectory was developed based on bio-behavioral tau theory, which was established from studying the natural motion patterns of animals and of human arms approaching a fixed or moving target for grasping or capturing. The autonomous flight control was also implemented through a PID controller, and autonomous flight performance was verified through simulation in SimMechanics. Finally, prototyping of our designs was conducted across different generations of the bio-inspired UAV perching system, including leg, gripper, and full-system prototypes. Both machined and 3D-printed prototypes were built, and the performance of these prototypes was tested through experiments.
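
    A discrete PID loop of the kind used for horizontal balance above can be sketched in a few lines. The snippet below is a generic textbook-style PID implementation driving an invented first-order toy plant; the gains are placeholders and it is not the controller tuned in the dissertation.

```python
# Generic discrete PID controller sketch (not the dissertation's tuned
# controller): the gains and the first-order toy plant below are invented.
class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant (e.g. a tilt angle) toward zero.
dt, angle = 0.01, 0.3
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
for _ in range(500):
    command = pid.update(setpoint=0.0, measurement=angle)
    angle += (command - angle) * dt        # crude plant response
print(f"angle after 5 s: {angle:.4f} rad")
```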

  14. Monitoring height and greenness of non-woody floodplain vegetation with UAV time series

    NASA Astrophysics Data System (ADS)

    van Iersel, Wimala; Straatsma, Menno; Addink, Elisabeth; Middelkoop, Hans

    2018-07-01

    Vegetation in river floodplains has important functions for biodiversity, but can also have a negative influence on flood safety. Floodplain vegetation is becoming increasingly heterogeneous in space and time as a result of river restoration projects. To document the spatio-temporal patterns of floodplain vegetation, efficient monitoring techniques are needed. Monitoring is commonly performed by mapping floodplains based on single-epoch remote sensing data, thereby not considering the seasonal dynamics of vegetation. The rising availability of unmanned aerial vehicles (UAVs) increases the potential monitoring frequency. Therefore, we aimed to evaluate the performance of multi-temporal high-spatial-resolution imagery, collected with a UAV, for recording the dynamics of floodplain vegetation height and greenness over a growing season. Since the classification accuracy of current airborne surveys remains insufficient for low vegetation types, we focussed on the seasonal variation of herbaceous and grassy vegetation with a height of up to 3 m. Field reference data on vegetation height were collected six times during one year in 28 field plots within a single floodplain along the Waal River, the main distributary of the Rhine River in the Netherlands. Simultaneously with each field survey, we recorded UAV true-colour and false-colour imagery, from which normalized digital surface models (nDSMs) and a consumer-grade camera vegetation index (CGCVI) were calculated. We observed that: (1) the accuracy of a UAV-derived digital terrain model (DTM) varies over the growing season and is highest during winter when the vegetation is dormant, (2) vegetation height can be determined from the nDSMs in leaf-on conditions via linear regression (RMSE = 0.17-0.33 m), (3) the multi-temporal nDSMs yielded meaningful temporal profiles of greenness and vegetation height and (4) herbaceous vegetation shows hysteresis between greenness and vegetation height, but no clear hysteresis was observed for grassland vegetation. These results show the high potential of UAV-borne sensors for increasing the classification accuracy of low floodplain vegetation within the framework of floodplain monitoring.
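
    The vegetation-height retrieval described above reduces to two steps: subtracting a (winter) DTM from each epoch's DSM to obtain an nDSM, and regressing field-measured heights against the nDSM values at the plot locations. The sketch below shows those two steps on synthetic arrays; the numbers are invented and it is not the authors' processing chain.

```python
# Sketch of the two core steps: nDSM = DSM - DTM, then a linear regression of
# field-measured vegetation height against nDSM values at the plot locations.
# Synthetic numbers, not the study's data or processing chain.
import numpy as np

rng = np.random.default_rng(2)
dsm = rng.uniform(10.0, 13.0, (100, 100))    # leaf-on surface model [m]
dtm = np.full((100, 100), 10.0)              # winter terrain model [m]
ndsm = dsm - dtm                             # canopy height proxy [m]

# Plot locations (row, col); invented field heights roughly track the nDSM.
plots = [(10, 10), (30, 55), (60, 20), (80, 75)]
ndsm_at_plots = np.array([ndsm[r, c] for r, c in plots])
field_height = 0.9 * ndsm_at_plots + rng.normal(0.0, 0.1, len(plots))

slope, intercept = np.polyfit(ndsm_at_plots, field_height, deg=1)
predicted = slope * ndsm_at_plots + intercept
rmse = np.sqrt(np.mean((predicted - field_height) ** 2))
print(f"height = {slope:.2f} * nDSM + {intercept:.2f},  RMSE = {rmse:.2f} m")
```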

  15. UAV-Based Hyperspectral Remote Sensing for Precision Agriculture: Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Angel, Y.; Parkes, S. D.; Turner, D.; Houborg, R.; Lucieer, A.; McCabe, M.

    2017-12-01

    Modern agricultural production relies on monitoring crop status by observing and measuring variables such as soil condition, plant health, fertilizer and pesticide effect, irrigation and crop yield. Managing all of these factors is a considerable challenge for crop producers. As such, integrated technological solutions that enable improved diagnostics of field condition to maximize profits, while minimizing environmental impacts, would be of much interest. Such challenges can be addressed by implementing remote sensing systems such as hyperspectral imaging to produce precise biophysical indicator maps across the various cycles of crop development. Recent progress in unmanned aerial vehicles (UAVs) has advanced traditional satellite-based capabilities, providing a capacity for high spatial, spectral and temporal response. However, while some hyperspectral sensors have been developed for use onboard UAVs, significant investment is required to develop a system and data processing workflow that retrieves accurately georeferenced mosaics. Here we explore the use of a pushbroom hyperspectral camera integrated on board a multi-rotor UAV system to measure the surface reflectance in 272 distinct spectral bands across a wavelength range spanning 400-1000 nm, and outline the requirements for sensor calibration, integration onto a stable UAV platform providing accurate positional data, flight planning, and the development of data post-processing workflows for georeferenced mosaics. The provision of high-quality, geo-corrected imagery facilitates the development of metrics of vegetation health that can be used to identify potential problems such as production inefficiencies, diseases and nutrient deficiencies, and can be combined with other data streams to enable improved crop management. Immense opportunities remain to be exploited in the implementation of UAV-based hyperspectral sensing (and its combination with other imaging systems) to provide a transferable and scalable integrated framework for crop growth monitoring and yield prediction. Here we explore some of the challenges and issues in translating the available technological capacity into a useful and usable image collection and processing flow-path that enables these potential applications to be better realized.

  16. Long-term monitoring of a large landslide by using an Unmanned Aerial Vehicle (UAV)

    NASA Astrophysics Data System (ADS)

    Lindner, Gerald; Schraml, Klaus; Mansberger, Reinfried; Hübl, Johannes

    2015-04-01

    UAVs are currently becoming more and more important in various scientific areas, including forestry, precision farming, archaeology and hydrology. Using these drones in natural hazards research enables a completely new level of data acquisition: flexible in site selection, repeatable in time, cost-efficient, and offering nearly arbitrary spatial resolution. In this study, a rotary-wing Mini-UAV carrying a DSLR camera was used to acquire time series of overlapping aerial images. These photographs were used as input to extract Digital Surface Models (DSMs) as well as orthophotos of the area of interest. The "Pechgraben" area in Upper Austria has a catchment area of approximately 2 km². The geology is dominated by limestone and sandstone. Following heavy rainfall in the late spring of 2013, an area of about 70 ha began to move towards the village in the valley. In addition to the urgent mitigation measures, the slow-moving landslide was monitored approximately every month over a period of more than 18 months, resulting in a detailed documentation of the change process. Movement velocities and height differences were quantified and validated using a dense network of Ground Control Points (GCPs). For further analysis, 14 image flights with a total of 10,000 photographs were performed to create multi-temporal geodata at sub-decimeter resolution for two selected areas of the landslide. Using a UAV for this application proved to be an excellent choice, as it allows short repetition times, low flying heights and high spatial resolution. Furthermore, the UAV operates almost independently of weather and highly autonomously. High-quality results can be expected within a few hours after the photo flight, and the UAV system performs very well in an alpine environment. Time series of the acquired geodata detect changes in topography and provide a long-term documentation of the measures taken to stop the landslide and protect infrastructure from damage.

  17. An evaluation of a UAV guidance system with consumer grade GPS receivers

    NASA Astrophysics Data System (ADS)

    Rosenberg, Abigail Stella

    Remote sensing has been demonstrated to be an important tool in agricultural and natural resource management and research applications; however, limitations exist with traditional platforms (i.e., hand-held sensors, linear moves, vehicle-mounted sensors, airplanes, remotely piloted vehicles (RPVs), unmanned aerial vehicles (UAVs) and satellites). Rapid technological advances in electronics, computers, software applications, and the aerospace industry have dramatically reduced the cost and increased the availability of remote sensing technologies. Remote sensing imagery varies in spectral, spatial, and temporal resolution and is available from numerous providers. Appendix A presented results of a test project that acquired high-resolution aerial photography with an RPV to map the boundary of a 0.42 km² fire area. The project mapped the boundaries of the fire area from a mosaic of the collected aerial images and compared this with ground-based measurements, achieving a 92.4% correlation between the aerial assessment and the ground truth data. Appendix B used multi-objective analysis to quantitatively assess the tradeoffs between different sensor platform attributes and identify the best overall technology; experts were surveyed to identify the best overall technology at three different pixel sizes. Appendix C evaluated the positional accuracy of a relatively low-cost UAV designed for high-resolution remote sensing of small areas in order to determine the positional accuracy of sensor readings. The study evaluated the accuracy of the UAV flight route with respect to the programmed waypoints and the uncertainty of the UAV's GPS position. In addition, the potential displacement of sensor data was evaluated based on (1) GPS measurements on board the aircraft and (2) the autopilot's circuit board with 3-axis gyros and accelerometers (i.e., roll, pitch, and yaw). The accuracies were estimated based on a 95% confidence interval or similar methods. The accuracy achieved in the second and third manuscripts demonstrates that reasonably priced, high-resolution remote sensing via RPVs and UAVs is practical for agriculture and natural resource professionals.
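
    The positional accuracy figures referred to above are commonly summarised as a mean error with a 95% confidence interval. The sketch below computes such an interval for a set of horizontal position errors using a normal approximation; the error values are invented and this is not the dissertation's analysis.

```python
# Sketch: mean horizontal position error with a normal-approximation 95%
# confidence interval. Error values are invented; not the dissertation's data.
import numpy as np

errors_m = np.array([1.2, 2.4, 0.8, 3.1, 1.9, 2.2, 1.5, 2.8, 1.1, 2.0])  # metres

mean = errors_m.mean()
sem = errors_m.std(ddof=1) / np.sqrt(errors_m.size)   # standard error of the mean
half_width = 1.96 * sem                                # z value for 95% confidence

print(f"mean error: {mean:.2f} m  (95% CI: {mean - half_width:.2f} "
      f"to {mean + half_width:.2f} m)")
```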

  18. [Comparison of precision in retrieving soybean leaf area index based on multi-source remote sensing data].

    PubMed

    Gao, Lin; Li, Chang-chun; Wang, Bao-shan; Yang, Gui-jun; Wang, Lei; Fu, Kui

    2016-01-01

    With the innovation of remote sensing technology, remote sensing data sources are more and more abundant. The main aim of this study was to analyze the retrieval accuracy of soybean leaf area index (LAI) based on multi-source remote sensing data, including ground hyperspectral, unmanned aerial vehicle (UAV) multispectral and Gaofen-1 (GF-1) WFV data. The ratio vegetation index (RVI), normalized difference vegetation index (NDVI), soil-adjusted vegetation index (SAVI), difference vegetation index (DVI), and triangular vegetation index (TVI) were each used to establish LAI retrieval models. The models with the highest calibration accuracy were used in the validation. The capability of these three kinds of remote sensing data for LAI retrieval was assessed according to the estimation accuracy of the models. The experimental results showed that the models based on the ground hyperspectral and UAV multispectral data achieved better estimation accuracy (R² greater than 0.69 and RMSE less than 0.4 at the 0.01 significance level) than the model based on WFV data. The RVI logarithmic model based on ground hyperspectral data was slightly superior to the NDVI linear model based on UAV multispectral data (the differences in E(A), R² and RMSE were 0.3%, 0.04 and 0.006, respectively). The models based on WFV data had the lowest estimation accuracy, with R² less than 0.30 and RMSE more than 0.70. The effects of sensor spectral response characteristics, sensor geometric location and spatial resolution on soybean LAI retrieval are discussed. The results demonstrated that ground hyperspectral data are advantageous but not markedly superior to traditional multispectral data for soybean LAI retrieval. WFV imagery with 16 m spatial resolution could not meet the requirements of crop growth monitoring at the field scale. Given its high precision in retrieving soybean LAI and its working efficiency, acquiring agricultural information by UAV remote sensing can be regarded as an optimal approach. Therefore, as more and more remote sensing data sources become available, agricultural UAV remote sensing could become an important information resource for guiding field-scale crop management and provide more scientific and accurate information for precision agriculture research.
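
    The vegetation indices used above are simple combinations of red and near-infrared reflectance. The snippet below shows generic formulations of RVI, NDVI, SAVI and DVI together with a logarithmic RVI-LAI fit on invented reflectance and LAI values; it is not the study's code, and the SAVI soil factor of 0.5 is the commonly assumed default.

```python
# Generic band-ratio vegetation indices and a logarithmic RVI-LAI fit.
# Reflectance and LAI values are invented for illustration; this is not the
# study's code. SAVI uses the commonly assumed soil factor L = 0.5.
import numpy as np

red = np.array([0.08, 0.07, 0.05, 0.04, 0.06])    # red reflectance
nir = np.array([0.30, 0.38, 0.45, 0.52, 0.41])    # near-infrared reflectance
lai = np.array([1.2, 2.0, 3.1, 4.0, 2.6])         # measured leaf area index

rvi = nir / red
ndvi = (nir - red) / (nir + red)
savi = 1.5 * (nir - red) / (nir + red + 0.5)
dvi = nir - red

# Logarithmic model: LAI = a * ln(RVI) + b
a, b = np.polyfit(np.log(rvi), lai, deg=1)
print(f"LAI = {a:.2f} * ln(RVI) + {b:.2f}")
print(f"NDVI range: {ndvi.min():.2f}-{ndvi.max():.2f}, "
      f"SAVI range: {savi.min():.2f}-{savi.max():.2f}, "
      f"DVI range: {dvi.min():.2f}-{dvi.max():.2f}")
```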

  19. The answer is blowing in the wind: free-flying honeybees can integrate visual and mechano-sensory inputs for making complex foraging decisions.

    PubMed

    Ravi, Sridhar; Garcia, Jair E; Wang, Chun; Dyer, Adrian G

    2016-11-01

    Bees navigate in complex environments using visual, olfactory and mechano-sensorial cues. In the lowest region of the atmosphere, the wind environment can be highly unsteady and bees employ fine motor-skills to enhance flight control. Recent work reveals sophisticated multi-modal processing of visual and olfactory channels by the bee brain to enhance foraging efficiency, but it currently remains unclear whether wind-induced mechano-sensory inputs are also integrated with visual information to facilitate decision making. Individual honeybees were trained in a linear flight arena with appetitive-aversive differential conditioning to use a context-setting cue of 3 m s⁻¹ cross-wind direction to enable decisions about either a 'blue' or 'yellow' star stimulus being the correct alternative. Colour stimuli properties were mapped in bee-specific opponent-colour spaces to validate saliency, and to thus enable rapid reverse learning. Bees were able to integrate mechano-sensory and visual information to facilitate decisions that were significantly different to chance expectation after 35 learning trials. An independent group of bees were trained to find a single rewarding colour that was unrelated to the wind direction. In these trials, wind was not used as a context-setting cue and served only as a potential distracter in identifying the relevant rewarding visual stimuli. Comparison between respective groups shows that bees can learn to integrate visual and mechano-sensory information in a non-elemental fashion, revealing an unsuspected level of sensory processing in honeybees, and adding to the growing body of knowledge on the capacity of insect brains to use multi-modal sensory inputs in mediating foraging behaviour. © 2016. Published by The Company of Biologists Ltd.

  20. LUNA: low-flying UAV-based forest monitoring system

    NASA Astrophysics Data System (ADS)

    Keizer, Jan Jacob; Pereira, Luísa; Pinto, Glória; Alves, Artur; Barros, Antonio; Boogert, Frans-Joost; Cambra, Sílvia; de Jesus, Cláudia; Frankenbach, Silja; Mesquita, Raquel; Serôdio, João; Martins, José; Almendra, Ricardo

    2015-04-01

    The LUNA project aims to develop an information system for precision forestry and, in particular, for the monitoring of eucalypt plantations, based first and foremost on multi-spectral imagery acquired with low-flying UAVs. The presentation will focus on the first phase of image acquisition, processing and analysis for a series of pot experiments addressing the main threats to early-stage eucalypt plantations in Portugal, i.e. acute, chronic and cyclic hydric stress, nutrient stress, fungal infections and insect plague attacks. The imaging results will be compared with spectroscopic measurements as well as with eco-physiological and plant morphological measurements. Furthermore, the presentation will show initial results of the project's second phase, comprising field tests in existing eucalypt plantations in north-central Portugal.

  1. Soldier-Robot Team Communication: An Investigation of Exogenous Orienting Visual Display Cues and Robot Reporting Preferences

    DTIC Science & Technology

    2018-02-12

    usability preference. Results under the second focus showed that the frequency with which participants expected status updates differed depending upon the...assistance requests for both navigational route and building selection depending on the type of exogenous visual cues displayed? 3) Is there a difference...in response time to visual reports for both navigational route and building selection depending on the type of exogenous visual cues displayed? 4

  2. Computation and visualization of uncertainty in surgical navigation.

    PubMed

    Simpson, Amber L; Ma, Burton; Vasarhelyi, Edward M; Borschneck, Dan P; Ellis, Randy E; James Stewart, A

    2014-09-01

    Surgical displays do not show uncertainty information with respect to the position and orientation of instruments. Data is presented as though it were perfect; surgeons unaware of this uncertainty could make critical navigational mistakes. The propagation of uncertainty to the tip of a surgical instrument is described and a novel uncertainty visualization method is proposed. An extensive study with surgeons has examined the effect of uncertainty visualization on surgical performance with pedicle screw insertion, a procedure highly sensitive to uncertain data. It is shown that surgical performance (time to insert screw, degree of breach of pedicle, and rotation error) is not impeded by the additional cognitive burden imposed by uncertainty visualization. Uncertainty can be computed in real time and visualized without adversely affecting surgical performance, and the best method of uncertainty visualization may depend upon the type of navigation display. Copyright © 2013 John Wiley & Sons, Ltd.
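
    A minimal, generic sketch (not the authors' implementation) of propagating tracked-pose uncertainty to an instrument tip by first-order, Jacobian-based covariance propagation; the tip offset, pose covariance and small-angle model are assumptions for illustration:

      import numpy as np

      def tip_position(pose, tip_offset):
          """pose = [rx, ry, rz, tx, ty, tz]: small rotations (rad) and translation (mm)."""
          rx, ry, rz, tx, ty, tz = pose
          R = np.array([[1, -rz,  ry],
                        [rz,  1, -rx],
                        [-ry, rx,  1]])          # small-angle rotation approximation
          return R @ tip_offset + np.array([tx, ty, tz])

      def numerical_jacobian(f, x, eps=1e-6):
          J = np.zeros((3, len(x)))
          for i in range(len(x)):
              dx = np.zeros(len(x)); dx[i] = eps
              J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
          return J

      tip_offset = np.array([0.0, 0.0, 150.0])                    # tip 150 mm from tracked body
      pose = np.zeros(6)                                          # nominal pose
      cov_pose = np.diag([1e-6, 1e-6, 1e-6, 0.04, 0.04, 0.04])    # rad^2 / mm^2 (assumed)

      J = numerical_jacobian(lambda p: tip_position(p, tip_offset), pose)
      cov_tip = J @ cov_pose @ J.T                                # first-order propagation
      print("tip 95% bound per axis (mm):", np.round(1.96 * np.sqrt(np.diag(cov_tip)), 2))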

  3. A Collaborative Decision Environment for UAV Operations

    NASA Technical Reports Server (NTRS)

    D'Ortenzio, Matthew V.; Enomoto, Francis Y.; Johan, Sandra L.

    2005-01-01

    NASA is developing Intelligent Mission Management (IMM) technology for science missions employing long-endurance unmanned aerial vehicles (UAVs). The IMM ground-based component is the Collaborative Decision Environment (CDE), a ground system that provides the Mission/Science team with situational awareness, collaboration, and decision-making tools. The CDE is used for pre-flight planning, mission monitoring, and visualization of acquired data. It integrates external data products used for planning and executing a mission, such as weather, satellite data products, and topographic maps, by leveraging established and emerging Open Geospatial Consortium (OGC) standards to acquire external data products via the Internet, and an industry-standard geographic information system (GIS) toolkit for visualization. As a Science/Mission team may be geographically dispersed, the CDE is capable of providing access to remote users across wide area networks using Web Services technology. A prototype CDE is being developed for an instrument checkout flight on a manned aircraft in the fall of 2005, in preparation for a full deployment in support of the US Forest Service and NASA Ames Western States Fire Mission in 2006.

  4. Uav-Based Detection of Unknown Radioactive Biomass Deposits in Chernobyl's Exclusion Zone

    NASA Astrophysics Data System (ADS)

    Briechle, S.; Sizov, A.; Tretyak, O.; Antropov, V.; Molitor, N.; Krzystek, P.

    2018-05-01

    Shortly after the explosion of the Chernobyl nuclear power plant (ChNPP) in 1986, radioactive fall-out and contaminated trees (the so-called Red Forest) were buried in the Chernobyl Exclusion Zone (ChEZ). These days, exact locations of the buried contaminated material are needed. Moreover, 3D vegetation maps are necessary to simulate the impact of tornados and forest fire. After 30 years, some of the so-called trenches and clamps are visible. However, some of them are overgrown and have slightly settled in the centimeter and decimeter range. This paper presents a pipeline that comprises 3D vegetation mapping and machine learning methods to precisely map trenches and clamps from remote sensing data. The dataset for our experiments consists of UAV-based LiDAR data, multi-spectral data, and aerial gamma-spectrometry data. Depending on the study area, overall accuracies ranging from 95.6% to 99.0% were reached for the classification of radioactive deposits. Our first results demonstrate an accurate and reliable UAV-based detection of unknown radioactive biomass deposits in the ChEZ.

  5. Multi-target Detection, Tracking, and Data Association on Road Networks Using Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Barkley, Brett E.

    A cooperative detection and tracking algorithm for multiple targets constrained to a road network is presented for fixed-wing Unmanned Air Vehicles (UAVs) with a finite field of view. Road networks of interest are formed into graphs with nodes that indicate the target likelihood ratio (before detection) and position probability (after detection). A Bayesian likelihood ratio tracker recursively assimilates target observations until the cumulative observations at a particular location pass a detection criterion. At this point, a target is considered detected and a position probability is generated for the target on the graph. Data association is subsequently used to route future measurements either to update the likelihood ratio tracker (for undetected targets) or to update a position probability (for previously detected targets). Three strategies for motion planning of UAVs are proposed to balance searching for new targets with tracking known targets for a variety of scenarios. Performance was tested in Monte Carlo simulations for a variety of mission parameters, including tracking on road networks of varying complexity and using UAVs at various altitudes.
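
    A minimal sketch (not the paper's implementation) of the recursive likelihood-ratio update over road-network nodes that the abstract describes; the detection and false-alarm probabilities and the threshold are assumed values:

      import numpy as np

      P_D, P_FA = 0.8, 0.1          # assumed sensor detection / false-alarm probabilities
      THRESHOLD = 100.0             # declare a detection when the ratio exceeds this

      def update_likelihood_ratio(lr, observed_nodes, detections):
          """Multiply each observed node's ratio by P(z | target) / P(z | no target)."""
          for node, z in zip(observed_nodes, detections):
              lr[node] *= (P_D / P_FA) if z else ((1 - P_D) / (1 - P_FA))
          return lr

      lr = np.ones(5)                              # 5 road-network nodes, neutral prior ratio
      for _ in range(6):                           # repeated looks at node 2, all detections
          lr = update_likelihood_ratio(lr, observed_nodes=[2], detections=[True])

      detected = np.where(lr > THRESHOLD)[0]
      print("likelihood ratios:", np.round(lr, 1), "-> detected nodes:", detected)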

  6. Fault detection and multiclassifier fusion for unmanned aerial vehicles (UAVs)

    NASA Astrophysics Data System (ADS)

    Yan, Weizhong

    2001-03-01

    UAVs demand more accurate fault accommodation for their mission manager and vehicle control system in order to achieve a reliability level that is comparable to that of a piloted aircraft. This paper applies multi-classifier fusion techniques to achieve the necessary performance of the fault detection function for the Lockheed Martin Skunk Works (LMSW) UAV Mission Manager. Three different classifiers that meet the design requirements of the UAV fault detection function are employed. The binary decision outputs from the classifiers are then aggregated using three different classifier fusion schemes, namely majority vote, weighted majority vote, and Naive Bayes combination. All three schemes are simple and need no retraining. The three fusion schemes (except the majority vote, which gives an average performance of the three classifiers) show classification performance that is better than or equal to that of the best individual classifier. The unavoidable correlation between the classifiers with binary outputs is observed in this study. We conclude that it is the correlation between the classifiers that limits the ability of the fusion schemes to achieve even better performance.
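
    A minimal sketch of the three fusion rules named above, applied to binary fault/no-fault decisions from three classifiers; the weights and per-classifier accuracies are illustrative assumptions, not values from the study:

      import numpy as np

      votes = np.array([1, 0, 1])             # binary decisions from three classifiers
      weights = np.array([0.5, 0.2, 0.3])     # assumed reliability weights
      acc = np.array([0.90, 0.75, 0.85])      # assumed per-classifier accuracies

      # Majority vote: declare a fault if most classifiers vote for it.
      majority = int(votes.sum() > len(votes) / 2)

      # Weighted majority vote: compare the total weight behind each decision.
      weighted = int(np.dot(weights, votes) > np.dot(weights, 1 - votes))

      # Naive Bayes combination: multiply per-class vote likelihoods, assuming independence.
      likelihood = np.array([0.5, 0.5])       # prior over [no fault, fault]
      for v, a in zip(votes, acc):
          # a vote that matches the class has probability `a`, otherwise 1 - a
          likelihood *= np.where(np.array([0, 1]) == v, a, 1 - a)
      naive_bayes = int(np.argmax(likelihood))

      print(majority, weighted, naive_bayes)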

  7. UAV-borne X-band radar for MAV collision avoidance

    NASA Astrophysics Data System (ADS)

    Moses, Allistair A.; Rutherford, Matthew J.; Kontitsis, Michail; Valavanis, Kimon P.

    2011-05-01

    Increased use of Miniature (Unmanned) Aerial Vehicles (MAVs) is coincidentally accompanied by a notable lack of sensors suitable for enabling further increases in levels of autonomy and, consequently, integration into the National Airspace System (NAS). The majority of available sensors suitable for MAV integration are based on infrared detectors, focal plane arrays, optical and ultrasonic rangefinders, etc. These sensors are generally not able to detect or identify other MAV-sized targets and, when detection is possible, considerable computational power is typically required for successful identification. Furthermore, performance of visual-range optical sensor systems can suffer greatly when operating in the conditions that are typically encountered during search and rescue, surveillance, combat, and most common MAV applications. However, the addition of a miniature radar system can, in concert with other sensors, provide comprehensive target detection and identification capabilities for MAVs. This trend is observed in manned aviation, where radar systems are the primary detection and identification sensor system. Within this document a miniature, lightweight X-band radar system for use on a miniature (710 mm rotor diameter) rotorcraft is described. We present analyses of the performance of the system in a realistic scenario with two MAVs. Additionally, an analysis of MAV navigation and collision avoidance behaviors is performed to determine the effect of integrating radar systems into MAV-class vehicles.

  8. Visual Uav Trajectory Plan System Based on Network Map

    NASA Astrophysics Data System (ADS)

    Li, X. L.; Lin, Z. J.; Su, G. Z.; Wu, B. Y.

    2012-07-01

    The base map of the current trajectory-planning software for Unmanned Aircraft Vehicles, UP-30, is a vector diagram, and navigation points are drawn manually. In field operation, however, the efficiency and quality of this work suffer from insufficient information, screen reflection, inconvenient calculation and other factors. If the work is done indoors, the effect of external factors on the results is eliminated. Users of the network earth service can browse free, high-definition satellite images of the world by downloading a client application and can export high-resolution images in a standard file format, which makes trajectory planning far more convenient; the images must, however, be processed by coordinate transformation and geometric correction. In addition, according to the required mapping scale, camera parameters and overlap, the exposure interval and the distance between adjacent trajectories can be calculated automatically, which improves the degree of automation of data collection. The software judges the position of the next point from the intersection of the trajectory with the survey area and determines point positions according to the trajectory spacing; points can also be adjusted manually, so trajectory planning is both automatic and flexible. For safety, the data can be used in flight only after a simulated flight, and finally all of the data can be exported with a single key.
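
    A minimal sketch of the exposure-interval and line-spacing calculation described above, derived from image size, ground sample distance and overlaps; the camera values and overlap percentages are illustrative assumptions:

      def flight_plan_spacing(image_width_px, image_height_px, gsd_m,
                              forward_overlap, side_overlap):
          """Return (exposure interval, distance between adjacent flight lines) in metres."""
          footprint_along = image_height_px * gsd_m      # ground footprint along track
          footprint_across = image_width_px * gsd_m      # ground footprint across track
          exposure_interval = footprint_along * (1 - forward_overlap)
          line_spacing = footprint_across * (1 - side_overlap)
          return exposure_interval, line_spacing

      # Example: 4000 x 3000 px camera, 5 cm GSD, 80% forward / 60% side overlap.
      interval, spacing = flight_plan_spacing(4000, 3000, 0.05, 0.80, 0.60)
      print(f"exposure every {interval:.0f} m, flight lines {spacing:.0f} m apart")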

  9. Autonomous Aerial Refueling Ground Test Demonstration—A Sensor-in-the-Loop, Non-Tracking Method

    PubMed Central

    Chen, Chao-I; Koseluk, Robert; Buchanan, Chase; Duerner, Andrew; Jeppesen, Brian; Laux, Hunter

    2015-01-01

    An essential capability for an unmanned aerial vehicle (UAV) to extend its airborne duration without increasing the size of the aircraft is autonomous aerial refueling (AAR). This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue style autonomous aerial refueling tasks by combining sensitivity adjustments of a 3D Flash LIDAR camera with computer vision based image-processing techniques. The method overcomes the inherent ambiguity issues of reconstructing 3D information from traditional 2D images by taking advantage of ready-to-use 3D point cloud data from the camera, followed by well-established computer vision techniques. These techniques include curve fitting algorithms and outlier removal with the random sample consensus (RANSAC) algorithm to reliably estimate the drogue center in 3D space, as well as to establish the relative position between the probe and the drogue. To demonstrate the feasibility of the proposed method on a real system, a ground navigation robot was designed and fabricated. Results presented in the paper show that, using images acquired from a 3D Flash LIDAR camera as real-time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between the robot and the target autonomously. PMID:25970254
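
    A minimal sketch of the RANSAC idea used for drogue localization: repeatedly fit a circle to minimal samples of rim points and keep the model with the most inliers. The rim points here are synthetic 2D points standing in for the projected drogue rim; the real system works on 3D Flash LIDAR point clouds:

      import numpy as np

      rng = np.random.default_rng(0)

      def circle_from_3pts(p1, p2, p3):
          """Circumscribed circle (center, radius) of three 2D points, or None if collinear."""
          ax, ay = p1; bx, by = p2; cx, cy = p3
          d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
          if abs(d) < 1e-12:
              return None
          ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
                + (cx**2 + cy**2) * (ay - by)) / d
          uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
                + (cx**2 + cy**2) * (bx - ax)) / d
          center = np.array([ux, uy])
          return center, np.linalg.norm(p1 - center)

      def ransac_circle(points, iters=200, tol=0.02):
          best_inliers, best_model = 0, None
          for _ in range(iters):
              model = circle_from_3pts(*points[rng.choice(len(points), 3, replace=False)])
              if model is None:
                  continue
              center, radius = model
              residuals = np.abs(np.linalg.norm(points - center, axis=1) - radius)
              inliers = int(np.sum(residuals < tol))
              if inliers > best_inliers:
                  best_inliers, best_model = inliers, model
          return best_model

      # Synthetic rim: a 0.3 m radius circle with noise, plus scattered outliers.
      theta = rng.uniform(0, 2 * np.pi, 200)
      rim = np.c_[0.3 * np.cos(theta), 0.3 * np.sin(theta)] + rng.normal(0, 0.005, (200, 2))
      outliers = rng.uniform(-0.5, 0.5, (30, 2))
      center, radius = ransac_circle(np.vstack([rim, outliers]))
      print("estimated center:", np.round(center, 3), "radius:", round(radius, 3))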

  10. Multi-sensor Navigation System Design

    DOT National Transportation Integrated Search

    1971-03-01

    This report treats the design of navigation systems that collect data from two or more on-board measurement subsystems and process this data in an on-board computer. Such systems are called Multi-sensor Navigation Systems. : The design begins with t...

  11. Unmanned aerial vehicle (UAV)-based monitoring of a landslide: Gallenzerkogel landslide (Ybbs-Lower Austria) case study.

    PubMed

    Eker, Remzi; Aydın, Abdurrahim; Hübl, Johannes

    2017-12-19

    In the present study, UAV-based monitoring of the Gallenzerkogel landslide (Ybbs, Lower Austria) was carried out in three flight missions. High-resolution digital elevation models (DEMs), orthophotos, and dense point clouds were generated from UAV-based aerial photos via structure-from-motion (SfM). According to ground control points (GCPs), an average root mean square error (RMSE) of 4 cm was found for all models. In addition, light detection and ranging (LIDAR) data from 2009, representing the pre-failure topography, were utilized as a digital terrain model (DTM) and digital surface model (DSM). First, the DEM of difference (DoD) between the first UAV flight data and the LIDAR-DTM was determined, and according to the generated DoD deformation map, an elevation difference of between -6.6 and 2 m was found. Over the landslide area, a total of 4380.1 m³ of slope material had been eroded, while 297.4 m³ of the material had accumulated within the most active part of the slope. In addition, 688.3 m³ of the total eroded material belonged to the road destroyed by the landslide. Because of the vegetation surrounding the landslide area, the Multiscale Model-to-Model Cloud Comparison (M3C2) algorithm was then applied to compare the first and second UAV flight data. After eliminating both the distance uncertainty values higher than 15 cm and the non-significant changes, the M3C2 distance obtained was between -2.5 and 2.5 m. Moreover, the high-resolution orthophoto generated by the third flight allowed visual monitoring of the ongoing control/stabilization work in the area.
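
    A minimal sketch of the DEM-of-Difference (DoD) step described above: subtract the pre-failure LIDAR DTM from the UAV-derived DEM and sum erosion and accumulation volumes. The tiny grids and cell size are synthetic; a real workflow would use co-registered rasters of equal extent:

      import numpy as np

      cell_size = 0.5                                   # metres per raster cell (assumed)
      lidar_dtm = np.zeros((4, 4))                      # pre-failure surface
      uav_dem = np.array([[-0.4, -0.2, 0.0, 0.1],
                          [-0.6, -0.5, 0.0, 0.2],
                          [-0.3, -0.1, 0.1, 0.3],
                          [ 0.0,  0.0, 0.0, 0.0]])      # post-failure surface

      dod = uav_dem - lidar_dtm                         # elevation change per cell
      cell_area = cell_size ** 2
      eroded = -dod[dod < 0].sum() * cell_area          # material lost (m^3)
      accumulated = dod[dod > 0].sum() * cell_area      # material deposited (m^3)

      print(f"elevation change from {dod.min():.1f} to {dod.max():.1f} m")
      print(f"eroded {eroded:.2f} m^3, accumulated {accumulated:.2f} m^3")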

  12. Thermal Imaging of Subsurface Coal Fires by means of an Unmanned Aerial Vehicle (UAV) in the Autonomous Province Xinjiang, PRC

    NASA Astrophysics Data System (ADS)

    Vasterling, Margarete; Schloemer, Stefan; Fischer, Christian; Ehrler, Christoph

    2010-05-01

    Spontaneous combustion of coal and the resulting coal fires lead to very high temperatures in the subsurface. A large part of the heat is transferred to the surface by convective and conductive transport, inducing a more or less pronounced thermal anomaly. During the past decade satellite-based infrared imaging (ASTER, MODIS) was the method of choice for coal fire detection on a local and regional scale. However, the resolution is by far too low for a detailed analysis of single coal fires, which is an essential prerequisite for corrective measures (i.e. fire fighting) and for calculating carbon dioxide emissions based on a complex correlation between energy release and CO2 generation. Consequently, within the framework of the Sino-German research project "Innovative Technologies for Exploration, Extinction and Monitoring of Coal Fires in Northern China", a new concept was developed and successfully tested. An unmanned aerial vehicle (UAV) was equipped with a lightweight camera for thermographic imaging (resolution 160 by 120 pixels, dynamic range -20 to 250°C) and for visual imaging. The UAV, designed as an octocopter, is able to hover at GPS-controlled waypoints during predefined flight missions. The application of a UAV has several advantages. Compared to point measurements on the ground, the thermal imagery quickly provides the spatial distribution of the temperature anomaly with a much better resolution, and areas otherwise not accessible (due to topography, fire-induced cracks, etc.) can easily be investigated. The results of areal surveys on two coal fires in Xinjiang are presented. Georeferenced thermal and visual images were mosaicked together and analyzed. UAV-borne data compare well with temperatures measured directly on the ground and cover large areas in detail. However, measuring surface temperature alone is not sufficient. Simultaneous measurements made at the surface and at roughly 15 cm depth revealed substantial temperature gradients in the upper soil. Thus the temperature measured at the surface underestimates the energy emitted by the subsurface coal fire. In addition, surface temperature is strongly influenced by solar radiation and the prevailing ambient conditions (wind, temperature, humidity). As a consequence, there is no simple correlation between surface and subsurface soil temperature. Efforts have been made to set up a coupled energy transport and energy balance model for the near surface that considers thermal conduction, solar irradiation, thermal radiative energy and ambient temperature. The model can help to validate space-borne and UAV-borne thermal imagery and link surface to subsurface temperature, but it depends on in-situ measurements for input parameter determination and calibration. The results obtained so far strongly necessitate the integration of different data sources (in-situ / remote; point / area; local / medium scale) to obtain a reliable energy release estimation, which is then used for coal fire characterization.

  13. Lifting business process diagrams to 2.5 dimensions

    NASA Astrophysics Data System (ADS)

    Effinger, Philip; Spielmann, Johannes

    2010-01-01

    In this work, we describe our visualization approach for business processes using 2.5-dimensional techniques (2.5D). The idea of 2.5D is to add the concept of layering to a two-dimensional (2D) visualization; the layers are arranged in a three-dimensional display space. For the modeling of the business processes, we use the Business Process Modeling Notation (BPMN). The benefit of connecting BPMN with a 2.5D visualization is not only to obtain a more abstract view of the business process models but also to develop layering criteria that eventually increase the readability of the BPMN model compared to 2D. We present a 2.5D Navigator for BPMN models that offers different perspectives for visualization, and we therefore also develop BPMN-specific perspectives. The 2.5D Navigator combines the 2.5D approach with perspectives and allows free navigation in the three-dimensional display space. We also demonstrate our tool and the libraries used for implementation of the visualizations. The underlying general framework for 2.5D visualizations is explored and presented in a fashion that allows it to be easily used for different applications. Finally, an evaluation of our navigation tool demonstrates that we can achieve satisfying and aesthetic displays of diagrams representing BPMN models in 2.5D visualizations.

  14. The effects of link format and screen location on visual search of web pages.

    PubMed

    Ling, Jonathan; Van Schaik, Paul

    2004-06-22

    Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.

  15. Image processing and applications based on visualizing navigation service

    NASA Astrophysics Data System (ADS)

    Hwang, Chyi-Wen

    2015-07-01

    When facing the "overabundant" of semantic web information, in this paper, the researcher proposes the hierarchical classification and visualizing RIA (Rich Internet Application) navigation system: Concept Map (CM) + Semantic Structure (SS) + the Knowledge on Demand (KOD) service. The aim of the Multimedia processing and empirical applications testing, was to investigating the utility and usability of this visualizing navigation strategy in web communication design, into whether it enables the user to retrieve and construct their personal knowledge or not. Furthermore, based on the segment markets theory in the Marketing model, to propose a User Interface (UI) classification strategy and formulate a set of hypermedia design principles for further UI strategy and e-learning resources in semantic web communication. These research findings: (1) Irrespective of whether the simple declarative knowledge or the complex declarative knowledge model is used, the "CM + SS + KOD navigation system" has a better cognition effect than the "Non CM + SS + KOD navigation system". However, for the" No web design experience user", the navigation system does not have an obvious cognition effect. (2) The essential of classification in semantic web communication design: Different groups of user have a diversity of preference needs and different cognitive styles in the CM + SS + KOD navigation system.

  16. Bayesian Software Health Management for Aircraft Guidance, Navigation, and Control

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Mbaya, Timmy; Menghoel, Ole

    2011-01-01

    Modern aircraft, both piloted fly-by-wire commercial aircraft and UAVs, increasingly depend on highly complex, safety-critical software systems with many sensors and computer-controlled actuators. Despite careful design and V&V of the software, severe incidents have happened due to malfunctioning software. In this paper, we discuss the use of Bayesian networks (BNs) to monitor the health of the on-board software and sensor system, and to perform advanced on-board diagnostic reasoning. We focus on the approach to developing reliable and robust health models for the combined software and sensor systems.

  17. Applications of UAVs to Measurement and Monitoring of Anthropogenic Contamination of an Urban Wildlife Preserve

    NASA Astrophysics Data System (ADS)

    Higa, E.; Valencia, D.; Hunt, A.

    2017-12-01

    Over the past decade, the use of unmanned aerial vehicles (UAVs) has seen unprecedented growth in diverse research areas due to advances in UAV hardware and reduced total operating costs. These developments have given environmental investigators a new aerial data acquisition technique that can be used not only to survey large areas of terrain in a time-efficient and cost-effective manner but also to gather previously almost unattainable air quality data. Vertically resolved profiles of air pollutant data can be readily constructed. This project's goal is to produce a time-resolved (seasonal) aerial survey of a 150-acre section of a 1300-acre ecologically diverse park of bottomland forests, wetlands and prairies. This ecosystem provides abundant habitats for a diverse wildlife community. The section was chosen due to its close proximity to the city landfill located 0.5 miles due north of it. The process of collecting UAV aerial images at a constant altitude (approximately 200 ft) on a bi-monthly basis (for a period of 6 months) has commenced. The UAV has been fitted with a custom-made mount to secure an Ultrafine Particle (UFP) counter; this is providing information on UFP levels over the study area as a proxy for airborne particle inputs to the site. Sediment samples will be taken from several runoff ponds within the survey area to evaluate possible anthropogenic contamination of the park. Post-processing imaging software, DroneDeploy, is being used to create an orthomosaic, topographic surface and 3D model that can be integrated with GIS platforms to create a comprehensive and cohesive multi-layered data set. Data sets of this nature will provide information on temporally constrained sources of runoff material to the pond areas in the preserve.

  18. Vegetation Removal from Uav Derived Dsms, Using Combination of RGB and NIR Imagery

    NASA Astrophysics Data System (ADS)

    Skarlatos, D.; Vlachos, M.

    2018-05-01

    Current advancements in photogrammetric software, along with the affordability and wide spread of Unmanned Aerial Vehicles (UAVs), allow for rapid, timely and accurate 3D modelling and mapping of small to medium sized areas. Although the importance and applications of large-format aerial cameras and overlapping photographs in Digital Surface Model (DSM) production, as well as of LIDAR data, are well documented in the literature, this is not the case for UAV photography. Additionally, the main disadvantage of photogrammetry is the inability to map the dead ground (terrain) in areas that include vegetation. This paper assesses the use of near-infrared imagery captured by small UAV platforms to automatically remove vegetation from Digital Surface Models (DSMs) and obtain a Digital Terrain Model (DTM). Two areas were tested, based on the availability of ground reference points both under trees and among vegetation, as well as on open terrain. RGB and near-infrared UAV photography was captured and processed using Structure from Motion (SfM) and Multi View Stereo (MVS) algorithms to generate DSMs and corresponding colour and NIR orthoimages with pixel sizes of 0.2 m and 0.25 m for the two test sites, respectively. The orthophotos were then used to eliminate the vegetation from the DSMs using the NDVI index, thresholding and masking. Following that, different interpolation algorithms, according to the test sites, were applied to fill in the gaps and create DTMs. Finally, a statistical analysis was performed using reference terrain points captured in the field, both on dead ground and under vegetation, to evaluate the accuracy of the whole process and assess the overall accuracy of the derived DTMs in contrast with the DSMs.
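
    A minimal sketch of the vegetation-removal chain described above: compute NDVI from the NIR and red orthoimages, mask DSM cells above an NDVI threshold, and interpolate over the gaps to approximate a DTM. The grids, threshold and choice of interpolator are illustrative assumptions:

      import numpy as np
      from scipy.interpolate import griddata

      red = np.random.default_rng(1).uniform(0.03, 0.15, (50, 50))    # red orthoimage
      nir = np.random.default_rng(2).uniform(0.20, 0.60, (50, 50))    # NIR orthoimage
      dsm = np.random.default_rng(3).uniform(100.0, 102.0, (50, 50))  # digital surface model

      ndvi = (nir - red) / (nir + red)
      vegetation = ndvi > 0.6                    # assumed NDVI threshold

      rows, cols = np.indices(dsm.shape)
      ground = ~vegetation
      dtm = griddata(points=np.c_[rows[ground], cols[ground]],
                     values=dsm[ground],
                     xi=(rows, cols),
                     method="linear")            # one of several interpolators one might test

      print(f"masked {vegetation.sum()} vegetation cells; "
            f"DTM range {np.nanmin(dtm):.2f}-{np.nanmax(dtm):.2f} m")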

  19. Moving in Dim Light: Behavioral and Visual Adaptations in Nocturnal Ants.

    PubMed

    Narendra, Ajay; Kamhi, J Frances; Ogawa, Yuri

    2017-11-01

    Visual navigation is a benchmark information processing task that can be used to identify the consequences of being active in dim-light environments. Visual navigational information that animals use during the day includes celestial cues, such as the sun or the pattern of polarized skylight, and terrestrial cues, such as the entire panorama, canopy pattern, or significant salient features in the landscape. At night, some of these navigational cues are either unavailable or are significantly dimmer or less conspicuous than during the day. Even under these circumstances, animals navigate between locations of importance. Ants are a tractable system for studying navigation during day and night because the fine-scale movement of individual animals can be recorded in high spatial and temporal detail. Ant species range from strictly diurnal to crepuscular to nocturnal. In addition, a number of species have the ability to change from a day-active to a night-active lifestyle owing to environmental demands. Ants also offer an opportunity to identify the evolution of sensory structures for discrete temporal niches not only between species but also within a single species. Their unique caste system, with an exclusively pedestrian mode of locomotion in workers and an exclusive life on the wing in males, allows us to disentangle sensory adaptations that cater for different lifestyles. In this article, we review the visual navigational abilities of nocturnal ants and identify the optical and physiological adaptations they have evolved for being efficient visual navigators in dim light. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  20. Navigation ability dependent neural activation in the human brain: an fMRI study.

    PubMed

    Ohnishi, Takashi; Matsuda, Hiroshi; Hirakata, Makiko; Ugawa, Yoshikazu

    2006-08-01

    Visual-spatial navigation in familiar and unfamiliar environments is an essential requirement of daily life. Animal studies have indicated the importance of the hippocampus for navigation, and neuroimaging studies have demonstrated gender differences and strategy-dependent differences in the neural substrates of navigation. Using functional magnetic resonance imaging, we measured brain activity related to navigation in four groups of normal volunteers: good navigators (males and females) and poor navigators (males and females). In a whole-group analysis, task-related activity was noted in the hippocampus, parahippocampal gyrus, posterior cingulate cortex, precuneus, parietal association areas, and the visual association areas. In group comparisons, good navigators showed stronger activation in the medial temporal area and precuneus than poor navigators. There was neither a sex effect nor an interaction between sex and navigation ability. The activity in the left medial temporal areas was positively correlated with task performance, whereas activity in the right parietal area was negatively correlated with task performance. Furthermore, the activity in the bilateral medial temporal areas was positively correlated with scores reflecting preferred navigation strategies, whereas activity in the bilateral superior parietal lobules was negatively correlated with them. Our data suggest that differences in brain activity related to navigation reflect navigation skill and strategies.

  1. A Small Autonomous Unmanned Aerial Vehicle, Ant-Plane 4, for aeromagnetic survey

    NASA Astrophysics Data System (ADS)

    Funaki, M.; Tanabe, S.; Project, A.

    2007-05-01

    Autonomous unmanned aerial vehicles (UAVs) are expected to be used in Antarctica for geophysical research because of their economy and operational safety. Under the Ant-Plane project we have developed small UAVs with autonomous navigation referenced to GPS, an onboard magnetometer, meteorological devices and a digital camera. The UAV is intended for operation in the summer season in coastal areas of Antarctica, at temperatures above -15 C under calm wind. Ant-Plane 4 can fly continuously for more than 500 km, and probably more than 1000 km, although a flight in Antarctica has not yet succeeded. The FRP airframe is a pusher-type drone with a 2.6 m span and 2.0 m length, powered by a two-cycle, two-cylinder 86 cc gasoline engine (7.2 HP). The maximum takeoff weight is 25 kg, including 1 kg of payload, and the cruising distance is 500 km at a speed of 130 km/h using 10 litres of fuel. The UAV is controlled by radio telemetry within 5 km of a ground station and by autonomous navigation referenced to GPS latitude and longitude, pitot-tube speed and barometric altitude. The magnetometer system consists of a 3-component magneto-resistive (MR) sensor (Honeywell HMR2300), GPS and a data logger. The three components of the magnetic field, latitude, longitude, altitude, the number of satellites and time are recorded every second for 6 hours. The sensitivity of the magnetometer is 7 nT, and we use the total magnetic field intensity for magnetic analysis because the heading of the plane is unknown. In March 2006 we succeeded in a long-distance flight of 500 km with the magnetometer on Ant-Plane 4, in collaboration with Geoscience Australia. The survey was performed over a 10 km x 10 km area at Kalgoorlie, Western Australia. Magnetic data were obtained from 41 east-west courses at 250 m intervals. The altitude of the flight was 900 m above sea level and 500 m above the runway. The MR magnetometer sensor was installed at the tip of an FRP pipe of 1 m length, and the pipe was fixed to the nose of the plane in order to reduce the effect of the plane's magnetization. Four hours and 14 minutes after takeoff, the 500 km flight was accomplished and the magnetic data were stored in the data logger. The straight flight course was almost consistent with the waypoint course, but the course was drastically disturbed when the plane was turning. The resolution of the magnetic field decreased to 30 nT when the plane flew with a tail wind, and it was worse against a head wind. The obtained anomaly pattern was compared with the magnetic anomaly pattern published by Geoscience Australia; both patterns were essentially consistent, although part of the pattern from the head-wind flights did not match. Ant-Plane 4 also flew up to 5700 m in altitude with an aerosol counter, thermometer and hygrometer in the northern part of Japan, where a drastic change of temperature, humidity and particle number was observed at the atmospheric inversion layer. Consequently, we conclude that the small drone Ant-Plane 4 can be used for geophysical research. We are making efforts to develop the Ant-Plane for simpler assembly and easier operation.
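
    A minimal sketch of the heading-independent quantity used in the analysis above: the total magnetic field intensity computed from the three MR-magnetometer components. The readings are illustrative values in nanotesla:

      import numpy as np

      # Hypothetical 3-component readings (Bx, By, Bz) logged once per second, in nT.
      readings_nt = np.array([[18500.0, 2300.0, 51200.0],
                              [18420.0, 2410.0, 51310.0],
                              [18390.0, 2290.0, 51150.0]])

      total_field = np.linalg.norm(readings_nt, axis=1)   # |B| does not depend on heading
      print(np.round(total_field, 1))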

  2. Assessing the mechanism of response in the retrosplenial cortex of good and poor navigators☆

    PubMed Central

    Auger, Stephen D.; Maguire, Eleanor A.

    2013-01-01

    The retrosplenial cortex (RSC) is consistently engaged by a range of tasks that examine episodic memory, imagining the future, spatial navigation, and scene processing. Despite this, an account of its exact contribution to these cognitive functions remains elusive. Here, using functional MRI (fMRI) and multi-voxel pattern analysis (MVPA) we found that the RSC coded for the specific number of permanent outdoor items that were in view, that is, items which are fixed and never change their location. Moreover, this effect was selective, and was not apparent for other item features such as size and visual salience. This detailed detection of the number of permanent items in view was echoed in the parahippocampal cortex (PHC), although the two brain structures diverged when participants were divided into good and poor navigators. There was no difference in the responsivity of the PHC between the two groups, while significantly better decoding of the number of permanent items in view was possible from patterns of activity in the RSC of good compared to poor navigators. Within good navigators, the RSC also facilitated significantly better prediction of item permanence than the PHC. Overall, these findings suggest that the RSC in particular is concerned with coding the presence of every permanent item that is in view. This mechanism may represent a key building block for spatial and scene representations that are central to episodic memories and imagining the future, and could also be a prerequisite for successful navigation. PMID:24012136

  3. An evaluation of unisensory and multisensory adaptive flight-path navigation displays

    NASA Astrophysics Data System (ADS)

    Moroney, Brian W.

    1999-11-01

    The present study assessed the use of unimodal (auditory or visual) and multimodal (audio-visual) adaptive interfaces to aid military pilots in the performance of a precision-navigation flight task when they were confronted with additional information-processing loads. A standard navigation interface was supplemented by adaptive interfaces consisting of either a head-up display based flight director, a 3D virtual audio interface, or a combination of the two. The adaptive interfaces provided information about how to return to the pathway when off course. Using an advanced flight simulator, pilots attempted two navigation scenarios: (A) maintain proper course under normal flight conditions and (B) return to course after their aircraft's position has been perturbed. Pilots flew in the presence or absence of an additional information-processing task presented in either the visual or auditory modality. The additional information-processing tasks were equated in terms of perceived mental workload as indexed by the NASA-TLX. Twelve experienced military pilots (11 men and 1 woman), naive to the purpose of the experiment, participated in the study. They were recruited from Wright-Patterson Air Force Base and had a mean of 2812 hrs. of flight experience. Four navigational interface configurations, the standard visual navigation interface alone (SV), SV plus adaptive visual, SV plus adaptive auditory, and SV plus adaptive visual-auditory composite were combined factorially with three concurrent tasks (CT), the no CT, the visual CT, and the auditory CT, a completely repeated measures design. The adaptive navigation displays were activated whenever the aircraft was more than 450 ft off course. In the normal flight scenario, the adaptive interfaces did not bolster navigation performance in comparison to the standard interface. It is conceivable that the pilots performed quite adequately using the familiar generic interface under normal flight conditions and hence showed no added benefit of the adaptive interfaces. In the return-to-course scenario, the relative advantages of the three adaptive interfaces were dependent upon the nature of the CT in a complex way. In the absence of a CT, recovery heading performance was superior with the adaptive visual and adaptive composite interfaces compared to the adaptive auditory interface. In the context of a visual CT, recovery when using the adaptive composite interface was superior to that when using the adaptive visual interface. Post-experimental inquiry indicated that when faced with a visual CT, the pilots used the auditory component of the multimodal guidance display to detect gross heading errors and the visual component to make more fine-grained heading adjustments. In the context of the auditory CT, navigation performance using the adaptive visual interface tended to be superior to that when using the adaptive auditory interface. Neither CT performance nor NASA-TLX workload level was influenced differentially by the interface configurations. Thus, the potential benefits associated with the proposed interfaces appear to be unaccompanied by negative side effects involving CT interference and workload. The adaptive interface configurations were altered without any direct input from the pilot. Thus, it was feared that pilots might reject the activation of interfaces independent of their control. However, pilots' debriefing comments about the efficacy of the adaptive interface approach were very positive. (Abstract shortened by UMI.)

  4. Indoor Navigation by People with Visual Impairment Using a Digital Sign System

    PubMed Central

    Legge, Gordon E.; Beckmann, Paul J.; Tjan, Bosco S.; Havey, Gary; Kramer, Kevin; Rolkosky, David; Gage, Rachel; Chen, Muzi; Puchakayala, Sravan; Rangarajan, Aravindhan

    2013-01-01

    There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects—blind, low vision, blindfolded sighted, and normally sighted controls—were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment. PMID:24116156

  5. Assistive obstacle detection and navigation devices for vision-impaired users.

    PubMed

    Ong, S K; Zhang, J; Nee, A Y C

    2013-09-01

    Quality of life for the visually impaired is an urgent worldwide issue that needs to be addressed, and obstacle detection is one of the most important navigation tasks for the visually impaired. A novel range-sensor placement scheme is proposed in this paper for the development of obstacle detection devices. Based on this scheme, two prototypes have been developed targeting different user groups. This paper discusses the design issues, functional modules and the evaluation tests carried out for both prototypes. Implications for Rehabilitation: Visual impairment is becoming more severe due to the worldwide ageing population. Individuals with visual impairment require assistance from assistive devices in daily navigation tasks. Traditional assistive devices that assist navigation may have certain drawbacks, such as the limited sensing range of a white cane. Obstacle detection devices applying range sensor technology can identify road conditions with a longer sensing range to notify users of potential dangers in advance.

  6. Development of voice navigation system for the visually impaired by using IC tags.

    PubMed

    Takatori, Norihiko; Nojima, Kengo; Matsumoto, Masashi; Yanashima, Kenji; Magatani, Kazushige

    2006-01-01

    There are about 300,000 visually impaired persons in Japan. Most of them are elderly and cannot become skillful in using a white cane, even if they make an effort to learn how to use one. Therefore, guiding systems that support the independent activities of the visually impaired are required. In this paper, we describe a white cane system we developed that supports the independent walking of the visually impaired in indoor spaces. The system is composed of colored navigation lines that include IC tags and an intelligent white cane that has a navigation computer. In our system, colored navigation lines put on the floor of the target space from the start point to the destination, and IC tags set at landmark points, are used to indicate the route to the destination. The white cane has a color sensor, an IC tag transceiver and a computer system that includes a voice processor. The white cane senses the navigation line of the target color with the color sensor; when the color sensor finds the target color, the white cane informs the user by vibration that he or she is on the navigation line. So, simply by following this vibration, the user can reach the destination. However, at some landmark points guidance is necessary. At these points, an IC tag is set under the navigation line; the cane communicates with the tag and informs the user about the landmark point by pre-recorded voice. Ten blindfolded normal subjects were tested with our developed system. All of them could walk along the navigation line, and the IC tag information system worked well. Therefore, we have concluded that our system will be very valuable in supporting the activities of the visually impaired.

  7. Visual orientation and navigation in nocturnal arthropods.

    PubMed

    Warrant, Eric; Dacke, Marie

    2010-01-01

    With their highly sensitive visual systems, the arthropods have evolved a remarkable capacity to orient and navigate at night. Whereas some navigate under the open sky, and take full advantage of the celestial cues available there, others navigate in more difficult conditions, such as through the dense understory of a tropical rainforest. Four major classes of orientation are performed by arthropods at night, some of which involve true navigation (i.e. travel to a distant goal that lies beyond the range of direct sensory contact): (1) simple straight-line orientation, typically for escape purposes; (2) nightly short-distance movements relative to a shoreline, typically in the context of feeding; (3) long-distance nocturnal migration at high altitude in the quest to locate favorable feeding or breeding sites, and (4) nocturnal excursions to and from a fixed nest or food site (i.e. homing), a task that in most species involves path integration and/or the learning and recollection of visual landmarks. These four classes of orientation--and their visual basis--are reviewed here, with special emphasis given to the best-understood animal systems that are representative of each. 2010 S. Karger AG, Basel.

  8. Efficiency improvement by navigated safety inspection involving visual clutter based on the random search model.

    PubMed

    Sun, Xinlu; Chong, Heap-Yih; Liao, Pin-Chao

    2018-06-25

    Navigated inspection seeks to improve hazard identification (HI) accuracy. With tight inspection schedules, HI also requires efficiency. However, lacking a quantification of HI efficiency, navigated inspection strategies cannot be comprehensively assessed. This work aims to determine inspection efficiency in navigated safety inspection while controlling for HI accuracy. Based on a cognitive method, the random search model (RSM), an experiment was conducted to observe HI efficiency under navigation for a variety of visual clutter (VC) scenarios, while using eye-tracking devices to record the search process and analyze search performance. The results show that the RSM is an appropriate instrument and that VC serves as a hazard classifier for navigated inspection in improving inspection efficiency. This suggests a new and effective solution for addressing the low accuracy and efficiency of manual inspection through navigated inspection involving VC and the RSM. It also provides insights into inspectors' safety inspection ability.

  9. 33 CFR 175.135 - Existing equipment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Existing equipment. 175.135 Section 175.135 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.135 Existing equipment. Launchers...

  10. 33 CFR 175.135 - Existing equipment.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Existing equipment. 175.135 Section 175.135 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.135 Existing equipment. Launchers...

  11. 33 CFR 175.135 - Existing equipment.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Existing equipment. 175.135 Section 175.135 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.135 Existing equipment. Launchers...

  12. 33 CFR 175.135 - Existing equipment.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Existing equipment. 175.135 Section 175.135 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.135 Existing equipment. Launchers...

  13. 33 CFR 175.135 - Existing equipment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Existing equipment. 175.135 Section 175.135 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.135 Existing equipment. Launchers...

  14. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  15. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  16. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  17. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  18. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  19. Classical Photogrammetry and Uav - Selected Ascpects

    NASA Astrophysics Data System (ADS)

    Mikrut, S.

    2016-06-01

    The UAV technology seems to be highly future-oriented due to its low costs as compared to traditional aerial images taken from classical photogrammetric aircraft. The AGH University of Science and Technology in Cracow - Department of Geoinformation, Photogrammetry and Environmental Remote Sensing focuses mainly on the geometry and radiometry of recorded images. Various scientific research centres all over the world have been conducting the relevant research for years. The paper presents selected aspects of processing digital images made with the UAV technology. Using a practical example, it compares a digital image taken from an airborne (classical) height with one made from a UAV level. In his research, the author of the paper is trying to answer the question: to what extent does the UAV technology diverge today from classical photogrammetry, and what are the advantages and disadvantages of both methods? The flight plan was made over the Tokarnia Village Museum (more than 0.5 km2) for two separate flights: the first was made by a UAV - the FT-03A system built by FlyTech Solution Ltd. - and the second with a classical photogrammetric Cessna aircraft furnished with an airborne photogrammetric camera (UltraCam Eagle). Both sets of photographs were taken with a pixel size of about 3 cm in order to have reliable data allowing the two systems to be compared. Aerotriangulation was carried out independently for the two flights, the DTM was generated automatically, and the last step was the generation of an orthophoto. The geometry of the images was checked during aerotriangulation. To compare the accuracy of the two flights, control and check points were used and RMSEs were calculated. The radiometry was checked by a visual method and using the author's own algorithm for feature extraction (to define edges with subpixel accuracy). After initial pre-processing of the data, the images were put together and shown side by side. Buildings and stripes on the road were selected from the whole dataset for the comparison of edges and details. The details in the UAV images were not worse than those in the classical photogrammetric ones, and one might suppose that geometrically they were also correct; the results of aerotriangulation confirm this, with final RMS errors at the level of 1 pixel (about 3 cm). In general it can be said that photographs from UAVs are not worse than classical ones. In the author's opinion, the geometric and radiometric qualities are at a similar level for this kind of area (a small village). This is a very significant result as regards mapping: it means that UAV data can be used in mapping production.
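
    A minimal sketch of the check-point comparison used to contrast the two flights: RMSE of aerotriangulation results against surveyed check points. The coordinates are illustrative placeholders in metres, not values from the project:

      import numpy as np

      surveyed  = np.array([[500.00, 200.00], [520.00, 240.00], [480.00, 260.00]])
      uav_xy    = np.array([[500.02, 200.03], [520.01, 239.97], [479.98, 260.02]])
      crewed_xy = np.array([[500.03, 199.98], [519.97, 240.02], [480.02, 259.99]])

      def rmse(estimated, reference):
          return np.sqrt(np.mean(np.sum((estimated - reference) ** 2, axis=1)))

      print(f"UAV flight RMSE: {rmse(uav_xy, surveyed) * 100:.1f} cm, "
            f"crewed flight RMSE: {rmse(crewed_xy, surveyed) * 100:.1f} cm")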

  20. Automatic detection of blurred images in UAV image sets

    NASA Astrophysics Data System (ADS)

    Sieberth, Till; Wackrow, Rene; Chandler, Jim H.

    2016-12-01

    Unmanned aerial vehicles (UAVs) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution, due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost-effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process, which is based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on human detection of blur: humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing; creating a comparable image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard-deviation), on its own does not provide an absolute number by which to judge whether an image is blurred. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared to other SIEDS values from the same dataset. The speed and reliability of the method were tested using a range of different UAV datasets, two of which are presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast and the returned values are optically correct, making the algorithm applicable to UAV datasets. Additionally, a close-range dataset was processed to determine whether the method is also useful for close-range applications. The results show that the method is also reliable for close-range images, which significantly extends the field of application of the algorithm.
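
    A loose sketch of the comparison principle described above (the exact SIEDS formulation in the paper may differ): re-blur the image, compare edge responses of the saturation channel before and after, and take the standard deviation of the difference, so that sharper images score higher:

      import numpy as np
      from scipy import ndimage

      def sieds_like(rgb):
          """Higher values suggest a sharper image (more edge content lost on re-blurring)."""
          maxc = rgb.max(axis=-1)
          minc = rgb.min(axis=-1)
          saturation = np.where(maxc > 0, (maxc - minc) / np.maximum(maxc, 1e-6), 0.0)

          reference = ndimage.gaussian_filter(saturation, sigma=2.0)  # generated comparison image
          edges_orig = ndimage.sobel(saturation)
          edges_ref = ndimage.sobel(reference)
          return float(np.std(edges_orig - edges_ref))

      # Compare a synthetic sharp image against a blurred copy of itself.
      rng = np.random.default_rng(0)
      sharp = rng.uniform(0, 1, (128, 128, 3))
      blurry = ndimage.gaussian_filter(sharp, sigma=(3, 3, 0))
      print("sharp:", round(sieds_like(sharp), 4), "blurry:", round(sieds_like(blurry), 4))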

  1. UGV navigation in wireless sensor and actuator network environments

    NASA Astrophysics Data System (ADS)

    Zhang, Guyu; Li, Jianfeng; Duncan, Christian A.; Kanno, Jinko; Selmic, Rastko R.

    2012-06-01

    We consider a navigation problem in a distributed, self-organized and coordinate-free Wireless Sensor and Actuator Network (WSAN). We first present navigation algorithms that are verified using simulation results. Considering more than one destination and multiple mobile Unmanned Ground Vehicles (UGVs), we introduce a distributed solution to the Multi-UGV, Multi-Destination navigation problem. The objective of the solution to this problem is to efficiently allocate UGVs to different destinations and carry out navigation in the network environment that minimizes total travel distance. The main contribution of this paper is to develop a solution that does not attempt to localize either the UGVs or the sensor and actuator nodes. Other than some connectivity assumptions about the communication graph, we consider that no prior information about the WSAN is available. The solution presented here is distributed, and the UGV navigation is solely based on feedback from neighboring sensor and actuator nodes. One special case discussed in the paper, the Single-UGV, Multi-Destination navigation problem, is essentially equivalent to the well-known and difficult Traveling Salesman Problem (TSP). Simulation results are presented that illustrate the navigation distance traveled through the network. We also introduce an experimental testbed for the realization of coordinate-free and localization-free UGV navigation. We use the Cricket platform as the sensor and actuator network and a Pioneer 3-DX robot as the UGV. The experiments illustrate the UGV navigation in a coordinate-free WSAN environment where the UGV successfully arrives at the assigned destinations.
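
    The paper's allocation scheme is distributed and localization-free; purely as an illustration of the underlying objective (assigning UGVs to destinations so that travel distance, measured in communication hops, stays small), the following centralized greedy sketch uses a hypothetical NetworkX communication graph and is not the authors' algorithm.

        import networkx as nx

        def greedy_assign(graph, ugv_nodes, destination_nodes):
            """Simplified, centralized stand-in for the allocation step: give each
            UGV the closest (in hop count) still-unassigned destination."""
            remaining = set(destination_nodes)
            assignment = {}
            for ugv in ugv_nodes:
                hops = nx.single_source_shortest_path_length(graph, ugv)
                best = min(remaining, key=lambda d: hops.get(d, float("inf")))
                assignment[ugv] = best
                remaining.discard(best)
            return assignment

        # Hypothetical coordinate-free communication graph (nodes have no positions)
        G = nx.erdos_renyi_graph(30, 0.15, seed=1)
        print(greedy_assign(G, ugv_nodes=[0, 1], destination_nodes=[20, 25]))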

  2. Phylo-VISTA: Interactive visualization of multiple DNA sequence alignments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Nameeta; Couronne, Olivier; Pennacchio, Len A.

    The power of multi-sequence comparison for biological discovery is well established. The need for new capabilities to visualize and compare cross-species alignment data is intensified by the growing number of genomic sequence datasets being generated for an ever-increasing number of organisms. To be efficient these visualization algorithms must support the ability to accommodate consistently a wide range of evolutionary distances in a comparison framework based upon phylogenetic relationships. Results: We have developed Phylo-VISTA, an interactive tool for analyzing multiple alignments by visualizing a similarity measure for multiple DNA sequences. The complexity of visual presentation is effectively organized using a framework based upon interspecies phylogenetic relationships. The phylogenetic organization supports rapid, user-guided interspecies comparison. To aid in navigation through large sequence datasets, Phylo-VISTA leverages concepts from VISTA that provide a user with the ability to select and view data at varying resolutions. The combination of multiresolution data visualization and analysis, combined with the phylogenetic framework for interspecies comparison, produces a highly flexible and powerful tool for visual data analysis of multiple sequence alignments. Availability: Phylo-VISTA is available at http://www-gsd.lbl.gov/phylovista. It requires an Internet browser with Java Plugin 1.4.2 and it is integrated into the global alignment program LAGAN at http://lagan.stanford.edu

  3. A survey of simultaneous localization and mapping on unstructured lunar complex environment

    NASA Astrophysics Data System (ADS)

    Wang, Yiqiao; Zhang, Wei; An, Pei

    2017-10-01

    Simultaneous localization and mapping (SLAM) technology is the key to realizing a lunar rover's intelligent perception and autonomous navigation. It embodies the autonomous ability of a mobile robot and has attracted considerable attention from researchers over the past thirty years. Visual sensors are valuable for SLAM research because they provide a wealth of information. Visual SLAM uses only images as external information to estimate the location of the robot and construct the environment map. SLAM technology still has problems when applied to large-scale, unstructured and complex environments. Based on the latest work in the field of visual SLAM, this paper investigates and summarizes SLAM technology for use in the unstructured, complex environment of the lunar surface. In particular, we summarize and compare feature detection and matching with SIFT, SURF and ORB, and discuss their advantages and disadvantages. We analyze the three main methods: SLAM based on the Extended Kalman Filter, SLAM based on the Particle Filter and SLAM based on Graph Optimization (EKF-SLAM, PF-SLAM and Graph-based SLAM). Finally, the article summarizes and discusses the key scientific and technical difficulties that visual SLAM faces in the lunar context. We also explore frontier issues such as multi-sensor fusion SLAM and multi-robot cooperative SLAM, predict the development trend of lunar rover SLAM technology, and put forward some ideas for further research.
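
    As an illustration of the feature detection and matching step discussed above, this minimal OpenCV sketch detects ORB keypoints in two hypothetical image frames and matches their binary descriptors with a Hamming-distance brute-force matcher; filenames and parameter values are assumptions, not taken from the survey (SIFT or SURF could be substituted where the OpenCV build and licensing permit).

        import cv2

        img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical files
        img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # Hamming distance suits ORB's binary descriptors
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        if matches:
            print(len(matches), "cross-checked matches; best distance", matches[0].distance)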

  4. Aging and Sensory Substitution in a Virtual Navigation Task.

    PubMed

    Levy-Tzedek, S; Maidenbaum, S; Amedi, A; Lackner, J

    2016-01-01

    Virtual environments are becoming ubiquitous, and are used in a variety of contexts, from entertainment to training and rehabilitation. Recently, technology for making them more accessible to blind or visually impaired users has been developed, by using sound to represent visual information. The ability of older individuals to interpret these cues has not yet been studied. In this experiment, we studied the effects of age and sensory modality (visual or auditory) on navigation through a virtual maze. We added a layer of complexity by conducting the experiment in a rotating room, in order to test the effect of the spatial bias induced by the rotation on performance. Results from 29 participants showed that with the auditory cues, participants took longer to complete the mazes, took a longer path length through the maze, paused more, and had more collisions with the walls, compared to navigation with the visual cues. The older group took longer to complete the mazes, paused more, and had more collisions with the walls, compared to the younger group. There was no effect of room rotation on performance, nor were there any significant interactions among age, feedback modality and room rotation. We conclude that there is a decline in performance with age, and that while navigation with auditory cues is possible even at an old age, it presents more challenges than visual navigation.

  5. From Objects to Landmarks: The Function of Visual Location Information in Spatial Navigation

    PubMed Central

    Chan, Edgar; Baumann, Oliver; Bellgrove, Mark A.; Mattingley, Jason B.

    2012-01-01

    Landmarks play an important role in guiding navigational behavior. A host of studies in the last 15 years has demonstrated that environmental objects can act as landmarks for navigation in different ways. In this review, we propose a parsimonious four-part taxonomy for conceptualizing object location information during navigation. We begin by outlining object properties that appear to be important for a landmark to attain salience. We then systematically examine the different functions of objects as navigational landmarks based on previous behavioral and neuroanatomical findings in rodents and humans. Evidence is presented showing that single environmental objects can function as navigational beacons, or act as associative or orientation cues. In addition, we argue that extended surfaces or boundaries can act as landmarks by providing a frame of reference for encoding spatial information. The present review provides a concise taxonomy of the use of visual objects as landmarks in navigation and should serve as a useful reference for future research into landmark-based spatial navigation. PMID:22969737

  6. VISUAL3D - An EIT network on visualization of geomodels

    NASA Astrophysics Data System (ADS)

    Bauer, Tobias

    2017-04-01

    When it comes to the interpretation of data and the understanding of deep geological structures and bodies at different scales, modelling tools and modelling experience are vital for deep exploration. Geomodelling provides a platform for the integration of different types of data, including new kinds of information (e.g., from new, improved measuring methods). EIT Raw Materials, initiated by the EIT (European Institute of Innovation and Technology) and funded by the European Commission, is the largest and strongest consortium in the raw materials sector worldwide. The VISUAL3D network of infrastructure is an initiative by EIT Raw Materials and aims at bringing together partners with 3D-4D visualisation infrastructure and 3D-4D modelling experience. The recently formed network collaboration interlinks hardware, software and expert knowledge in modelling, visualization and output. A special focus will be on linking research, education and industry, integrating multi-disciplinary data, and visualizing the data in three and four dimensions. By aiding network collaborations we aim at improving the combination of geomodels with differing file formats and data characteristics. This will create increased competency in model visualization and the ability to interchange and communicate models more easily. By combining knowledge and experience in geomodelling with expertise in Virtual Reality visualization, partners of EIT Raw Materials, and also external parties, will have the possibility to visualize, analyze and validate their geomodels in immersive VR environments. The current network combines partners from universities, research institutes, geological surveys and industry with a strong background in geological 3D modelling and 3D visualization and comprises: Luleå University of Technology, Geological Survey of Finland, Geological Survey of Denmark and Greenland, TUBA Freiberg, Uppsala University, Geological Survey of France, RWTH Aachen, DMT, KGHM Cuprum, Boliden, Montan Universität Leoben, Slovenian National Building and Civil Engineering Institute, Tallinn University of Technology and Turku University. The infrastructure within the network comprises different types of capturing and visualization hardware, ranging from high-resolution cubes, VR walls, VR goggle solutions, high-resolution photogrammetry, UAVs, lidar scanners, and many more.

  7. Very high resolution crop surface models (CSMs) from UAV-based stereo images for rice growth monitoring In Northeast China

    NASA Astrophysics Data System (ADS)

    Bendig, J.; Willkomm, M.; Tilly, N.; Gnyp, M. L.; Bennertz, S.; Qiang, C.; Miao, Y.; Lenz-Wiedemann, V. I. S.; Bareth, G.

    2013-08-01

    Unmanned aerial vehicles (UAVs) have become popular platforms for the collection of remotely sensed geodata in recent years (Hardin & Jensen 2011). Various applications have evolved in numerous fields of research such as archaeology (Hendrickx et al., 2011), forestry and geomorphology (Martinsanz, 2012). This contribution deals with the generation of multi-temporal crop surface models (CSMs) with very high resolution by means of low-cost equipment. The concept of generating multi-temporal CSMs using Terrestrial Laserscanning (TLS) has already been introduced by Hoffmeister et al. (2010). For this study, data acquisition was performed with a low-cost and low-weight Mini-UAV (< 5 kg). UAVs in general, and especially smaller ones like the system presented here, close a gap in small-scale remote sensing (Berni et al., 2009; Watts et al., 2012). In precision agriculture, frequent remote sensing at such scales during the vegetation period provides important spatial information on the crop status. Crop growth variability can be detected by comparison of the CSMs at different phenological stages. Here, the focus is on the detection of this variability and its dependency on cultivar and plant treatment. The method has been tested for data acquired on a barley experiment field in Germany. In this contribution, it is applied to a different crop in a different environment. The study area is an experiment field for rice in Northeast China (Sanjiang Plain). Three replications of the cultivars Kongyu131 and Longjing21 were planted in plots that were treated with different amounts of N-fertilizer. In July 2012 three UAV campaigns were carried out. The establishment of ground control points (GCPs) allowed for ground truth. Additionally, further destructive and non-destructive field data were collected. The UAV system is an MK-Okto by HiSystems (http://www.mikrokopter.de) which was equipped with the high-resolution Panasonic Lumix GF3 12-megapixel consumer camera. The self-built and self-maintained system has a payload of up to 1 kg and an average flight time of 15 minutes. The maximum speed is around 30 km/h and the system can be operated up to a wind speed of less than 19 km/h (Beaufort scale number 3). Using a suitable flight plan, stereo images can be captured. For this study, a flying height of 50 m and a 44% side and 90% forward overlap were chosen. The images are processed into CSMs using the Structure from Motion (SfM)-based software Agisoft Photoscan 0.9.0. The resulting models have a resolution of 0.02 m and an average of about 12 million points. Further data processing in Esri ArcGIS allows for quantitative comparison of the plant heights. The multi-temporal datasets are analysed on a plot-size basis. The results can be compared to and combined with the additional field data. Detecting plant height with non-invasive measurement techniques enables analysis of its correlation to biomass and other crop parameters (Hansen & Schjoerring, 2003; Thenkabail et al., 2000) measured in the field. The method presented here can therefore be a valuable addition for the recognition of such correlations.
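
    A minimal sketch of the plant-height idea behind multi-temporal CSMs is given below: subtract a bare-ground elevation model from the crop surface model and average over a plot mask. Array sizes and values are hypothetical, and the actual Agisoft Photoscan plus ArcGIS workflow of the study is not reproduced.

        import numpy as np

        def plot_plant_height(csm, ground_dtm, plot_mask):
            """Mean plant height for one plot: crop surface model minus bare-ground
            elevation, averaged over the plot's pixels (all arrays share one grid)."""
            height = csm - ground_dtm
            return float(np.nanmean(np.where(plot_mask, height, np.nan)))

        # Hypothetical 0.02 m resolution rasters for a single plot
        csm = np.full((50, 50), 101.3)   # surface elevation at one campaign date
        dtm = np.full((50, 50), 100.5)   # bare-ground elevation before emergence
        mask = np.ones((50, 50), dtype=bool)
        print(plot_plant_height(csm, dtm, mask))   # about 0.8 m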

  8. Visual landmarks facilitate rodent spatial navigation in virtual reality environments

    PubMed Central

    Youngstrom, Isaac A.; Strowbridge, Ben W.

    2012-01-01

    Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain areas. Virtual reality offers a unique approach to ask whether visual landmark cues alone are sufficient to improve performance in a spatial task. We found that mice could learn to navigate between two water reward locations along a virtual bidirectional linear track using a spherical treadmill. Mice exposed to a virtual environment with vivid visual cues rendered on a single monitor increased their performance over a 3-d training regimen. Training significantly increased the percentage of time avatars controlled by the mice spent near reward locations in probe trials without water rewards. Neither improvement during training nor spatial learning of reward locations occurred with mice operating a virtual environment without vivid landmarks or with mice deprived of all visual feedback. Mice operating the vivid environment developed stereotyped avatar turning behaviors when alternating between reward zones that were positively correlated with their performance on the probe trial. These results suggest that mice are able to learn to navigate to specific locations using only visual cues presented within a virtual environment rendered on a single computer monitor. PMID:22345484

  9. A research on the positioning technology of vehicle navigation system from single source to "ASPN"

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Li, Haizhou; Chen, Yu; Chen, Hongyue; Sun, Qian

    2017-10-01

    Due to the suddenness and complexity of modern warfare, land-based weapon systems need to have precision strike capability on roads and railways. The vehicle navigation system is one of the most important pieces of equipment for land-based weapon systems with precision strike capability. Single-source navigation systems have inherent shortcomings in providing continuous and stable navigation information. To overcome these shortcomings, multi-source positioning technology has been developed. The All Source Positioning and Navigation (ASPN) program was proposed in 2010; it seeks to enable low-cost, robust, and seamless navigation solutions for military use on any operational platform and in any environment, with or without GPS. The development trend of vehicle positioning technology is reviewed in this paper. The trend indicates that positioning technology has developed from single source and multi-source towards ASPN. The data fusion techniques based on multi-source positioning and ASPN are analyzed in detail.
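
    As a toy illustration of multi-source fusion (not the ASPN architecture itself, which is a plug-and-play framework), the sketch below combines independent position estimates by inverse-variance weighting; source values and variances are invented for the example.

        import numpy as np

        def fuse_measurements(values, variances):
            """Inverse-variance weighted fusion of independent position estimates
            (e.g. GNSS, odometry, map matching) into one estimate and its variance."""
            values = np.asarray(values, dtype=float)
            variances = np.asarray(variances, dtype=float)
            weights = 1.0 / variances
            fused = np.sum(weights * values) / np.sum(weights)
            fused_var = 1.0 / np.sum(weights)
            return fused, fused_var

        # Hypothetical easting estimates (metres) from three sources
        print(fuse_measurements([500123.0, 500121.5, 500124.2], [4.0, 9.0, 1.0]))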

  10. 33 CFR 175.113 - Launchers.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Launchers. 175.113 Section 175... SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.113 Launchers. (a) When a visual distress signal carried to meet the requirements of § 175.110 requires a launcher to activate, then a launcher...

  11. 33 CFR 175.113 - Launchers.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Launchers. 175.113 Section 175... SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.113 Launchers. (a) When a visual distress signal carried to meet the requirements of § 175.110 requires a launcher to activate, then a launcher...

  12. 33 CFR 175.113 - Launchers.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Launchers. 175.113 Section 175... SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.113 Launchers. (a) When a visual distress signal carried to meet the requirements of § 175.110 requires a launcher to activate, then a launcher...

  13. 33 CFR 175.113 - Launchers.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Launchers. 175.113 Section 175... SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.113 Launchers. (a) When a visual distress signal carried to meet the requirements of § 175.110 requires a launcher to activate, then a launcher...

  14. 33 CFR 175.113 - Launchers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Launchers. 175.113 Section 175... SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.113 Launchers. (a) When a visual distress signal carried to meet the requirements of § 175.110 requires a launcher to activate, then a launcher...

  15. Control system of hexacopter using color histogram footprint and convolutional neural network

    NASA Astrophysics Data System (ADS)

    Ruliputra, R. N.; Darma, S.

    2017-07-01

    The development of unmanned aerial vehicles (UAV) has been growing rapidly in recent years. Logical decision-making implemented in the program's algorithms is needed to make a smart system. Using visual input from a camera, a UAV is able to fly autonomously by detecting a target. However, a weakness arises outdoors, where the environment may change the target's color intensity. The color histogram footprint overcomes this problem because it divides color intensity into separate bins, which makes the detection tolerant to slight changes of color intensity. Template matching compares the detection result with a template of the reference image to determine the target position, which is used to position the vehicle in the middle of the target with visual feedback control based on a Proportional-Integral-Derivative (PID) controller. The color histogram footprint method localizes the target by calculating the back projection of its histogram. It has an average success rate of 77 % from a distance of 1 meter. It can position itself in the middle of the target by using visual feedback control with an average positioning time of 73 seconds. After the hexacopter is in the middle of the target, a Convolutional Neural Network (CNN) classifies a number contained in the target image to determine a task depending on the classified number: either landing, yawing, or return to launch. The recognition result shows an optimum success rate of 99.2 %.
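
    A hedged sketch of the colour-histogram-footprint localization step is shown below using OpenCV's histogram back-projection; the image filenames, bin counts and the way the pixel offset is derived for the PID loop are assumptions, not the authors' exact implementation.

        import cv2
        import numpy as np

        frame = cv2.imread("frame.png")              # hypothetical camera frame
        target = cv2.imread("target_template.png")   # hypothetical target template

        hsv_t = cv2.cvtColor(target, cv2.COLOR_BGR2HSV)
        hsv_f = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv_t], [0, 1], None, [30, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

        # Back-project the hue/saturation histogram and take the centroid of the response
        backproj = cv2.calcBackProject([hsv_f], [0, 1], hist, [0, 180, 0, 256], 1)
        m = cv2.moments(backproj)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            err_x = cx - frame.shape[1] / 2   # horizontal error fed to the PID loop
            err_y = cy - frame.shape[0] / 2   # vertical error fed to the PID loop
            print(err_x, err_y)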

  16. Analysis of Unmanned Aerial Vehicle (UAV) hyperspectral remote sensing monitoring key technology in coastal wetland

    NASA Astrophysics Data System (ADS)

    Ma, Yi; Zhang, Jie; Zhang, Jingyu

    2016-01-01

    The coastal wetland, a transitional zone between terrestrial and marine ecosystems, is of great value for ecosystem services. Over the past three decades, the area of coastal wetland has been decreasing and its ecological function has gradually degraded with the rapid development of the economy, which in turn restricts the sustainable development of economy and society in the coastal areas of China. Monitoring coastal wetlands in order to grasp their distribution and dynamic change is therefore a major national demand. The UAV, or unmanned aerial vehicle, is a new platform for remote sensing. Compared with traditional satellite and manned aerial remote sensing, it has the advantages of flexible deployment, operation below cloud cover, strong initiative and low cost. Image-spectrum merging is one characteristic of hyperspectral remote sensing: at the same time as imaging, the spectral curve of each pixel is obtained, which is suitable for quantitative remote sensing, fine classification and target detection. Addressing this frontier of remote sensing monitoring technology and the demand for coastal wetland monitoring, this paper uses a UAV and a new hyperspectral imaging sensor to analyse the key technologies of monitoring coastal wetlands by UAV, based on the current situation abroad and at home and an analysis of development trends. According to the characteristics of airborne hyperspectral data from UAVs, described as "three high and one many", the key technologies that should be developed are: 1) atmospheric correction of UAV hyperspectral data in coastal wetlands under complex underlying surfaces and variable geometry, 2) the best observation scale and scale-transformation method of the UAV platform while monitoring coastal wetland features, and 3) high-precision classification and detection methods for typical features from multi-scale hyperspectral images based on time sequence. The research results of this paper will help to break the traditional concept of remote sensing monitoring of coastal wetlands by satellite and manned aerial vehicle, lead the trend of this monitoring technology, and put forward a new technical proposal for grasping the distribution and changing trend of the coastal wetland and carrying out its protection and management.

  17. 3D Virtual CH Interactive Information Systems for a Smart Web Browsing Experience for Desktop PCs and Mobile Devices

    NASA Astrophysics Data System (ADS)

    Scianna, A.; La Guardia, M.

    2018-05-01

    Recently, the diffusion of knowledge on Cultural Heritage (CH) has become an element of primary importance for its valorization. At the same time, the diffusion of surveys based on Unmanned Aerial Vehicle (UAV) technologies and new methods of photogrammetric reconstruction has opened new possibilities for 3D CH representation. Furthermore, the recent development of faster and more stable internet connections leads people to increase their use of mobile devices. In the light of all this, the development of Virtual Reality (VR) environments applied to CH is strategic for the diffusion of knowledge in a smart solution. In particular, the present work shows how, starting from a basic survey and the subsequent photogrammetric reconstruction of a cultural good, it is possible to build a 3D CH interactive information system useful for desktop and mobile devices. For this experimentation the Arab-Norman church of the Trinity of Delia (in Castelvetrano, Sicily, Italy) has been adopted as a case study. The survey operations were carried out considering different rapid methods of acquisition (UAV camera, SLR camera and smartphone camera). The web platform used to publish the 3D information was built using the HTML5 markup language and WebGL JavaScript libraries (Three.js). This work presents the construction of a 3D navigation system for web browsing of a virtual CH environment, with the integration of first-person controls and 3D popup links. This contribution adds a further step to enrich the possibilities of open-source technologies applied to the world of CH valorization on the web.

  18. Possibilities of UAS for Maritime Monitoring

    NASA Astrophysics Data System (ADS)

    Klimkowska, A.; Lee, I.; Choi, K.

    2016-06-01

    In the last few years, Unmanned Aircraft Systems (UAS) have become more important and their use for different applications is appreciated. In the beginning, UAS were used for military purposes. These successful applications initiated interest among researchers in civilian uses of UAS, as they are an alternative to both manned and satellite systems for acquiring high-resolution remote sensing data at lower cost and with long flight duration. As UAS are built from many components, such as the unmanned aerial vehicle (UAV), sensing payloads, communication systems, ground control stations, recovery and launch equipment, and supporting equipment, knowledge of their functionality and characteristics is crucial for missions. Therefore, finding an appropriate configuration of all elements to fulfill the requirements of the mission is a difficult yet important task. UAS may be used in various maritime applications such as ship detection, red tide detection and monitoring, border patrol, tracking of pollution at sea and hurricane monitoring, to mention just a few. One of the greatest advantages of UAVs is their ability to fly over dangerous and hazardous areas, where sending a manned aircraft could be risky for the crew. In this article a brief description of unmanned aerial system components is given. First, the characteristics of unmanned aerial vehicles are presented; the article continues by introducing inertial navigation systems, communication systems, sensing payloads, ground control stations, and ground and recovery equipment. The next part introduces some examples of UAS for maritime applications. This is followed by suggestions of key indicators which should be taken into consideration when choosing a UAS. The last part discusses configuration schemes of UAVs and sensor payloads suggested for some maritime applications.

  19. Selectable Hyperspectral Airborne Remote-sensing Kit (SHARK) on the Vision II turbine rotorcraft UAV over the Florida Keys

    NASA Astrophysics Data System (ADS)

    Holasek, R. E.; Nakanishi, K.; Swartz, B.; Zacaroli, R.; Hill, B.; Naungayan, J.; Herwitz, S.; Kavros, P.; English, D. C.

    2013-12-01

    As part of the NASA ROSES program, the NovaSol Selectable Hyperspectral Airborne Remote-sensing Kit (SHARK) was flown as the payload on the unmanned Vision II helicopter. The goal of the May 2013 data collection was to obtain high-resolution visible and near-infrared (visNIR) hyperspectral data of seagrasses and coral reefs in the Florida Keys. The specifications of the SHARK hyperspectral system and the Vision II turbine rotorcraft will be described, along with the process of integrating the payload with the vehicle platform. The minimal size, weight, and power (SWaP) specifications of the SHARK system are an ideal match to the Vision II helicopter and its flight parameters. One advantage of the helicopter over fixed-wing platforms is its inherent ability to take off and land in a limited area without a runway, enabling the UAV to be located in close proximity to the experiment areas and the science team. Decisions regarding integration times, waypoint selection, mission duration, and mission frequency can be based upon the local environmental conditions and can be modified just prior to take-off. The operational procedures and coordination between the UAV pilot, payload operator, and scientist will be described. The SHARK system includes an inertial navigation system and a digital elevation model (DEM), which allow image coordinates to be calculated onboard the aircraft in real time. Examples of the geo-registered images from the data collection will be shown. (Figure captions: SHARK mounted below the VTUAV; SHARK deployed on the VTUAV over water.)

  20. Goal-oriented robot navigation learning using a multi-scale space representation.

    PubMed

    Llofriu, M; Tejera, G; Contreras, M; Pelc, T; Fellous, J M; Weitzenfeld, A

    2015-12-01

    There has been extensive research in recent years on the multi-scale nature of hippocampal place cells and entorhinal grid cells encoding which led to many speculations on their role in spatial cognition. In this paper we focus on the multi-scale nature of place cells and how they contribute to faster learning during goal-oriented navigation when compared to a spatial cognition system composed of single scale place cells. The task consists of a circular arena with a fixed goal location, in which a robot is trained to find the shortest path to the goal after a number of learning trials. Synaptic connections are modified using a reinforcement learning paradigm adapted to the place cells multi-scale architecture. The model is evaluated in both simulation and physical robots. We find that larger scale and combined multi-scale representations favor goal-oriented navigation task learning. Copyright © 2015 Elsevier Ltd. All rights reserved.
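
    To make the multi-scale place-cell representation concrete, the following sketch (with invented field centres and widths) builds a state vector from Gaussian place-cell activations at three spatial scales, the kind of feature vector whose synaptic weights a reinforcement-learning rule could then update; it is an illustration, not the authors' model.

        import numpy as np

        def place_cell_activations(pos, centres, scales):
            """Activation of Gaussian place cells at several spatial scales.
            pos: (2,) current position; centres: (N, 2) field centres shared across
            scales for simplicity; scales: list of field widths (sigma)."""
            d2 = np.sum((centres - np.asarray(pos)) ** 2, axis=1)
            return np.concatenate([np.exp(-d2 / (2.0 * s ** 2)) for s in scales])

        centres = np.random.default_rng(0).uniform(-1, 1, size=(64, 2))
        state = place_cell_activations([0.2, -0.3], centres, scales=[0.1, 0.3, 0.6])
        print(state.shape)   # (192,) = 64 cells x 3 scales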

  1. Are visual cues helpful for virtual spatial navigation and spatial memory in patients with mild cognitive impairment or Alzheimer's disease?

    PubMed

    Cogné, Mélanie; Auriacombe, Sophie; Vasa, Louise; Tison, François; Klinger, Evelyne; Sauzéon, Hélène; Joseph, Pierre-Alain; N Kaoua, Bernard

    2018-05-01

    To evaluate whether visual cues are helpful for virtual spatial navigation and memory in Alzheimer's disease (AD) and patients with mild cognitive impairment (MCI). 20 patients with AD, 18 patients with MCI and 20 age-matched healthy controls (HC) were included. Participants had to actively reproduce a path that included 5 intersections with one landmark at each intersection that they had seen previously during a learning phase. Three cueing conditions for navigation were offered: salient landmarks, directional arrows and a map. A path without additional visual stimuli served as control condition. Navigation time and number of trajectory mistakes were recorded. With the presence of directional arrows, no significant difference was found between groups concerning the number of trajectory mistakes and navigation time. The number of trajectory mistakes did not differ significantly between patients with AD and patients with MCI on the path with arrows, the path with salient landmarks and the path with a map. There were significant correlations between the number of trajectory mistakes under the arrow condition and executive tests, and between the number of trajectory mistakes under the salient landmark condition and memory tests. Visual cueing such as directional arrows and salient landmarks appears helpful for spatial navigation and memory tasks in patients with AD and patients with MCI. This study opens new research avenues for neuro-rehabilitation, such as the use of augmented reality in real-life settings to support the navigational capabilities of patients with MCI and patients with AD. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Proximal versus distal cue utilization in spatial navigation: the role of visual acuity?

    PubMed

    Carman, Heidi M; Mactutus, Charles F

    2002-09-01

    Proximal versus distal cue use in the Morris water maze is a widely accepted strategy for the dissociation of various problems affecting spatial navigation in rats such as aging, head trauma, lesions, and pharmacological or hormonal agents. Of the limited number of ontogenetic rat studies conducted, the majority have approached the problem of preweanling spatial navigation through a similar proximal-distal dissociation. An implicit assumption among all of these studies has been that the animal's visual system is sufficient to permit robust spatial navigation. We challenged this assumption and have addressed the role of visual acuity in spatial navigation in the preweanling Fischer 344-N rat by training animals to locate a visible (proximal) or hidden (distal) platform using double or null extramaze cues within the testing environment. All pups demonstrated improved performance across training, but animals presented with a visible platform, regardless of extramaze cues, simultaneously reached asymptotic performance levels; animals presented with a hidden platform, dependent upon location of extramaze cues, differentially reached asymptotic performance levels. Probe trial performance, defined by quadrant time and platform crossings, revealed that distal-double-cue pups demonstrated spatial navigational ability superior to that of the remaining groups. These results suggest that a pup's ability to spatially navigate a hidden platform is dependent on not only its response repertoire and task parameters, but also its visual acuity, as determined by the extramaze cue location within the testing environment. The standard hidden versus visible platform dissociation may not be a satisfactory strategy for the control of potential sensory deficits.

  3. Navigation-guided optic canal decompression for traumatic optic neuropathy: Two case reports.

    PubMed

    Bhattacharjee, Kasturi; Serasiya, Samir; Kapoor, Deepika; Bhattacharjee, Harsha

    2018-06-01

    Two cases of traumatic optic neuropathy presented with profound loss of vision. Both cases had received a course of intravenous corticosteroids elsewhere but did not improve. They underwent navigation-guided optic canal decompression via an external transcaruncular approach, following which both cases showed visual improvement. Postoperative visual evoked potentials and optical coherence tomography of the retinal nerve fibre layer showed improvement. These case reports emphasize the role of stereotactic navigation technology in optic canal decompression for cases of traumatic optic neuropathy.

  4. Implementation of an oblique-sectioning visualization tool for line-of-sight stereotactic neurosurgical navigation using the AVW toolkit

    NASA Astrophysics Data System (ADS)

    Bates, Lisa M.; Hanson, Dennis P.; Kall, Bruce A.; Meyer, Frederic B.; Robb, Richard A.

    1998-06-01

    An important clinical application of biomedical imaging and visualization techniques is the provision of image-guided neurosurgical planning and navigation using interactive computer display systems in the operating room. Current systems provide interactive display of orthogonal images and 3D surface or volume renderings integrated with and guided by the location of a surgical probe. However, structures in the 'line-of-sight' path which lead to the surgical target cannot be directly visualized, presenting difficulty in obtaining a full understanding of the 3D volumetric anatomic relationships necessary for effective neurosurgical navigation below the cortical surface. Complex vascular relationships and histologic boundaries like those found in arteriovenous malformations (AVMs) also contribute to the difficulty in determining optimal approaches prior to actual surgical intervention. These difficulties demonstrate the need for interactive oblique imaging methods to provide 'line-of-sight' visualization. Capabilities for 'line-of-sight' interactive oblique sectioning are present in several current neurosurgical navigation systems. However, our implementation is novel in that it utilizes a completely independent software toolkit, AVW (A Visualization Workshop), developed at the Mayo Biomedical Imaging Resource, integrated with a current neurosurgical navigation system, the COMPASS stereotactic system at Mayo Foundation. The toolkit is a comprehensive, C-callable imaging toolkit containing over 500 optimized imaging functions and structures. The powerful functionality and versatility of the AVW imaging toolkit provided facile integration and implementation of the desired interactive oblique sectioning using a finite set of functions. The implementation of the AVW-based code resulted in higher-level functions for complete 'line-of-sight' visualization.
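
    Although the AVW toolkit itself is not reproduced here, the idea of resampling a 'line-of-sight' oblique plane from an image volume can be sketched with SciPy as below; the plane parameterization and the random stand-in volume are assumptions for illustration only.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def oblique_slice(volume, centre, u, v, size=256, spacing=1.0):
            """Resample an arbitrary oblique plane from a 3D volume.
            centre: (z, y, x) point on the plane (e.g. the surgical target);
            u, v: orthonormal in-plane direction vectors in voxel coordinates."""
            s = (np.arange(size) - size / 2) * spacing
            a, b = np.meshgrid(s, s, indexing="ij")
            coords = (np.asarray(centre)[:, None, None]
                      + a[None] * np.asarray(u)[:, None, None]
                      + b[None] * np.asarray(v)[:, None, None])
            return map_coordinates(volume, coords, order=1, mode="nearest")

        vol = np.random.rand(64, 64, 64)           # stand-in image volume
        plane = oblique_slice(vol, centre=(32, 32, 32),
                              u=(0.0, 0.70710678, 0.70710678),
                              v=(1.0, 0.0, 0.0))
        print(plane.shape)                          # (256, 256)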

  5. Results from Navigator GPS Flight Testing for the Magnetospheric MultiScale Mission

    NASA Technical Reports Server (NTRS)

    Lulich, Tyler D.; Bamford, William A.; Wintermitz, Luke M. B.; Price, Samuel R.

    2012-01-01

    The recent delivery of the first Goddard Space Flight Center (GSFC) Navigator Global Positioning System (GPS) receivers to the Magnetospheric MultiScale (MMS) mission spacecraft is a high water mark crowning a decade of research and development in high-altitude space-based GPS. Preceding MMS delivery, the engineering team had developed receivers to support multiple missions and mission studies, such as Low Earth Orbit (LEO) navigation for the Global Precipitation Mission (GPM), above the constellation navigation for the Geostationary Operational Environmental Satellite (GOES) proof-of-concept studies, cis-Lunar navigation with rapid re-acquisition during re-entry for the Orion Project and an orbital demonstration on the Space Shuttle during the Hubble Servicing Mission (HSM-4).

  6. An indoor navigation system for the visually impaired.

    PubMed

    Guerrero, Luis A; Vasquez, Francisco; Ochoa, Sergio F

    2012-01-01

    Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have shown to be useful in real scenarios, they involve an important deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed taking into consideration usability as the quality requirement to be maximized. This solution enables one to identify the position of a person and calculates the velocity and direction of his movements. Using this information, the system determines the user's trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated using two experimental scenarios. Although the results are still not enough to provide strong conclusions, they indicate that the system is suitable to guide visually impaired people through an unknown built environment.

  7. Coding of navigational affordances in the human visual system

    PubMed Central

    Epstein, Russell A.

    2017-01-01

    A central component of spatial navigation is determining where one can and cannot go in the immediate environment. We used fMRI to test the hypothesis that the human visual system solves this problem by automatically identifying the navigational affordances of the local scene. Multivoxel pattern analyses showed that a scene-selective region of dorsal occipitoparietal cortex, known as the occipital place area, represents pathways for movement in scenes in a manner that is tolerant to variability in other visual features. These effects were found in two experiments: One using tightly controlled artificial environments as stimuli, the other using a diverse set of complex, natural scenes. A reconstruction analysis demonstrated that the population codes of the occipital place area could be used to predict the affordances of novel scenes. Taken together, these results reveal a previously unknown mechanism for perceiving the affordance structure of navigable space. PMID:28416669

  8. Navigation system for a mobile robot with a visual sensor using a fish-eye lens

    NASA Astrophysics Data System (ADS)

    Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu

    1998-02-01

    Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.

  9. Videoexoscopic real-time intraoperative navigation for spinal neurosurgery: a novel co-adaptation of two existing technology platforms, technical note.

    PubMed

    Huang, Meng; Barber, Sean Michael; Steele, William James; Boghani, Zain; Desai, Viren Rajendrakumar; Britz, Gavin Wayne; West, George Alexander; Trask, Todd Wilson; Holman, Paul Joseph

    2018-06-01

    Image-guided approaches to spinal instrumentation and interbody fusion have been widely popularized in the last decade [1-5]. Navigated pedicle screws are significantly less likely to breach [2, 3, 5, 6]. Navigation otherwise remains a point reference tool because the projection is off-axis to the surgeon's inline loupe or microscope view. The Synaptive robotic brightmatter drive videoexoscope monitor system represents a new paradigm for off-axis high-definition (HD) surgical visualization. It has many advantages over the traditional microscope and loupes, which have already been demonstrated in a cadaveric study [7]. An auxiliary, but powerful capability of this system is projection of a second, modifiable image in a split-screen configuration. We hypothesized that integration of both Medtronic and Synaptive platforms could permit the visualization of reconstructed navigation and surgical field images simultaneously. By utilizing navigated instruments, this configuration has the ability to support live image-guided surgery or real-time navigation (RTN). Medtronic O-arm/Stealth S7 navigation, MetRx, NavLock, and SureTrak spinal systems were implemented on a prone cadaveric specimen with a stream output to the Synaptive Display. Surgical visualization was provided using a Storz Image S1 platform and camera mounted to the Synaptive robotic brightmatter drive. We were able to successfully technically co-adapt both platforms. A minimally invasive transforaminal lumbar interbody fusion (MIS TLIF) and an open pedicle subtraction osteotomy (PSO) were performed using a navigated high-speed drill under RTN. Disc Shaver and Trials under RTN were implemented on the MIS TLIF. The synergy of Synaptive HD videoexoscope robotic drive and Medtronic Stealth platforms allow for live image-guided surgery or real-time navigation (RTN). Off-axis projection also allows upright neutral cervical spine operative ergonomics for the surgeons and improved surgical team visualization and education compared to traditional means. This technique has the potential to augment existing minimally invasive and open approaches, but will require long-term outcome measurements for efficacy.

  10. Scaling forest phenology from trees to the landscape using an unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Klosterman, S.; Melaas, E. K.; Martinez, A.; Richardson, A. D.

    2013-12-01

    Vegetation phenology monitoring has yielded a decades-long archive documenting the impacts of global change on the biosphere. However, the coarse spatial resolution of remote sensing obscures the organismic level processes driving phenology, while point measurements on the ground limit the extent of observation. Unmanned aerial vehicles (UAVs) enable low altitude remote sensing at higher spatial and temporal resolution than available from space borne platforms, and have the potential to elucidate the links between organism scale processes and landscape scale analyses of terrestrial phenology. This project demonstrates the use of a low cost multirotor UAV, equipped with a consumer grade digital camera, for observation of deciduous forest phenology and comparison to ground- and tower-based data as well as remote sensing. The UAV was flown approximately every five days during the spring green-up period in 2013, to obtain aerial photography over an area encompassing a 250m resolution MODIS (Moderate Resolution Imaging Spectroradiometer) pixel at Harvard Forest in central Massachusetts, USA. The imagery was georeferenced and tree crowns were identified using a detailed species map of the study area. Image processing routines were used to extract canopy 'greenness' time series, which were used to calculate phenology transition dates corresponding to early, middle, and late stages of spring green-up for the dominant canopy trees. Aggregated species level phenology estimates from the UAV data, including the mean and variance of phenology transition dates within species in the study area, were compared to model predictions based on visual assessment of a smaller sample size of individual trees, indicating the extent to which limited ground observations represent the larger landscape. At an intermediate scale, the UAV data was compared to data from repeat digital photography, integrating over larger portions of canopy within and near the study area, as a validation step and to see how well tower-based approaches characterize the surrounding landscape. Finally, UAV data was compared to MODIS data to determine how tree crowns within a remote sensing pixel combine to create the aggregate landscape phenology measured by remote sensing, using an area weighted average of the phenology of all dominant crowns.
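
    One common way to extract a canopy 'greenness' time series from RGB imagery is the green chromatic coordinate; the sketch below computes it for a region of interest. Whether the study used exactly this index is not stated here, so treat the choice as an illustrative assumption.

        import numpy as np

        def green_chromatic_coordinate(rgb_image, roi_mask=None):
            """Mean greenness GCC = G / (R + G + B) over a region of interest;
            a per-crown time series of such values can be fitted to estimate
            green-up transition dates."""
            img = rgb_image.astype(np.float64)
            r, g, b = img[..., 0], img[..., 1], img[..., 2]
            gcc = g / np.clip(r + g + b, 1e-6, None)
            if roi_mask is not None:
                gcc = gcc[roi_mask]
            return float(np.mean(gcc))

        # Hypothetical usage: one value per flight date builds the spring green-up curve
        # gcc_series = [green_chromatic_coordinate(img, crown_mask) for img in flight_images]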

  11. Corn and sorghum phenotyping using a fixed-wing UAV-based remote sensing system

    NASA Astrophysics Data System (ADS)

    Shi, Yeyin; Murray, Seth C.; Rooney, William L.; Valasek, John; Olsenholler, Jeff; Pugh, N. Ace; Henrickson, James; Bowden, Ezekiel; Zhang, Dongyan; Thomasson, J. Alex

    2016-05-01

    Recent development of unmanned aerial systems has created opportunities in automation of field-based high-throughput phenotyping by lowering flight operational cost and complexity and allowing flexible re-visit time and higher image resolution than satellite or manned airborne remote sensing. In this study, flights were conducted over corn and sorghum breeding trials in College Station, Texas, with a fixed-wing unmanned aerial vehicle (UAV) carrying two multispectral cameras and a high-resolution digital camera. The objectives were to establish the workflow and investigate the ability of UAV-based remote sensing for automating data collection of plant traits to develop genetic and physiological models. Most important among these traits were plant height and number of plants which are currently manually collected with high labor costs. Vegetation indices were calculated for each breeding cultivar from mosaicked and radiometrically calibrated multi-band imagery in order to be correlated with ground-measured plant heights, populations and yield across high genetic-diversity breeding cultivars. Growth curves were profiled with the aerial measured time-series height and vegetation index data. The next step of this study will be to investigate the correlations between aerial measurements and ground truth measured manually in field and from lab tests.
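
    As an illustration of relating a vegetation index to ground truth, the sketch below computes NDVI from calibrated red and near-infrared reflectance and correlates hypothetical per-plot NDVI means with synthetic plant-height values; none of the numbers come from the trial.

        import numpy as np

        def ndvi(nir, red):
            """Normalized difference vegetation index from calibrated reflectance bands."""
            nir = nir.astype(np.float64)
            red = red.astype(np.float64)
            return (nir - red) / np.clip(nir + red, 1e-6, None)

        # Hypothetical per-plot comparison against ground-measured plant heights (cm)
        rng = np.random.default_rng(0)
        plot_ndvi = rng.uniform(0.3, 0.8, size=20)                   # mean NDVI per plot
        plot_height = 50 + 120 * plot_ndvi + rng.normal(0, 5, 20)    # synthetic ground truth
        r = np.corrcoef(plot_ndvi, plot_height)[0, 1]
        print(f"Pearson r = {r:.2f}")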

  12. Evaluation of MR scanning, image registration, and image processing methods to visualize cortical veins for neurosurgery

    NASA Astrophysics Data System (ADS)

    Noordmans, Herke J.; Rutten, G. J. M.; Willems, Peter W. A.; Viergever, Max A.

    2000-04-01

    The visualization of brain vessels on the cortex helps the neurosurgeon in two ways: to avoid blood vessels when specifying the trepanation entry, and to overcome errors in the surgical navigation system due to brain shift. We compared 3D T1 MR, 3D T1 MR with gadolinium contrast, and MR venography as scanning techniques; mutual information as the registration technique; and thresholding and multi-vessel enhancement as image processing techniques. We evaluated the volume-rendered results based on their quality and their correspondence with photos taken during surgery. It appears that with 3D T1 MR scans, gadolinium is required to show cortical veins. The visibility of small cortical veins is strongly enhanced by subtracting a 3D T1 MR baseline scan, which should be registered to the scan with gadolinium contrast, even when the scans are made during the same session. Multi-vessel enhancement helps to clarify the view of small vessels by reducing the noise level, but strikingly does not reveal more. MR venography does show intracerebral veins in high detail, but is, as is, unsuited to showing cortical veins due to the low contrast with CSF.

  13. Splitting a colon geometry with multiplanar clipping

    NASA Astrophysics Data System (ADS)

    Ahn, David K.; Vining, David J.; Ge, Yaorong; Stelts, David R.

    1998-06-01

    Virtual colonoscopy, a recent three-dimensional (3D) visualization technique, has provided radiologists with a unique diagnostic tool. Using this technique, a radiologist can examine the internal morphology of a patient's colon by navigating through a surface-rendered model that is constructed from helical computed tomography image data. Virtual colonoscopy can be used to detect early forms of colon cancer in a way that is less invasive and expensive compared to conventional endoscopy. However, the common approach of 'flying' through the colon lumen to visually search for polyps is tedious and time-consuming, especially when a radiologist loses his or her orientation within the colon. Furthermore, a radiologist's field of view is often limited by the 3D camera position located inside the colon lumen. We have developed a new technique, called multi-planar geometry clipping, that addresses these problems. Our algorithm divides a complex colon anatomy into several smaller segments, and then splits each of these segments in half for display on a static medium. Multi-planar geometry clipping eliminates virtual colonoscopy's dependence upon expensive, real-time graphics workstations by enabling radiologists to globally inspect the entire internal surface of the colon from a single viewpoint.

  14. 14 CFR 121.349 - Communication and navigation equipment for operations under VFR over routes not navigated by...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Communication and navigation equipment for... § 121.349 Communication and navigation equipment for operations under VFR over routes not navigated by... receiver providing visual and aural signals; and (iii) One ILS receiver; and (3) Any RNAV system used to...

  15. OmicsNet: a web-based tool for creation and visual analysis of biological networks in 3D space.

    PubMed

    Zhou, Guangyan; Xia, Jianguo

    2018-06-07

    Biological networks play increasingly important roles in omics data integration and systems biology. Over the past decade, many excellent tools have been developed to support creation, analysis and visualization of biological networks. However, important limitations remain: most tools are standalone programs, the majority of them focus on protein-protein interaction (PPI) or metabolic networks, and visualizations often suffer from 'hairball' effects when networks become large. To help address these limitations, we developed OmicsNet - a novel web-based tool that allows users to easily create different types of molecular interaction networks and visually explore them in a three-dimensional (3D) space. Users can upload one or multiple lists of molecules of interest (genes/proteins, microRNAs, transcription factors or metabolites) to create and merge different types of biological networks. The 3D network visualization system was implemented using the powerful Web Graphics Library (WebGL) technology that works natively in most major browsers. OmicsNet supports force-directed layout, multi-layered perspective layout, as well as spherical layout to help visualize and navigate complex networks. A rich set of functions have been implemented to allow users to perform coloring, shading, topology analysis, and enrichment analysis. OmicsNet is freely available at http://www.omicsnet.ca.

  16. A Video Game Platform for Exploring Satellite and In-Situ Data Streams

    NASA Astrophysics Data System (ADS)

    Cai, Y.

    2014-12-01

    Exploring the spatiotemporal patterns of moving objects is essential to Earth Observation missions, such as tracking, modeling and predicting the movement of clouds, dust, plumes and harmful algal blooms. Those missions involve high-volume, multi-source, and multi-modal imagery data analysis. Analytical models are intended to reveal the inner structure, dynamics, and relationships of things. However, they are not necessarily intuitive to humans. Conventional scientific visualization methods are intuitive but limited by manual operations, such as area marking, measurement and alignment of multi-source data, which are expensive and time-consuming. A new video analytics platform has been in development, which integrates a video game engine with satellite and in-situ data streams. The system converts Earth Observation data into articulated objects that are mapped from a high-dimensional space to a 3D space. The object tracking and augmented reality algorithms highlight the objects' features in colors, shapes and trajectories, creating visual cues for observing dynamic patterns. The head and gesture tracker enables users to navigate the data space interactively. To validate our design, we have used NASA SeaWiFS satellite images of oceanographic remote sensing data and NOAA's in-situ cell count data. Our study demonstrates that the video game system can reduce the size and cost of traditional CAVE systems by two to three orders of magnitude. This system can also be used for satellite mission planning and public outreach.

  17. Visual influence on path integration in darkness indicates a multimodal representation of large-scale space

    PubMed Central

    Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil

    2011-01-01

    Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
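
    A heavily simplified stand-in for the paper's quantitative model is sketched below: the homing vector is a weighted combination of a visual estimate and an interoceptive path-integration estimate, so that adapting the visual gain biases returns performed in darkness. The weighting scheme and numbers are assumptions for illustration, not the published model.

        import numpy as np

        def combined_homing_estimate(visual_vec, intero_vec, w_visual):
            """Single multimodal estimate of the return-to-start vector as a weighted
            combination of visual and interoceptive path-integration estimates
            (w_visual in [0, 1])."""
            return w_visual * np.asarray(visual_vec) + (1.0 - w_visual) * np.asarray(intero_vec)

        # Hypothetical 2D return vectors (metres) under a manipulated visual rotation gain
        visual = np.array([-2.0, -3.5])   # where vision says "home" is
        intero = np.array([-2.4, -2.9])   # where body-based cues say "home" is
        print(combined_homing_estimate(visual, intero, w_visual=0.6))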

  18. Damage assessment using advanced non-intrusive inspection methods: integration of space, UAV, GPR, and field spectroscopy

    NASA Astrophysics Data System (ADS)

    Themistocleous, Kyriacos; Neocleous, Kyriacos; Pilakoutas, Kypros; Hadjimitsis, Diofantos G.

    2014-08-01

    The predominant approach for conducting road condition surveys and analyses is still largely based on extensive field observations. However, visual assessment alone cannot identify the actual extent and severity of damage. New non-invasive and cost-effective non-destructive testing (NDT) remote sensing technologies can be used to monitor road pavements across their life cycle, including remotely sensed aerial and satellite visual and thermal image data, Unmanned Aerial Vehicles (UAVs), spectroscopy and Ground Penetrating Radar (GPR). These non-contact techniques can be used to obtain surface and sub-surface information about damage in road pavements, including crack depth and in-depth structural failure. Thus, a smart and cost-effective methodology is required that integrates several of these non-destructive, non-contact techniques for damage assessment and monitoring at different levels. This paper presents an overview of how an integration of the above technologies can be used to conduct detailed road condition surveys. The proposed approach can also be used to predict future road maintenance needs; this information is valuable for strategic decision-making tools that optimize maintenance with respect to available resources and environmental constraints.

  19. Uav for Geodata Acquisition in Agricultural and Forestal Applications

    NASA Astrophysics Data System (ADS)

    Reidelstürz, P.; Schrenk, L.; Littmann, W.

    2011-09-01

    In the field of precision farming research, solutions are being worked out to combine ecological and economic requirements in a harmonious way. By integrating high-tech components into agricultural machinery, natural differences within fields (biodiversity) can be detected and taken into account, so that agricultural resources are economized while respecting natural ecological variability. Using precision farming, machining and labour time can be saved, productivity can be improved, the environmental burden can be reduced and the documentation of production processes can be improved. To realize precision farming it is essential to have up-to-date, large-scale data on the biodiversity within the field. In recent years, effective tractor-based equipment for real-time precision farming applications was developed. Using remote sensing, the biomass diversity of a field can be taken into account while applying operating resources economically. Because such large-scale data acquisition depends on expensive tractor-based inspections, capable Unmanned Aerial Vehicles (UAVs) could complement, or in special situations even replace, the tractor-based data acquisition needed to realize precision farming strategies. The specific advantages and application niches of UAVs seem ideal for use in precision farming. For example, even large agricultural fields in Germany can be covered by smaller UAVs. Data can be captured spontaneously, promptly, at large scale and with little dependence on weather conditions. In agricultural regions, UAV flights can be arranged within visual range, as the legislator currently requires in Germany; the use of autopilot systems is in fact necessary to ensure regular, area-wide data without gaps, not to fly beyond visual range. The risk of hazard is also minimized when flying UAVs over deserted agricultural areas. In a first stage, CIS GmbH cooperated with the Institute for Flight Systems of the University of the German Armed Forces in Neubiberg/Munich and the well-established precision farming company "Konsultationszentrum Liepen" to develop an applicable UAV for precision farming purposes. Currently, CIS GmbH and Technologie Campus Freyung, in close contact with the "flying robot" team of DLR Oberpfaffenhofen, collaborate to optimize the existing UAV and to extend its applications from data acquisition for biomass diversity to detecting the water supply situation in agricultural fields, supporting pest management systems, and checking the possibility of detecting bark beetle attacks on European spruce at an early stage of attack (green attack phase), by constructing and integrating further payload modules with different sensors into the existing UAV airframe. Effective data processing workflows also have to be worked out. Currently, the autopilot system "Piccolo" (Cloud Cap Technology) is integrated and a replaceable payload module is available, carrying a VIS and a NIR camera to calculate maps of NDVI diversity as an indicator of biomass diversity. Further modules with a six-channel multispectral still camera and with a spectrometer are planned. The airframe has a wingspan of about 3.45 m and weighs 4.2 kg ready to fly. The hand-launchable UAV can be started from any place in agricultural regions. The wing is configured with flaps, allowing steep approaches and short landings using a "butterfly" brake configuration. In spite of its lightweight configuration, the UAV has proven its worth under windy Baltic weather conditions by collecting regular, sharp images of fields at wind speeds of up to 15 m/s (Beaufort 6-7). In further projects, the development of additional payload modules and a user-friendly flight planning tool is scheduled, considering the different payload and airframe requirements of different precision farming and forestry applications. Data processing and the workflow will be optimized. Cooperation with further partners to establish UAV systems in agricultural, forestry and geodata acquisition applications is desired.

  20. SUSI 62 A Robust and Safe Parachute Uav with Long Flight Time and Good Payload

    NASA Astrophysics Data System (ADS)

    Thamm, H. P.

    2011-09-01

    In many research areas in the geosciences (erosion, land use, land cover change, etc.) and in applications (e.g. forest management, mining, land management), there is a demand for remote sensing images of very high spatial and temporal resolution. Due to the high costs of classic aerial photo campaigns, the use of a UAV is a promising option for obtaining the desired remotely sensed information at the time it is needed. However, the UAV must be easy to operate, safe and robust, and should have a high payload and long flight time. For that purpose, the parachute UAV SUSI 62 was developed. It consists of a steel frame with a powerful 62 cm³ two-stroke engine and a parachute wing. The frame can be easily disassembled for transportation or to replace parts. On the frame there is a gimbal-mounted sensor carrier where different sensors, standard SLR cameras and/or multi-spectral and thermal sensors can be mounted. Due to the design of the parachute, the SUSI 62 is very easy to control. Two different parachute sizes are available for different wind speed conditions. The SUSI 62 has a payload of up to 8 kg, providing options to use different sensors at the same time or to extend flight duration. The SUSI 62 needs a runway of between 10 m and 50 m, depending on the wind conditions. The maximum flight speed is approximately 50 km/h. It can be operated in wind speeds of up to 6 m/s. The parachute design makes the system comparatively safe, as a failure of the electronics or the remote control only results in the UAV coming to the ground at a slow speed. The video signal from the camera, the GPS coordinates and other flight parameters are transmitted to the ground station in real time. An autopilot is available, which guarantees that the area of investigation is covered at the desired resolution and overlap. The robustly designed SUSI 62 has been used successfully in Europe, Africa and Australia for scientific projects as well as for agricultural, forestry and industrial applications.

  1. Fusion of spatio-temporal UAV and proximal sensing data for an agricultural decision support system

    NASA Astrophysics Data System (ADS)

    Katsigiannis, P.; Galanis, G.; Dimitrakos, A.; Tsakiridis, N.; Kalopesas, C.; Alexandridis, T.; Chouzouri, A.; Patakas, A.; Zalidis, G.

    2016-08-01

    Over the last few years, multispectral and thermal remote sensing imagery from unmanned aerial vehicles (UAVs) has found application in agriculture and has come to be regarded as a means of field data collection and a source for crop condition monitoring. Integrating information derived from the analysis of these remotely sensed data into agricultural management applications supports stakeholders' decision making. Whereas agricultural decision support systems (DSS) have long been utilised in farming applications, there are still critical gaps to be addressed, as current approaches often neglect plant-level information and lack the robustness to account for the spatial and temporal variability of environmental parameters within agricultural systems. In this paper, we demonstrate the use of a custom-built autonomous UAV platform in providing critical information for an agricultural DSS. This hexacopter UAV carries two cameras that can be triggered simultaneously, capturing the visible and near-infrared (VNIR) and the thermal infrared (TIR) wavelengths. The platform was employed for the rapid extraction of the normalized difference vegetation index (NDVI) and the crop water stress index (CWSI) over three different plantations, namely a kiwi, a pomegranate, and a vine field. The simultaneous recording of these two complementary indices and the creation of maps was advantageous for the accurate assessment of the plantations' status. Fusion of the UAV and soil scanner system products pinpointed the need to adjust the irrigation management applied. It is concluded that timely CWSI and NDVI measures retrieved at different crop growing stages can provide additional information and can serve as a tool to support the existing irrigation DSS, which had so far been based exclusively on telemetry data from soil and agrometeorological sensors. Additionally, the multi-sensor UAV was found to be beneficial in collecting timely, spatio-temporal information for fusion with ground-based proximal sensing data. This research work was designed and deployed in the frame of the project "AGRO_LESS: Joint reference strategies for rural activities of reduced inputs".
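
    The two indices named above have simple closed forms: NDVI = (NIR - Red) / (NIR + Red) and CWSI = (Tcanopy - Twet) / (Tdry - Twet). The sketch below illustrates both; band names, array shapes and the wet/dry reference temperatures are assumptions for illustration, not the platform's actual processing chain.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index, (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)

def cwsi(canopy_temp: np.ndarray, t_wet: float, t_dry: float) -> np.ndarray:
    """Crop water stress index from thermal data: 0 = well watered, 1 = fully stressed."""
    return np.clip((canopy_temp - t_wet) / (t_dry - t_wet), 0.0, 1.0)

# Example with small synthetic 2 x 2 patches (reflectances and canopy temperatures in deg C).
ndvi_map = ndvi(np.array([[0.5, 0.6], [0.4, 0.7]]), np.array([[0.1, 0.1], [0.2, 0.1]]))
cwsi_map = cwsi(np.array([[28.0, 31.5], [34.0, 26.5]]), t_wet=25.0, t_dry=36.0)
```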

  2. Monitoring landslide dynamics using timeseries of UAV imagery

    NASA Astrophysics Data System (ADS)

    de Jong, S. M.; Van Beek, L. P.

    2017-12-01

    Landslides occur worldwide, can have large economic impacts and sometimes result in fatalities. Multiple factors are important in landslide processes and can make an area prone to landslide activity. Human factors such as drainage and the removal of vegetation or land clearing may cause a landslide. Other environmental factors, such as topography and the shear strength of the slope material, are more difficult to control. Landslides are typically triggered by heavy rainfall events and sometimes by earthquakes or undercutting by a river. The collection of data about existing landslides in a given area is important for predicting future landslides in that region. We have set up a landslide monitoring program using cameras aboard Unmanned Aerial Vehicles (UAVs). Camera-equipped UAVs can collect ultra-high-resolution images and can be operated in a very flexible way; they fit in the back of a car. In this study we used UAVs to collect a time series of high-resolution images over landslides in France and Australia. The UAV images were processed into orthomosaics and orthoDEMs using Structure from Motion (SfM). The process generally results in centimeter precision in the horizontal and vertical directions. Such multi-temporal datasets enable the detection of the landslide area, the leading-edge slope, temporal patterns and volumetric changes of particular areas of the landslide. We measured and computed surface movement of the landslide using the COSI-Corr image correlation algorithm with ground validation. Our study shows the possibilities of generating accurate Digital Surface Models (DSMs) of landslides using images collected with a UAV. The technique is robust and repeatable, such that a substantial time series of datasets can be routinely collected. It is shown that a time series of UAV images can be used to map landslide movements with centimeter accuracy. We also found that the slope of the leading edge can behave cyclically, suggesting that the steepness of the slope can be used to predict the next forward surge of the leading edge.
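
    One of the products described above, volumetric change between repeat surveys, can be illustrated by differencing two co-registered DSMs. The sketch below is a minimal illustration; the grid, cell size and noise threshold are assumptions, and COSI-Corr itself performs far more sophisticated sub-pixel image correlation for horizontal displacement.

```python
import numpy as np

def dsm_change(dsm_t0: np.ndarray, dsm_t1: np.ndarray, cell_size_m: float, threshold_m: float = 0.05):
    """Return the elevation-difference map, the changed-area mask and the net volume change."""
    dz = dsm_t1 - dsm_t0                         # per-cell elevation change (m)
    changed = np.abs(dz) > threshold_m           # ignore differences within assumed DSM noise
    volume_m3 = float(dz[changed].sum()) * cell_size_m**2
    return dz, changed, volume_m3

# Example with two tiny synthetic 3 x 3 DSMs on a 0.1 m grid.
before = np.zeros((3, 3))
after = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.2, 0.3],
                  [0.0, -0.1, 0.0]])
dz, mask, dv = dsm_change(before, after, cell_size_m=0.1)
```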

  3. Modeling and Visualizing Flow of Chemical Agents Across Complex Terrain

    NASA Technical Reports Server (NTRS)

    Kao, David; Kramer, Marc; Chaderjian, Neal

    2005-01-01

    Release of chemical agents across complex terrain presents a real threat to homeland security. Modeling and visualization tools are being developed that capture fluid flow-terrain interaction as well as the downstream flow paths of point-release dispersal. These analytic tools, when coupled with UAV atmospheric observations, provide predictive capabilities that allow for rapid emergency response as well as the development of a comprehensive preemptive counter-threat evacuation plan. The visualization tools involve high-end computing and massively parallel processing combined with texture mapping. We demonstrate our approach across a mountainous portion of Northern California under two contrasting meteorological conditions. Animations depicting flow over this geographical location provide immediate assistance in decision support and crisis management.

  4. A 3D Model Based Indoor Navigation System for Hubei Provincial Museum

    NASA Astrophysics Data System (ADS)

    Xu, W.; Kruminaite, M.; Onrust, B.; Liu, H.; Xiong, Q.; Zlatanova, S.

    2013-11-01

    3D models are more powerful than 2D maps for indoor navigation in a complicated space like Hubei Provincial Museum because they can provide accurate descriptions of the locations of indoor objects (e.g., doors, windows, tables) and context information about these objects. In addition, according to a user survey, the 3D model is the preferred navigation environment. Therefore, a 3D model based indoor navigation system has been developed for Hubei Provincial Museum to guide its visitors. The system consists of three layers (application, web service and navigation), which together support the localization, navigation and visualization functions of the system. The system has three main strengths: it stores all required data in one database and performs most calculations on the web server, which keeps the mobile client very lightweight; the network used for navigation is extracted semi-automatically and can be renewed; and the graphical user interface (GUI), which is based on a game engine, visualizes the 3D model on a mobile display with high performance.

  5. Validating an artificial intelligence human proximity operations system with test cases

    NASA Astrophysics Data System (ADS)

    Huber, Justin; Straub, Jeremy

    2013-05-01

    An artificial intelligence-controlled robot (AICR) operating in close proximity to humans poses a risk to these humans. Validating the performance of an AICR is an ill-posed problem, due to the complexity introduced by erratic (non-computer) actors. In order to prove the AICR's usefulness, test cases must be generated to simulate the actions of these actors. This paper discusses AICR performance validation in the context of a common human activity, moving through a crowded corridor, using test cases created by an AI use case producer. This test is a two-dimensional simplification relevant to autonomous UAV navigation in the national airspace.

  6. Simulating Navigation with Virtual 3d Geovisualizations - a Focus on Memory Related Factors

    NASA Astrophysics Data System (ADS)

    Lokka, I.; Çöltekin, A.

    2016-06-01

    The use of virtual environments (VEs) for navigation-related studies, such as spatial cognition and path retrieval, has been widely adopted in cognitive psychology and related fields. What motivates the use of VEs for such studies is that, as opposed to the real world, we can control for confounding variables in simulated VEs. When simulating a geographic environment as a virtual world with the intention to train navigational memory in humans, an effective and efficient visual design is important to facilitate recall. However, it is not yet clear how much information should be included in such visual designs intended to facilitate remembering: there can be too little or too much of it. Besides the amount of information or level of detail, the types of visual features ('elements' in a visual scene) that should be included in the representations to create memorable scenes and paths must be defined. We analyzed the literature in cognitive psychology, geovisualization and information visualization, and identified the key factors for studying and evaluating geovisualization designs with respect to their function of supporting and strengthening human navigational memory. The key factors we identified are: i) the individual abilities and age of the users, ii) the level of realism (LOR) included in the representations and iii) the context in which the navigation is performed, i.e. specific tasks within a case scenario. Here we present a concise literature review and our conceptual development for follow-up experiments.

  7. Classification of Urban Feature from Unmanned Aerial Vehicle Images Using Gasvm Integration and Multi-Scale Segmentation

    NASA Astrophysics Data System (ADS)

    Modiri, M.; Salehabadi, A.; Mohebbi, M.; Hashemi, A. M.; Masumi, M.

    2015-12-01

    The use of UAVs in photogrammetry to obtain coverage images and achieve the main objectives of photogrammetric mapping has been booming in the region. Images of the REGGIOLO region in the province of Reggio Emilia, Italy, taken by a UAV with a non-metric Canon Ixus camera at an average flight height of 139.42 m, were used to classify urban features. Using the SURE software and the coverage images of the study area, a dense point cloud, a DSM and an orthophoto with a spatial resolution of 10 cm were produced. A DTM of the area was derived using an adaptive TIN filtering algorithm. An nDSM of the area was prepared as the difference between the DSM and the DTM and added as a separate feature to the image stack. To extract texture features, the co-occurrence matrix statistics mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment and correlation were computed for each RGB band of the orthophoto. The classes used for the urban classification are buildings, trees and tall vegetation, grass and short vegetation, paved roads, and impervious surfaces; the impervious-surfaces class includes features such as pavement, cement, cars and roofs. Pixel-based classification and selection of the optimal features were performed with GASVM on a per-pixel basis. To achieve classification results with higher accuracy, spectral, textural and conceptual shape information of the orthophoto were combined, and a multi-scale segmentation method was used to assign image segments to classes. The results of the proposed classification of urban features suggest the suitability of this method for classifying urban scenes from UAV images. The overall accuracy and kappa coefficient of the method proposed in this study were 47/93% and 84/91%, respectively.
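
    The texture features listed above are statistics of a gray-level co-occurrence matrix (GLCM). The sketch below computes a symmetric, normalized GLCM for one offset and derives the eight statistics with plain NumPy; the offset, quantization level and normalization choices are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np

def glcm_features(band: np.ndarray, levels: int = 32, offset=(0, 1)):
    """Compute a symmetric, normalized GLCM for one pixel offset and derive eight statistics."""
    band = band.astype(float)
    peak = band.max() if band.max() > 0 else 1.0
    q = np.floor(band / peak * (levels - 1)).astype(int)      # quantize to 'levels' gray levels
    dy, dx = offset
    glcm = np.zeros((levels, levels))
    rows, cols = q.shape
    for y in range(rows - dy):                                # count co-occurring gray-level pairs
        for x in range(cols - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm += glcm.T                                            # make the matrix symmetric
    glcm /= glcm.sum()                                        # normalize to joint probabilities
    i, j = np.indices((levels, levels))
    mean_i = (i * glcm).sum()
    var_i = ((i - mean_i) ** 2 * glcm).sum()
    nz = glcm[glcm > 0]
    return {
        "mean": mean_i,
        "variance": var_i,
        "homogeneity": (glcm / (1.0 + np.abs(i - j))).sum(),
        "contrast": ((i - j) ** 2 * glcm).sum(),
        "dissimilarity": (np.abs(i - j) * glcm).sum(),
        "entropy": -(nz * np.log(nz)).sum(),
        "second_moment": (glcm ** 2).sum(),
        "correlation": (((i - mean_i) * (j - mean_i) * glcm).sum()) / (var_i if var_i > 0 else 1e-12),
    }

# Example on a small synthetic band.
features = glcm_features(np.random.default_rng(0).integers(0, 256, size=(64, 64)))
```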

  8. a Comparison among Different Optimization Levels in 3d Multi-Sensor Models. a Test Case in Emergency Context: 2016 Italian Earthquake

    NASA Astrophysics Data System (ADS)

    Chiabrando, F.; Sammartano, G.; Spanò, A.

    2017-02-01

    In sudden emergency contexts that affect urban centres and built heritage, the latest geomatics solutions must meet the demands of damage documentation, risk assessment, management and data sharing as efficiently as possible, in relation to the hazard conditions, the accessibility constraints of the areas and the tight deadlines involved. In recent times, unmanned aerial vehicles (UAVs) equipped with cameras have been increasingly involved in aerial survey and reconnaissance missions, and they are proving very cost-effective for 3D documentation and preliminary damage assessment. UAV equipment with low-cost sensors must become suitable, in the future, for every documentation situation, but above all for damage and uncertainty scenarios. Rapid acquisition times and low-cost sensors are attractive features, even if they may come at the cost of more time-consuming processing. The paper analyzes and attempts to classify the information content of 3D aerial and terrestrial models and the importance of the metric and non-metric information that can be extracted from them for further uses, such as structural analysis. The test case is an experience of Team Direct from Politecnico di Torino in central Italy, where a strong earthquake occurred in August 2016. The study is carried out on a stand-alone damaged building in Pescara del Tronto (AP), with a multi-sensor 3D survey. The aim is to evaluate the contribution of rapid terrestrial and aerial documentation by a SLAM-based LiDAR and a camera-equipped multirotor UAV, for a first reconnaissance inspection and for modelling, in terms of level of detail and of metric and non-metric information.

  9. Evaluation of a technique to simplify area navigation and required navigation performance charts

    DOT National Transportation Integrated Search

    2013-06-30

    Performance based navigation (PBN), an enabler for the Federal Aviation Administration's Next Generation Air Transportation System (NextGEN), supports the design of more precise flight procedures. However, these new procedures can be visually complex...

  10. DNA Data Visualization (DDV): Software for Generating Web-Based Interfaces Supporting Navigation and Analysis of DNA Sequence Data of Entire Genomes.

    PubMed

    Neugebauer, Tomasz; Bordeleau, Eric; Burrus, Vincent; Brzezinski, Ryszard

    2015-01-01

    Data visualization methods are necessary during the exploration and analysis activities of an increasingly data-intensive scientific process. There are few existing visualization methods for raw nucleotide sequences of a whole genome or chromosome. Software for data visualization should allow the researchers to create accessible data visualization interfaces that can be exported and shared with others on the web. Herein, novel software developed for generating DNA data visualization interfaces is described. The software converts DNA data sets into images that are further processed as multi-scale images to be accessed through a web-based interface that supports zooming, panning and sequence fragment selection. Nucleotide composition frequencies and GC skew of a selected sequence segment can be obtained through the interface. The software was used to generate DNA data visualization of human and bacterial chromosomes. Examples of visually detectable features such as short and long direct repeats, long terminal repeats, mobile genetic elements, heterochromatic segments in microbial and human chromosomes, are presented. The software and its source code are available for download and further development. The visualization interfaces generated with the software allow for the immediate identification and observation of several types of sequence patterns in genomes of various sizes and origins. The visualization interfaces generated with the software are readily accessible through a web browser. This software is a useful research and teaching tool for genetics and structural genomics.
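
    The per-segment statistics exposed by the interface, nucleotide composition frequencies and GC skew, have simple definitions: GC skew = (G - C) / (G + C). A minimal sketch follows; the sliding-window size is an illustrative assumption, not necessarily the windowing used by the DDV software.

```python
from collections import Counter

def nucleotide_frequencies(segment: str) -> dict:
    """Relative frequency of A, C, G, T in a sequence segment."""
    counts = Counter(segment.upper())
    total = sum(counts[b] for b in "ACGT") or 1
    return {b: counts[b] / total for b in "ACGT"}

def gc_skew(segment: str) -> float:
    """GC skew = (G - C) / (G + C); positive values indicate G-rich segments."""
    s = segment.upper()
    g, c = s.count("G"), s.count("C")
    return (g - c) / (g + c) if (g + c) else 0.0

# Example: skew along a sequence in 1 kb windows.
seq = "ACGT" * 1000
window = 1000
skews = [gc_skew(seq[i:i + window]) for i in range(0, len(seq), window)]
```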

  11. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter

    PubMed Central

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan

    2018-01-01

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
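
    The top-level fusion step follows the linear-minimum-variance principle. Under the simplifying assumption that the local estimation errors are uncorrelated, the fused estimate is the information-weighted combination shown below; the paper's unscented-transformation-based derivation handles the general multi-sensor case, so this is only an illustrative sketch.

```python
import numpy as np

def lmv_fuse(estimates, covariances):
    """Fuse local estimates x_i with covariances P_i:
       P = (sum P_i^-1)^-1,  x = P @ sum(P_i^-1 @ x_i)."""
    info = sum(np.linalg.inv(P) for P in covariances)                      # total information matrix
    info_state = sum(np.linalg.inv(P) @ x for x, P in zip(estimates, covariances))
    P_fused = np.linalg.inv(info)
    x_fused = P_fused @ info_state
    return x_fused, P_fused

# Example: two local 2-state estimates (e.g., position/velocity) from different local filters.
x1, P1 = np.array([1.0, 0.5]), np.diag([0.04, 0.09])
x2, P2 = np.array([1.2, 0.4]), np.diag([0.09, 0.04])
x, P = lmv_fuse([x1, x2], [P1, P2])
```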

  12. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.

    PubMed

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan

    2018-02-06

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.

  13. Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2016-10-01

    Recent technological advancements in hardware systems have made higher-quality cameras possible. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps).1 Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time.2 In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection from a field trial conducted in August 2015.

  14. Optimal configuration of respiratory navigator gating for the quantification of left ventricular strain using spiral cine displacement encoding with stimulated echoes (DENSE) MRI.

    PubMed

    Hamlet, Sean M; Haggerty, Christopher M; Suever, Jonathan D; Wehner, Gregory J; Andres, Kristin N; Powell, David K; Zhong, Xiaodong; Fornwalt, Brandon K

    2017-03-01

    To determine the optimal respiratory navigator gating configuration for the quantification of left ventricular strain using spiral cine displacement encoding with stimulated echoes (DENSE) MRI. Two-dimensional spiral cine DENSE was performed on a 3 Tesla MRI using two single-navigator configurations (retrospective, prospective) and a combined "dual-navigator" configuration in 10 healthy adults and 20 healthy children. The adults also underwent breathhold DENSE as a reference standard for comparisons. Peak left ventricular strains, signal-to-noise ratio (SNR), and navigator efficiency were compared. Subjects also underwent dual-navigator gating with and without visual feedback to determine the effect on navigator efficiency. There were no differences in circumferential, radial, and longitudinal strains between navigator-gated and breathhold DENSE (P = 0.09-0.95) (confidence intervals, retrospective: [-1.0%, 1.1%], [-7.4%, 2.0%], [-1.0%, 1.2%]; prospective: [-0.6%, 2.7%], [-2.8%, 8.3%], [-0.3%, 2.9%]; dual: [-1.6%, 0.5%], [-8.3%, 3.2%], [-0.8%, 1.9%], respectively). The dual configuration maintained SNR compared with breathhold acquisitions (16 versus 18, P = 0.06). SNR for the prospective configuration was lower than for the dual navigator in adults (P = 0.004) and children (P < 0.001). Navigator efficiency was higher (P < 0.001) for both the retrospective (54%) and prospective (56%) configurations compared with the dual configuration (35%). Visual feedback improved the dual-configuration navigator efficiency to 55% (P < 0.001). When quantifying left ventricular strains using spiral cine DENSE MRI, a dual navigator configuration results in the highest SNR in adults and children. In adults, a retrospective configuration has good navigator efficiency without a substantial drop in SNR. Prospective gating should be avoided because it has the lowest SNR. Visual feedback represents an effective option to maintain navigator efficiency while using a dual navigator configuration. Level of Evidence: 2. J. Magn. Reson. Imaging 2017;45:786-794. © 2016 International Society for Magnetic Resonance in Medicine.

  15. Intraoperative 3-Dimensional Computed Tomography and Navigation in Foot and Ankle Surgery.

    PubMed

    Chowdhary, Ashwin; Drittenbass, Lisca; Dubois-Ferrière, Victor; Stern, Richard; Assal, Mathieu

    2016-09-01

    Computer-assisted orthopedic surgery has developed dramatically during the past 2 decades. This article describes the use of intraoperative 3-dimensional computed tomography and navigation in foot and ankle surgery. Traditional imaging based on serial radiography or C-arm-based fluoroscopy does not provide simultaneous real-time 3-dimensional imaging, and thus leads to suboptimal visualization and guidance. Three-dimensional computed tomography allows for accurate intraoperative visualization of the position of bones and/or navigation implants. Such imaging and navigation helps to further reduce intraoperative complications, leads to improved surgical outcomes, and may become the gold standard in foot and ankle surgery. [Orthopedics.2016; 39(5):e1005-e1010.]. Copyright 2016, SLACK Incorporated.

  16. Interactive Visual Analysis within Dynamic Ocean Models

    NASA Astrophysics Data System (ADS)

    Butkiewicz, T.

    2012-12-01

    The many observation and simulation based ocean models available today can provide crucial insights for all fields of marine research and can serve as valuable references when planning data collection missions. However, the increasing size and complexity of these models makes leveraging their contents difficult for end users. Through a combination of data visualization techniques, interactive analysis tools, and new hardware technologies, the data within these models can be made more accessible to domain scientists. We present an interactive system that supports exploratory visual analysis within large-scale ocean flow models. The currents and eddies within the models are illustrated using effective, particle-based flow visualization techniques. Stereoscopic displays and rendering methods are employed to ensure that the user can correctly perceive the complex 3D structures of depth-dependent flow patterns. Interactive analysis tools are provided which allow the user to experiment through the introduction of their customizable virtual dye particles into the models to explore regions of interest. A multi-touch interface provides natural, efficient interaction, with custom multi-touch gestures simplifying the otherwise challenging tasks of navigating and positioning tools within a 3D environment. We demonstrate the potential applications of our visual analysis environment with two examples of real-world significance: Firstly, an example of using customized particles with physics-based behaviors to simulate pollutant release scenarios, including predicting the oil plume path for the 2010 Deepwater Horizon oil spill disaster. Secondly, an interactive tool for plotting and revising proposed autonomous underwater vehicle mission pathlines with respect to the surrounding flow patterns predicted by the model; as these survey vessels have extremely limited energy budgets, designing more efficient paths allows for greater survey areas.
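
    The particle-based flow visualization mentioned above can be illustrated with a minimal advection loop: seed "dye" particles and integrate them through the velocity field. The analytic vortex field, step size and seeding pattern below are assumptions for illustration, not the ocean model's data or the system's rendering pipeline.

```python
import numpy as np

def velocity(p):
    """Toy circular-vortex velocity field evaluated at points p with shape (N, 2)."""
    x, y = p[:, 0], p[:, 1]
    return np.stack([-y, x], axis=1)

def advect(particles, dt=0.05, steps=100):
    """Integrate particle positions forward with a midpoint (RK2) scheme."""
    path = [particles.copy()]
    for _ in range(steps):
        k1 = velocity(particles)
        k2 = velocity(particles + 0.5 * dt * k1)
        particles = particles + dt * k2
        path.append(particles.copy())
    return np.stack(path)            # (steps + 1, N, 2) trajectories, ready for rendering

# Example: release a small ring of dye particles around (1, 0).
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
seeds = np.stack([1.0 + 0.1 * np.cos(theta), 0.1 * np.sin(theta)], axis=1)
trajectories = advect(seeds)
```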

  17. Multi-stage classification method oriented to aerial image based on low-rank recovery and multi-feature fusion sparse representation.

    PubMed

    Ma, Xu; Cheng, Yongmei; Hao, Shuai

    2016-12-10

    Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to illumination and noise, which can easily cause poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture features are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery for the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
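
    The low-rank recovery step described above can be illustrated in its standard robust-PCA form, solved with an inexact augmented-Lagrange-multiplier (ALM) scheme: split a matrix D into a low-rank part L and a sparse part S. The parameters below (lambda, mu, tolerances) are generic defaults, not the authors' settings, and the paper applies the recovery to a feature dictionary rather than to the toy matrix used here.

```python
import numpy as np

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_ialm(D, max_iter=500, tol=1e-7):
    """Split D into a low-rank part L and a sparse part S (inexact ALM robust PCA)."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)   # dual variable initialization
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # Low-rank update: singular value thresholding.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: element-wise shrinkage.
        S = soft_threshold(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y = Y + mu * residual
        if np.linalg.norm(residual, "fro") <= tol * np.linalg.norm(D, "fro"):
            break
    return L, S

# Example: a rank-1 matrix corrupted by a few large outliers.
rng = np.random.default_rng(0)
clean = np.outer(rng.normal(size=40), rng.normal(size=30))
corrupt = clean.copy()
corrupt[rng.integers(0, 40, 20), rng.integers(0, 30, 20)] += 10.0
L, S = rpca_ialm(corrupt)
```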

  18. Autonomous Visual Navigation of an Indoor Environment Using a Parsimonious, Insect Inspired Familiarity Algorithm

    PubMed Central

    Brayfield, Brad P.

    2016-01-01

    The navigation of bees and ants from hive to food and back has captivated people for more than a century. Recently, the Navigation by Scene Familiarity Hypothesis (NSFH) has been proposed as a parsimonious approach that is congruent with the limited neural elements of these insects’ brains. In the NSFH approach, an agent completes an initial training excursion, storing images along the way. To retrace the path, the agent scans the area and compares the current scenes to those previously experienced. By turning and moving to minimize the pixel-by-pixel differences between encountered and stored scenes, the agent is guided along the path without having memorized the sequence. An important premise of the NSFH is that the visual information of the environment is adequate to guide navigation without aliasing. Here we demonstrate that an image landscape of an indoor setting possesses ample navigational information. We produced a visual landscape of our laboratory and part of the adjoining corridor consisting of 2816 panoramic snapshots arranged in a grid at 12.7-cm centers. We show that pixel-by-pixel comparisons of these images yield robust translational and rotational visual information. We also produced a simple algorithm that tracks previously experienced routes within our lab based on an insect-inspired scene familiarity approach and demonstrate that adequate visual information exists for an agent to retrace complex training routes, including those where the path’s end is not visible from its origin. We used this landscape to systematically test the interplay of sensor morphology, angles of inspection, and similarity threshold with the recapitulation performance of the agent. Finally, we compared the relative information content and chance of aliasing within our visually rich laboratory landscape to scenes acquired from indoor corridors with more repetitive scenery. PMID:27119720
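
    The familiarity principle described above reduces to a simple loop: scan candidate headings, score each candidate view by its best pixel-by-pixel match against the stored training snapshots, and move in the most familiar direction. The sketch below illustrates this with an invented toy renderer and random snapshots; the view function, snapshot format and scan angles are assumptions, whereas the study's agent operates on 2816 real panoramic images.

```python
import numpy as np

def familiarity(view: np.ndarray, stored_views: np.ndarray) -> float:
    """Lower is more familiar: best sum-of-squared-differences against the training set."""
    diffs = ((stored_views - view) ** 2).reshape(len(stored_views), -1).sum(axis=1)
    return float(diffs.min())

def choose_heading(render_view, position, stored_views, scan_degrees):
    """Rotate in place, score the view at each candidate heading, return the most familiar one."""
    scores = {h: familiarity(render_view(position, h), stored_views) for h in scan_degrees}
    return min(scores, key=scores.get)

# Example with a toy 8 x 8 grayscale "panorama" renderer (hypothetical stand-in).
rng = np.random.default_rng(1)
stored = rng.random((50, 8, 8))                       # 50 stored training snapshots

def render_view(position, heading):                   # placeholder renderer, ignores position
    return np.roll(stored[0], heading // 45, axis=1) + 0.01 * rng.random((8, 8))

best_heading = choose_heading(render_view, (0, 0), stored, range(0, 360, 45))
```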

  19. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation

    PubMed Central

    Scarfe, Amy C.; Moore, Brian C. J.; Pardhan, Shahina

    2017-01-01

    Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound. PMID:28407000

  20. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation.

    PubMed

    Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina

    2017-01-01

    Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.
