Science.gov

Sample records for 3d lidar sensor

  1. Lidar on small UAV for 3D mapping

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. Michael; Larsson, Håkan

    2014-10-01

    Small UAVs (Unmanned Aerial Vehicles) are currently in an explosive technical development phase. The performance of UAV-system components such as inertial navigation sensors, propulsion, control processors and algorithms is gradually improving. Simultaneously, lidar technologies are continuously developing in terms of reliability and accuracy, as well as speed of data collection, storage and processing. The lidar development towards miniature systems with high data rates has, together with recent UAV development, great potential for new three-dimensional (3D) mapping capabilities. Compared to lidar mapping from manned full-size aircraft, a small unmanned aircraft can be cost-efficient over small areas and more flexible to deploy. An advantage of high-resolution lidar compared to 3D mapping from passive (multi-angle) photogrammetry is the ability to penetrate through vegetation and detect partially obscured targets. Another advantage is the ability to obtain 3D data over the whole survey area, without the limited performance of passive photogrammetry in low-contrast areas. The purpose of our work is to demonstrate 3D lidar mapping capability from a small multirotor UAV. We present the first experimental results and the mechanical and electrical integration of the Velodyne HDL-32E lidar on a six-rotor aircraft with a total weight of 7 kg. The rotating lidar is mounted at an angle of 20 degrees from the horizontal plane, giving a vertical field-of-view of 10-50 degrees below the horizon in the aircraft's forward direction. Absolute positioning of the 3D data requires accurate positioning and orientation of the lidar sensor. We evaluate the lidar data position accuracy based both on inertial navigation system (INS) data alone and on INS data combined with lidar data. The INS sensors consist of accelerometers, gyroscopes, GPS, magnetometers, and a pressure sensor for altimetry. The lidar range resolution and accuracy are documented, as well as the …

  2. Accuracy evaluation of 3D lidar data from small UAV

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav

    2015-10-01

    A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental prerequisite for detection and recognition of objects in a single-flight dataset, as well as for change detection using two or more data collections over the same scene. Our work presented here has two purposes: first, to relate the point cloud accuracy to data processing parameters and, second, to examine the influence of the UAV platform parameters on accuracy. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS-based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
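
    One way to realize the "local surface smoothness" metric used here is the RMS of point distances to a locally fitted plane. The following minimal Python sketch (synthetic data, not the authors' code) illustrates the idea:

        import numpy as np

        def plane_rms(points):
            """RMS distance of points to their best-fit plane (via SVD/PCA)."""
            centered = points - points.mean(axis=0)
            # The right singular vector with the smallest singular value is
            # the plane normal; projections onto it are the residuals.
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            residuals = centered @ vt[-1]
            return np.sqrt(np.mean(residuals ** 2))

        # Example: a 1 m x 1 m planar patch with 5 cm Gaussian noise along z.
        rng = np.random.default_rng(0)
        patch = np.column_stack([rng.uniform(0, 1, 500),
                                 rng.uniform(0, 1, 500),
                                 rng.normal(0, 0.05, 500)])
        print(f"roughness: {plane_rms(patch):.3f} m")  # ~0.050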

  3. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board.

    PubMed

    Park, Yoonsu; Yun, Seokmin; Won, Chee Sun; Cho, Kyungeun; Um, Kyhyun; Sim, Sungdae

    2014-03-17

    Calibration between a color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low-resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. Since the lengths of adjacent sides are known, we can estimate the vertices of the board as the meeting points of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results.
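
    A hedged sketch of the vertex-estimation step the abstract describes: fit a 3D line to the laser points sampled along each of two adjacent board sides, then take the point closest to both lines as the vertex. Function names and the toy data are illustrative only:

        import numpy as np

        def fit_line(points):
            """Least-squares 3D line: centroid + principal direction."""
            c = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - c)
            return c, vt[0]

        def closest_point(p1, d1, p2, d2):
            """Midpoint of the shortest segment between two 3D lines."""
            w = p1 - p2
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            d, e = d1 @ w, d2 @ w
            denom = a * c - b * b
            s = (b * e - c * d) / denom
            t = (a * e - b * d) / denom
            return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

        # Two board edges meeting at roughly (0, 0, 0):
        e1 = np.array([[0.1, 0.1, 0], [0.5, 0.5, 0], [0.9, 0.9, 0]])
        e2 = np.array([[0.1, -0.1, 0], [0.5, -0.5, 0], [0.9, -0.9, 0]])
        print(closest_point(*fit_line(e1), *fit_line(e2)))  # ~[0 0 0]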

  4. Compact 3D flash lidar video cameras and applications

    NASA Astrophysics Data System (ADS)

    Stettner, Roger

    2010-04-01

    The theory and operation of Advanced Scientific Concepts, Inc.'s (ASC) latest compact 3D Flash LIDAR Video Cameras (3D FLVCs) are discussed, along with a growing number of technical problems and solutions. The applications range from space shuttle docking, planetary entry, descent and landing, and surveillance to autonomous and manned ground-vehicle navigation and 3D imaging through particle obscurants.

  5. Highway 3D model from image and lidar data

    NASA Astrophysics Data System (ADS)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction developed based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions, as well as significant roadside objects (such as signs and building fronts), for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  6. 3D imaging lidar for lunar robotic exploration

    NASA Astrophysics Data System (ADS)

    Hussein, Marwan W.; Tripp, Jeffrey W.

    2009-05-01

    Part of the requirements of the future Constellation program is to optimize lunar surface operations and reduce hazards to astronauts. Toward this end, many robotic platforms, rovers in particular, are being sought to carry out a multitude of missions involving potential EVA site surveys, surface reconnaissance, path planning, and obstacle detection and classification. 3D imaging lidar technology provides an enabling capability that allows fast, accurate and detailed collection of three-dimensional information about the rover's environment. The lidar images the region of interest by scanning a laser beam and measuring the pulse time-of-flight and the bearing. The accumulated set of laser ranges and bearings constitutes the three-dimensional image. As part of the ongoing NASA Ames Research Center activities in lunar robotics, the utility of 3D imaging lidar was evaluated by testing Optech's ILRIS-3D lidar on board the K-10 Red rover during the recent Human-Robotics Systems (HRS) field trials in Moses Lake, WA. This paper examines the results of the ILRIS-3D trials, presents the data obtained and discusses its application in lunar surface robotic surveying and scouting.
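
    The measurement model described here is simple enough to state in a few lines: each pulse's two-way time-of-flight plus the beam bearing gives one 3D point, and accumulating points gives the image. A minimal sketch (angles in radians, axis conventions assumed):

        import numpy as np

        C = 299_792_458.0                         # speed of light (m/s)

        def tof_to_point(tof_s, azimuth, elevation):
            r = 0.5 * C * tof_s                   # halve the two-way travel time
            return r * np.array([np.cos(elevation) * np.cos(azimuth),
                                 np.cos(elevation) * np.sin(azimuth),
                                 np.sin(elevation)])

        print(tof_to_point(66.7e-9, 0.0, 0.0))    # ~[10, 0, 0] m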

  7. Fabrication of 3D Silicon Sensors

    SciTech Connect

    Kok, A.; Hansen, T.E.; Hansen, T.A.; Lietaer, N.; Summanwar, A.; Kenney, C.; Hasi, J.; Da Via, C.; Parker, S.I.

    2012-06-06

    Silicon sensors with a three-dimensional (3-D) architecture, in which the n and p electrodes penetrate through the entire substrate, have many advantages over planar silicon sensors, including radiation hardness, fast time response, active edges and dual readout capabilities. The fabrication of 3D sensors is, however, rather complex. In recent years, there have been worldwide activities on 3D fabrication. SINTEF, in collaboration with the Stanford Nanofabrication Facility, has successfully fabricated the original (single-sided, double-column type) 3D detectors in two prototype runs, and a third run is now ongoing. This paper reports the status of this fabrication work and the resulting yield. The work of other groups, such as the development of double-sided 3D detectors, is also briefly reported.

  8. Georeferenced LiDAR 3D Vine Plantation Map Generation

    PubMed Central

    Llorens, Jordi; Gil, Emilio; Llop, Jordi; Queraltó, Meritxell

    2011-01-01

    The use of electronic devices for canopy characterization has recently been widely discussed. Among such devices, LiDAR sensors appear to be the most accurate and precise. Information obtained with a LiDAR sensor while driving a tractor along a crop row can be managed and transformed into canopy density maps by evaluating the frequency of LiDAR returns. This paper describes a proposed methodology to obtain a georeferenced canopy map by combining the information obtained with LiDAR with that generated by a GPS receiver installed on top of the tractor. The velocity of the LiDAR measurements and the UTM coordinates of each measured point on the canopy were obtained by applying the proposed transformation process. The process allows the generated canopy density map to be overlaid on an image of the measured area in Google Earth®, providing accurate information about the canopy distribution and/or the location of damage along the rows. This methodology was applied and tested on different vine varieties and crop stages in two important vine production areas in Spain. The results indicate that the georeferenced information obtained with LiDAR sensors appears to be an interesting tool with the potential to improve crop management processes. PMID:22163952
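
    The transformation the paper proposes can be sketched as follows: combine each range/angle return with the tractor's GPS pose to produce UTM-georeferenced points. The scan-plane orientation and heading convention below are assumptions, not the authors' exact formulation:

        import numpy as np

        def georeference(ranges, angles, easting, northing, altitude, heading):
            """Scan plane assumed vertical, perpendicular to the driving direction."""
            # Sensor-frame coordinates: y across the row, z up.
            y = ranges * np.cos(angles)
            z = ranges * np.sin(angles)
            # Rotate the across-row axis by the vehicle heading (radians,
            # measured from grid east) and shift by the GPS position.
            e = easting - y * np.sin(heading)
            n = northing + y * np.cos(heading)
            return np.column_stack([e, n, altitude + z])

        pts = georeference(np.array([2.0, 2.5]), np.radians([10.0, 30.0]),
                           easting=302050.0, northing=4617000.0,
                           altitude=210.0, heading=np.radians(90.0))
        print(pts)   # UTM easting, northing, elevation per return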

  9. Advanced 3D imaging lidar concepts for long range sensing

    NASA Astrophysics Data System (ADS)

    Gordon, K. J.; Hiskett, P. A.; Lamb, R. A.

    2014-06-01

    Recent developments in 3D imaging lidar are presented. Long-range 3D imaging using photon counting is now a possibility, offering a low-cost approach to integrated remote sensing with step-changing advantages in size, weight and power compared to conventional analogue active imaging technology. We report results using a Geiger-mode array for time-of-flight, single-photon-counting lidar for depth profiling and determination of the shape and size of tree canopies and distributed surface reflections at a range of 9 km, using 4 μJ pulses at a frame rate of 100 kHz from a low-cost fibre laser operating at a wavelength of λ = 1.5 μm. The range resolution is less than 4 cm, providing very high depth resolution for target identification. This specification opens up several additional functionalities for advanced lidar, for example: absolute rangefinding and depth profiling for long-range identification, optical communications, turbulence sensing and time-of-flight spectroscopy. Future concepts for 3D time-of-flight polarimetric and multispectral imaging lidar, with optical communications in a single integrated system, are also proposed.

  10. Evaluation of single photon and Geiger mode Lidar for the 3D Elevation Program

    USGS Publications Warehouse

    Stoker, Jason M.; Abdullah, Qassim; Nayegandhi, Amar; Winehouse, Jayna

    2016-01-01

    Data acquired by Harris Corporation's (Melbourne, FL, USA) Geiger-mode IntelliEarth™ sensor and Sigma Space Corporation's (Lanham-Seabrook, MD, USA) Single Photon HRQLS sensor were evaluated and compared to accepted 3D Elevation Program (3DEP) data and survey ground control to assess the suitability of these new technologies for the 3DEP. While these sensors are not currently able to collect data that meet the USGS lidar base specification, this is partially because the specification was written specifically for linear-mode systems. With little effort on the part of the manufacturers of the new lidar systems and the USGS lidar specifications team, data from these systems could soon serve the 3DEP program and its users. Many of the shortcomings noted in this study have reportedly been corrected or improved upon in the next generation of sensors.

  11. Estimating the relationship between urban 3D morphology and land surface temperature using airborne LiDAR and Landsat-8 Thermal Infrared Sensor data

    NASA Astrophysics Data System (ADS)

    Lee, J. H.

    2015-12-01

    Urban forests are known for mitigating the urban heat island effect and heat-related health issues by reducing air and surface temperatures. Beyond the amount of canopy area, however, little is known about what kinds of spatial patterns and structures of urban forests best contribute to reducing temperatures and mitigating urban heat effects. Previous studies attempted to find the relationship between land surface temperature and various indicators of vegetation abundance using remotely sensed data, but the majority of those studies relied on two-dimensional, area-based metrics such as tree canopy cover, impervious surface area, and the Normalized Difference Vegetation Index (NDVI). This study investigates the relationship between the three-dimensional spatial structure of urban forests and urban surface temperature, focusing on vertical variance. We use a Landsat-8 Thermal Infrared Sensor image (acquired on July 24, 2014) to estimate the land surface temperature of the City of Sacramento, CA. We extract the height and volume of urban features (both vegetation and non-vegetation) using airborne LiDAR (Light Detection and Ranging) and high-spatial-resolution aerial imagery. Using regression analysis, we apply an empirical approach to find the relationship between the land surface temperature and different sets of variables that describe spatial patterns and structures of various urban features, including trees. Our analysis demonstrates that incorporating vertical variance parameters improves the accuracy of the model. The results of the study suggest urban tree planting is an effective and viable solution for mitigating urban heat, by increasing the variance of the urban surface as well as through the evaporative cooling effect.
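
    As a toy illustration of the empirical approach, the sketch below regresses land surface temperature on a 2D cover fraction alone and then adds a LiDAR-derived vertical variance predictor; all arrays are synthetic stand-ins for the Sacramento rasters:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 1000                                  # analysis cells
        canopy_cover = rng.uniform(0, 1, n)       # 2D canopy fraction per cell
        height_var = rng.uniform(0, 30, n)        # LiDAR height variance (m^2)
        lst = 45 - 8 * canopy_cover - 0.1 * height_var + rng.normal(0, 1, n)

        base = LinearRegression().fit(canopy_cover[:, None], lst)
        full = LinearRegression().fit(
            np.column_stack([canopy_cover, height_var]), lst)
        print(base.score(canopy_cover[:, None], lst))            # R^2, 2D only
        print(full.score(np.column_stack([canopy_cover, height_var]), lst))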

  12. Drogue tracking using 3D flash lidar for autonomous aerial refueling

    NASA Astrophysics Data System (ADS)

    Chen, Chao-I.; Stettner, Roger

    2011-06-01

    Autonomous aerial refueling (AAR) is an important capability for an unmanned aerial vehicle (UAV) to increase its flying range and endurance without increasing its size. This paper presents a novel tracking method that utilizes both 2D intensity and 3D point-cloud data acquired with a 3D Flash LIDAR sensor to establish the relative position and orientation between the receiver vehicle and the drogue during an aerial refueling process. Unlike classic vision-based sensors, a 3D Flash LIDAR sensor can provide 3D point-cloud data in real time without motion blur, day or night, and is capable of imaging through fog and clouds. The proposed method segments out the drogue through 2D analysis and estimates the center of the drogue from 3D point-cloud data for flight trajectory determination. A level-set front propagation routine is first employed to identify the target of interest and establish its silhouette information. Sufficient domain knowledge, such as the size of the drogue and the expected operable distance, is integrated into our approach to quickly eliminate unlikely target candidates. A statistical analysis along with random sample consensus (RANSAC) is performed on the target to reduce noise and estimate the center of the drogue after all 3D points on the drogue are identified. The estimated center and drogue silhouette serve as the seed points to efficiently locate the target in the next frame.
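
    A hedged sketch of the RANSAC stage: repeatedly fit a sphere to random minimal samples of the segmented drogue points and keep the model with the most inliers. Thresholds and the synthetic data are illustrative, not the paper's values:

        import numpy as np

        def fit_sphere(pts):
            """Linear least-squares sphere fit: |x - c|^2 = r^2."""
            A = np.column_stack([2 * pts, np.ones(len(pts))])
            b = (pts ** 2).sum(axis=1)
            sol, *_ = np.linalg.lstsq(A, b, rcond=None)
            center, k = sol[:3], sol[3]
            return center, np.sqrt(k + center @ center)

        def ransac_sphere(pts, iters=200, tol=0.02):
            rng = np.random.default_rng(2)
            best, best_inliers = None, 0
            for _ in range(iters):
                sample = pts[rng.choice(len(pts), 4, replace=False)]
                c, r = fit_sphere(sample)
                inliers = np.abs(np.linalg.norm(pts - c, axis=1) - r) < tol
                if inliers.sum() > best_inliers:
                    best, best_inliers = fit_sphere(pts[inliers]), inliers.sum()
            return best

        # Synthetic drogue-like shell (r = 0.4 m at 12 m) plus clutter:
        rng = np.random.default_rng(5)
        dirs = rng.normal(size=(300, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        shell = np.array([0.0, 0.0, 12.0]) + 0.4 * dirs
        clutter = rng.uniform(-1, 1, (60, 3)) + [0.0, 0.0, 12.0]
        c, r = ransac_sphere(np.vstack([shell, clutter]))
        print(c, r)   # center near (0, 0, 12), radius near 0.4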

  13. Future trends of 3D silicon sensors

    NASA Astrophysics Data System (ADS)

    Da Vià, Cinzia; Boscardin, Maurizio; Dalla Betta, Gian-Franco; Haughton, Iain; Grenier, Philippe; Grinstein, Sebastian; Hansen, Thor-Erik; Hasi, Jasmine; Kenney, Christopher; Kok, Angela; Parker, Sherwood; Pellegrini, Giulio; Povoli, Marco; Tzhnevyi, Vladislav; Watts, Stephen J.

    2013-12-01

    Vertex detectors for the next LHC experiment upgrades will need to have low mass while at the same time being radiation hard, with sufficient granularity to meet the physics challenges of the next decade. Based on the experience gained with 3D silicon sensors for the ATLAS IBL project and the ongoing developments in light materials, interconnectivity and cooling, this paper discusses possible solutions to these requirements.

  14. Automatic registration of optical imagery with 3D lidar data using local combined mutual information

    NASA Astrophysics Data System (ADS)

    Parmehr, E. G.; Fraser, C. S.; Zhang, C.; Leach, J.

    2013-10-01

    Automatic registration of multi-sensor data is a basic step in data fusion for photogrammetric and remote sensing applications. The effectiveness of intensity-based methods such as Mutual Information (MI) for automated registration of multi-sensor imagery has been previously reported for medical and remote sensing applications. In this paper, a new multivariable MI approach is presented that exploits the complementary information of inherently registered LiDAR DSM and intensity data to improve the robustness of registering optical imagery to a LiDAR point cloud. LiDAR DSM and intensity information are utilised in measuring the similarity of LiDAR and optical imagery via the Combined MI. An effective histogramming technique is adopted to facilitate estimation of a 3D probability density function (pdf). In addition, a local similarity measure is introduced to decrease the complexity of optimisation at higher dimensions and the computation cost. The reliability of registration is thereby improved due to the use of redundant observations of similarity. The performance of the proposed method for registration of satellite and aerial images with LiDAR data in urban and rural areas is experimentally evaluated and the results obtained are discussed.
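
    For reference, the basic two-variable mutual information that underlies the Combined MI measure can be computed from a joint histogram in a few lines (the bin count is an arbitrary choice, and this omits the paper's third variable and local weighting):

        import numpy as np

        def mutual_information(a, b, bins=64):
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)      # marginal of image a
            py = pxy.sum(axis=0, keepdims=True)      # marginal of image b
            nz = pxy > 0                             # avoid log(0)
            return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

        rng = np.random.default_rng(6)
        img = rng.normal(size=(128, 128))
        print(mutual_information(img, img))                           # high
        print(mutual_information(img, rng.normal(size=(128, 128))))   # ~0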

  15. 3D Vegetation Structure Extraction from Lidar Remote Sensing

    NASA Astrophysics Data System (ADS)

    Ni-Meister, W.

    2006-05-01

    Vegetation structure data are critical not only for biomass estimation and global carbon cycle studies, but also for ecosystem disturbance, species habitat and ecosystem biodiversity studies. However, those data are rarely available at the global scale. Multispectral passive remote sensing has shown little success in this direction. The upcoming lidar remote sensing technology shows great potential for measuring vegetation vertical structure globally. In this study, we present and test a Bayesian Stochastic Inversion (BSI) approach to invert a full-canopy Geometric Optical and Radiative Transfer (GORT) model to retrieve 3-D vegetation structure parameters from large-footprint (15 m-25 m diameter) vegetation lidar data. The BSI approach allows us to take into account lidar-derived structure parameters, such as tree height and the upper and lower bounds of crown height, and their uncertainties as prior knowledge in the inversion. It provides not only the optimal estimates of model parameters, but also their uncertainties. We first assess the accuracy of vegetation structure parameter retrievals from vegetation lidar data through a comprehensive GORT input parameter sensitivity analysis. We calculated the singular value decomposition (SVD) of the Jacobian matrix, which contains the partial derivatives of the combined model with respect to all relevant model input parameters. Our analysis shows that, with prior knowledge of tree height, crown depth and crown shape, lidar waveforms are most sensitive to tree density, then to tree size, and least to foliage area volume density. This indicates that tree density can be retrieved with the highest accuracy, then tree size, with foliage area volume density the least. We also test the simplified BSI approach through a synthetic experiment. The synthetic lidar waveforms were generated based on the vegetation structure data obtained from the Boreal Ecosystem Atmosphere Study (BOREAS). …
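
    The SVD-based sensitivity analysis can be sketched generically: build a numerical Jacobian of a forward model with respect to its parameters and inspect the singular values. The forward model below is a simple stand-in, not GORT:

        import numpy as np

        def model(p):
            """Placeholder forward model mapping parameters to a 'waveform'."""
            density, size, favd = p
            t = np.linspace(0, 1, 50)
            return density * np.exp(-size * t) + 0.01 * favd * t

        def jacobian(f, p, eps=1e-6):
            """Central-difference Jacobian of f at p."""
            J = np.empty((len(f(p)), len(p)))
            for j in range(len(p)):
                dp = np.zeros_like(p)
                dp[j] = eps
                J[:, j] = (f(p + dp) - f(p - dp)) / (2 * eps)
            return J

        J = jacobian(model, np.array([1.0, 2.0, 0.5]))
        U, s, Vt = np.linalg.svd(J)
        print(s)   # large singular values ~ well-constrained parameter directions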

  16. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output abilities, data conversion from ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitude measured by LiDAR, visualization of 2D/3D images in various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.

  17. Vertical Corner Feature Based Precise Vehicle Localization Using 3D LIDAR in Urban Area.

    PubMed

    Im, Jun-Hyuck; Im, Sung-Hyuck; Jee, Gyu-In

    2016-01-01

    Tall buildings are concentrated in urban areas. The outer walls of buildings are erected vertically from the ground and are almost flat. Therefore, vertical corners, where two vertical planes meet, are present everywhere in urban areas. These corners act as convenient landmarks, which can be extracted by using a light detection and ranging (LIDAR) sensor. A vertical corner feature based precise vehicle localization method is proposed in this paper and implemented using a 3D LIDAR (Velodyne HDL-32E). The vehicle motion is predicted by accumulating the pose increments output by the iterative closest point (ICP) algorithm, based on the geometric relations between the scan data of the 3D LIDAR. The vertical corners are extracted using the proposed corner extraction method. The vehicle position is then corrected by matching the prebuilt corner map with the extracted corners. The experiment was carried out in the Gangnam area of Seoul, South Korea. In the experimental results, the maximum horizontal position error is about 0.46 m and the 2D Root Mean Square (RMS) horizontal error is about 0.138 m. PMID:27517936
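
    One plausible reading of the corner extraction, shown as a sketch (not the authors' implementation): fit each wall as a 2D line in the horizontal plane and intersect adjacent walls to obtain the corner landmark:

        import numpy as np

        def fit_wall(xy):
            """Total-least-squares 2D line n.x = d through wall points."""
            c = xy.mean(axis=0)
            _, _, vt = np.linalg.svd(xy - c)
            n = vt[-1]                 # normal = least-variance direction
            return n, n @ c

        def corner(xy_wall1, xy_wall2):
            n1, d1 = fit_wall(xy_wall1)
            n2, d2 = fit_wall(xy_wall2)
            return np.linalg.solve(np.vstack([n1, n2]), [d1, d2])

        # Two walls meeting at (2, 3):
        w1 = np.array([[0.0, 3], [1, 3], [1.5, 3]])   # wall along y = 3
        w2 = np.array([[2.0, 0], [2, 1], [2, 2.5]])   # wall along x = 2
        print(corner(w1, w2))                          # -> [2. 3.]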

  18. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-09-09

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing in intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real time and performs robustly and effectively.
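
    As a rough illustration of the filtering step, the sketch below runs a constant-velocity Kalman update for a single grid element; the matrices and noise levels are assumptions, not the paper's values:

        import numpy as np

        dt = 0.1                                    # scan period (s)
        F = np.array([[1, 0, dt, 0],                # state: [x, y, vx, vy]
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        H = np.array([[1.0, 0, 0, 0],               # we observe position only
                      [0.0, 1, 0, 0]])
        Q = np.eye(4) * 0.01                        # process noise
        R = np.eye(2) * 0.05                        # measurement noise

        def kalman_step(x, P, z):
            x, P = F @ x, F @ P @ F.T + Q           # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
            x = x + K @ (z - H @ x)                 # update with grid position z
            P = (np.eye(4) - K @ H) @ P
            return x, P                             # x[2:] is the cell's motion

        x, P = np.array([10.0, 5, 0, 0]), np.eye(4)
        x, P = kalman_step(x, P, np.array([10.2, 5.1]))
        print(x[2:])    # crude velocity estimate after one update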

  19. 3D LIDAR-camera extrinsic calibration using an arbitrary trihedron.

    PubMed

    Gong, Xiaojin; Lin, Ying; Liu, Jilin

    2013-02-01

    This paper presents a novel way to address the extrinsic calibration problem for a system composed of a 3D LIDAR and a camera. The relative transformation between the two sensors is calibrated via a nonlinear least squares (NLS) problem, which is formulated in terms of the geometric constraints associated with a trihedral object. Precise initial estimates for the NLS problem are obtained by dividing it into two sub-problems that are solved individually. With these precise initializations, the calibration parameters are further refined by iteratively optimizing the NLS problem. The algorithm is validated on both simulated and real data, as well as in a 3D reconstruction application. Moreover, since the trihedral target used for calibration can be either orthogonal or not, it is very often present in structured environments, making the calibration convenient.
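
    The iterative NLS refinement stage can be sketched with SciPy's least_squares over a 6-DoF pose, using point-to-plane residuals for LIDAR points lying on the trihedron's faces; the points and planes below are placeholders chosen so that the identity pose fits exactly:

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def residuals(params, lidar_pts, plane_normals, plane_ds):
            """params = [rx, ry, rz, tx, ty, tz]: rotation vector + translation."""
            R = Rotation.from_rotvec(params[:3]).as_matrix()
            t = params[3:]
            cam_pts = lidar_pts @ R.T + t
            # Signed distance of each transformed point to its face plane.
            return (cam_pts * plane_normals).sum(axis=1) - plane_ds

        # Points on two faces (the z = 1 plane and the x = 0 plane), already
        # expressed in the camera frame for this minimal runnable example.
        lidar_pts = np.array([[0.0, 0, 1], [1, 0, 1], [0, 1, 1],
                              [0.0, 0, 2], [0, 1, 2], [0, 2, 3]])
        plane_normals = np.vstack([np.tile([0.0, 0, 1], (3, 1)),
                                   np.tile([1.0, 0, 0], (3, 1))])
        plane_ds = np.array([1.0, 1, 1, 0, 0, 0])
        fit = least_squares(residuals, np.zeros(6),
                            args=(lidar_pts, plane_normals, plane_ds))
        print(fit.x)   # ~zeros: the identity pose satisfies the constraints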

  20. 3D lidar imaging for detecting and understanding plant responses and canopy structure.

    PubMed

    Omasa, Kenji; Hosoi, Fumiki; Konishi, Atsumi

    2007-01-01

    Understanding and diagnosing plant responses to stress will benefit greatly from three-dimensional (3D) measurement and analysis of plant properties because plant responses are strongly related to their 3D structures. Light detection and ranging (lidar) has recently emerged as a powerful tool for direct 3D measurement of plant structure. Here the use of 3D lidar imaging to estimate plant properties such as canopy height, canopy structure, carbon stock, and species is demonstrated, and plant growth and shape responses are assessed by reviewing the development of lidar systems and their applications from the leaf level to canopy remote sensing. In addition, the recent creation of accurate 3D lidar images combined with natural colour, chlorophyll fluorescence, photochemical reflectance index, and leaf temperature images is demonstrated, thereby providing information on responses of pigments, photosynthesis, transpiration, stomatal opening, and shape to environmental stresses; these data can be integrated with 3D images of the plants using computer graphics techniques. Future lidar applications that provide more accurate dynamic estimation of various plant properties should improve our understanding of plant responses to stress and of interactions between plants and their environment. Moreover, combining 3D lidar with other passive and active imaging techniques will potentially improve the accuracy of airborne and satellite remote sensing, and make it possible to analyse 3D information on ecophysiological responses and levels of various substances in agricultural and ecological applications and in observations of the global biosphere. PMID:17030540

  21. Innovative LIDAR 3D Dynamic Measurement System to estimate fruit-tree leaf area.

    PubMed

    Sanz-Cortiella, Ricardo; Llorens-Calveras, Jordi; Escolà, Alexandre; Arnó-Satorra, Jaume; Ribes-Dasi, Manel; Masip-Vilalta, Joan; Camp, Ferran; Gràcia-Aguilá, Felip; Solanelles-Batlle, Francesc; Planas-DeMartí, Santiago; Pallejà-Cabré, Tomàs; Palacin-Roca, Jordi; Gregorio-Lopez, Eduard; Del-Moral-Martínez, Ignacio; Rosell-Polo, Joan R

    2011-01-01

    In this work, a LIDAR-based 3D Dynamic Measurement System is presented and evaluated for the geometric characterization of tree crops. Using this measurement system, trees were scanned from two opposing sides to obtain two three-dimensional point clouds. After registration of the point clouds, a simple and easily obtainable parameter is the number of impacts received by the scanned vegetation. The work in this study is based on the hypothesis of the existence of a linear relationship between the number of impacts of the LIDAR sensor laser beam on the vegetation and the tree leaf area. Tests performed under laboratory conditions using an ornamental tree and, subsequently, in a pear tree orchard demonstrate the correct operation of the measurement system presented in this paper. The results from both the laboratory and field tests confirm the initial hypothesis and the 3D Dynamic Measurement System is validated in field operation. This opens the door to new lines of research centred on the geometric characterization of tree crops in the field of agriculture and, more specifically, in precision fruit growing.
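
    Testing the paper's linear-relationship hypothesis amounts to a one-line fit; the numbers below are invented for illustration only:

        import numpy as np

        impacts = np.array([1200, 2300, 3100, 4500, 5200])   # LIDAR beam returns
        leaf_area = np.array([0.9, 1.8, 2.5, 3.4, 4.1])      # m^2 (reference)
        slope, intercept = np.polyfit(impacts, leaf_area, 1)
        r = np.corrcoef(impacts, leaf_area)[0, 1]
        print(f"LA = {slope:.2e} * impacts + {intercept:.2f}, r^2 = {r**2:.3f}")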

  22. Advances in animal ecology from 3D ecosystem mapping with LiDAR

    NASA Astrophysics Data System (ADS)

    Davies, A.; Asner, G. P.

    2015-12-01

    The advent and recent advances of Light Detection and Ranging (LiDAR) have enabled accurate measurement of 3D ecosystem structure. Although the use of LiDAR data is widespread in vegetation science, it has only recently (< 14 years) been applied to animal ecology. Despite such recent application, LiDAR has enabled new insights in the field and revealed the fundamental importance of 3D ecosystem structure for animals. We reviewed the studies to date that have used LiDAR in animal ecology, synthesising the insights gained. Structural heterogeneity is most conducive to increased animal richness and abundance, and increased complexity of vertical vegetation structure is more positively influential than traditionally measured canopy cover, which produces mixed results. However, different taxonomic groups interact with a variety of 3D canopy traits and some groups with 3D topography. LiDAR technology can be applied to animal ecology studies in a wide variety of environments to answer an impressive array of questions. Drawing on case studies from vastly different groups, termites and lions, we further demonstrate the applicability of LiDAR and highlight new understanding, ranging from habitat preference to predator-prey interactions, that would not have been possible from studies restricted to field based methods. We conclude with discussion of how future studies will benefit by using LiDAR to consider 3D habitat effects in a wider variety of ecosystems and with more taxa to develop a better understanding of animal dynamics.

  23. 3D printing of a multifunctional nanocomposite helical liquid sensor

    NASA Astrophysics Data System (ADS)

    Guo, Shuang-Zhuang; Yang, Xuelu; Heuzey, Marie-Claude; Therriault, Daniel

    2015-04-01

    A multifunctional 3D liquid sensor made of a PLA/MWCNT nanocomposite and shaped as a freeform helical structure was fabricated by solvent-cast 3D printing. The 3D liquid sensor featured a relatively high electrical conductivity, the functionality of liquid trapping due to its helical configuration, and an excellent sensitivity and selectivity even for a short immersion into solvents.

  24. Target characterization in 3D using infrared lidar

    SciTech Connect

    Foy, B.; McVey, B.; Petrin, R.; Tiee, J.; Wilson, C.

    2001-04-01

    We report examples of the use of a scanning, tunable CO2 laser lidar system in the 9-11 µm region to construct images of vegetation and rocks at ranges of up to 5 km from the instrument. Range information is combined with horizontal and vertical distances to yield an image with three spatial dimensions, simultaneous with the classification of target type. Object classification is made possible by the distinct spectral signatures of both natural and man-made objects. Several multivariate statistical methods are used to illustrate the degree of discrimination possible among the natural variability of objects in both spectral shape and amplitude.

  25. The pulsed all fiber laser application in the high-resolution 3D imaging LIDAR system

    NASA Astrophysics Data System (ADS)

    Gao, Cunxiao; Zhu, Shaolan; Niu, Linquan; Feng, Li; He, Haodong; Cao, Zongying

    2014-05-01

    An all-fiber laser with a master-oscillator power-amplifier (MOPA) configuration at 1064 nm/1550 nm for a high-resolution three-dimensional (3D) imaging light detection and ranging (LIDAR) system is reported. The pulse width and the repetition frequency can be arbitrarily tuned within 1 ns-10 ns and 10 kHz-1 MHz, respectively, and a peak power exceeding 100 kW can be obtained with the laser. Using this all-fiber laser in the high-resolution 3D imaging LIDAR system, an image resolution of 1024 × 1024 and a distance precision of ±1.5 cm were obtained at an imaging distance of 1 km.

  26. Application of an optical 3D sensor for automated disassembling

    NASA Astrophysics Data System (ADS)

    Knackfuss, Peter; Schmidt, Achim

    1996-08-01

    The application of an active vision 3D sensor is described for the development and control of an autonomous intelligent robot cell for the disassembling of end-of-life-vehicle components. The research and development work was done concurrently by three European development teams at different locations. During this phase, the virtual environment was distributed on the local development platforms of these teams. Intermediate development results and 3D sensor data were exchanged through network communication to be mutually tested and verified. The physical environment of the disassembling cell demonstrator and its sensor systems is currently being integrated at the BIBA institute.

  27. A real-time noise filtering strategy for photon counting 3D imaging lidar.

    PubMed

    Zhang, Zijing; Zhao, Yuan; Zhang, Yong; Wu, Long; Su, Jianzhong

    2013-04-22

    For a direct-detection 3D imaging lidar, the use of a Geiger-mode avalanche photodiode (Gm-APD) can greatly enhance the detection sensitivity of the lidar system, since each range measurement requires only a single detected photon. Furthermore, the Gm-APD offers significant advantages in reducing the size, mass, power and complexity of the system. However, the inevitable noise, including background noise and dark count noise, remains a significant challenge for obtaining a clear 3D image of the target of interest. This paper presents a smart strategy that filters out false alarms during the acquisition of raw time-of-flight (TOF) data and obtains a clear 3D image in real time. As a result, a clear 3D image is obtained with the experimental system despite the background noise of a sunny day.
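
    One common realization of such a real-time filter (a sketch, not necessarily the authors' strategy) is to histogram the per-pixel time-of-flight events from repeated pulses and keep only the bin where signal photons pile up, since noise counts spread roughly uniformly over the range gate. All parameters here are illustrative:

        import numpy as np

        def filter_tof(events, gate_ns=1000.0, bin_ns=1.0, min_counts=3):
            """events: TOF (ns) of detections from repeated pulses, one pixel."""
            bins = np.arange(0.0, gate_ns + bin_ns, bin_ns)
            hist, edges = np.histogram(events, bins=bins)
            peak = hist.argmax()
            if hist[peak] < min_counts:        # no credible target return
                return None
            return 0.5 * (edges[peak] + edges[peak + 1])

        rng = np.random.default_rng(3)
        noise = rng.uniform(0, 1000, 40)       # uniform background counts
        signal = rng.normal(412.0, 0.3, 8)     # returns from a ~62 m target
        print(filter_tof(np.concatenate([noise, signal])))   # ~412 ns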

  28. 3D campus modeling using LiDAR point cloud data

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Yoshii, Satoshi; Funatsu, Yukihiro; Takemata, Kazuya

    2012-10-01

    The importance of having a 3D urban city model is recognized in many applications, such as risk and disaster management and city planning and development. As an example of such an urban model, in this study we manually reconstructed the 3D KIT campus by utilizing airborne LiDAR point cloud data. The automatic extraction of building shapes is left for future work.

  29. Vegetation Structure and 3-D Reconstruction of Forests Using Ground-Based Echidna® Lidar

    NASA Astrophysics Data System (ADS)

    Strahler, A. H.; Yao, T.; Zhao, F.; Yang, X.

    2009-12-01

    A ground-based, scanning, near-infrared lidar, the Echidna® validation instrument (EVI), built by CSIRO Australia, retrieves structural parameters of forest stands rapidly and accurately and, by merging multiple scans into a single point cloud, provides 3-D stand reconstructions. Echidna lidar technology scans with pulses of light at 1064 nm wavelength and digitizes the light returns sufficiently finely to recover and distinguish the differing shapes of return pulses as they are scattered by leaves and trunks or larger branches. Instrument deployments in the New England region in 2007 and 2009 and in the southern Sierra Nevada of California in 2008 provided the opportunity to test the ability of the instrument to retrieve tree diameters, stem count density (stems/ha), basal area, and above-ground woody biomass from single scans at points beneath the forest canopy. In New England in 2007, mean parameters retrieved from five scans located within six 1-ha stand sites matched manually measured parameters with values of R² = 0.94-0.99. Processing the scans to retrieve leaf area index (LAI) provided values within the range of those retrieved with other optical instruments and hemispherical photography. Foliage profiles, which measure leaf area as a function of canopy height, showed distinctly different shapes for the stands, depending on species composition and age structure. Stand heights, obtained from foliage profiles, were not significantly different from RH100 values observed by the Laser Vegetation Imaging Sensor in 2003. Data from the California 2008 and New England 2009 deployments were still being processed at the time of abstract submission. With further hardware and software development, Echidna® technology will provide rapid and accurate measurements of forest canopy structure that can replace manual field measurements, leading to more rapid and more accurate calibration and validation of structure mapping techniques using airborne and spaceborne remote sensors. …

  30. The 2011 Eco3D Flight Campaign: Vegetation Structure and Biomass Estimation from Simultaneous SAR, Lidar and Radiometer Measurements

    NASA Technical Reports Server (NTRS)

    Fatoyinbo, Temilola; Rincon, Rafael; Harding, David; Gatebe, Charles; Ranson, Kenneth Jon; Sun, Guoqing; Dabney, Phillip; Roman, Miguel

    2012-01-01

    The Eco3D campaign was conducted in the summer of 2011. As part of the campaign, three unique and innovative NASA Goddard Space Flight Center airborne sensors were flown simultaneously: the Digital Beamforming Synthetic Aperture Radar (DBSAR), the Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) and the Cloud Absorption Radiometer (CAR). The campaign covered sites from Quebec to southern Florida and thereby acquired data over forests ranging from boreal to tropical wetlands. This paper describes the instruments and sites covered and presents the first images resulting from the campaign.

  31. Range walk error correction using prior modeling in photon counting 3D imaging lidar

    NASA Astrophysics Data System (ADS)

    He, Weiji; Chen, Yunfei; Miao, Zhuang; Chen, Qian; Gu, Guohua; Dai, Huidong

    2013-09-01

    A real-time correction method for range walk error in photon counting 3D imaging lidar is proposed in this paper. We establish the photon detection model and pulse output delay model for the Gm-APD, which indicate that the range walk error in photon counting 3D imaging lidar is mainly affected by the number of photons in the laser echo pulse. A measurable variable, the laser pulse response rate, is defined as a substitute for the number of photons in the laser echo pulse, and the expression of the range walk error with respect to the laser pulse response rate is obtained using a priori calibration. By recording the photon arrival time distribution, the measurement error for unknown targets is predicted using the established range walk error function, and a range-walk-compensated image is obtained. Thus, real-time correction of the measurement error in photon counting 3D imaging lidar is implemented. The experimental results show that the range walk error caused by differences in the reflected energy of the target can be effectively avoided without increasing the complexity of the photon counting 3D imaging lidar system.
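
    Applying a pre-calibrated range-walk curve is straightforward to sketch: interpolate the a priori calibration table at the measured response rate and subtract the predicted walk from the raw range. The table values below are invented for illustration:

        import numpy as np

        # A priori calibration: response rate (0..1) -> range walk (m).
        cal_rate = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 0.95])
        cal_walk = np.array([0.12, 0.09, 0.06, 0.03, 0.015, 0.005, 0.0])

        def correct_range(raw_range_m, response_rate):
            walk = np.interp(response_rate, cal_rate, cal_walk)
            return raw_range_m - walk

        print(correct_range(57.300, 0.3))   # -> 57.255 with this table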

  32. Fusion of terrestrial LiDAR and tomographic mapping data for 3D karst landform investigation

    NASA Astrophysics Data System (ADS)

    Höfle, B.; Forbriger, M.; Siart, C.; Nowaczinski, E.

    2012-04-01

    Highly detailed topographic information has gained in importance for studying Earth surface landforms and processes. LiDAR has evolved into the state-of-the-art technology for 3D data acquisition on various scales. This multi-sensor system can be operated on several platforms, such as airborne laser scanning (ALS), mobile laser scanning (MLS) from moving vehicles, or stationary on the ground (terrestrial laser scanning, TLS). In karst research, the integral investigation of surface and subsurface components of solution depressions (e.g. sediment-filled dolines) is required to gather and quantify the linked geomorphic processes, such as sediment flux and limestone dissolution. To acquire the depth of the different subsurface layers, a combination of seismic refraction tomography (SRT) and electrical resistivity tomography (ERT) is increasingly applied. This multi-method approach allows modeling the extension of different subsurface media (i.e. colluvial fill, epikarst zone and underlying basal bedrock). Subsequent fusion of the complementary techniques, LiDAR surface data and tomographic subsurface data, enables for the first time 3D prospection and visualization, as well as quantification of geomorphometric parameters (e.g. depth, volume, slope and aspect). This study introduces a novel GIS-based method for semi-automated fusion of TLS and geophysical data. The study area is located in the Dikti Mountains of East Crete and covers two adjacent dolines. The TLS data were acquired with a Riegl VZ-400 scanner from 12 scan positions located mainly at the doline divide. The scan positions were co-registered using the iterative closest point (ICP) algorithm of RiSCAN PRO. For the digital elevation rasters, a resolution of 0.5 m was defined. The digital surface model (DSM) of the study area was derived by moving-plane interpolation of all laser points (including objects) using the OPALS software. The digital terrain model (DTM) was generated by iteratively "eroding" objects in the DSM with a minimum filter, which additionally accounts for …
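
    The DSM-erosion step can be approximated with an iterative minimum filter, as sketched below; the window size, iteration count and height threshold are assumptions, not the study's settings:

        import numpy as np
        from scipy.ndimage import minimum_filter

        def dsm_to_dtm(dsm, size=5, iterations=3, max_step=0.5):
            dtm = dsm.copy()
            for _ in range(iterations):
                eroded = minimum_filter(dtm, size=size)
                # Only pull cells down where they stand out from the eroded
                # surface (vegetation/objects), keeping genuine terrain.
                mask = dtm - eroded > max_step
                dtm[mask] = eroded[mask]
            return dtm

        # Example: a flat 10x10 terrain with a 3 m "building" in the middle.
        dsm = np.zeros((10, 10))
        dsm[4:6, 4:6] = 3.0
        print(dsm_to_dtm(dsm).max())   # -> 0.0, the object is removed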

  33. Optical Sensors and Methods for Underwater 3D Reconstruction

    PubMed Central

    Massot-Campos, Miquel; Oliver-Codina, Gabriel

    2015-01-01

    This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389

  34. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O. Serdar; Alatan, A. Aydın

    2013-10-01

    In robotics and augmented reality (AR) applications, model-based 3-D tracking of rigid objects is generally required, with accurate pose estimates needed to increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, while trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm based on the fusion of vision and depth sensors via an extended Kalman filter is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using the intensity and shape index map data of the 3-D point cloud, increases 2-D as well as 3-D tracking performance significantly. The proposed method requires neither manual initialization of the pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes. PMID:23955795

  35. Increased Speed: 3D Silicon Sensors. Fast Current Amplifiers

    SciTech Connect

    Parker, Sherwood; Kok, Angela; Kenney, Christopher; Jarron, Pierre; Hasi, Jasmine; Despeisse, Matthieu; Da Via, Cinzia; Anelli, Giovanni

    2012-05-07

    The authors describe techniques to make fast, sub-nanosecond time resolution solid-state detector systems using sensors with 3D electrodes, current amplifiers, constant-fraction comparators or fast wave-form recorders, and some of the next steps to reach still faster results.

  36. Investigation on the contribution of LiDAR data in 3D cadastre

    NASA Astrophysics Data System (ADS)

    Giannaka, Olga; Dimopoulou, Efi; Georgopoulos, Andreas

    2014-08-01

    The existing 2D cadastral systems worldwide cannot provide a proper registration and representation of the land ownership rights, restrictions and responsibilities in a 3D context, which appear in our complex urban environment. In such instances, it may be necessary to consider the development of a 3D Cadastre in which proprietary rights acquire appropriate three-dimensional space both above and below the conventional ground level. Such a system should contain the topology and the coordinates of the buildings' outlines and infrastructure. The augmented model can be formed as a full 3D Cadastre, a hybrid Cadastre or a 2D Cadastre with 3D tags. Each country has to contemplate which alternative is appropriate, depending on its specific situation, legal framework and available technical means. In order to generate a 3D model for cadastral purposes, a system is required that is able to exploit and represent 3D data such as LiDAR, a remote sensing technology which acquires three-dimensional point clouds that describe the earth's surface and the objects on it. LiDAR gives a direct representation of objects on the ground surface and measures their coordinates by analyzing the reflected light. Moreover, it provides very accurate position and height information, although direct information about the objects' geometrical shape is not conveyed. In this study, an experimental implementation of a 3D Cadastre using LiDAR data is developed, in order to investigate whether this information can satisfy the specifications set for the purposes of the Hellenic Cadastre. GIS tools have been used for analyzing the DSM and true orthophotos of the study area. The results of this study are presented and evaluated in terms of usability and efficiency.

  5. Terrain surfaces and 3-D landcover classification from small footprint full-waveform lidar data: application to badlands

    NASA Astrophysics Data System (ADS)

    Bretar, F.; Chauve, A.; Bailly, J.-S.; Mallet, C.; Jacome, A.

    2009-01-01

    This article presents the use of new remote sensing data acquired from airborne full-waveform lidar systems. These are active sensors which record altimeter profiles. This paper introduces a set of methodologies for processing these data. The techniques are then applied to a particular landscape, the badlands, but the methodologies are designed to be applicable to any other landscape. Indeed, accurate topography and a landcover classification are prior knowledge for any hydrological and erosion model. Badlands tend to be the most significant areas of erosion in the world, with the highest erosion rate values. Monitoring and predicting erosion within badland mountainous catchments is highly strategic due to the resulting downstream consequences and the need for natural hazard mitigation engineering. Additionally, beyond the altimeter information, full-waveform lidar data are processed to extract the intensity and width of echoes, which are related to the target reflectance and geometry. We investigate the relevance of using lidar-derived Digital Terrain Models (DTMs) and the potential of the intensity and width information for 3-D landcover classification. Considering the novelty and the complexity of such data, they are presented in detail, together with guidelines for processing them. DTMs are then validated with field measurements. The morphological validation of the DTMs is then performed via the computation of hydrological indexes and photo-interpretation. Finally, a 3-D landcover classification is performed using a Support Vector Machine classifier. The introduction of an ortho-rectified optical image in the classification process, as well as of full-waveform lidar data for hydrological purposes, is then discussed.
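
    The echo intensity and width mentioned above are commonly obtained by Gaussian decomposition of the recorded waveform. Below is a hedged sketch of that standard step on a synthetic waveform (peak-detection thresholds and initial guesses are illustrative, and this is not the authors' exact processing chain):

    ```python
    # Gaussian decomposition sketch (standard technique; thresholds and data
    # are illustrative): each echo is modelled as a Gaussian whose amplitude
    # relates to target reflectance and whose width to target geometry.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.signal import find_peaks

    def multi_gauss(t, *p):                 # p = [A1, mu1, s1, A2, mu2, s2, ...]
        out = np.zeros_like(t)
        for A, mu, s in zip(p[0::3], p[1::3], p[2::3]):
            out += A * np.exp(-0.5 * ((t - mu) / s) ** 2)
        return out

    def decompose(t, w, min_height=0.05):
        peaks, _ = find_peaks(w, height=min_height, prominence=min_height,
                              distance=10)          # initial echo guesses
        p0 = []
        for i in peaks:
            p0 += [w[i], t[i], 1.0]         # amplitude, position, width guesses
        popt, _ = curve_fit(multi_gauss, t, w, p0=p0)
        return np.asarray(popt).reshape(-1, 3)   # one [A, mu, sigma] per echo

    t = np.arange(0.0, 60.0, 0.5)                           # ns
    w = multi_gauss(t, 0.8, 20.0, 1.5, 0.4, 35.0, 3.0)      # two-echo waveform
    w += np.random.default_rng(0).normal(0, 0.005, t.size)  # receiver noise
    print(decompose(t, w))
    ```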

  6. Helicopter Flight Test of 3-D Imaging Flash LIDAR Technology for Safe, Autonomous, and Precise Planetary Landing

    NASA Technical Reports Server (NTRS)

    Roback, Vincent; Bulyshev, Alexander; Amzajerdian, Farzin; Reisse, Robert

    2013-01-01

    Two flash lidars, integrated from a number of cutting-edge components from industry and NASA, are lab characterized and flight tested for determination of maximum operational range under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project (in its fourth development and field test cycle) which is seeking to develop a guidance, navigation, and control (GN&C) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The flash lidars incorporate pioneering 3-D imaging cameras based on Indium-Gallium-Arsenide Avalanche Photo Diode (InGaAs APD) and novel micro-electronic technology for a 128 x 128 pixel array operating at 30 Hz, high pulse-energy 1.06 micrometer Nd:YAG lasers, and high performance transmitter and receiver fixed and zoom optics. The two flash lidars are characterized on the NASA-Langley Research Center (LaRC) Sensor Test Range, integrated with other portions of the ALHAT GN&C system from partner organizations into an instrument pod at NASA-JPL, integrated onto an Erickson Aircrane Helicopter at NASA-Dryden, and flight tested at the Edwards AFB Rogers dry lakebed over a field of human-made geometric hazards during the summer of 2010. Results show that the maximum operational range goal of 1 km is met and exceeded up to a value of 1.2 km. In addition, calibrated 3-D images of several hazards are acquired in real-time for later reconstruction into Digital Elevation Maps (DEMs).

  7. Qualitative and quantitative comparative analyses of 3D lidar landslide displacement field measurements

    NASA Astrophysics Data System (ADS)

    Haugen, Benjamin D.

    Landslide ground surface displacements vary at all spatial scales and are an essential component of kinematic and hazard analyses. Unfortunately, survey-based displacement measurements require personnel to enter unsafe terrain and have limited spatial resolution. And while recent advancements in LiDAR technology provide the ability to remotely measure 3D landslide displacements at high spatial resolution, no single method is widely accepted. A series of qualitative metrics for comparing 3D landslide displacement field measurement methods was developed. The metrics were then applied to nine existing LiDAR techniques, and the top-ranking methods, Iterative Closest Point (ICP) matching and 3D Particle Image Velocimetry (3DPIV), were quantitatively compared using synthetic displacement and control survey data from a slow-moving translational landslide in north-central Colorado. 3DPIV was shown to be the most accurate and reliable point cloud-based 3D landslide displacement field measurement method, and the viability of LiDAR-based techniques for measuring 3D motion on landslides was demonstrated.
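
    As an illustration of the correlation principle underlying PIV-style displacement measurement (our toy 2D version on gridded elevation windows, not the dissertation's 3DPIV implementation), the horizontal shift between two surface windows can be read off the peak of their FFT-based cross-correlation:

    ```python
    # Cross-correlation displacement sketch for one interrogation window
    # (synthetic data): the correlation peak gives the integer pixel shift.
    import numpy as np

    def window_shift(a, b):
        """Estimate the integer (row, col) shift that best aligns b to a."""
        A = np.fft.fft2(a - a.mean())
        B = np.fft.fft2(b - b.mean())
        corr = np.real(np.fft.ifft2(A * np.conj(B)))
        idx = np.unravel_index(np.argmax(corr), corr.shape)
        # map circular FFT indices to signed shifts
        return tuple(i if i <= s // 2 else i - s
                     for i, s in zip(idx, corr.shape))

    rng = np.random.default_rng(1)
    pre = rng.normal(size=(64, 64))                    # pre-event window
    post = np.roll(pre, shift=(3, -5), axis=(0, 1))    # synthetic displacement
    print(window_shift(post, pre))                     # -> (3, -5)
    ```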

  8. Radiation hardness tests of highly irradiated full-3D sensors

    NASA Astrophysics Data System (ADS)

    Haughton, Iain; DaVia, Cinzia; Watts, Stephen

    2016-01-01

    Several full-3D silicon sensors (with column electrodes going fully through the bulk) were irradiated up to a fluence of (2.14 ± 0.18) × 10^16 n_eq cm^-2. An infra-red laser was used to induce a homogeneous signal within each sensor's bulk. The signal degradation, measured as a signal efficiency (signal after irradiation normalised to its value before irradiation), was determined for each fluence. The experimental set-up allowed for monitoring of the beam spot diameter, position and reflection intensity on the sensor's surface. Corrections, dependent on the measured reflection intensity, were made when calculating the signal efficiency. The sensor irradiated to the highest fluence showed a signal efficiency of (50 ± 5)%.

  9. Helicopter Flight Test of a Compact, Real-Time 3-D Flash Lidar for Imaging Hazardous Terrain During Planetary Landing

    NASA Technical Reports Server (NTRS)

    Roback, VIncent E.; Amzajerdian, Farzin; Brewster, Paul F.; Barnes, Bruce W.; Kempton, Kevin S.; Reisse, Robert A.; Bulyshev, Alexander E.

    2013-01-01

    A second generation, compact, real-time, air-cooled 3-D imaging Flash Lidar sensor system, developed from a number of cutting-edge components from industry and NASA, is lab characterized and helicopter flight tested under the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project. The ALHAT project is seeking to develop a guidance, navigation, and control (GN&C) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar incorporates a 3-D imaging video camera based on Indium-Gallium-Arsenide Avalanche Photo Diode and novel micro-electronic technology for a 128 x 128 pixel array operating at a video rate of 20 Hz, a high pulse-energy 1.06 µm Neodymium-doped: Yttrium Aluminum Garnet (Nd:YAG) laser, a remote laser safety termination system, high performance transmitter and receiver optics with one and five degrees field-of-view (FOV), enhanced onboard thermal control, as well as a compact and self-contained suite of support electronics housed in a single box and built around a PC-104 architecture to enable autonomous operations. The Flash Lidar was developed and then characterized at two NASA-Langley Research Center (LaRC) outdoor laser test range facilities both statically and dynamically, integrated with other ALHAT GN&C subsystems from partner organizations, and installed onto a Bell UH-1H Iroquois "Huey" helicopter at LaRC. The integrated system was flight tested at the NASA-Kennedy Space Center (KSC) on simulated lunar approach to a custom hazard field consisting of rocks, craters, hazardous slopes, and safe-sites near the Shuttle Landing Facility runway starting at slant ranges of 750 m. In order to evaluate different methods of achieving hazard detection, the lidar, in conjunction with the ALHAT hazard detection and GN&C system, operates in both a narrow 1° FOV raster

  10. Model-based automatic 3d building model generation by integrating LiDAR and aerial images

    NASA Astrophysics Data System (ADS)

    Habib, A.; Kwak, E.; Al-Durgham, M.

    2011-12-01

    Accurate, detailed, and up-to-date 3D building models are important for several applications such as telecommunication network planning, urban planning, and military simulation. Existing building reconstruction approaches can be classified according to the data sources they use (i.e., single versus multi-sensor approaches), the processing strategy (i.e., data-driven, model-driven, or hybrid), or the amount of user interaction (i.e., manual, semiautomatic, or fully automated). While it is obvious that 3D building models are important components of many applications, economical and automatic techniques for generating them, ones that take advantage of the available multi-sensor data and combine processing strategies, are still lacking. In this research, an automatic methodology for building modelling by integrating multiple images and LiDAR data is proposed. The objective of this research work is to establish a framework for automatic building generation that integrates data-driven and model-driven approaches while combining the advantages of image and LiDAR datasets.

  11. A correction method for range walk error in photon counting 3D imaging LIDAR

    NASA Astrophysics Data System (ADS)

    He, Weiji; Sima, Boyu; Chen, Yunfei; Dai, Huidong; Chen, Qian; Gu, Guohua

    2013-11-01

    A correction method for the range walk error is presented in this paper, based on a priori modeling and suitable for Gm-APD (Geiger-mode avalanche photodiode) photon counting three-dimensional (3D) imaging LIDAR. The range walk error is mainly caused by fluctuations in the number of photons in the laser echo pulse. In this paper, the a priori model of the range walk error was established, and the functional relationship between the range walk error and the laser pulse response rate was determined by numerical fitting. With this function, the range walk error of the original 3D range image was predicted, and the corresponding compensation image was obtained to correct the original 3D range image. The experimental results showed that the correction method reduces the range walk error effectively, and that it is particularly suitable for scenes with significant differences in material properties or reflection characteristics.
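
    A hedged sketch of the calibration-then-correction idea follows; the polynomial form and all numbers are illustrative assumptions, not the paper's fitted model:

    ```python
    # Range-walk correction sketch: calibrate walk error vs. photon response
    # rate, fit a smooth curve, then subtract the predicted walk per pixel.
    import numpy as np

    # hypothetical calibration data: walk error (m) vs. pulse response rate
    rate = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8])
    walk = np.array([0.45, 0.33, 0.22, 0.12, 0.07, 0.04])
    coeffs = np.polyfit(rate, walk, deg=2)     # smooth empirical walk model

    # per-pixel correction of a raw range image
    range_img = np.full((4, 4), 100.0)         # raw ranges (m)
    rate_img = np.full((4, 4), 0.3)            # measured response rate per pixel
    corrected = range_img - np.polyval(coeffs, rate_img)
    ```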

  12. An omnidirectional 3D sensor with line laser scanning

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Gao, Bingtuan; Liu, Chuande; Wang, Peng; Gao, Shuanglei

    2016-09-01

    Active omnidirectional vision has the advantage of wide field-of-view (FOV) imaging, yielding a complete 3D scene of the environment, which is promising for robot navigation. However, existing omnidirectional vision sensors based on a line laser can measure only points located on the optical plane of the line laser beam, resulting in low-resolution reconstruction. To improve resolution, other omnidirectional vision sensors project a 2D coded pattern through a projector and a curved mirror; however, the astigmatism of the curved mirror leads to low-accuracy reconstruction. To solve these problems, a rotating polygon scanning mirror is used to scan the object in the vertical direction, so that an entire profile of the observed scene can be obtained at high accuracy and without astigmatism. The proposed method is then calibrated with a conventional 2D checkerboard plate. The experimental results show that the measurement error of the 3D omnidirectional sensor is approximately 1 mm. Moreover, the reconstruction of objects with different shapes using the developed sensor is also verified.

  13. A 3D polarized Monte Carlo LIDAR system simulator for studying effects of cirrus inhomogeneities on CALIOP/CALIPSO measurements

    NASA Astrophysics Data System (ADS)

    Szczap, F.; Cornet, C.; Alqassem, A.; Gour, Y.; C.-Labonnote, L.; Jourdan, O.

    2013-05-01

    To estimate cirrus inhomogeneity effects on the apparent backscatter and on the apparent depolarization ratio measured by CALIOP/CALIPSO, a 3D polarized Monte Carlo LIDAR simulator was developed. Comparisons were made with Hogan's fast LIDAR simulator. Early results show that cloud inhomogeneity effects seem to be negligible for the apparent backscatter but not for the apparent depolarization ratio.

  14. Utilization of 3-D Imaging Flash Lidar Technology for Autonomous Safe Landing on Planetary Bodies

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Vanek, Michael; Petway, Larry; Pierrotter, Diego; Busch, George; Bulyshev, Alexander

    2010-01-01

    NASA considers Flash Lidar a critical technology for enabling autonomous safe landing of future large robotic and crewed vehicles on the surface of the Moon and Mars. Flash Lidar can generate 3-Dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes during the final stages of descent and landing. The onboard flight computer can use the 3-D map of terrain to guide the vehicle to a safe site. The capabilities of Flash Lidar technology were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard a helicopter and a fixed wing aircraft. The aircraft flight tests were performed over Moon-like terrain in the California and Nevada deserts. This paper briefly describes the Flash Lidar static and aircraft flight test results. These test results are analyzed against the landing application requirements to identify the areas of technology improvement. The ongoing technology advancement activities are then explained and their goals are described.

  15. Multidimensional measurement by using 3-D PMD sensors

    NASA Astrophysics Data System (ADS)

    Ringbeck, T.; Möller, T.; Hagebeuker, B.

    2007-06-01

    Optical time-of-flight measurement makes it possible to enhance 2-D sensors by adding a third dimension using the PMD principle. Various applications in the automotive (e.g. pedestrian safety), industrial, robotics and multimedia fields require robust three-dimensional data (Schwarte et al., 2000). These applications, however, all have different requirements in terms of resolution, speed, distance and target characteristics. PMDTechnologies has developed 3-D sensors based on standard CMOS processes that can provide an optimized solution for a wide field of applications combined with high integration and cost-effective production. These sensors are realized in various layout formats, from single-pixel solutions for basic applications to low, middle and high resolution matrices for applications requiring more detailed data. Pixel pitches ranging from 10 micrometers up to 300 micrometers or larger can be realized, giving the opportunity to optimize the sensor chip for the application. One aspect common to all optical sensors based on a time-of-flight principle is the necessity of handling background illumination. This can be achieved by various techniques, such as optical filters and active circuits on chip. The sensors' in-pixel so-called SBI circuitry (suppression of background illumination) makes it possible to overcome even the effects of bright ambient light. This paper focuses on this technical requirement. In Sect. 2 we briefly describe the basic operating principle of PMD sensors. The technical challenges related to the system characteristics of an active optical ranging technique are described in Sect. 3; technical solutions and measurement results are then presented in Sect. 4. We finish this work with an overview of current PMD sensors and their key parameters (Sect. 5) and some concluding remarks in Sect. 6.

  16. Urban 3D GIS From LiDAR and digital aerial images

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Song, C.; Simmers, J.; Cheng, P.

    2004-05-01

    This paper presents a method which integrates image knowledge and Light Detection And Ranging (LiDAR) point cloud data for urban digital terrain model (DTM) and digital building model (DBM) generation. The DBM is an object-oriented data structure, in which each building is considered a building object, i.e., an entity of the building class. The attributes of each building include roof types, polygons of the roof surfaces, height, parameters describing the roof surfaces, and the LiDAR point array within the roof surfaces. Each polygon represents a roof surface of a building. This type of data structure is flexible for adding other building attributes in the future, such as texture information and wall information. Using the extracted image knowledge, we developed a new method of interpolating LiDAR raw data into a grid digital surface model (DSM) that considers the steep discontinuities of buildings. In this interpolation method, the LiDAR data points located in the polygons of the roof surfaces are first determined, and then interpolation via a planar equation is employed for grid DSM generation. The basic steps of our research are: (1) edge detection by digital image processing algorithms; (2) complete extraction of the building roof edges by digital image processing and human-computer interactive operation; (3) establishment of the DBM; (4) generation of the DTM by removing surface objects. Finally, we implemented the above functions in MS VC++. The resulting urban 3D DSM, DTM and DBM are exported into an urban database for urban 3D GIS.
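
    A minimal sketch of the described planar interpolation step, under our reading of it (illustrative data; the helper name is ours): the LiDAR points inside a roof polygon are fitted with a plane z = ax + by + c by least squares, and the plane is then evaluated on the grid cells falling inside the polygon.

    ```python
    # Planar interpolation sketch for one roof surface (illustrative data).
    import numpy as np
    from matplotlib.path import Path

    def plane_dsm(points, polygon, xs, ys):
        """points: (N,3) LiDAR returns; polygon: (M,2) roof outline."""
        A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
        coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)   # a, b, c
        gx, gy = np.meshgrid(xs, ys)
        z = coef[0] * gx + coef[1] * gy + coef[2]
        inside = Path(polygon).contains_points(
            np.c_[gx.ravel(), gy.ravel()]).reshape(gx.shape)
        return np.where(inside, z, np.nan)   # NaN outside the roof polygon

    pts = np.array([[0, 0, 10.0], [10, 0, 10.5], [0, 10, 11.0], [10, 10, 11.5]])
    roof = [(0, 0), (10, 0), (10, 10), (0, 10)]
    dsm = plane_dsm(pts, roof, xs=np.arange(0, 11), ys=np.arange(0, 11))
    ```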

  17. Optical design for uniform scanning in MEMS-based 3D imaging lidar.

    PubMed

    Lee, Xiaobao; Wang, Chunhui

    2015-03-20

    This paper proposes a method for designing an optical system for uniform scanning over a large scan field of view (FOV) in 3D imaging lidar. The theoretical formulas are derived for the design scheme. Using the optical design software ZEMAX, a foldaway uniform scanning optical system based on MEMS has been designed, and the scanning uniformity and spot size of the system on the target plane, perpendicular to the optical axis, are analyzed and discussed. Results show that the designed system can scan uniformly within a FOV of 40°×40° with small spot size for a target at a distance of about 100 m. PMID:25968504

  18. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in the database are generally encoded compactly by a shape descriptor. However, most of the geometric descriptors in related works apply to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems, because of their efficient scene scanning and spatial information collection. Using sparse, noisy, and incompletely sampled point clouds as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in the database. The main goal of the data encoding is that the models in the database and the input point clouds are encoded consistently. First, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Second, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors are extracted via spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show a clear superiority
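
    A short sketch of the top-view depth-image encoding idea (cell size and data are illustrative assumptions): rasterizing the highest return per ground cell yields a roof depth image from which height, edge, and histogram features can then be computed.

    ```python
    # Top-view depth image sketch: keep the highest return per ground cell.
    import numpy as np

    def topview_depth(points, cell=0.5):
        """points: (N,3). Returns a 2D image of max height per ground cell."""
        xy = points[:, :2]
        mins = xy.min(axis=0)
        ij = np.floor((xy - mins) / cell).astype(int)
        img = np.full(ij.max(axis=0) + 1, np.nan)
        for (i, j), z in zip(ij, points[:, 2]):
            if np.isnan(img[i, j]) or z > img[i, j]:
                img[i, j] = z                  # highest (roof) return wins
        return img

    cloud = np.random.default_rng(0).uniform(0, 10, size=(1000, 3))
    depth_img = topview_depth(cloud, cell=1.0)  # input for spatial histograms
    ```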

  19. Robust Curb Detection with Fusion of 3D-Lidar and Camera Data

    PubMed Central

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-01-01

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
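
    The consistency step can be illustrated with a small dynamic program (our toy rendering of the Markov-chain idea, with made-up candidate positions and scores): each image row contributes several curb-point candidates, and the program links one candidate per row into the smoothest high-score path.

    ```python
    # Dynamic-programming sketch for the optimal curb path (toy data).
    import numpy as np

    def best_curb_path(cols, scores, smooth=0.1):
        """cols[r]: candidate columns in row r; scores[r]: their scores."""
        n_rows = len(cols)
        cost = [np.asarray(-s, float) for s in scores]   # maximize score
        back = [None] * n_rows
        for r in range(1, n_rows):
            # transition penalty favors continuity between consecutive rows
            trans = smooth * np.abs(cols[r][None, :] - cols[r - 1][:, None])
            total = cost[r - 1][:, None] + trans         # shape (prev, cur)
            back[r] = np.argmin(total, axis=0)
            cost[r] = cost[r] + total[back[r], np.arange(len(cols[r]))]
        path = [int(np.argmin(cost[-1]))]
        for r in range(n_rows - 1, 0, -1):               # backtrack
            path.append(int(back[r][path[-1]]))
        path.reverse()
        return [int(c[k]) for c, k in zip(cols, path)]

    cols = [np.array([10, 40]), np.array([12, 41, 80]), np.array([13, 42])]
    scores = [np.array([0.9, 0.5]), np.array([0.8, 0.6, 0.2]),
              np.array([0.7, 0.9])]
    print(best_curb_path(cols, scores))   # -> [10, 12, 13]
    ```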

  2. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the earth's surface. Classification of LiDAR data to extract ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of different derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is greatly needed to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to segment the entire point cloud preliminarily into upper and lower contours, uniform and non-uniform surfaces, linear objects, and others. This primary classification is used on the one hand to identify the upper and lower parts of each building in an urban scene, needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces, which contains roofs, roads and ground, used in the second phase of classification. A second algorithm is developed to segment the uniform surfaces into building roofs, roads and ground; this second phase of classification is also based on topological relationships and height variation analysis. The proposed approach has been tested using two areas: the first is a housing complex and the second is a primary school. The proposed approach led to successful classification results for the building, vegetation and road classes.

  3. Cordless hand-held optical 3D sensor

    NASA Astrophysics Data System (ADS)

    Munkelt, Christoph; Bräuer-Burchardt, Christian; Kühmstedt, Peter; Schmidt, Ingo; Notni, Gunther

    2007-07-01

    A new mobile optical 3D measurement system using a phase-correlation-based fringe projection technique is presented. The sensor consists of a digital projection unit and two cameras in a stereo arrangement, both battery powered. Data transfer to a base station is done via WLAN. This makes it possible to use the system in complicated, remote measurement situations, which are typical in archaeology and architecture. In the measurement procedure the sensor is hand-held by the user, illuminating the object with a sequence of fewer than 10 fringe patterns within a time below 200 ms. This short sequence duration was achieved by a new approach which combines the epipolar constraint with robust phase correlation, utilizing a pre-calibrated sensor head containing two cameras and a digital fringe projector. Furthermore, the system can be used to acquire the all-around shape of objects by means of the phasogrammetric approach with virtual landmarks introduced by the authors [1, 2]. This way no matching procedures or markers are necessary for the registration of multiple views, which makes the system very flexible in accomplishing different measurement tasks. The realized measurement field is approx. 100 mm up to 400 mm in diameter. The mobile character makes the measurement system useful for a wide range of applications in arts, architecture, archaeology and criminology, which are shown in the paper.

  4. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    NASA Astrophysics Data System (ADS)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses the canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy by using very high-resolution panchromatic (PAN) (spatial) and 8-band (multispectral) WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, was used to detect tree and building features by classifying point elevation values. The workflow employed includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model, generation of a bare earth digital elevation model (DEM) and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients model (RPC) was executed in ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file. Ortho-rectification was then carried out in ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 was estimated to be 0.25 m by using more than 10 well-distributed GCPs. In the next stage, we generated the bare earth DEM from the LiDAR point cloud data. In most cases, a bare earth DEM does not represent true ground elevation. Hence, the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud data were normalized against the DTM in order to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM) or CHM was calculated from the LiDAR data by subtracting the DEM from the DSM.
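
    The final nDSM/CHM step of this workflow is a simple raster difference; a minimal sketch with illustrative arrays (real data would be read from the gridded rasters):

    ```python
    # CHM / nDSM sketch: canopy height = first-surface DSM minus bare-earth DEM.
    import numpy as np

    dsm = np.array([[12.0, 15.5], [30.2, 9.8]])   # first-return heights (m)
    dem = np.array([[10.0, 10.2], [10.1, 9.8]])   # bare-earth elevations (m)

    chm = dsm - dem                               # canopy height model / nDSM
    chm[chm < 0] = 0.0                            # clamp noise below ground
    # cells with chm ~ 0 are ground/roads; taller cells are trees or buildings
    ```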

  5. Motion error analysis of the 3D coordinates of airborne lidar for typical terrains

    NASA Astrophysics Data System (ADS)

    Peng, Tao; Lan, Tian; Ni, Guoqiang

    2013-07-01

    A motion error model of 3D coordinates is established, and the impact on coordinate errors caused by non-ideal movement of the airborne platform is analyzed. The simulation results of the model show that when the lidar system operates at high altitude, the influence of laser point cloud spacing on the positioning errors is small. In the model, the positioning errors follow a simple harmonic vibration whose amplitude envelope gradually decreases as the vibration frequency increases. When the number of vibration periods is larger than 50, the coordinate errors are almost uncorrelated with time. The elevation error is less than the planimetric error, and in the plane the error in the scanning direction is less than the error in the flight direction. The conclusions are verified through analysis of flight test data.

  6. 3D Vegetation Mapping Using UAVSAR, LVIS, and LIDAR Data Acquisition Methods

    NASA Technical Reports Server (NTRS)

    Calderon, Denice

    2011-01-01

    The overarching objective of this ongoing project is to assess the role of vegetation within climate change. Forests capture carbon, a greenhouse gas, from the atmosphere. Thus, any change, whether natural (e.g. growth, fire, death) or due to anthropogenic activity (e.g. logging, burning, urbanization), may have a significant impact on the Earth's carbon cycle. Through the use of the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) and NASA's Laser Vegetation Imaging Sensor (LVIS), airborne radar and Light Detection and Ranging (LIDAR) remote sensing technologies respectively, we gather data to estimate the amount of carbon contained in forests and how the content changes over time. UAVSAR and LVIS sensors were sent all over the world with the objective of mapping terrain to gather tree canopy height and biomass data; this data is in turn used to correlate vegetation with the global carbon cycle around the world.

  7. Study of City Landscape Heritage Using Lidar Data and 3d-City Models

    NASA Astrophysics Data System (ADS)

    Rubinowicz, P.; Czynska, K.

    2015-04-01

    In contemporary town planning, protection of the urban landscape is a significant issue. This especially concerns cities where urban structures are the result of ages of evolution and the layering of historical development processes. Specific panoramas and other strategic views with historic city dominants can be an important part of the cultural heritage and genius loci. On the other hand, protection of such views introduces limitations on future city development. Digital Earth observation techniques create new possibilities for more accurate urban studies, monitoring of urbanization processes and measuring of city landscape parameters. The paper examines possibilities for the application of Lidar data and digital 3D-city models for: a) evaluation of strategic city views, b) mapping landscape absorption limits, and c) determination of protection zones where urbanization and building height should be limited. With reference to this goal, the paper introduces a method of computational analysis of the city landscape called Visual Protection Surface (VPS). The method emulates a virtual surface above the city that protects a set of selected strategic views: the surface defines the maximum height of buildings in such a way that no new facility can be seen in any of the selected views. The research also includes analyses of the quality of the simulations according to the form and precision of the input data: airborne Lidar / DSM models and more advanced 3D-city models (including semantics of the geometry, as in the CityGML format). The outcome can support professional planning of tall building development. The VPS method has been implemented in a computer program developed by the authors (C++). Simulations were carried out on the example of the city of Dresden.
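
    A rough sketch of the VPS idea as we read it (single protected viewpoint, illustrative data and function name): for each cell, the permissible building height is the height at which a new structure would just rise above the sight line grazing the existing skyline between the viewpoint and that cell.

    ```python
    # Height-limit surface sketch for one protected viewpoint (toy data).
    import numpy as np

    def vps_surface(dsm, view_rc, view_h, cell=1.0, samples=64):
        rows, cols = dsm.shape
        vr, vc = view_rc
        limit = np.full_like(dsm, np.inf)      # unconstrained by default
        for r in range(rows):
            for c in range(cols):
                d = np.hypot(r - vr, c - vc) * cell
                if d == 0:
                    continue
                # sample the existing skyline along the sight line
                ts = np.linspace(0, 1, samples)[1:-1]
                rr = np.round(vr + ts * (r - vr)).astype(int)
                cc = np.round(vc + ts * (c - vc)).astype(int)
                di = np.hypot(rr - vr, cc - vc) * cell
                valid = di > 0
                tan_max = np.max((dsm[rr, cc][valid] - view_h) / di[valid])
                limit[r, c] = view_h + tan_max * d   # higher would be visible
        return limit

    dsm = np.zeros((50, 50)); dsm[20:25, 20:25] = 15.0   # existing dominant
    vps = vps_surface(dsm, view_rc=(0, 0), view_h=2.0)
    ```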

  8. Integrating airborne LiDAR dataset and photographic images towards the construction of 3D building model

    NASA Astrophysics Data System (ADS)

    Idris, R.; Latif, Z. A.; Hamid, J. R. A.; Jaafar, J.; Ahmad, M. Y.

    2014-02-01

    A 3D building model of man-made objects is an important tool for various applications such as urban planning, flood mapping and telecommunication. The reconstruction of 3D building models remains difficult, and no universal algorithms exist that can successfully extract all objects in an image. At present, advances in remote sensing such as airborne LiDAR (Light Detection and Ranging) technology have changed the conventional method of topographic mapping and increased the interest in using these valuable datasets for 3D building model construction. Airborne LiDAR has accordingly proven that it can provide three-dimensional (3D) information of the Earth's surface with high accuracy. In this study, with the availability of open source software such as SketchUp, LiDAR datasets and photographic images could be integrated towards the construction of a 3D building model. To realize the work, an area comprising residential neighbourhoods situated at Putrajaya in the Klang Valley region, Malaysia, covering two square kilometers, was chosen. The accuracy of the derived 3D building model is assessed quantitatively. It is found that the difference between the vertical height (z) of the 3D building models derived from the LiDAR dataset and from ground survey is approximately ± 0.09 centimeter (cm). For the horizontal component (RMSExy), the accuracy estimate derived for the 3D building models was ± 0.31 m. The results also show that the qualitative assessment of the constructed 3D building models seems feasible for depiction at the LOD3 (Level of Detail) standard.

  9. GPS 3-D cockpit displays: Sensors, algorithms, and flight testing

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew Kevin

    Tunnel-in-the-Sky 3-D flight displays have been investigated for several decades as a means of enhancing aircraft safety and utility. However, high costs have prevented commercial development and seriously hindered research into their operational benefits. The rapid development of Differential Global Positioning Systems (DGPS), inexpensive computing power, and ruggedized displays is now changing this situation. A low-cost prototype system was built and flight tested to investigate implementation and operational issues. The display provided an "out the window" 3-D perspective view of the world, letting the pilot see the horizon, runway, and desired flight path even in instrument flight conditions. The flight path was depicted as a tunnel through which the pilot flew the airplane, while predictor symbology provided guidance to minimize path-following errors. Positioning data was supplied by various DGPS sources, including the Stanford Wide Area Augmentation System (WAAS) testbed. A combination of GPS and low-cost inertial sensors provided vehicle heading, pitch, and roll information. Architectural and sensor fusion tradeoffs made during system implementation are discussed. Computational algorithms used to provide guidance on curved paths over the earth geoid are outlined along with display system design issues. It was found that current technology enables low-cost Tunnel-in-the-Sky display systems with a target cost of $20,000 for large-scale commercialization. Extensive testing on Piper Dakota and Beechcraft Queen Air aircraft demonstrated enhanced accuracy and operational flexibility on a variety of complex flight trajectories. These included curved and segmented approaches, traffic patterns flown on instruments, and skywriting by instrument reference. Overlays to existing instrument approaches at airports in California and Alaska were flown and compared with current instrument procedures. These overlays demonstrated improved utility and situational awareness for

  10. Measuring Complete 3D Vegetation Structure With Airborne Waveform Lidar: A Calibration and Validation With Terrestrial Lidar Derived Voxels

    NASA Astrophysics Data System (ADS)

    Hancock, S.; Anderson, K.; Disney, M.; Gaston, K. J.

    2015-12-01

    Accurate measurements of vegetation are vital to understand habitats and their provision of ecosystem services as well as having applications in satellite calibration, weather modelling and forestry. The majority of humans now live in urban areas and so understanding vegetation structure in these very heterogeneous areas is of importance. A number of previous studies have used airborne lidar (ALS) to characterise canopy height and canopy cover, but very few have fully characterised 3D vegetation, including understorey. Those that have either relied on leaf-off scans to allow unattenuated measurement of understorey or else did not validate. A method for creating a detailed voxel map of urban vegetation, in which the surface area of vegetation within a grid of cuboids (1.5 m by 1.5 m by 25 cm) is defined, from full-waveform ALS is presented. The ALS was processed with deconvolution and attenuation correction methods. The signal processing was calibrated and validated against synthetic waveforms generated from terrestrial laser scanning (TLS) data, taken as "truth". The TLS data was corrected for partial hits and attenuation using a voxel approach and these steps were validated and found to be accurate. The ALS results were benchmarked against the more common discrete return ALS products (produced automatically by the lidar manufacturer's algorithms) and Gaussian decomposition of full-waveform ALS. The true vegetation profile was accurately recreated by deconvolution. Far more detail was captured by the deconvolved waveform than either the discrete return or Gaussian decomposed ALS, particularly detail within the canopy; vital information for understanding habitats. In the paper, we will present the results with a focus on the methodological steps towards generating the voxel model, and the subsequent quantitative calibration and validation of the modelling approach using TLS. We will discuss the implications of the work for complete vegetation canopy descriptions in
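
    A hedged sketch of the voxel-map representation (cuboid size matching the 1.5 m by 1.5 m by 25 cm grid described above; simple return counts stand in for the surface-area values the study derives from waveform processing):

    ```python
    # Voxelization sketch: bin returns into cuboids and build a vertical profile.
    import numpy as np

    def voxelize(points, origin, size=(1.5, 1.5, 0.25)):
        """points: (N,3) x,y,z returns. Returns a dict {(i,j,k): count}."""
        idx = np.floor((points - origin) / np.asarray(size)).astype(int)
        voxels = {}
        for key in map(tuple, idx):
            voxels[key] = voxels.get(key, 0) + 1
        return voxels

    pts = np.random.default_rng(2).uniform([0, 0, 0], [15, 15, 10],
                                           size=(5000, 3))
    vox = voxelize(pts, origin=np.array([0.0, 0.0, 0.0]))

    profile = {}                     # vertical profile: returns per height bin
    for (i, j, k), n in vox.items():
        profile[k] = profile.get(k, 0) + n
    ```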

  11. Multilayered 3D Lidar image construction using spatial models in a Bayesian framework.

    PubMed

    Hernandez-Marin, Sergio; Wallace, Andrew M; Gibson, Gavin J

    2008-06-01

    Standard 3D imaging systems process only a single return at each pixel from an assumed single opaque surface. However, there are situations when the laser return consists of multiple peaks due to the footprint of the beam impinging on a target with surfaces distributed in depth or with semi-transparent surfaces. If all these returns are processed, a more informative multi-layered 3D image is created. We propose a unified theory of pixel processing for Lidar data using a Bayesian approach that incorporates spatial constraints through a Markov Random Field with a Potts prior model. This allows us to model uncertainty about the underlying spatial process. To palliate some inherent deficiencies of this prior model, we also introduce two proposal distributions, one based on spatial mode jumping, the other on a spatial birth/death process. The different parameters of the several returns are estimated using reversible jump Markov chain Monte Carlo (RJMCMC) techniques in combination with an adaptive strategy of delayed rejection to improve the estimates of the parameters. PMID:18421108
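
    The paper's RJMCMC machinery does not fit in a short sketch; as a deliberately simpler stand-in for the multi-return idea, the following detects several peaks in one pixel's waveform and converts each to a depth (all values synthetic):

    ```python
    # Multi-return peak detection sketch (simple stand-in, not RJMCMC).
    import numpy as np
    from scipy.signal import find_peaks

    c = 3e8                                    # speed of light, m/s
    t = np.arange(0, 200e-9, 0.5e-9)           # time bins
    wave = (0.9 * np.exp(-0.5 * ((t - 40e-9) / 2e-9) ** 2) +   # front surface
            0.4 * np.exp(-0.5 * ((t - 90e-9) / 2e-9) ** 2))    # surface behind
    wave += np.random.default_rng(3).normal(0, 0.02, t.size)

    peaks, _ = find_peaks(wave, height=0.1, prominence=0.1, distance=10)
    depths = c * t[peaks] / 2                  # one depth per detected layer
    print(depths)                              # ~[6.0, 13.5] m
    ```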

  12. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Abolfazl Mostafavia, Mir; Wang, Chen

    2016-06-01

    Topological relations are fundamental for qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging, and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between the planar regions of 3D models represented by the Boundary Representation model in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element of the 3×3 matrix records the details of connection through the common parts of two regions and the intersection line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations between planar segments of point clouds automatically.
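
    A toy illustration of the three basic relations the extended model builds on (disjoint, meet, intersect), checked here for coplanar regions via their 2D footprints with shapely; this is not the paper's dimension-extended 9-Intersection implementation:

    ```python
    # Basic region relations for coplanar footprints (toy illustration).
    from shapely.geometry import Polygon

    def basic_relation(a: Polygon, b: Polygon) -> str:
        if a.disjoint(b):
            return "disjoint"
        if a.touches(b):          # boundaries share points, interiors do not
            return "meet"
        return "intersect"        # interiors overlap

    p1 = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
    p2 = Polygon([(2, 0), (4, 0), (4, 2), (2, 2)])   # shares an edge with p1
    p3 = Polygon([(1, 1), (3, 1), (3, 3), (1, 3)])   # overlaps p1
    print(basic_relation(p1, p2))   # meet
    print(basic_relation(p1, p3))   # intersect
    ```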

  13. Processing lidar waveform data for 3D visual assessment of forest environments

    NASA Astrophysics Data System (ADS)

    Pirotti, F.; Guarnieri, A.; Masiero, A.; Vettore, A.; Lingua, E.

    2014-06-01

    The objective of this report is to present and discuss a workflow for extracting, from full-waveform (FW) lidar data, formats which are compatible with common geographic information systems (GIS) and statistical software packages. Full-waveform lidar, specifically for forestry, has received attention from the scientific community because more in-depth analysis can add valuable information for classification and modelling of related variables (e.g. biomass). In order to assess whether this is feasible and whether the results are useful, the end-user has to deal with raw datasets from lidar sensors. In this case study we propose and test a workflow implemented through self-developed software integrating ad-hoc C++ libraries and a graphical user interface for an easier approach by end-users. This software allows the user to load raw FW data and produce several products which can subsequently be imported easily into GIS or statistical software. To achieve this we used some state-of-the-art methods which have been extensively reported in the literature, and we discuss results and future developments. Results show that this software package can effectively work as a tool for linking raw FW data with forest-related spatial processing by providing punctual information directly derived from the FW data, or area-based aggregated information for a more generalized description of the earth surface.

  14. Comparison of 3D point clouds produced by LIDAR and UAV photoscan in the Rochefort cave (Belgium)

    NASA Astrophysics Data System (ADS)

    Watlet, Arnaud; Triantafyllou, Antoine; Kaufmann, Olivier; Le Mouelic, Stéphane

    2016-04-01

    Amongst today's techniques able to produce 3D point clouds, LIDAR and UAV (Unmanned Aerial Vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high resolution and high precision 3D point clouds, but such methods are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different kinds of environments. Moreover, the photogrammetric processing workflow for digital images taken with UAVs is becoming easier with the rise of many affordable software packages (e.g. Agisoft, PhotoModeler3D, VisualSFM). We present here a challenging study made at the Rochefort Cave Laboratory (South Belgium) comprising surface and underground surveys. The site is located in the Belgian Variscan fold-and-thrust belt, a region that shows many karstic networks within Devonian limestone units. A LIDAR scan was acquired in the main chamber of the cave (~ 15000 m³) to spatialize a 3D point cloud of its inner walls and infer geological beds and structures. Even if the use of the LIDAR instrument was not really comfortable in such a caving environment, the collected data showed a remarkable precision according to the geometry of a few control points. We also performed another challenging survey of the same cave chamber by modelling a 3D point cloud using photogrammetry of a set of DSLR camera pictures taken from the ground and UAV pictures. The aim was to compare both techniques in terms of (i) implementation of data acquisition and processing, (ii) quality of the resulting 3D point clouds (point density, field vs cloud recovery and point precision), and (iii) their application for geological purposes. Through the Rochefort case study, the main conclusions are that the LIDAR technique provides higher density point clouds with slightly higher precision than the photogrammetry method. However, 3D data modeled by photogrammetry provide visible light spectral information

  15. Coherent lidar airborne windshear sensor: performance evaluation.

    PubMed

    Targ, R; Kavaya, M J; Huffaker, R M; Bowles, R L

    1991-05-20

    National attention has focused on the critical problem of detecting and avoiding windshear since the crash on 2 Aug. 1985 of a Lockheed L-1011 at Dallas/Fort Worth International Airport. As part of the NASA/FAA National Integrated Windshear Program, we have defined a measurable windshear hazard index that can be remotely sensed from an aircraft, to give the pilot information about the wind conditions he will experience at some later time if he continues along the present flight path. A technology analysis and end-to-end performance simulation measuring signal-to-noise ratios and resulting wind velocity errors for competing coherent laser radar (lidar) systems have been carried out. The results show that a Ho:YAG lidar at a wavelength of 2.1 µm and a CO2 lidar at 10.6 µm can give the pilot information about the line-of-sight component of a windshear threat from his present position to a region extending 2-4 km in front of the aircraft. This constitutes a warning time of 20-40 s, even in conditions of moderately heavy precipitation. Using these results, a Coherent Lidar Airborne Shear Sensor (CLASS) that uses a Q-switched CO2 laser at 10.6 µm is being designed and developed for flight evaluation in the fall of 1991.

  16. Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Sun, Shaohui

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though a huge volume of work has been done, many problems remain unsolved. Automation is one of the key focus areas in this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops, with complex rigorous boundaries and vertical walls that connect the rooftops to the ground. The graph cuts algorithm is used to separate vegetative elements from the rest of the scene content, based on local analysis of the properties of the local implicit surface patch. The ground terrain and building rooftop footprints are then extracted utilizing the developed strategy, a two-step hierarchical Euclidean clustering. The method presented here adopts a "divide-and-conquer" scheme. Once the building footprints are segmented from the terrain and vegetative areas, the whole scene is divided into individual, independent processing units which represent potential points on the rooftop. For each individual building region, significant features on the rooftop are further detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each building rooftop feature is calculated using a minimum bounding box fitting technique, and is used to guide the refinement of the shapes and boundaries of the rooftop parts. The boundaries of all these features are refined to produce a strict description. Once the description of the rooftops is achieved, polygonal mesh models are generated by creating surface patches with outlines defined by detected

  17. A Gaussian Mixture Model-Based Continuous Boundary Detection for 3D Sensor Networks

    PubMed Central

    Chen, Jiehui; Salim, Mariam B.; Matsumoto, Mitsuji

    2010-01-01

    This paper proposes a high-precision Gaussian Mixture Model-based novel Boundary Detection 3D (BD3D) scheme with reasonable implementation cost for 3D cases, which selects a minimum number of Boundary sensor Nodes (BNs) for continuously moving objects. It shows apparent advantages in that two classes of boundary and non-boundary sensor nodes can be efficiently classified using model selection techniques for finite mixture models; furthermore, the set of sensor readings within each sensor node's spatial neighbors is modelled with a Gaussian Mixture Model. Differently from DECOMO [1] and COBOM [2], we also form a BN array that includes the node's own sensor reading, to aid in selecting Event BNs (EBNs) and non-EBNs from the observations of BNs. In particular, we propose a Thick Section Model (TSM) to solve the problem of transition between 2D and 3D. It is verified by simulations that the BD3D 2D model outperforms DECOMO and COBOM in terms of average residual energy and the number of BNs selected, while the BD3D 3D model demonstrates sound performance even for sensor networks with low densities, especially when the value of the sensor transmission range (r) is larger than the value of the Section Thickness (d) in TSM. We have also rigorously proved its correctness for continuous geometric domains and full robustness for sensor networks over 3D terrains. PMID:22163619
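
    A hedged sketch of the mixture-model idea (not the BD3D pipeline itself): fit a two-component Gaussian mixture to a node's neighborhood readings and flag the node as a boundary candidate when its neighborhood straddles both components.

    ```python
    # Gaussian-mixture boundary test sketch (synthetic neighborhood readings).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(4)
    # two regimes in the neighborhood, e.g. inside vs. outside a moving event
    readings = np.r_[rng.normal(20, 1, 40),
                     rng.normal(35, 1.5, 10)].reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(readings)
    labels = gmm.predict(readings)

    # boundary candidate if the neighborhood splits across both components
    frac = labels.mean()
    is_boundary = 0.1 < frac < 0.9
    print(gmm.means_.ravel(), is_boundary)
    ```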

  18. Compact, High Energy 2-micron Coherent Doppler Wind Lidar Development for NASA's Future 3-D Winds Measurement from Space

    NASA Technical Reports Server (NTRS)

    Singh, Upendra N.; Koch, Grady; Yu, Jirong; Petros, Mulugeta; Beyon, Jeffrey; Kavaya, Michael J.; Trieu, Bo; Chen, Songsheng; Bai, Yingxin; Petzar, paul; Modlin, Edward A.; Barnes, Bruce W.; Demoz, Belay B.

    2010-01-01

    This paper presents an overview of 2-micron laser transmitter development at NASA Langley Research Center for coherent-detection lidar profiling of winds. The novel high-energy, 2-micron, Ho:Tm:LuLiF laser technology developed at NASA Langley was employed to study the laser technology currently envisioned by NASA for future global coherent Doppler lidar wind measurement. The 250 mJ, 10 Hz laser was designed as an integral part of a compact lidar transceiver developed for future aircraft flight. Ground-based wind profiles made with this transceiver will be presented. NASA Langley is currently funded to build complete Doppler lidar systems using this transceiver for the DC-8 aircraft in autonomous operation. Recently, the LaRC 2-micron coherent Doppler wind lidar system was selected to contribute to the NASA Science Mission Directorate (SMD) Earth Science Division (ESD) hurricane field experiment in 2010, titled Genesis and Rapid Intensification Processes (GRIP). The Doppler lidar system will measure vertical profiles of horizontal vector winds from the DC-8 aircraft using NASA Langley's existing 2-micron, pulsed, coherent detection, Doppler wind lidar system that is ready for DC-8 integration. The measurements will typically extend from the DC-8 to the earth's surface. They will be highly accurate in both wind magnitude and direction. Displays of the data will be provided in real time on the DC-8. The pulsed Doppler wind lidar of NASA Langley Research Center is much more powerful than past Doppler lidars. The operating range, accuracy, range resolution, and time resolution will be unprecedented. We expect the data to play a key role, combined with the other sensors, in improving understanding and predictive algorithms for hurricane strength and track.

  19. 3-D water vapor field in the atmospheric boundary layer observed with scanning differential absorption lidar

    NASA Astrophysics Data System (ADS)

    Späth, Florian; Behrendt, Andreas; Muppa, Shravan Kumar; Metzendorf, Simon; Riede, Andrea; Wulfmeyer, Volker

    2016-04-01

    High-resolution three-dimensional (3-D) water vapor data of the atmospheric boundary layer (ABL) are required to improve our understanding of land-atmosphere exchange processes. For this purpose, the scanning differential absorption lidar (DIAL) of the University of Hohenheim (UHOH) was developed, as well as new analysis tools and visualization methods. The instrument determines 3-D fields of the atmospheric water vapor number density with a temporal resolution of a few seconds and a spatial resolution of up to a few tens of meters. We present three case studies from two field campaigns. In spring 2013, the UHOH DIAL was operated within the scope of the HD(CP)2 Observational Prototype Experiment (HOPE) in western Germany. HD(CP)2 stands for High Definition of Clouds and Precipitation for advancing Climate Prediction and is a German research initiative. Range-height indicator (RHI) scans of the UHOH DIAL show the water vapor heterogeneity within a range of a few kilometers up to an altitude of 2 km and its impact on the formation of clouds at the top of the ABL. The uncertainty of the measured data was assessed for the first time by extending a technique to scanning data which was formerly applied to vertical time series. Typically, the accuracy of the DIAL measurements is between 0.5 and 0.8 g m^-3 (or < 6 %) within the ABL, even during daytime. This allows for performing an RHI scan from the surface to an elevation angle of 90° within 10 min. In summer 2014, the UHOH DIAL participated in the Surface Atmosphere Boundary Layer Exchange (SABLE) campaign in southwestern Germany. Conical volume scans were made which reveal multiple water vapor layers in three dimensions. Differences in their heights in different directions can be attributed to differences in surface elevation. With low-elevation scans in the surface layer, the humidity profiles and gradients can be related to different land cover such as maize, grassland, and forest, as well as different surface layer

  20. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high-resolution data acquisition technologies, many different approaches and algorithms have been presented for extracting accurate and timely updated 3D models of buildings, a key element of city structures, for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for the automatic recognition of building roof models, such as flat, gable, hip, and pyramid hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR and aerial ortho-photos. The main steps of this approach are building segmentation, feature extraction and learning, and finally building roof labeling in a supervised, pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that help the convolutional neural network localize the boundary of each individual roof. A CNN is a feed-forward neural network built on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the computation time of learning can be decreased significantly by using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach for detecting and extracting the pattern of building roofs automatically, considering the complementary nature of height and RGB information.
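
    As an illustration only (the paper's exact architecture is not specified here), a minimal four-class roof classifier over fused RGB + normalized-height patches might look like the following PyTorch sketch; the patch size, channel counts, and layer sizes are assumptions:

        import torch
        import torch.nn as nn

        ROOF_CLASSES = ["flat", "gable", "hip", "pyramid_hip"]  # classes named in the abstract

        class RoofCNN(nn.Module):
            """Toy CNN over 4-channel (RGB + normalized height) 64x64 patches."""
            def __init__(self, n_classes=len(ROOF_CLASSES)):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
                )
                self.classifier = nn.Linear(64 * 8 * 8, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        # one training step on a dummy batch
        model = RoofCNN()
        x = torch.randn(8, 4, 64, 64)   # batch of fused RGB+height patches
        y = torch.randint(0, 4, (8,))   # roof-type labels
        loss = nn.CrossEntropyLoss()(model(x), y)
        loss.backward()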

  1. 3-D modeling of tomato canopies using a high-resolution portable scanning lidar for extracting structural information.

    PubMed

    Hosoi, Fumiki; Nakabayashi, Kazushige; Omasa, Kenji

    2011-01-01

    In the present study, an attempt was made to produce a precise 3D image of a tomato canopy using a portable high-resolution scanning lidar. The tomato canopy was scanned by the lidar from three positions surrounding it. Through the scanning, point cloud data of the canopy were obtained and co-registered. Then, points corresponding to leaves were extracted and converted into polygon images. From the polygon images, leaf areas were accurately estimated with a mean absolute percent error of 4.6%. The vertical profile of leaf area density (LAD) and the leaf area index (LAI) could also be estimated by summing the individual leaf areas derived from the polygon images. Leaf inclination angles could also be estimated from the 3-D polygon image. It was shown that the leaf inclination angle took different values on different parts of a leaf. PMID:22319403
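
    A minimal sketch of the summation step, assuming per-leaf areas (m^2) with centroid heights (m) over a known ground footprint; the function name and binning are illustrative:

        import numpy as np

        def lad_profile(leaf_areas, leaf_heights, ground_area, dz=0.1):
            """Leaf area density per height bin (m^2 leaf / m^3 canopy) and total LAI."""
            edges = np.arange(0.0, leaf_heights.max() + dz, dz)
            area_per_bin, _ = np.histogram(leaf_heights, bins=edges, weights=leaf_areas)
            lad = area_per_bin / (ground_area * dz)   # m^2 leaf per m^3 in each bin
            lai = leaf_areas.sum() / ground_area      # dimensionless
            return edges[:-1], lad, lai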

  2. 3-D earthquake surface displacements from differencing pre- and post-event LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Krishnan, A. K.; Nissen, E.; Arrowsmith, R.; Saripalli, S.

    2012-12-01

    The explosion in aerial LiDAR surveying along active faults across the western United States and elsewhere provides a high-resolution topographic baseline against which to compare repeat LiDAR datasets collected after future earthquakes. We present a new method for determining 3-D coseismic surface displacements and rotations by differencing pre- and post-earthquake LiDAR point clouds using an adaptation of the Iterative Closest Point (ICP) algorithm, a point set registration technique widely used in medical imaging, computer vision and graphics. There is no need for any gridding or smoothing of the LiDAR data and the method works well even with large mismatches in the density of the two point clouds. To explore the method's performance, we simulate pre- and post-event point clouds using real ("B4") LiDAR data on the southern San Andreas Fault perturbed with displacements of known magnitude. For input point clouds with ~2 points per square meter, we are able to reproduce displacements with a 50 m grid spacing and with horizontal and vertical accuracies of ~20 cm and ~4 cm. In the future, finer grids and improved precisions should be possible with higher shot densities and better survey geo-referencing. By capturing near-fault deformation in 3-D, LiDAR differencing with ICP will complement satellite-based techniques such as InSAR which map only certain components of the surface deformation and which often break down close to surface faulting or in areas of dense vegetation. It will be especially useful for mapping shallow fault slip and rupture zone deformation, helping inform paleoseismic studies and better constrain fault zone rheology. Because ICP can image rotations directly, the technique will also help resolve the detailed kinematics of distributed zones of faulting where block rotations may be common.
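
    A minimal sketch of windowed ICP differencing in the spirit described above (Open3D for the ICP step; the tile size, correspondence threshold, and sign handling are assumptions, and real use would add geo-referencing and quality masks):

        import numpy as np
        import open3d as o3d

        def tile_displacement(pre_pts, post_pts, max_corr=1.0):
            """Rigid-body motion of one tile: register post-event onto pre-event points."""
            src, tgt = o3d.geometry.PointCloud(), o3d.geometry.PointCloud()
            src.points = o3d.utility.Vector3dVector(post_pts)
            tgt.points = o3d.utility.Vector3dVector(pre_pts)
            reg = o3d.pipelines.registration.registration_icp(
                src, tgt, max_corr, np.eye(4),
                o3d.pipelines.registration.TransformationEstimationPointToPoint())
            T = reg.transformation
            # T maps post back onto pre, so the coseismic displacement is
            # approximately the negative of this translation (small rotations).
            return -T[:3, 3], T[:3, :3]

        # loop over e.g. 50 m x 50 m tiles of the survey area and collect translations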

  3. Terrain surfaces and 3-D landcover classification from small footprint full-waveform lidar data: application to badlands

    NASA Astrophysics Data System (ADS)

    Bretar, F.; Chauve, A.; Bailly, J.-S.; Mallet, C.; Jacome, A.

    2009-08-01

    This article presents the use of new remote sensing data acquired from airborne full-waveform lidar systems for hydrological applications. Indeed, accurate topography and a landcover classification are prerequisites for any hydrological and erosion model. Badlands tend to be the most significant areas of erosion in the world, with the highest erosion rate values. Monitoring and predicting erosion within badland mountainous catchments is highly strategic due to the downstream consequences and the need for natural hazard mitigation engineering. Additionally, beyond the elevation information, full-waveform lidar data are processed to extract the amplitude and the width of echoes, which are related to the target reflectance and geometry. We investigate the relevance of using lidar-derived Digital Terrain Models (DTMs) and the potential of the amplitude and width information for 3-D landcover classification. Considering the novelty and the complexity of such data, they are presented in detail, together with guidelines for processing them. The morphological validation of the DTMs is then performed via the computation of hydrological indexes and photo-interpretation. Finally, a 3-D landcover classification is performed using a Support Vector Machine classifier. The use of an ortho-rectified optical image in the classification process, as well as of full-waveform lidar data for hydrological purposes, is finally discussed.
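
    A minimal sketch of the echo-extraction step: fitting a Gaussian to a waveform to recover echo amplitude and width (single-echo case; real waveforms need multi-peak initialization and noise handling):

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(t, a, t0, s):
            return a * np.exp(-0.5 * ((t - t0) / s) ** 2)

        def fit_echo(t, w):
            """Return amplitude, center time and width (sigma) of one lidar echo."""
            p0 = [w.max(), t[np.argmax(w)], 1.0]          # crude initial guess
            (a, t0, s), _ = curve_fit(gaussian, t, w, p0=p0)
            return a, t0, abs(s)

        # t = sample times (ns), w = digitized return waveform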

  4. 3D fiber probe for multi sensor coordinate measurement

    NASA Astrophysics Data System (ADS)

    Ettemeyer, A.

    2011-12-01

    Increasing manufacturing accuracy requirements drive the development of innovative and highly sensitive measuring tools. Especially for measurements with sub-micrometer accuracy, the sensor principle has to be chosen appropriately for each measured surface. Modern multi-sensor coordinate measurement systems allow the automatic selection of different sensor heads to measure different areas or properties of a sample. For example, different types of optical sensors as well as tactile sensors can be used on the same machine. In this paper we describe different principles of optical sensors used in multi-sensor coordinate measurement systems, as well as a new approach for tactile measurement with sub-micrometer accuracy. A special fiber probe has been developed. The tip of the fiber probe is formed as a sphere. The lateral position of this sphere is observed by microscope optics and can be determined to a fraction of a micrometer. Additionally, a novel optical set-up now even allows the determination of the z-position of the fiber tip with sub-micrometer accuracy. For this purpose we use an interferometric set-up. Laser light is coupled into the optical fiber. The light exiting the fiber tip is collected by microscope optics and superposed with a reference wave generated directly from the laser. The result is an interferometric signal which is recorded by the camera and processed by a computer. With this set-up, the z-displacement of the fiber sphere can be measured with an accuracy of a fraction of the laser wavelength used.

  5. Lidar Sensors for Autonomous Landing and Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Petway, Larry B.; Hines, Glenn D.; Roback, Vincent E.; Reisse, Robert A.; Pierrottet, Diego F.

    2013-01-01

    Lidar technology will play an important role in enabling highly ambitious missions being envisioned for the exploration of solar system bodies. Currently, NASA is developing a set of advanced lidar sensors, under the Autonomous Landing and Hazard Avoidance (ALHAT) project, aimed at the safe landing of robotic and manned vehicles at designated sites with a high degree of precision. These lidar sensors are an Imaging Flash Lidar capable of generating high-resolution three-dimensional elevation maps of the terrain, a Doppler Lidar for providing precision vehicle velocity and altitude, and a Laser Altimeter for measuring distance to the ground and ground contours from high altitudes. The capabilities of these lidar sensors have been demonstrated through four helicopter and one fixed-wing aircraft flight test campaigns conducted from 2008 through 2012 during different phases of their development. Recently, prototype versions of these landing lidars have been completed for integration into a rocket-powered terrestrial free-flyer vehicle (Morpheus) being built by NASA Johnson Space Center. Operating in closed loop with other ALHAT avionics, these lidars will demonstrate their viability for future landing missions. This paper describes the ALHAT lidar sensors and assesses their capabilities and impacts on future landing missions.

  6. Test Beam Results of 3D Silicon Pixel Sensors for the ATLAS upgrade

    SciTech Connect

    Grenier, P.; Alimonti, G.; Barbero, M.; Bates, R.; Bolle, E.; Borri, M.; Boscardin, M.; Buttar, C.; Capua, M.; Cavalli-Sforza, M.; Cobal, M.; Cristofoli, A.; Dalla Betta, G.F.; Darbo, G.; Da Via, C.; Devetak, E.; DeWilde, B.; Di Girolamo, B.; Dobos, D.; Einsweiler, K.; Esseni, D.; et al.

    2011-08-19

    Results on beam tests of 3D silicon pixel sensors aimed at the ATLAS Insertable B-Layer and High Luminosity LHC (HL-LHC) upgrades are presented. Sensors were bump bonded to the front-end chip currently used in the ATLAS pixel detector and tested in high-energy pion beams at the CERN SPS North Area in 2009, with and without a 1.6 T magnetic field oriented as the ATLAS Inner Detector solenoid field. Measurements include charge collection, tracking efficiency and charge sharing between pixel cells as a function of track incident angle, and were compared to a regular planar pixel device. Full 3D sensors, with electrodes penetrating through the entire wafer thickness and an active edge, and double-sided 3D sensors, with partially overlapping bias and read-out electrodes, showed comparable performance, and the magnetic field had no sizeable effect on it. Due to electrode inefficiency, 3D devices exhibit some loss of tracking efficiency for normally incident tracks but recover full efficiency with tilted tracks. As expected from the electric field configuration, 3D sensors show little charge sharing between cells.

  7. 3D, Flash, Induced Current Readout for Silicon Sensors

    SciTech Connect

    Parker, Sherwood I.

    2014-06-07

    A new method for silicon microstrip and pixel detector readout is presented, using (1) 65 nm-technology current amplifiers which can, for the first time with silicon microstrip and pixel detectors, have response times far shorter than the charge collection time; (2) 3D trench electrodes large enough to subtend a reasonable solid angle at most track locations, and so have adequate sensitivity over a substantial volume of the pixel; and (3) induced signals in addition to, or in place of, collected charge.
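
    For background (standard detector physics, not specific to this paper): induced-current readout rests on the Shockley-Ramo theorem, in which the instantaneous current induced on an electrode by a carrier of charge q moving with velocity \vec{v} is set by the electrode's weighting field \vec{E}_w, i.e. the field computed with that electrode at unit potential and all others grounded:

        i(t) = q\,\vec{v}(t)\cdot\vec{E}_w(\vec{x}(t))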

  8. 3D sensor for indirect ranging with pulsed laser source

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Bellisai, S.; Villa, F.; Scarcella, C.; Bahgat Shehata, A.; Tosi, A.; Padovini, G.; Zappa, F.; Tisa, S.; Durini, D.; Weyers, S.; Brockherde, W.

    2012-10-01

    The growing interest in fast, compact and cost-effective 3D ranging imagers for automotive applications has prompted the exploration of many different 3D imaging techniques and the development of new systems for this purpose. CMOS imagers that exploit phase-resolved techniques provide accurate 3D ranging with no complex optics and are rugged and cost-effective. Phase-resolved techniques indirectly measure the round-trip travel time of the light emitted by a laser and backscattered from a distant target by computing the phase delay between the modulated light and the detected signal. Single-photon detectors, with their high sensitivity, allow the scene to be actively illuminated with low-power excitation (less than 10 W with diffused daylight illumination). We report on a 4x4 array of CMOS SPADs (Single Photon Avalanche Diodes), designed in a high-voltage 0.35 μm CMOS technology for pulsed modulation, in which each pixel computes the phase difference between the laser and the reflected pulse. Each pixel comprises a high-performance 30 μm diameter SPAD, an analog quenching circuit, two 9-bit up-down counters, and memories to store data during readout. The first counter counts the photons detected by the SPAD in a time window synchronous with the laser pulse and integrates the whole echoed signal. The second counter accumulates the number of photons detected in a window shifted with respect to the laser pulse, and acquires only a portion of the reflected signal. The array is read out with a global shutter architecture using a 100 MHz clock; the maximum frame rate is 3 Mframe/s.
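
    A minimal sketch of the two-counter idea under simplifying assumptions (rectangular pulse of width t_p, the second window starting where the pulse window ends, ideal gating, no background subtraction); the exact gating scheme and conventions of the actual pixel differ:

        def pulsed_itof_range(c1, c2, t_p, c_light=3.0e8):
            """Estimate range from two photon counters.

            c1: counts in a window synchronous with the laser pulse (whole echo).
            c2: counts in a window shifted by t_p, which captures an echo
                fraction proportional to the round-trip delay (delays < t_p).
            """
            delay = t_p * (c2 / c1)        # fraction of the echo in window 2
            return 0.5 * c_light * delay   # one-way range in metres

        # e.g. t_p = 50e-9 s, c1 = 900, c2 = 300 -> delay ~ 16.7 ns, range ~ 2.5 m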

  9. Tactile-optical 3D sensor applying image processing

    NASA Astrophysics Data System (ADS)

    Neuschaefer-Rube, Ulrich; Wissmann, Mark

    2009-01-01

    The tactile-optical probe (so-called fiber probe) is a well-known probe in micro-coordinate metrology. It consists of an optical fiber with a probing element at its end. This probing element is adjusted in the imaging plane of the optical system of an optical coordinate measuring machine (CMM). It can be illuminated through the fiber by an LED. The position of the probe is directly detected by image processing algorithms available in every modern optical CMM, and not by deflections at the fixation of the probing shaft. Therefore, the probing shaft can be very thin and flexible. This facilitates measurement with very small probing forces and the realization of very small probing elements (diameter down to 10 μm). A limitation of this method is that at present the probe does not have full 3D measurement capability. At the Physikalisch-Technische Bundesanstalt (PTB), several arrangements and measurement principles for a full 3D tactile-optical probe have been implemented and tested successfully in cooperation with Werth-Messtechnik, Giessen, Germany. This contribution provides an overview of the results of these activities.

  10. First Experiences with Kinect v2 Sensor for Close Range 3d Modelling

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Macher, H.; Mittet, M.-A.; Landes, T.; Grussenmeyer, P.

    2015-02-01

    RGB-D cameras, also known as range imaging cameras, are a recent generation of sensors. As they are suitable for measuring distances to objects at high frame rates, such sensors are increasingly used for 3D acquisitions, and more generally for applications in robotics and computer vision. This kind of sensor became popular especially after the Kinect v1 (Microsoft) arrived on the market in November 2010. In July 2014, Microsoft released a new sensor, the Kinect for Windows v2, based on a different technology from the first device. However, because it was initially developed for video games, the quality assessment of this new device for 3D modelling represents a major axis of investigation. In this paper, first experiences with the Kinect v2 sensor are related, and its ability for close-range 3D modelling is investigated. For this purpose, error sources in the output data as well as a calibration approach are presented.

  11. Application of Lidar Data and 3D-City Models in Visual Impact Simulations of Tall Buildings

    NASA Astrophysics Data System (ADS)

    Czynska, K.

    2015-04-01

    The paper examines the possibilities and limitations of applying Lidar data and digital 3D city models to specialist urban analyses of tall buildings. The location and height of tall buildings are a subject of discussion, conflict and controversy in many cities. The most important aspect is the visual influence of tall buildings on the city landscape, significant panoramas and other strategic city views, a pressing issue in contemporary town planning worldwide: over 50% of the world's high-rise buildings were built in the last 15 years. Tall buildings can be a threat especially to historically developed cities, which are typical of Europe. Contemporary Earth observation, increasingly available Lidar scanning and 3D city models provide a new tool for more accurate urban analysis of the impact of tall buildings. The article presents appropriate simulation techniques and the general assumptions of the geometric and computational algorithms: available methodologies and individual methods developed by the author. The goal is to develop geometric computation methods for a GIS representation of the visual impact of a selected tall building on the structure of a large city. To this end, the article introduces a Visual Impact Size (VIS) method. The presented analyses were developed using an airborne Lidar / DSM model and more highly processed models (such as CityGML) containing both geometry and semantics. The included simulations were carried out on the example of the Berlin agglomeration.

  12. Simulation of a new 3D imaging sensor for identifying difficult military targets

    NASA Astrophysics Data System (ADS)

    Harvey, Christophe; Wood, Jonathan; Randall, Peter; Watson, Graham; Smith, Gordon

    2008-04-01

    This paper reports the successful application of automatic target recognition and identification (ATR/I) algorithms to simulated 3D imagery of 'difficult' military targets. QinetiQ and Selex S&AS are engaged in a joint programme to build a new 3D laser imaging sensor for UK MOD. The sensor is a 3D flash system giving an image containing range and intensity information suitable for targeting operations from fast jet platforms, and is currently being integrated with an ATR/I suite for demonstration and testing. The sensor has been extensively modelled and a set of high fidelity simulated imagery has been generated using the CAMEO-SIM scene generation software tool. These include a variety of different scenarios (varying range, platform altitude, target orientation and environments), and some 'difficult' targets such as concealed military vehicles. The ATR/I algorithms have been tested on this image set and their performance compared to 2D passive imagery from the airborne trials using a Wescam MX-15 infrared sensor and real-time ATR/I suite. This paper outlines the principles behind the sensor model and the methodology of 3D scene simulation. An overview of the 3D ATR/I programme and algorithms is presented, and the relative performance of the ATR/I against the simulated image set is reported. Comparisons are made to the performance of typical 2D sensors, confirming the benefits of 3D imaging for targeting applications.

  13. Computing and monitoring potential of public spaces by shading analysis using 3d lidar data and advanced image analysis

    NASA Astrophysics Data System (ADS)

    Zwolinski, A.; Jarzemski, M.

    2015-04-01

    The paper addresses the specific context of public spaces in the "shadow" of tall buildings located in European cities. The majority of tall buildings in European cities were built in the last 15 years. Tall buildings appear mainly in city centres, directly at important public spaces that form a viable environment for inhabitants, with a variety of public functions (open spaces, green areas, recreation places, shops, services, etc.). All these amenities and services are directly affected by the extensive shading cast by tall buildings. The paper focuses on the analysis and representation of the impact of shading from tall buildings on various public spaces in cities using 3D city models. The computer environment of 3D city models in the cityGML standard uses 3D LiDAR data as one of the data types for the definition of 3D cities. The structure of cityGML allows analytic applications using existing computer tools, as well as the development of new techniques to estimate the extent of shading cast by high-rises that affects life in public spaces. These measurable shading parameters at a specific time are crucial for the proper functioning, viability and attractiveness of public spaces; ultimately, they are extremely important for the siting of tall buildings at main public spaces in cities. The paper explores the impact of shading from tall buildings in different spatial contexts, using cityGML models based on core LiDAR data to support controlled urban development in the sense of viable public spaces. The article was prepared within the research project 2TaLL: Application of 3D Virtual City Models in Urban Analyses of Tall Buildings, realized as a part of the Polish-Norway Grants.
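
    A minimal sketch of one way to compute such shading on a raster DSM (simple ray-marching toward the sun; the grid spacing, axis convention, and step count are assumptions, and cityGML-based workflows are typically vector-based rather than raster):

        import numpy as np

        def shadow_mask(dsm, cell, sun_az, sun_el, n_steps=200):
            """Boolean mask of cells shadowed at a given sun azimuth/elevation (radians)."""
            rows, cols = dsm.shape
            dx = np.sin(sun_az) * cell      # horizontal step toward the sun (m)
            dy = -np.cos(sun_az) * cell     # assumes image rows grow southward
            dz = np.tan(sun_el) * cell      # height the sun ray gains per step
            shadowed = np.zeros_like(dsm, dtype=bool)
            for r in range(rows):
                for c in range(cols):
                    h, x, y = dsm[r, c], c * cell, r * cell
                    for _ in range(n_steps):
                        x += dx; y += dy; h += dz
                        rr, cc = int(round(y / cell)), int(round(x / cell))
                        if not (0 <= rr < rows and 0 <= cc < cols):
                            break
                        if dsm[rr, cc] > h:   # terrain/building blocks the ray
                            shadowed[r, c] = True
                            break
            return shadowed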

  14. Real-time 3D visualization of volumetric video motion sensor data

    SciTech Connect

    Carlson, J.; Stansfield, S.; Shawver, D.; Flachs, G.M.; Jordan, J.B.; Bao, Z.

    1996-11-01

    This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.

  15. Using 3D visual tools with LiDAR for environmental outreach

    NASA Astrophysics Data System (ADS)

    Glenn, N. F.; Mannel, S.; Ehinger, S.; Moore, C.

    2009-12-01

    The project objective is to develop visualizations using light detection and ranging (LiDAR) data and other data sources to increase community understanding of remote sensing data for earth science. These data are visualized using Google Earth and other visualization methods. Final products are delivered to K-12, state, and federal agencies to share with their students and community constituents. Once our partner agencies were identified, we utilized a survey method to better understand their technological abilities and use of visualization products. The final multimedia products include a visualization of LiDAR and well data for water quality mapping in a southeastern Idaho watershed; a tour of hydrologic points of interest in southeastern Idaho visited by thousands of people each year; and post-earthquake features near Borah Peak, Idaho. In addition to the customized multimedia materials, we developed tutorials to encourage our partners to utilize these tools with their own LiDAR and other scientific data.

  16. Design Principles for Rapid Prototyping Forces Sensors using 3D Printing.

    PubMed

    Kesner, Samuel B; Howe, Robert D

    2011-07-21

    Force sensors provide critical information for robot manipulators, manufacturing processes, and haptic interfaces. Commercial force sensors, however, are generally not adapted to specific system requirements, resulting in sensors with excess size, cost, and fragility. To overcome these issues, 3D printers can be used to create components for the quick and inexpensive development of force sensors. Limitations of this rapid prototyping technology, however, require specialized design principles. In this paper, we discuss techniques for rapidly developing simple force sensors, including selecting and attaching metal flexures, using inexpensive and simple displacement transducers, and 3D printing features to aid in assembly. These design methods are illustrated through the design and fabrication of a miniature force sensor for the tip of a robotic catheter system. The resulting force sensor prototype can measure forces with an accuracy of as low as 2% of the 10 N measurement range.

  17. Design Principles for Rapid Prototyping Forces Sensors using 3D Printing

    PubMed Central

    Kesner, Samuel B.; Howe, Robert D.

    2011-01-01

    Force sensors provide critical information for robot manipulators, manufacturing processes, and haptic interfaces. Commercial force sensors, however, are generally not adapted to specific system requirements, resulting in sensors with excess size, cost, and fragility. To overcome these issues, 3D printers can be used to create components for the quick and inexpensive development of force sensors. Limitations of this rapid prototyping technology, however, require specialized design principles. In this paper, we discuss techniques for rapidly developing simple force sensors, including selecting and attaching metal flexures, using inexpensive and simple displacement transducers, and 3D printing features to aid in assembly. These design methods are illustrated through the design and fabrication of a miniature force sensor for the tip of a robotic catheter system. The resulting force sensor prototype can measure forces with an accuracy of as low as 2% of the 10 N measurement range. PMID:21874102

  18. Beam test results of 3D silicon pixel sensors for future upgrades

    NASA Astrophysics Data System (ADS)

    Nellist, C.; Gligorova, A.; Huse, T.; Pacifico, N.; Sandaker, H.

    2013-12-01

    3D silicon has undergone an intensive beam test programme which has resulted in the successful qualification for the ATLAS Insertable B-Layer (IBL) upgrade project to be installed in 2013-2014. This paper presents selected results from this study with a focus on the final IBL test beam of 2012 where IBL prototype sensors were investigated. 3D devices were studied with 4 GeV positrons at DESY and 120 GeV pions at the SPS at CERN. Measurements include tracking efficiency, charge sharing, time over threshold and cluster size distributions as a function of incident angle for IBL 3D design sensors. Studies of 3D silicon sensors in an anti-proton beam test for the AEgIS experiment are also presented.

  19. 3D-Modeling of Vegetation from Lidar Point Clouds and Assessment of its Impact on Façade Solar Irradiation

    NASA Astrophysics Data System (ADS)

    Peronato, G.; Rey, E.; Andersen, M.

    2016-10-01

    The presence of vegetation can significantly affect the solar irradiation received on building surfaces. Due to the complex shape and seasonal variability of vegetation geometry, this topic has gained much attention from researchers. However, existing methods are limited to rooftops as they are based on 2.5D geometry and use simplified radiation algorithms based on view-sheds. This work contributes to overcoming some of these limitations, providing support for 3D geometry to include facades. Thanks to the use of ray-tracing-based simulations and detailed characterization of the 3D surfaces, we can also account for inter-reflections, which might have a significant impact on façade irradiation. In order to construct confidence intervals on our results, we modeled vegetation from LiDAR point clouds as 3D convex hulls, which provide the biggest volume and hence the most conservative obstruction scenario. The limits of the confidence intervals were characterized with some extreme scenarios (e.g. opaque trees and absence of trees). Results show that uncertainty can vary significantly depending on the characteristics of the urban area and the granularity of the analysis (sensor, building and group of buildings). We argue that this method can give us a better understanding of the uncertainties due to vegetation in the assessment of solar irradiation in urban environments, and therefore, the potential for the installation of solar energy systems.
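
    A minimal sketch of the vegetation-hull step, assuming the LiDAR points have already been clustered into individual trees (the clustering itself is a separate problem):

        import numpy as np
        from scipy.spatial import ConvexHull

        def tree_hull(points):
            """3D convex hull of one vegetation cluster (N x 3 array of x, y, z)."""
            hull = ConvexHull(points)
            # hull.vertices indexes the points on the hull surface;
            # hull.volume is the enclosed volume (the conservative obstruction).
            return points[hull.vertices], hull.volume

        # triangles for a ray-tracing scene: points[hull.simplices] gives the faces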

  20. 3D reconstruction and restoration monitoring of sculptural artworks by a multi-sensor framework.

    PubMed

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2012-01-01

    Nowadays, optical sensors are used to digitize sculptural artworks by exploiting various contactless technologies. Cultural Heritage applications may concern 3D reconstructions of sculptural shapes distinguished by small details distributed over large surfaces. These applications require robust multi-view procedures based on aligning several high-resolution 3D measurements. In this paper, the integration of a 3D structured light scanner and a stereo photogrammetric sensor is proposed with the aim of reliably reconstructing large free-form artworks. The structured light scanner provides high-resolution range maps captured from different views. The stereo photogrammetric sensor measures the spatial location of each view by tracking a marker frame integral to the optical scanner. This procedure allows the computation of the rotation-translation matrix that transposes the range maps from the local view coordinate systems to a unique global reference system defined by the stereo photogrammetric sensor. The artwork reconstructions can be further augmented by referencing metadata related to restoration processes. In this paper, a methodology has been developed to map metadata to 3D models by capturing spatial references using a passive stereo-photogrammetric sensor. The multi-sensor framework has been demonstrated through the 3D reconstruction of a Statue of Hope located at the English Cemetery in Florence. This sculptural artwork was a demanding test case due to the non-cooperative environment and the complex shape features distributed over a large surface. PMID:23223079

  1. LiDAR Segmentation using Suitable Seed Points for 3D Building Extraction

    NASA Astrophysics Data System (ADS)

    Abdullah, S. M.; Awrangjeb, M.; Lu, G.

    2014-08-01

    Effective building detection and roof reconstruction are in strong demand in the remote sensing research community. In this paper, we present a new automatic LiDAR point cloud segmentation method using suitable seed points for building detection and roof plane extraction. Firstly, the LiDAR point cloud is separated into "ground" and "non-ground" points based on the analysis of a DEM with a height threshold. Each non-ground point is marked as coplanar or non-coplanar based on a coplanarity analysis. Proceeding from the maximum LiDAR point height towards the minimum, all the LiDAR points on each height level are extracted and separated into several groups based on 2D distance. From each group, lines are extracted, and the coplanar point nearest to the midpoint of each line is taken as a seed point. This seed point and its neighbouring points are used to generate the plane equation. The plane is grown in a region-growing fashion until no new points can be added. A robust rule-based tree removal method applying four different rules is then used to remove planar segments on trees. Finally, the boundary of each object is extracted from the segmented LiDAR point cloud. The method is evaluated on six different data sets comprising hilly and densely vegetated areas. The experimental results indicate that the proposed method offers high building detection and roof plane extraction rates compared to a recently proposed method.
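
    A minimal sketch of the seeded region-growing step (plane fit by SVD, growth by point-to-plane distance; the neighbourhood radius and tolerance are assumptions, not the authors' values):

        import numpy as np
        from scipy.spatial import cKDTree

        def fit_plane(pts):
            """Least-squares plane through pts: unit normal n and centroid c."""
            c = pts.mean(axis=0)
            _, _, vt = np.linalg.svd(pts - c)
            return vt[-1], c                 # smallest singular vector = normal

        def grow_plane(points, seed_idx, radius=1.0, tol=0.15):
            """Grow a roof plane from a seed point until no new points fit."""
            tree = cKDTree(points)
            region = set(tree.query_ball_point(points[seed_idx], radius))
            frontier = set(region)
            while frontier:
                n, c = fit_plane(points[list(region)])
                nxt = set()
                for i in frontier:
                    for j in tree.query_ball_point(points[i], radius):
                        if j not in region and abs((points[j] - c) @ n) < tol:
                            nxt.add(j)
                region |= nxt
                frontier = nxt
            return sorted(region)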

  2. Multi Sensor Data Integration for AN Accurate 3d Model Generation

    NASA Astrophysics Data System (ADS)

    Chhatkuli, S.; Satoh, T.; Tachibana, K.

    2015-05-01

    The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create 3D city models. Aerial imagery produces overall decent 3D city models and is generally suited to generating 3D models of building roofs and non-complex terrain. However, the 3D model automatically generated from aerial imagery generally lacks accuracy for roads under bridges, details under tree canopy, isolated trees, etc. Moreover, it also suffers in many cases from undulated road surfaces, non-conforming building shapes, and the loss of minute details such as street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for the weaknesses of each data set and helps create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also removed automatically. Hence, even though the two data sets were acquired in different time periods, the integrated data set, i.e. the final 3D model, was generally noise-free and without unnecessary details.

  3. Volumetric LiDAR scanning of a wind turbine wake and comparison with a 3D analytical wake model

    NASA Astrophysics Data System (ADS)

    Carbajo Fuertes, Fernando; Porté-Agel, Fernando

    2016-04-01

    A correct estimation of future power production is of capital importance whenever the feasibility of a future wind farm is being studied. This power estimation relies mostly on three aspects: (1) a reliable measurement of the wind resource in the area, (2) a well-established power curve for the future wind turbines and (3) an accurate characterization of wake effects, the latter being arguably the most challenging due to the complexity of the phenomenon and the lack of extensive full-scale data sets that could be used to validate analytical or numerical models. The current project addresses the problem of obtaining a volumetric description of the full-scale wake of a 2 MW wind turbine in terms of velocity deficit and turbulence intensity using three scanning wind LiDARs and two sonic anemometers. The upstream flow conditions are characterized by one scanning LiDAR and two sonic anemometers, which have been used to calculate incoming vertical profiles of horizontal wind speed, wind direction and an approximation to turbulence intensity, as well as the thermal stability of the atmospheric boundary layer. The wake is characterized by two scanning LiDARs working simultaneously and pointing downstream from the base of the wind turbine. The direct LiDAR measurements of radial wind speed can be corrected using the upstream conditions to provide good estimates of the horizontal wind speed at any point downstream of the wind turbine. All these data combined allow the volumetric reconstruction of the wake in terms of velocity deficit as well as turbulence intensity. Finally, the predictions of a 3D analytical model [1] are compared to the 3D LiDAR measurements of the wind turbine. The model is derived by applying the laws of conservation of mass and momentum and assuming a Gaussian distribution for the velocity deficit in the wake. This model has already been validated using high-resolution wind-tunnel measurements.
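
    For reference, Gaussian wake models of the type cited above express the normalized velocity deficit in this general form (a sketch of the standard parameterization; the exact coefficients of model [1] are not reproduced here), with C_T the thrust coefficient, d_0 the rotor diameter, r the radial distance from the wake center, and sigma(x) a wake width growing with downwind distance x:

        \frac{\Delta U(x, r)}{U_\infty} = \left(1 - \sqrt{1 - \frac{C_T}{8\,(\sigma(x)/d_0)^2}}\right) \exp\!\left(-\frac{r^2}{2\,\sigma(x)^2}\right)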

  4. Calculation and Update of a 3d Building Model of Bavaria Using LIDAR, Image Matching and Catastre Information

    NASA Astrophysics Data System (ADS)

    Aringer, K.; Roschlaub, R.

    2013-09-01

    The Bavarian State Office for Surveying and Geoinformation has launched a statewide 3D Building Model with standardized roof shapes, without textures, for all 8.1 million buildings in Bavaria. For the acquisition of the 3D Building Model, LiDAR data are used as the data basis, together with the building ground plans of the official cadastral map and a list of standardized roof shapes. The data management of the 3D Building Model is carried out in a central database using a nationwide standardized data model and the CityGML data exchange interface. On the one hand, the update of the 3D Building Model for new buildings is done by terrestrial building measurements within the maintenance process of the cadastre. On the other hand, the roofs of buildings which were built after the LiDAR flight and which have not yet been measured terrestrially are captured by means of digital surface models derived from image matching of oriented aerial photographs (DSM from image matching).

  5. Three-dimensional (3D) GIS-based coastline change analysis and display using LIDAR series data

    NASA Astrophysics Data System (ADS)

    Zhou, G.

    This paper presents a method to visualize and analyze topography and topographic changes in a coastline area. The study area, Assateague Island National Seashore (AINS), covers a 37-mile stretch of Assateague Island on the Eastern Shore, VA. DEM data sets from 1996 through 2000 for various time intervals, e.g., year-to-year, season-to-season, date-to-date, and the full four-year span (1996-2000), were created. The spatial patterns and volumetric amounts of erosion and deposition of each part were calculated on a cell-by-cell basis. A 3D dynamic display system for visualizing dynamic coastal landforms has been developed using ArcView Avenue. The system was developed into five functional modules: Dynamic Display, Analysis, Chart Analysis, Output, and Help. The Display module includes five types of displays: Shoreline Display, Shore Topographic Profile, Shore Erosion Display, Surface TIN Display, and 3D Scene Display. Visualized data include rectified and co-registered multispectral Landsat digital imagery and NOAA/NASA ATM LIDAR data. The system is demonstrated using multitemporal digital satellite and LIDAR data to display changes on the Assateague Island National Seashore, Virginia. The results demonstrate the need for further study and comparison of the complex morphological changes, natural or human-induced, that occur on barrier islands.
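
    A minimal sketch of the cell-by-cell erosion/deposition computation, assuming two co-registered DEM grids with a common cell size:

        import numpy as np

        def erosion_deposition(dem_t0, dem_t1, cell_size):
            """Volumetric change between two co-registered DEMs on the same grid."""
            dz = dem_t1 - dem_t0                        # elevation change per cell (m)
            cell_area = cell_size ** 2                  # m^2
            deposition = dz[dz > 0].sum() * cell_area   # m^3 gained
            erosion = -dz[dz < 0].sum() * cell_area     # m^3 lost
            return dz, erosion, deposition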

  6. Automatic reconstruction of 3D urban landscape by computing connected regions and assigning them an average altitude from LiDAR point cloud image

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2014-10-01

    The demand for 3D city modeling has been increasing in many applications, such as urban planning, computer gaming with realistic city environments, car navigation systems showing 3D city maps, and virtual city tourism inviting future visitors on a virtual city walkthrough. We propose a simple method for reconstructing a 3D urban landscape from airborne LiDAR point cloud data. The automatic reconstruction of the 3D urban landscape was implemented by integrating all connected regions, which were extracted and extruded from altitude mask images. These mask images were generated from the gray-scale LiDAR image by altitude threshold ranges. In this study, we successfully demonstrated the proposed method on a Kanazawa city center scene using airborne LiDAR point cloud data.
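
    A minimal sketch of the mask-and-extrude idea, assuming a rasterized LiDAR height image; the altitude bands and function names are illustrative:

        import numpy as np
        from scipy import ndimage

        def extruded_blocks(height_img, bands):
            """Connected regions per altitude band, each assigned its mean height.

            height_img: 2D array of rasterized LiDAR heights (m).
            bands: list of (lo, hi) altitude thresholds defining the mask images.
            """
            blocks = []
            for lo, hi in bands:
                mask = (height_img >= lo) & (height_img < hi)
                labels, n = ndimage.label(mask)     # connected regions in the mask
                for lab in range(1, n + 1):
                    region = labels == lab
                    blocks.append((region, height_img[region].mean()))  # extrude to mean z
            return blocks

        # e.g. bands = [(2, 5), (5, 10), (10, 20), (20, 50)]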

  7. Remote Sensing of the 3D Wind and Turbulence Field by Coherent Doppler Lidars for Wind Power Applications

    NASA Astrophysics Data System (ADS)

    Sjöholm, M.; Courtney, M. S.; Enevoldsen, K. M.; Lindelöw, P.; Mann, J.; Mikkelsen, T.

    2008-12-01

    anemometer has recently provided some initial prospective results of this approach to measuring the 3D wind and turbulence field.

  8. Uas Topographic Mapping with Velodyne LiDAR Sensor

    NASA Astrophysics Data System (ADS)

    Jozkow, G.; Toth, C.; Grejner-Brzezinska, D.

    2016-06-01

    Unmanned Aerial System (UAS) technology is nowadays widely used in small-area topographic mapping due to low costs and the good quality of derived products. Since cameras typically used with UAS have some limitations, e.g. they cannot penetrate vegetation, LiDAR sensors are increasingly getting attention in UAS mapping. Sensor development has reached the point where cost and size suit the UAS platform, though LiDAR on UAS is still an emerging technology. One issue related to using LiDAR sensors on UAS is the limited performance of the navigation sensors used on UAS platforms. Therefore, various hardware and software solutions are investigated to increase the quality of UAS LiDAR point clouds. This work analyses several aspects of UAS LiDAR point cloud generation performance based on UAS flights conducted with the Velodyne laser scanner and cameras. Attention was primarily paid to trajectory reconstruction performance, which is essential for accurate point cloud georeferencing. Since the navigation sensors, especially Inertial Measurement Units (IMUs), may not offer sufficient performance, the estimated camera poses could help increase the robustness of the estimated trajectory and, subsequently, the accuracy of the point cloud. The accuracy of the final UAS LiDAR point cloud was evaluated on the basis of the generated DSM, including comparison with point clouds obtained from dense image matching. The results showed the need for more investigation of the MEMS IMU sensors used for UAS trajectory reconstruction. The accuracy of the UAS LiDAR point cloud, though lower than that of point clouds obtained from images, may still be sufficient for certain mapping applications where optical imagery is not useful.
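
    For background, the direct georeferencing that makes trajectory quality so critical has the standard form below (generic notation, not the authors'): each ground point is the GNSS/INS position plus the attitude-rotated, boresight- and lever-arm-corrected scanner vector,

        \vec{X}^{g}(t) = \vec{X}^{g}_{\mathrm{GNSS}}(t) + R^{g}_{b}(t)\left(R^{b}_{s}\,\vec{x}^{s}(t) + \vec{a}^{b}\right)

    where x^s is the raw lidar range vector in the scanner frame, R^b_s the boresight rotation, a^b the lever arm in the body frame, and R^g_b(t) the time-dependent body-to-ground attitude; errors in R^g_b(t) map directly into point cloud errors that grow with range.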

  9. A simple, low-cost conductive composite material for 3D printing of electronic sensors.

    PubMed

    Leigh, Simon J; Bradley, Robert J; Purssell, Christopher P; Billson, Duncan R; Hutchins, David A

    2012-01-01

    3D printing technology can produce complex objects directly from computer aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes ('rapid prototyping') before production. In recent years, however, there has been a move to adopt the technology as a full-scale manufacturing solution. The advent of low-cost, desktop 3D printers such as the RepRap and Fab@Home has meant a wider user base is now able to have access to desktop manufacturing platforms, enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present the formulation of a simple conductive thermoplastic composite we term 'carbomorph' and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices, along with printed objects with embedded sensing capability. This advance in low-cost 3D printing will offer a new paradigm in the 3D printing field, with printed sensors and electronics embedded inside 3D printed objects in a single build process, without requiring complex or expensive materials incorporating additives such as carbon nanotubes. PMID:23185319

  11. A Simple, Low-Cost Conductive Composite Material for 3D Printing of Electronic Sensors

    PubMed Central

    Leigh, Simon J.; Bradley, Robert J.; Purssell, Christopher P.; Billson, Duncan R.; Hutchins, David A.

    2012-01-01

    3D printing technology can produce complex objects directly from computer aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes (‘rapid prototyping’) before production. In recent years, however, there has been a move to adopt the technology as a full-scale manufacturing solution. The advent of low-cost, desktop 3D printers such as the RepRap and Fab@Home has meant a wider user base is now able to have access to desktop manufacturing platforms, enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present the formulation of a simple conductive thermoplastic composite we term ‘carbomorph’ and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices, along with printed objects with embedded sensing capability. This advance in low-cost 3D printing will offer a new paradigm in the 3D printing field, with printed sensors and electronics embedded inside 3D printed objects in a single build process, without requiring complex or expensive materials incorporating additives such as carbon nanotubes. PMID:23185319

  12. A convolutional learning system for object classification in 3-D Lidar data.

    PubMed

    Prokhorov, Danil

    2010-05-01

    In this brief, a convolutional learning system for classification of segmented objects represented in 3-D as point clouds of laser reflections is proposed. Several novelties are discussed: (1) extension of the existing convolutional neural network (CNN) framework to direct processing of 3-D data in a multiview setting which may be helpful for rotation-invariant consideration, (2) improvement of CNN training effectiveness by employing a stochastic meta-descent (SMD) method, and (3) combination of unsupervised and supervised training for enhanced performance of CNN. CNN performance is illustrated on a two-class data set of objects in a segmented outdoor environment.

  13. Investigation of leakage current and breakdown voltage in irradiated double-sided 3D silicon sensors

    NASA Astrophysics Data System (ADS)

    Dalla Betta, G.-F.; Ayllon, N.; Boscardin, M.; Hoeferkamp, M.; Mattiazzo, S.; McDuff, H.; Mendicino, R.; Povoli, M.; Seidel, S.; Sultan, D. M. S.; Zorzi, N.

    2016-09-01

    We report on an experimental study aimed at gaining deeper insight into the leakage current and breakdown voltage of irradiated double-sided 3D silicon sensors from FBK, so as to improve both the design and the fabrication technology for use at future hadron colliders such as the High Luminosity LHC. Several 3D diode samples of different technologies and layout are considered, as well as several irradiations with different particle types. While the leakage current follows the expected linear trend with radiation fluence, the breakdown voltage is found to depend on both the bulk damage and the surface damage, and its values can vary significantly with sensor geometry and process details.
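
    For reference, the linear trend mentioned above is the standard bulk-damage parameterization (textbook form; the paper's fitted values are not reproduced here), in which the increase in leakage current is proportional to the 1 MeV-neutron-equivalent fluence and the depleted volume,

        \Delta I = \alpha\,\Phi_{\mathrm{eq}}\,V

    with alpha the current-related damage constant (of order 10^-17 A/cm at room temperature after standard annealing) and V the depleted volume.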

  14. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
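
    A minimal sketch of a quaternion-based complementary filter of the kind mentioned (gyro integration blended with an accelerometer tilt reference; the gain, ZYX axis convention, and the omission of magnetometer yaw correction are simplifications):

        import numpy as np

        def q_mul(a, b):
            """Hamilton product of quaternions (w, x, y, z)."""
            w1, x1, y1, z1 = a
            w2, x2, y2, z2 = b
            return np.array([
                w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2])

        def complementary_step(q, gyro, accel, dt, alpha=0.98):
            """One filter step: integrate gyro, then lean toward the accel tilt."""
            # gyro integration: q <- q + 0.5 * q (x) (0, omega) * dt
            q_gyro = q + 0.5 * q_mul(q, np.array([0.0, *gyro])) * dt
            q_gyro /= np.linalg.norm(q_gyro)
            # tilt from gravity: roll/pitch only (yaw needs a magnetometer)
            ax, ay, az = accel / np.linalg.norm(accel)
            roll, pitch = np.arctan2(ay, az), np.arctan2(-ax, np.hypot(ay, az))
            q_acc = np.array([                      # ZYX convention, yaw = 0
                np.cos(roll/2)*np.cos(pitch/2), np.sin(roll/2)*np.cos(pitch/2),
                np.cos(roll/2)*np.sin(pitch/2), -np.sin(roll/2)*np.sin(pitch/2)])
            # complementary blend (linear interpolation, then renormalize)
            q_new = alpha * q_gyro + (1 - alpha) * q_acc
            return q_new / np.linalg.norm(q_new)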

  15. 3D UHDTV contents production with 2/3-inch sensor cameras

    NASA Astrophysics Data System (ADS)

    Hamacher, Alaric; Pardeshi, Sunil; Whangboo, Taeg-Keun; Kim, Sang-Il; Lee, Seung-Hyun

    2015-03-01

    Most UHDTV content is presently created using single large CMOS sensor cameras as opposed to 2/3-inch small sensor cameras, which is the standard for HD content. The consequence is a technical incompatibility that does not only affect the lenses and accessories of these cameras, but also the content creation process in 2D and 3D. While UHDTV is generally acclaimed for its superior image quality, the large sensors have introduced new constraints in the filming process. The camera sizes and lens dimensions have also introduced new obstacles for their use in 3D UHDTV production. The recent availability of UHDTV broadcast cameras with traditional 2/3-inch sensors can improve the transition towards UHDTV content creation. The following article will evaluate differences between the large-sensor UHDTV cameras and the 2/3-inch 3 CMOS solution and address 3D-specific considerations, such as possible artifacts like chromatic aberration and diffraction, which can occur when mixing HD and UHD equipment. The article will further present a workflow with solutions for shooting 3D UHDTV content on the basis of the Grass Valley LDX4K compact camera, which is the first available UHDTV camera with 2/3-inch UHDTV broadcast technology.

  16. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars.

    PubMed

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629

  17. How integrating 3D LiDAR data in the dike surveillance protocol: The French case

    NASA Astrophysics Data System (ADS)

    Bretar, F.; Mériaux, P.; Fauchard, C.

    2012-04-01

    carried out. A LiDAR system is able to acquire data on a dike structure of up to 80 km per day, which makes this technique also valuable in emergency situations. It provides additional valuable products, such as information on dike slopes and crests and their near environment (river banks, etc.). Moreover, where there is vegetation, LiDAR data make it possible to study structures or defects that are hidden in images, such as the erosion of riverbanks under forest vegetation. The possibility of studying the vegetation itself is also of high importance: the development of woody vegetation near or on the dike is a major risk factor. Surface singularities are often signs of disorder, or suspected disorder, in the dike itself: for example, subsidence or a sinkhole on a crest may result from an internal erosion collapse. Finally, high-resolution topographic data contribute to building a specific geomechanical model of the dike that, after incorporating data provided by geophysical and geotechnical surveys, is integrated into calculations of the structure's stability. Integrating the regular use of LiDAR data into the dike surveillance protocol is not yet operational in France. However, the high number of French stakeholders at the national level (on average, one stakeholder for only 8-9 km of dike!) and the real added value of LiDAR data make a spatial data infrastructure valuable (web services for processing the data, and for consulting and filling the database in the field when performing the local diagnosis).

  18. 3D Scan of Ornamental Column (huabiao) Using Terrestrial LiDAR and Hand-held Imager

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Wang, C.; Xi, X.

    2015-08-01

    In ancient China, a huabiao was a type of ornamental column used to decorate important buildings. We carried out a 3D scan of a huabiao located at Peking University, China. This huabiao was built no later than 1742. It is carved from white marble and is 8 meters in height, with clouds and dragons in various postures carved on its body. Two instruments were used to acquire the point cloud of this huabiao: a terrestrial LiDAR (Riegl VZ-1000) and a hand-held imager (Mantis Vision F5). In this paper, the details of the experiment are described, including the differences between the two instruments, such as working principle, spatial resolution, accuracy, instrument dimensions and workflow. The point clouds obtained by the two instruments are compared, and the registered point cloud of the huabiao is also presented. These results should be of interest to the archaeology and heritage research communities.

  19. Incorporation of 3-D Scanning Lidar Data into Google Earth for Real-time Air Pollution Observation

    NASA Astrophysics Data System (ADS)

    Chiang, C.; Nee, J.; Das, S.; Sun, S.; Hsu, Y.; Chiang, H.; Chen, S.; Lin, P.; Chu, J.; Su, C.; Lee, W.; Su, L.; Chen, C.

    2011-12-01

    A 3-D Differential Absorption Scanning Lidar (DIASL) system has been designed to be small and lightweight, suitable for installation in various vehicles and places for monitoring air pollutants, and it displays detailed real-time temporal and spatial variability of trace gases via Google Earth. The fast scanning techniques and visual information can rapidly identify the locations and sources of polluted gases and assess the most affected areas, helping the Environmental Protection Agency (EPA) protect public health and abate air pollution as quickly as possible. The distributions of atmospheric pollutants and their relationship with local meteorological parameters measured with ground-based instruments will also be discussed. Details will be presented at the upcoming symposium.

  20. Characterizing the influence of surface roughness and inclination on 3D vision sensor performance

    NASA Astrophysics Data System (ADS)

    Hodgson, John R.; Kinnell, Peter; Justham, Laura; Jackson, Michael R.

    2015-12-01

    This paper reports a methodology to evaluate the performance of 3D scanners, focusing on the influence of surface roughness and inclination on the number of acquired data points and on measurement noise. Point clouds were captured from samples mounted on a robotic pan-tilt stage using an Ensenso active stereo 3D scanner. The samples have isotropic texture and range in surface roughness (Ra) from 0.09 to 0.46 μm. By extracting the point cloud quality indicators, point density and standard deviation, at a multitude of inclinations, maps of scanner performance are created. These maps highlight the performance envelope of the sensor, the aim being to predict and compare scanner performance on real-world surfaces rather than idealized artifacts. The results highlight the need to characterize 3D vision sensors by their measurement limits as well as by best-case performance, determined either by theoretical calculation or by measurements in ideal circumstances.
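
    The two quality indicators used here, point density and the standard deviation of plane-fit residuals, are simple to compute. A minimal sketch with assumed helper names, where `points` is an N x 3 patch captured from a nominally planar sample:

```python
import numpy as np

def plane_fit_noise(points):
    """Standard deviation of point-to-plane distances for a
    least-squares plane fitted through the patch (measurement noise)."""
    centroid = points.mean(axis=0)
    # Smallest right singular vector of the centered patch = plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    distances = (points - centroid) @ normal
    return distances.std()

def point_density(points, area_m2):
    """Acquired points per unit sample area (points per square meter)."""
    return len(points) / area_m2
```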

  1. Toward 3D Reconstruction of Outdoor Scenes Using an MMW Radar and a Monocular Vision Sensor

    PubMed Central

    El Natour, Ghina; Ait-Aider, Omar; Rouveure, Raphael; Berry, François; Faure, Patrice

    2015-01-01

    In this paper, we introduce a geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera. We rely on the complementarity of these two sensors: the robustness to environmental conditions and depth detection ability of the radar, on the one hand, and the high spatial resolution of a vision sensor, on the other. Firstly, geometric modeling of each sensor and of the entire system is presented. Secondly, we address the global calibration problem, which consists of finding the exact transformation between the sensors’ coordinate systems. Two implementation methods are proposed and compared, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Unlike existing methods, no special configuration of the 3D points is required for calibration. This makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple, yet robust 3D reconstruction method based on the sensors’ geometry. This method enables one to reconstruct observed features in 3D from a single acquisition (static sensor), a property not always found in state-of-the-art outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data. PMID:26473874

  2. Fusing inertial sensor data in an extended Kalman filter for 3D camera tracking.

    PubMed

    Erdem, Arif Tanju; Ercan, Ali Özer

    2015-02-01

    In a setup where camera measurements are used to estimate 3D egomotion in an extended Kalman filter (EKF) framework, it is well known that inertial sensors (i.e., accelerometers and gyroscopes) are especially useful when the camera undergoes fast motion. Inertial sensor data can be fused with the camera measurements in the EKF either in the correction stage (as measurement inputs) or in the prediction stage (as control inputs). In the literature, generally only one type of inertial sensor is employed in the EKF, or when both are employed they are fused in the same stage. In this paper, we provide an extensive performance comparison of every possible combination of fusing accelerometer and gyroscope data as control or measurement inputs, using the same data set collected at different motion speeds. In particular, we compare the performance of the different approaches based on 3D pose errors, in addition to the camera reprojection errors commonly reported in the literature, which provides further insight into their strengths and weaknesses. We show, using both simulated and real data, that it is always better to fuse both sensors in the measurement stage and that, in particular, the accelerometer helps more with 3D position tracking accuracy, whereas the gyroscope helps more with 3D orientation tracking accuracy. We also propose a simulated data generation method, which is beneficial for the design and validation of tracking algorithms involving both camera and inertial measurement unit measurements in general.
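
    The fusion options compared in the paper differ only in where the inertial data enter a standard EKF iteration. A schematic skeleton, not the authors' filter, with all model functions (`f`, `h`) and Jacobians (`F`, `H`) left as placeholders:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """Generic EKF iteration.
    Fusing inertial data as control input means it enters through u;
    fusing it as a measurement means it is stacked into z alongside
    the camera features and modeled in h."""
    # Prediction stage (inertial-as-control fuses accel/gyro here).
    x_pred = f(x, u)
    Fk = F(x, u)
    P_pred = Fk @ P @ Fk.T + Q
    # Correction stage (inertial-as-measurement fuses accel/gyro here).
    Hk = H(x_pred)
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new
```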

  3. 3D-FBK Pixel Sensors: Recent Beam Tests Results with Irradiated Devices

    SciTech Connect

    Micelli, A.; Helle, K.; Sandaker, H.; Stugu, B.; Barbero, M.; Hugging, F.; Karagounis, M.; Kostyukhin, V.; Kruger, H.; Tsung, J.W.; Wermes, N.; Capua, M.; Fazio, S.; Mastroberardino, A.; Susinno, G.; Gallrapp, C.; Di Girolamo, B.; Dobos, D.; La Rosa, A.; Pernegger, H.; Roe, S.; et al.

    2012-04-30

    The Pixel Detector is the innermost part of the ATLAS experiment's tracking device at the Large Hadron Collider and plays a key role in the reconstruction of primary vertices from the collisions and of secondary vertices produced by short-lived particles. To cope with the high level of radiation produced during collider operation, it is planned to add an additional layer of sensors (the Insertable B-Layer, or IBL) to the present three layers of silicon pixel sensors that constitute the Pixel Detector. 3D silicon sensors are one of the technologies under study for the IBL. 3D silicon technology is an innovative combination of very-large-scale integration and Micro-Electro-Mechanical Systems in which electrodes are fabricated inside the silicon bulk instead of being implanted on the wafer surfaces. 3D sensors, with electrodes fully or partially penetrating the silicon substrate, are currently fabricated at different processing facilities in Europe and the USA. This paper reports on the June 2010 beam test results for irradiated 3D devices produced at FBK (Trento, Italy). The performance of these devices, all bump-bonded with the ATLAS pixel FE-I3 read-out chip, is compared to that observed before irradiation in a previous beam test.

  4. Display of real-time 3D sensor data in a DVE system

    NASA Astrophysics Data System (ADS)

    Völschow, Philipp; Münsterer, Thomas; Strobel, Michael; Kuhn, Michael

    2016-05-01

    This paper describes the implementation of displaying real-time processed LiDAR 3D data in a DVE pilot assistance system. The goal is to present the pilot with a comprehensive image of the surrounding world without misleading or cluttering information. 3D data that can be attributed, i.e. classified, to terrain or to predefined obstacle classes is depicted differently from data belonging to elevated objects that could not be classified. Display techniques may differ between head-down and head-up displays to avoid cluttering the outside view in the latter case. While terrain is shown as shaded surfaces with grid structures, or as grid structures alone, classified obstacles are typically displayed with obstacle symbols only. Data from objects elevated above ground are displayed as shaded 3D points in space. In addition, the displayed 3D points are accumulated over a certain time frame, which on the one hand yields a cohesive displayed structure and on the other hand displays moving objects correctly. Color coding or texturing can also be applied based on known terrain features such as land use.

  5. A volumetric sensor for real-time 3D mapping and robot navigation

    NASA Astrophysics Data System (ADS)

    Fournier, Jonathan; Ricard, Benoit; Laurendeau, Denis

    2006-05-01

    The use of robots for (semi-) autonomous operations in complex terrains such as urban environments poses difficult mobility, mapping, and perception challenges. To work efficiently, a robot should be provided with sensors and software such that it can perceive and analyze the world in 3D; real-time 3D sensing and perception in this operational context are paramount. To address these challenges, DRDC Valcartier has in recent years developed a compact sensor that combines a wide-baseline stereo camera and a laser scanner with a full 360-degree azimuth and 55-degree elevation field of view, allowing the robot to view and manage overhanging obstacles as well as obstacles at ground level. Sensing in 3D is common, but to efficiently navigate and work in complex terrain, the robot should also perceive, decide and act in three dimensions, so 3D information should be preserved and exploited in all steps of the process. To achieve this, we use a multiresolution octree to store the acquired data, allowing mapping of large environments while keeping the representation compact and memory efficient. Ray tracing is used to build and update the 3D occupancy model. This model is used, via a temporary 2.5D map, for navigation, obstacle avoidance and efficient frontier-based exploration. This paper describes the volumetric sensor concept, describes its design features and presents an overview of the 3D software framework that preserves 3D information through all computation steps. Simulation and real-world experiments are presented at the end of the paper to demonstrate the key elements of our approach.
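
    The occupancy update driven by ray tracing can be sketched as follows. This toy version uses a flat hash map of voxels and a sampled ray traversal rather than the multiresolution octree and exact ray tracer described above; all names and the log-odds constants are illustrative:

```python
import numpy as np

def update_occupancy(grid, origin, hit, res=0.1, l_free=-0.4, l_occ=0.85):
    """Log-odds occupancy update along one range measurement.
    'grid' maps voxel index tuples to log-odds values; a production
    system would store these in a multiresolution octree instead."""
    direction = hit - origin
    length = np.linalg.norm(direction)
    n_steps = int(length / res)
    # Cells traversed by the ray are observed free...
    for i in range(n_steps):
        p = origin + direction * (i / max(n_steps, 1))
        idx = tuple((p / res).astype(int))
        grid[idx] = grid.get(idx, 0.0) + l_free
    # ...and the cell containing the return is observed occupied.
    idx = tuple((hit / res).astype(int))
    grid[idx] = grid.get(idx, 0.0) + l_occ
```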

  6. Fiber optic vibration sensor for high-power electric machines realized using 3D printing technology

    NASA Astrophysics Data System (ADS)

    Igrec, Bojan; Bosiljevac, Marko; Sipus, Zvonimir; Babic, Dubravko; Rudan, Smiljko

    2016-03-01

    The objective of this work was to demonstrate a lightweight and inexpensive fiber-optic vibration sensor, built using 3D printing technology, for high-power electric machines and similar applications. The working principle is based on modulating the light intensity with a blade attached to a bendable membrane. The sensor prototype was manufactured using PolyJet Matrix technology with DM 8515 Grey 35 polymer. The sensor shows a linear response and the expected bandwidth (< 150 Hz), and from our measurements we estimated the damping ratio of the polymer used to be ζ ≈ 0.019. The developed prototype is simple to assemble, adjust, calibrate and repair.

  7. A novel sensor system for 3D face scanning based on infrared coded light

    NASA Astrophysics Data System (ADS)

    Modrow, Daniel; Laloni, Claudio; Doemens, Guenter; Rigoll, Gerhard

    2008-02-01

    In this paper we present a novel sensor system for three-dimensional face scanning applications. Its operating principle is based on active triangulation with a color-coded light approach. As it is implemented in the near-infrared band, the light used is invisible to human perception. Though the proposed sensor is primarily designed for face scanning and biometric applications, its performance characteristics are beneficial for technical applications as well. The acquisition of 3D data is real-time capable, provides accurate, high-resolution depth maps, and shows high robustness against ambient light. Most of the limiting factors of other 3D and face scanning sensors are thus eliminated, such as blinding and annoying light patterns, motion constraints, and scenarios highly restricted by ambient light.

  8. 3D silicon sensors with variable electrode depth for radiation hard high resolution particle tracking

    NASA Astrophysics Data System (ADS)

    Da Vià, C.; Borri, M.; Dalla Betta, G.; Haughton, I.; Hasi, J.; Kenney, C.; Povoli, M.; Mendicino, R.

    2015-04-01

    3D sensors, with electrodes micro-processed inside the silicon bulk using Micro-Electro-Mechanical System (MEMS) technology, were industrialized in 2012 and installed in 2014 in the first detector upgrade at the LHC, the ATLAS IBL. They are the radiation-hardest sensors ever made. A new idea is now being explored to enhance the three-dimensional nature of 3D sensors by processing collecting electrodes at different depths inside the silicon bulk. This technique uses the electric field strength to suppress the charge collection effectiveness of the regions outside the p-n electrodes' overlap. Evidence of this property is supported by test-beam data from irradiated and non-irradiated devices bump-bonded with pixel readout electronics, and by simulations. Applications include high-luminosity tracking in the high-multiplicity LHC forward regions. This paper describes the technical advantages of this idea and the rationale for the tracking application.

  9. Photon-counting lidar for aerosol detection and 3D imaging

    NASA Astrophysics Data System (ADS)

    Marino, Richard M.; Richardson, Jonathan; Garnier, Robert; Ireland, David; Bickmeier, Laura; Siracusa, Christina; Quinn, Patrick

    2009-05-01

    Laser-based remote sensing is undergoing a remarkable advance due to novel technologies developed at MIT Lincoln Laboratory. We have conducted recent experiments that demonstrate the utility of detecting and imaging low-density aerosol clouds. The Mobile Active Imaging LIDAR (MAIL) system uses a Lincoln Laboratory-developed microchip laser to transmit short pulses at a 14-16 kHz pulse repetition frequency (PRF), and a Lincoln Laboratory-developed 32x32 Geiger-mode avalanche-photodiode detector (GmAPD) array for single-photon counting and ranging. The microchip laser is a frequency-doubled, passively Q-switched Nd:YAG laser providing an average transmitted power of less than 64 milliwatts. When the avalanche photodiodes are operated in Geiger mode, they are reverse-biased above the breakdown voltage for a time that corresponds to the effective range gate, or range window, of interest. The time of flight, and therefore range, is determined from the measured laser transmit time and the digital time value from each pixel. The optical intensity of the received pulse is not measured, because the GmAPD is saturated by the electron avalanche. Instead, the reflectivity of the scene, or in this case the relative density of aerosols, is determined from the temporally and/or spatially analyzed detection statistics.
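
    Since each Geiger-mode pixel reports only a digital time value, the range computation itself is elementary; a sketch of the conversion, with illustrative numbers:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(t_transmit_s, t_detect_s):
    """Convert two-way time of flight into one-way range."""
    return 0.5 * C * (t_detect_s - t_transmit_s)

# e.g. a photon detected 6.67 microseconds after the pulse left
# corresponds to a target at roughly 1 km.
print(range_from_tof(0.0, 6.67e-6))  # ~999.8 m
```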

  10. A full-spectrum 3D noise-based infrared imaging sensor model

    NASA Astrophysics Data System (ADS)

    Richwine, Robert; Sood, Ashok; Puri, Yash; Heckathorn, Harry; Wilson, Larry; Goldspiel, Jules

    2006-08-01

    This model was developed in MATLAB with I/O links to Excel spreadsheets to add realistic and accurate sensor effects to scene-generator or actual sensor/camera images. The model imports scene-generator or sensor images, converts these radiance images into electron maps and digital count maps, and modifies the images in accordance with user-defined sensor characteristics such as the response map, the detector dark current map, defective pixel maps, and 3D noise (temporal and spatial noise). The model provides realistic line-of-sight motion and accurate, dynamic PSF blurring of the images. It allows the import of raw nonuniformities in dark current and photoresponse, performs a user-defined two-point nonuniformity correction to calculate gain and offset terms, and applies these terms to subsequent scene images. Some of the model's capabilities include the ability to fluctuate or ramp FPA and optics temperatures, or to modify the PSF on a frame-by-frame basis. The model also functions as an FPA/sensor performance predictor and an FPA data analysis tool, as FPA data frames can be input into its 3D noise evaluation section. The model was developed to produce realistic infrared images for IR sensors.

  11. Traceable profilometry with a 3D nanopositioning unit and zero indicating sensors in compensation method

    NASA Astrophysics Data System (ADS)

    Hoffmann, J.; Weckenmann, A.

    2005-01-01

    Conventional 3D profilers suffer, in their traceability and accuracy, from nonlinearities of the 1D sensor (optical or tactile) and from measuring principles in the scanning plane that differ from those along the sensor axis. These problems can be overcome using a traceably calibrated 3D positioning device combined with a probing system of negligible measuring range operated in the compensation method. The drawback is reduced dynamics, because the object to be measured must undergo accelerated movement in the z-direction to compensate for its varying height. Sensors with negligible measuring range suitable for this approach are an optical fixed-focus sensor (SIOS GmbH, Germany) and a self-made scanning tunneling sensor without a piezo scanner. Their integration into the nanopositioning device follows the model of a multisensor CMM, with fixed and known sensor positions with respect to the machine coordinate system, giving the possibility of using one sensor's data to navigate the other. The main applications are measurement tasks where outstanding accuracy outweighs the need for high measurement speed, e.g. the calibration of step-height and pitch standards for profilometry and also for SPM.

  12. Retrieval of Vegetation Structural Parameters and 3-D Reconstruction of Forest Canopies Using Ground-Based Echidna® Lidar

    NASA Astrophysics Data System (ADS)

    Strahler, A. H.; Yao, T.; Zhao, F.; Yang, X.; Schaaf, C.; Woodcock, C. E.; Jupp, D. L.; Culvenor, D.; Newnham, G.; Lovell, J.

    2010-12-01

    A ground-based, scanning, near-infrared lidar, the Echidna® validation instrument (EVI), built by CSIRO Australia, retrieves structural parameters of forest stands rapidly and accurately, and by merging multiple scans into a single point cloud, the lidar also provides 3-D stand reconstructions. Echidna lidar technology scans with pulses of light at 1064 nm wavelength and digitizes the full return waveform sufficiently finely to recover and distinguish the differing shapes of return pulses as they are scattered by leaves, trunks, and branches. Deployments in New England in 2007 and the southern Sierra Nevada of California in 2008 tested the ability of the instrument to retrieve mean tree diameter, stem count density (stems/ha), basal area, and above-ground woody biomass from single scans at points beneath the forest canopy. Parameters retrieved from five scans located within six 1-ha stand sites matched manually measured parameters with values of R2 = 0.94-0.99 in New England and 0.92-0.95 in the Sierra Nevada. Retrieved leaf area index (LAI) values were similar to those of the LAI-2000 and hemispherical photography. In New England, an analysis of variance showed that EVI-retrieved values were not significantly different from those of other methods (power = 0.84 or higher); in the Sierra, R2 = 0.96 and 0.81 against hemispherical photos and the LAI-2000, respectively. Foliage profiles, which measure leaf area as a function of canopy height, showed distinctly different shapes for the stands, depending on species composition and age structure. New England stand heights, obtained from foliage profiles, were not significantly different (power = 0.91) from RH100 values observed by LVIS in 2003. Three-dimensional stand reconstruction identifies one or more “hits” along the pulse path, with the peak return of each hit expressed as apparent reflectance. Returns are classified as trunk, leaf, or ground returns based on the shape of the return pulse and its location. These data provide a point

  13. Automatic Extraction of Building Roof Planes from Airborne LIDAR Data Applying AN Extended 3d Randomized Hough Transform

    NASA Astrophysics Data System (ADS)

    Maltezos, Evangelos; Ioannidis, Charalabos

    2016-06-01

    This study aims to automatically extract building roof planes from airborne LIDAR data by applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps: detection of building points, plane detection, and refinement. For the detection of the building points, the vegetated areas are first segmented from the scene content and the bare earth is then extracted. The automatic plane detection for each building applies extensions of the RHT: additional constraint criteria during the random selection of the 3 points, aimed at optimal adaptation to the building rooftops, and a simple accumulator design that efficiently detects the prominent planes. The refinement of the plane detection is based on the relationship between neighbouring planes, the locality of each point, and additional information. An experimental comparison verifies the advantages of the extended RHT over the 3D Standard Hough Transform (SHT), and the sensitivity of the proposed extensions and accumulator design is examined in terms of quality and computational time relative to the default RHT. A comparison between the extended RHT and RANSAC is also carried out. The plane detection results illustrate the potential of the proposed extended RHT in terms of robustness and efficiency for several applications.
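
    The core loop of a randomized Hough transform for planes is easy to sketch. The version below is a generic illustration, not the authors' code: `max_span` stands in for the kind of constraint criterion the paper adds to the random selection of the 3 points, and the sparse dictionary stands in for their accumulator design:

```python
import numpy as np

def rht_planes(points, n_iter=20000, angle_res=np.pi/90, rho_res=0.1,
               max_span=2.0, rng=np.random.default_rng(0)):
    """Randomized Hough transform for planes: sample 3 points, derive
    the plane parameters (theta, phi, rho), vote in a sparse accumulator."""
    acc = {}
    for _ in range(n_iter):
        p = points[rng.choice(len(points), 3, replace=False)]
        if max(np.linalg.norm(p[1] - p[0]),
               np.linalg.norm(p[2] - p[0])) > max_span:
            continue  # constraint criterion: reject widely separated triples
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n /= norm
        if n[2] < 0:
            n = -n  # resolve the sign ambiguity of the normal
        rho = n @ p[0]                            # plane offset
        theta = np.arccos(np.clip(n[2], -1, 1))   # tilt from vertical
        phi = np.arctan2(n[1], n[0])              # azimuth
        cell = (int(theta / angle_res), int(phi / angle_res),
                int(rho / rho_res))
        acc[cell] = acc.get(cell, 0) + 1
    return max(acc, key=acc.get), acc  # most-voted plane cell, full accumulator
```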

  14. Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    PubMed Central

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-01-01

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate. PMID:25375758

  15. Identifying Standing Dead Trees in Forest Areas Based on 3d Single Tree Detection from Full Waveform LIDAR Data

    NASA Astrophysics Data System (ADS)

    Yao, W.; Krzystek, P.; Heurich, M.

    2012-07-01

    In forest ecology, a snag refers to a standing, partly or completely dead tree, often missing its top or most of the smaller branches. The accurate estimation of live and dead biomass in forested ecosystems is important for studies of carbon dynamics, biodiversity, and forest management, so an understanding of its availability and spatial distribution is required. So far, LiDAR remote sensing has been used successfully to assess live trees and their biomass, but studies focusing on dead trees are rare. This paper develops a methodology for retrieving individual dead trees in a mixed mountain forest using features derived from small-footprint airborne full-waveform LIDAR data. First, the 3D coordinates of the laser-beam reflections and the pulse intensity and width are extracted by waveform decomposition. Second, 3D single trees are detected by an integrated approach that delineates both dominant tree crowns and small understory trees in the canopy height model (CHM), using the watershed algorithm followed by normalized-cuts segmentation of merged watershed areas. Single trees are thus obtained as 3D point segments with waveform-specific features per point. The tree segments are then passed to a feature extraction process that derives geometric and reflectional features at the single-tree level, e.g. crown volume and maximum crown diameter, mean intensity, gap fraction, etc. Finally, the feature space spanned by the tree segments is forwarded to a binary support vector machine (SVM) classifier to discriminate dead trees from living ones. The methodology is applied to datasets captured with the Riegl LMS-Q560 laser scanner at a point density of 25 points/m2 in the Bavarian Forest National Park, Germany, under leaf-on and leaf-off conditions, for Norway spruce, European beech and sycamore maple. The classification experiments lead in the best case to an overall accuracy of 73% in a leaf
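
    The final classification step maps directly onto a standard SVM pipeline. A minimal sketch using scikit-learn, with placeholder random data standing in for the per-segment geometric and reflectional features (crown volume, mean intensity, gap fraction, ...) and the dead/living labels:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Per-tree feature vectors derived from the waveform attributes
# (random placeholders here; real values come from the tree segments).
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 6))
labels = rng.integers(0, 2, size=200)  # 1 = dead tree, 0 = living tree

# RBF-kernel SVM with feature standardization, scored by
# cross-validated overall accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, features, labels, cv=5).mean())
```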

  16. A sensor skid for precise 3D modeling of production lines

    NASA Astrophysics Data System (ADS)

    Elseberg, J.; Borrmann, D.; Schauer, J.; Nüchter, A.; Koriath, D.; Rautenberg, U.

    2014-05-01

    Motivated by the increasing need for rapid 3D characterization of environments, we designed and built a sensor skid that automates the work of an operator of terrestrial laser scanners. The system combines terrestrial laser scanning with kinematic laser scanning and uses a novel semi-rigid SLAM method. It enables us to digitize factory environments without the need to stop production. The acquired 3D point clouds are precise and suitable for detecting objects that collide with items moved along the production line.

  17. Hand/eye calibration of a robot arm with a 3D visual sensor

    NASA Astrophysics Data System (ADS)

    Kim, Min-Young; Cho, Hyungsuck; Kim, Jae H.

    2001-10-01

    Hand/eye calibration is useful in many industrial applications, for instance grasping objects or reconstructing 3D scenes. The calibration of a robot system with a visual sensor is essentially the calibration of the robot, the sensor, and the hand-to-eye relation. This paper describes a new technique for computing the 3D position and orientation of a 3D visual sensor system relative to the end effector of a robot manipulator in an eye-on-hand configuration. When the positions of feature points on a calibration target are known in sensor coordinates at each robot movement, their positions are known in world coordinates, and the relative robot movement between two motions is known, a homogeneous equation of the form AX = XB can be derived. To obtain a unique solution for X, it is necessary to make two relative robot arm movements and to form a system of two equations: A1X = XB1 and A2X = XB2. In this paper, a closed-form solution of this calibration system is derived, and the constraints for the existence of a unique solution are described in detail. Test results obtained through a series of simulations show that this technique is a simple, efficient, and accurate method for hand/eye calibration.
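
    With more than two motions, the same AX = XB system is commonly solved in a least-squares sense. The sketch below implements the well-known Park and Martin (1994) closed-form approach rather than this paper's own derivation; it assumes at least two relative motions with non-parallel rotation axes:

```python
import numpy as np
from scipy.linalg import logm, sqrtm, inv

def solve_ax_xb(As, Bs):
    """Least-squares solution of A_i X = X B_i (Park & Martin 1994).
    As, Bs: lists of 4x4 homogeneous relative motions of the robot
    hand (A_i) and of the sensor (B_i); returns the 4x4 hand-eye X."""
    M = np.zeros((3, 3))
    for A, B in zip(As, Bs):
        # Rotation "axis" vectors via the matrix logarithm (vee operator).
        Sa = np.real(logm(A[:3, :3]))
        Sb = np.real(logm(B[:3, :3]))
        a = np.array([Sa[2, 1], Sa[0, 2], Sa[1, 0]])
        b = np.array([Sb[2, 1], Sb[0, 2], Sb[1, 0]])
        M += np.outer(b, a)
    Rx = inv(np.real(sqrtm(M.T @ M))) @ M.T  # best-fit rotation
    # Translation from the stacked system (R_A - I) t_X = R_X t_B - t_A.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```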

  18. Tracking naturally occurring indoor features in 2-D and 3-D with lidar range/amplitude data

    SciTech Connect

    Adams, M.D.; Kerstens, A.

    1998-09-01

    Sensor-data processing for the interpretation of a mobile robot's indoor environment, and the manipulation of these data for reliable localization, are still some of the most important issues in robotics. This article presents algorithms that determine the true position of a mobile robot, based on real 2-D and 3-D optical range and intensity data. The authors start with the physics of the particular type of sensor used, so that the extraction of reliable and repeatable information (namely, edge coordinates) can be determined, taking into account the noise associated with each range sample and the possibility of optical multiple-path effects. Again applying the physical model of the sensor, the estimated positions of the mobile robot and the uncertainty in these positions are determined. They demonstrate real experiments using 2-D and 3-D scan data taken in indoor environments. To update the robot's position reliably, the authors address the problem of matching the information recorded in a scan to, first, an a priori map, and second, information recorded in previous scans, eliminating the need for an a priori map.

  19. Structured-Light Sensor Using Two Laser Stripes for 3D Reconstruction without Vibrations

    PubMed Central

    Usamentiaga, Rubén; Molleda, Julio; Garcia, Daniel F.

    2014-01-01

    3D reconstruction based on laser light projection is a well-known method that generally provides accurate results. However, when used for inspection in uncontrolled environments, it is greatly affected by vibrations. This paper presents a structured-light sensor based on two laser stripes that provides 3D reconstruction without vibration artifacts. Using more than one laser stripe provides redundant information that is used to compensate for the vibrations. This work also proposes an accurate calibration process for the sensor based on standard calibration plates. A series of experiments evaluates the proposed method using a mechanical device that simulates vibrations. Results show excellent performance, with very good accuracy. PMID:25347586

  20. Angle extended linear MEMS scanning system for 3D laser vision sensor

    NASA Astrophysics Data System (ADS)

    Pang, Yajun; Zhang, Yinxin; Yang, Huaidong; Zhu, Pan; Gai, Ye; Zhao, Jian; Huang, Zhanhua

    2016-09-01

    The scanning system is often considered the most important part of a 3D laser vision sensor. In this paper, we propose a method for the optical design of an angle-extended linear MEMS scanning system, which features a large scanning angle, a small beam divergence and a small spot size for 3D laser vision sensors. The design principle and theoretical formulas are derived rigorously. With the help of the software ZEMAX, a linear scanning optical system based on MEMS has been designed. Results show that the designed system extends the scanning angle from ±8° to ±26.5° with a divergence angle smaller than 3.5 mrad, while the spot size is reduced by a factor of 4.545.

  1. Design and verification of diffractive optical elements for speckle generation of 3-D range sensors

    NASA Astrophysics Data System (ADS)

    Du, Pei-Qin; Shih, Hsi-Fu; Chen, Jenq-Shyong; Wang, Yi-Shiang

    2016-09-01

    Optical projection of speckles is one of the structured-light methods that have been applied to three-dimensional (3-D) range sensors. This paper investigates the design and fabrication of diffractive optical elements (DOEs) for generating a light field with uniformly distributed speckles. Based on the principles of computer-generated holograms, the iterative Fourier transform algorithm was adopted for the DOE design. It was used to calculate the phase map for diffracting the incident laser beam into a goal pattern with distributed speckles. Four patterns were designed in the study. Their phase maps were first examined with a spatial light modulator and then fabricated on glass substrates by microfabrication processes. Finally, the diffraction characteristics of the fabricated devices were verified. The experimental results show that the proposed methods are applicable to the DOE design of 3-D range sensors. Furthermore, any desired diffraction area and speckle density could be achieved according to the relations presented in the paper.

  2. 3D monolithically stacked CMOS Active Pixel Sensors for particle position and direction measurements

    NASA Astrophysics Data System (ADS)

    Servoli, L.; Passeri, D.; Morozzi, A.; Magalotti, D.; Piperku, L.

    2015-01-01

    In this work we propose a 3D monolithically stacked, multi-layer detector based on CMOS Active Pixel Sensor (APS) layers, which allows accurate estimation of both the impact point and the incidence angle of an ionizing particle. The system features two fully functional CMOS APS matrix detectors, each including both the sensing area and the control/signal elaboration circuitry, stacked in a monolithic device by means of Through-Silicon Via (TSV) connections, thanks to the capabilities of the 130 nm Chartered/Tezzaron CMOS vertical-scale integration (3D-IC) technology. To evaluate the suitability of the two-layer monolithic active pixel sensor system for reconstructing particle tracks, tests with a 3 MeV proton beam were carried out at the INFN LABEC laboratory in Florence, Italy.

  3. Nodes Localization in 3D Wireless Sensor Networks Based on Multidimensional Scaling Algorithm

    PubMed Central

    2014-01-01

    In recent years, there has been huge advancement in wireless sensor computing technology. Today, the wireless sensor network (WSN) has become a key technology for different types of smart environments. Node localization in WSNs has arisen as a very challenging problem in the research community, as most WSN applications are not useful without a priori knowledge of the node positions. Adding a GPS receiver to each node is an expensive solution and inapplicable in indoor environments. In this paper, we implemented and evaluated an algorithm based on the multidimensional scaling (MDS) technique for three-dimensional (3D) node localization in WSNs, using an improved heuristic method for distance calculation. Using extensive simulations, we investigated our approach with regard to various network parameters. We compared the simulation results with other approaches for 3D WSN localization and showed that our approach outperforms the other techniques in terms of accuracy. PMID:27437480
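
    The MDS step at the heart of such localization schemes is classical multidimensional scaling: recovering coordinates from pairwise distance estimates by double centering and eigendecomposition. A minimal sketch; note the map it returns is only relative and must still be aligned to anchor nodes to obtain absolute positions:

```python
import numpy as np

def classical_mds(D, dim=3):
    """Classical MDS: recover node coordinates (up to a rigid
    transform) from an n x n matrix D of pairwise distance estimates."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # top-'dim' eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```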

  4. A 3D Model of the Thermoelectric Microwave Power Sensor by MEMS Technology.

    PubMed

    Yi, Zhenxiang; Liao, Xiaoping

    2016-01-01

    In this paper, a novel 3D model is proposed to describe the temperature distribution of a thermoelectric microwave power sensor. In this 3D model, the heat flux density decreases from the upper surface to the lower surface of the GaAs substrate, whereas it was assumed constant in the 2D model. The power sensor is fabricated by a GaAs monolithic microwave integrated circuit (MMIC) process and micro-electro-mechanical system (MEMS) technology. The microwave performance experiment shows that the S11 is less than -26 dB over the frequency band of 1-10 GHz. The power response experiment demonstrates that the output voltage increases from 0 mV to 27 mV as the incident power varies from 1 mW to 100 mW. The measured sensitivity is about 0.27 mV/mW, and the calculated result from the 3D model is 0.28 mV/mW. The relative error has been reduced from 7.5% for the 2D model to 3.7% for the 3D model. PMID:27338395

  8. An approach for the calibration of a combined RGB-sensor and 3D-camera device

    NASA Astrophysics Data System (ADS)

    Schulze, M.

    2011-07-01

    The fields of application for 3D cameras are very different, because of their high image frequency and direct acquisition of 3D data. Often, 3D cameras are used in mobile robotics, for obstacle detection or object recognition, so they are also interesting for applications in agriculture in combination with mobile robots. Here, in addition to 3D data, there is often a need for color information for each 3D point. Unfortunately, 3D cameras do not capture any color information, so an additional sensor is necessary, such as RGB and possibly NIR. To combine the data of two different sensors, referencing them to each other via calibration is essential. This paper presents several calibration methods and discusses their accuracy potential. Based on a spatial resection, the algorithm determines the translation and rotation between the two sensors and the interior orientation of the sensor used.

  9. Design of 3D measurement system based on multi-sensor data fusion technique

    NASA Astrophysics Data System (ADS)

    Zhang, Weiguang; Han, Jun; Yu, Xun

    2009-05-01

    With the rapid development of shape measurement techniques, the multi-sensor approach has become a valid way to improve accuracy, extend the measuring range, reduce occlusion, realize multi-resolution measurement, and increase measuring speed simultaneously. Sensors in a multi-sensor system can have different system parameters, and they may have different measuring ranges and precisions. Light sectioning is a useful technique for 3D profile measurement: it is insensitive to the optical properties of the object surface and places scarcely any demands on the surroundings. A multi-sensor system scheme using light sectioning and multi-sensor data fusion techniques is presented for the measurement of aviation engine blades and spiral bevel gears. A system model is developed to relate the measuring range and precision to the system parameters, which were set according to a system error analysis and the required measuring range and precision. The results show that the system is more universal than its predecessor, that its accuracy is about 0.05 mm over a 60 × 60 mm² measuring range, and that it successfully measures the aerodynamic profile curve of aviation engine blades and the tooth profile of spiral bevel gears with 360° multi-resolution measurement capability.

  10. NASA DC-8 Airborne Scanning Lidar Sensor Development

    NASA Technical Reports Server (NTRS)

    Nielsen, Norman B.; Uthe, Edward E.; Kaiser, Robert D.; Tucker, Michael A.; Baloun, James E.; Gorordo, Javier G.

    1996-01-01

    The NASA DC-8 aircraft is used to support a variety of in-situ and remote sensors for conducting environmental measurements over global regions. As part of the Atmospheric Effects of Aviation Program (AEAP), the DC-8 is scheduled to conduct atmospheric aerosol, gas chemistry and radiation measurements of subsonic aircraft contrails and cirrus clouds. A scanning lidar system is being developed for installation on the DC-8 to support and extend the domain of the AEAP measurements. The design and objectives of the DC-8 scanning lidar are presented.

  12. Integration of GPR and Laser Position Sensors for Real-Time 3D Data Fusion

    NASA Astrophysics Data System (ADS)

    Grasmueck, M.; Viggiano, D.

    2005-05-01

    Non-invasive 3D imaging visualizes anatomy and contents inside objects. Such tools are a commodity for medical doctors diagnosing a patient's health without a scalpel and for airport security staff inspecting the contents of baggage without opening it. For geologists, hydrologists, archeologists and engineers wanting to see inside the shallow subsurface, such 3D tools are still a rarity. Theory and practice show that full-resolution 3D Ground Penetrating Radar (GPR) imaging requires unaliased recording of dipping reflections and diffractions. For a heterogeneous subsurface, the grid spacing of GPR measurements should be at most a quarter wavelength in all directions. Consequently, positioning precision needs to be better than an eighth wavelength for correct grid-point assignment. Until now, 3D GPR imaging has not been practical: data acquisition and processing took weeks to months, data analysis required geophysical training, and no versatile 3D systems were commercially available. We have integrated novel rotary laser positioning technology with GPR into a highly efficient and simple-to-use 3D imaging system. The laser positioning enables acquisition of centimeter-accurate x, y, and z coordinates from multiple small detectors attached to the moving GPR antennae. Positions streaming at 20 updates/second from each detector are fused in real time with the GPR data. We developed software for automated data acquisition and real-time 3D GPR data quality control on slices at selected depths. Standard-format (SEGY) data cubes and animations are generated within an hour after the last trace has been acquired. Examples can be seen at www.3dgpr.info. Such instant 3D GPR can be used as an on-site imaging tool supporting field work, hypothesis testing, and optimal sample collection. Rotary laser positioning has the flexibility to be integrated with multiple moving GPR antennae and other geophysical sensors, enabling simple and efficient high-resolution 3D data acquisition at
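
    The quarter-wavelength sampling rule translates directly into grid numbers. A back-of-the-envelope check, assuming for illustration a 250 MHz antenna and a ground relative permittivity of 9:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gpr_sampling(freq_hz, eps_r):
    """Wavelength in the ground, maximum grid spacing (lambda/4) and
    required positioning precision (lambda/8)."""
    wavelength = C / (freq_hz * eps_r ** 0.5)
    return wavelength, wavelength / 4, wavelength / 8

lam, grid, pos = gpr_sampling(250e6, 9.0)
print(f"lambda = {lam:.2f} m, grid <= {grid:.2f} m, positioning <= {pos:.3f} m")
# lambda = 0.40 m, grid <= 0.10 m, positioning <= 0.050 m
```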

  13. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    NASA Astrophysics Data System (ADS)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in photogrammetry and computer vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods, such as structured-light systems and laser scanners, have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer, they constitute an attractive 3D digitization approach; consequently, although range-based methods are generally more accurate, image-based methods are low-cost and can easily be used by non-professional users. One factor affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open-source software and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method for generating three-dimensional models. Much research has been done to identify suitable software and algorithms to achieve an accurate and complete model, but little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is to examine and introduce an appropriate combination of sensor and software to provide a complete model with the highest accuracy. To do this, different software packages used in previous studies were compared and

  14. Recent development of 3D imaging laser sensor in Mitsubishi Electric Corporation

    NASA Astrophysics Data System (ADS)

    Imaki, M.; Kotake, N.; Tsuji, H.; Hirai, A.; Kameyama, S.

    2013-09-01

    We have been developing 3-D imaging laser sensors for several years, because they acquire additional information about the scene, i.e. range data. As this enhances the potential to detect unwanted people and objects, the sensors can be utilized for applications such as safety control and security surveillance. In this paper, we focus on two types of our sensors: a high-frame-rate type and a compact type. To realize the high-frame-rate system, we have developed two key devices, a linear array receiver with 256 InAlAs-APD detectors and a read-out IC (ROIC) array fabricated in a SiGe-BiCMOS process, which are electrically connected to each other. Each ROIC measures not only the intensity but also the distance to the scene by high-speed analog signal processing. In addition, by mechanically scanning a mirror in the direction perpendicular to the linear receiver, we have realized high-speed operation with a frame rate over 30 Hz and 256 x 256 pixels. In the compact-type 3-D imaging laser sensor, we have succeeded in downsizing the transmitter by scanning only the laser beam with a two-dimensional MEMS scanner. To obtain a wide field-of-view image, a matching receiving optical system and a large-area receiver are needed in addition to the scan angle of the MEMS scanner. We have developed a large-detecting-area receiver consisting of 32 rectangular detectors whose output signals are summed; our original circuit evaluates each signal level, removes the low-level signals, and sums the remainder in order to improve the signal-to-noise ratio. In the following, we describe the system configurations and recent experimental results for the two types of 3-D imaging laser sensors.

  15. 3D heterogeneous sensor system on a chip for defense and security applications

    NASA Astrophysics Data System (ADS)

    Bhansali, Shekhar; Chapman, Glenn H.; Friedman, Eby G.; Ismail, Yehea; Mukund, P. R.; Tebbe, Dennis; Jain, Vijay K.

    2004-09-01

    This paper describes a new concept for ultra-small, ultra-compact, unattended multi-phenomenological sensor systems for rapid deployment, with integrated capability to extract classification and decision information from a sensed environment. We discuss a unique approach, namely a 3-D Heterogeneous System on a Chip (HSoC), to achieve a minimum 10X reduction in weight, volume, and power and a 10X or greater increase in capability and reliability over the alternative planar approaches. These gains will accrue from (a) the avoidance of long on-chip interconnects and chip-to-chip bonding wires, and (b) the cohabitation of sensors, preprocessing analog circuitry, digital logic and signal processing, and RF devices in the same compact volume. A specific scenario is discussed in detail wherein four types of sensors, namely an array of acoustic and seismic sensors, an active pixel sensor array, and an uncooled IR imaging array, are placed on a common sensor plane. The other planes include an analog plane consisting of transducers and A/D converters. The digital processing planes provide the necessary processing and intelligence capability, and the remaining planes provide wireless communications and networking capability. When appropriate, this processing and decision-making will be accomplished collaboratively among the distributed sensor nodes through a wireless network.

  16. Dynamic 3-D chemical agent cloud mapping using a sensor constellation deployed on mobile platforms

    NASA Astrophysics Data System (ADS)

    Cosofret, Bogdan R.; Konno, Daisei; Rossi, David; Marinelli, William J.; Seem, Pete

    2014-05-01

    The need for standoff detection technology that provides early chemical-biological (CB) threat warning is well documented. The information obtained by a single passive sensor is largely limited to the bearing and angular extent of the threat cloud. To obtain absolute geolocation, range to threat, 3-D extent, and the detailed composition of the chemical threat, fusion of information from multiple passive sensors is needed. A capability that provides on-the-move chemical cloud characterization is key to the development of real-time battlespace awareness. We have developed, implemented and tested algorithms and hardware to fuse information obtained from two mobile LWIR passive hyperspectral sensors. The implementation is driven by current Nuclear, Biological and Chemical Reconnaissance Vehicle operational tactics and represents a mission-focused alternative to the already demonstrated 5-sensor static Range Test Validation System (RTVS) [1]. The new capability consists of hardware for sensor pointing and attitude information, which is streamed and aggregated as part of the data fusion process for threat characterization. Cloud information is generated by ingesting 2-sensor data into a suite of triangulation and tomographic reconstruction algorithms. The approaches are amenable to a limited number of viewing projections and the unfavorable sensor geometries that result from mobile operation. In this paper we describe the system architecture and present an analysis of results obtained during initial testing of the system at Dugway Proving Ground during BioWeek 2013.

  17. 3D Measurement of Forearm and Upper Arm during Throwing Motion using Body Mounted Sensor

    NASA Astrophysics Data System (ADS)

    Koda, Hideharu; Sagawa, Koichi; Kuroshima, Kouta; Tsukamoto, Toshiaki; Urita, Kazutaka; Ishibashi, Yasuyuki

    The aim of this study is to propose a method for measuring the three-dimensional (3D) movement of the forearm and upper arm during the baseball pitching motion using inertial sensors, without requiring elaborate sensor installation. Although high-accuracy measurement of sports motion is currently achieved with optical motion capture systems, these have disadvantages such as camera calibration and restrictions on the measurement location. The proposed method for 3D measurement of the pitching motion using body-mounted sensors provides the trajectory and orientation of the upper arm by integrating the acceleration and angular velocity measured on the upper limb. The trajectory of the forearm is derived so that the elbow joint axis of the forearm coincides with that of the upper arm. The spatial relation between the upper limb and the sensor system is obtained by performing predetermined movements of the upper limb and utilizing the angular velocity and gravitational acceleration. The integration error is corrected so that the estimated final position, velocity and posture of the upper limb agree with the actual ones. Experimental measurements of the pitching motion show that the trajectories of the shoulder, elbow and wrist estimated by the proposed method are highly correlated with those from a motion capture system, with an estimation error of about 10%.
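
    The core of such a method, rotating body-frame accelerations into the world frame, double-integrating, and distributing the terminal error so the final state matches the known end conditions, can be sketched as follows. All names and inputs are assumed for illustration, not the authors' code:

```python
import numpy as np

def integrate_trajectory(accel_body, R_world, dt, g=np.array([0, 0, 9.81])):
    """Dead-reckon position from body-frame accelerations and known
    orientations R_world[k] (world-from-body rotation matrices)."""
    n = len(accel_body)
    vel = np.zeros((n, 3))
    pos = np.zeros((n, 3))
    for k in range(1, n):
        a_world = R_world[k] @ accel_body[k] - g   # remove gravity
        vel[k] = vel[k - 1] + a_world * dt
        pos[k] = pos[k - 1] + vel[k] * dt
    return pos, vel

def correct_drift(pos, vel, final_pos, final_vel):
    """Distribute the terminal error linearly over the record so the
    estimated final position/velocity agree with the actual ones."""
    n = len(pos)
    t = np.linspace(0.0, 1.0, n)[:, None]
    return pos - t * (pos[-1] - final_pos), vel - t * (vel[-1] - final_vel)
```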

  18. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation

    PubMed Central

    Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco

    2009-01-01

    3D imaging sensors for the acquisition of three-dimensional (3D) shapes have attracted considerable interest in recent years for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in achieving compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a “sensor fusion” approach. Equally important as physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state of the art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications. PMID:22389618

  19. Distributed network of integrated 3D sensors for transportation security applications

    NASA Astrophysics Data System (ADS)

    Hejmadi, Vic; Garcia, Fred

    2009-05-01

    The US Port Security Agency has strongly emphasized the need for tighter control at transportation hubs. Distributed arrays of miniature CMOS cameras provide some solutions today. However, due to the high bandwidth required and the low-value content of such cameras (simple video feeds), large computing power, analysis algorithms and control software are needed, which makes such an architecture cumbersome, heavy, slow and expensive. We present a novel technique integrating cheap, mass-replicable stealth 3D sensing micro-devices in a distributed network. These micro-sensors are based on conventional structured illumination, projecting successive fringe patterns onto the object to be sensed. The communication bandwidth between sensors remains very small, but carries very high-value content. Key technologies for integrating such a sensor are digital optics and structured laser illumination.
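
    For fringe-projection sensors of this kind, depth is typically recovered from the phase of the projected patterns. A standard four-step phase-shifting recovery (a generic formula, not necessarily this system's algorithm; I0..I3 are images taken under fringes shifted by 90 degrees each) looks like:

        import numpy as np

        def four_step_phase(I0, I1, I2, I3):
            """Wrapped phase map from four 90-degree-shifted fringe images.
            Depth follows after phase unwrapping and triangulation."""
            return np.arctan2(I3 - I1, I0 - I2)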

  20. Nonthreshold-based event detection for 3D environment monitoring in sensor networks

    SciTech Connect

    Li, M.; Liu, Y.H.; Chen, L.

    2008-12-15

    Event detection is a crucial task for wireless sensor network applications, especially environment monitoring. Existing approaches to event detection are mainly based on predefined threshold values and are therefore often inaccurate and incapable of capturing complex events. For example, in coal mine monitoring scenarios, gas leakage or water osmosis can hardly be described by the overrun of specified attribute thresholds, but rather by complex patterns in a full-scale view of the environmental data. To address this issue, we propose a nonthreshold-based approach for a real 3D sensor monitoring environment. We employ energy-efficient methods to collect a time series of data maps from the sensor network and detect complex events by matching the gathered data to spatiotemporal data patterns. Finally, we conduct trace-driven simulations to demonstrate the efficacy and efficiency of this approach in detecting events of complex phenomena from real-life records.
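
    In its simplest form, the map-to-pattern matching step could score a window of gathered data maps against a stored event template by normalized correlation (a toy Python/NumPy illustration under assumed array shapes, not the paper's detector):

        import numpy as np

        def event_score(window, template):
            """window, template: (T, X, Y) spatiotemporal stacks of readings.
            Returns a correlation-like score in roughly [-1, 1]."""
            w = (window - window.mean()) / (window.std() + 1e-12)
            t = (template - template.mean()) / (template.std() + 1e-12)
            return float((w * t).mean())

        # A detection rule would then compare the score to a learned cutoff,
        # e.g. event_score(current_maps, gas_leak_template) > 0.8 (hypothetical).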

  1. Package analysis of 3D-printed piezoresistive strain gauge sensors

    NASA Astrophysics Data System (ADS)

    Das, Sumit Kumar; Baptist, Joshua R.; Sahasrabuddhe, Ritvij; Lee, Woo H.; Popa, Dan O.

    2016-05-01

    Poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate), or PEDOT:PSS, is a flexible polymer which exhibits piezoresistive properties when subjected to structural deformation. PEDOT:PSS has a high conductivity and thermal stability, which makes it an ideal candidate for use in a pressure sensor. Applications of this technology include whole-body robot skin that can increase the safety and physical collaboration of robots in close proximity to humans. In this paper, we present a finite element model of strain gauge touch sensors which have been 3D-printed onto Kapton and silicone substrates using electro-hydro-dynamic ink-jetting. Simulations of the piezoresistive and structural model for the entire packaged sensor were carried out using COMSOL® and compared with experimental results for validation. The model will be useful in designing future robot skin with predictable performance.

  2. Multi-sourced, 3D geometric characterization of volcanogenic karst features: Integrating lidar, sonar, and geophysical datasets (Invited)

    NASA Astrophysics Data System (ADS)

    Sharp, J. M.; Gary, M. O.; Reyes, R.; Halihan, T.; Fairfield, N.; Stone, W. C.

    2009-12-01

    Karstic aquifers can form very complex hydrogeological systems and 3-D mapping has been difficult, but lidar, phased-array sonar, and improved earth resistivity techniques show promise both here and in linking metadata to models. Zacatón, perhaps the Earth’s deepest cenote, has a sub-aquatic void space exceeding 7.5 × 10⁶ m³. It is the focus of this study, which has created detailed 3D maps of the system. These maps include data from above and beneath the water table and within the rock matrix to document the extent of the immense karst features and to interpret the geologic processes that formed them. Phase 1 used high-resolution (20 mm) lidar scanning of the surficial features of four large cenotes. Scan locations, selected to achieve full feature coverage once registered, were established atop surface benchmarks with UTM coordinates established using GPS and total stations. The combined datasets form a geo-registered mesh of surface features down to water level in the cenotes. Phase 2 conducted subsurface imaging using Earth Resistivity Imaging (ERI) geophysics. ERI identified void spaces isolated from open flow conduits. A unique travertine morphology exists in which some cenotes are dry or contain shallow lakes with flat travertine floors; some water-filled cenotes have flat floors without the cone of collapse material; and some have collapse cones. We hypothesize that the floors may have large water-filled voids beneath them. Three separate flat travertine caps were imaged: 1) La Pilita, which is partially open, exposing the cap structure over a deep water-filled shaft; 2) Poza Seca, which is dry and vegetated; and 3) Tule, which contains a shallow (<1 m) lake. A fourth line was run adjacent to cenote Verde. La Pilita ERI, verified by SCUBA, documented the existence of large water-filled void zones. ERI at Poza Seca showed a thin cap overlying a conductive zone extending to at least 25 m depth beneath the cap, with no lower boundary of this zone evident

  3. Constraints on 3D fault and fracture distribution in layered volcanic- volcaniclastic sequences from terrestrial LIDAR datasets: Faroe Islands

    NASA Astrophysics Data System (ADS)

    Raithatha, Bansri; McCaffrey, Kenneth; Walker, Richard; Brown, Richard; Pickering, Giles

    2013-04-01

    Hydrocarbon reservoirs commonly contain an array of fine-scale structures that control fluid flow in the subsurface, such as polyphase fracture networks and small-scale fault zones. These structures are unresolvable by seismic imaging, and therefore outcrop-based studies have been used as analogues to characterize fault and fracture networks and assess their impact on fluid flow in the subsurface. To maximize recovery and enhance production, it is essential to understand the geometry, physical properties, and distribution of these structures in 3D. Here we present field data and terrestrial LIDAR-derived 3D, photo-realistic virtual outcrops of fault zones at a range of displacement scales (0.001-4.5 m) within a volcaniclastic sand and basaltic lava unit sequence in the Faroe Islands. Detailed field observations were used to constrain the virtual outcrop dataset, and a workflow has been developed to build discrete fracture network (DFN) models in GOCAD® from these datasets. Model construction involves three main stages: (1) georeferencing and processing of LIDAR datasets; (2) structural interpretation to discriminate between faults, fractures, veins, and joint planes using CAD software and RiSCAN Pro; and (3) building a 3D DFN in GOCAD®. To test the validity of this workflow, we focus here on a 4.5 m displacement strike-slip fault zone that displays a complex polymodal fracture network in the inter-layered basalt-volcaniclastic sequence, which is well constrained by field study. The DFN models support our initial field-based hypothesis that fault zone geometry varies with increasing displacement through volcaniclastic units. Fracture concentration appears to be greatest in the upper lava unit, decreases into the volcaniclastic sediments, and decreases further into the lower lava unit. This distribution of fractures appears to be related to the width of the fault zone and the amount of fault damage on the outcrop. For instance, the fault zone is thicker in

  4. Knowledge-based system for computer-aided process planning of laser sensor 3D digitizing

    NASA Astrophysics Data System (ADS)

    Bernard, Alain; Davillerd, Stephane; Sidot, Benoit

    1999-11-01

    This paper introduces results of research on automating the digitizing process for complex parts using a precision 3D laser sensor. Most digitizing operations are still performed manually, so redundancies, gaps or omissions in point acquisition are possible. Moreover, the digitization time of a part, i.e., the immobilization time of the machine, is not optimized overall. For time compression during product development, it is therefore important to minimize the time consumed by the reverse engineering step. A new way to scan a complex 3D part automatically is presented, in order to measure and compare the acquired data with the reference CAD model. After introducing digitization, the environment used for the experiments is presented, based on a CMM machine and a laser plane sensor. The proposed strategy for adapting this environment to robotic CAD software is then introduced, in order to simulate and validate 3D laser-scanning paths. The CAPP (Computer Aided Process Planning) system used for the automatic generation of the laser scanning process is also presented.

  5. B4 2 After, 3D Deformation Field From Matching Pre- To Post-Event Aerial LiDAR Point Clouds, The 2010 El Mayor-Cucapah M7.2 Earthquake Case

    NASA Astrophysics Data System (ADS)

    Hinojosa-Corona, A.; Nissen, E.; Limon-Tirado, J. F.; Arrowsmith, R.; Krishnan, A.; Saripalli, S.; Oskin, M. E.; Glennie, C. L.; Arregui, S. M.; Fletcher, J. M.; Teran, O. J.

    2013-05-01

    horizontal, having the latter problems in flat areas as expected. Hybrid approaches, such as simple differencing, could be used in these areas. Outliers were removed from the results. ICP detected quarry extraction that occurred between the two LiDAR collection dates, expressed as a negative vertical displacement close to those sites. To improve the accuracy of the 3D displacement field, we intend to reprocess the pre-event source survey data to reduce the systematic error introduced by the sensor. A multidisciplinary approach will be needed to draw tectonic inferences from the ICP-derived 3D displacement field about the processes at depth expressed at the surface.
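
    At the core of such ICP-based differencing is the closed-form rigid transform between matched point subsets. A minimal sketch (generic Kabsch/SVD solution in Python/NumPy, not the study's processing chain; P and Q are assumed to be already-matched pre- and post-event points):

        import numpy as np

        def best_rigid_transform(P, Q):
            """Least-squares rotation R and translation t mapping P onto Q.
            P, Q: (N, 3) matched point arrays; returns R (3x3), t (3,)."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
            R = Vt.T @ D @ U.T
            t = cQ - R @ cP
            return R, t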

  6. Development of scanning laser sensor for underwater 3D imaging with the coaxial optics

    NASA Astrophysics Data System (ADS)

    Ochimizu, Hideaki; Imaki, Masaharu; Kameyama, Shumpei; Saito, Takashi; Ishibashi, Shoujirou; Yoshida, Hiroshi

    2014-06-01

    We have developed a scanning laser sensor for underwater 3-D imaging which achieves a wide scanning angle of 120° (horizontal) × 30° (vertical) in a compact package of 25 cm diameter and 60 cm length. Our system has a dome lens and coaxial optics to realize both the wide scanning angle and the compactness. The system also features a sensitivity time control (STC) circuit, in which the receiver gain is increased according to the time of flight. The STC circuit helps detect small signals by suppressing unwanted backscatter from marine snow. We demonstrated the system performance in a pool, confirming 3-D imaging at a distance of 20 m. Furthermore, the system was mounted on an autonomous underwater vehicle (AUV) and demonstrated seafloor mapping at a depth of 100 m in the ocean.
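
    The STC behavior amounts to a receiver-gain ramp with round-trip time. A plausible digital sketch (Python/NumPy; the in-water light speed and the range-squared gain law are assumptions, since the abstract does not give the circuit's actual characteristic):

        import numpy as np

        C_WATER = 2.25e8  # assumed speed of light in water [m/s]

        def stc_gain(t, g0=1.0, r_ref=1.0):
            """Gain vs. time of flight t [s]: amplifies late (distant) echoes
            to compensate 1/R^2 spreading and de-emphasize near backscatter."""
            r = C_WATER * np.asarray(t) / 2.0   # one-way range from round trip
            return g0 * (np.maximum(r, r_ref) / r_ref) ** 2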

  7. Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.

    PubMed

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras is placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of the segmentation boundaries and depth, are repeated until convergence.

  8. Multi-Camera Sensor System for 3D Segmentation and Localization of Multiple Mobile Robots

    PubMed Central

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras is placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of the segmentation boundaries and depth, are repeated until convergence. PMID:22319297

  9. 3D shape measurements with a single interferometric sensor for in-situ lathe monitoring

    NASA Astrophysics Data System (ADS)

    Kuschmierz, R.; Huang, Y.; Czarske, J.; Metschke, S.; Löffler, F.; Fischer, A.

    2015-05-01

    Temperature drifts, tool deterioration, unknown vibrations and spindle play are major effects which decrease the achievable precision of computerized numerically controlled (CNC) lathes and lead to shape deviations between processed work pieces. Since currently no measurement system exists for fast, precise and in-situ 3D shape monitoring with keyhole access, much effort has to be made to simulate and compensate these effects. We therefore introduce an optical interferometric sensor for absolute 3D shape measurements, which was integrated into a working lathe. Matched to the spindle rotational speed, a measurement rate of 2,500 Hz was achieved. In-situ absolute shape, surface profile and vibration measurements are presented. While thermal drifts of the sensor led to errors of several micrometers in the absolute shape, reference measurements with a coordinate measuring machine show that the surface profile could be measured with an uncertainty below one micron. Additionally, a spindle play of 0.8 µm was measured with the sensor.

  10. Research on Joint Parameter Inversion for an Integrated Underground Displacement 3D Measuring Sensor

    PubMed Central

    Shentu, Nanying; Qiu, Guohua; Li, Qing; Tong, Renyuan; Shentu, Nankai; Wang, Yanjie

    2015-01-01

    Underground displacement monitoring is a key means of monitoring and evaluating geological disasters and geotechnical projects. Few practical instruments are able to monitor subsurface horizontal and vertical displacements simultaneously, due to the invisibility and complexity of the monitoring. A novel underground displacement 3D measuring sensor was proposed in our previous studies, and great effort has gone into basic theoretical research on its underground displacement sensing and measuring characteristics by means of modeling, simulation and experiments. This paper presents an innovative underground displacement joint inversion method that combines a specific forward modeling approach with an approximate optimization inversion procedure. It realizes a joint inversion of underground horizontal and vertical displacement for the proposed 3D sensor. Comparative studies have been conducted between the measured and inverted parameters of underground horizontal and vertical displacements under a variety of experimental and inversion conditions. The results showed that when the experimentally measured horizontal and vertical displacements both vary within 0-30 mm, the horizontal and vertical displacement inversion discrepancies are generally less than 3 mm and 1 mm, respectively, under three kinds of simulated underground displacement monitoring circumstances. This implies that the proposed underground displacement joint inversion method is robust and efficient in predicting the measured values of underground horizontal and vertical displacements for the proposed sensor. PMID:25871714

  11. Research on joint parameter inversion for an integrated underground displacement 3D measuring sensor.

    PubMed

    Shentu, Nanying; Qiu, Guohua; Li, Qing; Tong, Renyuan; Shentu, Nankai; Wang, Yanjie

    2015-01-01

    Underground displacement monitoring is a key means of monitoring and evaluating geological disasters and geotechnical projects. Few practical instruments are able to monitor subsurface horizontal and vertical displacements simultaneously, due to the invisibility and complexity of the monitoring. A novel underground displacement 3D measuring sensor was proposed in our previous studies, and great effort has gone into basic theoretical research on its underground displacement sensing and measuring characteristics by means of modeling, simulation and experiments. This paper presents an innovative underground displacement joint inversion method that combines a specific forward modeling approach with an approximate optimization inversion procedure. It realizes a joint inversion of underground horizontal and vertical displacement for the proposed 3D sensor. Comparative studies have been conducted between the measured and inverted parameters of underground horizontal and vertical displacements under a variety of experimental and inversion conditions. The results showed that when the experimentally measured horizontal and vertical displacements both vary within 0-30 mm, the horizontal and vertical displacement inversion discrepancies are generally less than 3 mm and 1 mm, respectively, under three kinds of simulated underground displacement monitoring circumstances. This implies that the proposed underground displacement joint inversion method is robust and efficient in predicting the measured values of underground horizontal and vertical displacements for the proposed sensor. PMID:25871714

  12. Beam test studies of 3D pixel sensors irradiated non-uniformly for the ATLAS forward physics detector

    NASA Astrophysics Data System (ADS)

    Grinstein, S.; Baselga, M.; Boscardin, M.; Christophersen, M.; Da Via, C.; Dalla Betta, G.-F.; Darbo, G.; Fadeyev, V.; Fleta, C.; Gemme, C.; Grenier, P.; Jimenez, A.; Lopez, I.; Micelli, A.; Nelist, C.; Parker, S.; Pellegrini, G.; Phlips, B.; Pohl, D.-L.; Sadrozinski, H. F.-W.; Sicho, P.; Tsiskaridze, S.

    2013-12-01

    Pixel detectors with cylindrical electrodes that penetrate the silicon substrate (so-called 3D detectors) offer advantages over standard planar sensors in terms of radiation hardness, since the electrode distance is decoupled from the bulk thickness. In recent years significant progress has been made in the development of 3D sensors, which culminated in the sensor production for the ATLAS Insertable B-Layer (IBL) upgrade carried out at CNM (Barcelona, Spain) and FBK (Trento, Italy). Based on this success, the ATLAS Forward Physics (AFP) experiment has selected the 3D pixel sensor technology for its tracking detector. The AFP project presents a new challenge due to the need for a reduced dead area with respect to IBL, and the inhomogeneous nature of the radiation dose distribution in the sensor. Electrical characterization of the first AFP prototypes and beam test studies of 3D pixel devices irradiated non-uniformly are presented in this paper.

  13. Deriving 3d Point Clouds from Terrestrial Photographs - Comparison of Different Sensors and Software

    NASA Astrophysics Data System (ADS)

    Niederheiser, Robert; Mokroš, Martin; Lange, Julia; Petschko, Helene; Prasicek, Günther; Oude Elberink, Sander

    2016-06-01

    Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive and effective approach to 3D modelling. However, the important choice of which sensor and which software to use is not straightforward and needs consideration, as it will affect the resulting 3D point cloud and its derivatives. We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes and price segments of cameras. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of a vegetated rock face from identical positions with all sensors. We then compared the results of the different software packages regarding ease of workflow, visual appeal, similarity and quality of the point cloud. While PhotoScan and Pix4D offer the most user-friendly workflows, they are also "black-box" programmes giving only little insight into their processing. Unsatisfying results may only be changed by modifying settings within a module. The combined workflow of Visual SFM, SURE and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software as it is less user-friendly. However, MicMac offers the most possibilities to influence the processing workflow. The resulting point clouds of PhotoScan and MicMac are the most appealing.

  14. Advancing Lidar Sensors Technologies for Next Generation Landing Missions

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Hines, Glenn D.; Roback, Vincent E.; Petway, Larry B.; Barnes, Bruce W.; Brewster, Paul F.; Pierrottet, Diego F.; Bulyshev, Alexander

    2015-01-01

    Missions to solar system bodies must meet increasingly ambitious objectives requiring highly reliable "precision landing" and "hazard avoidance" capabilities. Robotic missions to the Moon and Mars demand landing at pre-designated sites of high scientific value near hazardous terrain features, such as escarpments, craters, slopes, and rocks. Missions aimed at paving the path for colonization of the Moon and human landing on Mars need to execute onboard hazard detection and precision maneuvering to ensure safe landing near previously deployed assets. Asteroid missions require precision rendezvous, identification of the landing or sampling site location, and navigation to a highly dynamic object that may be tumbling at a fast rate. To meet these needs, NASA Langley Research Center (LaRC) has developed a set of advanced lidar sensors under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project. These lidar sensors can provide precision measurements of vehicle relative proximity, velocity, and orientation, and high-resolution elevation maps of the surface during descent to the targeted body. Recent flights onboard the Morpheus free-flyer vehicle have demonstrated the viability of ALHAT lidar sensors for future landing missions to solar system bodies.

  15. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single-beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but this leads to the motion-structure paradox: most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.
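
    Once motion parameters are available, the correction reduces to moving each sample back to a common reference time. A simplified constant-velocity sketch (generic Python/NumPy; the per-pixel timestamps and the estimated linear velocity v and angular rate omega are assumed known here, whereas the paper estimates them iteratively):

        import numpy as np

        def deskew_points(points, t_pixel, v, omega):
            """Undo motion distortion in a sequentially scanned range image.
            points: (N, 3), t_pixel: (N,) per-pixel acquisition times [s],
            v: (3,) linear velocity, omega: (3,) angular rate [rad/s]."""
            out = np.empty_like(points)
            for i, (p, t) in enumerate(zip(points, t_pixel)):
                w = omega * t
                wx = np.array([[0, -w[2], w[1]],
                               [w[2], 0, -w[0]],
                               [-w[1], w[0], 0]])
                out[i] = (np.eye(3) - wx) @ (p - v * t)  # small-angle back-rotation
            return out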

  16. Design of a 3D-IC multi-resolution digital pixel sensor

    NASA Astrophysics Data System (ADS)

    Brochard, N.; Nebhen, J.; Dubois, J.; Ginhac, D.

    2016-04-01

    This paper presents a digital pixel sensor (DPS) integrating a sigma-delta analog-to-digital converter (ADC) at the pixel level. The digital pixel includes a photodiode, a delta-sigma modulator and a digital decimation filter. It features an adaptive dynamic range and multiple resolutions (up to 10-bit) with high linearity. A specific row decoder and column decoder are also designed to allow reading of a chosen pixel in the matrix and its 4 × 4 neighborhood. Finally, a complete design in the CMOS 130 nm 3D-IC FaStack Tezzaron technology is also described, revealing a high fill factor of about 80%.
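
    Behaviorally, the pixel-level conversion can be sketched as a first-order sigma-delta modulator followed by a decimation filter (a generic Python illustration; the chip's actual modulator order and decimator are design details not given in the abstract):

        def sigma_delta_adc(x, osr=64):
            """Convert a held input x in [-1, 1] with a 1-bit first-order loop.
            osr: oversampling ratio; returns the decimated estimate of x."""
            integ, q, acc = 0.0, 0.0, 0.0
            for _ in range(osr):
                integ += x - q                    # integrate error vs. fed-back bit
                q = 1.0 if integ >= 0 else -1.0   # 1-bit quantizer / DAC
                acc += q
            return acc / osr                      # boxcar decimation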

  17. 3D imaging of translucent media with a plenoptic sensor based on phase space optics

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzhe; Shu, Bohong; Du, Shaojun

    2015-05-01

    Traditional stereo imaging does not work for dynamic translucent media, because such media show no distinct characteristic patterns and the use of multiple cameras is not permitted in most cases. Phase space optics can solve the problem by extracting depth information directly from the "space-spatial frequency" distribution of the target, obtained by a plenoptic sensor with a single lens. This paper discusses how depth information is represented in phase space data, together with the corresponding reconstruction algorithms for different degrees of transparency. A 3D imaging example of a waterfall is given at the end.

  18. Interpixel crosstalk in a 3D-integrated active pixel sensor for x-ray detection

    NASA Astrophysics Data System (ADS)

    LaMarr, Beverly; Bautz, Mark; Foster, Rick; Kissel, Steve; Prigozhin, Gregory; Suntharalingam, Vyshnavi

    2010-07-01

    MIT Lincoln Laboratory and the MIT Kavli Institute for Astrophysics and Space Research have developed an active pixel sensor for use as a photon-counting device for imaging spectroscopy in the soft X-ray band. A silicon-on-insulator (SOI) readout circuit was integrated with a high-resistivity silicon diode detector array using a per-pixel 3D integration technique developed at Lincoln Laboratory. We have tested these devices at 5.9 keV and 1.5 keV. Here we examine the interpixel crosstalk measured with 5.9 keV X-rays.

  19. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  20. Pedestrian Navigation Using Foot-Mounted Inertial Sensor and LIDAR

    PubMed Central

    Pham, Duy Duong; Suh, Young Soo

    2016-01-01

    Foot-mounted inertial sensors can be used for indoor pedestrian navigation. In this paper, to improve the accuracy of the pedestrian location, we propose a method using a distance sensor (LIDAR) in addition to an inertial measurement unit (IMU). The distance sensor is a time-of-flight range finder with a 30 m measurement range (at 33.33 Hz). Using the distance sensor, walls along corridors are automatically detected. The detected walls are used to correct the heading of the pedestrian path. Through experiments, it is shown that the accuracy of the heading is significantly improved using the proposed algorithm. Furthermore, the system is shown to work robustly in indoor environments with many doors and passing people. PMID:26797619
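
    The wall-aided correction can be pictured as snapping the estimated heading to the orientation of the detected corridor walls whenever the two nearly agree (a hypothetical Python/NumPy sketch with an assumed tolerance; the paper's estimator details differ):

        import numpy as np

        def correct_heading(psi, wall_angles, tol=np.radians(10)):
            """Snap heading psi [rad] to the nearest wall-aligned direction.
            wall_angles: orientations of detected walls; corridors are assumed
            to run parallel or perpendicular to them."""
            cands = np.concatenate([np.asarray(wall_angles) + k * np.pi / 2
                                    for k in range(4)])
            err = np.angle(np.exp(1j * (cands - psi)))   # wrapped differences
            i = int(np.argmin(np.abs(err)))
            return psi + err[i] if abs(err[i]) < tol else psi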

  1. Pedestrian Navigation Using Foot-Mounted Inertial Sensor and LIDAR.

    PubMed

    Pham, Duy Duong; Suh, Young Soo

    2016-01-01

    Foot-mounted inertial sensors can be used for indoor pedestrian navigation. In this paper, to improve the accuracy of the pedestrian location, we propose a method using a distance sensor (LIDAR) in addition to an inertial measurement unit (IMU). The distance sensor is a time-of-flight range finder with a 30 m measurement range (at 33.33 Hz). Using the distance sensor, walls along corridors are automatically detected. The detected walls are used to correct the heading of the pedestrian path. Through experiments, it is shown that the accuracy of the heading is significantly improved using the proposed algorithm. Furthermore, the system is shown to work robustly in indoor environments with many doors and passing people. PMID:26797619

  2. Modelling Sensor and Target effects on LiDAR Waveforms

    NASA Astrophysics Data System (ADS)

    Rosette, J.; North, P. R.; Rubio, J.; Cook, B. D.; Suárez, J.

    2010-12-01

    The aim of this research is to explore the influence of sensor characteristics and interactions with vegetation and terrain properties on the estimation of vegetation parameters from LiDAR waveforms. This is carried out using waveform simulations produced by the FLIGHT radiative transfer model, which is based on Monte Carlo simulation of photon transport (North, 1996; North et al., 2010). The opportunities for vegetation analysis that are offered by LiDAR modelling are also demonstrated by other authors, e.g., Sun and Ranson, 2000; Ni-Meister et al., 2001. Simulations from the FLIGHT model were driven using reflectance and transmittance properties collected from the Howland Research Forest, Maine, USA in 2003, together with a tree list for a 200 m × 150 m area. This was generated using field measurements of location, species and diameter at breast height. Tree height and crown dimensions of individual trees were calculated using relationships established with a competition index determined for this site. Waveforms obtained by the Laser Vegetation Imaging Sensor (LVIS) were used to validate the simulations. This provided a base from which factors such as slope, laser incidence angle and pulse width could be varied, enabling the effect of instrument design and laser interactions with different surface characteristics to be tested. As such, waveform simulation is relevant for the development of future satellite LiDAR sensors, such as NASA’s forthcoming DESDynI mission (NASA, 2010), which aim to improve capabilities of vegetation parameter estimation. ACKNOWLEDGMENTS We would like to thank scientists at the Biospheric Sciences Branch of NASA Goddard Space Flight Center, in particular Jon Ranson and Bryan Blair. This work forms part of research funded by the NASA DESDynI project and the UK Natural Environment Research Council (NE/F021437/1). REFERENCES NASA, 2010, DESDynI: Deformation, Ecosystem Structure and Dynamics of Ice. http

  3. 3D imaging for ballistics analysis using chromatic white light sensor

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Hildebrandt, Mario; Dittmann, Jana; Clausing, Eric; Fischer, Robert; Vielhauer, Claus

    2012-03-01

    The novel application of sensing technology based on chromatic white light (CWL) gives new insight into the ballistic analysis of cartridge cases. The CWL sensor uses a beam of white light to acquire highly detailed topography and luminance data simultaneously. The proposed 3D imaging system combines the advantages of 3D and 2D image processing algorithms in order to automate the extraction of firearm-specific toolmarks left on fired specimens. The most important characteristics of a fired cartridge case are the type of the breech face marking as well as the size, shape and location of the extractor, ejector and firing pin marks. The feature extraction algorithm normalizes the casing surface and consistently searches for the appropriate distortions on the rim and on the primer. The location of the firing pin mark in relation to the lateral scratches on the rim provides unique rotation-invariant characteristics of the firearm mechanisms. Additional characteristics are the volume and shape of the firing pin mark. The experimental evaluation relies on a data set of 15 cartridge cases fired from three 9 mm firearms of different manufacturers. The results show the very high potential of 3D imaging systems for casing-based computer-aided firearm identification, which is expected to support human expertise.

  4. Fast 3D modeling in complex environments using a single Kinect sensor

    NASA Astrophysics Data System (ADS)

    Yue, Haosong; Chen, Weihai; Wu, Xingming; Liu, Jingmeng

    2014-02-01

    Three-dimensional (3D) modeling technology has been widely used in reverse engineering, urban planning, robot navigation, and many other applications. How to build a dense model of the environment with limited processing resources is still a challenging topic. A fast 3D modeling algorithm that only uses a single Kinect sensor is proposed in this paper. For every color image captured by the Kinect, corner feature extraction is carried out first. Then a spiral search strategy is utilized to select a region of interest (ROI) that contains enough feature corners. Next, the iterative closest point (ICP) method is applied to the points in the ROI to align consecutive data frames. Finally, an analysis of which areas can be walked through by human beings is presented. Comparative experiments with the well-known KinectFusion algorithm have been carried out, and the results demonstrate that the accuracy of the proposed algorithm matches KinectFusion while its computing speed is nearly twice that of KinectFusion. 3D modeling of two public garden scenes and traversable-area analysis in these regions further verified the feasibility of our algorithm.

  5. 3D active edge silicon sensors with different electrode configurations: Radiation hardness and noise performance

    NASA Astrophysics Data System (ADS)

    Da Viá, C.; Bolle, E.; Einsweiler, K.; Garcia-Sciveres, M.; Hasi, J.; Kenney, C.; Linhart, V.; Parker, Sherwood; Pospisil, S.; Rohne, O.; Slavicek, T.; Watts, S.; Wermes, N.

    2009-06-01

    3D detectors, with electrodes penetrating the entire silicon wafer and active edges, were fabricated at the Stanford Nano Fabrication Facility (SNF), California, USA, with different electrode configurations. After irradiation with neutrons up to a fluence of 8.8×10¹⁵ n_eq cm⁻², they were characterised using an infrared laser tuned to inject ~2 minimum ionising particles, showing signal efficiencies as high as 66% for the configuration with the shortest (56 μm) inter-electrode spacing. Sensors from the same wafer were also bump-bonded to the ATLAS FE-I3 pixel readout chip and their noise characterised. Most probable signal-to-noise ratios were calculated before and after irradiation to be as good as 38:1 after the highest irradiation level with a substrate thickness of 210 μm. These devices are promising candidates for applications at the LHC such as the very forward detectors at ATLAS and CMS, the ATLAS B-Layer replacement and the general pixel upgrade. Moreover, 3D sensors could play a role in applications where high-speed, high-resolution detectors are required, such as the vertex locators at the proposed Compact Linear Collider (CLIC) at CERN.

  6. 3D Vision by Using Calibration Pattern with Inertial Sensor and RBF Neural Networks.

    PubMed

    Beṣdok, Erkan

    2009-01-01

    Camera calibration is a crucial prerequisite for the retrieval of metric information from images. The problem of camera calibration is the computation of the camera intrinsic parameters (i.e., coefficients of geometric distortions, principal distance and principal point) and extrinsic parameters (i.e., 3D spatial orientations ω, ϕ, κ and 3D spatial translations tx, ty, tz). The intrinsic camera calibration (i.e., interior orientation) models the imaging system of the camera optics, while the extrinsic camera calibration (i.e., exterior orientation) indicates the translation and orientation of the camera with respect to the global coordinate system. Traditional camera calibration techniques require a predefined mathematical camera model and prior knowledge of many parameters. Defining a realistic camera model is quite difficult, and computation of the camera calibration parameters is error-prone. In this paper, a novel implicit camera calibration method based on Radial Basis Function Neural Networks is proposed. The proposed method requires neither an exactly defined camera model nor any prior knowledge about the imaging setup or classical camera calibration parameters. The proposed method uses a calibration grid pattern rotated around a static, fixed axis. The rotations of the calibration grid pattern were acquired using an Xsens MTi-9 inertial sensor, and in order to evaluate the success of the proposed method, its 3D reconstruction performance has been compared with that of a traditional camera calibration method, Modified Direct Linear Transformation (MDLT). Extensive simulation results show that the proposed method achieves better performance than MDLT in terms of 3D reconstruction. PMID:22408542
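
    The flavor of implicit calibration can be illustrated by learning the image-to-object mapping directly from correspondences. The sketch below uses SciPy's RBF interpolator as a stand-in for the paper's RBF neural network; the training pairs are entirely hypothetical:

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Hypothetical correspondences: observed pattern corners in the image
        # (pixels) and their known object-space coordinates (metres).
        uv_obs = np.random.rand(200, 2) * 1000.0
        xyz_obs = np.random.rand(200, 3) * 0.5

        # Fit an implicit (model-free) mapping from image to object space.
        mapping = RBFInterpolator(uv_obs, xyz_obs, kernel='thin_plate_spline')

        # Reconstruct the 3D location corresponding to a new image point.
        xyz_pred = mapping(np.array([[512.0, 384.0]]))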

  7. An Operational Wake Vortex Sensor Using Pulsed Coherent Lidar

    NASA Technical Reports Server (NTRS)

    Barker, Ben C., Jr.; Koch, Grady J.; Nguyen, D. Chi

    1998-01-01

    NASA and the FAA initiated a program in 1994 to develop methods of setting spacings for landing aircraft by incorporating information on the real-time behavior of aircraft wake vortices. The current wake separation standards were developed in the 1970s, when there was relatively light airport traffic and a logical break point by which to categorize aircraft. Today's continuum of aircraft sizes and increased airport packing densities have created a need for re-evaluation of wake separation standards. The goals of this effort are to ensure that separation standards are adequate for safety and to reduce aircraft spacing for higher airport capacity. Of particular interest are the different requirements for landing under visual flight conditions and instrument flight conditions. Over the years, greater spacings have been established for instrument flight than are allowed for visual flight conditions. Preliminary studies indicate that the airline industry would save considerable money and incur fewer passenger delays if a dynamic spacing system could reduce separations at major hubs during inclement weather to the levels routinely achieved under visual flight conditions. The sensor described herein may become part of this dynamic spacing system, known as the Aircraft VOrtex Spacing System (AVOSS), that will interface with a future air traffic control system. AVOSS will use vortex behavioral models and short-term weather prediction models in order to predict vortex behavior sufficiently far into the future to allow dynamic separation standards to be generated. The wake vortex sensor will periodically provide data to validate AVOSS predictions. The feasibility of measuring wake vortices using a lidar was first demonstrated with a continuous wave (CW) system from NASA Marshall Space Flight Center, tested at the Volpe National Transportation Systems Center's wake vortex test site at JFK International Airport. Other applications of CW lidar for wake vortex measurement have been made

  8. Experimental Assessment of the Quanergy m8 LIDAR Sensor

    NASA Astrophysics Data System (ADS)

    Mitteta, M.-A.; Nouira, H.; Roynard, X.; Goulette, F.; Deschaud, J.-E.

    2016-06-01

    In this paper, experiments with the Quanergy M8 scanning LIDAR system are reported. The distance measurement obtained with the Quanergy M8 can be influenced by different factors, and measurement errors can originate from different sources. The environment in which the measurements are performed has an influence (temperature, light, humidity, etc.), and errors can also arise from the system itself. It is therefore necessary to determine the influence of these parameters on the quality of the distance measurements. For this purpose, different studies are presented and analyzed. First, we studied the temporal stability of the sensor by analyzing observations over time. Second, we assessed the quality of the distance measurements, with the aim of detecting systematic errors in the measurements as a function of range. Different series of measurements were conducted at different ranges and in different conditions (indoor and outdoor). Finally, we studied the consistency between the different beams of the LIDAR.

  9. A High-Resolution 3D Weather Radar, MSG, and Lightning Sensor Observation Composite

    NASA Astrophysics Data System (ADS)

    Diederich, Malte; Senf, Fabian; Wapler, Kathrin; Simmer, Clemens

    2013-04-01

    Within the research group 'Object-based Analysis and SEamless prediction' (OASE) of the Hans Ertel Centre for Weather Research programme (HerZ), a data composite containing weather radar, lightning sensor, and Meteosat Second Generation observations is being developed for use in object-based weather analysis and nowcasting. At present, a 3D merging scheme combines measurements of the Bonn and Jülich dual-polarimetric weather radar systems (data provided by the TR32 and TERENO projects) into a 3-dimensional polar-stereographic volume grid with 500 meters horizontal and 250 meters vertical resolution. The merging takes into account and compensates for various observational error sources, such as attenuation through hydrometeors, beam blockage through topography and buildings, minimum detectable signal as a function of noise threshold, non-hydrometeor echoes like insects, and interference from other radar systems. In addition, the effect of convection during the radar's 5-minute volume scan pattern is mitigated through the calculation of advection vectors from subsequent scans and their use for advection correction when projecting the measurements into space for any desired timestamp. The Meteosat Second Generation rapid scan service provides a scan in 12 visual and infrared spectral wavelengths every 5 minutes over Germany and Europe. These scans, together with the derived microphysical cloud parameters, are projected into the same polar-stereographic grid used for the radar data. Lightning counts from the LINET lightning sensor network are also provided for every 2D grid pixel. The combined 3D radar and 2D MSG/LINET data are stored in a fully documented netCDF file for every 5-minute interval, ready for tracking and object-based weather analysis. At the moment, the 3D data only cover the Bonn and Jülich area, but the algorithms are planned to be adapted to the newly conceived DWD polarimetric C-band 5-minute-interval volume scan strategy.

  10. Prototyping a Sensor-Enabled 3D City Model on Geospatial Managed Objects

    NASA Astrophysics Data System (ADS)

    Kjems, E.; Kolář, J.

    2013-09-01

    One of the major development efforts within the GI Science domain points toward sensor-based information and the use of real-time information coming from geographically referenced features in general. At the same time, 3D city models are mostly justified as objects for visualization purposes rather than constituting the foundation of a geographic data representation of the world. The combination of 3D city models and real-time information-based systems, though, can provide a whole new setup for data fusion within an urban environment and provide time-critical information while preserving our limited resources in the most sustainable way. Using 3D models with consistent object definitions gives us the possibility to avoid troublesome abstractions of reality and to design even complex urban systems fusing information from various sources of data. Such systems are difficult to design with the traditional software development approach based on major software packages and traditional data exchange. The data stream varies from urban domain to urban domain and from system to system, which makes it almost impossible to design a complete system that takes care of all thinkable instances, now and in the future, within one constrained software design. On several occasions we have been advocating a new and advanced formulation of real-world features using the concept of Geospatial Managed Objects (GMO). This paper presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily by the Norwegian Research Council, in which the concept of GMOs has been applied in various situations on various running platforms of an urban system. The paper focuses on user experiences and interfaces rather than core technical and developmental issues. The project was primarily focused on prototyping rather than realistic implementations, although the results concerning applicability are quite clear.

  11. Particle-based optical pressure sensors for 3D pressure mapping.

    PubMed

    Banerjee, Niladri; Xie, Yan; Chalaseni, Sandeep; Mastrangelo, Carlos H

    2015-10-01

    This paper presents particle-based optical pressure sensors for in-flow pressure sensing, especially in microfluidic environments. Three generations of pressure-sensitive particles have been developed: flat planar particles, particles with integrated retroreflectors, and spherical microballoon particles. The first two versions suffer from a dependence of the pressure measurement on the particle's orientation in 3D space and the angle of interrogation. The third generation of microspherical particles, with spherical symmetry, solves these problems, making particle-based manometry in microfluidic environments a viable and efficient methodology. Static and dynamic pressure measurements have been performed in liquid medium over long periods of time in a pressure range from atmospheric to 40 psi. Spherical particles with a radius of 12 μm and a balloon-wall thickness of 0.5 μm are effective for more than 5 h in this pressure range with an error of less than 5%.

  12. Insights from a 3-D temperature sensors mooring on stratified ocean turbulence

    NASA Astrophysics Data System (ADS)

    Haren, Hans; Cimatoribus, Andrea A.; Cyr, Frédéric; Gostiaux, Louis

    2016-05-01

    A unique small-scale 3-D mooring array has been designed, consisting of five parallel lines, 100 m long and 4 m apart, holding up to 550 high-resolution temperature sensors. It is built for quantitative studies on the evolution of stratified turbulence by internal wave breaking in geophysical flows at scales beyond those of a laboratory. Here we present measurements from above a steep slope of Mount Josephine, NE Atlantic, where internal wave breaking occurs regularly. Vertical and horizontal coherence spectra show an aspect ratio of 0.25-0.5 near the buoyancy frequency, evidencing anisotropy. At higher frequencies, the transition to isotropy (aspect ratio of 1) is found within the inertial subrange. Above the continuous turbulence spectrum in this subrange, isolated peaks are visible that locally increase the spectral width, in contrast with open-ocean spectra. Their energy levels are found to be proportional to the tidal energy level.

  13. Quality Assessment of 3D Reconstruction Using Fisheye and Perspective Sensors

    NASA Astrophysics Data System (ADS)

    Strecha, C.; Zoller, R.; Rutishauser, S.; Brot, B.; Schneider-Zapp, K.; Chovancova, V.; Krull, M.; Glassey, L.

    2015-03-01

    Recent mathematical advances, growing alongside the use of unmanned aerial vehicles, have not only overcome the restriction of roll and pitch angles during flight but also enabled us to apply non-metric cameras in photogrammetric methods, providing more flexibility for sensor selection. Fisheye cameras, for example, advantageously provide images with wide coverage; however, these images are extremely distorted and their non-uniform resolution makes them more difficult to use for mapping or terrestrial 3D modelling. In this paper, we compare the usability of different camera-lens combinations, using the complete workflow implemented in Pix4Dmapper to achieve the final terrestrial reconstruction of a well-known historical site in Switzerland: Chillon Castle. We assess the accuracy of the outcome acquired by consumer cameras with perspective and fisheye lenses, comparing the results to a laser scanner point cloud.

  14. The valuable use of Microsoft Kinect™ sensor 3D kinematic in the rehabilitation process in basketball

    NASA Astrophysics Data System (ADS)

    Braidot, Ariel; Favaretto, Guillermo; Frisoli, Melisa; Gemignani, Diego; Gumpel, Gustavo; Massuh, Roberto; Rayan, Josefina; Turin, Matías

    2016-04-01

    Subjects who practice sports, either as professionals or amateurs, have a high incidence of knee injuries. Few publications present kinematic studies of lateral-structure knee injuries, including meniscal tears or chondral injury, without anterior cruciate ligament rupture. The use of standard motion capture systems for measuring outdoor sports is hard to implement for many operational reasons. Recently released, the Microsoft Kinect™ is a sensor that was developed to track movements for gaming purposes and has seen increased use in clinical applications. The fact that this device is a simple and portable tool allows the acquisition of data on common sport movements in the field. The development and testing of a set of protocols for 3D kinematic measurement using the Microsoft Kinect™ system is presented in this paper. The 3D kinematic evaluation algorithms were developed from the available information and with the use of Microsoft’s Software Development Kit 1.8 (SDK). Along with this, an algorithm for calculating the lower limb joint angles was implemented. Thirty healthy adult volunteers were measured, using five different recording protocols for sport-characteristic gestures that involve high knee injury risk in athletes.
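
    Lower-limb joint angles from tracked joint centers reduce to a vector angle between adjacent segments. A minimal sketch (Python/NumPy, using hypothetical hip, knee and ankle positions taken from the SDK skeleton):

        import numpy as np

        def knee_angle_deg(hip, knee, ankle):
            """Angle at the knee [degrees] from three 3D joint positions."""
            u, v = hip - knee, ankle - knee
            c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

        # Example with hypothetical joint coordinates [m]:
        # knee_angle_deg(np.array([0, 0.9, 2.0]), np.array([0, 0.5, 2.0]),
        #                np.array([0, 0.1, 2.1]))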

  15. Upper Extremity 3D Reachable Workspace Assessment in ALS by Kinect sensor

    PubMed Central

    Oskarsson, Bjorn; Joyce, Nanette C.; de Bie, Evan; Nicorici, Alina; Bajcsy, Ruzena; Kurillo, Gregorij; Han, Jay J.

    2016-01-01

    Introduction: Reachable workspace is a measure that provides clinically meaningful information regarding arm function. In this study, a Kinect sensor was used to determine the spectrum of 3D reachable workspace encountered in a cross-sectional cohort of individuals with ALS. Method: Bilateral 3D reachable workspace was recorded from 10 subjects with ALS and 23 healthy controls. The data were normalized by each individual's arm length to obtain a reachable workspace relative surface area (RSA). Concurrent validity was assessed by correlation with ALSFRSr scores. Results: The Kinect-measured reachable workspace RSA differed significantly between the ALS and control subjects (0.579±0.226 vs. 0.786±0.069; P<0.001). The RSA demonstrated correlation with ALSFRSr upper extremity items (Spearman correlation ρ=0.569; P=0.009). With worsening upper extremity function as categorized by the ALSFRSr, the reachable workspace also decreased progressively. Conclusions: This study demonstrates the feasibility and potential of using a novel Kinect-based reachable workspace outcome measure in ALS. PMID:25965847

  16. Image synchronization for 3D application using the NanEye sensor

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Based on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor, based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables them to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of 3D stereo vision equipment smaller than 3 mm in diameter for medical endoscopic contexts, such as endoscopic surgical robotics or micro-invasive surgery.
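
    The supply-voltage regulation described amounts to a feedback loop on the measured line period. A proportional-control sketch (plain Python; the gain, voltage limits and the assumed polarity, higher voltage giving a shorter period, are all hypothetical):

        def adjust_supply(v_now, period_meas, period_target,
                          kp=0.01, v_min=1.6, v_max=2.2):
            """One control step: raise the supply when the self-timed sensor
            runs slow (period too long), lower it when it runs fast."""
            error = period_meas - period_target
            v_new = v_now + kp * error
            return min(max(v_new, v_min), v_max)   # clamp to safe rail limits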

  17. CLASS: Coherent Lidar Airborne Shear Sensor. Windshear avoidance

    NASA Technical Reports Server (NTRS)

    Targ, Russell

    1991-01-01

    The coherent lidar airborne shear sensor (CLASS) is an airborne CO2 lidar system being designed and developed by Lockheed Missiles and Space Company, Inc. (LMSC) under contract to NASA Langley Research Center. The goal of this program is to develop a system with a 2- to 4-kilometer range that will provide a warning time of 20 to 40 seconds, so that the pilot can avoid the hazards of low-altitude wind shear under all weather conditions. It is a predictive system which will warn the pilot about a hazard that the aircraft will experience at some later time. The ability of the system to provide predictive warnings of clear air turbulence will also be evaluated. A one-year flight evaluation program will measure the line-of-sight wind velocity from a wide variety of wind fields obtained by an airborne radar, an accelerometer-based reactive wind-sensing system, and a ground-based Doppler radar. The success of the airborne lidar system will be determined by its correlation with the windfield as indicated by the onboard reactive system, which indicates the winds actually experienced by the NASA Boeing 737 aircraft.

  18. Spatio-temporal interpolation of soil moisture in 3D+T using automated sensor network data

    NASA Astrophysics Data System (ADS)

    Gasch, C.; Hengl, T.; Magney, T. S.; Brown, D. J.; Gräler, B.

    2014-12-01

    Soil sensor networks provide frequent in situ measurements of dynamic soil properties at fixed locations, producing data in 2 or 3 dimensions and through time (2D+T and 3D+T). Spatio-temporal interpolation of 3D+T point data produces continuous estimates that can then be used for prediction at unsampled times and locations, as input for process models, and for visualization of properties through space and time. Regression-kriging with 3D and 2D+T data has been implemented successfully, but the field of geostatistics currently lacks an analytical framework for modeling 3D+T data. Our objective is to develop robust 3D+T models for mapping dynamic soil data that have been collected with high spatial and temporal resolution. For this analysis, we use data collected from a sensor network installed on the R.J. Cook Agronomy Farm (CAF), a 37-ha Long-Term Agro-Ecosystem Research (LTAR) site in Pullman, WA. For five years, the sensors have collected hourly measurements of soil volumetric water content at 42 locations and five depths. The CAF dataset also includes a digital elevation model and derivatives, a soil unit description map, crop rotations, electromagnetic induction surveys, daily meteorological data, and seasonal satellite imagery. The soil-water sensor data, combined with the spatial and temporal covariates, provide an ideal dataset for developing 3D+T models. The presentation will include preliminary results and address the main implementation strategies.
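
    As a rough illustration of the trend-modeling half of 3D+T regression-kriging, a regression on space-time coordinates plus covariates could look like the sketch below; the file and column names are hypothetical, not the CAF dataset's actual schema:

```python
# Illustrative stand-in for the regression (trend) part of 3D+T
# regression-kriging: predict volumetric water content from space-time
# coordinates and covariates; kriging of the residuals would follow.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

obs = pd.read_csv("sensor_readings.csv")   # hypothetical export of the network
X = obs[["x", "y", "depth_m", "day_of_year", "elevation", "precip_mm"]]
y = obs["vwc"]                             # volumetric water content

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
query = X.iloc[[0]]                        # an unsampled point would go here
print(model.predict(query))
```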

  19. Multi-sensor system for surface inspection and 3D-geometry assessment

    NASA Astrophysics Data System (ADS)

    Becker, Markus; Weber, Juergen; Schubert, Erhard

    1997-09-01

    This paper addresses an installed application in quality control where a 100% inspection of geometry (3D) and surface of cuboid (parallelepiped) and ring-shaped magnets is done using a system of two CCD matrix cameras, one of which is equipped with on-board processing components, and a transmitted-light sensor with microcontroller-based data processing for the measurement of the height of the objects. The geometry and surface properties are measured with a diffuse indirect IR-LED flash, mounted in a ring around the object, and a telecentric lens to avoid perspective distortions due to different heights of the measured objects. The surface inspection looks for broken pieces, surface faults due to spalling/chipping, and for cracks. The second CCD camera uses the same illumination and algorithm to inspect the surface of the other side of the objects after they have been turned around on a return conveyor belt. All components are triggered by separate light barriers and perform their tasks independently. The integration of the results of each measurement is done by an SPC, which also controls the actuators that handle the three different classes of objects (good, bad, rework). These actuators are valves, and the objects are separated by pressurized air. The main concern of this paper is the system aspect: how the measurement results are evaluated and combined to achieve a correct classification of objects which are inspected by three independent sensors and arrive at unpredictable time intervals.

  20. 3-D Flash Lidar Performance in Flight Testing on the Morpheus Autonomous, Rocket-Propelled Lander to a Lunar-Like Hazard Field

    NASA Technical Reports Server (NTRS)

    Roback, Vincent E.; Amzajerdian, Farzin; Bulyshev, Alexander E.; Brewster, Paul F.; Barnes, Bruce W.

    2016-01-01

    For the first time, a 3-D imaging Flash Lidar instrument has been used in flight to scan a lunar-like hazard field, build a 3-D Digital Elevation Map (DEM), identify a safe landing site, and, in concert with an experimental Guidance, Navigation, and Control (GN&C) system, help to guide the Morpheus autonomous, rocket-propelled, free-flying lander to that safe site on the hazard field. The flight tests served as the TRL 6 demo of the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) system and included launch from NASA-Kennedy, a lunar-like descent trajectory from an altitude of 250 m, and landing on a lunar-like hazard field of rocks, craters, hazardous slopes, and safe sites 400 m down-range. The ALHAT project developed a system capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar is a second generation, compact, real-time, air-cooled instrument. Based upon extensive on-ground characterization at flight ranges, the Flash Lidar was shown to be capable of imaging hazards from a slant range of 1 km with an 8 cm range precision and a range accuracy better than 35 cm, both at 1-σ. The Flash Lidar identified landing hazards as small as 30 cm from the maximum slant range which Morpheus could achieve (450 m); however, under certain wind conditions it was susceptible to scintillation arising from air heated by the rocket engine and to pre-triggering on a dust cloud created during launch and transported down-range by wind.
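
    A toy illustration of the kind of DEM screening a hazard-detection system performs: flag cells whose local slope or roughness exceed thresholds. The thresholds and helper names are made up, not ALHAT parameters:

```python
# Toy DEM-based hazard screening: mark cells that are too steep or too
# rough to be a safe landing site.
import numpy as np
from scipy.ndimage import uniform_filter

def hazard_map(dem: np.ndarray, cell_m: float,
               max_slope_deg: float = 10.0, max_rough_m: float = 0.3):
    gy, gx = np.gradient(dem, cell_m)                  # per-axis gradients
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    rough = np.abs(dem - uniform_filter(dem, size=3))  # crude local roughness
    return (slope_deg > max_slope_deg) | (rough > max_rough_m)

dem = np.random.default_rng(1).normal(0.0, 0.05, size=(100, 100))
print(hazard_map(dem, cell_m=0.1).mean())  # fraction of cells flagged
```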

  2. Sensor fusion III: 3-D perception and recognition; Proceedings of the Meeting, Boston, MA, Nov. 5-8, 1990

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1991-01-01

    The volume on data fusion from multiple sources discusses fusing multiple views, temporal analysis and 3D motion interpretation, sensor fusion and eye-to-hand coordination, and integration in human shape perception. Attention is given to surface reconstruction, statistical methods in sensor fusion, fusing sensor data with environmental knowledge, computational models for sensor fusion, and evaluation and selection of sensor fusion techniques. Topics addressed include the structure of a scene from two and three projections, optical flow techniques for moving target detection, tactical sensor-based exploration in a robotic environment, and the fusion of human and machine skills for remote robotic operations. Also discussed are K-nearest-neighbor concepts for sensor fusion, surface reconstruction with discontinuities, a sensor-knowledge-command fusion paradigm for man-machine systems, coordinating sensing and local navigation, and terrain map matching using multisensing techniques for applications to autonomous vehicle navigation.

  3. Enhanced detection of 3D individual trees in forested areas using airborne full-waveform LiDAR data by combining normalized cuts with spatial density clustering

    NASA Astrophysics Data System (ADS)

    Yao, W.; Krzystek, P.; Heurich, M.

    2013-10-01

    A detailed understanding of the spatial distribution of forest understory is important but difficult to obtain. LiDAR remote sensing has been developing into a promising complement to conventional field work towards automated forest inventory. Unfortunately, understory (up to 50% of the top-tree height) in mixed and multilayered forests is often ignored due to the difficult observation scenario and the limitations of tree detection algorithms. Full-waveform (FWF) LiDAR, with its high penetration ability through overstory crowns, offers new hope of resolving the forest understory. A former approach based on 3D segmentation confirmed that tree detection rates in both the middle and lower forest layers are still low, so detecting sub-dominant and suppressed trees cannot be regarded as fully solved. In this work, we aim to improve the performance of the FWF laser scanner for the mapping of forest understory. The paper develops an enhanced methodology for detecting 3D individual trees by partitioning airborne LiDAR point clouds. After extracting the 3D coordinates of the laser beam echoes and the pulse intensity and width by waveform decomposition, the newly developed approach resolves 3D single trees by an integrated approach which delineates tree crowns by applying normalized cuts segmentation to the graph structure of local dense modes in the point cloud constructed by mean shift clustering. In the context of our strategy, the mean shift clusters approximate primitives of (sub-)single trees in the LiDAR data and allow more significant features to be defined that reflect geometric and reflectance characteristics at the single tree level. The developed methodology can be regarded as an object-based point cloud analysis approach for tree detection and is applied to datasets captured with the Riegl LMS-Q560 laser scanner at a point density of 25 points/m2 in the Bavarian Forest National Park, Germany, under leaf-on and leaf-off conditions.
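
    The first stage of the pipeline described above, clustering echoes into local dense modes, can be sketched as follows; the file name and bandwidth are illustrative assumptions:

```python
# Sketch: mean shift clustering of lidar echoes into local dense modes
# that approximate (sub-)tree primitives; the modes would then become the
# graph nodes for normalized-cuts segmentation.
import numpy as np
from sklearn.cluster import MeanShift

points = np.loadtxt("echoes_xyz.txt")      # hypothetical N x 3 echo list
ms = MeanShift(bandwidth=1.5, bin_seeding=True).fit(points)
labels, modes = ms.labels_, ms.cluster_centers_
print(f"{len(modes)} candidate primitives from {len(points)} echoes")
```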

  4. Using a magnetite/thermoplastic composite in 3D printing of direct replacements for commercially available flow sensors

    NASA Astrophysics Data System (ADS)

    Leigh, S. J.; Purssell, C. P.; Billson, D. R.; Hutchins, D. A.

    2014-09-01

    Flow sensing is an essential technique required for a wide range of application environments, ranging from liquid dispensing to utility monitoring. A number of different methodologies and deployment strategies have been devised to cover the diverse range of potential application areas. The ability to easily create new bespoke sensors for new applications is therefore of natural interest. Fused deposition modelling is a 3D printing technology based upon the fabrication of 3D structures in a layer-by-layer fashion using extruded strands of molten thermoplastic. The technology was developed in the late 1980s but has only recently come to more wide-scale attention outside of specialist applications and rapid prototyping, due to the advent of low-cost 3D printing platforms such as the RepRap. Due to the relatively low cost of the printers and feedstock materials, these printers are ideal candidates for wide-scale installation as localized manufacturing platforms to quickly produce replacement parts when components fail. One of the current limitations of the technology is the availability of functional printing materials to facilitate production of complex functional 3D objects and devices beyond mere concept prototypes. This paper presents the formulation of a simple magnetite nanoparticle-loaded thermoplastic composite and its incorporation into a 3D printed flow sensor in order to mimic the function of a commercially available flow-sensing device. Using the multi-material printing capability of the 3D printer allows a much smaller amount of functional material to be used in comparison to the commercial flow sensor, by only placing the material where it is specifically required. Analysis of the printed sensor also revealed a much more linear response to increasing flow rate of water, showing that 3D printed devices have the potential to perform at least as well as a conventionally produced sensor.

  5. 3D modeling of light interception in heterogeneous forest canopies using ground-based LiDAR data

    NASA Astrophysics Data System (ADS)

    Van der Zande, Dimitry; Stuckens, Jan; Verstraeten, Willem W.; Mereu, Simone; Muys, Bart; Coppin, Pol

    2011-10-01

    A methodology is presented that describes the direct interaction of a forest canopy with incoming radiation by using terrestrial LiDAR based vegetation structure in a radiative transfer model. The proposed 'Voxel-based Light Interception Model' (VLIM) is designed to estimate the Percentage of Above Canopy Light (PACL) at any given point of the forest scene. First, a voxel-based representation of trees is derived from terrestrial LiDAR data as structural input to model and analyze the light interception of canopies at near-leaf-level scale. Nine virtual forest stands of three species (beech, poplar, plantain) were generated by means of stochastic L-systems as tree descriptors. Using ray-tracing technology, hemispherical LiDAR measurements were simulated inside these virtual forests. The leaf area density (LAD) estimates derived from the LiDAR datasets resulted in a mean absolute error of 32.57% without correction and 16.31% when leaf/beam interactions were taken into account. Next, comparison of PACL estimates computed with the VLIM against fully rendered light distributions throughout the canopy based on the L-systems yielded a mean absolute error of 5.78%. This work shows the potential of the VLIM to model both instantaneous light interception by a canopy and average light distributions for entire seasons.
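
    The core of a voxel-based light-interception estimate is Beer-Lambert attenuation of a ray through the LAD voxels it crosses; the sketch below assumes a constant projection coefficient g and equal path lengths, which is a simplification of the VLIM:

```python
# Beer-Lambert attenuation of one ray through a column of LAD voxels;
# the returned transmittance is a PACL-like quantity for that ray.
import numpy as np

def transmittance_along_ray(lad_per_voxel: np.ndarray,
                            path_len_m: float, g: float = 0.5) -> float:
    """lad_per_voxel: LAD (m^2/m^3) of each crossed voxel; path_len_m:
    path length inside each voxel; g: leaf projection coefficient."""
    optical_depth = g * float(np.sum(lad_per_voxel)) * path_len_m
    return float(np.exp(-optical_depth))

print(transmittance_along_ray(np.array([0.4, 0.9, 0.6]), path_len_m=0.5))
```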

  6. Coherent Doppler Wind Lidar Development at NASA Langley Research Center for NASA Space-Based 3-D Winds Mission

    NASA Technical Reports Server (NTRS)

    Singh, Upendra N.; Kavaya, Michael J.; Yu, Jirong; Koch, Grady J.

    2012-01-01

    We review the 20-plus years of pulsed transmit laser development at NASA Langley Research Center (LaRC) to enable a coherent Doppler wind lidar to measure global winds from earth orbit. We also briefly discuss the many other ingredients needed to prepare for this space mission.

  7. Practical issues in automatic 3D reconstruction and navigation applications using man-portable or vehicle-mounted sensors

    NASA Astrophysics Data System (ADS)

    Harris, Chris; Stennett, Carl

    2012-09-01

    The navigation of an autonomous robot vehicle and person localisation in the absence of GPS both rely on using local sensors to build a model of the 3D environment. Accomplishing such capabilities is not straightforward - there are many choices to be made of sensors and processing algorithms. Roke Manor Research has broad experience in this field, gained from building and characterising real-time systems that operate in the real world. This includes developing localisation for planetary and indoor rovers, model building of indoor and outdoor environments, and most recently, the building of texture-mapped 3D surface models.

  8. 3D force and displacement sensor for SFA and AFM measurements.

    PubMed

    Kristiansen, Kai; McGuiggan, Patricia; Carver, Greg; Meinhart, Carl; Israelachvili, Jacob

    2008-02-19

    A new device has been designed, and a prototype built and tested, that can simultaneously measure the displacements and/or the components of a force in three orthogonal directions. The "3D sensor" consists of four or eight strain gauges attached to the four arms of a single cross-shaped force-measuring cantilever spring. Finite element modeling (FEM) was performed to optimize the design configuration to give desired sensitivity of force, displacement, stiffness, and resonant frequency in each direction (x, y, and z) which were tested on a "mesoscale" device and found to agree with the predicted values to within 4-10%. The device can be fitted into a surface forces apparatus (SFA), and a future smaller "microscale" microfabricated version can be fitted into an atomic force microscope (AFM) for simultaneous measurements of the normal and lateral (friction) forces between a tip (or colloidal bead probe) and a surface, and the topography of the surface. Results of the FEM analysis are presented, and approximate equations derived using linear elasticity theory are given for the sensitivity in each direction. Initial calibrations and measurements of thin film rheology (lubrication forces) using the "mesoscale" prototype show the device to function as expected.

  9. Factors contributing to accuracy in the estimation of the woody canopy leaf area density profile using 3D portable lidar imaging.

    PubMed

    Hosoi, Fumiki; Omasa, Kenji

    2007-01-01

    Factors that contribute to the accuracy of estimating a woody canopy's leaf area density (LAD) using 3D portable lidar imaging were investigated. The 3D point cloud data for a Japanese zelkova canopy [Zelkova serrata (Thunberg) Makino] were collected using a portable scanning lidar from several points established on the ground and at 10 m above the ground. The LAD profiles were computed using voxel-based canopy profiling (VCP). The best LAD results [a root-mean-square error (RMSE) of 0.21 m2 m-3] for the measurement plot (corresponding to an absolute LAI error of 9.5%) were obtained by compositing the ground-level and 10 m measurements. The factors that most strongly affected estimation accuracy included the presence of non-photosynthetic tissues, the distribution of leaf inclination angles, the number (N) of incident laser beams in each region within the canopy, and G(θm) (the mean projection of a unit leaf area on a plane perpendicular to the direction of the laser beam at the measurement zenith angle θm). The influences of non-photosynthetic tissues and leaf inclination angle on the estimates amounted to 4.2-32.7% and 7.2-94.2%, respectively. The RMSE of the LAD estimations was expressed as a function of N and G(θm). PMID:17977852
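
    A hedged sketch of the voxel-based canopy profiling idea: per height layer, LAD follows from the laser contact frequency, the beam zenith angle, and G(θ). This simplifies the authors' formulation (no correction terms):

```python
# Per-layer LAD from contact frequency: LAD ~ (cos(theta)/G(theta)) *
# n_I/(n_I + n_P) / dh, a simplified form of voxel-based canopy profiling.
import numpy as np

def lad_profile(n_intercepted, n_passed, dh_m, theta_deg, g_theta):
    n_i = np.asarray(n_intercepted, dtype=float)   # beams stopped by foliage
    n_p = np.asarray(n_passed, dtype=float)        # beams passing through
    contact = n_i / (n_i + n_p)                    # per-layer contact frequency
    return (np.cos(np.radians(theta_deg)) / g_theta) * contact / dh_m

print(lad_profile([120, 300, 80], [880, 700, 920],
                  dh_m=0.5, theta_deg=57.5, g_theta=0.5))
```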

  10. Combination of spaceborne sensor(s) and 3-D aerosol models to assess global daily near-surface air quality

    NASA Astrophysics Data System (ADS)

    Kacenelenbogen, M.; Redemann, J.; Russell, P. B.

    2009-12-01

    Aerosol Particulate Matter (PM), measured by ground-based monitoring stations, is used as a standard by the EPA (Environmental Protection Agency) to evaluate daily air quality. PM monitoring is particularly important for human health protection because exposure to suspended particles can contribute, among other effects, to lung and respiratory diseases and even premature death. However, most of the PM monitoring stations are located close to cities, leaving large areas without any operational data. Satellite remote sensing is well suited for global coverage of the aerosol load and can provide an independent and supplemental data source to in situ monitoring. Nevertheless, PM at the ground cannot easily be determined from satellite AOD (Aerosol Optical Depth) without additional information on the optical/microphysical properties and vertical distribution of the aerosols. The objective of this study is to explore the efficacy and accuracy of combining a 3-D aerosol transport model and satellite remote sensing as a cost-effective approach for estimating ground-level PM on a global and daily basis. The estimation of the near-surface PM will use the vertical distribution (and, if possible, the physicochemical properties) of the aerosols inferred from a transport model and the measured total load of particles in the atmospheric column retrieved by satellite sensor(s). The first step is to select a chemical transport model (CTM) that provides “good” simulated aerosol vertical profiles. A few global (e.g., WRF-Chem-GOCART) or regional (e.g., MM5-CMAQ, PM-CAMx) CTMs will be compared during selected airborne campaigns like ARCTAS-CARB (Arctic Research of the Composition of the Troposphere from Aircraft and Satellites - California Air Resources Board). The next step will be to devise an algorithm that combines the satellite and model data to infer PM mass estimates at the ground, after evaluating different spaceborne instruments and possible multi-sensor combinations.
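
    The core scaling such studies rely on can be written in one line: surface PM is the satellite AOD times the model's ratio of surface PM to column AOD. A sketch, with illustrative numbers:

```python
# Surface PM from satellite AOD, scaled by the CTM's surface-PM-to-AOD
# ratio eta = PM_model / AOD_model (a standard first-order approach).
def surface_pm(aod_satellite: float,
               pm_model_surface: float, aod_model: float) -> float:
    eta = pm_model_surface / aod_model   # ug/m^3 per unit AOD, from the model
    return aod_satellite * eta

# e.g. the model gives 40 ug/m^3 at a column AOD of 0.5; the satellite
# retrieves an AOD of 0.3 at the same place and time.
print(surface_pm(0.3, pm_model_surface=40.0, aod_model=0.5))  # -> 24.0
```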

  11. Development of 3D carbon nanotube interdigitated finger electrodes on polymer substrate for flexible capacitive sensor application

    NASA Astrophysics Data System (ADS)

    Hu, Chih-Fan; Wang, Jhih-Yu; Liu, Yu-Chia; Tsai, Ming-Han; Fang, Weileun

    2013-11-01

    This study reports a novel approach to the implementation of 3D carbon nanotube (CNT) interdigitated finger electrodes on flexible polymer, and the detection of strain, bending curvature, tactile force and proximity distance is demonstrated. The merits of the presented CNT-based flexible sensor are as follows: (1) the silicon substrate is patterned to enable the formation of 3D vertically aligned CNTs on the substrate surface; (2) polymer molding on the silicon substrate with 3D CNTs is further employed to transfer the 3D CNTs to the flexible polymer substrate; (3) the CNT-polymer composite (~70 μm in height) is employed to form interdigitated finger electrodes to increase the sensing area and initial capacitance; (4) other structures such as electrical routings, resistors and mechanical supports are also available using the CNT-polymer composite. The preliminary fabrication results demonstrate a flexible capacitive sensor with 50 μm high CNT interdigitated electrodes on a poly-dimethylsiloxane substrate. The tests show that the typical capacitance change is several tens of fF and the gauge factor is in the range of 3.44-4.88 for strain and bending curvature measurement; the sensitivity of the tactile sensor is 1.11% N-1, and a proximity distance of about 2 mm from the sensor can be detected.

  12. Rapid 3D Patterning of Poly(acrylic acid) Ionic Hydrogel for Miniature pH Sensors.

    PubMed

    Yin, Ming-Jie; Yao, Mian; Gao, Shaorui; Zhang, A Ping; Tam, Hwa-Yaw; Wai, Ping-Kong A

    2016-02-17

    Poly(acrylic acid) (PAA), a highly ionic conductive hydrogel, can reversibly swell/deswell according to the surrounding pH conditions. An optical maskless stereolithography technology is presented to rapidly 3D pattern PAA for device fabrication. A highly sensitive miniature pH sensor is demonstrated by in situ printing of periodic PAA micropads on a tapered optical microfiber.

  13. Piezoresistive Sensor with High Elasticity Based on 3D Hybrid Network of Sponge@CNTs@Ag NPs.

    PubMed

    Zhang, Hui; Liu, Nishuang; Shi, Yuling; Liu, Weijie; Yue, Yang; Wang, Siliang; Ma, Yanan; Wen, Li; Li, Luying; Long, Fei; Zou, Zhengguang; Gao, Yihua

    2016-08-31

    Pressure sensors with high elasticity are in great demand for the realization of intelligent sensing, but a simple, inexpensive, and scalable method for manufacturing such sensors is still needed. Here, we report an efficient, simple, facile, and repeatable "dipping and coating" process to manufacture a piezoresistive sensor with high elasticity, based on a homogeneous 3D hybrid network of carbon nanotubes@silver nanoparticles (CNTs@Ag NPs) anchored on a skeleton sponge. Highly elastic, sensitive, and wearable sensors are obtained using the porous structure of the sponge and the synergy of the CNTs/Ag NPs. Our sensor was also tested over 2000 compression-release cycles, exhibiting excellent elasticity and cycling stability. Sensors with high performance and a simple fabrication process are promising devices for commercial production in various electronic devices, for example, sports performance monitoring and man-machine interfaces. PMID:27482721

  15. Using LiDAR Data to Measure the 3D Green Biomass of Beijing Urban Forest in China

    PubMed Central

    He, Cheng; Convertino, Matteo; Feng, Zhongke; Zhang, Siyu

    2013-01-01

    The purpose of this paper is to find a new approach to measuring the 3D green biomass of urban forest and to verify its precision. In this study, the 3D green biomass was acquired on the basis of a remote sensing inversion model in which each standing tree was first scanned by a Terrestrial Laser Scanner to capture its point cloud data; the point cloud was then opened in a digital mapping data acquisition system to obtain elevations in an independent coordinate system; and finally the captured individual volumes were associated with SPOT5 (Système Probatoire d'Observation de la Terre) remote sensing imagery by means of tools such as SPSS (Statistical Product and Service Solutions), GIS (Geographic Information System), RS (Remote Sensing) and spatial analysis software (FARO SCENE and Geomagic Studio 11). The results showed that the 3D green biomass of the Beijing urban forest was 399.1295 million m3, of which coniferous accounted for 28.7871 million m3 and broad-leaf for 370.3424 million m3. The accuracy of the 3D green biomass was over 85% in comparison with values from 235 field samples collected in a typical sampling campaign. This suggests that the precision of the 3D forest green biomass estimate based on SPOT5 imagery meets requirements. This represents an improvement over the conventional method because it not only provides a basis for evaluating greening indices for Beijing, but also introduces a new technique to assess 3D green biomass in other cities. PMID:24146792

  17. An INSPIRE-Conformant 3D Building Model of Bavaria Using Cadastre Information, LiDAR and Image Matching

    NASA Astrophysics Data System (ADS)

    Roschlaub, R.; Batscheider, J.

    2016-06-01

    The surveying authorities of the German federal states endeavour to create a harmonized 3D building data set based on a common application schema (the AdV-CityGML-Profile). The Bavarian Agency for Digitisation, High-Speed Internet and Surveying has launched a statewide 3D Building Model with standardized roof shapes for all 8.1 million buildings in Bavaria. For the acquisition of the 3D Building Model, LiDAR data or data from image matching are used as a basis, together with the building ground plans of the official cadastral map. The data management of the 3D Building Model is carried out by a central database using the nationwide standardized CityGML profile of the AdV. The 3D Building Model is updated for new buildings by terrestrial building measurements within the maintenance process of the cadastre and from image matching. In a joint research project, the Bavarian State Agency for Surveying and Geoinformation and the TUM Chair of Geoinformatics transformed an AdV-CityGML-Profile-based test data set of Bavarian LoD2 building models into an INSPIRE-compliant schema. For the purpose of such a transformation, the AdV provides a data specification, a test plan for 3D Building Models, and a mapping table. The research project examined whether the transformation rules defined in the mapping table were unambiguous and sufficient for implementing a transformation of LoD2 data based on the AdV-CityGML-Profile into the INSPIRE schema. The proof of concept was carried out by transforming production data of the Bavarian 3D Building Model in LoD2 into the INSPIRE BU schema. In order to assure the quality of the data to be transformed, the test specifications according to the test plan for 3D Building Models of the AdV were carried out. The AdV mapping table was checked for completeness and correctness, and amendments were made accordingly.

  18. TiO2 particles on a 3D network of single-walled nanotubes for NH3 gas sensors.

    PubMed

    Jo, Yong Deok; Lee, Sooken; Seo, Jeongeun; Lee, Soobum; Ann, Doyeon; Lee, Haiwon

    2014-12-01

    Ammonia (NH3) gas is one of the gases that cause damage to the environment, such as acidification, and contribute to climate change. In this study, a gas sensor based on a three-dimensional (3D) network of single-walled nanotubes (SWNTs) was fabricated for the detection of NH3 gas in dry air. The sensor showed enhanced performance due to the fast gas diffusion rate and weak interactions between the carbon nanotubes and the substrate. Metal oxide particles were introduced to further enhance the performance of the gas sensor. Atomic layer deposition (ALD) was employed to deposit the metal oxide within the complex structure, and good control over thickness was achieved. The hybrid gas sensor consisting of the 3D network of SWNTs with anatase TiO2 particles showed stable, repeatable, and enhanced gas-sensing performance. The phase of the TiO2 particles was characterized by Raman spectroscopy, and the morphology of the TiO2 particles on the 3D network of SWNTs was analyzed by transmission electron microscopy.

  19. Doppler lidar atmospheric wind sensor: reevaluation of a 355-nm incoherent Doppler lidar.

    PubMed

    Rees, D; McDermid, I S

    1990-10-01

    We reevaluate the performance of an incoherent Doppler lidar system operating at 354.7 nm, based on recent but well-proven Nd:YAG laser technology and currently available optical sensors. For measurements in the lower troposphere, up to ~5 km altitude, and also in the Junge layer of the lower stratosphere, a wind component accuracy of ±2 m/s and a vertical resolution of 1 km should be obtained with a single pulse from a 1-J laser, operating at Polar Platform altitudes (700-850 km) and high scan angles (55 degrees). For wind measurements in the upper troposphere (above ~5 km altitude) and stratosphere (above and below the Junge layer), the concentration of scatterers is much lower, and higher energies would be required to maintain ±2 m/s accuracy and 1 km vertical resolution using single laser pulses. Except for the region in the vicinity of the tropopause (10 km altitude), a 5-J pulse would be appropriate for measurements in these regions. The worst case is encountered near 10 km altitude, where we calculate that a 15-J pulse would be required. To reduce this energy requirement, we propose to degrade the altitude resolution from 1 km to 2-3 km, and also to consider averaging multiple pulses. Degrading the vertical and horizontal resolution could provide an acceptable method of obtaining the required wind accuracy without the penalty of using a laser of higher output power. We believe that a Doppler lidar system, employing a near-ultraviolet laser with a pulse energy of 5 J, could achieve the performance objectives required by the major potential users of a global space-borne wind observing system.

  20. 3D sensor placement strategy using the full-range pheromone ant colony system

    NASA Astrophysics Data System (ADS)

    Shuo, Feng; Jingqing, Jia

    2016-07-01

    An optimized sensor placement strategy is extremely beneficial for ensuring safety and reducing costs in structural health monitoring (SHM) systems. The sensors must be placed such that important dynamic information is obtained while the number of sensors is minimized. Common practice is to select individual sensor directions by one of several 1D sensor methods and to place triaxial sensors in these directions for monitoring; however, this may lead to non-optimal placement of many triaxial sensors. In this paper, a new method called FRPACS is proposed, based on the ant colony system (ACS), to solve the optimal placement of triaxial sensors. The triaxial sensors are placed as single units in an optimal fashion. The new method is then compared with other algorithms using the Dalian North Bridge as a case study. The computational precision and iteration efficiency of FRPACS are greatly improved compared with the original ACS and the EFI method.
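
    A highly simplified ant-colony-style placement loop is sketched below to show the general mechanics (probabilistic selection, evaporation, reinforcement); FRPACS itself adds full-range pheromone updating, and the mode shapes and fitness measure here are synthetic assumptions:

```python
# Toy ant-colony search for triaxial sensor locations: fitness is the
# determinant of the Fisher information matrix of the selected rows of a
# (synthetic) mode-shape matrix, an EFI-like information measure.
import numpy as np

rng = np.random.default_rng(0)
phi = rng.normal(size=(60, 6))          # 60 candidate locations, 6 modes
n_sensors, n_ants, n_iter, rho = 8, 20, 50, 0.1
tau = np.ones(len(phi))                 # pheromone per candidate location

def fitness(sel):
    a = phi[sel]
    return float(np.linalg.det(a.T @ a))

best_sel, best_fit = None, -np.inf
for _ in range(n_iter):
    for _ant in range(n_ants):
        sel = rng.choice(len(phi), size=n_sensors, replace=False,
                         p=tau / tau.sum())
        f = fitness(sel)
        if f > best_fit:
            best_fit, best_sel = f, sel
    tau *= (1.0 - rho)                  # evaporation
    tau[best_sel] += rho * best_fit     # reinforce the best placement found
print(sorted(best_sel.tolist()), best_fit)
```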

  1. Diborane Electrode Response in 3D Silicon Sensors for the CMS and ATLAS Experiments

    SciTech Connect

    Brown, Emily R.; /Reed Coll. /SLAC

    2011-06-22

    Unusually high leakage currents have been measured in test wafers produced by the manufacturer SINTEF containing 3D pixel silicon sensor chips designed for the ATLAS (A Toroidal LHC ApparatuS) and CMS (Compact Muon Solenoid) experiments. Previous data had shown the CMS chips as having a lower leakage current after processing than the ATLAS chips. Proposed causes of the leakage currents include the dicing process and the use of copper in bump bonding, with differences in packaging and handling between the ATLAS and CMS chips suggested as the cause of the disparity between the two. Data taken at SLAC from a SINTEF wafer with electrodes doped with diborane and filled with polysilicon, before dicing, and with indium bumps added contradict this earlier data: the ATLAS chips (FEI3s) actually performed better, showing a lower leakage current than the CMS chips. Because this wafer was neither diced nor had any copper added for bump bonding, the data also argue against the dicing process and copper bump bonding as the main causes of leakage current, and against differences in packaging and handling or the intrinsic geometry of the two designs as the cause of the disparity. Even though the leakage current in the FEI3s is lower overall, it is still significant enough to cause problems, and its source remains largely unknown. To complement this information, more data will be taken on the efficiency of the individual electrodes of the ATLAS and CMS chips on this wafer. The electrodes will be shot perpendicularly with a laser to test the efficiency across the width of the electrode. A mask with pinholes has been made to focus the laser to a beam smaller than the

  2. Increasing the effective aperture of a detector and enlarging the receiving field of view in a 3D imaging lidar system through hexagonal prism beam splitting.

    PubMed

    Lee, Xiaobao; Wang, Xiaoyi; Cui, Tianxiang; Wang, Chunhui; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-07-11

    The detector in a highly accurate, high-definition scanning 3D imaging lidar system requires a high frequency bandwidth and a sufficient photosensitive area. To address the small photosensitive area of an existing indium gallium arsenide detector with a given frequency bandwidth, this study proposes a method for increasing the receiving field of view (FOV) and enlarging the effective photosensitive aperture of such a detector through hexagonal prism beam splitting. The principle and construction of hexagonal prism beam splitting are also discussed. Accordingly, a receiving optical system with two hexagonal prisms is presented, and the beam-splitting effect of the simulation experiment is analyzed. Using this novel method, the receiving optical system's FOV can be effectively improved up to ±5°, and the effective photosensitive aperture of the detector is increased from 0.5 mm to 1.5 mm. PMID:27410800

  3. Retrieving Leaf Area Index and Foliage Profiles Through Voxelized 3-D Forest Reconstruction Using Terrestrial Full-Waveform and Dual-Wavelength Echidna Lidars

    NASA Astrophysics Data System (ADS)

    Strahler, A. H.; Yang, X.; Li, Z.; Schaaf, C.; Wang, Z.; Yao, T.; Zhao, F.; Saenz, E.; Paynter, I.; Douglas, E. S.; Chakrabarti, S.; Cook, T.; Martel, J.; Howe, G.; Hewawasam, K.; Jupp, D.; Culvenor, D.; Newnham, G.; Lowell, J.

    2013-12-01

    Measuring and monitoring canopy biophysical parameters provide a baseline for carbon flux studies related to deforestation and disturbance in forest ecosystems. Terrestrial full-waveform lidar systems, such as the Echidna Validation Instrument (EVI) and its successor Dual-Wavelength Echidna Lidar (DWEL), offer rapid, accurate, and automated characterization of forest structure. In this study, we apply a methodology based on voxelized 3-D forest reconstructions built from EVI and DWEL scans to directly estimate two important biophysical parameters: Leaf Area Index (LAI) and foliage profile. Gap probability, apparent reflectance, and volume associated with the laser pulse footprint at the observed range are assigned to the foliage scattering events in the reconstructed point cloud. Leaf angle distribution is accommodated with a simple model based on gap probability with zenith angle as observed in individual scans of the stand. The DWEL instrument, which emits simultaneous laser pulses at 1064 nm and 1548 nm wavelengths, provides a better capability to separate trunk and branch hits from foliage hits due to water absorption by leaf cellular contents at the 1548 nm band. We generate voxel datasets of foliage points using a classification methodology based solely on pulse shape for scans collected by EVI, and on pulse shape and band ratio for scans collected by DWEL. We then compare the LAIs and foliage profiles retrieved from the voxel datasets of the two instruments at the same red fir site in Sierra National Forest, CA, with each other and with observations from airborne and field measurements. This study further tests the voxelization methodology in obtaining LAI and foliage profiles that are largely free of clumping effects and returns from woody materials in the canopy. These retrievals can provide a valuable 'ground-truth' validation data source for retrievals from large-footprint spaceborne or airborne lidar systems.

  4. 3D-stacked Ag nanowires for efficient plasmonic light absorbers and SERS sensors

    NASA Astrophysics Data System (ADS)

    Kim, Dong-Ho; Mun, ChaeWon; Lee, MinKyoung; Park, Sung-Gyu

    2016-04-01

    We report new 3D hybrid plasmonic nanostructures exhibiting highly sensitive SERS-based sensing performance, utilizing efficient plasmonic light absorption and an analyte-enrichment effect. The hybrid plasmonic nanostructures are composed of 3D-stacked Ag NWs and NPs separated by a thin hydrophobic dielectric interlayer. A hydrophobic polydimethylsiloxane (PDMS) interlayer provides a dielectric nanogap between the Ag NWs and NPs, and an analyte-enrichment effect due to the inhibition of drop spreading. The 3D hybrid PDMS-interlayered Ag nanostructures showed hydrophobicity with an initial contact angle of 137.6°. Utilizing the analyte-enrichment strategy, the PDMS-interlayered Ag nanostructures exhibited a sensitivity to methylene blue molecules enhanced by a factor of 10 (limit of detection, LOD, of 1.5 nM) compared to the alumina-separated 3D hybrid Ag nanostructures.

  5. A nano-microstructured artificial-hair-cell-type sensor based on topologically graded 3D carbon nanotube bundles

    NASA Astrophysics Data System (ADS)

    Yilmazoglu, O.; Yadav, S.; Cicek, D.; Schneider, J. J.

    2016-09-01

    A design for a unique artificial-hair-cell-type sensor (AHCTS) based entirely on 3D-structured, vertically aligned carbon nanotube (CNT) bundles is introduced. Standard microfabrication techniques were used for the straightforward micro-nano integration of vertically aligned carbon nanotube arrays composed of low-layer multi-walled CNTs (two to six layers). The mechanical properties of the carbon nanotube bundles were intensively characterized with regard to various substrates and CNT morphology, e.g. bundle height. The CNT bundles display excellent flexibility and mechanical stability for lateral bending, showing high tear resistance. The integrated 3D CNT sensor can detect three-dimensional forces using the deflection or compression of a central CNT bundle, which changes the contact resistance to the shorter neighboring bundles. The complete sensor system can be fabricated in a single chemical vapor deposition (CVD) process step. Moreover, sophisticated external contacts to the surroundings are not necessary for signal detection, and no additional sensors or external bias are required. This simplifies the miniaturization and integration of these nanostructures for future microsystem set-ups. The new nanostructured sensor system exhibits an average sensitivity of 2100 ppm of relative resistance change per micron (ppm μm-1) of individual CNT bundle tip deflection in the linear regime. Furthermore, experiments have shown highly sensitive piezoresistive behavior, with an electrical resistance decrease of up to ~11% at 50 μm mechanical deflection. The detection sensitivity is as low as 1 μm of deflection, and thus highly comparable with the tactile hair sensors of insects, which have typical thresholds on the order of 30-50 μm. The AHCTS can easily be adapted and applied as a flow, tactile or acceleration sensor, as well as a vibration sensor; potential applications of the latter might arise in artificial cochlear systems.
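
    Reading the reported sensitivity backwards gives a simple readout sketch: with roughly 2100 ppm of relative resistance change per micron in the linear regime, a measured resistance maps to an approximate deflection (the function and baseline values are illustrative):

```python
# Deflection estimate from the piezoresistive readout, valid only in the
# linear regime quoted above (~2100 ppm of dR/R per micron).
SENS_PPM_PER_UM = 2100.0

def deflection_um(r_baseline_ohm: float, r_measured_ohm: float) -> float:
    drr_ppm = (r_baseline_ohm - r_measured_ohm) / r_baseline_ohm * 1e6
    return drr_ppm / SENS_PPM_PER_UM

print(deflection_um(10_000.0, 9_790.0))  # a 2.1% decrease -> ~10 um
```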

  8. Using a 2D displacement sensor to derive 3D displacement information

    NASA Technical Reports Server (NTRS)

    Soares, Schubert F. (Inventor)

    2002-01-01

    A 2D displacement sensor is used to measure displacement in three dimensions. For example, the sensor can be used in conjunction with a pulse-modulated or frequency-modulated laser beam to measure displacement caused by deformation of an antenna on which the sensor is mounted.

  9. New insights into 3D calving investigations: use of Terrestrial LiDAR for monitoring the Perito Moreno glacier front (Southern Patagonian Ice Fields, Argentina)

    NASA Astrophysics Data System (ADS)

    Abellan, Antonio; Penna, Ivanna; Daicz, Sergio; Carrea, Dario; Derron, Marc-Henri; Guerin, Antoine; Jaboyedoff, Michel

    2015-04-01

    There exists great uncertainty concerning the processes that control glacier front disintegration, including the laws governing ice calving phenomena. Recording the surface processes occurring at a glacier's front has proven problematic due to the highly dynamic nature of calving, leaving the processes and forms that control and lead to discrete calving events poorly constrained. For instance, common observational limitations in quantifying the sudden occurrence of calving include the insufficient spatial and/or temporal resolution of conventional photogrammetric techniques and satellite missions. Furthermore, a lack of high-quality four-dimensional data on failures currently limits our ability to analyse and predict glacier dynamics in a straightforward manner. In order to overcome these limitations, we used a terrestrial LiDAR sensor (Optech Ilris 3D-LR) to intensively monitor the changes occurring at one of the most impressive calving glacier fronts: the Perito Moreno glacier, located in the Southern Patagonian Ice Fields (Argentina). Using this system, we were able to capture at an unprecedented level of detail the three-dimensional geometry of the glacier's front during five days (10th to 14th of March 2014). Each data collection, acquired at a mean interval of 20 minutes, consisted of the automatic acquisition of several million points at a mean density of 100-200 points per square meter. The maximum attainable range for the utilized wavelength of the Ilris-LR system (1064 nm) was around 500 meters over massive ice (showing no significant loss of information); this distance was considerably reduced over crystalline or wet ice shortly after the occurrence of calving events. By comparing successive three-dimensional datasets, we have investigated not only the magnitude and frequency of several ice failures at the glacier's terminus, but
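
    Epoch-to-epoch comparison of the scans can be sketched as a nearest-neighbor distance field between co-registered point clouds; real workflows use more robust measures (e.g. M3C2), so this is only a minimal illustration:

```python
# For each point of scan A, distance to the nearest point of scan B;
# large distances flag geometry change (e.g. calving loss) at the front.
import numpy as np
from scipy.spatial import cKDTree

def change_distances(scan_a: np.ndarray, scan_b: np.ndarray) -> np.ndarray:
    """scan_a: (N,3), scan_b: (M,3); clouds must already be co-registered."""
    dist, _ = cKDTree(scan_b).query(scan_a, k=1)
    return dist

a = np.random.default_rng(2).uniform(size=(1000, 3))
b = a + np.array([0.0, 0.0, 0.05])      # simulated 5 cm surface change
print(np.percentile(change_distances(a, b), 95))
```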

  10. Light-Weight Sensor Package for Precision 3D Measurement with Micro UAVs, e.g. Power-Line Monitoring

    NASA Astrophysics Data System (ADS)

    Kuhnert, K.-D.; Kuhnert, L.

    2013-08-01

    The paper describes a new sensor package for micro or mini UAVs and one application that has been successfully implemented with this sensor package. It is intended for 3D measurement of landscapes or large outdoor structures for mapping or monitoring purposes. The package can be composed modularly in several configurations. It may contain a laser scanner, camera, IMU, GPS and other sensors as required by the application; different products of the same sensor type have also been integrated. It always contains its own computing infrastructure and may be used for intelligent navigation, too. It can be operated in cooperation with different drones, and also completely independently of the type of drone it is attached to. To show the usability of the system, an application in monitoring high-voltage power lines that has been successfully realised with the package is described in detail.

  11. Combination of TLS Point Clouds and 3D Data from Kinect v2 Sensor to Complete Indoor Models

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    The combination of data coming from multiple sensors is increasingly applied in remote sensing (multi-sensor imagery) but also in cultural heritage and robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented, and their combination with a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor make it possible to reach a better level of detail in the final model, an adapted acquisition protocol may also provide several benefits, for example time savings. The paper aims at analyzing whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.
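
    One plausible way to merge a Kinect v2 detail patch into a TLS cloud, not necessarily the authors' pipeline, is a coarse pre-alignment followed by ICP fine registration; the sketch below assumes the Open3D library and hypothetical file names:

```python
# Coarse-to-fine merge of a Kinect v2 patch into a TLS point cloud using
# ICP (assumes Open3D; file names are hypothetical).
import numpy as np
import open3d as o3d

tls = o3d.io.read_point_cloud("room_tls.ply")          # global TLS scan
kinect = o3d.io.read_point_cloud("window_kinect.ply")  # detail patch

init = np.eye(4)   # assumes a rough pre-alignment has already been applied
result = o3d.pipelines.registration.registration_icp(
    kinect, tls, 0.05, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
kinect.transform(result.transformation)                # apply fine alignment
merged = tls + kinect                                  # combined model
print(result.fitness, result.inlier_rmse)
```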

  12. Optimal Sensor Placement for Measuring Physical Activity with a 3D Accelerometer

    PubMed Central

    Boerema, Simone T.; van Velsen, Lex; Schaake, Leendert; Tönis, Thijs M.; Hermens, Hermie J.

    2014-01-01

    Accelerometer-based activity monitors are popular for monitoring physical activity. In this study, we investigated optimal sensor placement for increasing the quality of studies that utilize accelerometer data to assess physical activity. We performed a two-stage study, focused on sensor location and type of mounting. Ten subjects walked at various walking speeds on a treadmill, performed a deskwork protocol, and walked on level ground, while simultaneously wearing five ProMove2 sensors with a snug fit on an elastic waist belt. We found that sensor location, type of activity, and their interaction effect affected sensor output. The most lateral positions on the waist belt were the least sensitive to interference. The effect of mounting was explored by having two subjects repeat the experimental protocol with sensors more loosely fitted to the elastic belt. The loose fit resulted in lower sensor output, except for the deskwork protocol, where output was higher. In order to increase the reliability and to reduce the variability of sensor output, researchers should place activity sensors on the most lateral position of a participant's waist belt. If the sensor hampers free movement, it may be positioned slightly more forward on the belt. Finally, sensors should be fitted tightly to the body. PMID:24553085

  13. 3D Geological Outcrop Characterization: Automatic Detection of 3D Planes (Azimuth and Dip) Using LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Anders, K.; Hämmerle, M.; Miernik, G.; Drews, T.; Escalona, A.; Townsend, C.; Höfle, B.

    2016-06-01

    Terrestrial laser scanning constitutes a powerful method of spatial data acquisition and allows geological outcrops to be captured with high resolution and accuracy. A crucial aspect for numerous geologic applications is the extraction of rock surface orientations from the data. This paper focuses on the detection of planes in rock surface data by applying a segmentation algorithm directly to a 3D point cloud. Its performance is assessed considering (1) reduced spatial resolution of the data and (2) smoothing in the course of data pre-processing. The methodology is tested on simulations of progressively reduced spatial resolution defined by varying point cloud density. Smoothing of the point cloud data is implemented by modifying the neighborhood criteria during normals estimation. The considerable alteration of the resulting planes emphasizes the influence of smoothing on plane detection prior to the actual segmentation; this parameter therefore needs to be set in accordance with the individual purpose and scale of a study. Furthermore, it is concluded that the quality of the segmentation results does not decline even when the data volume is reduced to as little as 10%. The azimuth and dip values of individual segments are determined from planes fit to the points belonging to each segment. Based on these results, the azimuth, dip and strike character of the surface planes in the outcrop are assessed. Thereby, this paper contributes to a fully automatic and straightforward workflow for a comprehensive geometric description of outcrops in 3D.
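
    Once a segment's points are known, its azimuth and dip follow from standard plane-fit geometry; the sketch below fits the plane by SVD and converts the upward normal to dip direction and dip angle, assuming x = east, y = north, z = up:

```python
# Azimuth (dip direction) and dip angle of one planar segment from a
# least-squares plane fit; x = east, y = north, z = up is assumed.
import numpy as np

def azimuth_dip(points: np.ndarray):
    """points: (N, 3) coordinates of one segmented planar patch."""
    centered = points - points.mean(axis=0)
    normal = np.linalg.svd(centered, full_matrices=False)[2][-1]
    if normal[2] < 0:
        normal = -normal                      # force an upward normal
    dip = np.degrees(np.arccos(normal[2]))    # plane's tilt from horizontal
    # the horizontal component of the upward normal points down-dip
    azimuth = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
    return azimuth, dip

rng = np.random.default_rng(3)
xy = rng.uniform(size=(200, 2))
plane = np.column_stack([xy, 0.3 * xy[:, 0]])  # plane rising toward +x (east)
print(azimuth_dip(plane))                      # ~ (270.0, 16.7)
```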

  14. 3D Lasers Increase Efficiency, Safety of Moving Machines

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Canadian company Neptec Design Group Ltd. developed its Laser Camera System, used by shuttles to render 3D maps of their hulls for assessing potential damage. Using NASA funding, the firm incorporated LiDAR technology and created the TriDAR 3D sensor. Its commercial arm, Neptec Technologies Corp., has sold the technology to Orbital Sciences, which uses it to guide its Cygnus spacecraft during rendezvous and dock operations at the International Space Station.

  15. Multipath Estimation in Urban Environments from Joint GNSS Receivers and LiDAR Sensors

    PubMed Central

    Ali, Khurram; Chen, Xin; Dovis, Fabio; De Castro, David; Fernández, Antonio J.

    2012-01-01

    In this paper, multipath error on Global Navigation Satellite System (GNSS) signals in urban environments is characterized with the help of Light Detection and Ranging (LiDAR) measurements. For this purpose, LiDAR equipment and Global Positioning System (GPS) receiver implementing a multipath estimating architecture were used to collect data in an urban environment. This paper demonstrates how GPS and LiDAR measurements can be jointly used to model the environment and obtain robust receivers. Multipath amplitude and delay are estimated by means of LiDAR feature extraction and multipath mitigation architecture. The results show the feasibility of integrating the information provided by LiDAR sensors and GNSS receivers for multipath mitigation. PMID:23202177
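
    The geometric core of LiDAR-aided multipath estimation can be sketched with the classic image-antenna construction: for a planar facade at perpendicular distance d from the antenna, the reflected signal travels an extra 2 d cos(psi), where psi is the angle between the satellite line of sight and the facade normal. The snippet below is a hedged illustration of this geometry only, not the paper's estimating receiver architecture; all variable names are assumptions.

```python
# Excess path and delay of a specular reflection off a LiDAR-extracted facade.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def multipath_delay(d_wall, sat_dir, wall_normal):
    sat_dir = sat_dir / np.linalg.norm(sat_dir)
    wall_normal = wall_normal / np.linalg.norm(wall_normal)
    cos_psi = abs(np.dot(sat_dir, wall_normal))
    excess_m = 2.0 * d_wall * cos_psi          # extra path length in metres
    return excess_m, excess_m / C              # and the equivalent delay in s

# Example: facade 15 m away, satellite 40 deg above the horizon, facing the wall.
el = np.radians(40.0)
excess, delay = multipath_delay(15.0,
                                np.array([np.cos(el), 0.0, np.sin(el)]),
                                np.array([1.0, 0.0, 0.0]))
```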

  16. Multipath estimation in urban environments from joint GNSS receivers and LiDAR sensors.

    PubMed

    Ali, Khurram; Chen, Xin; Dovis, Fabio; De Castro, David; Fernández, Antonio J

    2012-10-30

    In this paper, multipath error on Global Navigation Satellite System (GNSS) signals in urban environments is characterized with the help of Light Detection and Ranging (LiDAR) measurements. For this purpose, LiDAR equipment and Global Positioning System (GPS) receiver implementing a multipath estimating architecture were used to collect data in an urban environment. This paper demonstrates how GPS and LiDAR measurements can be jointly used to model the environment and obtain robust receivers. Multipath amplitude and delay are estimated by means of LiDAR feature extraction and multipath mitigation architecture. The results show the feasibility of integrating the information provided by LiDAR sensors and GNSS receivers for multipath mitigation.

  17. A 3D scaffold for ultra-sensitive reduced graphene oxide gas sensors.

    PubMed

    Yun, Yong Ju; Hong, Won G; Choi, Nak-Jin; Park, Hyung Ju; Moon, Seung Eon; Kim, Byung Hoon; Song, Ki-Bong; Jun, Yongseok; Lee, Hyung-Kun

    2014-06-21

    An ultra-sensitive gas sensor based on a reduced graphene oxide nanofiber mat was successfully fabricated using a combination of an electrospinning method and graphene oxide wrapping through an electrostatic self-assembly, followed by a low-temperature chemical reduction. The sensor showed excellent sensitivity to NO2 gas. PMID:24839129

  18. Development of 3D Force Sensors for Nanopositioning and Nanomeasuring Machine

    PubMed Central

    Tibrewala, Arti; Hofmann, Norbert; Phataralaoha, Anurak; Jäger, Gerd; Büttgenbach, Stephanus

    2009-01-01

    In this contribution, we report on different miniaturized bulk-micromachined three-axis piezoresistive force sensors for a nanopositioning and nanomeasuring machine (NPMM). Various boss membrane structures, such as one-boss full/cross, five-boss full/cross and swastika membranes, were used as the basic structure for the force sensors. All designs have 16 p-type diffused piezoresistors on the surface of the membrane. Sensitivities in the x, y and z directions are measured. The stiffness ratio between the horizontal and vertical directions is simulated and measured for each design. The effect of the stylus length on the H:V stiffness ratio is studied. Minimum and maximum deflection and resonance frequency are measured for all designs. The sensors were placed in a nanopositioning and nanomeasuring machine and one-point measurements were performed for all the designs. Lastly, an application of the sensor is shown, in which the dimensions of a cube are measured. PMID:22412308

  19. A model and simulation to predict the performance of angle-angle-range 3D flash LADAR imaging sensor systems

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Odhner, Jefferson E.; Russo, Leonard E.; McDaniel, Robert V.

    2005-10-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR Imaging Sensor systems. 3D Flash LADAR is the latest evolution of laser radar systems and provides a unique capability in its ability to deliver high-resolution LADAR imagery from a single laser pulse, rather than constructing an image from multiple pulses as with conventional scanning LADAR systems. However, accurate methods to model and simulate performance from these 3D LADAR systems have been lacking, relying upon either single-pixel LADAR performance or extrapolation from passive detection FPA performance. The model and simulation developed and reported here is expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment, this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) atmospheric transmission; 4) atmospheric backscatter; 5) atmospheric turbulence; 6) obscurants, and; 7) obscurant path length. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling and accounting for the non-uniformity of each individual pixel in the array. Here, noise sources are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function, resulting in a different gain for each pixel within the array. In this manner very accurate performance is determined pixel-by-pixel. Model outputs are in the form of 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array. Other outputs include power distribution from a target, signal-to-noise vs. range, probability of

  20. A model and simulation to predict the performance of angle-angle-range 3D flash ladar imaging sensor systems

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Odhner, Jefferson E.; Russo, Leonard E.; McDaniel, Robert V.

    2004-11-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR Imaging Sensor systems. 3D Flash LADAR is the latest evolution of laser radar systems and provides a unique capability in its ability to deliver high-resolution LADAR imagery from a single laser pulse, rather than constructing an image from multiple pulses as with conventional scanning LADAR systems. However, accurate methods to model and simulate performance from these 3D LADAR systems have been lacking, relying upon either single-pixel LADAR performance or extrapolation from passive detection FPA performance. The model and simulation developed and reported here is expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment, this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) atmospheric transmission; 4) atmospheric backscatter; 5) atmospheric turbulence; 6) obscurants, and; 7) obscurant path length. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling and accounting for the non-uniformity of each individual pixel in the array. Here, noise sources are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function, resulting in a different gain for each pixel within the array. In this manner very accurate performance is determined pixel-by-pixel. Model outputs are in the form of 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array. Other outputs include power distribution from a target, signal-to-noise vs. range, probability of

  1. Development of lidar sensor for cloud-based measurements during convective conditions

    NASA Astrophysics Data System (ADS)

    Vishnu, R.; Bhavani Kumar, Y.; Rao, T. Narayana; Nair, Anish Kumar M.; Jayaraman, A.

    2016-05-01

    Atmospheric convection is a natural phenomenon associated with heat transport. Convection is strong during daylight periods and rigorous in summer months, when severe ground heating is experienced together with strong winds. The tropics are considered source regions for strong convection, and the formation of thunderstorm clouds is common during this period. Locating the cloud base and its associated dynamics is important for understanding the influence of convection on the atmosphere. Lidars are sensitive to Mie scattering and are more suitable for locating clouds in the atmosphere than instruments utilizing the radio frequency spectrum. Thunderstorm clouds are composed of hydrometeors and strongly scatter laser light. Recently, a lidar technique was developed at the National Atmospheric Research Laboratory (NARL), a Department of Space (DOS) unit located at Gadanki near Tirupati. The technique employs slant-path operation and provides high-resolution measurements of the cloud-base location in real time. The laser-based remote sensing technique allows measurement of the atmosphere every second at 7.5 m range resolution. The high-resolution data permit assessment of updrafts at the cloud base. The lidar also provides the real-time convective boundary layer height using aerosols as tracers of atmospheric dynamics. The developed lidar sensor is planned to be upgraded with a scanning facility to study cloud dynamics in the spatial dimension. In this presentation, we describe the lidar sensor technology and its use for high-resolution cloud-base measurements during convective conditions over the lidar site at Gadanki.

  2. Detecting and Analyzing Corrosion Spots on the Hull of Large Marine Vessels Using Colored 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Aijazi, A. K.; Malaterre, L.; Tazir, M. L.; Trassoudaine, L.; Checchin, P.

    2016-06-01

    This work presents a new method that automatically detects and analyzes surface defects such as corrosion spots of different shapes and sizes, on large ship hulls. In the proposed method several scans from different positions and viewing angles around the ship are registered together to form a complete 3D point cloud. The R, G, B values associated with each scan, obtained with the help of an integrated camera are converted into HSV space to separate out the illumination invariant color component from the intensity. Using this color component, different surface defects such as corrosion spots of different shapes and sizes are automatically detected, within a selected zone, using two different methods depending upon the level of corrosion/defects. The first method relies on a histogram based distribution whereas the second on adaptive thresholds. The detected corrosion spots are then analyzed and quantified to help better plan and estimate the cost of repair and maintenance. Results are evaluated on real data using different standard evaluation metrics to demonstrate the efficacy as well as the technical strength of the proposed method.
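
    The colour-space step can be illustrated as follows: convert per-point R, G, B values to HSV and threshold the illumination-invariant components. This is a minimal sketch with a hypothetical fixed threshold; the paper instead selects between a histogram-based distribution and adaptive thresholds depending on the level of corrosion.

```python
# HSV conversion of per-point colours and a simple corrosion threshold.
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorised RGB->HSV for an Nx3 array with channels in [0, 1]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    v = rgb.max(axis=1)
    c = v - rgb.min(axis=1)                        # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)
    h = np.zeros_like(v)
    nz = c > 0
    r_, g_, b_ = r[nz], g[nz], b[nz]
    c_, v_ = c[nz], v[nz]
    hh = np.where(v_ == r_, (g_ - b_) / c_,
         np.where(v_ == g_, 2.0 + (b_ - r_) / c_, 4.0 + (r_ - g_) / c_))
    h[nz] = (hh / 6.0) % 1.0
    return np.stack([h, s, v], axis=1)

# Hypothetical fixed threshold: rust tends toward saturated reddish-orange hues.
def corrosion_mask(rgb, h_max=0.10, s_min=0.35):
    hsv = rgb_to_hsv(rgb)
    return (hsv[:, 0] < h_max) & (hsv[:, 1] > s_min)
```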

  3. An Orientation Measurement Method Based on Hall-effect Sensors for Permanent Magnet Spherical Actuators with 3D Magnet Array

    PubMed Central

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-01-01

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method could effectively avoid friction torque and additional inertial moment existing in conventional approaches. Curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators. PMID:25342000

  4. An Orientation Measurement Method Based on Hall-effect Sensors for Permanent Magnet Spherical Actuators with 3D Magnet Array

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I.-Ming

    2014-10-01

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method could effectively avoid friction torque and additional inertial moment existing in conventional approaches. Curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators.

  5. An orientation measurement method based on Hall-effect sensors for permanent magnet spherical actuators with 3D magnet array.

    PubMed

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-10-24

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method could effectively avoid friction torque and additional inertial moment existing in conventional approaches. Curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators.
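
    The measurement principle lends itself to a compact sketch: given some forward model B(orientation) predicting the flux density at each Hall sensor, the rotor orientation can be recovered by nonlinear least squares against the measured values. The flux model below is a hypothetical stand-in (the paper derives an exponential-approximation surface fit for its 3D magnet array), and all names are assumptions.

```python
# Recover rotor orientation from Hall-sensor flux readings by least squares.
import numpy as np
from scipy.optimize import least_squares

def flux_model(angles, sensor_pos):
    """Hypothetical analytic flux density at each sensor for rotor angles
    (alpha, beta, gamma); a real model would come from the 3D magnet array."""
    a, b, g = angles
    return (np.cos(a + sensor_pos[:, 0]) * np.cos(b + sensor_pos[:, 1])
            * np.exp(-0.1 * (sensor_pos[:, 2] + g) ** 2))

def estimate_orientation(b_measured, sensor_pos, x0=np.zeros(3)):
    res = least_squares(lambda x: flux_model(x, sensor_pos) - b_measured, x0)
    return res.x   # estimated (alpha, beta, gamma)
```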

  6. A 3D scaffold for ultra-sensitive reduced graphene oxide gas sensors

    NASA Astrophysics Data System (ADS)

    Yun, Yong Ju; Hong, Won G.; Choi, Nak-Jin; Park, Hyung Ju; Moon, Seung Eon; Kim, Byung Hoon; Song, Ki-Bong; Jun, Yongseok; Lee, Hyung-Kun

    2014-05-01

    An ultra-sensitive gas sensor based on a reduced graphene oxide nanofiber mat was successfully fabricated using a combination of an electrospinning method and graphene oxide wrapping through an electrostatic self-assembly, followed by a low-temperature chemical reduction. The sensor showed excellent sensitivity to NO2 gas.An ultra-sensitive gas sensor based on a reduced graphene oxide nanofiber mat was successfully fabricated using a combination of an electrospinning method and graphene oxide wrapping through an electrostatic self-assembly, followed by a low-temperature chemical reduction. The sensor showed excellent sensitivity to NO2 gas. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr00332b

  7. Intensifying the response of distributed optical fibre sensors using 2D and 3D image restoration

    PubMed Central

    Soto, Marcelo A.; Ramírez, Jaime A.; Thévenaz, Luc

    2016-01-01

    Distributed optical fibre sensors possess the unique capability of measuring the spatial and temporal map of environmental quantities that can be of great interest for several field applications. Although existing methods for performance enhancement have enabled important progress in the field, they do not take full advantage of all information present in the measured data, still giving room for substantial improvement over the state-of-the-art. Here we propose and experimentally demonstrate an approach for performance enhancement that exploits the high level of similitude and redundancy contained on the multidimensional information measured by distributed fibre sensors. Exploiting conventional image and video processing, an unprecedented boost in signal-to-noise ratio and measurement contrast is experimentally demonstrated. The method can be applied to any white-noise-limited distributed fibre sensor and can remarkably provide a 100-fold improvement in the sensor performance with no hardware modification. PMID:26927698
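
    The core idea translates directly into code: treat the distance-time matrix of traces as an image and apply standard 2D filtering. The sketch below uses synthetic data and off-the-shelf median/Gaussian filters as stand-ins for the image and video restoration methods evaluated in the paper; all shapes and noise levels are assumptions.

```python
# Denoise a distributed fibre sensor's distance-time matrix as a 2D image.
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

rng = np.random.default_rng(0)
clean = np.tile(np.sin(np.linspace(0, 8 * np.pi, 2000)), (200, 1))  # 200 scans
noisy = clean + rng.normal(0.0, 0.5, clean.shape)                   # white noise

denoised = gaussian_filter(median_filter(noisy, size=(5, 9)), sigma=1.0)

snr_gain = np.std(noisy - clean) / np.std(denoised - clean)
print(f"empirical SNR improvement: {snr_gain:.1f}x")
```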

  8. Intensifying the response of distributed optical fibre sensors using 2D and 3D image restoration

    NASA Astrophysics Data System (ADS)

    Soto, Marcelo A.; Ramírez, Jaime A.; Thévenaz, Luc

    2016-03-01

    Distributed optical fibre sensors possess the unique capability of measuring the spatial and temporal map of environmental quantities that can be of great interest for several field applications. Although existing methods for performance enhancement have enabled important progress in the field, they do not take full advantage of all information present in the measured data, still giving room for substantial improvement over the state-of-the-art. Here we propose and experimentally demonstrate an approach for performance enhancement that exploits the high level of similitude and redundancy contained on the multidimensional information measured by distributed fibre sensors. Exploiting conventional image and video processing, an unprecedented boost in signal-to-noise ratio and measurement contrast is experimentally demonstrated. The method can be applied to any white-noise-limited distributed fibre sensor and can remarkably provide a 100-fold improvement in the sensor performance with no hardware modification.

  9. Pre-Processing of Point-Data from Contact and Optical 3D Digitization Sensors

    PubMed Central

    Budak, Igor; Vukelić, Djordje; Bračun, Drago; Hodolič, Janko; Soković, Mirko

    2012-01-01

    Contemporary 3D digitization systems employed by reverse engineering (RE) feature ever-growing scanning speeds with the ability to generate a large quantity of points in a unit of time. Although advantageous for the quality and efficiency of RE modelling, the huge number of data points can turn into a serious practical problem later on, when the CAD model is generated. In addition, 3D digitization processes are very often plagued by measuring errors, attributable to the very nature of measuring systems, various characteristics of the digitized objects and subjective errors by the operator, all of which contribute to problems in the CAD model generation process. This paper presents an integral system for the pre-processing of point data, i.e., filtering, smoothing and reduction, based on a cross-sectional RE approach. In the course of the proposed system development, major emphasis was placed on the module for point data reduction, which was designed according to a novel approach with integrated deviation analysis and fuzzy logic reasoning. The developed system was verified through its application on three case studies, on point data from objects of versatile geometries obtained by contact and laser 3D digitization systems. The obtained results demonstrate the effectiveness of the system. PMID:22368513
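
    A simple baseline for the reduction module can be sketched as uniform voxel-grid downsampling, replacing all points in a cell by their centroid. This is only a stand-in: the paper's reduction is driven by deviation analysis and fuzzy-logic reasoning rather than a fixed grid, and the voxel size below is an assumption.

```python
# Voxel-grid point reduction: one centroid per occupied voxel.
import numpy as np

def voxel_downsample(points, voxel=2.0):
    """Replace all points (Nx3, here assumed in mm) in a voxel by the centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.bincount(inverse)
    np.add.at(sums, inverse, points)       # accumulate points per voxel
    return sums / counts[:, None]          # centroids
```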

  10. Functional calibration procedure for 3D knee joint angle description using inertial sensors.

    PubMed

    Favre, J; Aissaoui, R; Jolles, B M; de Guise, J A; Aminian, K

    2009-10-16

    Measurement of the three-dimensional (3D) knee joint angle outside a laboratory is of benefit in clinical examination and therapeutic treatment comparison. Although several motion capture devices exist, there is a need for an ambulatory system that could be used in routine practice. To date, inertial measurement units (IMUs) have proven to be suitable for unconstrained measurement of knee joint differential orientation. Nevertheless, this differential orientation should be converted into three reliable and clinically interpretable angles. Thus, the aim of this study was to propose a new calibration procedure adapted for the joint coordinate system (JCS), which required only IMU data. The repeatability of the calibration procedure, as well as the errors in the measurement of the 3D knee angle during gait in comparison to a reference system, were assessed on eight healthy subjects. The new procedure, relying on active and passive movements, showed high repeatability of the mean values (offset<1 degree) and angular patterns (SD<0.3 degrees and CMC>0.9). In comparison to the reference system, this functional procedure showed high precision (SD<2 degrees and CC>0.75) and moderate accuracy (between 4.0 degrees and 8.1 degrees) for the three knee angles. The combination of the inertial-based system with the functional calibration procedure proposed here resulted in a promising tool for the measurement of the 3D knee joint angle. Moreover, this method could be adapted to measure other complex joints, such as the ankle or elbow.
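
    The differential-orientation step that precedes the calibration can be sketched as follows: combine the two segment quaternions into a relative orientation and decompose it into three anatomical-style angles. The Euler sequence and axis labels below are assumptions; the paper's functional calibration, which aligns the sensor frames to the joint coordinate system, is not reproduced.

```python
# Relative orientation of shank vs. thigh from two IMU quaternions [w, x, y, z].
import numpy as np

def q_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def q_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def knee_angles(q_thigh, q_shank):
    """Decompose the relative quaternion into three angles; the mapping to
    flexion / abduction / rotation assumes suitably aligned sensor axes."""
    q = q_mul(q_conj(q_thigh), q_shank)
    w, x, y, z = q
    flexion   = np.degrees(np.arctan2(2*(w*x + y*z), 1 - 2*(x*x + y*y)))
    abduction = np.degrees(np.arcsin(np.clip(2*(w*y - z*x), -1.0, 1.0)))
    rotation  = np.degrees(np.arctan2(2*(w*z + x*y), 1 - 2*(y*y + z*z)))
    return flexion, abduction, rotation
```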

  11. 3D-calibration of three- and four-sensor hot-film probes based on collocated sonic using neural networks

    NASA Astrophysics Data System (ADS)

    Kit, Eliezer; Liberzon, Dan

    2016-09-01

    High resolution measurements of turbulence in the atmospheric boundary layer (ABL) are critical to the understanding of physical processes and parameterization of important quantities, such as the turbulent kinetic energy dissipation. Low spatio-temporal resolution of standard atmospheric instruments, sonic anemometers and LIDARs, limits their suitability for fine-scale measurements of ABL. The use of miniature hot-films is an alternative technique, although such probes require frequent calibration, which is logistically untenable in field setups. Accurate and truthful calibration is crucial for the multi-hot-films applications in atmospheric studies, because the ability to conduct calibration in situ ultimately determines the turbulence measurements quality. Kit et al (2010 J. Atmos. Ocean. Technol. 27 23-41) described a novel methodology for calibration of hot-film probes using a collocated sonic anemometer combined with a neural network (NN) approach. An important step in the algorithm is the generation of a calibration set for NN training by an appropriate low-pass filtering of the high resolution voltages, measured by the hot-film-sensors and low resolution velocities acquired by the sonic. In Kit et al (2010 J. Atmos. Ocean. Technol. 27 23-41), Kit and Grits (2011 J. Atmos. Ocean. Technol. 28 104-10) and Vitkin et al (2014 Meas. Sci. Technol. 25 75801), the authors reported on successful use of this approach for in situ calibration, but also on the method’s limitations and restricted range of applicability. In their earlier work, a jet facility and a probe, comprised of two orthogonal x-hot-films, were used for calibration and for full dataset generation. In the current work, a comprehensive laboratory study of 3D-calibration of two multi-hot-film probes (triple- and four-sensor) using a grid flow was conducted. The probes were embedded in a collocated sonic, and their relative pitch and yaw orientation to the mean flow was changed by means of motorized
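
    The calibration-set generation step described above can be sketched compactly: low-pass filter the high-rate hot-film voltages and decimate them to the sonic rate, yielding matched voltage/velocity pairs for NN training. The sampling rates, filter order and array layouts below are illustrative assumptions, not the authors' settings.

```python
# Build a NN calibration set from kHz hot-film voltages and 20 Hz sonic data.
import numpy as np
from scipy.signal import butter, filtfilt

fs_film, fs_sonic = 2000.0, 20.0
b, a = butter(4, fs_sonic / 2.0, btype="low", fs=fs_film)   # anti-alias filter

def make_training_pairs(film_volts, sonic_uvw):
    """film_volts: (n, n_films) sampled at fs_film; sonic_uvw: (m, 3) at fs_sonic."""
    smoothed = filtfilt(b, a, film_volts, axis=0)            # zero-phase low-pass
    step = int(fs_film / fs_sonic)
    volts_lo = smoothed[::step][: len(sonic_uvw)]            # decimate to 20 Hz
    return volts_lo, sonic_uvw[: len(volts_lo)]              # NN inputs, targets
```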

  12. 3D-calibration of three- and four-sensor hot-film probes based on collocated sonic using neural networks

    NASA Astrophysics Data System (ADS)

    Kit, Eliezer; Liberzon, Dan

    2016-09-01

    High resolution measurements of turbulence in the atmospheric boundary layer (ABL) are critical to the understanding of physical processes and parameterization of important quantities, such as the turbulent kinetic energy dissipation. Low spatio-temporal resolution of standard atmospheric instruments, sonic anemometers and LIDARs, limits their suitability for fine-scale measurements of ABL. The use of miniature hot-films is an alternative technique, although such probes require frequent calibration, which is logistically untenable in field setups. Accurate and truthful calibration is crucial for the multi-hot-films applications in atmospheric studies, because the ability to conduct calibration in situ ultimately determines the turbulence measurements quality. Kit et al (2010 J. Atmos. Ocean. Technol. 27 23–41) described a novel methodology for calibration of hot-film probes using a collocated sonic anemometer combined with a neural network (NN) approach. An important step in the algorithm is the generation of a calibration set for NN training by an appropriate low-pass filtering of the high resolution voltages, measured by the hot-film-sensors and low resolution velocities acquired by the sonic. In Kit et al (2010 J. Atmos. Ocean. Technol. 27 23–41), Kit and Grits (2011 J. Atmos. Ocean. Technol. 28 104–10) and Vitkin et al (2014 Meas. Sci. Technol. 25 75801), the authors reported on successful use of this approach for in situ calibration, but also on the method’s limitations and restricted range of applicability. In their earlier work, a jet facility and a probe, comprised of two orthogonal x-hot-films, were used for calibration and for full dataset generation. In the current work, a comprehensive laboratory study of 3D-calibration of two multi-hot-film probes (triple- and four-sensor) using a grid flow was conducted. The probes were embedded in a collocated sonic, and their relative pitch and yaw orientation to the mean flow was changed by means of

  13. Development and validation of a 3D-printed interfacial stress sensor for prosthetic applications.

    PubMed

    Laszczak, P; Jiang, L; Bader, D L; Moser, D; Zahedi, S

    2015-01-01

    A novel capacitance-based sensor designed for monitoring mechanical stresses at the stump-socket interface of lower-limb amputees is described. It provides a practical means of measuring pressure and shear stresses simultaneously. In particular, it comprises a flexible frame (20 mm × 20 mm) with a thickness of 4 mm. By employing rapid prototyping technology in its fabrication, it offers a low-cost and versatile solution, with the capability of adopting bespoke shapes of lower-limb residua. The sensor was first analysed using finite element analysis (FEA) and then evaluated using lab-based electromechanical tests. The results validate that the sensor is capable of monitoring both pressure and shear at stresses up to 350 kPa and 80 kPa, respectively. A post-signal-processing model is developed to deduce pressure and shear stresses, respectively. The effective separation of pressure and shear signals can be potentially advantageous for sensor calibration in clinical applications. The sensor also demonstrates high linearity (approx. 5-8%) and high pressure (approx. 1.3 kPa) and shear (approx. 0.6 kPa) stress resolution performance. Accordingly, the sensor offers the potential for exploitation as an assistive tool both to evaluate prosthetic socket fitting in clinical settings and to alert amputees in home settings of excessive loading at the stump-socket interface, effectively preventing stump tissue breakdown at an early stage.

  14. Development and validation of a 3D-printed interfacial stress sensor for prosthetic applications.

    PubMed

    Laszczak, P; Jiang, L; Bader, D L; Moser, D; Zahedi, S

    2015-01-01

    A novel capacitance-based sensor designed for monitoring mechanical stresses at the stump-socket interface of lower-limb amputees is described. It provides a practical means of measuring pressure and shear stresses simultaneously. In particular, it comprises a flexible frame (20 mm × 20 mm) with a thickness of 4 mm. By employing rapid prototyping technology in its fabrication, it offers a low-cost and versatile solution, with the capability of adopting bespoke shapes of lower-limb residua. The sensor was first analysed using finite element analysis (FEA) and then evaluated using lab-based electromechanical tests. The results validate that the sensor is capable of monitoring both pressure and shear at stresses up to 350 kPa and 80 kPa, respectively. A post-signal-processing model is developed to deduce pressure and shear stresses, respectively. The effective separation of pressure and shear signals can be potentially advantageous for sensor calibration in clinical applications. The sensor also demonstrates high linearity (approx. 5-8%) and high pressure (approx. 1.3 kPa) and shear (approx. 0.6 kPa) stress resolution performance. Accordingly, the sensor offers the potential for exploitation as an assistive tool both to evaluate prosthetic socket fitting in clinical settings and to alert amputees in home settings of excessive loading at the stump-socket interface, effectively preventing stump tissue breakdown at an early stage. PMID:25455164

  15. Separating Leaves from Trunks and Branches with Dual-Wavelength Terrestrial Lidar Scanning: Improving Canopy Structure Characterization in 3-D Space

    NASA Astrophysics Data System (ADS)

    Li, Z.; Strahler, A. H.; Schaaf, C.; Howe, G.; Martel, J.; Hewawasam, K.; Douglas, E. S.; Chakrabarti, S.; Cook, T.; Paynter, I.; Saenz, E.; Wang, Z.; Yang, X.; Yao, T.; Zhao, F.; Woodcock, C.; Jupp, D.; Schaefer, M.; Culvenor, D.; Newnham, G.; Lowell, J.

    2013-12-01

    Leaf area index (LAI) is an important parameter characterizing forest structure, used in models regulating the exchange of carbon, water and energy between the land and the atmosphere. However, optical methods in common use cannot separate leaf area from the area of upper trunks and branches, and thus retrieve only plant area index (PAI), which is adjusted to LAI using an appropriate empirical woody-to-total index. An additional problem is that the angular distributions of leaf normals and normals to woody surfaces are quite different, and thus leafy and woody components project quite different areas with varying zenith angle of view. This effect also causes error in LAI retrieval using optical methods. Full-waveform scans at both the NIR (1064 nm) and SWIR (1548 nm) wavelengths from the new terrestrial Lidar, the Dual-Wavelength Echidna Lidar (DWEL), which pulses in both wavelengths simultaneously, easily separate returns of leaves from trunks and branches in 3-D space. In DWEL scans collected at two different forest sites, Sierra National Forest in June 2013 and Brisbane Karawatha Forest Park in July 2013, the power returned from leaves is similar to power returned from trunks/branches at the NIR wavelength, whereas the power returned from leaves is much lower (only about half as large) at the SWIR wavelength. At the SWIR wavelength, the leaf scattering is strongly attenuated by liquid water absorption. Normalized difference index (NDI) images from the waveform mean intensity at the two wavelengths demonstrate a clear contrast between leaves and trunks/branches. The attached image shows NDI from a part of a scan of an open red fir stand in the Sierra National Forest. Leaves appear light, while other objects are darker. Dual-wavelength point clouds generated from the full waveform data show weaker returns from leaves than from trunks/branches. A simple threshold classification of the NDI value of each scattering point readily separates leaves from trunks and
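
    The two-wavelength separation reduces to a short computation: a normalized difference index of the NIR and SWIR returns, thresholded per point. The sketch below is a minimal illustration; the threshold value is illustrative rather than taken from the DWEL processing.

```python
# Threshold classification of per-point NDI from dual-wavelength returns.
import numpy as np

def classify_points(p_nir, p_swir, ndi_threshold=0.2):
    """p_nir, p_swir: per-point return intensities at 1064 nm and 1548 nm."""
    ndi = (p_nir - p_swir) / (p_nir + p_swir)
    # Leaves: SWIR return roughly halved by water absorption -> NDI ~ 0.33;
    # wood: similar returns at both wavelengths -> NDI ~ 0.
    return np.where(ndi > ndi_threshold, "leaf", "wood"), ndi
```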

  16. A nano-microstructured artificial-hair-cell-type sensor based on topologically graded 3D carbon nanotube bundles.

    PubMed

    Yilmazoglu, O; Yadav, S; Cicek, D; Schneider, J J

    2016-09-01

    A design for a unique artificial-hair-cell-type sensor (AHCTS) based entirely on 3D-structured, vertically aligned carbon nanotube (CNT) bundles is introduced. Standard microfabrication techniques were used for the straightforward micro-nano integration of vertically aligned carbon nanotube arrays composed of low-layer multi-walled CNTs (two to six layers). The mechanical properties of the carbon nanotube bundles were intensively characterized with regard to various substrates and CNT morphology, e.g. bundle height. The CNT bundles display excellent flexibility and mechanical stability for lateral bending, showing high tear resistance. The integrated 3D CNT sensor can detect three-dimensional forces using the deflection or compression of a central CNT bundle, which changes the contact resistance to the shorter neighboring bundles. The complete sensor system can be fabricated using a single chemical vapor deposition (CVD) process step. Moreover, sophisticated external contacts to the surroundings are not necessary for signal detection. No additional sensors or external bias for signal detection are required. This simplifies the miniaturization and the integration of these nanostructures for future microsystem set-ups. The new nanostructured sensor system exhibits an average sensitivity of 2100 ppm μm(-1) (relative resistance change per micron of individual CNT bundle tip deflection) in the linear regime. Furthermore, experiments have shown highly sensitive piezoresistive behavior with an electrical resistance decrease of up to ∼11% at 50 μm mechanical deflection. The detection sensitivity is as low as 1 μm of deflection, and thus highly comparable with the tactile hair sensors of insects, having typical thresholds on the order of 30-50 μm. The AHCTS can easily be adapted and applied as a flow, tactile or acceleration sensor as well as a vibration sensor. Potential applications of the latter might come up in artificial cochlear systems. In

  17. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    PubMed

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255

  18. Real-time processor for 3-D information extraction from image sequences by a moving area sensor

    NASA Astrophysics Data System (ADS)

    Hattori, Tetsuo; Nakada, Makoto; Kubo, Katsumi

    1990-11-01

    This paper presents a real-time image processor for obtaining three-dimensional (3-D) distance information from the image sequence produced by a moving area sensor. The processor has been developed for an automated visual inspection robot system (pilot system) with an autonomous vehicle which moves around avoiding obstacles in a power plant and checks whether there are defects or abnormal phenomena such as steam leakage from valves. The processor detects the distance between objects in the input image and the area sensor, deciding corresponding points (pixels) between the first input image and the last one by tracing the loci of edges through the sequence of sixteen images. The hardware which plays an important role consists of two kinds of boards: mapping boards which can transform the X-coordinate (horizontal direction) and Y-coordinate (vertical direction) for each horizontal row of images, and a regional labelling board which extracts the connected loci of edges through the image sequence. This paper also shows the whole processing flow of the distance detection algorithm. Since the processor can continuously process images (512x512x8 [pixels x bits per frame]) at the NTSC video rate, it takes about 0.7 [sec] to measure the 3-D distance from sixteen input images. The measurement error is at most 10 percent when the area sensor moves laterally over a range of 20 [centimeters] and when the measured scene, including complicated background, is at a distance of 4 [meters] from

  19. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    PubMed

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.

  20. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255
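
    The object-assignment step can be sketched with a crude surrogate for surface feature histograms: describe each segmented object by the distribution of its local normal angles and train an off-the-shelf SVM on labelled Kinect objects. The descriptor below is a simplified stand-in for the richer histograms used in the paper; all names are assumptions.

```python
# Crude surface descriptor + SVM matching of Kinect and David objects.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.svm import SVC

def normal_angle_histogram(points, k=20, bins=16):
    """Histogram of angles between local surface normals and the vertical axis."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    angles = []
    for nb in idx:
        nbrs = points[nb] - points[nb].mean(axis=0)
        n = np.linalg.svd(nbrs, full_matrices=False)[2][-1]   # local normal
        angles.append(np.arccos(abs(n[2])))
    h, _ = np.histogram(angles, bins=bins, range=(0, np.pi / 2), density=True)
    return h

# Hypothetical usage with labelled Kinect segments and one David scan:
# clf = SVC().fit([normal_angle_histogram(o) for o in kinect_objects], labels)
# match = clf.predict([normal_angle_histogram(david_points)])
```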

  1. Integrating Dynamic Data and Sensors with Semantic 3D City Models in the Context of Smart Cities

    NASA Astrophysics Data System (ADS)

    Chaturvedi, K.; Kolbe, T. H.

    2016-10-01

    Smart cities provide effective integration of human, physical and digital systems operating in the built environment. The advancements in city and landscape models, sensor web technologies, and simulation methods play a significant role in city analyses and in improving the quality of life of citizens and the governance of cities. Semantic 3D city models can provide substantial benefits and can become a central information backbone for smart city infrastructures. However, current-generation semantic 3D city models are static in nature and do not support dynamic properties and sensor observations. In this paper, we propose a new concept called Dynamizer, allowing highly dynamic data to be represented and providing a method for injecting dynamic variations of city object properties into the static representation. The approach also provides the direct capability to model complex patterns based on statistics and general rules, as well as real-time sensor observations. The concept is implemented as an Application Domain Extension for the CityGML standard. However, it could also be applied to other GML-based application schemas, including the European INSPIRE data themes and national standards for topography and cadasters like the British Ordnance Survey Mastermap or the German cadaster standard ALKIS.

  2. A Robust MEMS Based Multi-Component Sensor for 3D Borehole Seismic Arrays

    SciTech Connect

    Paulsson Geophysical Services

    2008-03-31

    The objective of this project was to develop, prototype and test a robust multi-component sensor that combines both fiber-optic and MEMS technology for use in a borehole seismic array. The use of such FOMEMS-based sensors allows a dramatic increase in the number of sensors that can be deployed simultaneously in a borehole seismic array. Therefore, denser sampling of the seismic wave field can be afforded, which in turn allows us to efficiently and adequately sample P-waves as well as S-waves for high-resolution imaging purposes. Design, packaging and integration of the multi-component sensors and deployment system will target a maximum operating temperature of 350-400 F and a maximum pressure of 15000-25000 psi, thus allowing operation under conditions encountered in deep gas reservoirs. This project aimed at using existing pieces of deployment technology as well as MEMS and fiber-optic technology. A sensor design and analysis study has been carried out, and a laboratory prototype of an interrogator for a robust borehole seismic array system has been assembled and validated.

  3. 3D-information fusion from very high resolution satellite sensors

    NASA Astrophysics Data System (ADS)

    Krauss, T.; d'Angelo, P.; Kuschk, G.; Tian, J.; Partovi, T.

    2015-04-01

    In this paper we show the pre-processing and potential for environmental applications of very high resolution (VHR) satellite stereo imagery such as that from WorldView-2 or Pléiades, with ground sampling distances (GSD) of half a metre to a metre. To process such data, first a dense digital surface model (DSM) has to be generated. Afterwards, a digital terrain model (DTM) representing the ground and a so-called normalized digital elevation model (nDEM) representing off-ground objects are derived from it. Combining these elevation-based data with a spectral classification allows detection and extraction of objects from the satellite scenes. Besides object extraction, the DSM and DTM can directly be used for simulation and monitoring of environmental issues. Examples are the simulation of flooding, building-volume and population estimation, simulation of noise from roads, wave propagation for cellphones, wind and light for estimating renewable energy sources, 3D change detection, earthquake preparedness and crisis relief, urban development and sprawl of informal settlements, and much more. Also outside of urban areas, volume information brings literally a new dimension to Earth observation tasks, such as volume estimation of forests and illegal logging, volume of (illegal) open-pit mining activities, estimation of flooding or tsunami risks, dike planning, etc. In this paper we present the pre-processing from the original level-1 satellite data to digital surface models (DSMs), corresponding VHR ortho images and derived digital terrain models (DTMs). From these components we present how monitoring and decision-fusion-based 3D change detection can be realized by using different acquisitions. The results are analyzed and assessed to derive quality parameters for the presented method. Finally, the usability of 3D information fusion from VHR satellite imagery is discussed and evaluated.
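
    The derivation of off-ground objects is a per-cell subtraction, sketched below under the assumption of co-registered DSM and DTM rasters; the height threshold and GSD are illustrative.

```python
# nDEM = DSM - DTM; thresholding the nDEM yields an off-ground object mask.
import numpy as np

def ndem_objects(dsm, dtm, min_height=2.5):
    """dsm, dtm: 2D elevation grids in metres on the same raster."""
    ndem = dsm - dtm                       # heights above ground
    objects = ndem >= min_height           # buildings, trees, etc.
    return ndem, objects

# Rough building-volume estimate for a 0.5 m GSD raster:
# volume = ndem[objects].sum() * 0.5**2   (cell area times object height)
```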

  4. Relative stereo 3-D vision sensor and its application for nursery plant transplanting

    NASA Astrophysics Data System (ADS)

    Hata, Seiji; Hayashi, Junichiro; Takahashi, Satoru; Hojo, Hirotaka

    2007-10-01

    Clone nursery plant production is one of the important applications of bio-technology. Most bio-production processes are highly automated, but the transplanting of small nursery plants cannot be automated easily because their shapes are not uniform. In this research, a transplanting robot system for clone nursery plant production is under development. A 3-D vision system using a relative stereo method detects the shapes and positions of small nursery plants through transparent vessels. A force-controlled robot picks up the plants and transplants them into vessels with artificial soil.

  5. 3D integration technology for sensor application using less than 5 μm-pitch gold cone-bump connection

    NASA Astrophysics Data System (ADS)

    Motoyoshi, M.; Miyoshi, T.; Ikebec, M.; Arai, Y.

    2015-03-01

    Three-dimensional (3D) integrated circuit (IC) technology is an effective solution to reduce the manufacturing costs of advanced two-dimensional (2D) large-scale integration (LSI) while ensuring equivalent device performance and functionalities. This technology allows a new device architecture using stacked detector/sensor devices with a small dead sensor area and high-speed operation that facilitates hyper-parallel data processing. In pixel detectors or focal-plane sensor devices, each pixel area must accommodate many transistors without increasing the pixel size. Consequently, many methods to realize 3D-LSI devices have been developed to meet this requirement by focusing on the unit processes of 3D-IC technology, such as through-silicon via formation and electrical and mechanical bonding between tiers of the stack. The bonding process consists of several unit processes such as bump or metal contact formation, chip/wafer alignment, chip/wafer bonding, and underfill formation; many process combinations have been reported. Our research focuses on a versatile bonding technology for silicon LSI, compound semiconductor, and microelectromechanical system devices at temperatures of less than 200 °C for heterogeneous integration. A gold (Au) cone bump formed by nanoparticle deposition is one of the promising candidates for this purpose. This paper presents the experimental result of a fabricated prototype with 3-μm-diameter Au cone-bump connections with adhesive injection, and compares it with that of an indium microbump (μ-bump). The resistance of the 3-μm-diameter Au cone bump is approximately 6 Ω. We also investigated the influence of stress caused by the bump junction on the MOS characteristics.

  6. A model and simulation to predict 3D imaging LADAR sensor systems performance in real-world type environments

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Dippel, George F.; Russo, Leonard E.

    2006-08-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR Imaging Sensor systems. Accurate methods to model and simulate performance from 3D LADAR systems have been lacking, relying upon either single pixel LADAR performance or extrapolating from passive detection FPA performance. The model and simulation here is developed expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) detector noise figure; 4) detector gain; 5) target attributes; 6) atmospheric transmission; 7) atmospheric backscatter; 8) atmospheric turbulence; 9) obscurants; 10) obscurant path length, and; 11) platform motion. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling and accounting for the non-uniformity of each individual pixel. Here, noise sources and gain are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function resulting in a different gain for each pixel within the array. In this manner very accurate performance is determined pixel-by-pixel for the entire array. Model outputs are 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array.
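
    The per-pixel gain model described above amounts to inverse-CDF sampling of a normal gain distribution, one uniform random draw per pixel. The sketch below illustrates that step only; the array format and gain statistics are assumed, not taken from the paper.

```python
# Per-pixel detector gain via inverse-CDF sampling of a normal distribution.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
array_shape = (128, 128)                  # detector format (assumed)
u = rng.uniform(size=array_shape)         # one draw compared against the CDF
pixel_gain = norm.ppf(u, loc=100.0, scale=8.0)   # a different gain per pixel

# Each pixel's signal-to-noise then uses its own gain, e.g.:
# snr = (pixel_gain * signal_e) / np.sqrt(pixel_gain**2 * noise_var + read_var)
```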

  7. Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors

    PubMed Central

    Ge, Song; Fan, Guoliang

    2015-01-01

    We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673

  8. Lidar Sensor Performance in Closed-Loop Flight Testing of the Morpheus Rocket-Propelled Lander to a Lunar-Like Hazard Field

    NASA Technical Reports Server (NTRS)

    Roback, Vincent E.; Pierrottet, Diego F.; Amzajerdian, Farzin; Barnes, Bruce W.; Hines, Glenn D.; Petway, Larry B.; Brewster, Paul F.; Kempton, Kevin S.; Bulyshev, Alexander E.

    2015-01-01

    For the first time, a suite of three lidar sensors has been used in flight to scan a lunar-like hazard field, identify a safe landing site, and, in concert with an experimental Guidance, Navigation, and Control (GN&C) system, guide the Morpheus autonomous, rocket-propelled, free-flying test bed to a safe landing on the hazard field. The lidar sensors and GN&C system are part of the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project, which has been seeking to develop a system capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The 3-D imaging flash lidar is a second-generation, compact, real-time, air-cooled instrument developed from a number of cutting-edge components from industry and NASA and is used as part of the ALHAT Hazard Detection System (HDS) to scan the hazard field and build a 3-D Digital Elevation Map (DEM) in near-real time for identifying safe sites. The flash lidar is capable of identifying a 30 cm hazard from a slant range of 1 km with its 8 cm range precision at 1 sigma. The flash lidar is also used in Hazard Relative Navigation (HRN) to provide position updates down to a 250 m slant range to the ALHAT navigation filter as it guides Morpheus to the safe site. The Doppler Lidar system has been developed within NASA to provide velocity measurements with an accuracy of 0.2 cm/sec and range measurements with an accuracy of 17 cm, both from a maximum range of 2,200 m to a minimum range of several meters above the ground. The Doppler Lidar's measurements are fed into the ALHAT navigation filter to provide lander guidance to the safe site. The Laser Altimeter, also developed within NASA, provides range measurements with an accuracy of 5 cm from a maximum operational range of 30 km down to 1 m and, being a separate sensor from the flash lidar, can provide range along a separate vector. The Laser Altimeter measurements are also

  9. Reducing the influence of direct reflection on return signal detection in a 3D imaging lidar system by rotating the polarizing beam splitter.

    PubMed

    Wang, Chunhui; Lee, Xiaobao; Cui, Tianxiang; Qu, Yang; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-03-01

    The direction rule of the laser beam traveling through a deflected polarizing beam splitter (PBS) cube is derived. It reveals that, due to the influence of end-face reflection of the PBS at the detector side, the emergent beam coming from the incident beam parallels the direction of the original case without rotation, with only a very small translation interval between them. The formula for the translation interval is also given. Meanwhile, the emergent beam from the return signal at the detector side deflects at an angle twice that of the PBS rotation angle. The correctness has been verified by an experiment. The intensity transmittance of the emergent beam propagating in the PBS changes very little if the rotation angle is less than 35 deg. In a 3D imaging lidar system, rotating the PBS cube by an angle separates the direction of the return signal optical axis from that of the origin, which can decrease or eliminate the influence of direct reflection caused by the prism end face on target return signal detection. This has been checked by experiment. PMID:26974613

  10. Development of a novel pixel-level signal processing chain for fast readout 3D integrated CMOS pixel sensors

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Torheim, O.; Hu-Guo, C.; Degerli, Y.; Hu, Y.

    2013-03-01

    In order to overcome the inherent readout speed limitation of traditional 2D CMOS pixel sensors operated in rolling-shutter readout, a parallel readout architecture has been developed by taking advantage of 3D integration technologies. Since the rows of the pixel array are zero-suppressed simultaneously instead of sequentially, a frame readout time of a few microseconds is expected, allowing the sensor to cope with the high hit rates foreseen in future collider experiments. In order to demonstrate the pixel readout functionality of such a sensor, a 2D proof-of-concept chip including a novel pixel-level signal processing chain was designed and fabricated in a 0.13 μm CMOS technology. The functionalities of this chip have been verified through experimental characterization.
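
    As a software analogy of the zero-suppression described above (the chip performs it per row in parallel logic, whereas this sketch scans a whole frame at once), hit pixels above a threshold are reduced to sparse (row, column, amplitude) records:

```python
import numpy as np

def zero_suppress(frame, threshold):
    """Sparse readout: keep only (row, col, amplitude) of above-threshold hits.

    Illustrative software sketch only; on-chip zero-suppression is done
    row-wise in parallel hardware.
    """
    rows, cols = np.nonzero(frame > threshold)
    return list(zip(rows.tolist(), cols.tolist(), frame[rows, cols].tolist()))

frame = np.random.poisson(0.01, size=(64, 64)) * 120   # sparse hit pattern
print(zero_suppress(frame, threshold=50)[:5])
```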

  11. Characterization of the first double-sided 3D radiation sensors fabricated at FBK on 6-inch silicon wafers

    NASA Astrophysics Data System (ADS)

    Sultan, D. M. S.; Mendicino, R.; Boscardin, M.; Ronchin, S.; Zorzi, N.; Dalla Betta, G.-F.

    2015-12-01

    Following 3D pixel sensor production for the ATLAS Insertable B-Layer, the Fondazione Bruno Kessler (FBK) fabrication facility has recently been upgraded to process 6-inch wafers. In 2014, a test batch was fabricated to check for possible issues related to this upgrade. While maintaining a double-sided fabrication technology, some process modifications have been investigated. We report here on the technology and the design of this batch, and present selected results from the electrical characterization of sensors and test structures. Notably, the breakdown voltage is shown to exceed 200 V before irradiation, much higher than in earlier productions, a promising margin in terms of radiation hardness for forthcoming productions aimed at High-Luminosity LHC upgrades.

  12. Reverse engineering physical models employing a sensor integration between 3D stereo detection and contact digitization

    NASA Astrophysics Data System (ADS)

    Chen, Liang-Chia; Lin, Grier C. I.

    1997-12-01

    A vision-driven automatic digitization process for free-form surface reconstruction in the reverse engineering of physical models has been developed, using a coordinate measuring machine (CMM) equipped with a touch-trigger probe and a CCD camera. The process integrates 3D stereo detection, data filtering, Delaunay triangulation and adaptive surface digitization into a single surface-reconstruction process. With this approach, surface reconstruction can be carried out automatically and accurately. Least-squares B-spline surface models with controlled digitization accuracy can be generated for further use in product design and manufacturing processes. One industrial application indicates that the approach is feasible and that the processing time required in the reverse engineering process can be reduced by more than 85%.
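
    Two of the named processing steps, Delaunay triangulation and least-squares B-spline surface fitting, can be sketched with SciPy on hypothetical digitized points (the points, smoothing factor and evaluation location below are illustrative, not the article's data):

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import SmoothBivariateSpline

# Hypothetical digitized surface points (x, y, z) from probe/stereo stages
pts = np.random.rand(200, 3)

tri = Delaunay(pts[:, :2])          # triangulate in the XY parameter plane
# Least-squares (smoothing) B-spline fit of z over (x, y); s is a knob
surf = SmoothBivariateSpline(pts[:, 0], pts[:, 1], pts[:, 2], s=0.5)
print(len(tri.simplices), surf.ev(0.5, 0.5))
```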

  13. A method of improving the dynamic response of 3D force/torque sensors

    NASA Astrophysics Data System (ADS)

    Osypiuk, Rafał; Piskorowski, Jacek; Kubus, Daniel

    2016-02-01

    This paper draws attention to the adverse dynamic properties of the filters implemented in commercial force/torque sensors, which are increasingly used in industrial robotics. To remedy the problem, it is proposed to employ a time-variant filter with appropriately modulated parameters, owing to which it is possible to suppress the amplitude of the transient response and, at the same time, to increase the pulsation of the damped oscillations; this improves the dynamic properties by reducing the duration of transients. This property plays a key role in force control and in the fundamental problem of a robot establishing contact with a rigid environment. The parametric filters have been verified experimentally and compared with the filters available in force/torque sensors manufactured by JR3. The obtained results clearly indicate the advantages of the proposed solution, which may be an interesting alternative to classical filtering methods.
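
    A minimal sketch of the time-variant idea: a second-order low-pass whose cutoff is ramped upward after a transient, so the initial overshoot is bounded while the settling time stays short. The schedule and all parameters are assumptions, not the authors' design or the JR3 filters.

```python
import numpy as np

def variant_lowpass(x, fs, f0=5.0, f1=50.0, tau=0.05):
    """Second-order low-pass whose cutoff ramps from f0 to f1 (Hz).

    Parameter-modulated (time-variant) filter sketch: the cutoff starts
    low to bound the transient amplitude and rises with time constant tau
    to shorten settling.  Assumes fs is well above f1 (explicit Euler).
    """
    y = np.zeros_like(x, dtype=float)
    v = 0.0                       # first-derivative state
    yi = float(x[0])
    for i, xi in enumerate(x):
        fc = f1 - (f1 - f0) * np.exp(-i / (fs * tau))
        wn = 2 * np.pi * fc       # instantaneous natural frequency
        zeta = 1.0                # critically damped at every instant
        a = wn**2 * (xi - yi) - 2 * zeta * wn * v
        v += a / fs
        yi += v / fs
        y[i] = yi
    return y
```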

  14. The Bubble Box: Towards an Automated Visual Sensor for 3D Analysis and Characterization of Marine Gas Release Sites.

    PubMed

    Jordt, Anne; Zelenka, Claudius; von Deimling, Jens Schneider; Koch, Reinhard; Köser, Kevin

    2015-01-01

    Several acoustic and optical techniques have been used for characterizing natural and anthropogenic gas leaks (carbon dioxide, methane) from the ocean floor. Here, single-camera based methods for bubble stream observation have become an important tool, as they help estimate flux and bubble sizes under certain assumptions. However, they record only a projection of a bubble into the camera and therefore cannot capture the full 3D shape, which is particularly important for larger, non-spherical bubbles. The unknown distance of the bubble to the camera (making it appear larger or smaller than expected) as well as refraction at the camera interface introduce extra uncertainties. In this article, we introduce our wide-baseline stereo-camera deep-sea sensor, the bubble box, which overcomes these limitations, as it observes bubbles from two orthogonal directions using calibrated cameras. Besides the setup and the hardware of the system, we discuss appropriate calibration and the automated processing steps (deblurring, detection, tracking, and 3D fitting) that are crucial to arrive at a 3D ellipsoidal shape and rise speed for each bubble. The obtained values for single bubbles can be aggregated into statistical bubble size distributions or fluxes for extrapolation based on diffusion and dissolution models and large-scale acoustic surveys. We demonstrate and evaluate the wide-baseline stereo measurement model using a controlled test setup with ground-truth information. PMID:26690168

  17. Modular optical topometric sensor for 3D acquisition of human body surfaces and long-term monitoring of variations.

    PubMed

    Bischoff, Guido; Böröcz, Zoltan; Proll, Christian; Kleinheinz, Johannes; von Bally, Gert; Dirksen, Dieter

    2007-08-01

    Optical topometric 3D sensors such as laser scanners and fringe projection systems allow detailed digital acquisition of human body surfaces. For many medical applications, however, not only the current shape is important, but also its changes, e.g., in the course of surgical treatment. In such cases, time delays of several months between subsequent measurements frequently occur. A modular 3D coordinate measuring system based on the fringe projection technique is presented that allows 3D coordinate acquisition including calibrated color information, as well as the detection and visualization of deviations between subsequent measurements. In addition, parameters describing the symmetry of body structures are determined. The quantitative results of the analysis may be used as a basis for objective documentation of surgical therapy. The system is designed in a modular way, and thus, depending on the object of investigation, two or three cameras with different capabilities in terms of resolution and color reproduction can be utilized to optimize the set-up.

  18. Discriminating crop, weeds and soil surface with a terrestrial LIDAR sensor.

    PubMed

    Andújar, Dionisio; Rueda-Ayala, Victor; Moreno, Hugo; Rosell-Polo, Joan Ramón; Escolá, Alexandre; Valero, Constantino; Gerhards, Roland; Fernández-Quintanilla, César; Dorado, José; Griepentrog, Hans-Werner

    2013-10-29

    In this study, the accuracy and performance of a light detection and ranging (LIDAR) sensor were evaluated using distance and reflection measurements, with the aim of detecting and discriminating maize plants and weeds from the soil surface. The study continues previous work carried out in a maize field in Spain with a LIDAR sensor using a single index, the height profile; the current system uses a combination of the two indices, height and reflection. The experiment was carried out in a maize field at growth stage 12-14, at 16 different locations selected to represent the widest possible density of the following weeds: Echinochloa crus-galli (L.) P.Beauv., Lamium purpureum L., Galium aparine L. and Veronica persica Poir. A terrestrial LIDAR sensor was mounted on a tripod pointing to the inter-row area, with its horizontal axis and the field of view pointing vertically downwards to the ground, scanning a vertical plane with the potential presence of vegetation. Immediately after the LIDAR data acquisition (distances and reflection measurements), actual plant heights were estimated using an appropriate methodology; for that purpose, digital images were taken of each sampled area. The data showed a high correlation between LIDAR-measured heights and actual plant heights (R2 = 0.75). Binary logistic regression between weed presence/absence and the sensor readings (LIDAR height and reflection values) was used to validate the accuracy of the sensor, permitting the discrimination of vegetation from the ground with an accuracy of up to 95%. In addition, a Canonical Discriminant Analysis (CDA) was able to discriminate mostly between soil and vegetation and, to a far lesser extent, between crop and weeds. The studied methodology emerges as a good system for weed detection which, in combination with other principles, such as vision-based technologies, could improve the efficiency and accuracy of herbicide spraying.
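
    The statistical step can be sketched in a few lines: fit a binary logistic regression separating soil from vegetation on LIDAR height and reflection readings. The training rows below are hypothetical illustrative values, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per LIDAR reading,
# columns = [measured height (cm), reflection intensity]
X = np.array([[0.5, 120], [1.0, 115], [12.0, 60], [25.0, 55],
              [0.8, 118], [18.0, 58], [0.3, 125], [30.0, 50]], float)
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])   # 0 = bare soil, 1 = vegetation

clf = LogisticRegression().fit(X, y)
print(clf.predict([[15.0, 62]]))          # -> vegetation
print(clf.predict_proba([[0.6, 121]]))    # -> high P(soil)
```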

  1. An analogue contact probe using a compact 3D optical sensor for micro/nano coordinate measuring machines

    NASA Astrophysics Data System (ADS)

    Li, Rui-Jun; Fan, Kuang-Chao; Miao, Jin-Wei; Huang, Qiang-Xian; Tao, Sheng; Gong, Er-min

    2014-09-01

    This paper presents a new high-precision analogue contact probe based on a compact 3D optical sensor. The sensor comprises an autocollimator and a polarizing Michelson interferometer, which can simultaneously detect two angles and one displacement of a plane mirror. In this probe system, a tungsten stylus with a ruby tip-ball is attached to a floating plate, which is supported by four V-shaped leaf springs fixed to the outer case. When a contact force is applied to the tip, the leaf springs deform elastically and the plane mirror mounted on the floating plate is displaced. The force-motion characteristics of this probe were investigated and optimum parameters were obtained under the constraint of the allowable physical size of the probe. Simulation results show that the probe response is uniform in 3D and its contact force gradient is within 1 mN/µm. Experimental results indicate that the probe has 1 nm resolution, a ±10 µm measuring range in the X-Y plane, a 10 µm measuring range in the Z direction, and a measuring standard deviation within 30 nm. The feasibility of the probe has been preliminarily verified by testing the flatness and step height of high-precision gauge blocks.

  2. Use of a Terrestrial LIDAR Sensor for Drift Detection in Vineyard Spraying

    PubMed Central

    Gil, Emilio; Llorens, Jordi; Llop, Jordi; Fàbregas, Xavier; Gallart, Montserrat

    2013-01-01

    The use of a scanning Light Detection and Ranging (LIDAR) system to characterize drift during pesticide application is described. The LIDAR system is compared with an ad hoc test bench used to quantify the amount of spray liquid moving beyond the canopy. Two sprayers were used during the field test: a conventional mist blower at two air flow rates (27,507 and 34,959 m3·h−1) equipped with two different nozzle types (conventional and air injection), and a multi-row sprayer with individually oriented air outlets. A simple linear model was used to predict spray deposit from the LIDAR measurements and to compare it with the deposits measured on the test bench. Results showed differences in the effectiveness of the LIDAR sensor depending on the sprayed droplet size (nozzle type) and air intensity. For the conventional mist blower at the low air flow rate, the sensor detected a greater number of drift drops, obtaining a better correlation (r = 0.91; p < 0.01) than in the case of coarse droplets or the high air flow rate. In the case of the multi-row sprayer, drift deposition on the test bench was very poor. In general, the LIDAR sensor offers a simple and practical technique for establishing the potential drift of a specific spray situation and an adequate alternative for the evaluation of drift potential. PMID:23282583
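
    The linear prediction step can be illustrated directly: fit a first-order polynomial relating an integrated LIDAR drift signal to the bench-measured deposit and read off the correlation. The paired observations below are hypothetical, with illustrative units.

```python
import numpy as np

# Hypothetical paired observations: LIDAR-integrated drift signal vs.
# deposit collected on the test bench
lidar_counts = np.array([120, 260, 340, 480, 610, 750], float)
bench_deposit = np.array([0.8, 1.9, 2.4, 3.6, 4.4, 5.5], float)  # uL/cm2

slope, intercept = np.polyfit(lidar_counts, bench_deposit, 1)
r = np.corrcoef(lidar_counts, bench_deposit)[0, 1]
print(f"deposit = {slope:.4f} * counts + {intercept:.3f}, r = {r:.2f}")
```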

  5. Integrating eye tracking and motion sensor on mobile phone for interactive 3D display

    NASA Astrophysics Data System (ADS)

    Sun, Yu-Wei; Chiang, Chen-Kuo; Lai, Shang-Hong

    2013-09-01

    In this paper, we propose an eye tracking and gaze estimation system for mobile phones. We integrate an eye detector with eye-corner and iso-center cues to improve pupil detection, and we develop a robust eye tracking system that combines eye detection with optical-flow based image tracking. In addition, we incorporate the orientation sensor information from the mobile phone to improve the eye tracking for accurate gaze estimation. We demonstrate the accuracy of the proposed eye tracking and gaze estimation system through experiments on public video sequences as well as videos acquired directly from a mobile phone.
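
    A minimal sketch of the optical-flow tracking stage, using OpenCV's pyramidal Lucas-Kanade tracker to propagate eye landmarks between frames; the detector supplying eye_points and the window parameters below are assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def track_eye_points(prev_gray, next_gray, eye_points):
    """Propagate detected eye/pupil landmarks with pyramidal Lucas-Kanade.

    eye_points: Nx2 float array from a separate eye detector (assumed).
    Returns the tracked positions and a per-point validity mask.
    """
    p0 = eye_points.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, p0, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    return p1.reshape(-1, 2), status.ravel().astype(bool)
```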

  6. Microwave and camera sensor fusion for the shape extraction of metallic 3D space objects

    NASA Technical Reports Server (NTRS)

    Shaw, Scott W.; Defigueiredo, Rui J. P.; Krishen, Kumar

    1989-01-01

    The vacuum of space presents special problems for optical image sensors. Metallic objects in this environment can produce intense specular reflections and deep shadows. By combining the polarized radar cross section (RCS) with an incomplete camera image, it has become possible to better determine the shape of some simple three-dimensional objects. The radar data are used in an iterative procedure that generates successive approximations to the target shape by minimizing the error between computed scattering cross-sections and the observed radar returns. Favorable results have been obtained for simulations and experiments reconstructing plates, ellipsoids, and arbitrary surfaces.

  7. Development of Lidar Sensor Systems for Autonomous Safe Landing on Planetary Bodies

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Pierottet, Diego F.; Petway, Larry B.; Vanek, Michael D.

    2010-01-01

    Lidar has been identified by NASA as a key technology for enabling autonomous safe landing of future robotic and crewed lunar landing vehicles. NASA LaRC has been developing three laser/lidar sensor systems under the ALHAT project. The capabilities of these Lidar sensor systems were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard helicopters and a fixed wing aircraft. The airborne tests were performed over Moon-like terrain in the California and Nevada deserts. These tests provided the necessary data for the development of signal processing software, and algorithms for hazard detection and navigation. The tests helped identify technology areas needing improvement and will also help guide future technology advancement activities.

  8. Integrating Sensors into a Marine Drone for Bathymetric 3D Surveys in Shallow Waters.

    PubMed

    Giordano, Francesco; Mattei, Gaia; Parente, Claudio; Peluso, Francesco; Santamaria, Raffaele

    2015-12-29

    This paper demonstrates that accurate data concerning bathymetry as well as environmental conditions in shallow waters can be acquired using sensors that are integrated into the same marine vehicle. An open prototype of an unmanned surface vessel (USV) named MicroVeGA is described. The focus is on the main instruments installed on board: a differential Global Positioning System (GPS) and single-beam echo sounder; an inertial platform for attitude control; an ultrasound obstacle-detection system with temperature control; and emerged and submerged video acquisition systems. The results of two case studies are presented, both concerning areas (Sorrento Marina Grande and Marechiaro Harbour, both in the Gulf of Naples) whose coastal physiography impedes the execution of a bathymetric survey with traditional boats. In addition, those areas are critical because of the presence of submerged archaeological remains that produce rapid changes in depth values. The experiments confirm that the integration of the sensors improves the instruments' performance and survey accuracy.

  9. A Compact 3D Omnidirectional Range Sensor of High Resolution for Robust Reconstruction of Environments

    PubMed Central

    Marani, Roberto; Renò, Vito; Nitti, Massimiliano; D'Orazio, Tiziana; Stella, Ettore

    2015-01-01

    In this paper, an accurate range sensor for the three-dimensional reconstruction of environments is designed and developed. Following the principles of laser profilometry, the device exploits a set of optical transmitters able to project a laser line onto the environment. A high-resolution, high-frame-rate camera assisted by a telecentric lens collects the laser light reflected by a parabolic mirror, whose shape is designed ad hoc to achieve a maximum measurement error of 10 mm when the target is placed 3 m away from the laser source. Measurements are derived by means of an analytical model whose parameters are estimated during a preliminary calibration phase. The geometrical parameters, analytical modeling and image processing steps are validated through several experiments, which indicate the capability of the proposed device to recover the shape of a target with high accuracy. Experimental measurements show Gaussian statistics, with a standard deviation of 1.74 mm within the measurable range. The results prove that the presented range sensor is a good candidate for environmental inspections and measurements. PMID:25621605

  12. A robust method to detect zero velocity for improved 3D personal navigation using inertial sensors.

    PubMed

    Xu, Zhengyi; Wei, Jianming; Zhang, Bo; Yang, Weijun

    2015-01-01

    This paper proposes a robust zero-velocity (ZV) detector algorithm to accurately identify the stationary periods in a gait cycle. The proposed algorithm adopts an effective gait cycle segmentation method and introduces a Bayesian network (BN) model, based on the measurements of inertial sensors and kinesiology knowledge, to infer the ZV period. During the detected ZV period, an Extended Kalman Filter (EKF) is used to estimate the error states and calibrate the position error. The experiments reveal that the removal rate of ZV false detections by the proposed method increases by 80% compared with the traditional method at high walking speed. Furthermore, based on the detected ZV, the Personal Inertial Navigation System (PINS) algorithm aided by the EKF performs better, especially in altitude estimation. PMID:25831086
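
    For context, the traditional baseline that the paper improves upon is a window-wise threshold test on the IMU magnitudes; the thresholds and window length below are illustrative assumptions, and the paper's Bayesian-network inference replaces exactly this thresholding stage.

```python
import numpy as np

def zv_detect(acc, gyr, fs, g=9.81, acc_tol=0.3, gyr_tol=0.5, win=0.1):
    """Baseline zero-velocity detector for foot-mounted IMU data.

    Flags samples where, over a sliding window, the accelerometer magnitude
    stays near gravity and the gyroscope magnitude stays small.  Thresholds
    are illustrative.  acc, gyr: (N, 3) arrays in m/s^2 and rad/s.
    """
    n = max(1, int(win * fs))
    a_dev = np.abs(np.linalg.norm(acc, axis=1) - g)
    w_mag = np.linalg.norm(gyr, axis=1)
    still = (a_dev < acc_tol) & (w_mag < gyr_tol)
    # Require the whole window to be still (moving count via convolution)
    counts = np.convolve(still.astype(float), np.ones(n), mode='same')
    return counts >= n
```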

  13. Efficient Data Gathering in 3D Linear Underwater Wireless Sensor Networks Using Sink Mobility

    PubMed Central

    Akbar, Mariam; Javaid, Nadeem; Khan, Ayesha Hussain; Imran, Muhammad; Shoaib, Muhammad; Vasilakos, Athanasios

    2016-01-01

    Due to the harsh and unpredictable underwater environment, designing an energy-efficient routing protocol for underwater wireless sensor networks (UWSNs) demands more accuracy and extra computation. In the proposed scheme, we introduce a mobile sink (MS), i.e., an autonomous underwater vehicle (AUV), as well as courier nodes (CNs), to minimize the energy consumption of nodes. The MS and CNs stop at designated points for data gathering; the CNs then forward the received data to the MS for further transmission. The mobility of the CNs and the MS minimizes the overall energy consumption of the nodes. We perform simulations to investigate the performance of the proposed scheme and compare it to preexisting techniques in terms of network lifetime, throughput, path loss, transmission loss and packet drop ratio. The results show that the proposed technique performs better in terms of network lifetime, throughput, path loss and scalability. PMID:27007373

  16. Non-Enzymatic Glucose Sensor Based on 3D Graphene Oxide Hydrogel Crosslinked by Various Diamines.

    PubMed

    Hoa, Le Thuy; Hur, Seung Hyun

    2015-11-01

    A non-enzymatic glucose sensor was fabricated from well-controlled, chemically crosslinked graphene oxide hydrogels (GOHs). By using various diamines, such as ethylenediamine (EDA), p-phenylenediamine (pPDA) and o-phenylenediamine (oPDA), which have different amine-to-amine distances, the structural properties of the GOHs, such as surface area and pore volume, can be controlled. The pPDA-GOH exhibited the largest surface area and pore volume owing to the longest amine-to-amine distance of pPDA, which resulted in the highest sensitivity for sensing glucose and other saccharides such as fructose (C6H12O6), galactose (C6H12O6) and sucrose (C12H22O11). It also showed fast, wide-range glucose sensing in amperometric tests and excellent selectivity against interfering species such as ascorbic acid. PMID:26726578

  17. Multi-sensor super-resolution for hybrid range imaging with application to 3-D endoscopy and open surgery.

    PubMed

    Köhler, Thomas; Haase, Sven; Bauer, Sebastian; Wasza, Jakob; Kilgus, Thomas; Maier-Hein, Lena; Stock, Christian; Hornegger, Joachim; Feußner, Hubertus

    2015-08-01

    In this paper, we propose a multi-sensor super-resolution framework for hybrid imaging that super-resolves data from one modality by taking advantage of additional guidance images from a complementary modality. This concept is applied to hybrid 3-D range imaging in image-guided surgery, where high-quality photometric data are exploited to enhance range images of low spatial resolution. We formulate super-resolution based on the maximum a posteriori (MAP) principle and reconstruct high-resolution range data from multiple low-resolution frames and complementary photometric information. Robust motion estimation, as required for super-resolution, is performed on the photometric data to derive displacement fields of subpixel accuracy for the associated range images. For improved reconstruction of depth discontinuities, a novel adaptive regularizer exploiting correlations between the two modalities is embedded into the MAP estimation. We evaluated our method on synthetic data as well as ex-vivo images in open surgery and endoscopy. The proposed multi-sensor framework improves the peak signal-to-noise ratio by 2 dB and the structural similarity by 0.03 on average compared with conventional single-sensor approaches. In ex-vivo experiments on porcine organs, our method achieves substantial improvements in the reconstruction of depth discontinuities.

  18. Direct Growth of Graphene Films on 3D Grating Structural Quartz Substrates for High-Performance Pressure-Sensitive Sensors.

    PubMed

    Song, Xuefen; Sun, Tai; Yang, Jun; Yu, Leyong; Wei, Dacheng; Fang, Liang; Lu, Bin; Du, Chunlei; Wei, Dapeng

    2016-07-01

    Conformal graphene films have been synthesized directly on the surface of grating-microstructured quartz substrates by a simple chemical vapor deposition process. The excellent conformality and relatively high quality of the as-prepared graphene on the three-dimensional substrate have been verified by scanning electron microscopy and Raman spectra. This conformal graphene film possesses excellent electrical and optical properties, with a sheet resistance of <2000 Ω·sq(-1) and a transmittance of >80% (at 550 nm). Attached to a flat graphene film on a poly(dimethylsiloxane) substrate, it can work as a pressure-sensitive sensor. This device possesses a high pressure sensitivity of -6.524 kPa(-1) in the low-pressure range of 0-200 Pa. Meanwhile, the sensor exhibits super-reliability (≥5000 cycles) and an ultrafast response time (≤4 ms). Owing to these features, this pressure-sensitive sensor based on 3D conformal graphene was used to measure wind pressure, showing higher accuracy and a lower background noise level than a commercial anemometer. PMID:27269362

  19. Estimation of aboveground biomass in forests using multi-sensor (LIDAR, IFSAR, ETM+) fusion

    NASA Astrophysics Data System (ADS)

    Hyde, P.; Dubuyah, R.; Blair, B.; Hofton, M.; Hunsaker, C.; Pierce, L.; Walker, W.

    2002-05-01

    Aboveground biomass in forests, or the dry weight of standing trees, is a key ecosystem parameter for carbon dynamics, fire modeling, and biodiversity studies. Field-based assessments are expensive, and methods to scale from field plots to landscapes are not generally accepted. Remote sensing potentially provides a cost-effective alternative, but no single sensor has yet provided accurate, consistent estimates in all biomes. Passive optical sensors and synthetic aperture radar (SAR) have proven effective only in young, structurally simple forests. Light detection and ranging (LIDAR) has been effective in old-growth, structurally complex forests, but its data are not widely available. Combining information from these sensors leverages the high-information-content, high-cost LIDAR data with lower-cost, more widely available SAR and passive optical data. In this study, Landsat ETM+, X-band interferometric SAR, and airborne LIDAR from the Laser Vegetation Imaging Sensor (LVIS) were statistically fused using a decision tree classifier and compared with field-based estimates of biomass in Sierra National Forest, CA, USA. Biomass estimates derived from all sensors combined were more accurate than those derived from any single sensor.
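
    A hedged sketch of decision-tree sensor fusion follows. The study used a decision-tree classifier; this example uses scikit-learn's regressor variant on hypothetical per-plot features (ETM+ NDVI, InSAR height, LVIS RH50) so the biomass output stays continuous. All values are illustrative, not the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical per-plot features: [ETM+ NDVI, InSAR height (m), LVIS RH50 (m)]
X = np.array([[0.55, 8.2, 5.1], [0.71, 17.5, 14.0], [0.62, 11.0, 9.3],
              [0.80, 25.1, 22.4], [0.48, 5.5, 3.2], [0.75, 20.3, 18.8]])
y = np.array([40.0, 180.0, 95.0, 310.0, 25.0, 240.0])  # field biomass, Mg/ha

model = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(model.predict([[0.68, 15.0, 12.0]]))   # fused biomass estimate
```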

  20. Retrieval of Vegetation Structure and Carbon Balance Parameters Using Ground-Based Lidar and Scaling to Airborne and Spaceborne Lidar Sensors

    NASA Astrophysics Data System (ADS)

    Strahler, A. H.; Ni-Meister, W.; Woodcock, C. E.; Li, X.; Jupp, D. L.; Culvenor, D.

    2006-12-01

    This research uses a ground-based, upward hemispherical scanning lidar to retrieve forest canopy structural information, including tree height, mean tree diameter, basal area, stem count density, crown diameter, woody biomass, and green biomass. These parameters are then linked to airborne and spaceborne lidars to provide large-area mapping of structural and biomass parameters. The terrestrial lidar instrument, Echidna(TM), developed by CSIRO Australia, allows rapid acquisition of vegetation structure data that can be readily integrated with downward-looking airborne lidar, such as LVIS (Laser Vegetation Imaging Sensor), and spaceborne lidar, such as GLAS (Geoscience Laser Altimeter System) on ICESat. Lidar waveforms and vegetation structure are linked for these three sensors through the hybrid geometric-optical radiative-transfer (GORT) model, which uses basic vegetation structure parameters and principles of geometric optics, coupled with radiative transfer theory, to model scattering and absorption of light by collections of individual plant crowns. Use of a common model for lidar waveforms at ground, airborne, and spaceborne levels facilitates integration and scaling of the data to provide large-area maps and inventories of vegetation structure and carbon stocks. Our research plan includes acquisition of Echidna(TM) under-canopy hemispherical lidar scans at North American test sites where LVIS and GLAS data have been or are being acquired; analysis and modeling of spatially coincident lidar waveforms acquired by the three sensor systems; linking of the three data sources using the GORT model; and mapping of vegetation structure and carbon-balance parameters at LVIS and GLAS resolutions based on Echidna(TM) measurements.

  1. Lidar.

    PubMed

    Collis, R T

    1970-08-01

    Lidar uses laser energy in radar fashion to observe atmospheric backscattering as a function of range. The concomitant attenuation of energy along the intervening path complicates the evaluation of the observations, but even on a qualitative basis the delineation of clouds or of structure in the apparently clear air is of considerable value in operational meteorology and atmospheric research. Under certain conditions the atmosphere's optical parameters may be evaluated and related to meteorologically significant characteristics. Advanced techniques based on resonant absorption and Raman-shift backscattering are briefly noted. The current attainment and future prospects of lidar are reviewed.

  2. On-machine measurement of the grinding wheels' 3D surface topography using a laser displacement sensor

    NASA Astrophysics Data System (ADS)

    Pan, Yongcheng; Zhao, Qingliang; Guo, Bing

    2014-08-01

    A method for non-contact, on-machine measurement of the three-dimensional surface topography of a grinding wheel's whole surface was developed in this paper, focusing on an electroplated coarse-grained diamond grinding wheel. The measuring system consists of a Keyence laser displacement sensor, a Keyence controller and an NI PCI-6132 data acquisition card. A resolution of 0.1 μm in the vertical direction and 8 μm in the horizontal direction could be achieved. After processing the data in LabVIEW and MATLAB, the 3D topography of the grinding wheel's whole surface could be reconstructed. The reconstructed 3D topography of a marked area of the grinding wheel was very similar to the real topography captured by a high-depth-of-field optical digital microscope (HDF-ODM) and a scanning electron microscope (SEM), proving that the method is accurate and effective. Through subsequent data processing, the topography of every grain could be extracted, and the active grain number, the active grain volume and the active grains' bearing ratio could be calculated. These three parameters can serve as criteria to evaluate the grinding performance of coarse-grained diamond grinding wheels, allowing the performance of the grinding wheel to be evaluated on-machine, accurately and quantitatively.

  3. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.

    PubMed

    Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio

    2009-01-01

    3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000, were performed and are reported in this paper. Two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, covering the camera warm-up period, the distance measurement error, and the influence of the camera orientation with respect to the observed object. The second concerns the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high-contrast targets.

  4. 3D geometrical inspection of complex geometry parts using a novel laser triangulation sensor and a robot.

    PubMed

    Brosed, Francisco Javier; Aguilar, Juan José; Guillomía, David; Santolaria, Jorge

    2011-01-01

    This article discusses different non-contact 3D measuring strategies and presents a model for measuring complex geometry parts, manipulated by a robot arm, using a novel vision system consisting of a laser triangulation sensor and a motorized linear stage. First, the geometric model, incorporating a simple automatic module for long-term stability improvement, is outlined. The new method used in the automatic module allows the sensor, including the motorized linear stage, to be set up for scanning without external measurement devices. In the measurement model, the robot is simply a part positioner with high repeatability; its position and orientation data are not used for the measurement, so it is not directly "coupled" as an active component in the model. The function of the robot is to present the various surfaces of the workpiece within the measurement range of the vision system, which is responsible for the measurement. Thus, the whole system is unaffected by the robot's own trajectory-following errors, except those due to a lack of static repeatability. For the indirect link between the vision system and the robot, the model developed needs only a first piece, measured as a "zero" or master piece and known through its accurate measurement using, for example, a Coordinate Measuring Machine. The proposed strategy presents a different approach from traditional laser triangulation systems on board the robot in order to improve the measurement accuracy, and several important cues for self-recalibration are explored using only a master piece. Experimental results are also presented to demonstrate the technique and the final 3D measurement accuracy. PMID:22346569
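
    As background on the measurement principle, the sketch below converts the imaged position of the laser stripe into depth for a simplified triangulation geometry (camera at the origin, laser source offset by a baseline and tilted toward the optical axis). All parameters are illustrative assumptions; the article's sensor model and calibration are more elaborate.

```python
import numpy as np

def stripe_depth(u_px, f_px, baseline_m, laser_angle_deg):
    """Depth from laser triangulation (simplified, hypothetical geometry).

    Camera at the origin looking along +z; laser offset by baseline_m
    along x, its sheet tilted by laser_angle_deg toward the optical axis.
    u_px is the imaged stripe position (pixels from the principal point),
    f_px the focal length in pixels:  z = f * b / (u + f * tan(alpha)).
    """
    tan_a = np.tan(np.radians(laser_angle_deg))
    return f_px * baseline_m / (u_px + f_px * tan_a)

# e.g. f = 1200 px, b = 0.10 m, alpha = 15 deg, stripe seen at u = 80 px
print(stripe_depth(80.0, 1200.0, 0.10, 15.0))   # -> ~0.30 m
```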

  7. a New Automatic System Calibration of Multi-Cameras and LIDAR Sensors

    NASA Astrophysics Data System (ADS)

    Hassanein, M.; Moussa, A.; El-Sheimy, N.

    2016-06-01

    In the last few years, multi-camera and LIDAR systems have drawn the attention of the mapping community. They have been deployed on different mobile mapping platforms. The different uses of these platforms, especially UAVs, have offered new applications and developments which require fast and accurate results. The successful calibration of such systems is a key factor in achieving accurate results and in the successful processing of the system measurements, especially given the different types of measurements provided by the LIDAR and the cameras. System calibration aims to estimate the geometric relationships between the different system components. A number of applications require the systems to be ready for operation in a short time, especially for disaster monitoring applications. Moreover, many present system calibration techniques are constrained by the need for special lab arrangements for the calibration procedures. In this paper, a new technique for the calibration of integrated LIDAR and multi-camera systems is presented. The proposed technique offers a calibration solution that overcomes the need for special labs for standard calibration procedures. In the proposed technique, 3D reconstruction of automatically detected and matched image points is used to generate a sparse image-driven point cloud; then, a registration between the LIDAR-generated 3D point cloud and the image-driven point cloud takes place to estimate the geometric relationships between the cameras and the LIDAR. In the presented technique, a simple 3D artificial target composed of three intersecting plates is used to simplify the lab requirements for the calibration procedure; this geometry ensures enough constraints for the registration between the point clouds constructed from the two systems to converge. The achieved results of the proposed approach prove its ability to provide an adequate and fully automated calibration without the need for special laboratory arrangements.
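
    The registration step between the image-driven and LIDAR point clouds can be illustrated with a minimal point-to-point ICP. This is a generic sketch, not the authors' implementation; their automated matching and three-plate target handling are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Minimal point-to-point ICP aligning source to target (Nx3 arrays)."""
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)               # nearest-neighbour pairs
        q = target[idx]
        mu_s, mu_q = src.mean(0), q.mean(0)
        H = (src - mu_s).T @ (q - mu_q)        # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_i = Vt.T @ D @ U.T                   # optimal rotation (Kabsch)
        t_i = mu_q - R_i @ mu_s
        src = src @ R_i.T + t_i
        R, t = R_i @ R, R_i @ t + t_i          # accumulate the transform
    return R, t
```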

  8. Ultrasonic and LIDAR Sensors for Electronic Canopy Characterization in Vineyards: Advances to Improve Pesticide Application Methods

    PubMed Central

    Llorens, Jordi; Gil, Emilio; Llop, Jordi; Escolà, Alexandre

    2011-01-01

    Canopy characterization is a key factor in improving pesticide application methods in tree crops and vineyards. The development of quick, easy and efficient methods to determine the fundamental parameters used to characterize canopy structure is thus an important need. In this research, the use of ultrasonic and LIDAR sensors has been compared with the traditional manual, destructive canopy measurement procedure. For both methods, the values of key parameters such as crop height, crop width, crop volume and leaf area have been compared. The obtained results indicate that an ultrasonic sensor is an appropriate tool to determine the average canopy characteristics, while a LIDAR sensor provides more accurate and detailed information about the canopy. Good correlations have been obtained between crop volume (CVU) values measured with ultrasonic sensors and the leaf area index, LAI (R2 = 0.51). A good correlation has also been obtained between the canopy volumes measured with the ultrasonic and LIDAR sensors (R2 = 0.52). Laser measurements of crop height (CHL) allow one to accurately predict the canopy volume. The proposed new technologies seem very appropriate as complementary tools to improve the efficiency of pesticide applications, although further improvements are still needed. PMID:22319405

  9. Method for Optimal Sensor Deployment on 3D Terrains Utilizing a Steady State Genetic Algorithm with a Guided Walk Mutation Operator Based on the Wavelet Transform

    PubMed Central

    Unaldi, Numan; Temel, Samil; Asari, Vijayan K.

    2012-01-01

    One of the most critical issues of Wireless Sensor Networks (WSNs) is the deployment of a limited number of sensors in order to achieve maximum coverage of a terrain. Optimal sensor deployment, which minimizes the energy consumed, the communication time, and the manpower needed to maintain the network, has attracted growing interest over the last decade. Most of the studies in the literature today are proposed for two-dimensional (2D) surfaces; however, real-world sensor deployments often arise in three-dimensional (3D) environments. In this paper, a guided wavelet transform (WT) based deployment strategy (WTDS) for 3D terrains, in which the sensor movements are carried out within the mutation phase of the genetic algorithms (GAs), is proposed. The proposed algorithm aims to maximize the Quality of Coverage (QoC) of a WSN by deploying a limited number of sensors on a 3D surface, utilizing a probabilistic sensing model and Bresenham's line of sight (LOS) algorithm. The method followed in this paper is novel to the literature, and the performance of the proposed algorithm is compared with the Delaunay Triangulation (DT) method as well as a standard genetic algorithm based method; the results reveal that the proposed method is more powerful and more successful for sensor deployment on 3D terrains. PMID:22666078
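
    The visibility test at the core of such probabilistic coverage models can be sketched as follows (illustrative Python with a hypothetical heightmap interface, not the paper's code): a sensor covers a target only if the straight segment between them clears the terrain along the Bresenham trace of grid cells:

      def bresenham(x0, y0, x1, y1):
          """Yield the integer grid cells along the line (x0,y0)-(x1,y1)."""
          dx, dy = abs(x1 - x0), abs(y1 - y0)
          sx = 1 if x0 < x1 else -1
          sy = 1 if y0 < y1 else -1
          err = dx - dy
          while True:
              yield x0, y0
              if (x0, y0) == (x1, y1):
                  return
              e2 = 2 * err
              if e2 > -dy:
                  err -= dy; x0 += sx
              if e2 < dx:
                  err += dx; y0 += sy

      def has_los(terrain, sensor, target):
          """True if the segment sensor->target clears the heightmap terrain[y][x].
          Ray height is interpolated by cell index, an approximation."""
          (x0, y0, z0), (x1, y1, z1) = sensor, target
          cells = list(bresenham(x0, y0, x1, y1))
          n = len(cells) - 1
          for i, (x, y) in enumerate(cells[1:-1], start=1):
              z = z0 + (z1 - z0) * i / n        # ray height above cell (x, y)
              if terrain[y][x] > z:
                  return False
          return True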

  10. Rendezvous lidar sensor system for terminal rendezvous, capture, and berthing to the International Space Station

    NASA Astrophysics Data System (ADS)

    Allen, Andrew C. M.; Langley, Christopher; Mukherji, Raja; Taylor, Allen B.; Umasuthan, Manickam; Barfoot, Timothy D.

    2008-04-01

    The Rendezvous Lidar System (RLS), a high-performance scanning time-of-flight lidar jointly developed by MDA and Optech, was employed successfully during the XSS-11 spacecraft's 23-month mission. Ongoing development of the RLS mission software has resulted in an integrated pose functionality suited to safety-critical applications, specifically the terminal rendezvous of a visiting vehicle with the International Space Station (ISS). This integrated pose capability extends the contribution of the lidar from long-range acquisition and tracking for terminal rendezvous through to final alignment for docking or berthing. Innovative aspects of the technology that were developed include: 1) efficacious algorithms to detect, recognize, and compute the pose of a client spacecraft from a single scan using an intelligent search of candidate solutions, 2) automatic scene evaluation and feature selection algorithms and software that assist mission planners in specifying accurate and robust scan scheduling, and 3) optimal pose tracking functionality using knowledge of the relative spacecraft states. The development process incorporated the concept of sensor system bandwidth to address the sometimes unclear or misleading specifications of update rate and measurement delay often cited for rendezvous sensors. Because relative navigation sensors provide the measured feedback to the spacecraft GN&C, we propose a new method of specifying the performance of these sensors to better enable a full assessment of a given sensor in the closed-loop control for any given vehicle. This approach, and the tools and methods enabling it, permitted a rapid and rigorous development and verification of the pose tracking functionality. The complete system was then integrated and demonstrated in the MDA space vision facility using the flight-representative engineering model RLS lidar sensor.

  11. Capturing 3D resistivity of semi-arid karstic subsurface in varying moisture conditions using a wireless sensor network

    NASA Astrophysics Data System (ADS)

    Barnhart, K.; Oden, C. P.

    2012-12-01

    The dissolution of soluble bedrock results in surface and subterranean karst channels, which comprise 7-10% of the Earth's dry surface. Karst serves as a preferential conduit to focus surface and subsurface water, but it is difficult to exploit as a water resource or protect from pollution because of its irregular structure and nonlinear hydrodynamic behavior. Geophysical characterization of karst commonly employs resistivity and seismic methods, but difficulties arise due to low resistivity contrast in arid environments and insufficient resolution of complex heterogeneous structures. To help reduce these difficulties, we employ a state-of-the-art wireless geophysical sensor array, which combines low-power radio telemetry and solar energy harvesting to enable long-term in-situ monitoring. The wireless aspect removes topological constraints common with standard wired resistivity equipment, which facilitates better coverage and/or sensor density to help improve aspect ratio and resolution. Continuous in-situ deployment allows data to be recorded according to nature's time scale; measurements are made during infrequent precipitation events, which can increase resistivity contrast. The array is coordinated by a smart wireless bridge that continuously monitors local soil moisture content to detect when precipitation occurs, schedules resistivity surveys, and periodically relays data to the cloud via 3G cellular service. Traditional 2/3D gravity and seismic reflection surveys have also been conducted to clarify and corroborate results.
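
    The event-driven survey scheduling described above can be illustrated with a toy trigger loop (hypothetical interface names; the actual logic runs on the embedded wireless bridge):

      import time

      def survey_scheduler(read_moisture, run_survey, jump=0.05, poll_s=600):
          """Toy trigger loop: launch a resistivity survey when soil moisture
          rises sharply, i.e. when a precipitation event has temporarily
          increased the resistivity contrast. Interfaces are hypothetical."""
          last = read_moisture()
          while True:
              time.sleep(poll_s)              # poll every 10 minutes
              now = read_moisture()
              if now - last >= jump:          # volumetric moisture jump
                  run_survey()
              last = now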

  12. Triboelectric nanogenerator built on suspended 3D spiral structure as vibration and positioning sensor and wave energy harvester.

    PubMed

    Hu, Youfan; Yang, Jin; Jing, Qingshen; Niu, Simiao; Wu, Wenzhuo; Wang, Zhong Lin

    2013-11-26

    An unstable mechanical structure that can self-balance when perturbed is a superior choice for vibration energy harvesting and vibration detection. In this work, a suspended 3D spiral structure is integrated with a triboelectric nanogenerator (TENG) for energy harvesting and sensor applications. The newly designed vertical contact-separation mode TENG has a wide working bandwidth of 30 Hz in the low-frequency range, with a maximum output power density of 2.76 W/m² on a load of 6 MΩ. The position of an in-plane vibration source was identified by placing TENGs at multiple positions as multichannel, self-powered active sensors, and the location of the vibration source was determined with an error of less than 6%. The magnitude of the vibration is also measured from the output voltage and current signals of the TENG. By integrating the TENG inside a buoy ball, wave energy harvesting at the water surface has been demonstrated and used to power illumination lights, which shows great potential for applications in marine science and environmental/infrastructure monitoring.

  13. FLASH LIDAR Based Relative Navigation

    NASA Technical Reports Server (NTRS)

    Brazzel, Jack; Clark, Fred; Milenkovic, Zoran

    2014-01-01

    Relative navigation remains the most challenging part of spacecraft rendezvous and docking. In recent years, flash LIDARs have been increasingly selected as the go-to sensors for proximity operations and docking. Flash LIDARs are generally lighter and require less power than scanning LIDARs. Flash LIDARs have no moving parts, and they are capable of tracking multiple targets as well as generating a 3D map of a given target. However, there are some significant drawbacks of flash LIDARs that must be resolved if their use is to be of long-term significance. Overcoming the challenges of flash LIDARs for navigation, namely low technology readiness level, lack of historical performance data, target identification, existence of false positives, and the performance of vision processing algorithms as intermediaries between the raw sensor data and the Kalman filter, requires a world-class testing facility, such as the Lockheed Martin Space Operations Simulation Center (SOSC). Ground-based testing is a critical step for maturing next-generation flash LIDAR-based spacecraft relative navigation. This paper focuses on the tests of an integrated relative navigation system conducted at the SOSC in January 2014. The intent of the tests was to characterize and then improve the performance of relative navigation, while addressing many of the flash LIDAR challenges mentioned above. A section on navigation performance and future recommendations completes the discussion.
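
    The vision processing sits between the raw flash LIDAR frames and the navigation filter: it reduces each frame to a relative-position measurement that the filter can fuse. The sketch below shows the generic textbook Kalman measurement update that consumes such a measurement (an illustration, not the ALHAT filter tested at the SOSC):

      import numpy as np

      def kf_update(x, P, z, H, R):
          """Standard Kalman measurement update: fuse a vision-processed
          relative-position measurement z into the state estimate (x, P)."""
          y = z - H @ x                      # innovation
          S = H @ P @ H.T + R                # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
          x_new = x + K @ y
          P_new = (np.eye(len(x)) - K @ H) @ P
          return x_new, P_new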

  14. Using Arduinos and 3D-printers to Build Research-grade Weather Stations and Environmental Sensors

    NASA Astrophysics Data System (ADS)

    Ham, J. M.

    2013-12-01

    Many plant, soil, and surface-boundary-layer processes in the geosphere are governed by the microclimate at the land-air interface. Environmental monitoring is needed at smaller scales and higher frequencies than provided by existing weather monitoring networks. The objective of this project was to design, prototype, and test a research-grade weather station based on open-source hardware/software and off-the-shelf components. The idea is that anyone could build these systems with only elementary skills in fabrication and electronics. The first prototypes included measurements of air temperature, humidity, pressure, global irradiance, wind speed, and wind direction. The best approach for measuring precipitation is still being investigated. The data acquisition system was designed around the Arduino microcontroller and included an LCD-based user interface, SD card data storage, and solar power. Sensors were sampled at 5 s intervals, and means, standard deviations, and maxima/minima were stored at user-defined intervals (5, 30, or 60 min). Several of the sensor components were printed in plastic using a hobby-grade 3D printer (e.g., RepRap Project). Both passive and aspirated radiation shields for measuring air temperature were printed in white Acrylonitrile Butadiene Styrene (ABS). A housing for measuring solar irradiance using a photodiode-based pyranometer was printed in opaque ABS. The prototype weather station was co-deployed with commercial research-grade instruments at an agriculture research unit near Fort Collins, Colorado, USA. Excellent agreement was found between the Arduino-based system and the commercial weather instruments. The technology was also used to support air quality research and automated air sampling. The next step is to incorporate remote access and station-to-station networking using Wi-Fi, cellular phone, and radio communications (e.g., XBee).
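
    The station's aggregation scheme, sampling every 5 s and reporting interval statistics, can be sketched as follows (shown in Python for brevity; the actual logger is Arduino C++, and the class name is illustrative):

      import math

      class IntervalStats:
          """Accumulate 5 s samples; report mean/std/min/max at each
          user-defined output interval. Assumes at least one sample."""
          def __init__(self):
              self.reset()
          def reset(self):
              self.n = 0; self.s = 0.0; self.s2 = 0.0
              self.lo = math.inf; self.hi = -math.inf
          def add(self, v):
              self.n += 1; self.s += v; self.s2 += v * v
              self.lo = min(self.lo, v); self.hi = max(self.hi, v)
          def flush(self):
              mean = self.s / self.n
              var = max(self.s2 / self.n - mean * mean, 0.0)
              out = (mean, math.sqrt(var), self.lo, self.hi)
              self.reset()
              return out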

  15. Relative Navigation Light Detection and Ranging (LIDAR) Sensor Development Test Objective (DTO) Performance Verification

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.

    2013-01-01

    The NASA Engineering and Safety Center (NESC) received a request from the NASA Associate Administrator (AA) for the Human Exploration and Operations Mission Directorate (HEOMD) to quantitatively evaluate the individual performance of three light detection and ranging (LIDAR) rendezvous sensors flown as an orbiter development test objective on Space Transportation System (STS)-127, STS-133, STS-134, and STS-135. This document contains the outcome of the NESC assessment.

  16. Doppler Lidar Sensor for Precision Navigation in GPS-Deprived Environment

    NASA Technical Reports Server (NTRS)

    Amzajerdian, F.; Pierrottet, D. F.; Hines, G. D.; Petway, L. B.; Barnes, B. W.

    2013-01-01

    Landing mission concepts that are being developed for exploration of solar system bodies are increasingly ambitious in their implementations and objectives. Most of these missions require accurate position and velocity data during their descent phase in order to ensure safe, soft landing at the pre-designated sites. Data from the vehicle's Inertial Measurement Unit will not be sufficient due to significant drift error after extended travel time in space. Therefore, an onboard sensor is required to provide the necessary data for landing in the GPS-deprived environment of space. For this reason, NASA Langley Research Center has been developing an advanced Doppler lidar sensor capable of providing accurate and reliable data suitable for operation in the highly constrained environment of space. The Doppler lidar transmits three laser beams in different directions toward the ground. The signal from each beam provides the platform velocity and range to the ground along the laser line-of-sight (LOS). The six LOS measurements are then combined in order to determine the three components of the vehicle velocity vector, and to accurately measure altitude and attitude angles relative to the local ground. These measurements are used by an autonomous Guidance, Navigation, and Control system to accurately navigate the vehicle from a few kilometers above the ground to the designated location and to execute a gentle touchdown. A prototype version of our lidar sensor has been completed for a closed-loop demonstration onboard a rocket-powered terrestrial free-flyer vehicle.
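
    Recovering the full velocity vector from the three LOS Doppler measurements amounts to solving a small linear system built from the beam pointing directions. A sketch with a hypothetical beam geometry (the flight sensor's actual cant angles are not given in the abstract):

      import numpy as np

      # Hypothetical geometry: three unit beam vectors canted 22.5 deg off
      # nadir, spaced 120 deg apart in azimuth, in the vehicle frame.
      theta = np.radians(22.5)
      az = np.radians([0.0, 120.0, 240.0])
      beams = np.stack([np.sin(theta) * np.cos(az),
                        np.sin(theta) * np.sin(az),
                        np.full(3, np.cos(theta))], axis=1)

      # Each beam measures the projection of vehicle velocity v on its LOS:
      # beams[i] . v = d_i, so the 3x3 system yields the velocity vector.
      los_doppler = np.array([1.2, -0.4, 0.7])   # hypothetical LOS speeds, m/s
      v = np.linalg.solve(beams, los_doppler)
      print("velocity vector (m/s):", v)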

  17. Compact Optical Fiber 3D Shape Sensor Based on a Pair of Orthogonal Tilted Fiber Bragg Gratings.

    PubMed

    Feng, Dingyi; Zhou, Wenjun; Qiao, Xueguang; Albert, Jacques

    2015-01-01

    In this work, a compact fiber-optic 3D shape sensor consisting of two serially connected 2° tilted fiber Bragg gratings (TFBGs) is proposed, where the orientations of the grating planes of the two TFBGs are orthogonal. The measurement of the reflective transmission spectrum from the pair of TFBGs was implemented by Fresnel reflection of the cleaved fiber end. The two groups of cladding mode resonances in the reflection spectrum respond differentially to bending, which allows for the unique determination of the magnitude and orientation of the bend plane (i.e. with a ±180 degree uncertainty). Bending responses ranging from −0.33 to +0.21 dB/m⁻¹ (depending on orientation) are experimentally demonstrated with bending from 0 to 3.03 m⁻¹. In the third (axial) direction, the strain is obtained directly by the shift of the TFBG Bragg wavelengths with a sensitivity of 1.06 pm/με. PMID:26617191

  19. SHAPES - Spatial, High-Accuracy, Position-Encoding Sensor for multi-point, 3-D position measurement of large flexible structures

    NASA Technical Reports Server (NTRS)

    Nerheim, N. M.

    1987-01-01

    An electro-optical position sensor for precise simultaneous measurement of the 3-D positions of multiple points on large space structures is described. The sensor data rate is sufficient for most control purposes. Range is determined by time-of-flight correlation of short laser pulses returned from retroreflector targets using a streak tube/CCD detector. Angular position is determined from target image locations on a second CCD. Experimental verification of dynamic ranging to multiple targets is discussed.

  20. 3D Radiative Transfer Effects in Multi-Angle/Multi-Spectral Radio-Polarimetric Signals from a Mixture of Clouds and Aerosols Viewed by a Non-Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Davis, Anthony B.; Garay, Michael J.; Xu, Feng; Qu, Zheng; Emde, Claudia

    2013-01-01

    When observing a spatially complex mix of aerosols and clouds in a single, relatively large field-of-view, nature entangles their signals non-linearly through polarized radiation transport processes that unfold in the 3D position and direction spaces. In contrast, any practical forward model in a retrieval algorithm will use only 1D vector radiative transfer (vRT) in a linear mixing technique. We assess the difference between the observed and predicted signals using synthetic data from a high-fidelity 3D vRT model with clouds generated using a Large Eddy Simulation model and an aerosol climatology. We find that this difference is signal, not noise, for the Aerosol Polarimetry Sensor (APS), an instrument developed by NASA. Moreover, the worst-case scenario is also the most interesting case, namely, when the aerosol burden is large and hence has the most impact on the cloud microphysics and dynamics. Based on our findings, we formulate a mitigation strategy for these unresolved cloud adjacency effects, assuming that some spatial information is available about the structure of the clouds at higher resolution from "context" cameras, as was planned for NASA's ill-fated Glory mission that was to carry the APS but failed to reach orbit. Application to POLDER (POLarization and Directionality of Earth Reflectances) data from the period when PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) was in the A-train is briefly discussed.

  1. Navigation Doppler Lidar Sensor for Precision Altitude and Vector Velocity Measurements Flight Test Results

    NASA Technical Reports Server (NTRS)

    Pierrottet, Diego F.; Lockhard, George; Amzajerdian, Farzin; Petway, Larry B.; Barnes, Bruce; Hines, Glenn D.

    2011-01-01

    An all-fiber Navigation Doppler Lidar (NDL) system is under development at NASA Langley Research Center (LaRC) for precision descent and landing applications on planetary bodies. The sensor produces high-resolution line-of-sight range, altitude above ground, ground-relative attitude, and high-precision velocity vector measurements. Previous helicopter flight test results demonstrated the NDL measurement concepts, including measurement precision, accuracy, and operational range. This paper discusses the results obtained from a recent campaign to test the improved sensor hardware and various signal processing algorithms applicable to real-time processing. The NDL was mounted in an instrumentation pod aboard an Erickson Air-Crane helicopter and flown over vegetation-free terrain. The sensor was one of several tested in this field campaign by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project.

  2. On non-invasive 2D and 3D Chromatic White Light image sensors for age determination of latent fingerprints.

    PubMed

    Merkel, Ronny; Gruhn, Stefan; Dittmann, Jana; Vielhauer, Claus; Bräutigam, Anja

    2012-10-10

    The feasibility of using 2D-intensity and 3D-topography images from a non-invasive Chromatic White Light (CWL) sensor for the age determination of latent fingerprints is investigated. The proposed method might provide the means to solve the so far unresolved issue of determining a fingerprint's age in forensics. In numerous experiments for an indoor crime scene using selected surfaces, different influences on the aging of fingerprints are investigated and the resulting aging variability is determined in terms of inter-person, intra-person, inter-finger and intra-finger variation. The main influence factors are shown to be sweat composition, temperature, humidity, wind, UV radiation, surface type, contamination of the finger with water-containing substances, resolution and measured area size, whereas contact time, contact pressure and smearing of the print seem to be of minor importance. Such influences lead to a certain experimental variability in inter-person and intra-person variation, which is higher than the inter-finger and intra-finger variation. Comparing the aging behavior of 17 different features using 1490 time series with a total of 41,520 fingerprint images, the great potential of the CWL technique in combination with the binary pixel feature from prior work is shown. In three different experiments classifying fingerprints into the two time classes [0, 5 h] and [5, 24 h], a maximum classification performance of 79.29% (kappa = 0.46) is achieved for the general case, and this is further improved for special cases. The statistical significance of the two best-performing features (both binary pixel versions based on 2D-intensity images) is shown, and a feature fusion is performed, highlighting the strong dependency of the features on each other. It is concluded that such a method might be combined with additional capturing devices, such as microscopes or spectroscopes, into a very promising age estimation scheme. PMID:22658793
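
    The kappa statistic quoted above corrects the raw classification accuracy for chance agreement. A minimal sketch of its computation for the two-class case (illustrative only; the study's data are not reproduced):

      def cohens_kappa(y_true, y_pred):
          """Cohen's kappa, e.g. for the time classes [0, 5 h] vs [5, 24 h]."""
          n = len(y_true)
          p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n   # observed
          labels = set(y_true) | set(y_pred)
          p_e = sum((y_true.count(c) / n) * (y_pred.count(c) / n)
                    for c in labels)                              # by chance
          return (p_o - p_e) / (1.0 - p_e)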

  4. A fluorescence LIDAR sensor for hyper-spectral time-resolved remote sensing and mapping.

    PubMed

    Palombi, Lorenzo; Alderighi, Daniele; Cecchi, Giovanna; Raimondi, Valentina; Toci, Guido; Lognoli, David

    2013-06-17

    In this work we present a LIDAR sensor devised for the acquisition of time-resolved laser-induced fluorescence spectra. The gating time for the acquisition of the fluorescence spectra can be sequentially delayed in order to obtain fluorescence data that are resolved in both the spectral and temporal domains. The sensor can provide sub-nanometric spectral resolution and nanosecond time resolution. The sensor also has imaging capabilities by means of a computer-controlled motorized steering mirror featuring biaxial angular scanning with 200 μrad angular resolution. The measurement can be repeated for each point of a geometric grid in order to collect a hyper-spectral time-resolved map of an extended target.

  6. Doppler Lidar Sensor for Precision Landing on the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Petway, Larry; Hines, Glenn; Barnes, Bruce; Pierrottet, Diego; Lockhard, George

    2012-01-01

    Landing mission concepts that are being developed for exploration of planetary bodies are increasingly ambitious in their implementations and objectives. Most of these missions require accurate position and velocity data during their descent phase in order to ensure a safe, soft landing at the pre-designated site. To address this need, a Doppler lidar is being developed by NASA under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project. This lidar sensor is a versatile instrument capable of providing precision velocity vectors, vehicle ground-relative altitude, and attitude. The capabilities of this advanced technology have been demonstrated through two helicopter flight test campaigns conducted over vegetation-free terrain in 2008 and 2010. Presently, a prototype version of this sensor is being assembled for integration into a rocket-powered terrestrial free-flyer vehicle. Operating in closed loop with the vehicle's guidance and navigation system, this advanced sensor's viability for future landing missions will be demonstrated through a series of flight tests in 2012.

  7. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information on the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor-advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increases. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at the scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor at a highly sloped glacial basin site devoid of vegetation, modeled vertical errors reached over 2 m. Validation of the error models within the glacial environment, over three separate flight lines, showed that 100%, 85%, and 75% of elevation residuals, respectively, fell below the error predictions. Future
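
    The flavor of such first-order error propagation can be shown with a toy vertical-error model (made-up error magnitudes, chosen only to land in the same few-centimetre regime as the runway figures above; the paper's model covers the full GPS/IMU/scanner/ranger budget and terrain slope):

      import numpy as np

      def vertical_sigma(r, theta, sigma_r=0.03, sigma_theta=np.radians(0.01),
                         sigma_gps=0.04):
          """First-order propagation of ranging, scan-angle, and GPS errors
          into the vertical coordinate z = h - r*cos(theta) of a return over
          flat terrain. All error magnitudes here are hypothetical."""
          dz_dr = np.cos(theta)
          dz_dtheta = r * np.sin(theta)
          return np.sqrt((dz_dr * sigma_r) ** 2 +
                         (dz_dtheta * sigma_theta) ** 2 +
                         sigma_gps ** 2)

      # nadir vs. 15 deg swath edge at 1200 m altitude (slant range h/cos):
      h = 1200.0
      for theta in (0.0, np.radians(15.0)):
          print(np.degrees(theta), vertical_sigma(h / np.cos(theta), theta))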

  8. Efficient workflows for 3D building full-color model reconstruction using LIDAR long-range laser and image-based modeling techniques

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong

    2005-01-01

    Two efficient workflows are developed for the reconstruction of a 3D full-color building model. One uses a point-wise sensing device to sample an unknown object densely and attaches color textures from a digital camera separately. The other uses an image-based approach to reconstruct the model with the color texture attached automatically. The point-wise sensing device reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point cloud data are then glued together using a common face between two consecutive views. Typical overlapping-mesh removal and coarsening procedures are adapted to generate a unified 3D mesh shell structure. A post-processing step then combines the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv-mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in a group are then adjusted using the uv map as guidance. The final assembled image is then glued back onto the 3D mesh to present a full-color building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view-planning algorithm is developed to guide the photo-taking procedure. This algorithm generates a minimum set of view angles; the set of pictures taken at these view angles guarantees that each model face shows up in at least two of the pictures and no more than three. The 3D model can then be reconstructed with a minimum amount of labor spent correlating picture pairs. The finished model is compared with the original object in both topological and dimensional aspects. All the test cases show exact same topology and
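
    The coverage goal of the view-planning step, every face visible in at least two selected views, can be sketched as a greedy set-multicover heuristic (an illustrative stand-in for the paper's algorithm; it does not enforce the "no more than three" upper bound):

      def plan_views(view_faces, k=2):
          """Greedily pick views so every model face appears in at least k
          chosen views. view_faces[i] is the set of faces visible from
          candidate view i."""
          need = {f: k for faces in view_faces for f in faces}
          remaining = set(range(len(view_faces)))
          chosen = []
          while remaining and any(n > 0 for n in need.values()):
              gain = lambda j: sum(need[f] > 0 for f in view_faces[j])
              best = max(remaining, key=gain)
              if gain(best) == 0:
                  break                 # leftover faces are not coverable
              remaining.remove(best)
              chosen.append(best)
              for f in view_faces[best]:
                  if need[f] > 0:
                      need[f] -= 1
          return chosen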

  10. A Distributed Fiber Optic Sensor Network for Online 3-D Temperature and Neutron Fluence Mapping in a VHTR Environment

    SciTech Connect

    Tsvetkov, Pavel; Dickerson, Bryan; French, Joseph; McEachern, Donald; Ougouag, Abderrafi

    2014-04-30

    Robust sensing technologies allowing for 3D in-core performance monitoring in real time are of paramount importance for established LWRs, enhancing their reliability and yearly availability and thereby further facilitating their economic competitiveness via predictive assessment of in-core conditions.

  11. A heterogeneous sensor network simulation system with integrated terrain data for real-time target detection in 3D space

    NASA Astrophysics Data System (ADS)

    Lin, Hong; Tanner, Steve; Rushing, John; Graves, Sara; Criswell, Evans

    2008-03-01

    Large-scale sensor networks composed of many low-cost small sensors networked together with a small number of high-fidelity position sensors can provide a robust, fast and accurate air defense and warning system. The team has been developing simulations of such large networks, and is now adding terrain data in an effort to provide a more realistic analysis of the approach. In this work, a heterogeneous sensor network simulation system with integrated terrain data for real-time target detection in a three-dimensional environment is presented. The sensor network can be composed of large numbers of low-fidelity binary and bearing-only sensors and small numbers of high-fidelity position sensors, such as radars. The binary and bearing-only sensors are randomly distributed over a large geographic region, while the position sensors are distributed evenly. The elevations of the sensors are determined using the DTED Level 0 dataset. The targets are located by fusing measurement information from all types of sensors modeled by the simulation. The network simulation utilizes the same search-based optimization algorithm as our previous two-dimensional sensor network simulation, with some significant modifications. The fusion algorithm is parallelized using a spatial decomposition approach: the entire surveillance area is divided into small regions and each region is assigned to one compute node. Each node processes sensor measurements and terrain data only for its assigned sub-region. A master process combines the information from all the compute nodes to obtain the overall network state. The simulation results indicate that the distributed fusion algorithm is efficient enough that an optimal solution can be reached before the arrival of the next sensor data at a reasonable time interval, so that real-time target detection can be achieved. The simulation was performed on a Linux cluster with communication between nodes facilitated by the Message Passing Interface

  12. Simulation of 3-D electromagnetic fields near capacitance sensors. CRADA final report for CRADA Number Y-1294-0306

    SciTech Connect

    Gray, L.J.; Morris, M.D.; Semeraro, B.D.; Cooper, E.

    1996-09-30

    Computer Application Systems, Inc. is currently developing a capaciflector sensor for a variety of commercial applications, e.g., object detection in robotics. The goal of this project was to create computational tools for simulating the performance of this device. The role of modeling is to provide a quantitative understanding of how the sensor works and to assist in designing optimal sensor configurations for specific applications. A two-dimensional boundary integral code for determining the electric field was constructed, and a novel algorithm for solving the inverse design problem was investigated. Parallel implementation of the code, which will be required for detailed three-dimensional analysis, was also investigated.

  13. Automatic Construction of 3D Basic-Semantic Models of Inhabited Interiors Using Laser Scanners and RFID Sensors

    PubMed Central

    Valero, Enrique; Adan, Antonio; Cerrada, Carlos

    2012-01-01

    This paper is focused on the automatic construction of 3D basic-semantic models of inhabited interiors using laser scanners with the help of RFID technologies. This is an innovative approach in a field where few publications exist. The general strategy consists of carrying out a selective and sequential segmentation of the point cloud by means of different algorithms that depend on the information provided by the RFID tags. The identification of basic elements of the scene, such as walls, floor, ceiling, windows, doors, tables, chairs and cabinets, and the positioning of their corresponding models can then be calculated. The fusion of both technologies thus allows a simplified 3D semantic indoor model to be obtained. This method has been tested in real scenes under difficult clutter and occlusion conditions, and has yielded promising results. PMID:22778609

  14. Tropospheric Airborne Meteorological Data Reporting (TAMDAR) Sensor Validation and Verification on National Oceanographic and Atmospheric Administration (NOAA) Lockheed WP-3D Aircraft

    NASA Technical Reports Server (NTRS)

    Tsoucalas, George; Daniels, Taumi S.; Zysko, Jan; Anderson, Mark V.; Mulally, Daniel J.

    2010-01-01

    As part of the National Aeronautics and Space Administration's Aviation Safety and Security Program, the Tropospheric Airborne Meteorological Data Reporting project (TAMDAR) developed a low-cost sensor for aircraft flying in the lower troposphere. This activity was a joint effort with support from Federal Aviation Administration, National Oceanic and Atmospheric Administration, and industry. This paper reports the TAMDAR sensor performance validation and verification, as flown on board NOAA Lockheed WP-3D aircraft. These flight tests were conducted to assess the performance of the TAMDAR sensor for measurements of temperature, relative humidity, and wind parameters. The ultimate goal was to develop a small low-cost sensor, collect useful meteorological data, downlink the data in near real time, and use the data to improve weather forecasts. The envisioned system will initially be used on regional and package carrier aircraft. The ultimate users of the data are National Centers for Environmental Prediction forecast modelers. Other users include air traffic controllers, flight service stations, and airline weather centers. NASA worked with an industry partner to develop the sensor. Prototype sensors were subjected to numerous tests in ground and flight facilities. As a result of these earlier tests, many design improvements were made to the sensor. The results of tests on a final version of the sensor are the subject of this report. The sensor is capable of measuring temperature, relative humidity, pressure, and icing. It can compute pressure altitude, indicated air speed, true air speed, ice presence, wind speed and direction, and eddy dissipation rate. Summary results from the flight test are presented along with corroborative data from aircraft instruments.

  15. 3D Geologic and Reservoir Modelling of a Distributive Fluvial System Derived from lidar: A Case Study of the Huesca Fluvial Fan.

    NASA Astrophysics Data System (ADS)

    Burnham, Brian; Hodgetts, David; Redfern, Jonathan

    2014-05-01

    Understanding stratigraphic and depositional architecture in a fluvially dominated system is fundamental when trying to model and characterise properties such as geometric relationships, heterogeneity, lithologic patterns or trends of the system, as well as any associated petrophysical properties or behaviours. The Huesca fluvial fan, an Oligocene-Miocene age Distributive Fluvial System (DFS) in the northern extent of the Ebro Basin, is used extensively as an outcrop analogue for modelling fluvial hydrocarbon reservoirs, as well as a base for the DFS model. To further improve understanding of the system, mapping techniques using lidar integrated with Differential Global Navigation Satellite System (DGNSS) measurements were used to create sub-metre spatially accurate geologic models of the medial-distal portions of the DFS. In addition to the digital terrain data, traditional field sedimentary logs, structural and palaeocurrent measurements, and samples for petrophysical analysis were also collected near the town of Piracés, in a series of amphitheatres and canal cuts that expose excellent two- and three-dimensional views of the strata. The geologic models and subsequent analyses derived from the data provide a quantitative tool to further understand the depositional architecture, geometric relationships and lithologic characteristics across the studied portion of the distributive fluvial system. Utilizing the inherent quantitative nature of the terrain data in combination with the traditional field and sample data collected, an outcrop-based geocellular model of the studied section can be constructed using several geostatistical modelling approaches to describe geo-body geometries (thickness and width ratio) for the associated fluvial architecture, as well as facies distribution and observed petrophysical characteristics. The resolution of the digital terrain data (<10 cm) allowed for an accurate integration of the field observations (palaeoflow

  16. Lidar Sensor Performance in Closed-Loop Flight Testing of the Morpheus Rocket-Propelled Lander to a Lunar-Like Hazard Field

    NASA Technical Reports Server (NTRS)

    Roback, V. Eric; Pierrottet, Diego F.; Amzajerdian, Farzin; Barnes, Bruce W.; Bulyshev, Alexander E.; Hines, Glenn D.; Petway, Larry B.; Brewster, Paul F.; Kempton, Kevin S.

    2015-01-01

    For the first time, a suite of three lidar sensors has been used in flight to scan a lunar-like hazard field, identify a safe landing site, and, in concert with an experimental Guidance, Navigation, and Control (GN&C) system, help to guide the Morpheus autonomous, rocket-propelled, free-flying lander to that safe site on the hazard field. The lidar sensors and GN&C system are part of the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project, which has been seeking to develop a system capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The 3-D imaging Flash Lidar is a second-generation, compact, real-time, air-cooled instrument developed from a number of components from industry and NASA, and is used as part of the ALHAT Hazard Detection System (HDS) to scan the hazard field and build a 3-D Digital Elevation Map (DEM) in near-real time for identifying safe sites. The Flash Lidar is capable of identifying a 30 cm hazard from a slant range of 1 km with its 8 cm range precision (1-σ). The Flash Lidar is also used in Hazard Relative Navigation (HRN) to provide position updates down to a 250 m slant range to the ALHAT navigation filter as it guides Morpheus to the safe site. The Navigation Doppler Lidar (NDL) system has been developed within NASA to provide velocity measurements with an accuracy of 0.2 cm/sec and range measurements with an accuracy of 17 cm, both from a maximum range of 2,200 m down to a minimum range of several meters above the ground. The NDL's measurements are fed into the ALHAT navigation filter to provide lander guidance to the safe site. The Laser Altimeter (LA), also developed within NASA, provides range measurements with an accuracy of 5 cm from a maximum operational range of 30 km down to 1 m and, being a separate sensor from the Flash Lidar, can provide range along a separate vector. The LA measurements are also fed
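
    The Hazard Detection System reduces the Flash Lidar DEM to a map of safe cells. A toy version of such a hazard screen, thresholding local slope and roughness, is sketched below (illustrative only; the ALHAT HDS uses lander-scale plane fits and more elaborate criteria, and the parameter values here are hypothetical):

      import numpy as np
      from scipy.ndimage import uniform_filter

      def safe_site_mask(dem, cell=0.1, max_slope_deg=10.0, max_rough_m=0.30):
          """Flag DEM cells whose local slope and roughness (deviation from a
          3x3 mean surface) are within landing limits.
          dem: 2-D elevation grid (m); cell: grid spacing (m)."""
          gy, gx = np.gradient(dem, cell)                  # elevation gradients
          slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
          rough_m = np.abs(dem - uniform_filter(dem, size=3))
          return (slope_deg <= max_slope_deg) & (rough_m <= max_rough_m)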

  17. Neutron measurements with ultra-thin 3D silicon sensors in a radiotherapy treatment room using a Siemens PRIMUS linac

    NASA Astrophysics Data System (ADS)

    Guardiola, C.; Gómez, F.; Fleta, C.; Rodríguez, J.; Quirion, D.; Pellegrini, G.; Lousa, A.; Martínez-de-Olcoz, L.; Pombar, M.; Lozano, M.

    2013-05-01

    The accurate detection and dosimetry of neutrons in mixed and pulsed radiation fields is a demanding instrumental issue of great interest to both the industrial and medical communities. In recent studies of neutron contamination around medical linacs, there is growing concern about the secondary cancer risk for radiotherapy patients undergoing treatment in photon modalities at energies greater than 6 MV. In this work we present a promising alternative to standard detectors: an active method to measure neutrons around a medical linac using a novel ultra-thin silicon detector with 3D electrodes adapted for neutron detection. The active volume of this planar device is only 10 µm thick, allowing a high gamma rejection, which is necessary to discriminate the neutron signal in the radiotherapy peripheral radiation field with its high gamma background. Different tests have been performed in a clinical facility using a Siemens PRIMUS linac at 6 and 15 MV. The results show a good thermal neutron detection efficiency of around 2% and a high gamma rejection factor.

  18. Facile synthesis of novel 3D nanoflower-like Cu(x)O/multilayer graphene composites for room temperature NO(x) gas sensor application.

    PubMed

    Yang, Ying; Tian, Chungui; Wang, Jingchao; Sun, Li; Shi, Keying; Zhou, Wei; Fu, Honggang

    2014-07-01

    3D nanoflower-like CuxO/multilayer graphene composites (CuMGCs) have been successfully synthesized as a new type of room-temperature NOx gas sensor. Firstly, the expanded graphite (EG) was activated by KOH, generating a moderate number of functional groups; secondly, Cu(CH3COO)2 and CTAB were fully infused into the interlayers of the activated EG (aEG) by means of a vacuum-assisted technique and then reacted with the functional groups of the aEG, accompanied by the exfoliation of the aEG via reflux. Eventually, 3D nanoflowers consisting of 5-9 nm CuxO nanoparticles grow homogeneously in situ on the aEG. The KOH activation of EG plays a key role in the uniform formation of the CuMGCs. When used as gas sensors for the detection of NOx, the CuMGCs achieved a higher response at room temperature than the corresponding CuxO. In detail, the CuMGCs show high NOx gas-sensing performance, with a low detection limit of 97 ppb, a high gas response of 95.1% and a short response time of 9.6 s to 97.0 ppm NOx at room temperature. Meanwhile, the CuMGC sensor presents favorable linearity, good selectivity and stability. The enhancement of the sensing response is mainly attributed to the improved conductivity of the CuMGCs. A series of Mott-Schottky and EIS measurements demonstrated that the CuMGCs have much higher donor densities than CuxO and can easily capture and migrate electrons from the conduction band, resulting in the enhancement of electrical conductivity.

  19. Tooteko: a Case Study of Augmented Reality for an Accessible Cultural Heritage. Digitization, 3D Printing and Sensors for an Audio-Tactile Experience

    NASA Astrophysics Data System (ADS)

    D'Agnano, F.; Balletti, C.; Guerra, F.; Vernier, P.

    2015-02-01

    Tooteko is a smart ring that allows the user to navigate any 3D surface with the fingertips and receive in return audio content that is relevant to the part of the surface being touched at that moment. Tooteko can be applied to any tactile surface, object or sheet. In a more specific domain, however, it aims to make traditional art venues accessible to the blind, while supporting the reading of the work for all visitors through the recovery of the tactile dimension, in order to facilitate an experience of contact with art that is not only "under glass." The system is made of three elements: a high-tech ring, a tactile surface tagged with NFC sensors, and an app for tablet or smartphone. The ring detects and reads the NFC tags and, thanks to the Tooteko app, communicates wirelessly with the smart device. During tactile navigation of the surface, when the finger reaches a hotspot, the ring identifies the NFC tag and activates, through the app, the audio track related to that specific hotspot. Thus, relevant audio content is associated with each hotspot. The production process of the tactile surfaces involves scanning, digitization of the data and 3D printing. The first experiment was modelled on the facade of the church of San Michele in Isola, built by Mauro Codussi in the late fifteenth century, which marks the beginning of the Renaissance in Venice. Given the absence of recent documentation on the church, the Correr Museum asked the Laboratorio di Fotogrammetria to provide it, with the aim of setting up an exhibition about the order of the Camaldolesi, owners of the San Michele island and church. The Laboratorio carried out the survey of the facade through laser scanning and UAV photogrammetry. The point clouds were the starting point for prototyping and 3D printing on different supports. The idea of the integration between a 3D-printed tactile surface and sensors was born as a final thesis project at the Postgraduate Mastercourse in Digital

  20. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through the receiver of the sensor, and the return signal, together with the position and orientation information of the sensor, is recorded. These recorded data are processed with GNSS/IMU data in post-processing, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne lidar sensor, the Optech Titan system is capable of collecting point cloud data in all three channels: at 532 nm visible (green), at 1064 nm near-infrared (NIR) and at 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object-Based Image Analysis (OBIA) approach that uses only multispectral lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types; over 90% overall accuracy is achieved using multispectral lidar point clouds for 3D land cover classification.
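
    The accuracy assessment in the final step reduces to a confusion matrix between reference sample points and classified labels. A minimal sketch (hypothetical label arrays, not the study's data):

      import numpy as np

      def overall_accuracy(ref_labels, map_labels, n_classes):
          """Confusion matrix (rows: reference class, columns: mapped class)
          and overall accuracy from reference sample points."""
          cm = np.zeros((n_classes, n_classes), dtype=int)
          for r, m in zip(ref_labels, map_labels):
              cm[r, m] += 1
          return np.trace(cm) / cm.sum(), cm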

  1. System performance and modeling of a bioaerosol detection lidar sensor utilizing polarization diversity

    NASA Astrophysics Data System (ADS)

    Glennon, John J.; Nichols, Terry; Gatt, Phillip; Baynard, Tahllee; Marquardt, John H.; Vanderbeek, Richard G.

    2009-05-01

    The weaponization and dissemination of biological warfare agents (BWAs) constitute a high threat to civilians and military personnel. An aerosol release, disseminated from a single point, can directly affect large areas and many people in a short time. Because of this threat, real-time standoff detection of BWAs is a key requirement for national and military security. BWAs are a general class of material that can refer to spores, bacteria, toxins, or viruses. These bioaerosols have a tremendous size, shape, and chemical diversity that, at present, is not well characterized [1]. Lockheed Martin Coherent Technologies (LMCT) has developed a standoff lidar sensor with high sensitivity and robust discrimination capabilities, with a size and ruggedness appropriate for military use. This technology utilizes multiwavelength backscatter polarization diversity to discriminate between biological threats and naturally occurring interferents such as dust, smoke, and pollen. The optical design and hardware selection of the system have been driven by performance modeling, leading to an understanding of measured system sensitivity. Here we briefly discuss the challenges of standoff bioaerosol discrimination and the approach used by LMCT to overcome these challenges. We review the radiometric calculations involved in modeling direct detection of a distributed aerosol target and methods for accurately estimating wavelength-dependent plume backscatter coefficients. Key model parameters and their validation are discussed and outlined. Metrics for sensor sensitivity are defined, modeled, and compared directly to data taken at Dugway Proving Ground, UT, in 2008. Sensor sensitivity is modeled to predict performance changes between day and night operation and in various challenging environmental conditions.
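
    The radiometric modeling of direct detection rests on the single-scatter elastic lidar equation; a generic textbook form is sketched below (an illustration, not LMCT's system model):

      import numpy as np

      def received_power(E0, R, beta, alpha, A, eta=0.5):
          """Single-scatter elastic lidar equation for direct detection:
          E0 pulse energy [J], R range [m], beta volume backscatter
          [1/(m sr)], alpha extinction [1/m] (assumed uniform along the path),
          A receiver aperture area [m^2], eta system efficiency.
          Returns received power [W]."""
          c = 3.0e8                            # speed of light, m/s
          T2 = np.exp(-2.0 * alpha * R)        # two-way transmission
          return E0 * (c / 2.0) * eta * (A / R**2) * beta * T2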

  2. 3D fiber-based hybrid nanogenerator for energy harvesting and as a self-powered pressure sensor.

    PubMed

    Li, Xiuhan; Lin, Zong-Hong; Cheng, Gang; Wen, Xiaonan; Liu, Ying; Niu, Simiao; Wang, Zhong Lin

    2014-10-28

    In the past few years, scientists have shown that development of a power suit is no longer a dream, by integrating the piezoelectric nanogenerator (PENG) or triboelectric nanogenerator (TENG) with commercial carbon fiber cloth. However, there has been no design applying these two kinds of nanogenerator together to collect mechanical energy more efficiently. In this paper, we demonstrate a fiber-based hybrid nanogenerator (FBHNG) composed of a TENG and a PENG to collect mechanical energy from the environment. The FBHNG is three-dimensional and can harvest energy from all directions. The TENG is positioned in the core and covered with the PENG in a coaxial core/shell structure. The PENG design here not only enhances the collection efficiency of mechanical energy by a single carbon fiber but also generates electric output when the TENG is not working. We also show the potential for the FBHNG to be woven into a smart cloth to harvest mechanical energy from human motions and act as a self-powered strain sensor. The instantaneous output power densities of the TENG and PENG can reach 42.6 and 10.2 mW/m², respectively. The rectified output of the FBHNG has been applied to charge a commercial capacitor and drive light-emitting diodes, which are also configured as a self-powered alert system. PMID:25268317

  3. 3D integration approaches for MEMS and CMOS sensors based on a Cu through-silicon-via technology and wafer level bonding

    NASA Astrophysics Data System (ADS)

    Hofmann, L.; Dempwolf, S.; Reuter, D.; Ecke, R.; Gottfried, K.; Schulz, S. E.; Knechtel, R.; Geßner, T.

    2015-05-01

    Technologies for 3D integration are described in this paper with respect to devices that must retain a specific minimum wafer thickness for handling purposes (CMOS) and for the integrity of mechanical elements (MEMS). This implies Through-Silicon Vias (TSVs) with large dimensions and high aspect ratios (HAR). Moreover, as a main objective, the TSV technology had to be universal and scalable, for designated use in a MEMS/CMOS foundry. Two TSV approaches are investigated and discussed, in which the TSVs are fabricated either before or after wafer thinning. One distinctive feature is an incomplete Cu filling of the TSVs, which avoids long processing times and complex process control while minimizing the thermomechanical stress between Cu and Si and related adverse effects in the device. However, the incomplete filling also poses various process-integration challenges. A method based on pattern plating is described, in which TSVs are metallized at the same time as the redistribution layer, eliminating the need for additional planarization and patterning steps. For MEMS, the realization of a protective, hermetically sealed capping is crucial; this is addressed in this paper by glass frit wafer-level bonding and is discussed for hermetic sealing of MEMS inertial sensors. The TSV-based 3D integration technologies are demonstrated on a CMOS-like test vehicle and on a MEMS device fabricated in Air Gap Insulated Microstructure (AIM) technology.

  4. Compact high-speed scanning lidar system

    NASA Astrophysics Data System (ADS)

    Dickinson, Cameron; Hussein, Marwan; Tripp, Jeff; Nimelman, Manny; Koujelev, Alexander

    2012-06-01

    The compact High Speed Scanning Lidar (HSSL) was designed to meet the requirements for a rover GN&C sensor. The eye-safe HSSL's fast scanning speed, low volume, and low power make it an ideal choice for a variety of real-time and non-real-time applications, including 3D mapping, vehicle guidance and navigation, obstacle detection, orbiter rendezvous, and spacecraft landing / hazard avoidance. The HSSL comprises two main hardware units: a Sensor Head and a Control Unit. In a rover application, the Sensor Head mounts on top of the rover, while the Control Unit can be mounted on the rover deck or within its avionics bay. An operator computer is used to command the lidar and immediately display the acquired scan data. The innovative lidar design concept resulted from an extensive trade study conducted during the initial phase of an exploration rover program. The lidar utilizes an innovative scanner coupled with a compact fiber laser and high-speed timing electronics. Compared to existing compact lidar systems, distinguishing features of the HSSL include its high accuracy, high resolution, high refresh rate, and large field of view. Other benefits of this design include the capability to quickly configure scan settings to fit various operational modes.

  5. Tracking Efficiency And Charge Sharing of 3D Silicon Sensors at Different Angles in a 1.4T Magnetic Field

    SciTech Connect

    Gjersdal, H.; Bolle, E.; Borri, M.; Da Via, C.; Dorholt, O.; Fazio, S.; Grenier, P.; Grinstein, S.; Hansson, P.; Hasi, J.; Hugging, F.; Jackson, P.; Kenney, C.; Kocian, M.; La Rosa, A.; Mastroberardino, A.; Nordahl, P.; Rivero, F.; Rohne, O.; Sandaker, H.; Sjobaek, K.

    2012-05-07

    A 3D silicon sensor fabricated at Stanford, with electrodes penetrating through the entire silicon wafer and with active edges, was tested in a 1.4 T magnetic field with a 180 GeV/c pion beam at the CERN SPS in May 2009. The device under test was bump-bonded to the ATLAS pixel FE-I3 readout electronics chip. Three readout electrodes were used to cover the 400 µm long pixel side, resulting in a p-n inter-electrode distance of ≈71 µm. Its behavior was compared with that of a planar sensor of the type presently installed in the ATLAS inner tracker. Time-over-threshold, charge sharing, and tracking efficiency data were collected at zero and 15° angles, with and without the magnetic field. The latter is the angular configuration expected for the modules of the Insertable B-Layer (IBL) currently under study for the LHC phase 1 upgrade expected in 2014.

  6. Turbulent CO2 Flux Measurements by Lidar: Length Scales, Results and Comparison with In-Situ Sensors

    NASA Technical Reports Server (NTRS)

    Gilbert, Fabien; Koch, Grady J.; Beyon, Jeffrey Y.; Hilton, Timothy W.; Davis, Kenneth J.; Andrews, Arlyn; Ismail, Syed; Singh, Upendra N.

    2009-01-01

    The vertical CO2 flux in the atmospheric boundary layer (ABL) is investigated with a Doppler differential absorption lidar (DIAL). The instrument was operated next to the WLEF instrumented tall tower in Park Falls, Wisconsin, during three days and nights in June 2007. Profiles of turbulent CO2 mixing ratio and vertical velocity fluctuations were measured by in-situ sensors and by the Doppler DIAL. Time and space scales of turbulence in the ABL are precisely defined. The eddy-covariance method is applied to calculate the turbulent CO2 flux from both the lidar and the in-situ sensors. We show preliminary mean lidar CO2 flux measurements in the ABL with a time and space resolution of 6 h and 1500 m, respectively. The flux instrumental errors decrease linearly with the standard deviation of the CO2 data, as expected. Although turbulent fluctuations of CO2 are very small with respect to the mean (0.1%), we show that the eddy-covariance method can provide 2-h, 150-m range-resolved CO2 flux estimates as long as the CO2 mixing ratio instrumental error is no greater than 10 ppm and the vertical velocity error is lower than the natural fluctuations over a time resolution of 10 s.
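
    The eddy-covariance calculation referred to above is simple to state: the flux is the mean product of the fluctuations of vertical velocity and CO2 mixing ratio about their averaging-period means. A minimal sketch with synthetic data (the numbers are illustrative, not from the WLEF campaign):

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Kinematic eddy-covariance flux F = <w'c'> from synchronized series.
    w: vertical velocity [m/s]; c: CO2 mixing ratio [ppm]."""
    w_prime = w - w.mean()      # fluctuations about the averaging-period mean
    c_prime = c - c.mean()
    return (w_prime * c_prime).mean()

# Synthetic 10 Hz series over 30 min; an anticorrelation between updrafts
# and CO2 (daytime uptake) yields a negative flux.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, 18000)
c = 390.0 - 0.4 * w + rng.normal(0.0, 0.3, 18000)
print(eddy_covariance_flux(w, c))   # about -0.1 ppm m/s
```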

  7. Determination of the spatial TDR-sensor characteristics in strong dispersive subsoil using 3D-FEM frequency domain simulations in combination with microwave dielectric spectroscopy

    NASA Astrophysics Data System (ADS)

    Wagner, Norman; Trinks, Eberhard; Kupfer, Klaus

    2007-04-01

    The spatial sensor characteristics of a 6 cm TDR flat band cable sensor section were simulated with finite element modelling (High Frequency Structure Simulator, HFSS) under several conditions: (i) in direct contact with the surrounding material (air, water of different salinities, and different synthetic and natural soils (sand-silt-clay mixtures)); (ii) with consideration of a defined gap of different sizes filled with air or water; and (iii) with the cable sensor pressed against a borehole wall. The complex dielectric permittivity ε*(ω, τi) or complex electrical conductivity σ*(ω, τi) = iωε*(ω, τi) of the investigated saturated and unsaturated soils was examined in the frequency range 50 MHz-20 GHz at room temperature and atmospheric pressure with an HP8720D network analyser. Three soil-specific relaxation processes are assumed to act in the investigated frequency-temperature-pressure range: one primary α-process (main water relaxation) and two secondary (α', β) processes due to clay-water-ion interactions (bound water relaxation and the Maxwell-Wagner effect). The dielectric relaxation behaviour of each process is described using a simple fractional relaxation model. The 3D finite element simulation is performed with λ/3-based adaptive mesh refinement at solution frequencies of 1 MHz, 10 MHz, 0.1 GHz, 1 GHz and 12.5 GHz. The electromagnetic field distribution, S-parameters and step responses were examined. The simulation adequately reproduces the spatial and temporal electric and magnetic field distributions. Highly lossy soils cause, as a function of increasing gravimetric water content and bulk density, an increase in TDR signal rise time as well as strong absorption of multiple reflections. An air or water gap acts as a quasi-waveguide, i.e. the influence of the surrounding medium is strongly reduced. The corresponding TDR travel-time distortions can be quantified.

  8. Modeling Diurnal and Seasonal 3D Light Profiles in Amazon Forests

    NASA Astrophysics Data System (ADS)

    Morton, D. C.; Rubio, J.; Gastellu-Etchegorry, J.; Cook, B. D.; Hunter, M. O.; Yin, T.; Nagol, J. R.; Keller, M. M.

    2013-12-01

    The complex horizontal and vertical structure in tropical forests generates a diversity of light environments for canopy and understory trees. These 3D light profiles are dynamic on diurnal and seasonal time scales based on changes in solar illumination and the fraction of diffuse light. Understanding this variability is critical for improving ecosystem models and interpreting optical and LiDAR remote sensing data from tropical forests. Here, we initialized the Discrete Anisotropic Radiative Transfer (DART) model using dense airborne LiDAR data (>20 returns/m²) from three forest sites in the central and eastern Amazon. Forest scenes derived from airborne LiDAR data were tested using modeled and observed large-footprint LiDAR data from the ICESat-GLAS sensor. Next, diurnal and seasonal profiles of photosynthetically active radiation (PAR) for each forest site were simulated under clear sky and cloudy conditions using DART. Incident PAR was summarized for canopy, understory, and ground levels. Our study illustrates the importance of realistic canopy models for accurate representation of LiDAR and optical radiative transfer. In particular, canopy rugosity and ground topography information from airborne LiDAR data provided critical 3D information that cannot be recreated using stem maps and allometric relationships for crown dimensions. The spatial arrangement of canopy trees altered PAR availability, even for dominant individuals, compared to downwelling measurements from nearby eddy flux towers. Pseudo-realistic branch and leaf architecture was also essential for recreating multiple scattering within canopies at near-infrared wavelengths commonly used for LiDAR remote sensing and quantifying PAR attenuation from shading within and between canopies. These findings point to the need for more spatial information on forest structure to improve the representation of light availability in models of tropical forest productivity.

  9. Lidar-equipped UAV for building information modelling

    NASA Astrophysics Data System (ADS)

    Roca, D.; Armesto, J.; Lagüela, S.; Díaz-Vilariño, L.

    2014-06-01

    The trend toward miniaturization of electronic devices in recent decades applies to Unmanned Aerial Vehicles (UAVs) as well as to sensor technologies and imaging devices, resulting in a strong revolution in the surveying and mapping industries. However, only within the last few years has LIDAR sensor technology been reduced sufficiently in size and weight to be considered for UAV platforms. This paper presents an innovative solution to capture point cloud data from a lidar-equipped UAV and then perform 3D modelling of the whole envelope of buildings in BIM format. A mini-UAV platform is used (weighing less than 5 kg, with up to 1.5 kg of sensor payload), and data from two different acquisition methodologies are processed and compared with the aim of finding the optimal configuration for generating 3D models of buildings for energy studies.

  10. Comparison of Riparian Evapotranspiration Estimated Using Raman LIDAR and Water Balance Based Estimates from a Soil Moisture Sensor Network

    NASA Astrophysics Data System (ADS)

    Solis, J. A.; Rajaram, H.; Whittemore, D. O.; Butler, J. J.; Eichinger, W. E.; Reboulet, E. C.

    2013-12-01

    Riparian evapotranspiration (RET) is an important component of basin-wide evapotranspiration (ET), especially in subhumid to semi-arid regions, with significant impacts on water management and conservation. A common method of measuring ET is the eddy correlation technique. However, since most riparian zones are narrow, eddy correlation techniques are not applicable because of limited fetch distance. Techniques based on surface-subsurface water balance are applicable in these situations, but their accuracy is not well constrained. In this study, we estimated RET within a 100-meter-long and 40-meter-wide riparian zone along Rock Creek in the Whitewater Basin in central Kansas using a water balance approach and Raman LIDAR measurements. A total of six soil moisture profiles (with six soil moisture sensors in each profile) and water-table measurements were used to estimate subsurface water storage (total soil moisture, TSM). The Los Alamos National Laboratory (LANL)-University of Iowa (UI) Raman LIDAR was used to measure water vapor concentrations in three dimensions, and Monin-Obukhov similarity theory was used to obtain the spatially resolved fluxes. The LIDAR system included a 1.064 micron Nd:YAG laser with a Cassegrain telescope, pulsed at 50 Hz with 25 mJ of energy per pulse. Estimates of RET obtained from TSM changes were compared to LIDAR estimates obtained from three-dimensional water vapor concentrations of the atmosphere directly above and downwind of the riparian vegetation. The LIDAR measurements help to validate the TSM-based estimates of RET and constrain their accuracy. RET estimates obtained from TSM changes in individual soil moisture profiles exhibited a large variability (up to a factor of 8). This variability results from the highly heterogeneous soils in the vadose zone (2-3 m thick), where soil moisture (rather than groundwater) is the major source of water for riparian vegetation. Variable vegetation density and species also

  11. High-Fidelity Flash Lidar Model Development

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Pierrottet, Diego F.; Amzajerdian, Farzin

    2014-01-01

    NASA's Autonomous Landing and Hazard Avoidance Technologies (ALHAT) project is currently developing the critical technologies to safely and precisely navigate and land crew, cargo and robotic spacecraft vehicles on and around planetary bodies. One key element of this project is a high-fidelity Flash Lidar sensor that can generate three-dimensional (3-D) images of the planetary surface. These images are processed with hazard detection and avoidance and hazard-relative navigation algorithms, and are subsequently used by the Guidance, Navigation and Control subsystem to generate an optimal navigation solution. A complex, high-fidelity model of the Flash Lidar was developed in order to evaluate the performance of the sensor and its interaction with the interfacing ALHAT components on vehicles with different configurations and under different flight trajectories. The model contains a parameterized, general approach to Flash Lidar detection and reflects physical attributes such as range and electronic noise sources, and laser pulse temporal and spatial profiles. It also provides realistic interaction of the laser pulse with terrain features that include varying albedo, boulders, craters, slopes and shadows. This paper gives a description of the Flash Lidar model and presents results from the Lidar operating under different scenarios.
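
    As a rough illustration of what such a sensor model captures, the toy sketch below produces a per-pixel range image with Gaussian range noise and a 1/R² intensity falloff with additive electronic noise. It is not the ALHAT high-fidelity model; every parameter here is invented for illustration.

```python
import numpy as np

def simulate_flash_lidar_frame(true_range, albedo, sigma_range=0.05,
                               noise_frac=0.01, rng=None):
    """Toy per-pixel flash-lidar frame: noisy range plus 1/R^2 intensity.
    true_range, albedo: (H, W) arrays; sigma_range: 1-sigma range error [m]."""
    rng = rng or np.random.default_rng()
    measured_range = true_range + rng.normal(0.0, sigma_range, true_range.shape)
    intensity = albedo / true_range**2           # flood-illumination falloff
    intensity += rng.normal(0.0, noise_frac * intensity.max(), intensity.shape)
    return measured_range, intensity
```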

  12. Lidar Systems for Precision Navigation and Safe Landing on Planetary Bodies

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Pierrottet, Diego F.; Petway, Larry B.; Hines, Glenn D.; Roback, Vincent E.

    2011-01-01

    The ability of lidar technology to provide three-dimensional elevation maps of the terrain, high-precision distance to the ground, and approach velocity can enable safe landing of robotic and manned vehicles with a high degree of precision. Currently, NASA is developing novel lidar sensors aimed at the needs of future planetary landing missions. These lidar sensors are a 3-Dimensional Imaging Flash Lidar, a Doppler Lidar, and a Laser Altimeter. The Flash Lidar is capable of generating elevation maps of the terrain that indicate hazardous features such as rocks, craters, and steep slopes. The elevation maps collected during the approach phase of a landing vehicle, at about 1 km above the ground, can be used to determine the most suitable safe landing site. The Doppler Lidar provides highly accurate ground-relative velocity and distance data, allowing for precision navigation to the landing site. Our Doppler lidar utilizes three laser beams pointed in different directions to measure line-of-sight velocities and ranges to the ground from altitudes of over 2 km. Throughout the landing trajectory, starting at altitudes of about 20 km, the Laser Altimeter can provide very accurate ground-relative altitude measurements that are used to improve the vehicle position knowledge obtained from the vehicle navigation system. At altitudes from approximately 15 km to 10 km, either the Laser Altimeter or the Flash Lidar can be used to generate contour maps of the terrain, identifying known surface features such as craters, to perform Terrain Relative Navigation and thus further reduce the vehicle's relative position error. This paper describes the operational capabilities of each lidar sensor and provides a status of their development. Keywords: Laser Remote Sensing, Laser Radar, Doppler Lidar, Flash Lidar, 3-D Imaging, Laser Altimeter, Precision Landing, Hazard Detection

  13. Spatially resolved 3D noise

    NASA Astrophysics Data System (ADS)

    Haefner, David P.; Preece, Bradley L.; Doe, Joshua M.; Burks, Stephen D.

    2016-05-01

    When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density (PSD) for noise in imaging systems, known as 3D noise. In this correspondence, we describe how the confidence intervals for the 3D noise measurement allow determination of the sampling necessary to reach a desired precision. We then apply that knowledge to create a smaller cube that can be evaluated spatially across the 2D image, giving the noise as a function of position. The method presented here allows for defective pixel identification and implements the finite-sampling correction matrix. In support of the reproducible-research effort, the Matlab functions associated with this work can be found on the MathWorks File Exchange [1].
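
    A minimal sketch of the classic 3D noise decomposition is given below, assuming a data cube ordered (frames, rows, columns). For brevity the tv and th cross terms are folded into the residual, and the finite-sampling correction discussed above is omitted.

```python
import numpy as np

def three_d_noise(cube):
    """3D noise components from a (frames, rows, cols) data cube via
    directional averaging; simplified relative to the full NVESD model."""
    cube = cube - cube.mean()
    t = cube.mean(axis=(1, 2))                         # temporal component
    v = cube.mean(axis=(0, 2))                         # fixed row pattern
    h = cube.mean(axis=(0, 1))                         # fixed column pattern
    vh = cube.mean(axis=0) - v[:, None] - h[None, :]   # fixed spatial pattern
    resid = (cube - t[:, None, None] - v[None, :, None]
             - h[None, None, :] - vh[None, :, :])      # random spatio-temporal
    return {"sigma_t": t.std(), "sigma_v": v.std(), "sigma_h": h.std(),
            "sigma_vh": vh.std(), "sigma_tvh": resid.std()}
```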

  14. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three-dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. Current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data-adaptive solutions for 3D autofocus across passes, with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation, which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes, allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  15. 3D Feature Point Extraction from LIDAR Data Using a Neural Network

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Schlichting, A.; Brenner, C.

    2016-06-01

    Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and from online sensor data, and then matched to improve the positioning accuracy of the vehicles. However, some environments contain only a limited number of poles. 3D feature points are a suitable alternative for use as landmarks: they can be assumed to be present in the environment, independent of particular object classes. To match online LiDAR data to a LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train one using a neural network. In this approach, a set of candidates for the 3D feature points is first detected by the Shi-Tomasi corner detector on range images of the LiDAR point cloud. Trained with a back-propagation algorithm, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on the 2D range images, but also 3D features such as the curvature and the z component of the surface normal, which are calculated directly from the LiDAR point cloud. Subsequently, the feature points extracted on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in 3D space. Our tests show that the proposed method provides a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for better localization of vehicles.
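
    The 2D candidate-detection step described above can be sketched with OpenCV's Shi-Tomasi detector; the network scoring stage and the 3D cues (curvature, normal z component) are not reproduced here, and the normalization choice is an assumption.

```python
import numpy as np
import cv2

def corner_candidates(range_image, max_corners=500):
    """Shi-Tomasi corner candidates on a LiDAR range image (2D step only)."""
    # Rescale the float range image to 8-bit for the detector.
    img = cv2.normalize(range_image.astype(np.float32), None, 0, 255,
                        cv2.NORM_MINMAX).astype(np.uint8)
    pts = cv2.goodFeaturesToTrack(img, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=5)
    return pts.reshape(-1, 2) if pts is not None else np.empty((0, 2))
```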

  16. A simulation of air pollution model parameter estimation using data from a ground-based LIDAR remote sensor

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.; Suttles, J. T.

    1977-01-01

    One way to obtain estimates of the unknown parameters in a pollution dispersion model is to compare the model predictions with remotely sensed air quality data. A ground-based LIDAR sensor provides relative pollution concentration measurements as a function of space and time. The measured sensor data are compared with the dispersion model output through a numerical estimation procedure to yield the parameter estimates that best fit the data. This overall process is tested in a computer simulation to study the effects of various measurement strategies. Such a simulation is useful prior to a field measurement exercise to maximize the information content of the collected data. Parametric studies of simulated data matched to a Gaussian plume dispersion model indicate the trade-offs available between estimation accuracy and data acquisition strategy.
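
    A toy version of this estimation loop is sketched below: a simplified Gaussian plume forward model (with dispersion widths growing linearly downwind, an assumption for illustration) is fitted to noisy simulated lidar samples by nonlinear least squares. All values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_plume(xyz, Q, ay, az, u=2.0, H=20.0):
    """Gaussian plume with linearly growing dispersion widths.
    Q: source strength; ay, az: lateral/vertical growth rates;
    u: wind speed [m/s]; H: effective source height [m]."""
    x, y, z = xyz
    sy, sz = ay * x, az * x
    return (Q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - H)**2 / (2 * sz**2))
               + np.exp(-(z + H)**2 / (2 * sz**2))))   # ground reflection term

# Fit Q, ay, az to noisy simulated "lidar" concentration samples.
rng = np.random.default_rng(1)
pts = rng.uniform([50, -100, 0], [500, 100, 100], (300, 3)).T   # x, y, z [m]
truth = gaussian_plume(pts, Q=5.0, ay=0.10, az=0.08)
meas = truth * (1 + 0.05 * rng.normal(size=truth.size))
popt, _ = curve_fit(gaussian_plume, pts, meas, p0=[1.0, 0.2, 0.2])
```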

  17. A comparison of Doppler lidar wind sensors for Earth-orbit global measurement applications

    NASA Technical Reports Server (NTRS)

    Menzies, Robert T.

    1985-01-01

    At present, four Doppler lidar configurations are being promoted for the measurement of tropospheric winds: (1) the coherent CO2 lidar, operating in the 9 micrometer region, using a pulsed, atmospheric-pressure CO2 gas discharge laser transmitter and heterodyne detection; (2) the coherent neodymium-doped YAG or glass lidar, operating at 1.06 micrometers, using flashlamp or diode laser optical pumping of the solid-state laser medium and heterodyne detection; (3) the neodymium-doped YAG/glass lidar, operating at the doubled frequency (530 nm wavelength), again using flashlamp or diode laser pumping of the laser transmitter, and using a high-resolution tandem Fabry-Perot filter and direct detection; and (4) the Raman-shifted xenon chloride lidar, operating at 350 nm wavelength, using a pulsed, atmospheric-pressure XeCl gas discharge laser transmitter at 308 nm, Raman-shifted in a high-pressure hydrogen cell to 350 nm in order to avoid strong stratospheric ozone absorption, also using a high-resolution tandem Fabry-Perot filter and direct detection. Comparisons of these four systems can include many factors and trade-offs; the major portion of this comparison is devoted to efficiency. Efficiency comparisons are made by estimating the number of transmitted photons required for a single-pulse wind velocity estimate of ±1 m/s accuracy in the middle troposphere, from an altitude of 800 km, which is assumed to be reasonable for a polar orbiting platform.

  18. Flash LIDAR Systems for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Dissly, Richard; Weinberg, J.; Weimer, C.; Craig, R.; Earhart, P.; Miller, K.

    2009-01-01

    Ball Aerospace offers a mature, highly capable 3D flash-imaging LIDAR system for planetary exploration. Multi-mission applications include orbital, standoff and surface terrain mapping; long-distance and rapid close-in ranging; descent and surface navigation; and rendezvous and docking. Our flash LIDAR is an optical, time-of-flight, topographic imaging system, leveraging innovations in focal plane arrays, readout integrated circuit real-time processing, and compact and efficient pulsed laser sources. Due to its modular design, it can be easily tailored to satisfy a wide range of mission requirements. Flash LIDAR offers several distinct advantages over traditional scanning systems. The entire scene within the sensor's field of view is imaged with a single laser flash. This directly produces an image with each pixel already correlated in time, making the sensor resistant to the relative motion of a target subject. Additionally, images may be produced at rates much faster than are possible with a scanning system. And because the system captures a new complete image with each flash, optical glint and clutter are easily filtered and discarded. This allows for imaging under any lighting condition and makes the system virtually insensitive to stray light. Finally, because there are no moving parts, our flash LIDAR system is highly reliable and has a long life expectancy. As an industry leader in laser active sensor system development, Ball Aerospace has been working for more than four years to mature flash LIDAR systems for space applications, and is now under contract to provide the Vision Navigation System for NASA's Orion spacecraft. Our system uses heritage optics and electronics from our star tracker products, and space-qualified lasers similar to those used in our CALIPSO LIDAR, which has been in continuous operation since 2006, providing more than 1.3 billion laser pulses to date.

  19. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this at the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge of how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has tremendous potential as a multifaceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user, and what metadata is needed to support the semantic linking.

  1. Space-Based Erbium-Doped Fiber Amplifier Transmitters for Coherent, Ranging, 3D-Imaging, Altimetry, Topology, and Carbon Dioxide Lidar and Earth and Planetary Optical Laser Communications

    NASA Astrophysics Data System (ADS)

    Storm, Mark; Engin, Doruk; Mathason, Brian; Utano, Rich; Gupta, Shantanu

    2016-06-01

    This paper describes Fibertek, Inc.'s progress in developing space-qualified Erbium-doped fiber amplifier (EDFA) transmitters for laser communications and ranging/topology, and CO2 integrated path differential absorption (IPDA) lidar. High peak power (1 kW) and 6 W of average power supporting multiple communications formats has been demonstrated with 17% efficiency in a compact 3 kg package. The unit has been tested to Technology Readiness Level (TRL) 6 standards. A 20 W EDFA suitable for CO2 lidar has been demonstrated with ~14% efficiency (electrical to optical [e-o]) and its performance optimized for 1571 nm operation.

  2. Real-time monitoring of 3D cell culture using a 3D capacitance biosensor.

    PubMed

    Lee, Sun-Mi; Han, Nalae; Lee, Rimi; Choi, In-Hong; Park, Yong-Beom; Shin, Jeon-Soo; Yoo, Kyung-Hwa

    2016-03-15

    Three-dimensional (3D) cell cultures have recently received attention because they represent a more physiologically relevant environment than conventional two-dimensional (2D) cell cultures. However, 2D-based imaging techniques and cell sensors are insufficient for real-time monitoring of cellular behavior in 3D cell culture. Here, we report investigations conducted with a 3D capacitance cell sensor consisting of vertically aligned pairs of electrodes. When GFP-expressing human breast cancer cells (GFP-MCF-7) encapsulated in alginate hydrogel were cultured in a 3D cell culture system, cellular activities, such as cell proliferation and apoptosis at different heights, could be monitored non-invasively and in real time by measuring the change in capacitance with the 3D capacitance sensor. Moreover, we were able to monitor the cell migration of human mesenchymal stem cells (hMSCs) with our 3D capacitance sensor. PMID:26386332

  3. Study of Droplet Activation in Thin Clouds Using Ground-Based Raman Lidar and Ancillary Remote Sensors

    NASA Astrophysics Data System (ADS)

    Rosoldi, Marco; Madonna, Fabio; Gumà Claramunt, Pilar; Pappalardo, Gelsomina

    2016-06-01

    A methodology for the study of cloud droplet activation, based on measurements performed with ground-based multi-wavelength Raman lidars and ancillary remote sensors at the CNR-IMAA observatory in Potenza, southern Italy, is presented. The study is focused on the observation of thin warm clouds. Thin clouds are often also optically thin, which allows cloud-top detection and full profiling of cloud layers using ground-based Raman lidar. Moreover, broken clouds are inspected to take advantage of their discontinuous structure in order to study the variability of optical properties and water vapor content in the transition from cloudy to cloudless regions close to the cloud boundaries. A statistical study of this variability identifies threshold values for the optical properties, enabling the discrimination between cloudy and cloudless regions. These values can be used to evaluate and improve parameterizations of droplet activation within numerical models. A statistical study of the co-located Doppler radar moments allows retrieval of droplet sizes and vertical velocities close to the cloud base. First evidence of a correlation between droplet vertical velocities measured at the cloud base and the aerosol effective radius observed in the cloud-free regions of the broken clouds is found.

  4. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  5. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  6. Development of PM2.5 density distribution visualization system using ground-level sensor network and Mie lidar

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Akaho, Taiga; Kojiro, Yu; Uchino, Osamu; Morino, Isamu; Yokota, Tatsuya; Nagai, Tomohiro; Sakai, Tetsu; Maki, Takashi; Yamazaki, Akihiro; Arai, Kohei

    2014-10-01

    Atmospheric particulate matter (PM) consists of tiny pieces of solid or liquid matter suspended in the Earth's atmosphere as aerosol. Recently, the density of fine particles (PM2.5, with diameters of 2.5 micrometers or less) transported from China has become a serious environmental issue in eastern Asia. In this study, the authors have developed a PM2.5 density distribution visualization system using a ground-level sensor network dataset and a Mie lidar dataset. The former dataset is used for visualization and movement analysis of the horizontal PM2.5 density distribution; the latter is used for visualization and movement analysis of the vertical PM2.5 density distribution.

  7. A Fuzzy-Based Fusion Method of Multimodal Sensor-Based Measurements for the Quantitative Evaluation of Eye Fatigue on 3D Displays

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Heo, Hwan; Park, Kang Ryoung

    2015-01-01

    With the rapid increase of 3-dimensional (3D) content, considerable research related to 3D human factors has been undertaken to quantitatively evaluate visual discomfort, including eye fatigue and dizziness, caused by viewing 3D content. Various modalities such as electroencephalograms (EEGs), biomedical signals, and eye responses have been investigated. However, the majority of previous research has analyzed each modality separately to measure user eye fatigue, which cannot guarantee the credibility of the resulting eye fatigue evaluations. Therefore, we propose a new method for quantitatively evaluating eye fatigue related to 3D content by combining multimodal measurements. This research is novel for the following four reasons: first, for the evaluation of eye fatigue with high credibility on 3D displays, a fuzzy-based fusion method (FBFM) is proposed based on the multimodalities of EEG signals, eye blinking rate (BR), facial temperature (FT), and subjective evaluation (SE); second, to measure a more accurate variation of eye fatigue (before and after watching a 3D display), we obtain the quality scores of the EEG signals, eye BR, FT and SE; third, to combine the values of the four modalities, we obtain the optimal weights of the EEG signals, BR, FT and SE using a fuzzy system based on the quality scores; fourth, the quantitative level of the variation of eye fatigue is finally obtained using the weighted sum of the values measured by the four modalities. Experimental results confirm that the effectiveness of the proposed FBFM is greater than that of other conventional multimodal measurements. Moreover, the credibility of the variations of eye fatigue measured using the FBFM before and after watching the 3D display is proven using a t-test and descriptive statistical analysis using effect size. PMID:25961382
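
    As a simplified stand-in for the paper's fuzzy-inference weighting (the full FBFM is not reproduced here), a quality-weighted sum over the four modalities illustrates the final fusion step. The modality order and all numbers are hypothetical.

```python
import numpy as np

def fused_fatigue_score(deltas, qualities):
    """Quality-weighted fusion of per-modality fatigue changes.
    deltas: change (after - before viewing) per modality, z-normalized;
    qualities: per-modality quality scores in [0, 1]."""
    w = np.asarray(qualities, dtype=float)
    w = w / w.sum()                       # weights derived from quality scores
    return float(np.dot(w, deltas))

# Hypothetical modality order: EEG, blink rate, facial temperature, subjective.
print(fused_fatigue_score([0.8, 0.5, 0.3, 0.9], [0.9, 0.7, 0.4, 1.0]))
```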

  8. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer® have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D® imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye® video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  9. Carbon dioxide Doppler lidar wind sensor on a space station polar platform.

    PubMed

    Petheram, J C; Frohbeiter, G; Rosenberg, A

    1989-03-01

    A study has been performed of the feasibility of accommodating a carbon dioxide Doppler lidar on a Space Station polar platform. Results show that such an instrument could be accommodated on a single 1.5- x 2.25-m optical bench, mounted centrally on the Earth-facing side of the satellite. The power, weight, and thermal issues appear resolvable. However, the question of servicing the instrument remains open until more data are available on isotopic CO2 laser lifetime.

  10. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products, and learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation based on the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to the 1990s, the holographic concept is spreading through the scientific, social and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else must be taken into consideration to communicate in 3D? How should the non-visible relations of moving objects with subjects be handled? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? For what purpose? At what level? In which subjects? For whom?

  11. Validate and update of 3D urban features using multi-source fusion

    NASA Astrophysics Data System (ADS)

    Arrington, Marcus; Edwards, Dan; Sengers, Arjan

    2012-06-01

    As forecast by the United Nations in May 2007, the population of the world has transitioned from a rural to an urban demographic majority, with more than half living in urban areas [1]. Modern urban environments are complex 3-dimensional (3D) landscapes with 4-dimensional patterns of activity that challenge traditional 1-dimensional and 2-dimensional sensors to accurately sample these man-made terrains. Depending on geographic location, data from LIDAR, multi-spectral, electro-optical, thermal, and ground-based static and mobile sensors may be available from multiple collects, along with more traditional 2D GIS features. Reconciling differing data sources over time to correctly portray the dynamic urban landscape raises significant fusion and representational challenges, particularly as higher levels of spatial resolution become available and expected by users. This paper presents a framework for integrating the imperfect answers of our differing sensors and data sources into a powerful representation of the complex urban environment. A case study is presented involving the integration of temporally diverse 2D, 2.5D and 3D spatial data sources over Kandahar, Afghanistan. In this case study we present a methodology for validating and augmenting 2D/2.5D urban feature and attribute data with LIDAR to produce validated 3D objects. We demonstrate that nearly 15% of buildings in Kandahar require understanding of nearby vegetation before 3-D validation can succeed. We also address urban temporal change detection at the object level. Finally, we address issues involved with increased sampling resolution, since urban features are rarely simple cubes; in Kandahar they involve balconies, TV dishes, rooftop walls, small rooms, and domes, among other things.

  12. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed-hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines a full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  13. Aglite: A 3-wavelength lidar system for Assessment of Agricultural Air Quality, Whole Facility Emission Rates and Fluxes

    NASA Astrophysics Data System (ADS)

    Wojcik, M.; Hatfield, J.; Preuger, J.; Pfeiffer, R.; Moore, K.; Martin, R.

    2010-12-01

    Ground-based remote sensing technologies such as scanning lidar (light detection and ranging) systems are increasingly being used to characterize ambient aerosols because of key advantages: a wide area of regard (10 km²), fast response time, high spatial resolution (<10 m) and high sensitivity. Scanning lidar allows 3D imaging of atmospheric motion and aerosol variability. The Energy Dynamics Laboratory at Utah State University, in conjunction with the USDA ARS, has developed and successfully deployed a three-wavelength lidar system called Aglite to characterize particles in diverse settings. Generating meaningful particle size distribution, size-segregated mass concentration, and emission rate results from lidar data depends on the strategic onsite deployment of mass-based and size-distribution point sensors and on characterization of the local meteorology. Based on over five years of field and laboratory experience, we present successful strategies and lessons learned regarding the use of lidar to characterize and map aerosols from different facilities and operations.

  14. An underwater chaotic lidar sensor based on synchronized blue laser diodes

    NASA Astrophysics Data System (ADS)

    Rumbaugh, Luke K.; Dunn, Kaitlin J.; Bollt, Erik M.; Cochenour, Brandon; Jemison, William D.

    2016-05-01

    We present a novel chaotic lidar system designed for underwater impulse response measurements. The system uses two recently introduced, low-cost, commercially available 462 nm multimode InGaN laser diodes, which are synchronized by a bi-directional optical link. This synchronization results in a noise-like chaotic intensity modulation with over 1 GHz bandwidth and strong modulation depth. An advantage of this approach is its simple transmitter architecture, which uses no electrical signal generator, electro-optic modulator, or optical frequency doubler.
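
    Ranging with such a noise-like waveform comes down to locating the peak of the transmit/receive cross-correlation. A minimal sketch follows; the sample rate, delay, and the white-noise stand-in for the chaotic waveform are illustrative assumptions, not parameters of the system above.

```python
import numpy as np

def correlation_range(tx, rx, fs, c=2.25e8):
    """Range from the peak of the TX/RX cross-correlation.
    c defaults to the approximate speed of light in water [m/s]."""
    xc = np.correlate(rx - rx.mean(), tx - tx.mean(), mode="full")
    lag = np.argmax(xc) - (len(tx) - 1)      # delay in samples
    return 0.5 * c * lag / fs                # two-way path -> one-way range

# White noise stands in for the chaotic waveform; both give a sharp,
# unambiguous correlation peak.
rng = np.random.default_rng(2)
tx = rng.normal(size=200_000)
rx = np.roll(tx, 300) + 0.5 * rng.normal(size=tx.size)   # 300-sample delay
print(correlation_range(tx, rx, fs=2e9))                  # ~16.9 m
```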

  15. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  16. Airborne lidar intensity calibration and application for land use classification

    NASA Astrophysics Data System (ADS)

    Li, Dong; Wang, Cheng; Luo, She-Zhou; Zuo, Zheng-Li

    2014-11-01

    Airborne Light Detection and Ranging (LiDAR) is an active remote sensing technology which can acquire topographic information efficiently. It records the accurate 3D coordinates of targets and also the signal intensity (the amplitude of backscattered echoes), which represents the reflectance characteristics of the targets. Intensity data have been used in land use classification, vegetation fractional cover, and leaf area index (LAI) estimation. Apart from the reflectance characteristics of the targets, the intensity data can also be influenced by many other factors, such as flying height, incidence angle, atmospheric attenuation, laser pulse power, and laser beam width. It is therefore necessary to calibrate intensity values before further applications. In this study, we first analyze the factors affecting LiDAR intensity based on the radar range equation, and then apply an intensity calibration method that accounts for sensor-to-target distance and incidence angle to the laser intensity data over the study area. Finally, the raw LiDAR intensity and the normalized intensity data are each used for land use classification along with LiDAR elevation data. The results show that the classification accuracy from the normalized intensity data is higher than that from the raw LiDAR intensity data, indicating that calibration of LiDAR intensity data is necessary in land use classification applications.
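
    One common formulation of the range and incidence-angle normalization described above is sketched below; the 1 km reference range is an arbitrary convention, not necessarily the one used in the paper.

```python
import numpy as np

def normalize_intensity(I, R, incidence, R_ref=1000.0):
    """Reference raw LiDAR intensities to a common range and zero incidence.
    Received power scales roughly as cos(incidence)/R^2, so dividing that
    dependence out makes returns from different geometries comparable.
    I: raw intensity; R: sensor-to-target distance [m];
    incidence: incidence angle [rad]; R_ref: reference range [m]."""
    return I * (R / R_ref) ** 2 / np.cos(incidence)
```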

  17. Counter-sniper 3D laser radar

    NASA Astrophysics Data System (ADS)

    Shepherd, Orr; LePage, Andrew J.; Wijntjes, Geert J.; Zehnpfennig, Theodore F.; Sackos, John T.; Nellums, Robert O.

    1999-01-01

    Visidyne, Inc., teaming with Sandia National Laboratories, has developed the preliminary design for an innovative scannerless 3-D laser radar capable of acquiring, tracking, and determining the coordinates of small-caliber projectiles in flight with sufficient precision that their origin can be established by back-projecting their tracks to their source. The design takes advantage of the relatively large effective cross-section of a bullet at optical wavelengths. Key to its implementation is the use of efficient, high-power laser diode arrays as illuminators and an imaging laser receiver using a unique CCD imager design that acquires the information to establish x, y (angle-angle) and range coordinates for each bullet at very high frame rates. The detection process achieves a high degree of discrimination by using the optical signature of the bullet, solar background mitigation, and track detection. Field measurements and computer simulations have provided the basis for a preliminary design of a robust bullet tracker, the Counter Sniper 3-D Laser Radar. Experimental data showing 3-D test imagery acquired by a lidar with an architecture similar to that of the proposed Counter Sniper 3-D Lidar are presented. A proposed Phase II development would yield an innovative, compact, and highly efficient bullet-tracking laser radar. Such a device would meet the needs of not only the military, but also federal, state, and local law enforcement organizations.

  18. Improving object detection in 2D images using a 3D world model

    NASA Astrophysics Data System (ADS)

    Viggh, Herbert E. M.; Cho, Peter L.; Armstrong-Crews, Nicholas; Nam, Myra; Shah, Danelle C.; Brown, Geoffrey E.

    2014-05-01

    A mobile robot operating in a netcentric environment can utilize offboard resources on the network to improve its local perception. One such offboard resource is a world model built and maintained by other sensor systems. In this paper we present results from research into improving the performance of Deformable Parts Model object detection algorithms by using an offboard 3D world model. Experiments were run for detecting both people and cars in 2D photographs taken in an urban environment. After generating candidate object detections, a 3D world model built from airborne Light Detection and Ranging (LIDAR) data and aerial photographs was used to filter out false alarms using several types of geometric reasoning. Comparison of the baseline detection performance to the performance after false-alarm filtering showed a significant decrease in false alarms for a given probability of detection.

  19. On detailed 3D reconstruction of large indoor environments

    NASA Astrophysics Data System (ADS)

    Bondarev, Egor

    2015-03-01

    In this paper we present techniques for highly detailed 3D reconstruction of extra-large indoor environments. We discuss the benefits and drawbacks of low-range, far-range and hybrid sensing and reconstruction approaches. The proposed techniques for low-range and hybrid reconstruction, enabling a reconstruction density of 125 points/cm³ on large 100,000 m³ models, are presented in detail. These techniques tackle the core challenges of the above requirements, such as multi-modal data fusion (fusion of LIDAR data with Kinect data), accurate sensor pose estimation, high-density scanning, and depth data noise filtering. Other important aspects of extra-large 3D indoor reconstruction are point cloud decimation and real-time rendering. In this paper, we present a method for planar-based point cloud decimation, allowing a reduction of the point cloud size by 80-95%. Besides this, we introduce a method for online rendering of extra-large point clouds, enabling real-time visualization of huge cloud spaces in conventional web browsers.
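
    A simplified flavor of planar-based decimation is sketched below, assuming a plane has already been detected (e.g., by RANSAC): plane inliers are thinned to one point per voxel while off-plane detail is preserved. The tolerance and cell size are illustrative, not the paper's settings.

```python
import numpy as np

def decimate_planar(points, normal, d, inlier_tol=0.01, cell=0.2):
    """Thin points lying on a detected plane n.p + d = 0 to one point per
    voxel; planes carry little shape detail, so they tolerate aggressive
    thinning, while off-plane structure is kept untouched."""
    dist = np.abs(points @ normal + d)           # point-to-plane distance
    on_plane = dist < inlier_tol
    keep = points[~on_plane]                     # off-plane detail, untouched
    flat = points[on_plane]
    keys = np.floor(flat / cell).astype(np.int64)          # voxel indices
    _, idx = np.unique(keys, axis=0, return_index=True)    # one per voxel
    return np.vstack([keep, flat[idx]])
```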

  1. 3D goes digital: from stereoscopy to modern 3D imaging techniques

    NASA Astrophysics Data System (ADS)

    Kerwien, N.

    2014-11-01

    In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

  2. Flash LIDAR Emulator for HIL Simulation

    NASA Technical Reports Server (NTRS)

    Brewster, Paul F.

    2010-01-01

    NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project is building a system for detecting hazards and automatically landing controlled vehicles safely anywhere on the Moon. The Flash Light Detection And Ranging (LIDAR) sensor is used to create on-the-fly a 3D map of the unknown terrain for hazard detection. As part of the ALHAT project, a hardware-in-the-loop (HIL) simulation testbed was developed to test the data processing, guidance, and navigation algorithms in real-time to prove their feasibility for flight. Replacing the Flash LIDAR camera with an emulator in the testbed provided a cheaper, safer, more feasible way to test the algorithms in a controlled environment. This emulator must have the same hardware interfaces as the LIDAR camera, have the same performance characteristics, and produce images similar in quality to the camera. This presentation describes the issues involved and the techniques used to create a real-time flash LIDAR emulator to support HIL simulation.

  3. Joint Temperature-Lasing Mode Compensation for Time-of-Flight LiDAR Sensors

    PubMed Central

    Alhashimi, Anas; Varagnolo, Damiano; Gustafsson, Thomas

    2015-01-01

    We propose an expectation maximization (EM) strategy for improving the precision of time of flight (ToF) light detection and ranging (LiDAR) scanners. The novel algorithm statistically accounts not only for the bias induced by temperature changes in the laser diode, but also for the multi-modality of the measurement noises that is induced by mode-hopping effects. Instrumental to the proposed EM algorithm, we also describe a general thermal dynamics model that can be learned either from just input-output data or from a combination of simple temperature experiments and information from the laser’s datasheet. We test the strategy on a SICK LMS 200 device and improve its average absolute error by a factor of three. PMID:26690445
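
    The multi-modality handled by the EM algorithm can be illustrated with a plain 1-D Gaussian-mixture EM loop; this is a stripped-down sketch, not the paper's joint temperature-lasing-mode model.

```python
import numpy as np

def em_gaussian_mixture(x, k=2, iters=100):
    """Plain EM for a k-component 1-D Gaussian mixture over range residuals.
    Mode hopping makes residuals multi-modal; EM recovers per-mode
    weights, means and spreads instead of one biased Gaussian."""
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))   # spread initial means
    sig = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities (the 1/sqrt(2*pi) constants cancel)
        p = pi * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sig
```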

  4. A Multi-Data Source and Multi-Sensor Approach for the 3D Reconstruction and Visualization of a Complex Archaeological Site: The Case Study of Tolmo de Minateda

    NASA Astrophysics Data System (ADS)

    Torres-Martínez, J. A.; Seddaiu, M.; Rodríguez-Gonzálvez, P.; Hernández-López, D.; González-Aguilera, D.

    2015-02-01

    The complexity of archaeological sites makes it difficult to obtain an integral model using current geomatic techniques (i.e., aerial and close-range photogrammetry and terrestrial laser scanning) individually, so a multi-sensor approach is proposed as the best solution to provide a 3D reconstruction and visualization of these complex sites. Sensor registration represents a key milestone when automation is required and when aerial and terrestrial datasets must be integrated. To this end, several problems must be solved: coordinate system definition, geo-referencing, co-registration of point clouds, geometric and radiometric homogeneity, etc. Last but not least, safeguarding tangible archaeological heritage and its associated intangible expressions entails a multi-source data approach in which heterogeneous material (historical documents, drawings, archaeological techniques, habits of living, etc.) should be collected and combined with the resulting hybrid 3D models. The proposed multi-data source and multi-sensor approach is applied to the case study of the "Tolmo de Minateda" archaeological site. A total extension of 9 ha is reconstructed, with an adapted level of detail, using an ultralight aerial platform (paratrike), an unmanned aerial vehicle, a terrestrial laser scanner and terrestrial photogrammetry. In addition, the defensive nature of the site (with the presence of three different defensive walls) together with the considerable stratification of the archaeological site (with different archaeological surfaces and constructive typologies) requires that tangible and intangible archaeological heritage expressions be integrated with the hybrid 3D models obtained, so that the archaeological site can be analysed, understood and exploited by different experts and heritage stakeholders.

  6. Highly-Sensitive Surface-Enhanced Raman Spectroscopy (SERS)-based Chemical Sensor using 3D Graphene Foam Decorated with Silver Nanoparticles as SERS substrate

    PubMed Central

    Srichan, Chavis; Ekpanyapong, Mongkol; Horprathum, Mati; Eiamchai, Pitak; Nuntawong, Noppadon; Phokharatkul, Ditsayut; Danvirutai, Pobporn; Bohez, Erik; Wisitsoraat, Anurat; Tuantranont, Adisorn

    2016-01-01

    In this work, a novel platform for surface-enhanced Raman spectroscopy (SERS)-based chemical sensors utilizing three-dimensional microporous graphene foam (GF) decorated with silver nanoparticles (AgNPs) is developed and applied for methylene blue (MB) detection. The results demonstrate that the silver nanoparticles significantly enhance the cascaded amplification of the SERS effect on multilayer graphene foam (GF). The enhancement factor of the AgNPs/GF sensor is found to be four orders of magnitude larger than that of an AgNPs/Si substrate. In addition, the sensitivity of the sensor can be tuned by controlling the size of the silver nanoparticles. The highest SERS enhancement factor of ∼5 × 10^4 is achieved at the optimal nanoparticle size of 50 nm. Moreover, the sensor is capable of detecting MB over a broad concentration range from 1 nM to 100 μM. Therefore, AgNPs/GF is a highly promising SERS substrate for the detection of chemical substances at ultra-low concentrations. PMID:27020705
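
    For context, enhancement factors like the ∼10^4 quoted above are conventionally estimated by normalising the SERS and reference Raman intensities by the number of molecules probed in each measurement; a standard textbook definition (not taken from this paper) is

        \mathrm{EF} = \frac{I_{\mathrm{SERS}} / N_{\mathrm{SERS}}}{I_{\mathrm{ref}} / N_{\mathrm{ref}}}

    where I denotes the measured band intensity and N the number of molecules contributing to each signal.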

  7. Highly-Sensitive Surface-Enhanced Raman Spectroscopy (SERS)-based Chemical Sensor using 3D Graphene Foam Decorated with Silver Nanoparticles as SERS substrate

    NASA Astrophysics Data System (ADS)

    Srichan, Chavis; Ekpanyapong, Mongkol; Horprathum, Mati; Eiamchai, Pitak; Nuntawong, Noppadon; Phokharatkul, Ditsayut; Danvirutai, Pobporn; Bohez, Erik; Wisitsoraat, Anurat; Tuantranont, Adisorn

    2016-03-01

    In this work, a novel platform for surface-enhanced Raman spectroscopy (SERS)-based chemical sensors utilizing three-dimensional microporous graphene foam (GF) decorated with silver nanoparticles (AgNPs) is developed and applied for methylene blue (MB) detection. The results demonstrate that the silver nanoparticles significantly enhance the cascaded amplification of the SERS effect on multilayer graphene foam (GF). The enhancement factor of the AgNPs/GF sensor is found to be four orders of magnitude larger than that of an AgNPs/Si substrate. In addition, the sensitivity of the sensor can be tuned by controlling the size of the silver nanoparticles. The highest SERS enhancement factor of ∼5 × 10^4 is achieved at the optimal nanoparticle size of 50 nm. Moreover, the sensor is capable of detecting MB over a broad concentration range from 1 nM to 100 μM. Therefore, AgNPs/GF is a highly promising SERS substrate for the detection of chemical substances at ultra-low concentrations.

  8. Automatic registration of UAV-borne sequent images and LiDAR data

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Chen, Chi

    2015-03-01

    Because mini-UAV platforms carry low-cost sensors, the use of direct geo-referencing data alone leads to registration failure between sequent images and LiDAR data. This paper therefore proposes a novel automatic registration method for sequent images and LiDAR data captured by mini-UAVs. First, the proposed method extracts building outlines from LiDAR data and images and estimates the exterior orientation parameters (EoPs) of the images with building objects in the LiDAR data coordinate framework, based on corresponding corner points derived indirectly by using linear features. Second, the EoPs of the sequent images in the image coordinate framework are recovered using a structure from motion (SfM) technique, and the transformation matrices between the LiDAR coordinate and image coordinate frameworks are calculated using corresponding EoPs, resulting in a coarse registration between the images and the LiDAR data. Finally, 3D points are generated from the sequent images by multi-view stereo (MVS) algorithms, and the EoPs of the sequent images are further refined by registering the LiDAR data and the 3D points using an iterative closest-point (ICP) algorithm initialized with the coarse registration, resulting in a fine registration between the sequent images and the LiDAR data. Experiments were performed to check the validity and effectiveness of the proposed method. The results show that the proposed method achieves high-precision, robust co-registration of sequent images and LiDAR data captured by mini-UAVs.
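
    The fine-registration stage is a standard point-to-point ICP. As a rough sketch of that step (not the authors' code; correspondence rejection and convergence tests are omitted), each iteration matches source points to their nearest LiDAR points and solves for the rigid transform with the Kabsch/SVD method:

        import numpy as np
        from scipy.spatial import cKDTree

        def icp(source, target, n_iter=30):
            """Minimal point-to-point ICP refining a coarse alignment."""
            src = np.asarray(source, float).copy()
            tree = cKDTree(np.asarray(target, float))
            R_total, t_total = np.eye(3), np.zeros(3)
            for _ in range(n_iter):
                _, idx = tree.query(src)           # nearest-neighbour matches
                tgt = tree.data[idx]
                mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
                H = (src - mu_s).T @ (tgt - mu_t)  # cross-covariance
                U, _, Vt = np.linalg.svd(H)
                D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
                R = Vt.T @ D @ U.T                 # optimal rotation, det=+1
                t = mu_t - R @ mu_s
                src = src @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
            return R_total, t_total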

  9. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently as films of higher sensitivity have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the absence of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and its potential to provide a more comprehensive solution for the verification of complex radiation therapy treatments and for 3D dose measurement in general.

  10. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics is presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  11. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a four-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion four-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  12. Imaging Flash Lidar for Autonomous Safe Landing and Spacecraft Proximity Operation

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Roback, Vincent E.; Brewster, Paul F.; Hines, Glenn D.; Bulyshev, Alexander E.

    2016-01-01

    3-D imaging flash lidar is recognized as a primary candidate sensor for safe precision landing on solar system bodies (the Moon, Mars, the moons of Jupiter and Saturn, etc.), and for the autonomous rendezvous, proximity operations and docking/capture necessary for asteroid sample return and redirect missions, spacecraft docking, satellite servicing, and space debris removal. During the final stages of landing, from about 1 km to 500 m above the ground, the flash lidar can generate 3-dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes. The onboard flight computer can then use the 3-D map of terrain to guide the vehicle to a safe location. As an automated rendezvous and docking sensor, the flash lidar can provide relative range, velocity, and bearing from an approaching spacecraft to another spacecraft or a space station from a distance of several kilometers. NASA Langley Research Center has developed and demonstrated a flash lidar sensor system capable of generating 16k-pixel range images with 7 cm precision, at a 20 Hz frame rate, from a maximum slant range of 1800 m from the target area. This paper describes the lidar instrument design and capabilities as demonstrated by the closed-loop flight tests onboard a rocket-propelled free-flyer vehicle (Morpheus). A plan for continued advancement of the flash lidar technology is then explained; this proposed plan is aimed at the development of a common sensor that, with a modest design adjustment, can meet the needs of both landing and proximity-operation and docking applications.

  13. How We 3D-Print Aerogel

    SciTech Connect

    2015-04-23

    A new type of graphene aerogel will make for better energy storage, sensors, nanoelectronics, catalysis and separations. Lawrence Livermore National Laboratory researchers have made graphene aerogel microlattices with an engineered architecture via a 3D printing technique known as direct ink writing. The research appears in the April 22 edition of the journal Nature Communications. The 3D-printed graphene aerogels have high surface area, excellent electrical conductivity, low weight, mechanical stiffness, and supercompressibility (up to 90 percent compressive strain). In addition, the 3D-printed graphene aerogel microlattices show an order-of-magnitude improvement over bulk graphene materials and much better mass transport.

  14. Venus in 3D

    NASA Astrophysics Data System (ADS)

    Plaut, J. J.

    1993-08-01

    Stereographic images of the surface of Venus, which enable geologists to reconstruct the details of the planet's evolution, are discussed. The 120-meter resolution of these 3D images makes it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  15. 3D reservoir visualization

    SciTech Connect

    Van, B.T.; Pajon, J.L.; Joseph, P.

    1991-11-01

    This paper shows how some simple 3D computer graphics tools can be combined to provide efficient software for visualizing and analyzing data obtained from reservoir simulators and geological simulations. The animation and interactive capabilities of the software quickly provide a deep understanding of the fluid-flow behavior and an accurate idea of the internal architecture of a reservoir.

  16. Imaging Flash Lidar for Safe Landing on Solar System Bodies and Spacecraft Rendezvous and Docking

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Roback, Vincent E.; Bulyshev, Alexander E.; Brewster, Paul F.; Carrion, William A; Pierrottet, Diego F.; Hines, Glenn D.; Petway, Larry B.; Barnes, Bruce W.; Noe, Anna M.

    2015-01-01

    NASA has been pursuing flash lidar technology for autonomous, safe landing on solar system bodies and for automated rendezvous and docking. During the final stages of landing, from about 1 kilometer to 500 meters above the ground, the flash lidar can generate 3-dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes. The onboard flight computer can then use the 3-D map of terrain to guide the vehicle to a safe location. As an automated rendezvous and docking sensor, the flash lidar can provide relative range, velocity, and bearing from an approaching spacecraft to another spacecraft or a space station. NASA Langley Research Center has developed and demonstrated a flash lidar sensor system capable of generating 16,000-pixel range images with 7-centimeter precision, at a 20-hertz frame rate, from a maximum slant range of 1800 m from the target area. This paper describes the lidar instrument and presents the results of recent flight tests onboard a rocket-propelled free-flyer vehicle (Morpheus) built by NASA Johnson Space Center. The flights were conducted at a simulated lunar terrain site, consisting of realistic hazard features and designated landing areas, built at NASA Kennedy Space Center specifically for this demonstration test. This paper also provides an overview of the plan for continued advancement of the flash lidar technology aimed at enhancing its performance to meet both landing and automated rendezvous and docking applications.

  17. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  18. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    PubMed

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  19. Reconstruction of 3D tree stem models from low-cost terrestrial laser scanner data

    NASA Astrophysics Data System (ADS)

    Kelbe, Dave; Romanczyk, Paul; van Aardt, Jan; Cawse-Nicholson, Kerry

    2013-05-01

    With the development of increasingly advanced airborne sensing systems, there is a growing need to support sensor system design, modeling, and product-algorithm development with explicit 3D structural ground truth commensurate to the scale of acquisition. Terrestrial laser scanning is one such technique which could provide this structural information. Commercial instrumentation to suit this purpose has existed for some time now, but cost can be a prohibitive barrier for some applications. As such we recently developed a unique laser scanning system from readily-available components, supporting low cost, highly portable, and rapid measurement of below-canopy 3D forest structure. Tools were developed to automatically reconstruct tree stem models as an initial step towards virtual forest scene generation. The objective of this paper is to assess the potential of this hardware/algorithm suite to reconstruct 3D stem information for a single scan of a New England hardwood forest site. Detailed tree stem structure (e.g., taper, sweep, and lean) is recovered for trees of varying diameter, species, and range from the sensor. Absolute stem diameter retrieval accuracy is 12.5%, with a 4.5% overestimation bias likely due to the LiDAR beam divergence.
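
    A common way to recover stem diameter from a horizontal slice of such terrestrial-scanner points is an algebraic least-squares circle fit; the sketch below (a generic Kasa fit, not necessarily the authors' method) illustrates the geometry:

        import numpy as np

        def fit_stem_circle(xy):
            """Fit x^2 + y^2 = 2*a*x + 2*b*y + c in least squares and
            return the circle centre and diameter of a stem cross-section."""
            x, y = xy[:, 0], xy[:, 1]
            A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
            rhs = x ** 2 + y ** 2
            (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
            radius = np.sqrt(c + a ** 2 + b ** 2)   # since c = r^2 - a^2 - b^2
            return (a, b), 2.0 * radius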

  20. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    PubMed

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-04-20

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  1. Lidar Analyses

    NASA Technical Reports Server (NTRS)

    Spiers, Gary D.

    1995-01-01

    A brief description of enhancements made to the NASA MSFC coherent lidar model is provided. Notable improvements are the addition of routines to automatically determine the 3 dB misalignment loss angle and the backscatter value at which the probability of a good estimate (for a maximum likelihood estimator) falls to 50%. The ability to automatically generate energy/aperture parameterization (EAP) plots which include the effects of angular misalignment has been added. These EAP plots make it very easy to see that for any practical system with some degree of misalignment there is an optimum telescope diameter for which the laser pulse energy required to achieve a particular sensitivity is minimized; increasing the telescope diameter above this results in a reduction of sensitivity. The parameterizations also clearly show that the alignment tolerances at shorter wavelengths are much stricter than those at longer wavelengths. A brief outline of the NASA MSFC AEOLUS program is given and a summary of the lidar designs considered during the program is presented. A discussion of some of the design trades is given both in the text and in a conference publication attached as an appendix.
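
    The existence of an optimum telescope diameter can be reproduced with a toy model: collected signal grows as D**2, while a Gaussian misalignment loss inflates the required pulse energy as exp(+(pi*D*theta/(2*lam))**2). This loss model and the numbers below are assumptions for illustration only, not the MSFC model:

        import numpy as np

        lam = 2.0e-6                     # wavelength [m], e.g. a 2-um lidar
        theta = 10e-6                    # boresight misalignment [rad]
        D = np.linspace(0.02, 0.6, 500)  # candidate telescope diameters [m]
        E_req = np.exp((np.pi * D * theta / (2 * lam)) ** 2) / D ** 2
        D_opt = D[np.argmin(E_req)]      # energy-minimising aperture
        print(f"optimum diameter ~ {D_opt:.3f} m; "
              f"analytic 2*lam/(pi*theta) = {2 * lam / (np.pi * theta):.3f} m")

    In this toy model the optimum scales as lam/theta, which is one way to see why alignment tolerances tighten at shorter wavelengths.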

  2. 3D Elevation Program: summary for Vermont

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  3. 3D Elevation Program: summary for Nebraska

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  4. Wind Field Measurements With Airborne Doppler Lidar

    NASA Technical Reports Server (NTRS)

    Menzies, Robert T.

    1999-01-01

    In collaboration with lidar atmospheric remote sensing groups at NASA Marshall Space Flight Center and the National Oceanic and Atmospheric Administration (NOAA) Environmental Technology Laboratory, we have developed and flown the Multi-center Airborne Coherent Atmospheric Wind Sensor (MACAWS) lidar on the NASA DC-8 research aircraft. The scientific motivations for this effort are: to obtain measurements of subgrid-scale (i.e. 2-200 km) processes and features which may be used to improve parameterizations in global/regional-scale models; to improve understanding and predictive capabilities on the mesoscale; and to assess the performance of Earth-orbiting Doppler lidar for global tropospheric wind measurements. MACAWS is a scanning Doppler lidar using a pulsed transmitter and coherent detection; the use of the scanner allows 3-D wind fields to be produced from the data. The instrument can also be radiometrically calibrated and used to study aerosol, cloud, and surface scattering characteristics at the lidar wavelength in the thermal infrared. MACAWS was used to study surface winds off the California coast near Point Arena. The northerly flow here is due to the Pacific subtropical high. The coastal topography interacts with the northerly flow in the marine inversion layer, and when the flow passes a cape or point that juts into the winds, structures called "hydraulic expansion fans" are observed, marked by strong variation along the vertical and cross-shore directions. [Figures removed for brevity, see original site: three horizontal slices of the wind field at different heights above sea level, with terrain contours in 200-m increments and white spots marking elevations above 600 m.] Additional information is contained in the original.

  5. 3D rapid mapping

    NASA Astrophysics Data System (ADS)

    Isaksson, Folke; Borg, Johan; Haglund, Leif

    2008-04-01

    In this paper the performance of passive range measurement imaging using stereo techniques in real-time applications is described. Stereo vision uses multiple images to obtain depth resolution, much as Synthetic Aperture Radar (SAR) uses multiple measurements to obtain better spatial resolution. This technique has been used in photogrammetry for a long time, but it will be shown that it is now possible to do the calculations, with carefully designed image processing algorithms, on e.g. a PC in real time. In order to obtain high-resolution, quantitative data in the stereo estimation, a mathematical camera model is used. The parameters of the camera model are determined in a calibration rig or, in the case of a moving camera, the scene itself can be used to calibrate most of the parameters. After calibration, an ordinary TV camera has an angular resolution like a theodolite, but at a much lower price. The paper will present results from high-resolution 3D imagery from air to ground. The 3D results from stereo calculation of image pairs are stitched together into a large database to form a 3D model of the area covered.
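
    The underlying geometry is the usual pinhole stereo relation: for focal length f (in pixels), baseline B and disparity d, range is Z = f*B/d, so range error grows quadratically with range for a fixed disparity error. A minimal sketch (a generic pinhole relation, not the authors' full calibrated camera model, which also handles lens distortion):

        import numpy as np

        def disparity_to_depth(disparity_px, focal_px, baseline_m):
            """Pinhole stereo range Z = f * B / d; zero disparity -> inf."""
            d = np.asarray(disparity_px, float)
            return np.where(d > 0,
                            focal_px * baseline_m / np.maximum(d, 1e-9),
                            np.inf)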

  6. Real-time full-motion color Flash lidar for target detection and identification

    NASA Astrophysics Data System (ADS)

    Nelson, Roy; Coppock, Eric; Craig, Rex; Craner, Jeremy; Nicks, Dennis; von Niederhausern, Kurt

    2015-05-01

    Greatly improved understanding of areas and objects of interest can be gained when real-time, full-motion Flash LiDAR is fused with inertial navigation data and multi-spectral context imagery. On its own, full-motion Flash LiDAR provides the opportunity to exploit the z dimension for improved intelligence vs. 2-D full-motion video (FMV). The intelligence value of this data is enhanced when it is combined with inertial navigation data to produce an extended, georegistered data set suitable for a variety of analyses. Further, when fused with multispectral context imagery the typical point cloud becomes a rich 3-D scene which is intuitively obvious to the user and allows rapid cognitive analysis with little or no training. Ball Aerospace has developed and demonstrated a real-time, full-motion LiDAR system that fuses context imagery (VIS to MWIR demonstrated) and inertial navigation data in real time, and can stream these information-rich, geolocated/fused 3-D scenes from an airborne platform. In addition, since the higher-resolution context camera is boresighted and frame-synchronized to the LiDAR camera and the LiDAR camera is an array sensor, techniques have been developed to rapidly interpolate the LiDAR pixel values, creating a point cloud that has the same resolution as the context camera, effectively creating a high-definition (HD) LiDAR image. This paper presents a design overview of the Ball TotalSight™ LIDAR system along with typical results over urban and rural areas collected from both rotary and fixed-wing aircraft. We conclude with a discussion of future work.

  7. Discovering new methods of data fusion, visualization, and analysis in 3D immersive environments for hyperspectral and laser altimetry data

    NASA Astrophysics Data System (ADS)

    Moore, C. A.; Gertman, V.; Olsoy, P.; Mitchell, J.; Glenn, N. F.; Joshi, A.; Norpchen, D.; Shrestha, R.; Pernice, M.; Spaete, L.; Grover, S.; Whiting, E.; Lee, R.

    2011-12-01

    Immersive virtual reality environments such as the IQ-Station or CAVE (Cave Automated Virtual Environment) offer new and exciting ways to visualize and explore scientific data and are powerful research and educational tools. Combining remote sensing data from a range of sensor platforms in immersive 3D environments can enhance the spectral, textural, spatial, and temporal attributes of the data, which enables scientists to interact and analyze the data in ways never before possible. Visualization and analysis of large remote sensing datasets in immersive environments requires software customization for integrating LiDAR point cloud data with hyperspectral raster imagery, the generation of quantitative tools for multidimensional analysis, and the development of methods to capture 3D visualizations for stereographic playback. This study uses hyperspectral and LiDAR data acquired over the China Hat geologic study area near Soda Springs, Idaho, USA. The data are fused into a 3D image cube for interactive data exploration and several methods of recording and playback are investigated that include: 1) creating and implementing a Virtual Reality User Interface (VRUI) patch configuration file to enable recording and playback of VRUI interactive sessions within the CAVE and 2) using the LiDAR and hyperspectral remote sensing data and GIS data to create an ArcScene 3D animated flyover, where left- and right-eye visuals are captured from two independent monitors for playback in a stereoscopic player. These visualizations can be used as outreach tools to demonstrate how integrated data and geotechnology techniques can help scientists see, explore, and more adequately comprehend scientific phenomena, both real and abstract.

  8. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real time. The data is acquired in motion, and thus provides multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary due to a variety of factors such as the angle of incidence, the distance between the device and the subject, and other environmental factors influencing the confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.
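
    The adaptive combination described above amounts to a confidence-weighted average of repeated observations of each surface point. The sketch below uses an assumed reliability model (cosine of the incidence angle over squared distance); the paper's actual weighting may differ:

        import numpy as np

        def fuse_measurements(temps, angles_rad, dists_m):
            """Confidence-weighted fusion of repeated thermal readings of
            one surface point; near, head-on views dominate the estimate."""
            t = np.asarray(temps, float)
            w = np.cos(np.asarray(angles_rad)) / np.asarray(dists_m) ** 2
            w = np.clip(w, 0.0, None)      # ignore grazing/backward views
            return (w * t).sum() / w.sum()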

  9. Alignment of continuous video onto 3D point clouds.

    PubMed

    Zhao, Wenyi; Nister, David; Hsu, Steve

    2005-08-01

    We propose a general framework for aligning continuous (oblique) video onto 3D sensor data. We align a point cloud computed from the video onto the point cloud directly obtained from a 3D sensor. This is in contrast to existing techniques where the 2D images are aligned to a 3D model derived from the 3D sensor data. Using point clouds enables the alignment for scenes full of objects that are difficult to model; for example, trees. To compute 3D point clouds from video, motion stereo is used along with a state-of-the-art algorithm for camera pose estimation. Our experiments with real data demonstrate the advantages of the proposed registration algorithm for texturing models in large-scale semiurban environments. The capability to align video before a 3D model is built from the 3D sensor data offers new practical opportunities for 3D modeling. We introduce a novel modeling-through-registration approach that fuses 3D information from both the 3D sensor and the video. Initial experiments with real data illustrate the potential of the proposed approach.

  10. Taming supersymmetric defects in 3d-3d correspondence

    NASA Astrophysics Data System (ADS)

    Gang, Dongmin; Kim, Nakwoo; Romo, Mauricio; Yamazaki, Masahito

    2016-07-01

    We study knots in 3d Chern-Simons theory with complex gauge group SL(N,C), in the context of its relation with 3d N=2 theory (the so-called 3d-3d correspondence). The defect has either co-dimension 2 or co-dimension 4 inside the 6d (2,0) theory, which is compactified on a 3-manifold M̂. We identify such defects in various corners of the 3d-3d correspondence, namely in 3d SL(N,C) CS theory, in 3d N=2 theory, in 5d N=2 super Yang-Mills theory, and in the M-theory holographic dual. We can make quantitative checks of the 3d-3d correspondence by computing partition functions in each of these theories. This Letter is a companion to a longer paper [1], which contains more details and more results.

  11. Low-cost 3D rangefinder system

    NASA Astrophysics Data System (ADS)

    Chen, Bor-Tow; Lou, Wen-Shiou; Chen, Chia-Chen; Lin, Hsien-Chang

    1998-06-01

    Nowadays, 3D data are commonly manipulated on computers, and 3D browsers present 3D models of virtual worlds. Yet, until now, the 3D digitizer has remained a high-cost product rather than familiar equipment. To meet the requirements of the 3D world, this paper proposes the concept of a low-cost 3D digitizer system for capturing 3D range data from objects. The specialized optical design of the 3D extraction unit reduces the system's size, and the processing software is PC-compatible, which makes the system portable. Both features yield a low-cost system in a PC environment, in contrast to large systems bundled with expensive workstation platforms. For 3D extraction, a laser beam and a CCD camera are combined to construct a 3D sensor. Instead of the two CCD cameras previously used to capture the laser lines, a 2-in-1 design merges the two images onto one CCD while retaining the information of both fields of view, thereby suppressing occlusion problems. In addition, the optical paths of the two camera views are folded by mirrors so that the volume of the system can be minimized with only one rotary axis, making a portable system more practical. Combined with processing software that runs under PC Windows, the proposed system saves both hardware cost and software processing time. The system achieves 0.05 mm accuracy, showing that a low-cost system can also deliver high performance.

  12. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  13. A 3D Cloud-Construction Algorithm for the EarthCARE Satellite Mission

    NASA Technical Reports Server (NTRS)

    Barker, H. W.; Jerg, M. P.; Wehr, T.; Kato, S.; Donovan, D. P.; Hogan, R. J.

    2011-01-01

    This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data.

  14. 3-D capaciflector

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1998-01-01

    A capacitive-type proximity sensor with improved range and sensitivity between a surface of arbitrary shape and an intruding object in the vicinity of the surface. One or more outer conductors on the surface serve as capacitive sensing elements shaped to conform to the underlying surface of a machine. Each sensing element is backed by a reflector driven at the same voltage and in phase with the corresponding capacitive sensing element. Each reflector, in turn, serves to reflect the electric field lines of the capacitive sensing element away from the surface of the machine on which the sensor is mounted, so as to enhance the component constituted by the capacitance between the sensing element and an intruding object as a fraction of the total capacitance between the sensing element and ground. Each sensing element and corresponding reflecting element are electrically driven in phase, and the capacitance between the individual sensing elements and the sensed object is determined using circuitry known to the art. The reflector may be shaped to shield the sensor and to shape its field of view, in effect providing an electrostatic lensing effect. Sensors and reflectors may be fabricated using a variety of known techniques, such as vapor deposition, sputtering, painting, plating, or deformation of flexible films, to provide conformal coverage of surfaces of arbitrary shape.

  15. 3-D MAPPING TECHNOLOGIES FOR HIGH LEVEL WASTE TANKS

    SciTech Connect

    Marzolf, A.; Folsom, M.

    2010-08-31

    This research investigated four techniques that could be applicable for mapping of solids remaining in radioactive waste tanks at the Savannah River Site: stereo vision, LIDAR, flash LIDAR, and Structure from Motion (SfM). Stereo vision is the least appropriate technique for the solids mapping application. Although the equipment cost is low and repackaging would be fairly simple, the algorithms to create a 3D image from stereo vision would require significant further development and may not even be applicable since stereo vision works by finding disparity in feature point locations from the images taken by the cameras. When minimal variation in visual texture exists for an area of interest, it becomes difficult for the software to detect correspondences for that object. SfM appears to be appropriate for solids mapping in waste tanks. However, equipment development would be required for positioning and movement of the camera in the tank space to enable capturing a sequence of images of the scene. Since SfM requires the identification of distinctive features and associates those features to their corresponding instantiations in the other image frames, mockup testing would be required to determine the applicability of SfM technology for mapping of waste in tanks. There may be too few features to track between image frame sequences to employ the SfM technology since uniform appearance may exist when viewing the remaining solids in the interior of the waste tanks. Although scanning LIDAR appears to be an adequate solution, the expense of the equipment ($80,000-$120,000) and the need for further development to allow tank deployment may prohibit utilizing this technology. The development would include repackaging of equipment to permit deployment through the 4-inch access ports and to keep the equipment relatively uncontaminated to allow use in additional tanks. 3D flash LIDAR has a number of advantages over stereo vision, scanning LIDAR, and SfM, including full frame

  16. 3D technology for intelligent trackers

    NASA Astrophysics Data System (ADS)

    Lipton, Ronald

    2010-10-01

    At Super-LHC luminosity it is expected that the standard suite of level 1 triggers for CMS will saturate. Information from the tracker will be needed to reduce trigger rates to satisfy the level 1 bandwidth. Tracking trigger modules which correlate information from closely-spaced sensor layers to form an on-detector momentum filter are being developed by several groups. We report on a trigger module design which utilizes three dimensional integrated circuit technology incorporating chips which are connected both to the top and bottom sensor, providing the ability to filter information locally. A demonstration chip, the VICTR, has been submitted to the Chartered/Tezzaron two-tier 3D run coordinated by Fermilab. We report on the 3D design concept, the status of the VICTR chip and associated sensor integration utilizing oxide bonding.

  17. 3D Technology for intelligent trackers

    SciTech Connect

    Lipton, Ronald; /Fermilab

    2010-09-01

    At Super-LHC luminosity it is expected that the standard suite of level 1 triggers for CMS will saturate. Information from the tracker will be needed to reduce trigger rates to satisfy the level 1 bandwidth. Tracking trigger modules which correlate information from closely-spaced sensor layers to form an on-detector momentum filter are being developed by several groups. We report on a trigger module design which utilizes three dimensional integrated circuit technology incorporating chips which are connected both to the top and bottom sensor, providing the ability to filter information locally. A demonstration chip, the VICTR, has been submitted to the Chartered/Tezzaron two-tier 3D run coordinated by Fermilab. We report on the 3D design concept, the status of the VICTR chip and associated sensor integration utilizing oxide bonding.

  18. The Use of a Lidar Forward-Looking Turbulence Sensor for Mixed-Compression Inlet Unstart Avoidance and Gross Weight Reduction on a High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Soreide, David; Bogue, Rodney K.; Ehernberger, L. J.; Seidel, Jonathan

    1997-01-01

    Inlet unstart causes a disturbance akin to severe turbulence for a supersonic commercial airplane. Consequently, the current goal for the frequency of unstarts is a few times per fleet lifetime. For a mixed-compression inlet, there is a tradeoff between propulsion system efficiency and unstart margin. As the unstart margin decreases, propulsion system efficiency increases, but so does the unstart rate. This paper intends, first, to quantify that tradeoff for the High Speed Civil Transport (HSCT) and, second, to examine the benefits of using a sensor to detect turbulence ahead of the airplane. When the presence of turbulence is known with sufficient lead time to allow the propulsion system to adjust the unstart margin, inlet unstarts can be minimized while overall efficiency is maximized. The NASA Airborne Coherent Lidar for Advanced In-Flight Measurements program is developing a lidar system to serve as a prototype of the forward-looking sensor. This paper reports on the progress of this development program and its application to the prevention of inlet unstart in a mixed-compression supersonic inlet. Quantified benefits include significantly reduced takeoff gross weight (TOGW), which could increase payload, reduce direct operating costs, or increase range for the HSCT.

  19. Measuring and mapping forest wildlife habitat characteristics using LiDAR remote sensing and multi-sensor function

    NASA Astrophysics Data System (ADS)

    Hyde, Peter

    Managing forests for multiple, often competing uses is challenging; managing Sierra National Forest's fire regime and California spotted owl habitat is difficult and compounded by a lack of information about habitat quality. Consistent and accurate measurements of forest structure will reduce uncertainties regarding the amount of habitat reduction or alteration that spotted owls can tolerate. Current methods of measuring spotted owl habitat are mostly field-based and emphasize the importance of canopy cover. However, this is more because of convenience than because canopy cover is a definitive predictor of owl presence or fecundity. Canopy cover is consistently and accurately measured in the field using a moosehorn densitometer; comparable measurements can be made using airphoto interpretation or by examining satellite imagery, but the results are not consistent. LiDAR remote sensing can produce consistent and accurate measurements of canopy cover, as well as of other aspects of forest structure (such as canopy height and biomass) that are known or thought to be at least as predictive as canopy cover. Moreover, LiDAR can be used to produce maps of forest structure rather than the point samples available from field measurements. However, LiDAR data sets are expensive and not available everywhere. Combining LiDAR with other remote sensing data sets with less expensive, wall-to-wall coverage will result in broader-scale maps of forest structure than have heretofore been possible; these maps can then be used to analyze spotted owl habitat. My work consists of three parts: comparison of LiDAR estimates of forest structure with field measurements, statistical fusion of LiDAR and other remote sensing data sets to produce broad-scale maps of forest structure, and analysis of California spotted owl presence and fecundity as a function of LiDAR-derived canopy structure. I found that LiDAR was able to replicate field measurements accurately. Additionally, I was able to

  20. 3-D laser radar simulation for autonomous spacecraft landing

    NASA Technical Reports Server (NTRS)

    Reiley, Michael F.; Carmer, Dwayne C.; Pont, W. F.

    1991-01-01

    A sophisticated 3D laser radar sensor simulation, developed and applied to the task of autonomous hazard detection and avoidance, is presented. This simulation includes a backward ray trace to sensor subpixels, incoherent subpixel integration, range dependent noise, sensor point spread function effects, digitization noise, and AM-CW modulation. Specific sensor parameters, spacecraft lander trajectory, and terrain type have been selected to generate simulated sensor data.

  1. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  2. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  3. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This area of terrain near the Sagan Memorial Station was taken on Sol 3 by the Imager for Mars Pathfinder (IMP). 3D glasses are necessary to identify surface detail.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  4. Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.

    PubMed

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We achieve real-time ground segmentation by proposing an approach that minimizes comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed in about 19.31 ms per frame.
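
    The voxel/lowermost-heightmap step described above can be sketched in a few lines: quantise points into an XY grid, keep the lowest height per cell, and label points close to that floor as ground. Parameter values here are illustrative, not the paper's:

        import numpy as np

        def ground_mask(points, voxel=0.2, height_tol=0.3):
            """Label LiDAR points as ground if they lie within height_tol
            of the lowermost height recorded in their XY voxel column."""
            pts = np.asarray(points, float)
            ij = np.floor(pts[:, :2] / voxel).astype(np.int64)
            keys, inv = np.unique(ij, axis=0, return_inverse=True)
            floor = np.full(len(keys), np.inf)
            np.minimum.at(floor, inv, pts[:, 2])   # lowermost heightmap
            return pts[:, 2] - floor[inv] < height_tol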

  5. Automatic 3d Building Reconstruction from a Dense Image Matching Dataset

    NASA Astrophysics Data System (ADS)

    McClune, Andrew P.; Mills, Jon P.; Miller, Pauline E.; Holland, David A.

    2016-06-01

    Over the last 20 years the demand for three dimensional (3D) building models has resulted in a vast amount of research being conducted in attempts to automate the extraction and reconstruction of models from airborne sensors. Recent results have shown that current methods tend to favour planar fitting procedures from lidar data, which are able to successfully reconstruct simple roof structures automatically but fail to reconstruct more complex structures or roofs with small artefacts. Current methods have also not fully explored the potential of recent developments in digital photogrammetry. Large format digital aerial cameras can now capture imagery with increased overlap and a higher spatial resolution, increasing the number of pixel correspondences between images. Every pixel in each stereo pair can also now be matched using per-pixel algorithms, which has given rise to the approach known as dense image matching. This paper presents an approach to 3D building reconstruction to try and overcome some of the limitations of planar fitting procedures. Roof vertices, extracted from true-orthophotos using edge detection, are refined and converted to roof corner points. By determining the connection between extracted corner points, a roof plane can be defined as a closed-cycle of points. Presented results demonstrate the potential of this method for the reconstruction of complex 3D building models at CityGML LoD2 specification.
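
    The first step of the pipeline, extracting roof edges from true-orthophotos, can be approximated with standard edge and line detectors; the snippet below is only a generic stand-in for the paper's edge-detection stage (file name and thresholds are illustrative):

        import numpy as np
        import cv2

        ortho = cv2.imread("orthophoto_tile.tif", cv2.IMREAD_GRAYSCALE)
        blurred = cv2.GaussianBlur(ortho, (5, 5), 1.5)   # suppress texture
        edges = cv2.Canny(blurred, 50, 150)              # candidate roof edges
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                                minLineLength=20, maxLineGap=3)
        # 'lines' holds segments whose intersections yield roof corner points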

  6. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    PubMed Central

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We achieve real-time ground segmentation by proposing an approach that minimizes comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed in about 19.31 ms per frame. PMID:25093204

  7. Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.

    PubMed

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We achieve real-time ground segmentation by proposing an approach that minimizes comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed in about 19.31 ms per frame. PMID:25093204

  8. Diode laser lidar wind velocity sensor using a liquid-crystal retarder for non-mechanical beam-steering.

    PubMed

    Rodrigo, Peter John; Iversen, Theis F Q; Hu, Qi; Pedersen, Christian

    2014-11-01

    We extend the functionality of a low-cost CW diode laser coherent lidar from radial wind speed (scalar) sensing to wind velocity (vector) measurements. Both the speed and the horizontal direction of the wind at a remote distance of ~80 m are derived from two successive radial speed estimates by alternately steering the lidar probe beam along two different lines of sight (LOS) with a 60° angular separation. Dual-LOS beam-steering is implemented optically, with no moving parts, by means of a controllable liquid-crystal retarder (LCR). The LCR switches the polarization of the lidar beam between two orthogonal linear states so that it either transmits through or reflects off a polarization splitter. The room-temperature switching time between the two LOS is measured to be on the order of 100 μs in one switch direction but 16 ms in the opposite transition. Radial wind speed measurement (at a 33 Hz rate) while the lidar beam is repeatedly steered from one LOS to the other every half second is experimentally demonstrated, resulting in 1 Hz estimates of wind velocity magnitude and direction with resolutions better than 0.1 m/s and 1°, respectively.
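
    Since each radial speed is the projection of the horizontal wind onto its line of sight, two LOS separated by 60° give a 2x2 linear system for the wind vector. A worked sketch of this geometry (the azimuth convention is an assumption for illustration):

        import numpy as np

        def wind_from_two_los(v1, v2, az1_deg=0.0, az2_deg=60.0):
            """Horizontal wind speed and direction from two radial speeds."""
            a1, a2 = np.radians([az1_deg, az2_deg])
            A = np.array([[np.sin(a1), np.cos(a1)],
                          [np.sin(a2), np.cos(a2)]])  # LOS unit vectors (E, N)
            east, north = np.linalg.solve(A, [v1, v2])
            speed = np.hypot(east, north)
            azimuth = np.degrees(np.arctan2(east, north)) % 360.0
            return speed, azimuth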

  10. Studying the Impact of the Three Dimensional Canopy Structure on LIDAR Waveforms Evaluated with Field Measurements

    NASA Astrophysics Data System (ADS)

    Xu, L.; Knyazikhin, Y.; Myneni, R. B.; Strahler, A. H.; Schaaf, C.; Antonarakis, A. S.; Moorcroft, P. R.

    2011-12-01

    The three-dimensional structure of a forest - its composition, density, height, crown geometry, within-crown foliage distribution and properties of individual leaves - has a direct impact on the lidar waveform. The pair-correlation function, defined as the probability of simultaneously finding phytoelements at two points, is the most natural and physically meaningful descriptor of canopy structure over a wide range of scales. The stochastic radiative transfer equations naturally admit this measure and thus provide a powerful means to investigate 3D canopy structure from space. NASA's Airborne Laser Vegetation Imaging Sensor (LVIS) and ground-based data on canopy structure acquired over 5 sites in New England, California and La Selva (Costa Rica) tropical forest were analyzed to assess the impact of 3D canopy structure on the lidar waveform and the ability of the stochastic radiative transfer equations to simulate the 3D effects. Our results suggest the pair-correlation function is sensitive to horizontal and vertical clumping, crown geometry and the spatial distribution of trees. Its use in the stochastic radiative transfer equation allows us to accurately simulate the effects of 3D canopy structure on the lidar waveform. Specifically, we found that (1) attenuation of the waveform occurs at a slower rate than 1D models predict; this may result in an underestimation of the foliage profile if 3D effects are ignored; (2) a 1D model is unable to match both the simulated waveform and the measured surface reflectance, i.e., an unrealistically high value of surface reflectance must be used to simulate the ground return of sparse vegetation; (3) the spatial distribution of trees has a strong impact on the lidar waveform. Simple analytical models of the pair-correlation function will also be discussed.
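
    The pair-correlation descriptor can be estimated numerically from scatterer positions. A naive sketch follows, ignoring edge effects, assuming a uniform bounding box, and using invented points; it is not the stochastic radiative transfer formulation used in the study.

```python
import numpy as np
from scipy.spatial.distance import pdist

def pair_correlation(points, box_volume, r_max=5.0, nbins=50):
    """Naive estimate of the pair-correlation function g(r) for a set of
    scatterer positions (e.g. foliage elements), ignoring edge effects."""
    n = len(points)
    d = pdist(points)                        # all pairwise distances
    edges = np.linspace(0.0, r_max, nbins + 1)
    counts, _ = np.histogram(d, bins=edges)
    r = 0.5 * (edges[:-1] + edges[1:])
    dr = edges[1] - edges[0]
    shell = 4.0 * np.pi * r**2 * dr          # volume of each spherical shell
    density = n / box_volume
    expected = 0.5 * n * density * shell     # pairs expected for a random medium
    return r, counts / expected              # g(r) ~ 1 for a Poisson "canopy"

pts = np.random.rand(2000, 3) * 20.0         # uniform points in a 20 m box
r, g = pair_correlation(pts, box_volume=20.0**3)
print(g[:5])                                 # hovers around 1 (noisy at small r)
```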

  11. In situ correlative measurements for the ultraviolet differential absorption lidar and the high spectral resolution lidar air quality remote sensors: 1980 PEPE/NEROS program

    NASA Technical Reports Server (NTRS)

    Gregory, G. L.; Beck, S. M.; Mathis, J. J., Jr.

    1981-01-01

    In situ correlative measurements were obtained with a NASA aircraft in support of two NASA airborne remote sensors participating in the Environmental Protection Agency's 1980 persistent elevated pollution episode (PEPE) and Northeast regional oxidant study (NEROS) field program, in order to provide data for evaluating the capability of the two remote sensors for measuring mixing layer height and ozone and aerosol concentrations in the troposphere during the 1980 PEPE/NEROS program. The in situ aircraft was instrumented to measure temperature, dewpoint temperature, ozone concentration, and light scattering coefficient. In situ measurements for ten correlative missions are given and discussed. Each data set is presented in graphical and tabular format; aircraft flight plans are included.

  12. Oceanic Lidar

    NASA Technical Reports Server (NTRS)

    Carder, K. L. (Editor)

    1981-01-01

    Instrument concepts which measure ocean temperature, chlorophyll, sediment and Gelbstoffe concentrations in three dimensions on a quantitative, quasi-synoptic basis were considered. Coastal zone color scanner chlorophyll imagery, laser-stimulated Raman temperature and fluorescence spectroscopy, existing airborne Lidar and laser fluorosensing instruments, and their accuracies in quantifying concentrations of chlorophyll, suspended sediments and Gelbstoffe are presented. Lidar applications to phytoplankton dynamics and photochemistry, Lidar radiative transfer and signal interpretation, and Lidar technology are discussed.

  13. 3D IC for Future HEP Detectors

    SciTech Connect

    Thom, J.; Lipton, R.; Heintz, U.; Johnson, M.; Narain, M.; Badman, R.; Spiegel, L.; Triphati, M.; Deptuch, G.; Kenney, C.; Parker, S.; Ye, Z.; Siddons, D.

    2014-11-07

    Three-dimensional integrated circuit technologies offer the possibility of fabricating large-area arrays of sensors integrated with complex electronics with minimal dead area, which makes them ideally suited for applications in the upgraded LHC detectors and other future detectors. Here we describe ongoing R&D efforts to demonstrate the functionality of components of such detectors. This also includes the study of integrated 3D electronics with active edge sensors to produce "active tiles" which can be tested and assembled into arrays of arbitrary size with high yield.

  14. Physical sensor difference-based method and virtual sensor difference-based method for visual and quantitative estimation of lower limb 3D gait posture using accelerometers and magnetometers.

    PubMed

    Liu, Kun; Inoue, Yoshio; Shibata, Kyoko

    2012-01-01

    An approach using a physical sensor difference-based algorithm and a virtual sensor difference-based algorithm to visually and quantitatively estimate lower limb posture is proposed. Three accelerometers and two MAG(3)s (inertial sensor modules) were used to measure the accelerations and magnetic field data for the calculation of flexion/extension (FE) and abduction/adduction (AA) angles of the hip joint and FE, AA and internal/external rotation (IE) angles of the knee joint; the trajectories of the knee and ankle joints were then obtained from the joint angles and segment lengths. There was no integration of acceleration or angular velocity for the joint rotations and positions, which is an improvement on the previous method in the recent literature. Compared with the camera motion capture system, the correlation coefficients in five trials were above 0.91 and 0.92 for the hip FE and AA, respectively, and higher than 0.94, 0.93 and 0.93 for the knee joint FE, AA and IE, respectively.
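
    The trajectory step, joint positions from joint angles and segment lengths, is plain forward kinematics. Below is a sagittal-plane (2D) simplification with invented segment lengths; the paper works in 3D with the AA and IE angles as well.

```python
import numpy as np

def leg_positions(hip_fe_deg, knee_fe_deg, thigh=0.45, shank=0.42):
    """Sagittal-plane forward kinematics: knee and ankle positions (relative
    to the hip) from hip/knee flexion angles and segment lengths in metres."""
    h = np.radians(hip_fe_deg)
    k = np.radians(knee_fe_deg)
    knee = np.array([thigh * np.sin(h), -thigh * np.cos(h)])
    # knee flexion rotates the shank back relative to the thigh direction
    ankle = knee + np.array([shank * np.sin(h - k), -shank * np.cos(h - k)])
    return knee, ankle

print(leg_positions(30.0, 15.0))
```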

  15. Point Cloud Refinement with a Target-Free Intrinsic Calibration of a Mobile Multi-Beam LIDAR System

    NASA Astrophysics Data System (ADS)

    Nouira, H.; Deschaud, J. E.; Goulette, F.

    2016-06-01

    LIDAR sensors are widely used in mobile mapping systems. Mobile mapping platforms allow fast acquisition in cities, for example, which would take much longer with static mapping systems. LIDAR sensors provide reliable and precise 3D information, which can be used in various applications: mapping of the environment; localization of objects; detection of changes. Also, with recent developments, multi-beam LIDAR sensors have appeared, and are able to provide a high amount of data with a high level of detail. A mono-beam LIDAR sensor mounted on a mobile platform needs an extrinsic calibration, so that the data acquired and registered in the sensor reference frame can be represented in the body reference frame modeling the mobile system. For a multi-beam LIDAR sensor, the calibration can be separated into two distinct parts: on one hand, there is an extrinsic calibration, in common with mono-beam LIDAR sensors, which gives the transformation between the sensor Cartesian reference frame and the body reference frame. On the other hand, there is an intrinsic calibration, which gives the relations between the beams of the multi-beam sensor. This calibration depends on a model given by the manufacturer, but the model can be non-optimal, which introduces errors and noise into the acquired point clouds. In the literature, some optimizations of the calibration parameters have been proposed, but they need a specific routine or environment, which can be constraining and time-consuming. In this article, we present an automatic method for improving the intrinsic calibration of a multi-beam LIDAR sensor, the Velodyne HDL-32E. The proposed approach does not need any calibration target and only uses information from the acquired point clouds, which makes it simple and fast to use. A corrected model for the Velodyne sensor is also proposed. An energy function which penalizes points far from local planar surfaces is used to optimize the different proposed parameters.
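
    Such a planarity energy can be prototyped with a k-nearest-neighbour plane fit; a numpy/scipy sketch follows. The k=10 neighbourhood and the PCA plane fit are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def planarity_energy(points, k=10):
    """Energy that penalizes points far from locally fitted planes, a common
    objective for target-free intrinsic calibration refinement."""
    tree = cKDTree(points)
    _, nbr = tree.query(points, k=k)
    energy = 0.0
    for i, idx in enumerate(nbr):
        nb = points[idx]
        c = nb.mean(axis=0)
        # smallest eigenvector of the local covariance = plane normal
        w, v = np.linalg.eigh(np.cov((nb - c).T))
        normal = v[:, 0]
        energy += float(np.dot(points[i] - c, normal) ** 2)  # point-to-plane residual
    return energy

pts = np.random.rand(500, 3)
pts[:, 2] *= 0.01                 # a noisy near-planar patch -> small energy
print(planarity_energy(pts))
```

    In a real refinement loop one would re-project the raw range measurements with candidate intrinsic parameters and minimize this energy over them, e.g. with scipy.optimize.minimize.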

  16. Performance testing of 3D point cloud software

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-10-01

    LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available in the market, including open source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD Civil 3D and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in point cloud loading time and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.
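
    A minimal harness for the loading-time and memory metrics used in such tests might look as follows; psutil is an assumed dependency, cloud.xyz a hypothetical file, and a real benchmark would call each suite's own loader.

```python
import os
import time
import numpy as np
import psutil

def measure_load(path):
    """Time a point-cloud load and record process memory, in the spirit of
    the loading-time / working-set tests described above."""
    proc = psutil.Process(os.getpid())
    t0 = time.perf_counter()
    pts = np.loadtxt(path)                 # stand-in loader; real suites differ
    dt = time.perf_counter() - t0
    rss = proc.memory_info().rss / 2**20   # resident set size in MiB
    return len(pts), dt, rss

# usage (hypothetical file): n, secs, mib = measure_load("cloud.xyz")
```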

  17. Lidar instruments proposed for Eos

    NASA Technical Reports Server (NTRS)

    Grant, William B.; Browell, Edward V.

    1990-01-01

    Lidar, an acronym for light detection and ranging, represents a class of instruments that utilize lasers to send probe beams into the atmosphere or onto the surface of the Earth and detect the backscattered return in order to measure properties of the atmosphere or surface. The associated technology has matured to the point where two lidar facilities, the Geodynamics Laser Ranging System (GLRS) and the Laser Atmospheric Wind Sensor (LAWS), were accepted for Phase 2 studies for Eos. A third lidar facility, the Laser Atmospheric Sounder and Altimeter (LASA), with the lidar experiment EAGLE (Eos Atmospheric Global Lidar Experiment), was proposed for Eos. The generic lidar system has a number of components. They include controlling electronics, laser transmitters, collimating optics, a receiving telescope, spectral filters, detectors, signal chain electronics, and a data system. Lidar systems that measure atmospheric constituents or meteorological parameters record the signal versus time as the beam propagates through the atmosphere. The backscatter arises from molecular (Rayleigh) and aerosol (Mie) scattering, while attenuation arises from molecular and aerosol scattering and absorption. Lidar systems that measure distance to the Earth's surface or to retroreflectors in a ranging mode record signals with high temporal resolution over a short time period. The overall characteristics and measurement objectives of the three lidar systems proposed for Eos are given.
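
    The signal-versus-time record described here follows the single-scatter elastic lidar equation, P(R) ∝ β(R) exp(-2∫σ dR)/R²; a schematic numerical version with arbitrary constant profiles is sketched below.

```python
import numpy as np

def elastic_return(R, beta, sigma, K=1.0):
    """Single-scatter elastic lidar equation: received power versus range R
    for a backscatter profile beta(R) and extinction profile sigma(R)."""
    dR = R[1] - R[0]
    tau = np.cumsum(sigma) * dR                    # one-way optical depth
    return K * beta * np.exp(-2.0 * tau) / R**2    # two-way attenuation, 1/R^2 spread

R = np.linspace(100.0, 10000.0, 1000)              # metres
beta = np.full_like(R, 1e-6)                       # arbitrary constant backscatter
sigma = np.full_like(R, 1e-4)                      # arbitrary constant extinction
print(elastic_return(R, beta, sigma)[:3])
```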

  18. Validation of satellite overland retrievals of AOD at northern high latitudes with coincident measurements from airborne sunphotometer, lidar, and in situ sensors during ARCTAS

    NASA Astrophysics Data System (ADS)

    Livingston, J. M.; Shinozuka, Y.; Redemann, J.; Russell, P. B.; Ramachandran, S.; Johnson, R. R.; Clarke, A. D.; Howell, S. G.; McNaughton, C.; Freitag, S.; Kapustin, V. N.; Ferrare, R. A.; Hostetler, C. A.; Hair, J. W.; Torres, O.; Veefkind, P.; Remer, L. A.; Mattoo, S.; Levy, R. C.; Chu, A. D.; Kahn, R. A.; Davis, M. R.

    2009-12-01

    The 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) field campaign presented a unique opportunity for validation of satellite retrievals of aerosol optical depth (AOD) over a variety of surfaces at northern high latitudes. In particular, the 14-channel NASA Ames Airborne Tracking Sunphotometer (AATS-14) was operated together with a variety of in situ and other remote sensors aboard the NASA P-3B research aircraft during both the spring and summer phases of ARCTAS. Among the in situ sensors were a nephelometer and particle soot absorption photometer (PSAP) operated by the University of Hawaii Group for Environmental Aerosol Research (HiGEAR). P-3B science missions included several coincident underflights of the Terra and A-Train satellites during a variety of aerosol loading conditions, including Arctic haze and smoke plumes from boreal forest fires. In this presentation, we will compare AATS-14 AOD spectra, adjusted for the contribution from the layer below the aircraft using the HiGEAR scattering and absorption measurements, with full-column AOD retrievals from coincident measurements by satellite sensors such as MISR, MODIS, OMI, and POLDER. We also intend to show comparisons of aerosol extinction derived from AATS-14 measurements during P-3B vertical profiles with coincident measurements from CALIOP aboard the CALIPSO satellite and from the high spectral resolution lidar (HSRL) flown aboard the NASA B-200 aircraft.

  19. STELLOPT Modeling of the 3D Diagnostic Response in ITER

    SciTech Connect

    Lazerson, Samuel A

    2013-05-07

    The ITER three-dimensional diagnostic response to an n=3 resonant magnetic perturbation is modeled using the STELLOPT code. The in-vessel coils apply a resonant magnetic perturbation (RMP) field which generates a 4 cm edge displacement from axisymmetry, as modeled by the VMEC 3D equilibrium code. Forward modeling of flux loop and magnetic probe response with the DIAGNO code indicates up to 20% changes in measured plasma signals. Simulated LIDAR measurements of electron temperature indicate 2 cm shifts on the low field side of the plasma. This suggests that the ITER diagnostics will be able to diagnose the 3D structure of the equilibria.

  20. A Conceptual Design For A Spaceborne 3D Imaging Lidar

    NASA Technical Reports Server (NTRS)

    Degnan, John J.; Smith, David E. (Technical Monitor)

    2002-01-01

    First-generation spaceborne altimetric approaches are not well suited to generating the few-meter horizontal resolution and decimeter-accuracy vertical (range) resolution on the global scale desired by many in the Earth and planetary science communities. The present paper discusses the major technological impediments to achieving few-meter transverse resolutions globally using conventional approaches and offers a feasible conceptual design which utilizes modest-power kHz-rate lasers, array detectors, photon-counting multi-channel timing receivers, and dual-wedge optical scanners with transmitter point-ahead correction.
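
    A dual-wedge (Risley prism) scanner of the kind mentioned traces a rosette pattern: in the small-angle approximation the beam deflection is the sum of two rotating vectors. The spin rates and wedge deviations below are purely illustrative.

```python
import numpy as np

def dual_wedge_pattern(t, delta1=1.0, delta2=1.0, f1=100.0, f2=-97.0):
    """Far-field deflection (degrees) of two rotating optical wedges with
    deviations delta1/delta2 and spin rates f1/f2 (Hz): a rosette scan."""
    a1 = 2.0 * np.pi * f1 * t
    a2 = 2.0 * np.pi * f2 * t
    x = delta1 * np.cos(a1) + delta2 * np.cos(a2)
    y = delta1 * np.sin(a1) + delta2 * np.sin(a2)
    return x, y

t = np.linspace(0.0, 0.1, 5000)
x, y = dual_wedge_pattern(t)             # plot x vs y to see the rosette
print(x.min(), x.max())                  # deflection spans +/- (delta1 + delta2)
```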

  1. Lidar Report

    SciTech Connect

    Woolpert

    2009-04-01

    This report provides an overview of the LiDAR acquisition methodology employed by Woolpert on the 2009 USDA - Savannah River LiDAR Site Project. LiDAR system parameters and flight and equipment information are also included. The LiDAR data acquisition was executed in ten sessions from February 21 through final reflights on March 2, 2009, using two Leica ALS50-II 150 kHz multi-pulse-enabled LiDAR systems. Specific details about the ALS50-II systems are included in Section 4 of this report.

  2. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called 'lidar' (light detection and ranging) to create a highly accurate virtual 3D map of the Nation. 3D maps have many uses, with new uses being discovered all the time.

  4. Flexible building primitives for 3D building modeling

    NASA Astrophysics Data System (ADS)

    Xiong, B.; Jancosek, M.; Oude Elberink, S.; Vosselman, G.

    2015-03-01

    3D building models, being the main part of a digital city scene, are essential to all applications related to human activities in urban environments. The development of range sensors and Multi-View Stereo (MVS) technology facilitates our ability to automatically reconstruct level of detail 2 (LoD2) models of buildings. However, because of the high complexity of building structures, no fully automatic system is currently available for producing building models. In order to simplify the problem, much research focuses only on particular, and relatively simple, building shapes. In this paper, we analyze the properties of topology graphs of object surfaces, and find that roof topology graphs have three basic elements: loose nodes, loose edges, and minimum cycles. These elements have interesting physical meanings: a loose node is a building with one roof face; a loose edge is a ridge line between two roof faces whose end points are not defined by a third roof face; and a minimum cycle represents a roof corner of a building. Building primitives, which introduce building shape knowledge, are defined according to these three basic elements. All buildings can then be represented by combining such building primitives. The building parts are searched according to the predefined building primitives, reconstructed independently, and grouped into a complete building model in CSG style. The shape knowledge is inferred via the building primitives and used as constraints to improve the building models, in which all roof parameters are adjusted simultaneously. Experiments show the flexibility of building primitives in both lidar point clouds and stereo point clouds.
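
    The three graph elements can be extracted with standard graph routines. A sketch using networkx follows, in which "minimum cycles" are approximated by a cycle basis; the toy roof graph is invented.

```python
import networkx as nx

def roof_graph_elements(g):
    """Split a roof topology graph into the three basic elements described
    above: loose nodes, loose edges, and (approximate) minimum cycles."""
    loose_nodes = [n for n in g.nodes if g.degree(n) == 0]   # single-face roofs
    cycles = nx.cycle_basis(g)                               # roof corners
    in_cycle = {frozenset(e) for c in cycles
                for e in zip(c, c[1:] + c[:1])}
    loose_edges = [e for e in g.edges
                   if frozenset(e) not in in_cycle]          # ridges w/o a 3rd face
    return loose_nodes, loose_edges, cycles

g = nx.Graph([(1, 2), (2, 3), (3, 1), (3, 4)])  # a hip-roof cycle plus one ridge
g.add_node(5)                                   # an isolated single-face roof
print(roof_graph_elements(g))
```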

  5. Cloud Property Retrieval and 3D Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Cahalan, Robert F.

    2003-01-01

    Cloud thickness and photon mean-free-path together determine the scale of "radiative smoothing" of cloud fluxes and radiances. This scale is observed as a change in the spatial spectrum of cloud radiances, and also as the "halo size" seen by off-beam lidar such as THOR and WAIL. Such off-beam lidar returns are now being used to retrieve cloud layer thickness and the vertical scattering extinction profile. We illustrate with recent measurements taken at the Oklahoma ARM site, comparing these to time-dependent 3D simulations. These and other measurements sensitive to 3D transfer in clouds, coupled with Monte Carlo and other 3D transfer methods, are providing a better understanding of the dependence of radiation on cloud inhomogeneity, and suggesting new retrieval algorithms appropriate for inhomogeneous clouds. The international Intercomparison of 3D Radiation Codes (I3RC) program is coordinating and evaluating the variety of 3D radiative transfer methods now available, and making them more widely available. Information is on the Web at: http://i3rc.gsfc.nasa.gov/. Input consists of selected cloud fields derived from data sources such as radar, microwave and satellite, and from models involved in the GEWEX Cloud Systems Studies. Output is selected radiative quantities that characterize the large-scale properties of the fields of radiative fluxes and heating. Several example cloud fields will be used to illustrate. I3RC is currently implementing an "open source" 3D code capable of solving the baseline cases. Maintenance of this effort is one of the goals of a new 3DRT Working Group under the International Radiation Commission. It is hoped that the 3DRT WG will include active participation by land and ocean modelers as well, such as the 3D vegetation modelers participating in RAMI.

  6. Point Cloud Visualization in an Open Source 3D Globe

    NASA Astrophysics Data System (ADS)

    De La Calle, M.; Gómez-Deck, D.; Koehler, O.; Pulido, F.

    2011-09-01

    During the last years the usage of 3D applications in GIS has become more popular. Since the appearance of Google Earth, users are familiar with 3D environments. On the other hand, computers with 3D acceleration are now common, broadband access is widespread and the public information that can be used in GIS clients able to use data from the Internet is constantly increasing. There are currently several libraries suitable for this kind of application. Based on these facts, and using libraries that are already developed and connected to our own developments, we are working on the implementation of a real 3D GIS with analysis capabilities. Since a 3D GIS such as this can be very interesting for tasks like LiDAR or laser scanner point cloud rendering and analysis, special attention is given to optimal handling of very large data sets. Glob3 will be a multidimensional GIS in which 3D point clouds can be explored and analysed, even if they consist of several million points. The latest addition to our visualization libraries is the development of a point cloud server that works regardless of the cloud's size. The server receives and processes requests from a 3D client (for example glob3, but it could be any other, such as one based on WebGL) and delivers the data in the form of pre-processed tiles, depending on the required level of detail.
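
    The level-of-detail delivery can be pictured as per-tile decimation on a grid whose cell size depends on the requested level. The sketch below shows only that idea, with invented tile sizes; the actual Glob3 server pre-processes its tiles.

```python
import numpy as np

def tile_for_lod(points, level, tile_size=64.0):
    """Server-side sketch: decimate a tile's points according to the
    requested level of detail (finer grid cells at higher levels)."""
    cell = tile_size / (2 ** level)
    keys = np.floor(points / cell).astype(np.int64)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return points[keep]                  # one representative point per cell

pts = np.random.rand(100000, 3) * 64.0
for lod in range(4):
    print(lod, len(tile_for_lod(pts, lod)))   # point count grows with LOD
```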

  7. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  8. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  10. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  11. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging beyond real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the estimated volumes from 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria developed by us for the 3D imaging of processed data. Great differences were found between the estimated volumes of the findings of the liver for the three different techniques applied. 3D ultrasound represents a valuable method to judge morphological appearance of abdominal findings. The possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to find out whether definite differentiation between benign and malignant findings is possible.

  12. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU-funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  13. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  14. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  16. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  17. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  18. Lidar configurations for wind turbine control

    NASA Astrophysics Data System (ADS)

    Mirzaei, Mahmood; Mann, Jakob

    2016-09-01

    Lidar sensors have proved to be very beneficial in the wind energy industry. They can be used for yaw correction, feed-forward pitch control and load verification. However, the current lidars are expensive. One way to reduce the price is to use lidars with few measurement points. Finding the best configuration of an inexpensive lidar in terms of number of measurement points, the measurement distance and the opening angle is the subject of this study. In order to solve the problem, a lidar model is developed and used to measure wind speed in a turbulence box. The effective wind speed measured by the lidar is compared against the effective wind speed on a wind turbine rotor both theoretically and through simulations. The study provides some results to choose the best configuration of the lidar with few measurement points.
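
    The trade-off can be pictured by comparing the rotor-effective wind speed with the average over a few lidar beams on a measurement circle. The frozen "wind field" below is an invented stand-in for the paper's turbulence box.

```python
import numpy as np

rng = np.random.default_rng(1)

def u_field(y, z):
    """Toy frozen streamwise wind field over the rotor plane (m/s)."""
    return 10.0 + np.sin(0.1 * y) + 0.5 * np.cos(0.2 * z)

# rotor-effective wind speed: average over many points on a disc of radius R
R = 40.0
th = rng.uniform(0.0, 2.0 * np.pi, 20000)
r = R * np.sqrt(rng.uniform(0.0, 1.0, 20000))   # uniform sampling of the disc
u_rotor = u_field(r * np.cos(th), r * np.sin(th)).mean()

# lidar-effective wind speed from a few beams on a measurement circle
for n_beams in (3, 5, 50):
    a = np.linspace(0.0, 2.0 * np.pi, n_beams, endpoint=False)
    u_lidar = u_field(0.7 * R * np.cos(a), 0.7 * R * np.sin(a)).mean()
    print(n_beams, f"{abs(u_lidar - u_rotor):.3f}")  # mismatch vs. beam count
```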

  19. The 3D Elevation Program and America's infrastructure

    USGS Publications Warehouse

    Lukas, Vicki; Carswell, Jr., William J.

    2016-11-07

    Infrastructure—the physical framework of transportation, energy, communications, water supply, and other systems—and construction management—the overall planning, coordination, and control of a project from beginning to end—are critical to the Nation’s prosperity. The American Society of Civil Engineers has warned that, despite the importance of the Nation’s infrastructure, it is in fair to poor condition and needs sizable and urgent investments to maintain and modernize it, and to ensure that it is sustainable and resilient. Three-dimensional (3D) light detection and ranging (lidar) elevation data provide valuable productivity, safety, and cost-saving benefits to infrastructure improvement projects and associated construction management. By providing data to users, the 3D Elevation Program (3DEP) of the U.S. Geological Survey reduces users’ costs and risks and allows them to concentrate on their mission objectives. 3DEP includes (1) data acquisition partnerships that leverage funding, (2) contracts with experienced private mapping firms, (3) technical expertise, lidar data standards, and specifications, and (4) most important, public access to high-quality 3D elevation data. The size and breadth of improvements for the Nation’s infrastructure and construction management needs call for an efficient, systematic approach to acquiring foundational 3D elevation data. The 3DEP approach to national data coverage will yield large cost savings over individual project-by-project acquisitions and will ensure that data are accessible for other critical applications.

  20. Three-Dimensional Air Quality System (3D-AQS)

    NASA Astrophysics Data System (ADS)

    Engel-Cox, J.; Hoff, R.; Weber, S.; Zhang, H.; Prados, A.

    2007-12-01

    The Three-Dimensional Air Quality System (3D-AQS) integrates remote sensing observations from a variety of platforms into air quality decision support systems at the U.S. Environmental Protection Agency (EPA), with a focus on particulate air pollution. The decision support systems are the Air Quality System (AQS) / AirQuest database at EPA, the Infusing satellite Data into Environmental Applications (IDEA) system, the U.S. Air Quality weblog (Smog Blog) at UMBC, and the Regional East Atmospheric Lidar Mesonet (REALM). The project includes an end user advisory group with representatives from the air quality community providing ongoing feedback. The 3D-AQS data sets are UMBC ground-based LIDAR, and NASA and NOAA satellite data from MODIS, OMI, AIRS, CALIPSO, MISR, and GASP. Based on end user input, we are co-locating these measurements to the EPA's ground-based air pollution monitors as well as re-gridding them to the Community Multiscale Air Quality (CMAQ) model grid. These data provide forecasters and the scientific community with a tool for assessment, analysis, and forecasting of U.S. air quality. The third dimension and the ability to analyze the vertical transport of particulate pollution are provided by aerosol extinction profiles from the UMBC LIDAR and CALIPSO. We present examples of a 3D visualization tool we are developing to facilitate use of these data. We also present two specific applications of 3D-AQS data. The first is comparisons between PM2.5 monitor data and remote sensing aerosol optical depth (AOD) data, which show moderate agreement but variation with EPA region. The second is a case study for Baltimore, Maryland, as an example of 3D analysis for a metropolitan area. In that case, some improvement is found in the PM2.5/LIDAR correlations when using vertical aerosol information to calculate an AOD below the boundary layer.
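
    Co-locating satellite retrievals to ground monitors typically reduces to a radius search and average. A minimal flat-earth sketch with synthetic data follows; the 25 km radius and the coordinates are invented, not project values.

```python
import numpy as np

def colocate(sat_lat, sat_lon, sat_aod, mon_lat, mon_lon, radius_km=25.0):
    """Average satellite AOD retrievals falling within a radius of each
    ground monitor (flat-earth distance, adequate for small radii)."""
    out = []
    for la, lo in zip(mon_lat, mon_lon):
        dx = (sat_lon - lo) * 111.0 * np.cos(np.radians(la))  # km per degree
        dy = (sat_lat - la) * 111.0
        near = dx**2 + dy**2 <= radius_km**2
        out.append(np.nanmean(sat_aod[near]) if near.any() else np.nan)
    return np.array(out)

# toy usage with synthetic retrievals around one monitor near Baltimore
sat_lat = 39.3 + np.random.randn(500) * 0.5
sat_lon = -76.6 + np.random.randn(500) * 0.5
aod = np.abs(np.random.randn(500) * 0.1 + 0.3)
print(colocate(sat_lat, sat_lon, aod, [39.3], [-76.6]))
```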

  1. Real-time 3D vision solution for on-orbit autonomous rendezvous and docking

    NASA Astrophysics Data System (ADS)

    Ruel, S.; English, C.; Anctil, M.; Daly, J.; Smith, C.; Zhu, S.

    2006-05-01

    Neptec has developed a vision system for the capture of non-cooperative objects on orbit. This system uses an active TriDAR sensor and a model-based tracking algorithm to provide six-degree-of-freedom pose information in real time from mid-range to docking. This system was selected for the Hubble Robotic Vehicle De-orbit Module (HRVDM) mission and for a Detailed Test Objective (DTO) mission to fly on the Space Shuttle. TriDAR (triangulation + LIDAR) technology makes use of a novel approach to 3D sensing by combining triangulation and Time-of-Flight (ToF) active ranging techniques in the same optical path. This approach exploits the complementary nature of these sensing technologies. Real-time tracking of target objects is accomplished using 3D model-based tracking algorithms developed at Neptec in partnership with the Canadian Space Agency (CSA). The system provides 6 degrees of freedom pose estimation and incorporates search capabilities to initiate and recover tracking. Pose estimation is performed using an innovative approach that is faster than traditional techniques. This performance allows the algorithms to operate in real time on the TriDAR's flight-certified embedded processor. This paper presents results from simulation and lab testing demonstrating that the system's performance meets the requirements of a complete tracking system for on-orbit autonomous rendezvous and docking.
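
    The paper's pose estimator is Neptec's own, so the sketch below only illustrates the generic shape of model-based tracking: register each incoming scan to the target's reference model, seeded with the previous pose. The use of open3d ICP here is an assumption for illustration, not the flight software.

```python
import numpy as np
import open3d as o3d

def track_pose(scan, model, init=np.eye(4)):
    """One model-based tracking step: register the latest 3D scan to the
    target's reference model with point-to-point ICP, seeded by the previous
    pose so each update converges in a few iterations."""
    reg = o3d.pipelines.registration.registration_icp(
        scan, model, max_correspondence_distance=0.05, init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return reg.transformation   # 4x4 pose, i.e. 6 degrees of freedom

# usage: pose = track_pose(scan_pcd, model_pcd, init=last_pose)
```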

  2. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or a camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
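
    The keypoint-matching core of such an algorithm can be sketched with OpenCV; the snippet below only computes the residual vertical disparity statistic, not the paper's full roll/pitch/yaw/scale estimation, and the file names are hypothetical.

```python
import cv2
import numpy as np

def vertical_disparity_stats(left, right, max_matches=200):
    """Match keypoints between a stereo pair and report the vertical
    disparity distribution; large values flag a decalibrated rig."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(left, None)
    k2, d2 = orb.detectAndCompute(right, None)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(d1, d2), key=lambda m: m.distance)[:max_matches]
    dy = np.array([k1[m.queryIdx].pt[1] - k2[m.trainIdx].pt[1] for m in matches])
    return float(np.median(dy)), float(np.std(dy))

# usage (hypothetical files):
# med, spread = vertical_disparity_stats(cv2.imread("L.png", 0),
#                                        cv2.imread("R.png", 0))
```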

  3. Study of Droplet Activation in Thin Clouds Using Ground-based Raman Lidar and Ancillary Remote Sensors

    NASA Astrophysics Data System (ADS)

    Rosoldi, Marco; Madonna, Fabio; Gumà Claramunt, Pilar; Pappalardo, Gelsomina

    2015-04-01

    Studies on global climate change show that the effects of aerosol-cloud interactions (ACI) on the Earth's radiation balance and climate, also known as indirect aerosol effects, are the most uncertain among all the effects involving atmospheric constituents and processes (Stocker et al., IPCC, 2013). Droplet activation is the most important and challenging process in the understanding of ACI. It represents the direct microphysical link between aerosols and clouds and it is probably the largest source of uncertainty in estimating indirect aerosol effects. An accurate estimation of aerosol and cloud microphysical and optical properties in proximity to and within the cloud boundaries represents a good framework for the study of droplet activation. This can be obtained by using ground-based profiling remote sensing techniques. In this work, a methodology for the experimental investigation of droplet activation, based on ground-based multi-wavelength Raman lidar and the Doppler radar technique, is presented. The study is focused on the observation of thin liquid water clouds, which are low- or mid-level supercooled clouds characterized by a liquid water path (LWP) lower than about 100 g m-2 (Turner et al., 2007). These clouds are often optically thin, which means that ground-based Raman lidar allows the detection of the cloud top and of the cloud structure above. Broken clouds are primarily inspected to take advantage of their discontinuous structure using ground-based remote sensing. Observations are performed simultaneously with multi-wavelength Raman lidars, a cloud Doppler radar and a microwave radiometer at CIAO (CNR-IMAA Atmospheric Observatory: www.ciao.imaa.cnr.it), in Potenza, Southern Italy (40.60N, 15.72E, 760 m a.s.l.). A statistical study of the variability of optical properties and humidity in the transition from cloudy regions to cloud-free regions surrounding the clouds leads to the identification of threshold values for the optical properties, enabling the

  4. Accepting the T3D

    SciTech Connect

    Rich, D.O.; Pope, S.C.; DeLapp, J.G.

    1994-10-01

    In April, a 128 PE Cray T3D was installed at Los Alamos National Laboratory's Advanced Computing Laboratory as part of the DOE's High-Performance Parallel Processor Program (H4P). In conjunction with CRI, the authors implemented a 30 day acceptance test. The test was constructed in part to help them understand the strengths and weaknesses of the T3D. In this paper, they briefly describe the H4P and its goals. They discuss the design and implementation of the T3D acceptance test and detail issues that arose during the test. They conclude with a set of system requirements that must be addressed as the T3D system evolves.

  5. Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling

    NASA Astrophysics Data System (ADS)

    Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.

    2016-04-01

    Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded to the mobile device, allowing rendering in a 3D environment. Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured

  6. A 3D diamond detector for particle tracking

    NASA Astrophysics Data System (ADS)

    Artuso, M.; Bachmair, F.; Bäni, L.; Bartosik, M.; Beacham, J.; Bellini, V.; Belyaev, V.; Bentele, B.; Berdermann, E.; Bergonzo, P.; Bes, A.; Brom, J.-M.; Bruzzi, M.; Cerv, M.; Chau, C.; Chiodini, G.; Chren, D.; Cindro, V.; Claus, G.; Collot, J.; Costa, S.; Cumalat, J.; Dabrowski, A.; D`Alessandro, R.; de Boer, W.; Dehning, B.; Dobos, D.; Dünser, M.; Eremin, V.; Eusebi, R.; Forcolin, G.; Forneris, J.; Frais-Kölbl, H.; Gan, K. K.; Gastal, M.; Goffe, M.; Goldstein, J.; Golubev, A.; Gonella, L.; Gorišek, A.; Graber, L.; Grigoriev, E.; Grosse-Knetter, J.; Gui, B.; Guthoff, M.; Haughton, I.; Hidas, D.; Hits, D.; Hoeferkamp, M.; Hofmann, T.; Hosslet, J.; Hostachy, J.-Y.; Hügging, F.; Jansen, H.; Janssen, J.; Kagan, H.; Kanxheri, K.; Kasieczka, G.; Kass, R.; Kassel, F.; Kis, M.; Kramberger, G.; Kuleshov, S.; Lacoste, A.; Lagomarsino, S.; Lo Giudice, A.; Maazouzi, C.; Mandic, I.; Mathieu, C.; McFadden, N.; McGoldrick, G.; Menichelli, M.; Mikuž, M.; Morozzi, A.; Moss, J.; Mountain, R.; Murphy, S.; Oh, A.; Olivero, P.; Parrini, G.; Passeri, D.; Pauluzzi, M.; Pernegger, H.; Perrino, R.; Picollo, F.; Pomorski, M.; Potenza, R.; Quadt, A.; Re, A.; Riley, G.; Roe, S.; Sapinski, M.; Scaringella, M.; Schnetzer, S.; Schreiner, T.; Sciortino, S.; Scorzoni, A.; Seidel, S.; Servoli, L.; Sfyrla, A.; Shimchuk, G.; Smith, D. S.; Sopko, B.; Sopko, V.; Spagnolo, S.; Spanier, S.; Stenson, K.; Stone, R.; Sutera, C.; Taylor, A.; Traeger, M.; Tromson, D.; Trischuk, W.; Tuve, C.; Uplegger, L.; Velthuis, J.; Venturi, N.; Vittone, E.; Wagner, S.; Wallny, R.; Wang, J. C.; Weilhammer, P.; Weingarten, J.; Weiss, C.; Wengler, T.; Wermes, N.; Yamouni, M.; Zavrtanik, M.

    2016-07-01

    In the present study, results towards the development of a 3D diamond sensor are presented. Conductive channels are produced inside the sensor bulk using a femtosecond laser. This electrode geometry allows full charge collection even for low quality diamond sensors. Results from testbeam show that charge is collected by these electrodes. In order to understand the channel growth parameters, with the goal of producing low resistivity channels, the conductive channels produced with a different laser setup are evaluated by Raman spectroscopy.

  7. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  8. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  9. The Feasibility of 3D Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point clouds generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.
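
    For the dense-matching step, OpenCV's StereoSGBM is a widely available implementation of semi-global matching; a minimal sketch, with hypothetical rectified input files, follows.

```python
import cv2

# Semi-global matching (SGM) as named in the abstract; OpenCV's StereoSGBM
# is a close implementation of Hirschmuller's algorithm.
left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified pair
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                             P1=8 * 5 * 5, P2=32 * 5 * 5)  # usual smoothness penalties
disparity = sgbm.compute(left, right).astype("float32") / 16.0  # fixed-point to pixels
# back-project to 3D with the calibrated reprojection matrix Q:
# cloud = cv2.reprojectImageTo3D(disparity, Q)
```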

  12. The 3D Elevation Program: summary for Michigan

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation's natural and constructed features. The Michigan Statewide Authoritative Imagery and Lidar (MiSAIL) program provides statewide lidar coordination with local, State, and national groups in support of 3DEP for Michigan.

  13. Continuous measurements of PM at ground level over an industrial area of Evia (Greece) using synergy of a scanning Lidar system and in situ sensors during TAMEX campaign

    NASA Astrophysics Data System (ADS)

    Georgoussis, G.; Papayannis, A.; Remoudaki, E.; Tsaknakis, G.; Mamouri, R.; Avdikos, G.; Chontidiadis, C.; Kokkalis, P.; Tzezos, M.; Veenstra, M.

    2009-09-01

    During the TAMEX (Tamyneon Air pollution Mini EXperiment) field campaign, which took place at the industrial site of Aliveri (38°24'N, 24°01'E), Evia (Greece) between June 25 and September 25, 2008, continuous measurements of airborne particulate matter (PM) were performed by in situ sensors at ground level. Additional aerosol measurements were performed by a single-wavelength (355 nm) eye-safe scanning lidar, operating in the Range-Height Indicator (RHI) mode between July 22 and 23, 2008. The industrial site of the city of Aliveri is located south-east of the city area, at a distance of about 2.5 km. The in situ aerosol sampling site was located in the Lykeio area at 62 m above sea level (ASL), at a distance of 2.8 km from the Public Power Corporation complex (DEI Corporation) and 3.3 km from a large cement industrial complex owned by the Hercules/Lafarge SA Group of Companies (HLGC) and located in the Milaki area. According to the European Environment Agency (EEA) report for the year 2004, the former industry emits about 302 tons per year of PM10, 967,000 tons of CO2, 16,700 tons of SOx and 1,410 tons of NOx, while the second industrial complex (HLGC) emits about 179 tons per year of PM10, 1,890 tons of CO, 1,430,000 tons of CO2, 3,510 tons of NOx, 15.4 kg of cadmium and its compounds, 64.2 kg of mercury and its compounds, and 2.2 tons of benzene. The measuring site was equipped with a full meteorological station (Davis Inc., USA) and three aerosol samplers: two DustTrak optical sensors from TSI Inc. (USA) and one Skypost PM sequential atmospheric particulate matter sampler. The DustTrak sensors monitored the PM10, PM2.5 and PM1.0 concentration levels, with time resolution ranging from 1 to 3 minutes, while the Tecora sampler performed continuous PM monitoring by sampling on a 47-mm-diameter filter membrane. Analysis of the PM sensor data showed that, systematically, large quantities of PM2.5 particles were detected during nighttime (e.g. exceeding 50 µg/m3). During daytime
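
    The nighttime-exceedance finding above is simple to reproduce on any minute-resolution PM record. The sketch below is illustrative only: the synthetic PM2.5 series, the night-hour definition, and the 50 µg/m3 threshold are assumptions standing in for the campaign data.

        import numpy as np

        # Synthetic one-day PM2.5 record (ug/m3) at 1-minute resolution,
        # with a hypothetical nighttime peak around 03:00.
        minutes = np.arange(24 * 60)
        pm25 = 30.0 + 25.0 * np.exp(-((minutes - 3 * 60) / 120.0) ** 2)

        is_night = (minutes < 6 * 60) | (minutes >= 21 * 60)  # assumed night hours
        exceeds = pm25 > 50.0                                 # threshold cited above
        frac = (exceeds & is_night).sum() / is_night.sum()
        print(f"Nighttime samples above 50 ug/m3: {100 * frac:.1f}%")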

  14. LASTRAC.3d: Transition Prediction in 3D Boundary Layers

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2004-01-01

    Langley Stability and Transition Analysis Code (LASTRAC) is a general-purpose, physics-based transition prediction code released by NASA for laminar flow control studies and transition research. This paper describes the LASTRAC extension to general three-dimensional (3D) boundary layers such as finite swept wings, cones, or bodies at an angle of attack. The stability problem is formulated by using a body-fitted nonorthogonal curvilinear coordinate system constructed on the body surface. The nonorthogonal coordinate system offers a variety of marching paths and spanwise waveforms. In the extreme case of an infinite swept wing boundary layer, marching with a nonorthogonal coordinate system produces solutions identical to those obtained with an orthogonal coordinate system using the earlier release of LASTRAC. Several methods to formulate the 3D parabolized stability equations (PSE) are discussed. A surface-marching procedure akin to that for 3D boundary layer equations may be used to solve the 3D parabolized disturbance equations. On the other hand, the local line-marching PSE method, formulated as a straightforward extension of its 2D counterpart and capable of handling the spanwise mean flow and disturbance variation, offers an alternative. An N-factor analysis based on linear stability theory (LST) or on the parabolized stability equations, carried out along the streamline direction with a fixed wavelength and a downstream-varying spanwise direction, constitutes an efficient engineering approach to studying instability wave evolution in a 3D boundary layer. The surface-marching PSE method enables a consistent treatment of the disturbance evolution along both the streamwise and spanwise directions but requires more stringent initial conditions. Both PSE methods and the traditional LST approach are implemented in the LASTRAC.3d code. Several test cases for tapered or finite swept wings and cones at an angle of attack are discussed.
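
    For reference, the e^N method that underlies an N-factor analysis integrates the local spatial growth rate along the marching path. The relation below is the standard textbook form, not an equation quoted from the paper:

        % e^N transition correlation: the N-factor accumulates the local
        % spatial growth rate -alpha_i downstream of the neutral point x_0.
        \[
          N(x) = \int_{x_0}^{x} -\alpha_i(\xi)\,\mathrm{d}\xi,
          \qquad
          \frac{A(x)}{A(x_0)} = e^{N(x)}
        \]

    Here \alpha_i is the imaginary part of the local streamwise wavenumber (negative where the disturbance is amplified), x_0 is the neutral point where amplification begins, and A is the disturbance amplitude; transition is typically correlated with N reaching roughly 9 to 11.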

  15. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing that makes it possible to obtain a solid object from a 3D model created with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it simple to realize very complex shapes that would be quite difficult to produce with dedicated conventional facilities. Because the object is built by superposing one layer on the others, 3D printing does not need any particular workflow: it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope that is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture, and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions, respectively), it has been necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed afterwards. A further issue is the resolution of the printed material, which is expressed in terms of layers
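
    The part-splitting constraint mentioned above reduces to simple arithmetic against the build volume. A minimal sketch follows, with hypothetical part dimensions standing in for the oversized telescope components (and ignoring any overlap margin needed at reassembly joints):

        import math

        BUILD = (10.0, 10.0, 12.0)   # printable volume in inches (x, y, z), as cited
        part = (24.0, 8.0, 30.0)     # hypothetical oversized part (inches)

        # Minimum number of segments per axis, and total pieces to print
        segments = [math.ceil(p / b) for p, b in zip(part, BUILD)]
        print(f"Segments per axis (x, y, z): {segments}; "
              f"total pieces: {math.prod(segments)}")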

  16. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond