Science.gov

Sample records for 3d lidar sensor

  1. Lidar on small UAV for 3D mapping

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. Michael; Larsson, Håkan

    2014-10-01

    Small UAVs (Unmanned Aerial Vehicles) are currently in an explosive technical development phase. The performance of UAV-system components such as inertial navigation sensors, propulsion, control processors and algorithms is gradually improving. Simultaneously, lidar technologies are continuously developing in terms of reliability and accuracy, as well as speed of data collection, storage and processing. The lidar development towards miniature systems with high data rates has, together with recent UAV development, great potential for new three-dimensional (3D) mapping capabilities. Compared to lidar mapping from manned full-size aircraft, a small unmanned aircraft can be cost-efficient over small areas and more flexible to deploy. An advantage of high-resolution lidar over 3D mapping from passive (multi-angle) photogrammetry is its ability to penetrate vegetation and detect partially obscured targets. Another advantage is the ability to obtain 3D data over the whole survey area, without the limited performance of passive photogrammetry in low-contrast areas. The purpose of our work is to demonstrate 3D lidar mapping capability from a small multirotor UAV. We present the first experimental results and the mechanical and electrical integration of the Velodyne HDL-32E lidar on a six-rotor aircraft with a total weight of 7 kg. The rotating lidar is mounted at an angle of 20 degrees from the horizontal plane, giving a vertical field-of-view of 10-50 degrees below the horizon in the aircraft's forward direction. For absolute positioning of the 3D data, accurate positioning and orientation of the lidar sensor is of high importance. We evaluate the lidar data position accuracy both based on inertial navigation system (INS) data alone and on INS data combined with lidar data. The INS sensors consist of accelerometers, gyroscopes, GPS, magnetometers, and a pressure sensor for altimetry. The lidar range resolution and accuracy are documented, as well as the […]
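
    The georeferencing step described above (rotating each sensor-frame lidar return by the INS attitude and adding the INS position) can be sketched as follows. This is an illustrative sketch rather than the authors' implementation; the function name, the ZYX yaw-pitch-roll rotation convention and the spherical-coordinate return format are assumptions.

```python
import math

def lidar_point_to_world(r, az, el, pose):
    """Map one lidar return (range r, azimuth az, elevation el, in radians,
    sensor frame) to world coordinates given an INS pose.
    pose = (x, y, z, roll, pitch, yaw): sensor position and orientation."""
    # Sensor-frame Cartesian coordinates of the return
    xs = r * math.cos(el) * math.cos(az)
    ys = r * math.cos(el) * math.sin(az)
    zs = r * math.sin(el)
    x0, y0, z0, roll, pitch, yaw = pose
    # ZYX (yaw-pitch-roll) rotation matrix from the INS attitude
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    # Rotate into the world frame and translate by the INS position
    xw = x0 + R[0][0] * xs + R[0][1] * ys + R[0][2] * zs
    yw = y0 + R[1][0] * xs + R[1][1] * ys + R[1][2] * zs
    zw = z0 + R[2][0] * xs + R[2][1] * ys + R[2][2] * zs
    return xw, yw, zw
```

    Any error in the INS attitude propagates multiplicatively with range here, which is why the accuracy of the pose estimate dominates the quality of the final point cloud.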

  2. Accuracy evaluation of 3D lidar data from small UAV

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav

    2015-10-01

    A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental requirement for detection and recognition of objects in a single-flight dataset, as well as for change detection using two or more data collections over the same scene. The work presented here has two purposes: first, to relate the point cloud accuracy to data processing parameters, and second, to examine the influence of the UAV platform parameters on accuracy. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and lightweight MEMS-based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
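
    The "local surface smoothness on planar surfaces" metric can be approximated as the RMS residual of a least-squares plane fit to a patch of points. A minimal pure-Python sketch (the function name and the z = a*x + b*y + c plane parameterization are assumptions; the paper's exact metric may differ):

```python
def plane_rms_smoothness(points):
    """RMS residual to the least-squares plane z = a*x + b*y + c fitted to
    points [(x, y, z), ...]; smaller means a smoother, more planar patch."""
    # Normal equations A^T A m = A^T z for m = (a, b, c),
    # kept as an augmented 3x4 matrix [A^T A | A^T z].
    S = [[0.0] * 4 for _ in range(3)]
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                S[i][j] += row[i] * row[j]
            S[i][3] += row[i] * z
    # Gauss-Jordan elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(S[r][col]))
        S[col], S[piv] = S[piv], S[col]
        for r in range(3):
            if r != col:
                f = S[r][col] / S[col][col]
                for k in range(col, 4):
                    S[r][k] -= f * S[col][k]
    a, b, c = (S[i][3] / S[i][i] for i in range(3))
    residuals = [z - (a * x + b * y + c) for x, y, z in points]
    return (sum(e * e for e in residuals) / len(residuals)) ** 0.5
```

    For near-vertical patches a total-least-squares (orthogonal distance) fit would be preferable; the simple z-on-(x, y) form above is adequate only for roughly horizontal surfaces.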

  3. Compact 3D flash lidar video cameras and applications

    NASA Astrophysics Data System (ADS)

    Stettner, Roger

    2010-04-01

    The theory and operation of Advanced Scientific Concepts, Inc.'s (ASC) latest compact 3D Flash LIDAR Video Cameras (3D FLVCs) are discussed, along with a growing number of technical problems and their solutions. The applications range from space shuttle docking, planetary entry, descent and landing, and surveillance to autonomous and manned ground vehicle navigation and 3D imaging through particle obscurants.

  4. Highway 3D model from image and lidar data

    NASA Astrophysics Data System (ADS)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3D model construction based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions, as well as significant roadside objects (such as signs and building fronts), for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects for the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  5. Visualization using 3D voxelization of full lidar waveforms

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Ramnath, Vinod; Feygels, Victor

    2014-11-01

    Airborne bathymetric lidar (Light Detection and Ranging) systems measure photoelectrons on the optical path (range and angle) at the photocathode of a returned laser pulse at high rates, such as every nanosecond. The collected measurement of a single pulse in a time series is called a waveform. Based on the calibration of the lidar system, the return signal is converted into units of received power. This converted value from the lidar waveform data is used to compute an estimate of the reflectance from the returned backscatter, which contains environmental information from along the optical path. This concept led us to develop a novel tool to visualize lidar data in terms of the returned backscatter, and to use this as a data analysis and editing tool. The full lidar waveforms along the optical path, from laser points collected in the region of interest (ROI), are voxelized into a 3D image cube. This allows lidar measurements to be analyzed in three orthogonal directions simultaneously. The laser pulse return (reflection) from the seafloor is visible in the waveform as a pronounced "bump" above the volume backscatter. Floating or submerged objects in the water may also be visible. Similarly, forest canopies and tree branches can be identified in the 3D voxelization. This paper discusses the possibility of using this unique three-orthogonal volume visualizing tool to extract environmental information for carrying out rapid environmental assessments over forests and water.
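
    The voxelization idea above, accumulating every waveform sample (not just discrete returns) into a 3D grid along each optical path, can be sketched roughly as follows. The function name and the waveform record layout are hypothetical; the actual tool also converts raw counts to received power via the system calibration before binning.

```python
def voxelize_waveforms(waveforms, voxel_size):
    """Accumulate full-waveform samples into a sparse 3D voxel grid.

    waveforms: list of (origin, direction, bin_length, samples), where
      origin     = (x, y, z) laser exit point,
      direction  = unit vector along the optical path,
      bin_length = distance between consecutive samples along the path,
      samples    = converted received power per time bin.
    Returns {(i, j, k): accumulated power}."""
    grid = {}
    for (ox, oy, oz), (dx, dy, dz), step, samples in waveforms:
        for n, power in enumerate(samples):
            r = n * step  # distance travelled along the optical path
            x, y, z = ox + r * dx, oy + r * dy, oz + r * dz
            key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
            grid[key] = grid.get(key, 0.0) + power
    return grid
```

    Slicing the resulting cube along its three axes gives exactly the three orthogonal views described above; a seafloor return shows up as voxels with power well above the surrounding volume backscatter.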

  6. Fabrication of 3D Silicon Sensors

    SciTech Connect

    Kok, A.; Hansen, T.E.; Hansen, T.A.; Lietaer, N.; Summanwar, A.; Kenney, C.; Hasi, J.; Da Via, C.; Parker, S.I.; /Hawaii U.

    2012-06-06

    Silicon sensors with a three-dimensional (3D) architecture, in which the n and p electrodes penetrate through the entire substrate, have many advantages over planar silicon sensors, including radiation hardness, fast time response, active edge and dual readout capabilities. The fabrication of 3D sensors is, however, rather complex. In recent years, there have been worldwide activities on 3D fabrication. SINTEF, in collaboration with the Stanford Nanofabrication Facility, has successfully fabricated the original (single sided double column type) 3D detectors in two prototype runs, and the third run is now ongoing. This paper reports the status of this fabrication work and the resulting yield. The work of other groups, such as the development of double sided 3D detectors, is also briefly reported.

  7. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board.

    PubMed

    Park, Yoonsu; Yun, Seokmin; Won, Chee Sun; Cho, Kyungeun; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    Calibration between a color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low-resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. Since the lengths of the adjacent sides are known, we can estimate each vertex of the board as the meeting point of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results. PMID:24643005
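
    The vertex estimation step, fitting a line to the scan points on each side of the board and intersecting adjacent lines, can be sketched in the board plane as follows. This is a simplified 2D illustration with assumed function names, not the authors' code.

```python
import math

def fit_line(pts):
    """Total-least-squares line through 2D points: (centroid, unit direction).
    The direction is the dominant axis of the point scatter."""
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - cx) ** 2 for p in pts)
    syy = sum((p[1] - cy) ** 2 for p in pts)
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in pts)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)  # orientation of dominant axis
    return (cx, cy), (math.cos(theta), math.sin(theta))

def intersect(l1, l2):
    """Intersection of two non-parallel (point, direction) lines:
    solve p1 + t*d1 = p2 + s*d2 for t by Cramer's rule."""
    (x1, y1), (dx1, dy1) = l1
    (x2, y2), (dx2, dy2) = l2
    det = dx2 * dy1 - dx1 * dy2
    t = ((x2 - x1) * (-dy2) + dx2 * (y2 - y1)) / det
    return (x1 + t * dx1, y1 + t * dy1)
```

    Fitting to many noisy laser points before intersecting is what makes the estimated vertex far more stable than any single scanned edge point.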

  9. Georeferenced LiDAR 3D Vine Plantation Map Generation

    PubMed Central

    Llorens, Jordi; Gil, Emilio; Llop, Jordi; Queraltó, Meritxell

    2011-01-01

    The use of electronic devices for canopy characterization has recently been widely discussed. Among such devices, LiDAR sensors appear to be the most accurate and precise. The information obtained with a LiDAR sensor while driving a tractor along a crop row can be transformed into canopy density maps by evaluating the frequency of LiDAR returns. This paper describes a methodology to obtain a georeferenced canopy map by combining the LiDAR information with that generated by a GPS receiver installed on top of the tractor. The velocity of the LiDAR measurements and the UTM coordinates of each measured point on the canopy were obtained by applying the proposed transformation process. The process allows the generated canopy density map to be overlaid on an image of the measured area using Google Earth®, providing accurate information about the canopy distribution and/or the location of damage along the rows. This methodology was applied and tested on different vine varieties and crop stages in two important vine production areas in Spain. The results indicate that the georeferenced information obtained with LiDAR sensors is an interesting tool with the potential to improve crop management processes. PMID:22163952
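
    The core transformation, interpolating the tractor's GPS position at each LiDAR timestamp and offsetting each return perpendicular to the direction of travel, might look like the sketch below. All names are hypothetical, the heading is taken as a mathematical angle in the east-north plane (not a compass bearing), and a real pipeline must also handle the sensor lever-arm offset.

```python
import math

def utm_of_return(t, rng, track):
    """Hypothetical georeferencing of one side-looking LiDAR return.
    t:     timestamp of the return
    rng:   measured range to the canopy (m)
    track: time-sorted (t, easting, northing, heading_rad) GPS fixes
    Returns the UTM (easting, northing) of the measured canopy point."""
    for (t0, e0, n0, h0), (t1, e1, n1, h1) in zip(track, track[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)  # linear interpolation weight
            e = e0 + w * (e1 - e0)
            n = n0 + w * (n1 - n0)
            h = h0 + w * (h1 - h0)
            # Offset 90 degrees to the right of the direction of travel
            return (e + rng * math.cos(h - math.pi / 2),
                    n + rng * math.sin(h - math.pi / 2))
    raise ValueError("timestamp outside GPS track")
```

    Linear interpolation of the heading is acceptable for the small headland-free passes along a row; turns would need proper angle unwrapping.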

  11. Advanced 3D imaging lidar concepts for long range sensing

    NASA Astrophysics Data System (ADS)

    Gordon, K. J.; Hiskett, P. A.; Lamb, R. A.

    2014-06-01

    Recent developments in 3D imaging lidar are presented. Long-range 3D imaging using photon counting is now a possibility, offering a low-cost approach to integrated remote sensing with step-changing advantages in size, weight and power compared to conventional analogue active imaging technology. We report results using a Geiger-mode array for time-of-flight, single-photon-counting lidar for depth profiling and determination of the shape and size of tree canopies and distributed surface reflections at a range of 9 km, with 4 μJ pulses and a frame rate of 100 kHz, using a low-cost fibre laser operating at a wavelength of λ = 1.5 μm. The range resolution is less than 4 cm, providing very high depth resolution for target identification. This specification opens up several additional functionalities for advanced lidar, for example: absolute rangefinding and depth profiling for long-range identification, optical communications, turbulence sensing and time-of-flight spectroscopy. Future concepts for 3D time-of-flight polarimetric and multispectral imaging lidar, with optical communications in a single integrated system, are also proposed.
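
    The figures quoted above follow directly from time-of-flight arithmetic: range is r = c·t/2 for a round-trip time t, so a 4 cm range resolution corresponds to roughly 267 ps of timing resolution, and a 9 km target returns photons after about 60 μs. A small sketch of this arithmetic:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_tof(t):
    """One-way range (m) from a round-trip time-of-flight t (s): r = c*t/2."""
    return C * t / 2.0

def depth_resolution(dt):
    """Depth bin (m) corresponding to a timing resolution dt (s)."""
    return C * dt / 2.0
```

    The factor of two appears because the pulse travels out and back; it is what links picosecond-scale timing electronics to centimetre-scale depth bins.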

  12. Estimating the relationship between urban 3D morphology and land surface temperature using airborne LiDAR and Landsat-8 Thermal Infrared Sensor data

    NASA Astrophysics Data System (ADS)

    Lee, J. H.

    2015-12-01

    Urban forests are known for mitigating the urban heat island effect and heat-related health issues by reducing air and surface temperature. Beyond the amount of canopy area, however, little is known about which spatial patterns and structures of urban forests best contribute to reducing temperatures and mitigating urban heat effects. Previous studies attempted to find the relationship between land surface temperature and various indicators of vegetation abundance using remotely sensed data, but the majority of those studies relied on two-dimensional, area-based metrics such as tree canopy cover, impervious surface area, and the Normalized Difference Vegetation Index. This study investigates the relationship between the three-dimensional spatial structure of urban forests and urban surface temperature, focusing on vertical variance. We use a Landsat-8 Thermal Infrared Sensor image (acquired on July 24, 2014) to estimate the land surface temperature of the City of Sacramento, CA. We extract the height and volume of urban features (both vegetation and non-vegetation) using airborne LiDAR (Light Detection and Ranging) and high-spatial-resolution aerial imagery. Using regression analysis, we apply an empirical approach to find the relationship between the land surface temperature and different sets of variables that describe the spatial patterns and structures of various urban features, including trees. Our analysis demonstrates that incorporating vertical variance parameters improves the accuracy of the model. The results of the study suggest that urban tree planting is an effective and viable solution for mitigating urban heat, by increasing the variance of the urban surface as well as through the evaporative cooling effect.

  13. Drogue tracking using 3D flash lidar for autonomous aerial refueling

    NASA Astrophysics Data System (ADS)

    Chen, Chao-I.; Stettner, Roger

    2011-06-01

    Autonomous aerial refueling (AAR) is an important capability for an unmanned aerial vehicle (UAV) to increase its flying range and endurance without increasing its size. This paper presents a novel tracking method that utilizes both 2D intensity and 3D point-cloud data acquired with a 3D Flash LIDAR sensor to establish relative position and orientation between the receiver vehicle and drogue during an aerial refueling process. Unlike classic, vision-based sensors, a 3D Flash LIDAR sensor can provide 3D point-cloud data in real time without motion blur, in the day or night, and is capable of imaging through fog and clouds. The proposed method segments out the drogue through 2D analysis and estimates the center of the drogue from 3D point-cloud data for flight trajectory determination. A level-set front propagation routine is first employed to identify the target of interest and establish its silhouette information. Sufficient domain knowledge, such as the size of the drogue and the expected operable distance, is integrated into our approach to quickly eliminate unlikely target candidates. A statistical analysis along with a random sample consensus (RANSAC) is performed on the target to reduce noise and estimate the center of the drogue after all 3D points on the drogue are identified. The estimated center and drogue silhouette serve as the seed points to efficiently locate the target in the next frame.
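
    The RANSAC-based center estimation can be illustrated with a toy version: hypothesize a center from a random minimal sample, score it by how many points lie near the expected drogue radius, and refine the center from the best inlier set. The function name, sampling scheme and scoring rule here are simplified assumptions, not the authors' exact algorithm.

```python
import random

def ransac_center(points, radius, tol, iters=200, seed=0):
    """Toy RANSAC-style robust estimate of a ring-like target's 3D center.
    Hypothesis: centroid of a random 3-point sample; inliers are points whose
    distance to the hypothesized center is within tol of the expected radius."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        sample = rng.sample(points, 3)
        cx = sum(p[0] for p in sample) / 3
        cy = sum(p[1] for p in sample) / 3
        cz = sum(p[2] for p in sample) / 3
        inliers = [p for p in points
                   if abs(((p[0] - cx) ** 2 + (p[1] - cy) ** 2
                           + (p[2] - cz) ** 2) ** 0.5 - radius) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    if not best_inliers:  # degenerate fallback
        best_inliers = points
    n = len(best_inliers)
    # Refined estimate: centroid of the best consensus set
    return (sum(p[0] for p in best_inliers) / n,
            sum(p[1] for p in best_inliers) / n,
            sum(p[2] for p in best_inliers) / n)
```

    The known drogue size enters twice: as the expected radius used for scoring, and earlier in the pipeline to discard implausible candidates.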

  14. Pedestrian and car detection and classification for unmanned ground vehicle using 3D lidar and monocular camera

    NASA Astrophysics Data System (ADS)

    Cho, Kuk; Baeg, Seung-Ho; Lee, Kimin; Lee, Hae Seok; Park, SangDeok

    2011-05-01

    This paper describes an object detection and classification method for an Unmanned Ground Vehicle (UGV) using a range sensor and an image sensor: a 3D Light Detection And Ranging (LIDAR) sensor and a monocular camera, respectively. For safe driving of the UGV, pedestrians and cars must be detected along the vehicle's route. Object detection and classification techniques based on a camera alone have an inherent problem: the detection algorithm must extract features and compare them against the full input image, which contains both object and environment information, making the classification decision difficult. Ideally, the image region passed to the classifier should contain only one reliable object. In this paper, we introduce a newly developed 3D LIDAR sensor and apply a fusion method to 3D LIDAR data and camera data. The 3D LIDAR sensor, named KIDAR-B25, was developed by the LG Innotek Consortium in Korea. The 3D LIDAR sensor detects objects, determines each object's Region of Interest (ROI) based on 3D information and projects it into the camera image for classification. In the 3D LIDAR domain, we recognize breakpoints using a Kalman filter and then form clusters using a line segment method to determine an object's ROI. In the image domain, we extract the object's feature data from the ROI using a Haar-like feature method. Finally, the object is classified as a pedestrian or car using a database trained with an AdaBoost algorithm. To verify our system, we evaluate its performance mounted on a ground vehicle through field tests in an urban area.

  15. Automated Reconstruction of Walls from Airborne LIDAR Data for Complete 3D Building Modelling

    NASA Astrophysics Data System (ADS)

    He, Y.; Zhang, C.; Awrangjeb, M.; Fraser, C. S.

    2012-07-01

    Automated 3D building model generation continues to attract research interest in photogrammetry and computer vision. Airborne Light Detection and Ranging (LIDAR) data, with increasing point density and accuracy, has been recognized as a valuable source for automated 3D building reconstruction. While considerable achievements have been made in roof extraction, limited research has been carried out on the modelling and reconstruction of walls, which constitute important components of a full building model. The low point density and irregular point distribution of LIDAR observations on vertical walls render this task complex. This paper develops a novel approach for wall reconstruction from airborne LIDAR data. The developed method commences with point cloud segmentation using a region growing approach. Seed points for planar segments are selected through principal component analysis, and points in the neighbourhood are collected and examined to form planar segments. Afterwards, segment-based classification is performed to identify roofs, walls and planar ground surfaces. For walls with sparse LIDAR observations, a search is conducted in the neighbourhood of each individual roof segment to collect wall points, and the walls are then reconstructed using geometrical and topological constraints. Finally, walls which were not illuminated by the LIDAR sensor are determined via both reconstructed roof data and neighbouring walls. This leads to the generation of topologically consistent, geometrically accurate and complete 3D building models. Experiments have been conducted on two test sites in the Netherlands and Australia to evaluate the performance of the proposed method. Results show that planar segments can be reliably extracted in the two reported test sites, which have different point densities, and that building walls can be correctly reconstructed if they are illuminated by the LIDAR sensor.
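
    The region growing idea, starting from a seed and absorbing neighbouring points that fit the same surface, can be illustrated with a toy 2D analogue on a height grid. The real method grows planar segments in the 3D point cloud from PCA-selected seeds; the names and the simple height threshold here are assumptions.

```python
from collections import deque

def region_grow(height, seed, tol):
    """Toy region growing on a 2D height grid: starting from seed (row, col),
    collect 4-connected cells whose height differs from the seed cell's
    height by less than tol."""
    rows, cols = len(height), len(height[0])
    base = height[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(height[nr][nc] - base) < tol):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region
```

    In the 3D case the membership test compares each candidate point's distance to the segment's fitted plane rather than a raw height difference, but the frontier-expansion loop is the same.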

  16. Future trends of 3D silicon sensors

    NASA Astrophysics Data System (ADS)

    Da Vià, Cinzia; Boscardin, Maurizio; Dalla Betta, Gian-Franco; Haughton, Iain; Grenier, Philippe; Grinstein, Sebastian; Hansen, Thor-Erik; Hasi, Jasmine; Kenney, Christopher; Kok, Angela; Parker, Sherwood; Pellegrini, Giulio; Povoli, Marco; Tzhnevyi, Vladislav; Watts, Stephen J.

    2013-12-01

    Vertex detectors for the next LHC experiments upgrades will need to have low mass while at the same time be radiation hard and with sufficient granularity to fulfil the physics challenges of the next decade. Based on the gained experience with 3D silicon sensors for the ATLAS IBL project and the on-going developments on light materials, interconnectivity and cooling, this paper will discuss possible solutions to these requirements.

  17. Automatic registration of optical imagery with 3D lidar data using local combined mutual information

    NASA Astrophysics Data System (ADS)

    Parmehr, E. G.; Fraser, C. S.; Zhang, C.; Leach, J.

    2013-10-01

    Automatic registration of multi-sensor data is a basic step in data fusion for photogrammetric and remote sensing applications. The effectiveness of intensity-based methods such as Mutual Information (MI) for automated registration of multi-sensor images has been previously reported for medical and remote sensing applications. In this paper, a new multivariable MI approach is presented that exploits the complementary information of inherently registered LiDAR DSM and intensity data to improve the robustness of registering optical imagery to a LiDAR point cloud. LiDAR DSM and intensity information are utilised in measuring the similarity of LiDAR and optical imagery via Combined MI. An effective histogramming technique is adopted to facilitate estimation of the 3D probability density function (pdf). In addition, a local similarity measure is introduced to decrease the complexity of optimisation at higher dimensions and the computational cost. The reliability of registration is thereby improved due to the use of redundant observations of similarity. The performance of the proposed method for registration of satellite and aerial images with LiDAR data in urban and rural areas is experimentally evaluated, and the results obtained are discussed.
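
    The core of any MI-based similarity measure is a joint histogram of the two inputs. A minimal sketch for two intensity images (names and binning are assumptions; the paper's Combined MI extends this to a trivariate histogram over the optical image, LiDAR DSM and LiDAR intensity):

```python
import math

def mutual_information(img_a, img_b, bins=8, max_val=256):
    """Mutual information (bits) between two equally sized images with
    integer intensities in [0, max_val), estimated from a joint histogram."""
    joint, n = {}, 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            key = (a * bins // max_val, b * bins // max_val)
            joint[key] = joint.get(key, 0) + 1
            n += 1
    # Marginal histograms from the joint histogram
    pa, pb = {}, {}
    for (i, j), c in joint.items():
        pa[i] = pa.get(i, 0) + c
        pb[j] = pb.get(j, 0) + c
    # MI = sum p(i,j) * log2( p(i,j) / (p(i) p(j)) )
    mi = 0.0
    for (i, j), c in joint.items():
        pij = c / n
        mi += pij * math.log(pij / ((pa[i] / n) * (pb[j] / n)), 2)
    return mi
```

    Registration then becomes a search over transformation parameters that maximizes this score between the warped optical image and the rendered LiDAR channels.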

  18. Vertical Corner Feature Based Precise Vehicle Localization Using 3D LIDAR in Urban Area.

    PubMed

    Im, Jun-Hyuck; Im, Sung-Hyuck; Jee, Gyu-In

    2016-01-01

    Tall buildings are concentrated in urban areas. The outer walls of buildings rise vertically from the ground and are almost flat, so the vertical corners where these vertical planes meet are present everywhere in urban areas. These corners act as convenient landmarks, which can be extracted using a light detection and ranging (LIDAR) sensor. A vertical corner feature based precise vehicle localization method is proposed in this paper and implemented using a 3D LIDAR (Velodyne HDL-32E). The vehicle motion is predicted by accumulating the pose increments output by the iterative closest point (ICP) algorithm, based on the geometric relations between successive scans of the 3D LIDAR. Vertical corners are extracted using the proposed corner extraction method. The vehicle position is then corrected by matching the extracted corners against a prebuilt corner map. The experiment was carried out in the Gangnam area of Seoul, South Korea. In the experimental results, the maximum horizontal position error is about 0.46 m and the 2D Root Mean Square (RMS) horizontal error is about 0.138 m. PMID:27517936
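
    The motion prediction step, chaining relative pose increments from scan matching into a global pose, can be sketched in 2D as follows. This is an illustrative dead-reckoning sketch with assumed names, not the authors' implementation.

```python
import math

def accumulate_pose(increments, start=(0.0, 0.0, 0.0)):
    """Chain per-scan (dx, dy, dyaw) increments (e.g. from ICP) into a global
    2D pose (x, y, yaw). Each increment is expressed in the frame of the
    previous pose, so it is rotated by the current yaw before being added."""
    x, y, yaw = start
    for dx, dy, dyaw in increments:
        x += dx * math.cos(yaw) - dy * math.sin(yaw)
        y += dx * math.sin(yaw) + dy * math.cos(yaw)
        yaw += dyaw
    return x, y, yaw
```

    Because each increment carries a small error, the accumulated pose drifts over time, which is exactly why the corner-map matching step is needed to correct the position.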

  19. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing in intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each small element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents a complete 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid cell is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grid cells, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several datasets captured in different cities indicate that the proposed motion field estimation is able to run in real time and performs robustly and effectively. PMID:25207868
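
    The first stage, projecting measurements onto small-scale polar grids, can be sketched as follows. The cell sizes and function names are illustrative assumptions, and the real method operates on full 3D LiDAR sweeps rather than 2D ground-plane points.

```python
import math

def polar_grid_index(x, y, r_step=0.5, a_step=math.radians(2.0)):
    """Map a ground-plane point to its (range_bin, angle_bin) polar grid cell."""
    r = math.hypot(x, y)
    a = math.atan2(y, x) % (2 * math.pi)  # wrap angle into [0, 2*pi)
    return int(r // r_step), int(a // a_step)

def project_to_polar(points, r_step=0.5, a_step=math.radians(2.0)):
    """Group (x, y) points by polar grid cell: {(r_bin, a_bin): [points...]}."""
    grid = {}
    for x, y in points:
        grid.setdefault(polar_grid_index(x, y, r_step, a_step), []).append((x, y))
    return grid
```

    A polar layout matches the sensor's native sampling pattern: cells subtend a constant angle, so nearby regions are finely resolved while distant, sparsely sampled regions fall into larger cells.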

  1. 3D LIDAR-Camera Extrinsic Calibration Using an Arbitrary Trihedron

    PubMed Central

    Gong, Xiaojin; Lin, Ying; Liu, Jilin

    2013-01-01

    This paper presents a novel way to address the extrinsic calibration problem for a system composed of a 3D LIDAR and a camera. The relative transformation between the two sensors is calibrated via a nonlinear least squares (NLS) problem, which is formulated in terms of the geometric constraints associated with a trihedral object. Precise initial estimates for the NLS problem are obtained by dividing it into two sub-problems that are solved individually. With these precise initializations, the calibration parameters are further refined by iteratively optimizing the NLS problem. The algorithm is validated on both simulated and real data, as well as in a 3D reconstruction application. Moreover, since the trihedral target used for calibration need not be orthogonal, suitable targets are very often present in structured environments, making the calibration convenient. PMID:23377190

  2. Advances in animal ecology from 3D ecosystem mapping with LiDAR

    NASA Astrophysics Data System (ADS)

    Davies, A.; Asner, G. P.

    2015-12-01

    The advent and recent advances of Light Detection and Ranging (LiDAR) have enabled accurate measurement of 3D ecosystem structure. Although the use of LiDAR data is widespread in vegetation science, it has only recently (< 14 years) been applied to animal ecology. Despite such recent application, LiDAR has enabled new insights in the field and revealed the fundamental importance of 3D ecosystem structure for animals. We reviewed the studies to date that have used LiDAR in animal ecology, synthesising the insights gained. Structural heterogeneity is most conducive to increased animal richness and abundance, and increased complexity of vertical vegetation structure is more positively influential than traditionally measured canopy cover, which produces mixed results. However, different taxonomic groups interact with a variety of 3D canopy traits, and some groups interact with 3D topography. LiDAR technology can be applied to animal ecology studies in a wide variety of environments to answer an impressive array of questions. Drawing on case studies from two vastly different groups, termites and lions, we further demonstrate the applicability of LiDAR and highlight new understanding, ranging from habitat preference to predator-prey interactions, that would not have been possible in studies restricted to field-based methods. We conclude with a discussion of how future studies will benefit from using LiDAR to consider 3D habitat effects in a wider variety of ecosystems and with more taxa, to develop a better understanding of animal dynamics.

  3. 3D Multi-Spectrum Sensor System with Face Recognition

    PubMed Central

    Kim, Joongrock; Yu, Sunjin; Kim, Ig-Jae; Lee, Sangyoun

    2013-01-01

    This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system, which we will refer to as a 3D multi-spectrum sensor system, comprising three types of sensors, visible, thermal-IR and time-of-flight (ToF), is proposed. Since the proposed system integrates information from each sensor into one calibrated framework, the optimal sensor combination for an application can be easily selected, taking into account all combinations of sensor information. To demonstrate the effectiveness of the proposed system, a face recognition system under light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new, effectively fused features for a face recognition system, is obtained. PMID:24072025

  4. 3D multi-spectrum sensor system with face recognition.

    PubMed

    Kim, Joongrock; Yu, Sunjin; Kim, Ig-Jae; Lee, Sangyoun

    2013-01-01

    This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system, which we will refer to as a 3D multi-spectrum sensor system, comprising three types of sensors, visible, thermal-IR and time-of-flight (ToF), is proposed. Since the proposed system integrates information from each sensor into one calibrated framework, the optimal sensor combination for an application can be easily selected, taking into account all combinations of sensor information. To demonstrate the effectiveness of the proposed system, a face recognition system under light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new, effectively fused features for a face recognition system, is obtained. PMID:24072025

  5. 3D reconstruction of a building from LIDAR data with first-and-last echo information

    NASA Astrophysics Data System (ADS)

    Zhang, Guoning; Zhang, Jixian; Yu, Jie; Yang, Haiquan; Tan, Ming

    2007-11-01

    As aerial LIDAR technology develops and LIDAR data find widespread application in city modeling, urban planning and related fields, automatically recognizing and reconstructing buildings from LIDAR datasets has become an important research topic. Using the first-and-last-echo information of each laser point, this paper presents a scheme for 3D reconstruction of simple buildings, which mainly includes the following steps: recognition of non-boundary and boundary building points and generation of each building point cluster; localization of the boundary of each building; detection of the planes contained in each cluster; and reconstruction of the building in 3D form. Experiments show that, for LIDAR data with first-and-last-echo information, the scheme can effectively and efficiently reconstruct simple buildings in 3D, such as flat-roofed and gabled buildings.

  6. Voxel-Based 3-D Tree Modeling from Lidar Images for Extracting Tree Structural Information

    NASA Astrophysics Data System (ADS)

    Hosoi, F.

    2014-12-01

    Recently, lidar (light detection and ranging) has been used to extract tree structural information. Portable scanning lidar systems can capture the complex shape of individual trees as a 3-D point-cloud image. 3-D tree models reproduced from the lidar-derived 3-D image can be used to estimate tree structural parameters. We have proposed voxel-based 3-D modeling for extracting tree structural parameters. One of the tree parameters derived from the voxel modeling is leaf area density (LAD). We refer to the method as the voxel-based canopy profiling (VCP) method. In this method, several measurement points surrounding the canopy and optimally inclined laser beams are adopted for full laser-beam illumination of the whole canopy, including its interior. From the obtained lidar image, the 3-D information is reproduced as voxel attributes in a 3-D voxel array. Based on the voxel attributes, the contact frequency of laser beams on leaves is computed and the LAD in each horizontal layer is obtained. This method offered accurate LAD estimation for individual trees and woody canopy trees. For more accurate LAD estimation, the voxel model was constructed by combining airborne and portable ground-based lidar data. The profiles obtained by the two types of lidar complemented each other, thus eliminating blind regions and yielding more accurate LAD profiles than could be obtained by using each type of lidar alone. Based on the estimation results, we proposed an index named the laser beam coverage index, Ω, which relates to the lidar's laser-beam settings and a laser-beam attenuation factor. It was shown that this index can be used for adjusting the measurement set-up of lidar systems and also for explaining the LAD estimation error of different types of lidar systems. Moreover, we proposed a method to estimate woody material volume as another application of the voxel tree modeling. In this method, a voxel solid model of a target tree was produced from the lidar image, which is composed of
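As a rough illustration of the voxel step in the VCP method (not the authors' implementation), the sketch below bins a point cloud into a 3-D voxel array and reports, per horizontal layer, the fraction of occupied voxels, a stand-in for the contact frequency from which LAD is derived. The beam-attenuation and leaf-angle corrections of the actual method are omitted, and all names and data are illustrative:

```python
import numpy as np

def voxel_layer_contact(points, voxel=0.5, zmax=10.0):
    """points: (N, 3) array; return per-layer fraction of occupied voxels."""
    idx = np.floor(points / voxel).astype(int)
    occupied = set(map(tuple, idx))              # voxels hit by >= 1 return
    n_layers = int(np.ceil(zmax / voxel))
    counts = np.zeros(n_layers)
    for _, _, k in occupied:
        if 0 <= k < n_layers:
            counts[k] += 1
    # Normalize by the horizontal voxel footprint of the crown (an
    # illustrative stand-in for the beam-based normalization in VCP).
    footprint = (np.ptp(idx[:, 0]) + 1) * (np.ptp(idx[:, 1]) + 1)
    return counts / max(footprint, 1)

rng = np.random.default_rng(1)
crown = rng.uniform([0, 0, 2], [4, 4, 8], size=(2000, 3))  # synthetic "canopy"
profile = voxel_layer_contact(crown)                       # one value per layer
```

Layers below the synthetic crown base come out empty, while foliage layers show a nonzero contact fraction, mirroring the shape of a foliage profile.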

  7. 3D city models completion by fusing lidar and image data

    NASA Astrophysics Data System (ADS)

    Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Stentoumis, C.

    2015-05-01

    A fundamental step in the generation of visually detailed 3D city models is the acquisition of high-fidelity 3D data. Typical approaches employ DSM representations usually derived from Lidar (Light Detection and Ranging) airborne scanning or image-based procedures. In this contribution, we focus on the fusion of data from both these methods in order to enhance or complete them. Particularly, we combine an existing Lidar and orthomosaic dataset (used as reference) with a new aerial image acquisition (including both vertical and oblique imagery) of higher resolution, which was carried out in the area of Kallithea, in Athens, Greece. In a preliminary step, a digital orthophoto and a DSM are generated from the aerial images in an arbitrary reference system, by employing a Structure from Motion and dense stereo matching framework. The image-to-Lidar registration is performed by 2D feature (SIFT and SURF) extraction and matching between the two orthophotos. The established point correspondences are assigned 3D coordinates through interpolation on the reference Lidar surface, then backprojected onto the aerial images, and finally matched with 2D image features located in the vicinity of the backprojected 3D points. Consequently, these points serve as Ground Control Points with appropriate weights for final orientation and calibration of the images through a bundle adjustment solution. By these means, the aerial imagery, optimally aligned to the reference dataset, can be used for the generation of an enhanced and more accurately textured 3D city model.
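One step in the registration pipeline described above, assigning 3D coordinates to matched 2D features by interpolation on the reference Lidar surface, can be sketched as a bilinear DSM lookup. The grid geometry, values, and names below are illustrative, not from the paper:

```python
import numpy as np

def dsm_height(dsm, x, y, cell=1.0):
    """Bilinearly interpolate DSM height at map coords (x, y), origin (0, 0)."""
    c, r = x / cell, y / cell
    c0, r0 = int(np.floor(c)), int(np.floor(r))
    dc, dr = c - c0, r - r0
    z00, z01 = dsm[r0, c0], dsm[r0, c0 + 1]
    z10, z11 = dsm[r0 + 1, c0], dsm[r0 + 1, c0 + 1]
    return (z00 * (1 - dc) * (1 - dr) + z01 * dc * (1 - dr)
            + z10 * (1 - dc) * dr + z11 * dc * dr)

dsm = np.outer(np.arange(4), np.ones(4)) * 2.0   # toy DSM: rises 2 m per row
matches_xy = [(1.5, 0.5), (2.0, 2.5)]            # matched 2D feature positions
# each matched feature becomes a 3D ground control point (x, y, z)
gcps = [(x, y, dsm_height(dsm, x, y)) for x, y in matches_xy]
```

The resulting 3D points are then backprojected into the aerial images and weighted as control points in the bundle adjustment, as the abstract describes.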

  8. 3D campus modeling using LiDAR point cloud data

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Yoshii, Satoshi; Funatsu, Yukihiro; Takemata, Kazuya

    2012-10-01

    The importance of having a 3D urban city model is recognized in many applications, such as risk and disaster management, city planning and development, and others. As an example of an urban model, in this study we manually reconstructed a 3D model of the KIT campus by utilizing airborne LiDAR point cloud data. The automatic extraction of building shapes is left for future work.

  9. Vegetation Structure and 3-D Reconstruction of Forests Using Ground-Based Echidna® Lidar

    NASA Astrophysics Data System (ADS)

    Strahler, A. H.; Yao, T.; Zhao, F.; Yang, X.

    2009-12-01

    A ground-based, scanning, near-infrared lidar, the Echidna® validation instrument (EVI), built by CSIRO Australia, retrieves structural parameters of forest stands rapidly and accurately, and by merging multiple scans into a single point cloud provides 3-D stand reconstructions. Echidna lidar technology scans with pulses of light at 1064 nm wavelength and digitizes the light returns sufficiently finely to recover and distinguish the differing shapes of return pulses as they are scattered by leaves and trunks or larger branches. Instrument deployments in the New England region in 2007 and 2009 and in the southern Sierra Nevada of California in 2008 provided the opportunity to test the ability of the instrument to retrieve tree diameters, stem count density (stems/ha), basal area, and above-ground woody biomass from single scans at points beneath the forest canopy. In New England in 2007, mean parameters retrieved from five scans located within six 1-ha stand sites match manually-measured parameters with values of R2 = 0.94-0.99. Processing the scans to retrieve leaf area index (LAI) provided values within the range of those retrieved with other optical instruments and hemispherical photography. Foliage profiles, which measure leaf area as a function of canopy height, showed distinctly different shapes for the stands, depending on species composition and age structure. Stand heights, obtained from foliage profiles, were not significantly different from RH100 values observed by the Laser Vegetation Imaging Sensor in 2003. Data from the California 2008 and New England 2009 deployments were still being processed at the time of abstract submission. With further hardware and software development, Echidna® technology will provide rapid and accurate measurements of forest canopy structure that can replace manual field measurements, leading to more rapid and more accurate calibration and validation of structure mapping techniques using airborne and spaceborne remote sensors. Three

  10. The 2011 Eco3D Flight Campaign: Vegetation Structure and Biomass Estimation from Simultaneous SAR, Lidar and Radiometer Measurements

    NASA Technical Reports Server (NTRS)

    Fatoyinbo, Temilola; Rincon, Rafael; Harding, David; Gatebe, Charles; Ranson, Kenneth Jon; Sun, Guoqing; Dabney, Phillip; Roman, Miguel

    2012-01-01

    The Eco3D campaign was conducted in the Summer of 2011. As part of the campaign three unique and innovative NASA Goddard Space Flight Center airborne sensors were flown simultaneously: The Digital Beamforming Synthetic Aperture Radar (DBSAR), the Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) and the Cloud Absorption Radiometer (CAR). The campaign covered sites from Quebec to Southern Florida and thereby acquired data over forests ranging from Boreal to tropical wetlands. This paper describes the instruments and sites covered and presents the first images resulting from the campaign.

  11. 3D sensors and micro-fabricated detector systems

    NASA Astrophysics Data System (ADS)

    Da Vià, Cinzia

    2014-11-01

    Micro-systems based on the Micro Electro Mechanical Systems (MEMS) technology have been used in miniaturized low power and low mass smart structures in medicine, biology and space applications. Recently similar features found their way inside high energy physics with applications in vertex detectors for high-luminosity LHC Upgrades, with 3D sensors, 3D integration and efficient power management using silicon micro-channel cooling. This paper reports on the state of this development.

  12. Optical Sensors and Methods for Underwater 3D Reconstruction.

    PubMed

    Massot-Campos, Miquel; Oliver-Codina, Gabriel

    2015-01-01

    This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389

  13. Optical Sensors and Methods for Underwater 3D Reconstruction

    PubMed Central

    Massot-Campos, Miquel; Oliver-Codina, Gabriel

    2015-01-01

    This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389

  14. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O Serdar; Alatan, A Aydn

    2013-10-01

    In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required; accurate pose estimates are needed to increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using intensity and shape index map data of the 3-D point cloud, increases 2-D, as well as 3-D, tracking performance significantly. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes. PMID:23955795
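The fusion idea behind this tracker can be shown with a toy example: a (linear) Kalman measurement update that combines a vision-based and a depth-based observation of one pose coordinate. The paper uses a full extended Kalman filter over 3-D pose; this scalar version, with made-up noise values, only illustrates how two noisy sensors tighten a single estimate:

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: state x, variance P, observation z,
    observation noise variance R."""
    K = P / (P + R)                        # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1.0                            # prior on one pose coordinate
x, P = kalman_update(x, P, 1.2, 0.5)       # vision measurement (noisier)
x, P = kalman_update(x, P, 1.0, 0.1)       # depth measurement (more precise)
```

After both updates the estimate sits between the two measurements, weighted toward the more precise depth sensor, and the posterior variance is smaller than either sensor could achieve alone.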

  15. Increased Speed: 3D Silicon Sensors. Fast Current Amplifiers

    SciTech Connect

    Parker, Sherwood; Kok, Angela; Kenney, Christopher; Jarron, Pierre; Hasi, Jasmine; Despeisse, Matthieu; Da Via, Cinzia; Anelli, Giovanni; /CERN

    2012-05-07

    The authors describe techniques to make fast, sub-nanosecond time resolution solid-state detector systems using sensors with 3D electrodes, current amplifiers, constant-fraction comparators or fast wave-form recorders, and some of the next steps to reach still faster results.

  16. Investigation on the contribution of LiDAR data in 3D cadastre

    NASA Astrophysics Data System (ADS)

    Giannaka, Olga; Dimopoulou, Efi; Georgopoulos, Andreas

    2014-08-01

    The existing 2D cadastral systems worldwide cannot provide a proper registration and representation of the land ownership rights, restrictions and responsibilities in a 3D context, which appear in our complex urban environment. In such instances, it may be necessary to consider the development of a 3D Cadastre, in which proprietary rights occupy appropriate three-dimensional space both above and below the conventional ground level. Such a system should contain the topology and the coordinates of the buildings' outlines and infrastructure. The augmented model can be formed as a full 3D Cadastre, a hybrid Cadastre, or a 2D Cadastre with 3D tags. Each country has to contemplate which alternative is appropriate, depending on its specific situation, legal framework and available technical means. In order to generate a 3D model for cadastral purposes, a system is required that is able to exploit and represent 3D data such as LiDAR, a remote sensing technology which acquires three-dimensional point clouds that describe the earth's surface and the objects on it. LiDAR gives a direct representation of objects on the ground surface and measures their coordinates by analyzing the reflected light. Moreover, it provides very accurate position and height information, although direct information about the objects' geometrical shape is not conveyed. In this study, an experimental implementation of a 3D Cadastre using LiDAR data is developed, in order to investigate whether this information can satisfy the specifications set for the purposes of the Hellenic Cadastre. GIS tools have been used for analyzing the DSM and true orthophotos of the study area. The results of this study are presented and evaluated in terms of usability and efficiency.

  17. Helicopter Flight Test of 3-D Imaging Flash LIDAR Technology for Safe, Autonomous, and Precise Planetary Landing

    NASA Technical Reports Server (NTRS)

    Roback, Vincent; Bulyshev, Alexander; Amzajerdian, Farzin; Reisse, Robert

    2013-01-01

    Two flash lidars, integrated from a number of cutting-edge components from industry and NASA, are lab characterized and flight tested for determination of maximum operational range under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project (in its fourth development and field test cycle) which is seeking to develop a guidance, navigation, and control (GN&C) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The flash lidars incorporate pioneering 3-D imaging cameras based on Indium-Gallium-Arsenide Avalanche Photo Diode (InGaAs APD) and novel micro-electronic technology for a 128 x 128 pixel array operating at 30 Hz, high pulse-energy 1.06 micrometer Nd:YAG lasers, and high performance transmitter and receiver fixed and zoom optics. The two flash lidars are characterized on the NASA-Langley Research Center (LaRC) Sensor Test Range, integrated with other portions of the ALHAT GN&C system from partner organizations into an instrument pod at NASA-JPL, integrated onto an Erickson Aircrane Helicopter at NASA-Dryden, and flight tested at the Edwards AFB Rogers dry lakebed over a field of human-made geometric hazards during the summer of 2010. Results show that the maximum operational range goal of 1 km is met and exceeded up to a value of 1.2 km. In addition, calibrated 3-D images of several hazards are acquired in real-time for later reconstruction into Digital Elevation Maps (DEM's).

  18. Helicopter flight test of 3D imaging flash LIDAR technology for safe, autonomous, and precise planetary landing

    NASA Astrophysics Data System (ADS)

    Roback, Vincent; Bulyshev, Alexander; Amzajerdian, Farzin; Reisse, Robert

    2013-05-01

    Two flash lidars, integrated from a number of cutting-edge components from industry and NASA, are lab characterized and flight tested for determination of maximum operational range under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project (in its fourth development and field test cycle) which is seeking to develop a guidance, navigation, and control (GNC) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The flash lidars incorporate pioneering 3-D imaging cameras based on Indium-Gallium-Arsenide Avalanche Photo Diode (InGaAs APD) and novel micro-electronic technology for a 128 x 128 pixel array operating at 30 Hz, high pulse-energy 1.06 μm Nd:YAG lasers, and high performance transmitter and receiver fixed and zoom optics. The two flash lidars are characterized on the NASA-Langley Research Center (LaRC) Sensor Test Range, integrated with other portions of the ALHAT GNC system from partner organizations into an instrument pod at NASA-JPL, integrated onto an Erickson Aircrane Helicopter at NASA-Dryden, and flight tested at the Edwards AFB Rogers dry lakebed over a field of human-made geometric hazards during the summer of 2010. Results show that the maximum operational range goal of 1 km is met and exceeded up to a value of 1.2 km. In addition, calibrated 3-D images of several hazards are acquired in real-time for later reconstruction into Digital Elevation Maps (DEM's).

  19. Study on 3D CFBG vibration sensor and its application

    NASA Astrophysics Data System (ADS)

    Nan, Qiuming; Li, Sheng

    2016-03-01

    A novel type of three-dimensional (3D) vibration sensor based on chirped fiber Bragg gratings (CFBG) is developed to measure 3D vibration in the field of mechanical equipment. The sensor is composed of three independent vibration sensing units. Each unit uses double matched chirped gratings as sensing elements, and the sensing signal is processed by the edge-filtering demodulation method. The structure and principle of the sensor are theoretically analyzed, and its performance was characterized experimentally, with the following results: the operating frequency range of the sensor is 10 Hz‒500 Hz; the acceleration measurement range is 2 m·s-2‒30 m·s-2; the sensitivity is about 70 mV/m·s-2; the crosstalk coefficient is greater than 22 dB; and self-compensation for temperature is available. Finally, the sensor was applied to monitor the vibration state of a radiation pump. As these experiments and applications show, the sensor has good sensing performance and can meet the requirements of engineering measurement.

  20. Qualitative and quantitative comparative analyses of 3D lidar landslide displacement field measurements

    NASA Astrophysics Data System (ADS)

    Haugen, Benjamin D.

    Landslide ground surface displacements vary at all spatial scales and are an essential component of kinematic and hazards analyses. Unfortunately, survey-based displacement measurements require personnel to enter unsafe terrain and have limited spatial resolution. And while recent advances in LiDAR technology provide the ability to remotely measure 3D landslide displacements at high spatial resolution, no single method is widely accepted. A series of qualitative metrics for comparing 3D landslide displacement field measurement methods were developed. The metrics were then applied to nine existing LiDAR techniques, and the top-ranking methods, Iterative Closest Point (ICP) matching and 3D Particle Image Velocimetry (3DPIV), were quantitatively compared using synthetic displacement and control survey data from a slow-moving translational landslide in north-central Colorado. 3DPIV was shown to be the most accurate and reliable point cloud-based 3D landslide displacement field measurement method, and the viability of LiDAR-based techniques for measuring 3D motion on landslides was demonstrated.

  1. 3D graph segmentation for target detection in FOPEN LiDAR data

    NASA Astrophysics Data System (ADS)

    Shorter, Nicholas; Locke, Judson; Smith, O'Neil; Keating, Emma; Smith, Philip

    2013-05-01

    A novel use of Felzenszwalb's efficient graph-based image segmentation algorithm is proposed for segmenting 3D volumetric foliage-penetrating (FOPEN) Light Detection and Ranging (LiDAR) data for automated target detection. The authors propose using an approximate nearest neighbors algorithm to establish neighbors of points in 3D and thus form the graph for segmentation. Following graph formation, the angular difference between the points' estimated normal vectors is proposed as the graph edge weight. The LiDAR data is then segmented in 3D, and metrics are calculated from the segments to determine their geometrical characteristics and thus their likelihood of being a target. Finally, the bare earth within the scene is automatically identified to avoid confusion of flat bare earth with flat targets. The segmentation, the calculated metrics, and the bare-earth identification all culminate in a target detection system deployed for FOPEN LiDAR. General-purpose graphics processing units (GPGPUs) are leveraged to reduce processing times for the approximate nearest neighbors and point normal estimation algorithms such that the application can run in near real time. Results are presented on several data sets.
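The graph construction described above can be pared down to a sketch: k-nearest-neighbour edges between 3-D points, weighted by the angle between their normals (here given rather than estimated), then merged with a single fixed angle threshold, a deliberate simplification of Felzenszwalb's adaptive merging criterion. Data and parameter values are illustrative:

```python
import numpy as np

def segment(points, normals, k=3, max_angle=0.3):
    """Union points whose kNN edges have small normal-angle weight."""
    n = len(points)
    parent = list(range(n))
    def find(i):                                   # union-find with compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    for i in range(n):
        for j in np.argsort(d[i])[1:k + 1]:        # k nearest neighbours of i
            angle = np.arccos(np.clip(abs(normals[i] @ normals[j]), 0.0, 1.0))
            if angle < max_angle:                  # similar orientation: merge
                parent[find(i)] = find(int(j))
    return [find(i) for i in range(n)]

# two small patches with different surface orientations -> two segments
pts = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0],
                [5, 0, 0], [6, 0, 1], [7, 0, 2]], float)
nrm = np.array([[0, 0, 1]] * 3 + [[0.707, 0, -0.707]] * 3)
labels = segment(pts, nrm)
```

The flat patch and the inclined patch end up in different segments because the edges crossing between them carry a large normal-angle weight; the deployed system replaces the brute-force distance matrix with approximate nearest neighbors on GPGPUs.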

  2. Helicopter Flight Test of a Compact, Real-Time 3-D Flash Lidar for Imaging Hazardous Terrain During Planetary Landing

    NASA Technical Reports Server (NTRS)

    Roback, VIncent E.; Amzajerdian, Farzin; Brewster, Paul F.; Barnes, Bruce W.; Kempton, Kevin S.; Reisse, Robert A.; Bulyshev, Alexander E.

    2013-01-01

    A second generation, compact, real-time, air-cooled 3-D imaging Flash Lidar sensor system, developed from a number of cutting-edge components from industry and NASA, is lab characterized and helicopter flight tested under the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project. The ALHAT project is seeking to develop a guidance, navigation, and control (GN&C) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar incorporates a 3-D imaging video camera based on Indium-Gallium-Arsenide Avalanche Photo Diode and novel micro-electronic technology for a 128 x 128 pixel array operating at a video rate of 20 Hz, a high pulse-energy 1.06 µm Neodymium-doped: Yttrium Aluminum Garnet (Nd:YAG) laser, a remote laser safety termination system, high performance transmitter and receiver optics with one and five degrees field-of-view (FOV), enhanced onboard thermal control, as well as a compact and self-contained suite of support electronics housed in a single box and built around a PC-104 architecture to enable autonomous operations. The Flash Lidar was developed and then characterized at two NASA-Langley Research Center (LaRC) outdoor laser test range facilities both statically and dynamically, integrated with other ALHAT GN&C subsystems from partner organizations, and installed onto a Bell UH-1H Iroquois "Huey" helicopter at LaRC. The integrated system was flight tested at the NASA-Kennedy Space Center (KSC) on simulated lunar approach to a custom hazard field consisting of rocks, craters, hazardous slopes, and safe-sites near the Shuttle Landing Facility runway starting at slant ranges of 750 m. In order to evaluate different methods of achieving hazard detection, the lidar, in conjunction with the ALHAT hazard detection and GN&C system, operates in both a narrow 1deg FOV raster

  3. Vector Acoustics, Vector Sensors, and 3D Underwater Imaging

    NASA Astrophysics Data System (ADS)

    Lindwall, D.

    2007-12-01

    Vector acoustic data has two more dimensions of information than pressure data and may allow for 3D underwater imaging with much less data than with hydrophone data. A vector acoustic sensor measures the particle motions due to passing sound waves and, in conjunction with a collocated hydrophone, the direction of travel of the sound waves. When using a controlled source with known source and sensor locations, the reflection points of the sound field can be determined with a simple trigonometric calculation. I demonstrate this concept with an experiment that used an accelerometer-based vector acoustic sensor in a water tank with a short-pulse source and passive scattering targets. The sensor consists of a three-axis accelerometer and a matched hydrophone. The sound source was a standard transducer driven by a short 7 kHz pulse. The sensor was suspended in a fixed location and the source was moved about the tank by a robotic arm to insonify the tank from many locations. Several floats were placed in the tank as acoustic targets at diagonal ranges of approximately one meter. The accelerometer data show the direct source wave as well as the target-scattered waves and reflections from the nearby water surface, tank bottom and sides. Without resorting to the usual methods of seismic imaging, which in this case are only two dimensional and rely entirely on the use of a synthetic source aperture, the two targets, the tank walls, the tank bottom, and the water surface were imaged. A directional ambiguity inherent to vector sensors is removed by using the collocated hydrophone data. Although this experiment was in a very simple environment, it suggests that 3-D seismic surveys may be achieved with vector sensors using the same logistics as a 2-D survey that uses conventional hydrophones. This work was supported by the Office of Naval Research, program element 61153N.
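The "simple trigonometric calculation" mentioned above can be written out: with a known source S, a sensor R measuring the direction of arrival u toward a scatterer, and total travel time t, the scattering point lies at R + d·u with |S − P| + d = c·t, which yields d in closed form. The geometry and numbers below are illustrative, not the experiment's:

```python
import numpy as np

def reflection_point(S, R, u, t, c=1500.0):
    """Locate a scatterer from source S, sensor R, unit arrival direction u
    (pointing from sensor toward scatterer), and total travel time t."""
    v = np.asarray(S, float) - np.asarray(R, float)
    ct = c * t
    d = (ct**2 - v @ v) / (2 * (ct - v @ u))   # sensor-to-scatterer range
    return np.asarray(R, float) + d * np.asarray(u, float)

# synthesize a consistent measurement, then recover the target position
S, R, target = np.array([0., 0., 0.]), np.array([2., 0., 0.]), np.array([1., 1., 0.])
t = (np.linalg.norm(target - S) + np.linalg.norm(target - R)) / 1500.0
u = (target - R) / np.linalg.norm(target - R)
P = reflection_point(S, R, u, t)
```

Geometrically this intersects the measured arrival ray with the prolate ellipsoid of constant travel time whose foci are the source and sensor.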

  4. An omnidirectional 3D sensor with line laser scanning

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Gao, Bingtuan; Liu, Chuande; Wang, Peng; Gao, Shuanglei

    2016-09-01

    Active omnidirectional vision offers wide field-of-view (FOV) imaging that can capture an entire 3D environment scene, which is promising in the field of robot navigation. However, existing omnidirectional vision sensors based on a line laser can measure only points located on the optical plane of the line laser beam, resulting in low-resolution reconstruction. To improve resolution, other omnidirectional vision sensors project a 2D encoded pattern through a projector and a curved mirror; however, the astigmatism of the curved mirror causes low-accuracy reconstruction. To solve the above problems, a rotating polygon scanning mirror is used to scan the object in the vertical direction so that an entire profile of the observed scene can be obtained at high accuracy, without the astigmatism phenomenon. The proposed method is then calibrated with a conventional 2D checkerboard plate. The experimental results show that the measurement error of the 3D omnidirectional sensor is approximately 1 mm. Moreover, the reconstruction of objects with different shapes based on the developed sensor is also verified.

  5. Inflight performance of a second-generation photon-counting 3D imaging lidar

    NASA Astrophysics Data System (ADS)

    Degnan, John; Machan, Roman; Leventhal, Ed; Lawrence, David; Jodor, Gabriel; Field, Christopher

    2008-04-01

    Sigma Space Corporation has recently developed a compact 3D imaging and polarimetric lidar suitable for use in a small aircraft or mini-UAV. A frequency-doubled Nd:YAG microchip laser generates 6 microjoule, subnanosecond pulses at fire rates up to 22 kHz. A Diffractive Optical Element (DOE) breaks the 532 nm beam into a 10x10 array of Gaussian beamlets, each containing about 1 mW of laser power (50 nJ @ 20 kHz). The reflected radiation in each beamlet is imaged by the receive optics onto individual pixels of a high efficiency, 10x10 pixel, multistop detector. Each pixel is then input to one channel of a 100 channel, multistop timer demonstrated to have a 93 picosecond timing (1.4 cm range) resolution and an event recovery time of only 1.6 nsec. Thus, each green laser pulse produces a 100 pixel volumetric 3D image. The residual infrared energy at 1064 nm is used for polarimetry. The scan pattern and frequency of a dual wedge optical scanner, synchronized to the laser fire rate, are tailored to provide contiguous coverage of a ground scene in a single overflight. In both rooftop and preliminary flight tests, the lidar has produced high spatial resolution 3D images of terrain, buildings, tree structures, power lines, and bridges with a data acquisition rate up to 2.2 million multistop 3D pixels per second. Current tests are aimed at defining the lidar's ability to image through water columns and tree canopies.
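The figures quoted in this abstract are consistent with the two-way time-of-flight relation Δr = c·Δt/2; a quick check reproduces the stated 1.4 cm range resolution from the 93 ps timing resolution, and the same relation turns the 1.6 ns event recovery time into the minimum separation of resolvable surfaces along one beamlet:

```python
c = 299_792_458.0              # speed of light in vacuum, m/s
delta_r = c * 93e-12 / 2       # range resolution: ~0.0139 m, i.e. ~1.4 cm
min_sep = c * 1.6e-9 / 2       # ~0.24 m between successive resolvable returns
```

The small recovery-time separation is what lets each beamlet record multiple stops per pulse, e.g. returns from both a tree canopy and the ground beneath it.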

  6. Utilization of 3D imaging flash lidar technology for autonomous safe landing on planetary bodies

    NASA Astrophysics Data System (ADS)

    Amzajerdian, Farzin; Vanek, Michael; Petway, Larry; Pierrottet, Diego; Busch, George; Bulyshev, Alexander

    2010-01-01

    NASA considers Flash Lidar a critical technology for enabling autonomous safe landing of future large robotic and crewed vehicles on the surface of the Moon and Mars. Flash Lidar can generate 3-Dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes during the final stages of descent and landing. The onboard flight computer can use the 3-D map of terrain to guide the vehicle to a safe site. The capabilities of Flash Lidar technology were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard a helicopter and a fixed-wing aircraft. The aircraft flight tests were performed over Moon-like terrain in the California and Nevada deserts. This paper briefly describes the Flash Lidar static and aircraft flight test results. These test results are analyzed against the landing application requirements to identify the areas of technology improvement. The ongoing technology advancement activities are then explained and their goals are described.

  7. Utilization of 3-D Imaging Flash Lidar Technology for Autonomous Safe Landing on Planetary Bodies

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Vanek, Michael; Petway, Larry; Pierrottet, Diego; Busch, George; Bulyshev, Alexander

    2010-01-01

    NASA considers Flash Lidar a critical technology for enabling autonomous safe landing of future large robotic and crewed vehicles on the surface of the Moon and Mars. Flash Lidar can generate 3-Dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes during the final stages of descent and landing. The onboard flight computer can use the 3-D map of terrain to guide the vehicle to a safe site. The capabilities of Flash Lidar technology were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard a helicopter and a fixed wing aircraft. The aircraft flight tests were performed over Moon-like terrain in the California and Nevada deserts. This paper briefly describes the Flash Lidar static and aircraft flight test results. These test results are analyzed against the landing application requirements to identify the areas of technology improvement. The ongoing technology advancement activities are then explained and their goals are described.

  8. 3D sensor algorithms for spacecraft pose determination

    NASA Astrophysics Data System (ADS)

    Trenkle, John M.; Tchoryk, Peter, Jr.; Ritter, Greg A.; Pavlich, Jane C.; Hickerson, Aaron S.

    2006-05-01

    Researchers at the Michigan Aerospace Corporation have developed accurate and robust 3-D algorithms for pose determination (position and orientation) of satellites as part of an ongoing effort supporting autonomous rendezvous, docking and space situational awareness activities. 3-D range data from a LAser Detection And Ranging (LADAR) sensor is the expected input; however, the approach is unique in that the algorithms are designed to be sensor independent. Parameterized inputs allow the algorithms to be readily adapted to any sensor of opportunity. The cornerstone of our approach is the ability to simulate realistic range data that may be tailored to the specifications of any sensor. We were able to modify an open-source raytracing package to produce point cloud information from which high-fidelity simulated range images are generated. The assumptions made in our experimentation are as follows: 1) we have access to a CAD model of the target including information about the surface scattering and reflection characteristics of the components; 2) the satellite of interest may appear at any 3-D attitude; 3) the target is not necessarily rigid, but does have a limited number of configurations; and, 4) the target is not obscured in any way and is the only object in the field of view of the sensor. Our pose estimation approach then involves rendering a large number of exemplars (100k to 5M), extracting 2-D (silhouette- and projection-based) and 3-D (surface-based) features, and then training ensembles of decision trees to predict: a) the 4-D regions on a unit hypersphere into which the unit quaternion [Q_X, Q_Y, Q_Z, Q_W] that represents the vehicle is pointing, and, b) the components of that unit quaternion. Results have been quite promising and the tools and simulation environment developed for this application may also be applied to non-cooperative spacecraft operations, Autonomous Hazard Detection and Avoidance (AHDA) for landing craft, terrain mapping, vehicle

  9. Urban 3D GIS From LiDAR and digital aerial images

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Song, C.; Simmers, J.; Cheng, P.

    2004-05-01

    This paper presents a method that integrates image knowledge and Light Detection And Ranging (LiDAR) point cloud data for urban digital terrain model (DTM) and digital building model (DBM) generation. The DBM is an object-oriented data structure, in which each building is considered a building object, i.e., an entity of the building class. The attributes of each building include roof type, polygons of the roof surfaces, height, parameters describing the roof surfaces, and the LiDAR point array within the roof surfaces. Each polygon represents a roof surface of a building. This type of data structure is flexible for adding other building attributes in the future, such as texture information and wall information. Using the extracted image knowledge, we developed a new method of interpolating LiDAR raw data into a grid digital surface model (DSM) that considers the steep discontinuities of buildings. In this interpolation method, the LiDAR data points located in the polygons of roof surfaces are first determined, and then interpolation via a planar equation is employed for grid DSM generation. The basic steps of our research are: (1) edge detection by digital image processing algorithms; (2) complete extraction of the building roof edges by digital image processing and human-computer interactive operation; (3) establishment of the DBM; (4) generation of the DTM by removing surface objects. Finally, we implement the above functions in MS VC++. The resulting urban 3D DSM, DTM and DBM are exported into an urban database for an urban 3D GIS.
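The interpolation-via-planar-equation step can be sketched as follows. This is a hedged illustration with our own function names and synthetic data, not the authors' implementation: fit a plane z = a*x + b*y + c to the LiDAR points inside one roof polygon, then evaluate it at grid DSM nodes.

```python
# A minimal sketch of interpolation via a planar equation; function names and
# the synthetic roof are ours, not the paper's implementation.
import numpy as np

def fit_roof_plane(points: np.ndarray) -> np.ndarray:
    """Least-squares plane z = a*x + b*y + c for an Nx3 roof point array."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def interpolate_dsm(coeffs: np.ndarray, grid_x: np.ndarray, grid_y: np.ndarray) -> np.ndarray:
    """Evaluate the fitted roof plane at grid DSM node coordinates."""
    a, b, c = coeffs
    return a * grid_x + b * grid_y + c

# Synthetic sloped roof: z = 0.1*x + 0.2*y + 5
rng = np.random.default_rng(0)
pts = rng.random((100, 3))
pts[:, 2] = 0.1 * pts[:, 0] + 0.2 * pts[:, 1] + 5.0
print(np.round(fit_roof_plane(pts), 3))  # ~[0.1, 0.2, 5.0]
```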

  10. Sensorized Garment Augmented 3D Pervasive Virtual Reality System

    NASA Astrophysics Data System (ADS)

    Gulrez, Tauseef; Tognetti, Alessandro; de Rossi, Danilo

    Virtual reality (VR) technology has matured to a point where humans can navigate in virtual scenes; however, providing them with a comfortable fully immersive role in VR remains a challenge. Currently available sensing solutions do not provide ease of deployment, particularly in the seated position due to sensor placement restrictions over the body, and optic sensing requires a restricted indoor environment to track body movements. Here we present a 52-sensor laden garment interfaced with VR, which offers both portability and unencumbered user movement in a VR environment. This chapter addresses the systems engineering aspects of our pervasive computing solution of the interactive sensorized 3D VR and presents the initial results and future research directions. Participants navigated in a virtual art gallery using natural body movements that were detected by their wearable sensor shirt and then mapped to electrical control signals responsible for VR scene navigation. The initial results are positive, and offer many opportunities for use in computationally intelligent man-machine multimedia control.

  11. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by a Light Detection and Ranging (LiDAR) system, because of the efficient scene scanning and spatial information collection such systems provide. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of data encoding is that the models in the database and the input point clouds can be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors are extracted from spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval.
The results of the proposed method show a clear superiority
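The top-view depth-image encoding described above can be sketched as a simple rasterization. This is a hedged illustration (function name and grid parameters are ours): bin the points into a horizontal grid, keeping the maximum height per cell so the roof surface dominates.

```python
# Illustrative sketch of top-view depth-image generation from a building
# point cloud; not the paper's code. Empty cells stay NaN.
import numpy as np

def topview_depth_image(points: np.ndarray, cell: float, shape: tuple) -> np.ndarray:
    """Nx3 points -> 2D array holding the max height per grid cell."""
    img = np.full(shape, np.nan)
    ix = (points[:, 0] // cell).astype(int)
    iy = (points[:, 1] // cell).astype(int)
    for x, y, z in zip(ix, iy, points[:, 2]):
        if 0 <= x < shape[0] and 0 <= y < shape[1]:
            if np.isnan(img[x, y]) or z > img[x, y]:
                img[x, y] = z
    return img

# Three points: two fall into cell (0, 0), one into cell (1, 0)
pts = np.array([[0.2, 0.3, 10.0], [0.4, 0.4, 12.0], [1.5, 0.2, 8.0]])
img = topview_depth_image(pts, cell=1.0, shape=(2, 2))
print(img)  # cell (0,0) keeps the higher point 12.0; cell (1,0) keeps 8.0
```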

  12. Compact 3D lidar based on optically coupled horizontal and vertical scanning mechanism for the autonomous navigation of robots

    NASA Astrophysics Data System (ADS)

    Lee, Min-Gu; Baeg, Seung-Ho; Lee, Ki-Min; Lee, Hae-Seok; Baeg, Moon-Hong; Park, Jong-Ok; Kim, Hong-Ki

    2011-06-01

    The purpose of this research is to develop a new 3D LIDAR sensor, named KIDAR-B25, for measuring 3D image information with high range accuracy, high speed and compact size. To measure the distance to the target object, we developed a range measurement unit implemented by the direct Time-Of-Flight (TOF) method using a TDC chip, a pulsed laser transmitter as the illumination source (pulse width: 10 ns, wavelength: 905 nm, repetition rate: 30 kHz, peak power: 20 W), and a Si APD receiver, which has high sensitivity and wide bandwidth. We also devised a horizontal and vertical scanning mechanism, climbing in a spiral and coupled with the laser optical path. In addition, control electronics such as the motor controller, the signal processing unit and the power distributor were developed and integrated into a compact assembly. The key point of the 3D LIDAR design proposed in this paper is the compact scanning mechanism, which is coupled with the optical module both horizontally and vertically. The KIDAR-B25 uses the same beam propagation axis for emitting the pulsed laser and receiving the reflected one, with no optical interference between them. The scanning performance of the KIDAR-B25 has been proven with stable operation up to 20 Hz (vertical) and 40 Hz (horizontal), reaching maximum speed in about 1.7 s. The vertical plane covers a +/-10 degree FOV (Field Of View) with 0.25 degree angular resolution, and the whole horizontal plane (360 degrees) is covered with 0.125 degree angular resolution. Since the KIDAR-B25 sensor was planned and developed for use in mobile robots for navigation, we conducted an outdoor test to evaluate its performance. The experimental results show that the captured 3D imaging data are applicable to robot navigation for detecting and avoiding moving objects in real time.

  13. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    NASA Astrophysics Data System (ADS)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses the canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy by using very high-resolution panchromatic (PAN) (spatial) and 8-band (multispectral) WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, were used to detect tree and building features by classifying point elevation values. The workflow employed includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model, generation of a bare-earth digital elevation model (DEM) and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file. In the second stage, ortho-rectification was carried out using ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 was estimated to be 0.25 m by using more than 10 well-distributed GCPs. Next, we generated the bare-earth DEM from the LiDAR point cloud data. In most cases, a bare-earth DEM does not represent true ground elevation. Hence, the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud data were normalized based on the DTM in order to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud.
A normalized digital surface model (nDSM) or CHM was calculated from the LiDAR data by subtracting the DEM from the DSM
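The normalization amounts to a per-cell raster subtraction, nDSM (CHM) = DSM − DEM. The 2×2 rasters below are illustrative values, not the study's data:

```python
# Illustrative CHM/nDSM computation: subtract bare-earth elevation (DEM)
# from the surface model (DSM), leaving object heights above ground.
import numpy as np

dsm = np.array([[105.0, 120.0],
                [102.0, 118.0]])  # surface model incl. trees and buildings
dem = np.array([[100.0, 101.0],
                [100.0, 100.0]])  # bare-earth elevation

chm = dsm - dem  # heights above ground: trees/buildings stand out
print(chm)
```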

  14. Robust curb detection with fusion of 3D-Lidar and camera data.

    PubMed

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-01-01

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
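The step that links per-row curb points into an optimal path can be sketched as a small dynamic program. This is a simplified illustration (the scoring and smoothness penalty are invented, not the paper's model): maximize the sum of per-row candidate scores minus a lateral-jump penalty between consecutive rows.

```python
# Simplified dynamic-programming sketch of linking row-wise curb candidates
# into one smooth path; scores and penalty are illustrative only.
def best_curb_path(scores, penalty=1.0):
    """scores[row][col]: curb-likelihood per cell -> chosen col per row."""
    n_rows, n_cols = len(scores), len(scores[0])
    dp = [list(scores[0])]
    back = []
    for r in range(1, n_rows):
        row, bk = [], []
        for c in range(n_cols):
            # Best predecessor column, penalizing lateral jumps
            best, arg = max(
                (dp[-1][p] - penalty * abs(c - p), p) for p in range(n_cols)
            )
            row.append(best + scores[r][c])
            bk.append(arg)
        dp.append(row)
        back.append(bk)
    # Backtrack from the best final cell
    c = max(range(n_cols), key=lambda k: dp[-1][k])
    path = [c]
    for bk in reversed(back):
        c = bk[c]
        path.append(c)
    return path[::-1]

scores = [[0, 5, 0], [0, 4, 1], [0, 0, 6]]
print(best_curb_path(scores))  # [1, 1, 2]: follows high scores, small jumps
```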

  15. Robust Curb Detection with Fusion of 3D-Lidar and Camera Data

    PubMed Central

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-01-01

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364

  16. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the earth's surface. Classification of LiDAR data to extract ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of different derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is greatly required to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationship and height variation analysis are adopted to segment the entire point cloud preliminarily into upper and lower contours, uniform and non-uniform surfaces, linear objects, and others. This primary classification is used on the one hand to identify the upper and lower parts of each building in an urban scene, needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces, which contains the roofs, roads and ground used in the second phase of classification. A second algorithm is developed to segment the uniform surfaces into building roofs, roads and ground; this second phase of classification is likewise based on topological relationship and height variation analysis. The proposed approach has been tested on two areas: the first is a housing complex and the second is a primary school. The approach led to successful classification of the building, vegetation and road classes.

  17. Characterizing Vegetation 3D structure Globally using Spaceborne Lidar and Radar.

    NASA Astrophysics Data System (ADS)

    Simard, M.; Pinto, N.; Riddick, S.

    2008-12-01

    We characterized global vegetation 3D structure using the ICESat/Geoscience Laser Altimeter System (GLAS) and improved spatial resolution using ALOS/Phased Array L-band Synthetic Aperture Radar (PALSAR) data over 3 sites in the United States. GLAS is a 70 m footprint lidar altimeter sampling the ground along-track every 170 m, with a track separation near the equator of around 30 km. Forest type classes were initially defined according to the Global Land Cover 2000 map (GLC2000) and 5-degree latitude intervals. This strategy enabled analysis of canopy structure as a function of land cover type and latitude, and produced an irregular grid geographically consistent with GLC2000. To estimate canopy height we removed the ground component from the lidar waveform and computed the centroid of the component due to the forest canopy. Canopy height within a grid cell was produced by computing the weighted mean of the GLAS estimates contained within that cell. The weights were used to reduce the impact of slope on lidar height estimation errors. Slope is the single most significant source of error when estimating height with a large-footprint lidar: it stretches the waveform and causes false estimates of canopy height. The Shuttle Radar Topography Mission (SRTM) elevation data were used to derive slope and weights; thus, data points located in flat areas were assigned a higher weight than points located on slopes. For each forest type, we modeled the relationship between lidar-estimated canopy height and five environmental variables: temperature, precipitation, slope, elevation, and anthropogenic disturbance. This ecological model was constructed using the machine learning method Random Forest, chosen for its flexibility and non-parametric nature. Model accuracy was calculated by subsampling the lidar data set: 75% of the data set was used to produce the map previously described and the remaining 25% for validation.
This approach was chosen to characterize individual forest canopy types and their
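The slope-weighted grid aggregation can be sketched as follows. This is a hedged illustration: the 1/(1+slope) weighting function and the sample values are ours, chosen only to show footprints on steep slopes being down-weighted.

```python
# Illustrative slope-weighted mean of per-footprint canopy height estimates
# within one grid cell; the weight function is an assumption, not the paper's.
import numpy as np

def weighted_cell_height(heights, slopes_deg):
    """Weighted mean canopy height; flatter footprints get larger weights."""
    w = 1.0 / (1.0 + np.asarray(slopes_deg))  # illustrative weight function
    return float(np.sum(w * np.asarray(heights)) / np.sum(w))

# Two footprints on flat ground plus one on a steep slope whose height
# estimate is inflated by waveform stretching; the steep one barely counts.
print(round(weighted_cell_height([20.0, 22.0, 35.0], [0.0, 1.0, 30.0]), 2))
```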

  18. Cordless hand-held optical 3D sensor

    NASA Astrophysics Data System (ADS)

    Munkelt, Christoph; Bräuer-Burchardt, Christian; Kühmstedt, Peter; Schmidt, Ingo; Notni, Gunther

    2007-07-01

    A new mobile optical 3D measurement system using a phase-correlation-based fringe projection technique is presented. The sensor consists of a digital projection unit and two cameras in a stereo arrangement, both battery powered. Data transfer to a base station is done via WLAN. This makes it possible to use the system in complicated, remote measurement situations, which are typical in archaeology and architecture. In the measurement procedure the sensor is hand-held by the user, illuminating the object with a sequence of fewer than 10 fringe patterns within a time below 200 ms. This short sequence duration was achieved by a new approach, which combines the epipolar constraint with robust phase correlation utilizing a pre-calibrated sensor head containing two cameras and a digital fringe projector. Furthermore, the system can be used to acquire the all-around shape of objects by using the phasogrammetric approach with virtual landmarks introduced by the authors [1, 2]. This way no matching procedures or markers are necessary for the registration of multiple views, which makes the system very flexible in accomplishing different measurement tasks. The realized measurement field is approximately 100 mm up to 400 mm in diameter. The mobile character makes the measurement system useful for a wide range of applications in arts, architecture, archaeology and criminology, as shown in the paper.

  19. Automatic extraction of insulators from 3D LiDAR data of an electrical substation

    NASA Astrophysics Data System (ADS)

    Arastounia, M.; Lichti, D. D.

    2013-10-01

    A considerable percentage of power outages are caused by animals that come into contact with conductive elements of electrical substations. These can be prevented by insulating conductive electrical objects, for which a 3D as-built plan of the substation is crucial. This research aims to create such a 3D as-built plan using terrestrial LiDAR data, while in this paper the aim is to extract insulators, which are key objects in electrical substations. This paper proposes a segmentation method based on a new approach to finding the principal direction of the points' distribution. This is done by forming and analysing the distribution matrix, whose elements are the ranges of the points in 9 different directions in 3D space. Comparison of the computational performance of our method with PCA (principal component analysis) shows that our approach is 25% faster, since it utilizes zero-order moments while PCA computes the first- and second-order moments, which is more time consuming. A knowledge-based approach has been developed to automatically recognize points on insulators. The method utilizes known insulator properties such as diameter and the number and spacing of their rings. The results achieved indicate that 24 out of 27 insulators could be recognized, while the 3 unrecognized ones were highly occluded. Check point analysis was performed by manually cropping all points on insulators. The results of the check point analysis show that the accuracy, precision and recall of insulator recognition are 98%, 86% and 81%, respectively. It is concluded that automatic object extraction from electrical substations using only LiDAR data is not only possible but also promising. Moreover, our approach to determining the directional distribution of points is computationally more efficient for segmentation of objects in electrical substations than PCA. Finally, our knowledge-based method is promising for recognizing points on electrical objects as it was successfully applied for
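The directional-range idea can be sketched compactly. This is a hedged illustration (the abstract uses 9 directions; the implementation details and the 4-direction example below are ours): describe a neighborhood by the range (max − min) of its points projected onto fixed unit directions, which needs no moment computation.

```python
# Illustrative sketch of the distribution-matrix idea: ranges of point
# projections along fixed directions (zero-order statistics, unlike PCA).
import numpy as np

def distribution_ranges(points: np.ndarray, directions: np.ndarray) -> np.ndarray:
    """Range (max - min) of point projections along each unit direction."""
    proj = points @ directions.T          # N x D projection matrix
    return proj.max(axis=0) - proj.min(axis=0)

# Axis-aligned plus one diagonal direction (the paper uses 9 in 3D space)
dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], dtype=float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

pts = np.array([[0, 0, 0], [2.0, 0, 0], [1.0, 0.1, 0]])  # roughly linear in x
print(np.round(distribution_ranges(pts, dirs), 2))  # large x-range dominates
```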

  20. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    NASA Astrophysics Data System (ADS)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses the canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy by using very high-resolution panchromatic (PAN) (spatial) and 8-band (multispectral) WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, were used to detect tree and building features by classifying point elevation values. The workflow employed includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model, generation of a bare-earth digital elevation model (DEM) and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file. In the second stage, ortho-rectification was carried out using ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 was estimated to be 0.25 m by using more than 10 well-distributed GCPs. Next, we generated the bare-earth DEM from the LiDAR point cloud data. In most cases, a bare-earth DEM does not represent true ground elevation. Hence, the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud data were normalized based on the DTM in order to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud.
A normalized digital surface model (nDSM) or CHM was calculated from the LiDAR data by subtracting the DEM from the DSM

  1. Motion error analysis of the 3D coordinates of airborne lidar for typical terrains

    NASA Astrophysics Data System (ADS)

    Peng, Tao; Lan, Tian; Ni, Guoqiang

    2013-07-01

    A motion error model of 3D coordinates is established and the impact on coordinate errors caused by the non-ideal movement of the airborne platform is analyzed. The simulation results of the model show that when the lidar system operates at high altitude, the influence on the positioning errors derived from the laser point cloud spacing is small. In the model, the positioning errors obey simple harmonic vibration whose amplitude envelope gradually decreases as the vibration frequency increases. When the number of vibration periods is larger than 50, the coordinate errors are almost uncorrelated with time. The elevation error is less than the planimetric error, and within the plane the error in the scanning direction is less than the error in the flight direction. The conclusions are verified through analysis of flight test data.
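The vibration behavior described above can be written generically as simple harmonic motion with a frequency-dependent amplitude envelope. The symbols below are illustrative, not taken from the paper:

```latex
% Illustrative form only: a coordinate-error component induced by platform
% vibration at frequency f, with phase \varphi and an amplitude envelope A(f)
% that decays as the vibration frequency grows.
\Delta p(t) = A(f)\,\sin\!\left(2\pi f t + \varphi\right),
\qquad A(f) \propto \frac{1}{f}
```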

  2. 3D Vegetation Mapping Using UAVSAR, LVIS, and LIDAR Data Acquisition Methods

    NASA Technical Reports Server (NTRS)

    Calderon, Denice

    2011-01-01

    The overarching objective of this ongoing project is to assess the role of vegetation within climate change. Forests capture carbon, a greenhouse gas, from the atmosphere. Thus, any change, whether natural (e.g. growth, fire, death) or due to anthropogenic activity (e.g. logging, burning, urbanization), may have a significant impact on the Earth's carbon cycle. Through the use of the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) and NASA's Laser Vegetation Imaging Sensor (LVIS), an airborne Light Detection and Ranging (LIDAR) instrument, we gather data to estimate the amount of carbon contained in forests and how the content changes over time. UAVSAR and LVIS sensors were sent all over the world with the objective of mapping terrain to gather tree canopy height and biomass data; these data are in turn used to correlate vegetation with the global carbon cycle around the world.

  3. Design and performance of a fiber array coupled multi-channel photon counting, 3D imaging, airborne lidar system

    NASA Astrophysics Data System (ADS)

    Huang, Genghua; Shu, Rong; Hou, Libing; Li, Ming

    2014-06-01

    Photon counting lidar has an ultra-high sensitivity, which can be hundreds or even thousands of times higher than that of linear-detection lidar. It can significantly increase the system's detection range and imaging density, saving size and power consumption in airborne or space-borne applications. Based on Geiger-mode Si avalanche photodiodes (Si-APDs), a prototype photon counting lidar using 8 APDs coupled with a 1×8-pixel fiber array was built in June 2011. Experiments with static objects showed that the photon counting lidar could operate in a strong solar background with 0.04 received photoelectrons on average. Limited by fewer counting opportunities on moving platforms, the probability of detection and the 3D imaging density are lower than on static platforms. In this paper, a new fiber-array-coupled multi-channel photon counting 3D imaging airborne lidar system is introduced. The correlation range receiver algorithm for photon counting 3D imaging is improved for airborne signal photon event extraction and noise filtering. The 3D imaging experiments aboard a helicopter show that the false alarm rate is less than 6×10⁻⁷ and the correct-detection rate is better than 99.9%, with 4 received photoelectrons and 0.7 MHz system noise on average.

  4. Towards 3D lidar point cloud registration improvement using optimal neighborhood knowledge

    NASA Astrophysics Data System (ADS)

    Gressin, Adrien; Mallet, Clément; Demantké, Jérôme; David, Nicolas

    2013-05-01

    Automatic 3D point cloud registration is a main issue in computer vision and remote sensing. One of the most commonly adopted solutions is the well-known Iterative Closest Point (ICP) algorithm. This standard approach performs a fine registration of two overlapping point clouds by iteratively estimating the transformation parameters, assuming a good a priori alignment is provided. A large body of literature has proposed many variations in order to improve each step of the process (namely selecting, matching, rejecting, weighting and minimizing). The aim of this paper is to demonstrate how knowledge of the shape that best fits the local geometry of each 3D point neighborhood can improve the speed and the accuracy of each of these steps. First we present the geometrical features that form the basis of this work. These low-level attributes describe the neighborhood shape around each 3D point. They allow retrieval of the optimal size for analyzing the neighborhoods at various scales, as well as the privileged local dimension (linear, planar, or volumetric). Several variations of each step of the ICP process are then proposed and analyzed by introducing these features. The variants are compared on real datasets with the original algorithm in order to identify the most efficient algorithm for the whole process. The method is successfully applied to various 3D lidar point clouds from airborne, terrestrial, and mobile mapping systems. Improvements for two ICP steps have been noted, and we conclude that our features may not be relevant for very dissimilar object samplings.
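The "privileged local dimension" can be sketched with the common eigenvalue-based dimensionality features. This is a hedged illustration: the linearity/planarity/scattering formulation below is a widely used one and is assumed here; the paper's exact definitions may differ.

```python
# Illustrative eigenvalue-based dimensionality features of a 3D point
# neighborhood: linear, planar, or volumetric (scattered) character.
import numpy as np

def dimensionality(points: np.ndarray):
    """Return (linearity, planarity, scattering) from covariance eigenvalues."""
    cov = np.cov(points.T)
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]      # l1 >= l2 >= l3
    s = np.sqrt(np.maximum(ev, 0.0))                 # clamp tiny negatives
    a1 = (s[0] - s[1]) / s[0]                        # linearity
    a2 = (s[1] - s[2]) / s[0]                        # planarity
    a3 = s[2] / s[0]                                 # scattering (volumetric)
    return a1, a2, a3

# A perfectly linear neighborhood: points along the x axis
line = np.c_[np.linspace(0, 1, 50), np.zeros(50), np.zeros(50)]
a1, a2, a3 = dimensionality(line)
print(round(a1, 2), round(a2, 2), round(a3, 2))  # ~1.0, 0.0, 0.0
```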

  5. Study of City Landscape Heritage Using Lidar Data and 3d-City Models

    NASA Astrophysics Data System (ADS)

    Rubinowicz, P.; Czynska, K.

    2015-04-01

    In contemporary town planning, protection of the urban landscape is a significant issue. This especially concerns cities whose urban structures are the result of ages of evolution and the layering of historical development. Specific panoramas and other strategic views with historic city dominants can be an important part of the cultural heritage and genius loci. On the other hand, protection of such expositions introduces limitations on future city development. Digital Earth observation techniques create new possibilities for more accurate urban studies, monitoring of urbanization processes and measurement of city landscape parameters. The paper examines possible applications of Lidar data and digital 3D-city models for: a) evaluation of strategic city views, b) mapping landscape absorption limits, and c) determination of protection zones where urbanization and building height should be limited. To this end, the paper introduces a method of computational analysis of the city landscape called Visual Protection Surface (VPS). The method emulates a virtual surface above the city that protects a set of selected strategic views. The surface defines the maximum height of buildings in such a way that no new facility can be seen in any of the selected views. The research also includes analyses of the quality of the simulations according to the form and precision of the input data: airborne Lidar / DSM models and more advanced 3D-city models (incl. semantics of the geometry, as in the CityGML format). The outcome can support professional planning of tall building development. The VPS method has been implemented in a computer program developed by the authors (C++). Simulations were carried out on the example of the city of Dresden.
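The core of a height-limiting surface can be illustrated with a strongly simplified toy: for a single viewpoint and a protected sight line of constant elevation angle, the allowed height at each ground cell grows linearly with distance from the viewpoint. This is only a sketch of the geometric idea, not the authors' VPS algorithm, which handles many views and real terrain.

```python
import numpy as np

def vps_surface(grid_xy, viewpoint, sight_elev_rad):
    """Maximum allowed building height per ground cell so nothing rises
    above a sight line of constant elevation angle from the viewpoint."""
    vx, vy, vz = viewpoint
    d = np.hypot(grid_xy[..., 0] - vx, grid_xy[..., 1] - vy)
    return vz + d * np.tan(sight_elev_rad)

# Hypothetical 1 km x 1 km grid at 100 m spacing, viewpoint 20 m above ground
xs, ys = np.meshgrid(np.arange(0.0, 1000.0, 100.0),
                     np.arange(0.0, 1000.0, 100.0))
grid = np.stack([xs, ys], axis=-1)
limits = vps_surface(grid, viewpoint=(0.0, 0.0, 20.0),
                     sight_elev_rad=np.deg2rad(2.0))
```

With several protected views, the final surface would be the cell-wise minimum of the per-view surfaces.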

  6. Integrating airborne LiDAR dataset and photographic images towards the construction of 3D building model

    NASA Astrophysics Data System (ADS)

    Idris, R.; Latif, Z. A.; Hamid, J. R. A.; Jaafar, J.; Ahmad, M. Y.

    2014-02-01

    A 3D building model of man-made objects is an important tool for various applications such as urban planning, flood mapping and telecommunication. The reconstruction of 3D building models remains difficult: no universal algorithm exists that can successfully extract all objects in an image. At present, advances in remote sensing such as airborne LiDAR (Light Detection and Ranging) technology have changed the conventional method of topographic mapping and increased interest in these valued datasets for 3D building model construction. Airborne LiDAR has accordingly proven that it can provide three-dimensional (3D) information of the Earth's surface with high accuracy. In this study, with the availability of open source software such as SketchUp, LiDAR datasets and photographic images could be integrated towards the construction of a 3D building model. To realize the work, an area of residential development situated at Putrajaya in the Klang Valley region, Malaysia, covering two square kilometres, was chosen. The accuracy of the derived 3D building model is assessed quantitatively. It is found that the difference between the vertical height (z) of the 3D building models derived from the LiDAR dataset and ground survey is approximately ± 0.09 centimeter (cm). For the horizontal component (RMSExy), the accuracy estimate derived for the 3D building models was ± 0.31 m. The results also show that the qualitative assessment of the constructed 3D building models seems feasible for depiction at the LOD 3 (Level of Detail) standard.

  7. Measuring Complete 3D Vegetation Structure With Airborne Waveform Lidar: A Calibration and Validation With Terrestrial Lidar Derived Voxels

    NASA Astrophysics Data System (ADS)

    Hancock, S.; Anderson, K.; Disney, M.; Gaston, K. J.

    2015-12-01

    Accurate measurements of vegetation are vital to understand habitats and their provision of ecosystem services, and have applications in satellite calibration, weather modelling and forestry. The majority of humans now live in urban areas, so understanding vegetation structure in these very heterogeneous areas is important. A number of previous studies have used airborne lidar (ALS) to characterise canopy height and canopy cover, but very few have fully characterised 3D vegetation, including understorey. Those that have either relied on leaf-off scans to allow unattenuated measurement of the understorey, or did not validate their results. A method for creating a detailed voxel map of urban vegetation from full-waveform ALS is presented, in which the surface area of vegetation within a grid of cuboids (1.5 m by 1.5 m by 25 cm) is defined. The ALS was processed with deconvolution and attenuation correction methods. The signal processing was calibrated and validated against synthetic waveforms generated from terrestrial laser scanning (TLS) data, taken as "truth". The TLS data were corrected for partial hits and attenuation using a voxel approach, and these steps were validated and found to be accurate. The ALS results were benchmarked against the more common discrete return ALS products (produced automatically by the lidar manufacturer's algorithms) and Gaussian decomposition of full-waveform ALS. The true vegetation profile was accurately recreated by deconvolution. Far more detail was captured by the deconvolved waveform than either the discrete return or Gaussian decomposed ALS, particularly detail within the canopy; vital information for understanding habitats. In the paper, we will present the results with a focus on the methodological steps towards generating the voxel model, and the subsequent quantitative calibration and validation of the modelling approach using TLS. We will discuss the implications of the work for complete vegetation canopy descriptions in
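The voxel-map idea above (counting lidar returns inside a regular grid of cuboids) can be sketched in a few lines; the function name, grid origin, and sample points are illustrative assumptions, and the paper's voxels store corrected surface area rather than raw counts.

```python
import numpy as np

def voxelize(points, origin, size=(1.5, 1.5, 0.25)):
    """Count lidar returns per cuboid voxel (default 1.5 m x 1.5 m x 25 cm)."""
    idx = np.floor((points - origin) / np.asarray(size)).astype(int)
    counts = np.zeros(idx.max(axis=0) + 1, dtype=int)
    np.add.at(counts, tuple(idx.T), 1)  # handles repeated voxel indices
    return counts

# Four hypothetical returns (x, y, z in metres)
pts = np.array([[0.1, 0.1, 0.05],
                [0.2, 0.3, 0.10],
                [1.6, 0.1, 0.05],
                [0.1, 0.1, 0.30]])
vox = voxelize(pts, origin=np.zeros(3))
```

Per-voxel counts are the starting point; attenuation correction then rescales deeper voxels for the signal lost to occlusion above them.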

  8. GPS 3-D cockpit displays: Sensors, algorithms, and flight testing

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew Kevin

    Tunnel-in-the-Sky 3-D flight displays have been investigated for several decades as a means of enhancing aircraft safety and utility. However, high costs have prevented commercial development and seriously hindered research into their operational benefits. The rapid development of Differential Global Positioning Systems (DGPS), inexpensive computing power, and ruggedized displays is now changing this situation. A low-cost prototype system was built and flight tested to investigate implementation and operational issues. The display provided an "out the window" 3-D perspective view of the world, letting the pilot see the horizon, runway, and desired flight path even in instrument flight conditions. The flight path was depicted as a tunnel through which the pilot flew the airplane, while predictor symbology provided guidance to minimize path-following errors. Positioning data were supplied by various DGPS sources including the Stanford Wide Area Augmentation System (WAAS) testbed. A combination of GPS and low-cost inertial sensors provided vehicle heading, pitch, and roll information. Architectural and sensor fusion tradeoffs made during system implementation are discussed. Computational algorithms used to provide guidance on curved paths over the earth geoid are outlined along with display system design issues. It was found that current technology enables low-cost Tunnel-in-the-Sky display systems with a target cost of $20,000 for large-scale commercialization. Extensive testing on Piper Dakota and Beechcraft Queen Air aircraft demonstrated enhanced accuracy and operational flexibility on a variety of complex flight trajectories. These included curved and segmented approaches, traffic patterns flown on instruments, and skywriting by instrument reference. Overlays to existing instrument approaches at airports in California and Alaska were flown and compared with current instrument procedures.
These overlays demonstrated improved utility and situational awareness for

  9. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Mostafavi, Mir Abolfazl; Wang, Chen

    2016-06-01

    Topological relations are fundamental for qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging, and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between the planar regions used to create 3D models represented by Boundary Representation in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object: disjoint, meet and intersect. The last element of the 3×3 matrix records the details of connection through the common parts of two regions and the intersection line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of a point cloud automatically.
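The three basic relations named above (disjoint, meet, intersect) can be illustrated in the simplest possible setting, axis-aligned rectangles; this toy is far simpler than the paper's 9-Intersection model for arbitrary planar regions in R3, and the function name is an assumption.

```python
def rcc_basic(a, b):
    """Classify two axis-aligned rectangles (xmin, ymin, xmax, ymax)
    into the three basic relations: disjoint, meet, or intersect."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    if ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0:
        return "disjoint"   # closures do not touch at all
    if ax1 > bx0 and bx1 > ax0 and ay1 > by0 and by1 > ay0:
        return "intersect"  # interiors overlap
    return "meet"           # share boundary points only

r1 = rcc_basic((0, 0, 1, 1), (2, 2, 3, 3))
r2 = rcc_basic((0, 0, 1, 1), (1, 0, 2, 1))
r3 = rcc_basic((0, 0, 2, 2), (1, 1, 3, 3))
```

The paper's matrix additionally records *how* two regions meet (point, segment, or shared face), which this toy collapses into the single "meet" label.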

  10. Comparison of 3D point clouds produced by LIDAR and UAV photoscan in the Rochefort cave (Belgium)

    NASA Astrophysics Data System (ADS)

    Watlet, Arnaud; Triantafyllou, Antoine; Kaufmann, Olivier; Le Mouelic, Stéphane

    2016-04-01

    Amongst today's techniques able to produce 3D point clouds, LIDAR and UAV (Unmanned Aerial Vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high-resolution, high-precision 3D point clouds, but such methods are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different kinds of environments. Moreover, the photogrammetric processing workflow for digital images taken with UAVs is becoming easier with the rise of many affordable software packages (e.g. Agisoft, PhotoModeler3D, VisualSFM). We present here a challenging study made at the Rochefort Cave Laboratory (South Belgium) comprising surface and underground surveys. The site is located in the Belgian Variscan fold-and-thrust belt, a region that shows many karstic networks within Devonian limestone units. A LIDAR scan was acquired in the main chamber of the cave (~ 15000 m³) to produce a 3D point cloud of its inner walls and infer geological beds and structures. Even though using the LIDAR instrument was not particularly comfortable in such a caving environment, the collected data showed remarkable precision according to the geometry of a few control points. We also performed another challenging survey of the same cave chamber, modelling a 3D point cloud by photogrammetry from a set of DSLR camera pictures taken from the ground together with UAV pictures. The aim was to compare both techniques in terms of (i) implementation of data acquisition and processing, (ii) quality of the resulting 3D point clouds (point density, field vs cloud recovery and point precision), and (iii) their application for geological purposes. Through the Rochefort case study, the main conclusions are that the LIDAR technique provides higher-density point clouds with slightly higher precision than the photogrammetry method. However, 3D data modeled by photogrammetry provide visible light spectral information

  11. Automatic 3d Building Model Generation from LIDAR and Image Data Using Sequential Minimum Bounding Rectangle

    NASA Astrophysics Data System (ADS)

    Kwak, E.; Al-Durgham, M.; Habib, A.

    2012-07-01

    A Digital Building Model is an important component in many applications such as city modelling, natural disaster planning, and aftermath evaluation. The importance of accurate and up-to-date building models has been discussed by many researchers, and many different approaches for efficient building model generation have been proposed. They can be categorised according to the data source used, the data processing strategy, and the amount of human interaction. In terms of data source, due to the limitations of single-source data, integration of multi-sensor data is desired since it preserves the advantages of the involved datasets. Aerial imagery and LiDAR data are among the commonly combined sources, yielding 3D building models with good vertical accuracy from laser scanning and good planimetric accuracy from aerial images. The most common data processing strategies are data-driven and model-driven. Theoretically, one can model any building shape using data-driven approaches, but practically this leaves the question of how to impose constraints and set rules during the generation process. Due to the complexity of implementing data-driven approaches, model-based approaches have drawn the attention of researchers. However, the major drawback of model-based approaches is that establishing representative models involves a manual process requiring human intervention. Therefore, the objective of this research work is to automatically generate building models using the Minimum Bounding Rectangle algorithm and sequentially adjusting them to combine the advantages of image and LiDAR datasets.
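A minimum-area bounding rectangle can be found by testing each convex-hull edge orientation, since the optimal rectangle is aligned with one hull edge (the rotating-calipers argument). The sketch below shows that core step only, not the authors' sequential adjustment pipeline; names and the toy footprint are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def min_bounding_rect(points):
    """Minimum-area bounding rectangle of a 2-D point set."""
    hull = points[ConvexHull(points).vertices]
    edges = np.diff(np.vstack([hull, hull[:1]]), axis=0)
    best = None
    for ex, ey in edges:
        theta = -np.arctan2(ey, ex)  # rotate this edge onto the x-axis
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        rot = hull @ R.T
        w, h = rot.max(0) - rot.min(0)
        if best is None or w * h < best[0]:
            best = (w * h, theta)
    return best  # (area, rotation that axis-aligns the rectangle)

# Toy footprint: a 2 m x 1 m rectangle plus one interior point
footprint = np.array([[0, 0], [2, 0], [2, 1], [0, 1], [1, 0.5]], float)
area, theta = min_bounding_rect(footprint)
```

In a building pipeline the recovered rotation gives the principal orientation, which then guides the regularization of boundary segments.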

  12. Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Sun, Shaohui

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications; it combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though a huge volume of work has been done, many problems remain unsolved. Automation is one of the key focus areas in this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops, with complex rigorous boundaries and vertical walls that connect rooftops to the ground. The graph cuts algorithm is used to separate vegetative elements from the rest of the scene content, based on local analysis of the properties of the implicit surface patch. The ground terrain and building rooftop footprints are then extracted using the developed strategy, a two-step hierarchical Euclidean clustering. The method presented here adopts a "divide-and-conquer" scheme. Once the building footprints are segmented from the terrain and vegetative areas, the whole scene is divided into independent processing units, each representing a potential rooftop. For each individual building region, significant features on the rooftop are further detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each building rooftop feature is calculated using a minimum-bounding-box fitting technique, and is used to guide the refinement of the shapes and boundaries of the rooftop parts. Boundaries of all these features are refined to produce a strict description.
Once the description of the rooftops is achieved, polygonal mesh models are generated by creating surface patches with outlines defined by detected

  13. An efficient approach to 3D single tree-crown delineation in LiDAR data

    NASA Astrophysics Data System (ADS)

    Mongus, Domen; Žalik, Borut

    2015-10-01

    This paper proposes a new method for 3D delineation of single tree-crowns in LiDAR data by exploiting the complementarity of treetop and tree trunk detection. A unified mathematical framework based on graph theory is provided, allowing all the segmentations to be achieved using marker-controlled watersheds. Treetops are defined by detecting concave neighbourhoods within the canopy height model using locally fitted surfaces. These serve as markers for watershed segmentation of the canopy layer, where possible oversegmentation is reduced by merging regions based on their heights, areas, and shapes. Additional tree crowns are delineated from mid- and under-storey layers based on tree trunk detection. A new approach for estimating the verticality of the points' distributions is proposed for this purpose. Watershed segmentation is then applied to a density function within the voxel space, while boundaries of trees delineated from the canopy layer are used to prevent the overspreading of regions. The experiments show an approximately 6% increase in efficiency for the proposed treetop definition based on locally fitted surfaces in comparison with the traditionally used local maxima of the smoothed canopy height model. In addition, a 4% increase in efficiency is achieved by the proposed tree trunk detection. Although tree trunk detection alone is dependent on data density, when supplemented with treetop detection the proposed approach is efficient even when dealing with low-density point clouds.
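For context, the traditional baseline the paper improves on, local maxima of the canopy height model (CHM) used as watershed markers, can be sketched in a few lines. This is the *baseline*, not the authors' locally-fitted-surface method; the toy CHM and thresholds are assumptions.

```python
import numpy as np
from scipy import ndimage

def treetop_markers(chm, min_height=2.0, window=3):
    """Local maxima of the canopy height model, usable as watershed markers."""
    local_max = ndimage.maximum_filter(chm, size=window) == chm
    return np.argwhere(local_max & (chm >= min_height))

# Toy CHM (metres): two trees of height 5 m and 7 m on flat ground
chm = np.array([[0, 0, 0, 0, 0],
                [0, 5, 0, 0, 0],
                [0, 0, 0, 7, 0],
                [0, 0, 0, 0, 0]], float)
tops = treetop_markers(chm)
```

Each returned cell seeds one watershed region; the paper's contribution replaces this maximum filter with concavity detection on locally fitted surfaces.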

  14. Lidar Sensors for Autonomous Landing and Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Petway, Larry B.; Hines, Glenn D.; Roback, Vincent E.; Reisse, Robert A.; Pierrottet, Diego F.

    2013-01-01

    Lidar technology will play an important role in enabling highly ambitious missions being envisioned for exploration of solar system bodies. Currently, NASA is developing a set of advanced lidar sensors, under the Autonomous Landing and Hazard Avoidance (ALHAT) project, aimed at safe landing of robotic and manned vehicles at designated sites with a high degree of precision. These lidar sensors are an Imaging Flash Lidar capable of generating high resolution three-dimensional elevation maps of the terrain, a Doppler Lidar for providing precision vehicle velocity and altitude, and a Laser Altimeter for measuring distance to the ground and ground contours from high altitudes. The capabilities of these lidar sensors have been demonstrated through four helicopter and one fixed-wing aircraft flight test campaigns conducted from 2008 through 2012 during different phases of their development. Recently, prototype versions of these landing lidars have been completed for integration into a rocket-powered terrestrial free-flyer vehicle (Morpheus) being built by NASA Johnson Space Center. Operating in closed-loop with other ALHAT avionics, the viability of the lidars for future landing missions will be demonstrated. This paper describes the ALHAT lidar sensors and assesses their capabilities and impacts on future landing missions.

  15. Compact, High Energy 2-micron Coherent Doppler Wind Lidar Development for NASA's Future 3-D Winds Measurement from Space

    NASA Technical Reports Server (NTRS)

    Singh, Upendra N.; Koch, Grady; Yu, Jirong; Petros, Mulugeta; Beyon, Jeffrey; Kavaya, Michael J.; Trieu, Bo; Chen, Songsheng; Bai, Yingxin; Petzar, Paul; Modlin, Edward A.; Barnes, Bruce W.; Demoz, Belay B.

    2010-01-01

    This paper presents an overview of 2-micron laser transmitter development at NASA Langley Research Center for coherent-detection lidar profiling of winds. The novel high-energy, 2-micron, Ho:Tm:LuLiF laser technology developed at NASA Langley was employed to study the laser technology currently envisioned by NASA for future global coherent Doppler lidar winds measurement. The 250 mJ, 10 Hz laser was designed as an integral part of a compact lidar transceiver developed for future aircraft flight. Ground-based wind profiles made with this transceiver will be presented. NASA Langley is currently funded to build complete Doppler lidar systems using this transceiver for the DC-8 aircraft in autonomous operation. Recently, the LaRC 2-micron coherent Doppler wind lidar system was selected to contribute to the NASA Science Mission Directorate (SMD) Earth Science Division (ESD) hurricane field experiment in 2010, titled Genesis and Rapid Intensification Processes (GRIP). The Doppler lidar system will measure vertical profiles of horizontal vector winds from the DC-8 aircraft using NASA Langley's existing 2-micron, pulsed, coherent-detection Doppler wind lidar system that is ready for DC-8 integration. The measurements will typically extend from the DC-8 to the earth's surface. They will be highly accurate in both wind magnitude and direction. Displays of the data will be provided in real time on the DC-8. The pulsed Doppler wind lidar of NASA Langley Research Center is much more powerful than past Doppler lidars. The operating range, accuracy, range resolution, and time resolution will be unprecedented. We expect the data to play a key role, combined with the other sensors, in improving understanding and predictive algorithms for hurricane strength and track.
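Coherent Doppler lidar converts a measured Doppler shift into line-of-sight wind speed via v = λΔf/2 (the factor 2 accounts for the round trip). A one-line illustration using a nominal 2.05-micron wavelength; the numerical shift below is made up.

```python
WAVELENGTH_M = 2.05e-6  # nominal 2-micron transmitter wavelength

def radial_wind_speed(doppler_shift_hz):
    """Line-of-sight wind speed from the coherent-detection Doppler shift:
    v = lambda * df / 2 (factor 2 for the round trip)."""
    return 0.5 * WAVELENGTH_M * doppler_shift_hz

v = radial_wind_speed(10e6)  # a hypothetical 10 MHz shift
```

Horizontal vector winds are then recovered by combining line-of-sight speeds from several scan azimuths.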

  16. 3D Modeling of Landslide in Open-pit Mining on Basis of Ground-based LIDAR Data

    NASA Astrophysics Data System (ADS)

    Hu, H.; Fernandez-Steeger, T. M.; Azzam, R.; Arnhardt, C.

    2009-04-01

    Slope stability is not only an important problem related to production and safety in open-pit mining, but also a very complex task. Three main groups of factors affect slope stability: geotechnical factors (geological structure, lithologic characteristics, water, cohesion, friction, etc.); climate factors (rainfall and temperature); and external factors (the open-pit mining process, blast vibration, dynamic load, etc.). The external factors, specific to open-pit mining, not only cause dynamic problems but also induce fast geometry changes, which must be considered in subsequent research using numerical simulation and stability analysis. Recently, LIDAR technology has been applied in many fields and places worldwide. Ground-based LIDAR, with accuracy up to 3 mm, is increasingly suited to monitoring landslides and detecting change. LIDAR data collection and preprocessing research has been carried out by the Department of Engineering Geology and Hydrogeology at RWTH Aachen University. LIDAR data, a so-called point cloud of mass data in high density, can be obtained in a short time for the sensitive open-pit mining area using ground-based LIDAR. To obtain a consistent surface model, it is necessary to set up multiple scans with the ground-based LIDAR. The framework of data preprocessing, which can be implemented in PolyWorks, is as follows: gross error detection and elimination; integration of the reference frame; fusion of different scans (re-sampled in the overlap region); and data reduction without removing useful information, which is a challenge and a research front in LIDAR data processing. After preprocessing, a 3D surface model can be generated directly in PolyWorks or in other software by building triangular meshes. The 3D surface landslide model can be applied to further research such as real-time monitoring of landslide geometry due to the fast data collection and

  17. 3-D water vapor field in the atmospheric boundary layer observed with scanning differential absorption lidar

    NASA Astrophysics Data System (ADS)

    Späth, Florian; Behrendt, Andreas; Muppa, Shravan Kumar; Metzendorf, Simon; Riede, Andrea; Wulfmeyer, Volker

    2016-04-01

    High-resolution three-dimensional (3-D) water vapor data of the atmospheric boundary layer (ABL) are required to improve our understanding of land-atmosphere exchange processes. For this purpose, the scanning differential absorption lidar (DIAL) of the University of Hohenheim (UHOH) was developed, along with new analysis tools and visualization methods. The instrument determines 3-D fields of the atmospheric water vapor number density with a temporal resolution of a few seconds and a spatial resolution of up to a few tens of meters. We present three case studies from two field campaigns. In spring 2013, the UHOH DIAL was operated within the scope of the HD(CP)2 Observational Prototype Experiment (HOPE) in western Germany. HD(CP)2 stands for High Definition of Clouds and Precipitation for advancing Climate Prediction and is a German research initiative. Range-height indicator (RHI) scans of the UHOH DIAL show the water vapor heterogeneity within a range of a few kilometers up to an altitude of 2 km and its impact on the formation of clouds at the top of the ABL. The uncertainty of the measured data was assessed for the first time by extending to scanning data a technique formerly applied to vertical time series. Typically, the accuracy of the DIAL measurements is between 0.5 and 0.8 g m-3 (or < 6 %) within the ABL, even during daytime. This allows an RHI scan to be performed from the surface to an elevation angle of 90° within 10 min. In summer 2014, the UHOH DIAL participated in the Surface Atmosphere Boundary Layer Exchange (SABLE) campaign in southwestern Germany. Conical volume scans were made which reveal multiple water vapor layers in three dimensions. Differences in their heights in different directions can be attributed to different surface elevation. With low-elevation scans in the surface layer, the humidity profiles and gradients can be related to different land cover such as maize, grassland, and forest as well as different surface layer
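The DIAL retrieval behind these measurements can be illustrated with the standard two-wavelength equation: the number density in each range gate follows from the ratio of on-line and off-line returns across the gate. The cross-section and signal values below are synthetic, chosen only to show that the retrieval inverts the forward model.

```python
import numpy as np

def dial_number_density(p_on, p_off, dr_m, dsigma_m2):
    """Per-gate number density from on/off-line DIAL returns:
    n = ln[(P_on(r) P_off(r+dr)) / (P_on(r+dr) P_off(r))] / (2 dsigma dr)."""
    ratio = (p_on[:-1] * p_off[1:]) / (p_on[1:] * p_off[:-1])
    return np.log(ratio) / (2.0 * dsigma_m2 * dr_m)

# Synthetic returns from a constant absorber (illustrative values only)
n_true, dsigma, dr = 3.3e23, 1.0e-26, 100.0    # m^-3, m^2, m
gates = np.arange(6)
p_on = np.exp(-2.0 * n_true * dsigma * dr * gates)  # on-line, absorbed
p_off = np.ones_like(p_on)                          # off-line, unabsorbed
n_est = dial_number_density(p_on, p_off, dr, dsigma)
```

Because the retrieval uses only ratios of adjacent gates, system constants and aerosol backscatter largely cancel, which is the key strength of the DIAL technique.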

  18. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high-resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and up-to-date 3D models of buildings, a key element of city structure, for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for automatic recognition of building roof models such as flat, gable, hip, and pyramid-hip, based on deep structures for hierarchical learning of features extracted from both LiDAR and aerial ortho-photos. The main steps of this approach are building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features for the convolutional neural network to localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network based on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the computation time of learning can be decreased significantly using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach in detecting and extracting building roof patterns automatically, considering the complementary nature of height and RGB information.
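The convolution-plus-subsampling building block described above can be shown with a bare-bones numpy sketch: one hand-written convolutional layer with ReLU and stride-2 subsampling applied to a hypothetical 4-channel patch (RGB ortho-photo plus normalized height). This illustrates the layer mechanics only, not the authors' trained network.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution over a channel stack: x is (C, H, W), k is (C, kh, kw)."""
    C, H, W = x.shape
    _, kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[:, i:i + kh, j:j + kw] * k)
    return out

# Hypothetical 4-channel 8x8 input patch and a single averaging kernel
patch = np.random.default_rng(2).random((4, 8, 8))
kernel = np.ones((4, 3, 3)) / 36.0
feat = np.maximum(conv2d(patch, kernel), 0)  # ReLU feature map
pooled = feat[::2, ::2]                      # stride-2 subsampling layer
```

Stacking several such learned layers, then a classifier over the pooled features, gives the flat/gable/hip/pyramid-hip labels.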

  19. 3-D earthquake surface displacements from differencing pre- and post-event LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Krishnan, A. K.; Nissen, E.; Arrowsmith, R.; Saripalli, S.

    2012-12-01

    The explosion in aerial LiDAR surveying along active faults across the western United States and elsewhere provides a high-resolution topographic baseline against which to compare repeat LiDAR datasets collected after future earthquakes. We present a new method for determining 3-D coseismic surface displacements and rotations by differencing pre- and post-earthquake LiDAR point clouds using an adaptation of the Iterative Closest Point (ICP) algorithm, a point set registration technique widely used in medical imaging, computer vision and graphics. There is no need for any gridding or smoothing of the LiDAR data and the method works well even with large mismatches in the density of the two point clouds. To explore the method's performance, we simulate pre- and post-event point clouds using real ("B4") LiDAR data on the southern San Andreas Fault perturbed with displacements of known magnitude. For input point clouds with ~2 points per square meter, we are able to reproduce displacements with a 50 m grid spacing and with horizontal and vertical accuracies of ~20 cm and ~4 cm. In the future, finer grids and improved precisions should be possible with higher shot densities and better survey geo-referencing. By capturing near-fault deformation in 3-D, LiDAR differencing with ICP will complement satellite-based techniques such as InSAR which map only certain components of the surface deformation and which often break down close to surface faulting or in areas of dense vegetation. It will be especially useful for mapping shallow fault slip and rupture zone deformation, helping inform paleoseismic studies and better constrain fault zone rheology. Because ICP can image rotations directly, the technique will also help resolve the detailed kinematics of distributed zones of faulting where block rotations may be common.

  20. Test Beam Results of 3D Silicon Pixel Sensors for the ATLAS upgrade

    SciTech Connect

    Grenier, P.; Alimonti, G.; Barbero, M.; Bates, R.; Bolle, E.; Borri, M.; Boscardin, M.; Buttar, C.; Capua, M.; Cavalli-Sforza, M.; Cobal, M.; Cristofoli, A.; Dalla Betta, G.F.; Darbo, G.; Da Via, C.; Devetak, E.; DeWilde, B.; Di Girolamo, B.; Dobos, D.; Einsweiler, K.; Esseni, D.; et al.

    2011-08-19

    Results of beam tests of 3D silicon pixel sensors aimed at the ATLAS Insertable B-Layer and High Luminosity LHC (HL-LHC) upgrades are presented. Measurements include charge collection, tracking efficiency and charge sharing between pixel cells as a function of track incident angle, and were performed with and without a 1.6 T magnetic field oriented as the ATLAS Inner Detector solenoid field. Sensors were bump-bonded to the front-end chip currently used in the ATLAS pixel detector. Full 3D sensors, with electrodes penetrating the entire wafer thickness and an active edge, and double-sided 3D sensors with partially overlapping bias and read-out electrodes were tested and showed comparable performance. Full and partial 3D pixel detectors were tested, with and without the 1.6 T magnetic field, in high-energy pion beams at the CERN SPS North Area in 2009. Sensor characteristics were measured as a function of beam incident angle and compared to a regular planar pixel device. Overall, full and partial 3D devices show similar behavior, and the magnetic field has no sizeable effect on 3D performance. Due to electrode inefficiency, 3D devices exhibit some loss of tracking efficiency for normally incident tracks, but recover full efficiency with tilted tracks. As expected from the electric field configuration, 3D sensors show little charge sharing between cells.

  1. 3D, Flash, Induced Current Readout for Silicon Sensors

    SciTech Connect

    Parker, Sherwood I.

    2014-06-07

    A new method for silicon microstrip and pixel detector readout using: (1) 65 nm-technology current amplifiers which can, for the first time with silicon microstrip and pixel detectors, have response times far shorter than the charge collection time; (2) 3D trench electrodes large enough to subtend a reasonable solid angle at most track locations, and so have adequate sensitivity over a substantial volume of the pixel; and (3) induced signals in addition to, or in place of, collected charge.

  2. 3D sensor for indirect ranging with pulsed laser source

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Bellisai, S.; Villa, F.; Scarcella, C.; Bahgat Shehata, A.; Tosi, A.; Padovini, G.; Zappa, F.; Tisa, S.; Durini, D.; Weyers, S.; Brockherde, W.

    2012-10-01

    The growing interest in fast, compact and cost-effective 3D ranging imagers for automotive applications has prompted the exploration of many different techniques for 3D imaging and the development of new systems for this purpose. CMOS imagers that exploit phase-resolved techniques provide accurate 3D ranging with no complex optics and are rugged and cost-effective. Phase-resolved techniques indirectly measure the round-trip time of the light emitted by a laser and backscattered from a distant target by computing the phase delay between the modulated light and the detected signal. Single-photon detectors, with their high sensitivity, allow the scene to be actively illuminated with low power (less than 10 W with diffused daylight illumination). We report on a 4x4 array of CMOS SPADs (Single Photon Avalanche Diodes), designed in a high-voltage 0.35 μm CMOS technology for pulsed modulation, in which each pixel computes the phase difference between the laser pulse and the reflected pulse. Each pixel comprises a high-performance 30 μm diameter SPAD, an analog quenching circuit, two 9-bit up-down counters and memories to store data during readout. The first counter counts the photons detected by the SPAD in a time window synchronous with the laser pulse and integrates the whole echoed signal. The second counter accumulates the number of photons detected in a window shifted with respect to the laser pulse, acquiring only a portion of the reflected signal. The array is read out with a global-shutter architecture using a 100 MHz clock; the maximum frame rate is 3 Mframe/s.
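
    The two-counter scheme above amounts to an indirect time-of-flight measurement: the ratio of the shifted-window count to the full-echo count grows linearly with the round-trip delay. A minimal sketch of that conversion (function and parameter names are illustrative, not from the paper):

```python
# Hypothetical illustration of pulsed indirect ToF with two gated counters.
def estimate_distance(c_full, c_shifted, pulse_width_s, c_light=3.0e8):
    """Estimate target distance from two gated photon counts.

    c_full    : photons counted in a window synchronous with the laser pulse
                (integrates the whole echo)
    c_shifted : photons counted in a window delayed by one pulse width
                (integrates only the tail of the echo)
    The fraction c_shifted / c_full grows linearly with the round-trip
    delay, so distance = fraction * (c * pulse_width / 2).
    """
    if c_full == 0:
        raise ValueError("no signal detected")
    fraction = c_shifted / c_full
    return fraction * c_light * pulse_width_s / 2.0
```

    In practice the counts would first be corrected for background photons accumulated in each window.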

  3. A 3D Sensor Based on a Profilometrical Approach

    PubMed Central

    Pedraza-Ortega, Jesús Carlos; Gorrostieta-Hurtado, Efren; Delgado-Rosas, Manuel; Canchola-Magdaleno, Sandra L.; Ramos-Arreguin, Juan Manuel; Aceves Fernandez, Marco A.; Sotomayor-Olmedo, Artemio

    2009-01-01

    An improved method is presented that uses Fourier- and wavelet-transform-based analysis to infer and extract 3D information from an object by projecting fringes onto it. The method requires a single image containing a projected sinusoidal white-light fringe pattern; the pattern's known spatial frequency is used to avoid discontinuities in fringes with high frequency. Several computer simulations and experiments have been carried out to verify the analysis. The agreement between numerical simulations and experiments proves the validity of the proposed method. PMID:22303176
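
    The Fourier-transform branch of such fringe analysis is typically a Takeda-style procedure: isolate the carrier lobe in the spectrum, shift it to baseband, and take the angle of the inverse transform. A one-dimensional sketch under that assumption (not code from the paper):

```python
import numpy as np

def fringe_phase_fft(row, carrier_freq_bins, halfwidth):
    """Recover the wrapped phase of a 1-D sinusoidal fringe pattern
    (Takeda-style Fourier analysis; parameter names are illustrative)."""
    n = len(row)
    spec = np.fft.fft(row - row.mean())
    # Band-pass the positive carrier lobe only
    mask = np.zeros(n)
    mask[carrier_freq_bins - halfwidth:carrier_freq_bins + halfwidth + 1] = 1.0
    # Shift the lobe to baseband and invert
    analytic = np.fft.ifft(np.roll(spec * mask, -carrier_freq_bins))
    return np.angle(analytic)   # wrapped phase, radians
```

    The wrapped phase would then be unwrapped and converted to height via the system's triangulation geometry.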

  4. 3-D sensing with polar exponential sensor arrays

    NASA Technical Reports Server (NTRS)

    Weiman, Carl F. R.

    1988-01-01

    The computations required for three-dimensional vision, in such cases as scaling for perspective and optic flow, are reduced to additive operations by the implicit logarithmic transformation of image coordinates. Expressions for these computations are derived and applied to illustrative examples of sensor design. The advantages of polar exponential arrays over X-Y rasters for binocular vision include the inference of range and three-dimensional position from local image velocity without knowledge of pixel location, provided that the relative velocity of the target and sensor is known by some other means.
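
    The logarithmic transformation referred to above can be sketched directly: in log-polar coordinates, a uniform zoom of the image becomes the same additive shift at every pixel, which is what turns scaling into an additive operation (coordinate convention assumed for illustration):

```python
import math

def log_polar(x, y):
    """Map Cartesian image coordinates to log-polar (u, v):
    u = ln(radius), v = angle.  A uniform zoom by factor s shifts u
    by ln(s) at every pixel, turning perspective scaling into a
    purely additive operation independent of pixel location."""
    r = math.hypot(x, y)
    return math.log(r), math.atan2(y, x)
```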

  5. First Experiences with Kinect v2 Sensor for Close Range 3d Modelling

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Macher, H.; Mittet, M.-A.; Landes, T.; Grussenmeyer, P.

    2015-02-01

    RGB-D cameras, also known as range imaging cameras, are a recent generation of sensors. Because they measure distances to objects at a high frame rate, such sensors are increasingly used for 3D acquisition, and more generally for applications in robotics and computer vision. This kind of sensor became popular especially after the Kinect v1 (Microsoft) arrived on the market in November 2010. In July 2014, Microsoft released a new sensor, the Kinect for Windows v2, based on a different technology from its first device. However, because it was initially developed for video games, assessing the quality of this new device for 3D modelling is a major line of investigation. In this paper, first experiences with the Kinect v2 sensor are related, and its suitability for close-range 3D modelling is investigated. For this purpose, error sources in the output data as well as a calibration approach are presented.

  6. Application of Lidar Data and 3D-City Models in Visual Impact Simulations of Tall Buildings

    NASA Astrophysics Data System (ADS)

    Czynska, K.

    2015-04-01

    The paper examines possibilities and limitations of the application of Lidar data and digital 3D city models for specialist urban analyses of tall buildings. The location and height of tall buildings are a subject of discussion, conflict and controversy in many cities. The most important aspect is the visual influence of tall buildings on the city landscape, significant panoramas and other strategic city views, a topical issue in contemporary town planning worldwide. Over 50% of high-rise buildings on Earth were built in the last 15 years. Tall buildings may be a threat especially for historically developed cities, which are typical of Europe. Contemporary Earth observation, increasingly available Lidar scanning and 3D city models are new tools for more accurate urban analysis of the impact of tall buildings. The article presents appropriate simulation techniques and the general assumptions of geometric and computational algorithms, covering both available methodologies and individual methods developed by the author. The goal is to develop geometric computation methods for a GIS representation of the visual impact of a selected tall building on the structure of a large city. To this end, the article introduces a Visual Impact Size (VIS) method. The presented analyses were developed using an airborne Lidar / DSM model and more processed models (such as CityGML) containing geometry and its semantics. The included simulations were carried out on the example of the Berlin agglomeration.

  7. Virtual 3D interactive system with embedded multiwavelength optical sensor array and sequential devices

    NASA Astrophysics Data System (ADS)

    Wang, Guo-Zhen; Huang, Yi-Pai; Hu, Kuo-Jui

    2012-06-01

    We propose a virtual 3D-touch system operated by a bare finger, which can detect the 3-axis (x, y, z) position of the finger. The system has a multi-wavelength optical sensor array embedded on the backplane of a TFT panel and sequential devices on the border of the panel. We developed a reflecting mode that enables 3D interaction with a bare finger. A 4-inch mobile 3D-LCD with the proposed system has already been successfully demonstrated.

  8. Flexible 3D reconstruction method based on phase-matching in multi-sensor system.

    PubMed

    Wu, Qingyang; Zhang, Baichun; Huang, Jinhui; Wu, Zejun; Zeng, Zeng

    2016-04-01

    Considering the measuring-range limitation of a single-sensor system, multi-sensor systems have become essential for obtaining complete image information of an object in the field of 3D image reconstruction. In traditional multi-sensor systems, however, the sensors work independently, and each sensor system must be calibrated separately; calibration between all the single-sensor systems is complicated and time-consuming. In this paper, we present a flexible 3D reconstruction method based on phase-matching in a multi-sensor system. While calibrating each sensor, it simultaneously registers the data of the multi-sensor system in a unified coordinate system. After all sensors are calibrated, the whole 3D image data directly exist in the unified coordinate system, and there is no need to calibrate the positions between sensors any more. Experimental results prove that the method is simple in operation, accurate in measurement, and fast in 3D image reconstruction. PMID:27137020

  9. Colored 3D surface reconstruction using Kinect sensor

    NASA Astrophysics Data System (ADS)

    Guo, Lian-peng; Chen, Xiang-ning; Chen, Ying; Liu, Bin

    2015-03-01

    A colored 3D surface reconstruction method that effectively fuses the information of both depth and color images from a Microsoft Kinect is proposed and demonstrated by experiment. Kinect depth images are processed with an improved joint-bilateral filter based on region segmentation, which efficiently combines the depth and color data to improve depth quality. The registered depth data are integrated into a surface reconstruction through the colored truncated signed distance fields presented in this paper. Finally, improved ray casting for rendering the fully colored surface is implemented to estimate the color texture of the reconstructed object. Capturing the depth and color images of a toy car, the improved joint-bilateral filter based on region segmentation improves the quality of the depth images with a peak signal-to-noise ratio (PSNR) gain of approximately 4.57 dB, compared with 1.16 dB for the plain joint-bilateral filter. The colored reconstruction results for the toy car demonstrate the suitability and ability of the proposed method.
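
    Colored truncated signed distance field integration generally fuses each new depth/color observation into a voxel by a weighted running average. A per-voxel sketch of that update, with illustrative parameter names (the paper's exact formulation may differ):

```python
def update_voxel(tsdf, weight, color, sdf_new, color_new, w_new=1.0, trunc=0.05):
    """Fuse one new observation into a voxel of a colored TSDF volume:
    a minimal sketch of the weighted running average commonly used in
    truncated signed distance field integration."""
    d = max(-trunc, min(trunc, sdf_new))      # truncate the signed distance
    w = weight + w_new
    tsdf = (tsdf * weight + d * w_new) / w    # running average of distance
    color = tuple((c * weight + cn * w_new) / w
                  for c, cn in zip(color, color_new))
    return tsdf, w, color
```

    Repeating this update over all registered frames yields the volume from which the colored surface is extracted by ray casting.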

  10. Computing and monitoring potential of public spaces by shading analysis using 3d lidar data and advanced image analysis

    NASA Astrophysics Data System (ADS)

    Zwolinski, A.; Jarzemski, M.

    2015-04-01

    The paper regards the specific context of public spaces in the "shadow" of tall buildings located in European cities. The majority of tall buildings in European cities were built in the last 15 years. Tall buildings appear mainly in city centres, directly at important public spaces that form a viable environment for inhabitants with a variety of public functions (open spaces, green areas, recreation places, shops, services, etc.). All these amenities and services are under the direct impact of extensive shading from the tall buildings. The paper focuses on the analysis and representation of the impact of shading from tall buildings on various public spaces in cities using 3D city models. The computer environment of 3D city models in the CityGML standard uses 3D LiDAR data as one of the data types for defining 3D cities. The structure of CityGML allows analytic applications using existing computer tools, as well as the development of new techniques to estimate the extent of shading from high-rises affecting life in public spaces. These measurable shading parameters at specific times are crucial for the proper functioning, viability and attractiveness of public spaces, and ultimately for the location of tall buildings at main public spaces in cities. The paper explores the impact of shading from tall buildings in different spatial contexts, using CityGML models based on core LiDAR data to support controlled urban development in the sense of viable public spaces. The article was prepared within the research project 2TaLL: Application of 3D Virtual City Models in Urban Analyses of Tall Buildings, realized as a part of the Polish-Norway Grants.
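
    As a first-order illustration of why such shading parameters are measurable, the horizontal shadow extent of a free-standing building on flat terrain follows from the solar altitude alone (a simplification; the CityGML analyses described above trace the full 3D geometry instead):

```python
import math

def shadow_length(building_height_m, sun_altitude_deg):
    """Horizontal extent of the shadow cast by a building of the given
    height for a given solar altitude angle (flat-terrain sketch)."""
    if sun_altitude_deg <= 0:
        return float("inf")       # sun at or below the horizon
    return building_height_m / math.tan(math.radians(sun_altitude_deg))
```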

  11. Design Principles for Rapid Prototyping Forces Sensors using 3D Printing.

    PubMed

    Kesner, Samuel B; Howe, Robert D

    2011-07-21

    Force sensors provide critical information for robot manipulators, manufacturing processes, and haptic interfaces. Commercial force sensors, however, are generally not adapted to specific system requirements, resulting in sensors with excess size, cost, and fragility. To overcome these issues, 3D printers can be used to create components for the quick and inexpensive development of force sensors. Limitations of this rapid prototyping technology, however, require specialized design principles. In this paper, we discuss techniques for rapidly developing simple force sensors, including selecting and attaching metal flexures, using inexpensive and simple displacement transducers, and 3D printing features to aid in assembly. These design methods are illustrated through the design and fabrication of a miniature force sensor for the tip of a robotic catheter system. The resulting force sensor prototype can measure forces with an accuracy of as low as 2% of the 10 N measurement range. PMID:21874102

  12. Design Principles for Rapid Prototyping Forces Sensors using 3D Printing

    PubMed Central

    Kesner, Samuel B.; Howe, Robert D.

    2011-01-01

    Force sensors provide critical information for robot manipulators, manufacturing processes, and haptic interfaces. Commercial force sensors, however, are generally not adapted to specific system requirements, resulting in sensors with excess size, cost, and fragility. To overcome these issues, 3D printers can be used to create components for the quick and inexpensive development of force sensors. Limitations of this rapid prototyping technology, however, require specialized design principles. In this paper, we discuss techniques for rapidly developing simple force sensors, including selecting and attaching metal flexures, using inexpensive and simple displacement transducers, and 3D printing features to aid in assembly. These design methods are illustrated through the design and fabrication of a miniature force sensor for the tip of a robotic catheter system. The resulting force sensor prototype can measure forces with an accuracy of as low as 2% of the 10 N measurement range. PMID:21874102

  13. Using 3D visual tools with LiDAR for environmental outreach

    NASA Astrophysics Data System (ADS)

    Glenn, N. F.; Mannel, S.; Ehinger, S.; Moore, C.

    2009-12-01

    The project objective is to develop visualizations using light detection and ranging (LiDAR) data and other data sources to increase community understanding of remote sensing data for earth science. These data are visualized using Google Earth and other visualization methods. Final products are delivered to K-12, state, and federal agencies to share with their students and community constituents. Once our partner agencies were identified, we used a survey to better understand their technological abilities and use of visualization products. The final multimedia products include a visualization of LiDAR and well data for water quality mapping in a southeastern Idaho watershed; a tour of hydrologic points of interest in southeastern Idaho visited by thousands of people each year; and post-earthquake features near Borah Peak, Idaho. In addition to the customized multimedia materials, we developed tutorials to encourage our partners to use these tools with their own LiDAR and other scientific data.

  14. Beam test results of 3D silicon pixel sensors for future upgrades

    NASA Astrophysics Data System (ADS)

    Nellist, C.; Gligorova, A.; Huse, T.; Pacifico, N.; Sandaker, H.

    2013-12-01

    3D silicon has undergone an intensive beam test programme which has resulted in the successful qualification for the ATLAS Insertable B-Layer (IBL) upgrade project to be installed in 2013-2014. This paper presents selected results from this study with a focus on the final IBL test beam of 2012 where IBL prototype sensors were investigated. 3D devices were studied with 4 GeV positrons at DESY and 120 GeV pions at the SPS at CERN. Measurements include tracking efficiency, charge sharing, time over threshold and cluster size distributions as a function of incident angle for IBL 3D design sensors. Studies of 3D silicon sensors in an anti-proton beam test for the AEgIS experiment are also presented.

  15. 3D Reconstruction and Restoration Monitoring of Sculptural Artworks by a Multi-Sensor Framework

    PubMed Central

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2012-01-01

    Nowadays, optical sensors are used to digitize sculptural artworks by exploiting various contactless technologies. Cultural Heritage applications may concern 3D reconstructions of sculptural shapes distinguished by small details distributed over large surfaces. These applications require robust multi-view procedures based on aligning several high resolution 3D measurements. In this paper, the integration of a 3D structured light scanner and a stereo photogrammetric sensor is proposed with the aim of reliably reconstructing large free form artworks. The structured light scanner provides high resolution range maps captured from different views. The stereo photogrammetric sensor measures the spatial location of each view by tracking a marker frame integral to the optical scanner. This procedure allows the computation of the rotation-translation matrix to transpose the range maps from local view coordinate systems to a unique global reference system defined by the stereo photogrammetric sensor. The artwork reconstructions can be further augmented by referring metadata related to restoration processes. In this paper, a methodology has been developed to map metadata to 3D models by capturing spatial references using a passive stereo-photogrammetric sensor. The multi-sensor framework has been experienced through the 3D reconstruction of a Statue of Hope located at the English Cemetery in Florence. This sculptural artwork has been a severe test due to the non-cooperative environment and the complex shape features distributed over a large surface. PMID:23223079

  16. 3D reconstruction and restoration monitoring of sculptural artworks by a multi-sensor framework.

    PubMed

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2012-01-01

    Nowadays, optical sensors are used to digitize sculptural artworks by exploiting various contactless technologies. Cultural Heritage applications may concern 3D reconstructions of sculptural shapes distinguished by small details distributed over large surfaces. These applications require robust multi-view procedures based on aligning several high resolution 3D measurements. In this paper, the integration of a 3D structured light scanner and a stereo photogrammetric sensor is proposed with the aim of reliably reconstructing large free form artworks. The structured light scanner provides high resolution range maps captured from different views. The stereo photogrammetric sensor measures the spatial location of each view by tracking a marker frame integral to the optical scanner. This procedure allows the computation of the rotation-translation matrix to transpose the range maps from local view coordinate systems to a unique global reference system defined by the stereo photogrammetric sensor. The artwork reconstructions can be further augmented by referring metadata related to restoration processes. In this paper, a methodology has been developed to map metadata to 3D models by capturing spatial references using a passive stereo-photogrammetric sensor. The multi-sensor framework has been experienced through the 3D reconstruction of a Statue of Hope located at the English Cemetery in Florence. This sculptural artwork has been a severe test due to the non-cooperative environment and the complex shape features distributed over a large surface. PMID:23223079

  17. LiDAR Segmentation using Suitable Seed Points for 3D Building Extraction

    NASA Astrophysics Data System (ADS)

    Abdullah, S. M.; Awrangjeb, M.; Lu, G.

    2014-08-01

    Effective building detection and roof reconstruction are in high demand across the remote sensing research community. In this paper, we present a new automatic LiDAR point cloud segmentation method using suitable seed points for building detection and roof plane extraction. First, the LiDAR point cloud is separated into "ground" and "non-ground" points based on an analysis of the DEM with a height threshold. Each non-ground point is marked as coplanar or non-coplanar based on a coplanarity analysis. Starting from the maximum LiDAR point height and proceeding towards the minimum, all the LiDAR points at each height level are extracted and separated into several groups based on 2D distance. From each group, lines are extracted, and the coplanar point nearest to the midpoint of each line is taken as a seed point. This seed point and its neighbouring points are used to generate the plane equation. The plane is grown in a region-growing fashion until no new points can be added. A robust rule-based tree removal method applying four different rules is subsequently used to remove planar segments on trees. Finally, the boundary of each object is extracted from the segmented LiDAR point cloud. The method is evaluated on six different data sets consisting of hilly and densely vegetated areas. The experimental results indicate that the proposed method offers high building detection and roof plane extraction rates compared to a recently proposed method.
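
    The seed-and-grow step described above can be sketched as follows: fit a plane to the seed's neighbourhood, then repeatedly add points that are close to both the plane and the current region. The thresholds and brute-force neighbour search below are illustrative simplifications, not the paper's implementation:

```python
import numpy as np

def grow_plane(points, seed_idx, dist_thresh=0.1, radius=1.0):
    """Grow a planar segment from a seed point (simplified sketch of
    seed-based region growing).

    points : (N, 3) array of LiDAR points
    Returns the sorted indices of points accepted into the plane."""
    # Fit an initial plane to the seed and its neighbours
    d = np.linalg.norm(points - points[seed_idx], axis=1)
    nbrs = np.where(d < radius)[0]
    centroid = points[nbrs].mean(axis=0)
    # Plane normal = singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(points[nbrs] - centroid)
    normal = vt[-1]
    region = set(nbrs.tolist())
    grew = True
    while grew:          # add any point close to both the plane and the region
        grew = False
        for i in range(len(points)):
            if i in region:
                continue
            on_plane = abs(np.dot(points[i] - centroid, normal)) < dist_thresh
            near = np.min(np.linalg.norm(points[list(region)] - points[i],
                                         axis=1)) < radius
            if on_plane and near:
                region.add(i)
                grew = True
    return sorted(int(i) for i in region)
```

    A production version would use a spatial index (k-d tree) for the neighbour queries and refit the plane as the region grows.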

  18. Uas Topographic Mapping with Velodyne LiDAR Sensor

    NASA Astrophysics Data System (ADS)

    Jozkow, G.; Toth, C.; Grejner-Brzezinska, D.

    2016-06-01

    Unmanned Aerial System (UAS) technology is nowadays widely used in small-area topographic mapping due to low costs and the good quality of the derived products. Since the cameras typically used with UAS have some limitations, e.g. they cannot penetrate vegetation, LiDAR sensors are increasingly getting attention in UAS mapping. Sensor development has reached the point where cost and size suit the UAS platform, though LiDAR UAS is still an emerging technology. One issue with using LiDAR sensors on UAS is the limited performance of the navigation sensors used on UAS platforms. Therefore, various hardware and software solutions are being investigated to increase the quality of UAS LiDAR point clouds. This work analyses several aspects of UAS LiDAR point cloud generation performance based on UAS flights conducted with a Velodyne laser scanner and cameras. Attention was primarily paid to the trajectory reconstruction performance that is essential for accurate point cloud georeferencing. Since the navigation sensors, especially Inertial Measurement Units (IMUs), may not offer sufficient performance, the estimated camera poses can increase the robustness of the estimated trajectory and, subsequently, the accuracy of the point cloud. The accuracy of the final UAS LiDAR point cloud was evaluated on the basis of the generated DSM, including a comparison with point clouds obtained from dense image matching. The results showed the need for further investigation of the MEMS IMU sensors used for UAS trajectory reconstruction. The accuracy of the UAS LiDAR point cloud, though lower than that of the image-based point cloud, may still be sufficient for certain mapping applications where optical imagery is not useful.
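
    Direct georeferencing ties each LiDAR return to the navigation solution: points are rotated from the sensor frame into the body frame (boresight), offset by the lever arm, then transformed by the INS/GNSS pose. A minimal sketch, with the calibration values as placeholders to be determined for a real system:

```python
import numpy as np

def georeference(scan_points, R_body_to_world, t_world,
                 R_sensor_to_body=np.eye(3), lever_arm=np.zeros(3)):
    """Transform LiDAR points from the sensor frame to the mapping frame
    using the navigation solution (minimal direct-georeferencing sketch).

    scan_points      : (N, 3) points in the scanner frame
    R_body_to_world  : attitude matrix from the INS
    t_world          : platform position from GNSS
    R_sensor_to_body, lever_arm : boresight/lever-arm calibration
    """
    body = scan_points @ R_sensor_to_body.T + lever_arm   # sensor -> body
    return body @ R_body_to_world.T + t_world             # body -> world
```

    Errors in `R_body_to_world` from a low-grade MEMS IMU propagate directly into the point cloud, which is why the abstract emphasises trajectory reconstruction quality.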

  19. Multi Sensor Data Integration for AN Accurate 3d Model Generation

    NASA Astrophysics Data System (ADS)

    Chhatkuli, S.; Satoh, T.; Tachibana, K.

    2015-05-01

    The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create 3D city models. Aerial imagery produces overall decent 3D city models and is generally suited to generating 3D models of building roofs and some non-complex terrain. However, 3D models automatically generated from aerial imagery generally lack accuracy for roads under bridges, details under tree canopy, isolated trees, etc. Moreover, they often suffer from undulated road surfaces, non-conforming building shapes, and the loss of minute details like street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each source's weaknesses and helps to create a very detailed 3D model with better accuracy. Moreover, additional details like isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, i.e. the final 3D model, was generally noise-free and without unnecessary details.

  20. Volumetric LiDAR scanning of a wind turbine wake and comparison with a 3D analytical wake model

    NASA Astrophysics Data System (ADS)

    Carbajo Fuertes, Fernando; Porté-Agel, Fernando

    2016-04-01

    A correct estimation of the future power production is of capital importance whenever the feasibility of a future wind farm is being studied. This power estimation relies mostly on three aspects: (1) a reliable measurement of the wind resource in the area, (2) a well-established power curve for the future wind turbines and (3) an accurate characterization of the wake effects, the latter being arguably the most challenging due to the complexity of the phenomenon and the lack of extensive full-scale data sets that could be used to validate analytical or numerical models. The current project addresses the problem of obtaining a volumetric description of the full-scale wake of a 2 MW wind turbine in terms of velocity deficit and turbulence intensity using three scanning wind LiDARs and two sonic anemometers. The upstream flow conditions are characterized by one scanning LiDAR and two sonic anemometers, which are used to calculate incoming vertical profiles of horizontal wind speed, wind direction and an approximation of turbulence intensity, as well as the thermal stability of the atmospheric boundary layer. The wake is characterized by two scanning LiDARs working simultaneously and pointing downstream from the base of the wind turbine. The direct LiDAR measurements of radial wind speed can be corrected using the upstream conditions to provide good estimates of the horizontal wind speed at any point downstream of the wind turbine. All these data combined allow the volumetric reconstruction of the wake in terms of velocity deficit as well as turbulence intensity. Finally, the predictions of a 3D analytical model [1] are compared to the 3D LiDAR measurements of the wind turbine wake. The model is derived by applying the laws of conservation of mass and momentum and assuming a Gaussian distribution for the velocity deficit in the wake. This model has already been validated using high-resolution wind-tunnel measurements.
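
    A Gaussian wake model of the kind referenced combines a mass/momentum-derived maximum deficit with a Gaussian radial profile whose width grows linearly downwind. A sketch in that spirit (the wake-growth parameters here are illustrative defaults, not values from the study):

```python
import math

def gaussian_wake_deficit(x, r, d_rotor, ct, k_star=0.03, eps=0.2):
    """Normalized velocity deficit DeltaU/U_inf at downwind distance x and
    radial distance r from the wake centre, for a Gaussian wake model in
    the spirit of the cited reference.

    ct     : turbine thrust coefficient
    k_star : wake-growth rate (illustrative)
    eps    : initial wake width as a fraction of rotor diameter (illustrative)
    """
    sigma = (k_star * x / d_rotor + eps) * d_rotor      # wake width
    c = 1.0 - math.sqrt(1.0 - ct / (8.0 * (sigma / d_rotor) ** 2))
    return c * math.exp(-r ** 2 / (2.0 * sigma ** 2))   # max deficit * Gaussian
```

    The deficit decays downwind as the wake widens, which is the behaviour the volumetric LiDAR reconstruction is compared against.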

  1. Three-dimensional (3D) GIS-based coastline change analysis and display using LIDAR series data

    NASA Astrophysics Data System (ADS)

    Zhou, G.

    This paper presents a method to visualize and analyze topography and topographic change in a coastline area. The study area, Assateague Island National Seashore (AINS), is located along a 37-mile stretch of Assateague Island National Seashore in Eastern Shore, VA. DEM data sets from 1996 through 2000 for various time intervals, e.g., year-to-year, season-to-season, date-to-date, and the four-year span (1996-2000), were created. The spatial patterns and volumetric amounts of erosion and deposition in each part were calculated on a cell-by-cell basis. A 3D dynamic display system for visualizing dynamic coastal landforms has been developed using ArcView Avenue. The system comprises five functional modules: Dynamic Display, Analysis, Chart Analysis, Output, and Help. The Display module includes five types of displays: Shoreline Display, Shore Topographic Profile, Shore Erosion Display, Surface TIN Display, and 3D Scene Display. Visualized data include rectified and co-registered multispectral Landsat imagery and NOAA/NASA ATM LIDAR data. The system is demonstrated using multitemporal satellite and LIDAR data to display changes on the Assateague Island National Seashore, Virginia. The results demonstrate that a further understanding of the complex morphological changes, natural or human-induced, on barrier islands is required.
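
    The cell-by-cell erosion/deposition accounting can be sketched by differencing two co-registered DEM grids and summing positive and negative changes separately (a simplified illustration of the computation described above):

```python
import numpy as np

def erosion_deposition(dem_old, dem_new, cell_area_m2):
    """Cell-by-cell volumetric change between two co-registered DEMs.
    Returns (erosion_volume, deposition_volume) in cubic metres."""
    dz = np.asarray(dem_new, float) - np.asarray(dem_old, float)
    deposition = dz[dz > 0].sum() * cell_area_m2   # cells that gained elevation
    erosion = -dz[dz < 0].sum() * cell_area_m2     # cells that lost elevation
    return erosion, deposition
```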

  2. Automatic reconstruction of 3D urban landscape by computing connected regions and assigning them an average altitude from LiDAR point cloud image

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2014-10-01

    The demand for 3D city modeling has been increasing in many applications, such as urban planning, computer gaming with realistic city environments, car navigation systems showing 3D city maps, and virtual city tourism inviting future visitors to a virtual city walkthrough. We propose a simple method for reconstructing a 3D urban landscape from airborne LiDAR point cloud data. The automatic reconstruction of the 3D urban landscape is implemented by integrating all connected regions, which are extracted and extruded from altitude mask images. These mask images are generated from the gray-scale LiDAR image by altitude threshold ranges. In this study we successfully demonstrated the proposed method on a Kanazawa city center scene using airborne LiDAR point cloud data.
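
    The mask-and-extrude step can be sketched as thresholding the gray-scale height image into one altitude range and labelling its connected regions; each labelled region would then be extruded as a prism at its average altitude. A simplified, 4-connected illustration:

```python
import numpy as np

def altitude_regions(height_img, z_min, z_max):
    """Extract 4-connected regions from a gray-scale height image whose
    values fall inside one altitude threshold range.
    Returns (label image, number of regions)."""
    mask = (np.asarray(height_img) >= z_min) & (np.asarray(height_img) < z_max)
    labels = np.zeros(mask.shape, int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1                      # flood-fill a new region
        stack = [start]
        while stack:
            r, c = stack.pop()
            if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
                continue
            if not mask[r, c] or labels[r, c]:
                continue
            labels[r, c] = current
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return labels, current
```

    Running this for each altitude range and stacking the extruded regions yields the block-model landscape described above.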

  3. 3D Wind Reconstruction and Turbulence Estimation in the Boundary Layer from Doppler Lidar Measurements using Particle Method

    NASA Astrophysics Data System (ADS)

    Rottner, L.; Baehr, C.

    2014-12-01

    Turbulent phenomena in the atmospheric boundary layer (ABL) are characterized by small spatial and temporal scales, which make them difficult to observe and to model. New remote sensing instruments, like Doppler Lidar, give access to fine, high-frequency observations of wind in the ABL. This study proposes using a method of nonlinear estimation based on these observations to reconstruct 3D wind in a hemispheric volume and to estimate atmospheric turbulence parameters. The wind observations are associated with particle systems driven by a local turbulence model. The particles have both fluid and stochastic properties, so spatial averages and covariances may be deduced from them. Among the innovative aspects, we point out the absence of the common hypothesis of stationary-ergodic turbulence and the non-use of a particle model closure hypothesis. Every time observations are available, the 3D wind is reconstructed and turbulence parameters such as turbulent kinetic energy, dissipation rate, and Turbulence Intensity (TI) are provided. This study presents results obtained using real wind measurements provided by a five-line-of-sight Lidar. Compared with classical methods (e.g., eddy covariance), our technique yields equivalent long-time results; moreover, it provides finer, real-time turbulence estimates. To assess this new method, we compute TI independently from different observation types. First, anemometer data are used as the TI reference; then raw and filtered Lidar observations are compared. The TI obtained from raw data is significantly higher than the reference, whereas the TI estimated with the new algorithm is of the same order. In this study we have presented a new class of algorithms to reconstruct local random media. It offers a new way to understand turbulence in the ABL, in both stable and convective conditions. Later, it could be used to refine turbulence parametrization in meteorological meso-scale models.
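
    The TI comparison described above can be sketched numerically. A minimal example, assuming TI is taken as the ratio of the standard deviation of horizontal wind speed to its mean (one common convention; the paper's exact estimator may differ):

```python
import numpy as np

def turbulence_intensity(u, v):
    """Turbulence Intensity (TI) as the ratio of the standard deviation
    of horizontal wind speed to its mean -- a common convention; the
    paper's exact estimator may differ."""
    speed = np.hypot(u, v)              # horizontal wind speed per sample
    return float(speed.std() / speed.mean())

# Example: a mean 8 m/s flow with ~10% random fluctuations
rng = np.random.default_rng(0)
u = 8.0 + rng.normal(0.0, 0.8, 10_000)
v = rng.normal(0.0, 0.8, 10_000)
ti = turbulence_intensity(u, v)        # roughly 0.1 for these parameters
```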

  4. Doppler Lidar Descent Sensor for Planetary Landing

    NASA Astrophysics Data System (ADS)

    Amzajerdian, F.; Pierrottet, D. F.; Petway, L. B.; Hines, G. D.; Barnes, B. W.

    2012-06-01

    Future robotic and manned missions to Mars demand accurate knowledge of ground velocity and altitude to ensure soft landing at the designated landing location. To meet this requirement, a prototype Doppler lidar has been developed and demonstrated.

  5. A simple, low-cost conductive composite material for 3D printing of electronic sensors.

    PubMed

    Leigh, Simon J; Bradley, Robert J; Purssell, Christopher P; Billson, Duncan R; Hutchins, David A

    2012-01-01

    3D printing technology can produce complex objects directly from computer-aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes ('rapid prototyping') before production. In recent years, however, there has been a move to adopt the technology as a full-scale manufacturing solution. The advent of low-cost, desktop 3D printers such as the RepRap and Fab@Home has meant a wider user base is now able to access desktop manufacturing platforms, enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present the formulation of a simple conductive thermoplastic composite we term 'carbomorph' and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices, along with printed objects with embedded sensing capability. This advance in low-cost 3D printing will offer a new paradigm in the 3D printing field, with printed sensors and electronics embedded inside 3D printed objects in a single build process, without requiring complex or expensive materials incorporating additives such as carbon nanotubes. PMID:23185319

  6. A Simple, Low-Cost Conductive Composite Material for 3D Printing of Electronic Sensors

    PubMed Central

    Leigh, Simon J.; Bradley, Robert J.; Purssell, Christopher P.; Billson, Duncan R.; Hutchins, David A.

    2012-01-01

    3D printing technology can produce complex objects directly from computer-aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes (‘rapid prototyping’) before production. In recent years, however, there has been a move to adopt the technology as a full-scale manufacturing solution. The advent of low-cost, desktop 3D printers such as the RepRap and Fab@Home has meant a wider user base is now able to access desktop manufacturing platforms, enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present the formulation of a simple conductive thermoplastic composite we term ‘carbomorph’ and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices, along with printed objects with embedded sensing capability. This advance in low-cost 3D printing will offer a new paradigm in the 3D printing field, with printed sensors and electronics embedded inside 3D printed objects in a single build process, without requiring complex or expensive materials incorporating additives such as carbon nanotubes. PMID:23185319

  7. Airborne Coherent Lidar for Advanced In-Flight Measurements (ACLAIM) Flight Testing of the Lidar Sensor

    NASA Technical Reports Server (NTRS)

    Soreide, David C.; Bogue, Rodney K.; Ehernberger, L. J.; Hannon, Stephen M.; Bowdle, David A.

    2000-01-01

    The purpose of the ACLAIM program is ultimately to establish the viability of light detection and ranging (lidar) as a forward-looking sensor for turbulence. The goals of this flight test are to: 1) demonstrate that the ACLAIM lidar system operates reliably in a flight test environment, 2) measure the performance of the lidar as a function of the aerosol backscatter coefficient (beta), 3) use the lidar system to measure atmospheric turbulence and compare these measurements to onboard gust measurements, and 4) make measurements of the aerosol backscatter coefficient, its probability distribution and spatial distribution. The scope of this paper is to briefly describe the ACLAIM system and present examples of ACLAIM operation in flight, including comparisons with independent measurements of wind gusts, gust-induced normal acceleration, and the derived eddy dissipation rate.

  8. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars.

    PubMed

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629

  9. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
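
    The quaternion-based complementary filter in the abstract blends a high-passed gyro integration with a low-passed accelerometer reference. A minimal one-dimensional sketch of that blending idea (a scalar angle stands in for the quaternion state; `alpha` and the signal values are illustrative assumptions):

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """1-D complementary filter: high-pass the integrated gyro rate,
    low-pass the accelerometer tilt angle. A scalar stand-in for the
    quaternion filter described in the paper."""
    angle = accel_angles[0]             # initialize from the absolute reference
    out = []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        out.append(angle)
    return out

# Constant 10 deg/s rotation; the accelerometer reference agrees exactly here
dt = 0.01
n = 200
gyro = [10.0] * n                       # deg/s
accel = [10.0 * dt * i for i in range(n)]  # deg
est = complementary_filter(gyro, accel, dt)
```

    In a real system the gyro term drifts and the accelerometer term is noisy; the filter keeps the short-term smoothness of the first and the long-term stability of the second.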

  10. 3D UHDTV contents production with 2/3-inch sensor cameras

    NASA Astrophysics Data System (ADS)

    Hamacher, Alaric; Pardeshi, Sunil; Whangboo, Taeg-Keun; Kim, Sang-Il; Lee, Seung-Hyun

    2015-03-01

    Most UHDTV content is presently created using single large CMOS sensor cameras as opposed to 2/3-inch small sensor cameras, which is the standard for HD content. The consequence is a technical incompatibility that affects not only the lenses and accessories of these cameras, but also the content creation process in 2D and 3D. While UHDTV is generally acclaimed for its superior image quality, the large sensors have introduced new constraints in the filming process. The camera sizes and lens dimensions have also introduced new obstacles for their use in 3D UHDTV production. The recent availability of UHDTV broadcast cameras with traditional 2/3-inch sensors can improve the transition towards UHDTV content creation. The following article will evaluate differences between the large-sensor UHDTV cameras and the 2/3-inch 3 CMOS solution and address 3D-specific considerations, such as possible artifacts like chromatic aberration and diffraction, which can occur when mixing HD and UHD equipment. The article will further present a workflow with solutions for shooting 3D UHDTV content on the basis of the Grass Valley LDX4K compact camera, which is the first available UHDTV camera with 2/3-inch UHDTV broadcast technology.
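
    The diffraction concern mentioned above can be checked with a back-of-envelope calculation: on a 2/3-inch UHD sensor the pixel pitch is smaller than the Airy-disk diameter at moderate apertures. A sketch, assuming a nominal 8.8 mm active width for the 2/3-inch format and green light at f/5.6:

```python
# Pixel pitch vs. Airy-disk diameter (2.44 * lambda * N) for a
# 2/3-inch UHD sensor. The 8.8 mm active width is a nominal assumption.
sensor_width_um = 8800.0
pixels_across = 3840                        # UHD horizontal resolution
pitch_um = sensor_width_um / pixels_across  # ~2.3 um per pixel
airy_um = 2.44 * 0.55 * 5.6                 # 550 nm light at f/5.6: ~7.5 um
diffraction_limited = airy_um > pitch_um    # True: the optics, not the
                                            # pixel grid, limit resolution
```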

  11. How integrating 3D LiDAR data in the dike surveillance protocol: The French case

    NASA Astrophysics Data System (ADS)

    Bretar, F.; Mériaux, P.; Fauchard, C.

    2012-04-01

    carried out. A LiDAR system is able to acquire data on a dike structure of up to 80 km per day, which makes this technique valuable in emergency situations as well. It provides additional valuable products, such as information on dike slopes and crests or their near environment (river banks, etc.). Moreover, in the presence of vegetation, LiDAR data make it possible to study structures or defects hidden from images, like the erosion of riverbanks under forest vegetation. The ability to study the vegetation itself is also of high importance: the development of woody vegetation near or on the dike is a major risk factor. Surface singularities are often signs of disorder, or suspected disorder, in the dike itself: for example, a subsidence or a sinkhole on a crest may result from internal erosion collapse. Finally, high-resolution topographic data contribute to building a specific geomechanical model of the dike that, after incorporating data provided by geophysical and geotechnical surveys, is integrated in the calculations of the structure's stability. Integrating the regular use of LiDAR data in the dike surveillance protocol is not yet operational in France. However, the high number of French stakeholders at the national level (on average, there is one stakeholder for only 8-9 km of dike!) and the real added value of LiDAR data make a spatial data infrastructure valuable (web services for processing the data, consulting and filling the database in the field when performing the local diagnosis).

  12. 3D Scan of Ornamental Column (huabiao) Using Terrestrial LiDAR and Hand-held Imager

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Wang, C.; Xi, X.

    2015-08-01

    In ancient China, a Huabiao was a type of ornamental column used to decorate important buildings. We carried out a 3D scan of a Huabiao located at Peking University, China. This Huabiao was built no later than 1742. It is carved from white marble and stands 8 meters in height; clouds and various postures of dragons are carved on its body. Two instruments were used to acquire the point cloud of this Huabiao: a terrestrial LiDAR (Riegl VZ-1000) and a hand-held imager (Mantis Vision F5). In this paper, the details of the experiment are described, including the differences between these two instruments, such as working principle, spatial resolution, accuracy, instrument dimensions and workflow. The point clouds obtained by the two instruments are compared, and the registered point cloud of the Huabiao is also presented. These results should be of interest and helpful to the research communities of archaeology and heritage.

  13. Incorporation of 3-D Scanning Lidar Data into Google Earth for Real-time Air Pollution Observation

    NASA Astrophysics Data System (ADS)

    Chiang, C.; Nee, J.; Das, S.; Sun, S.; Hsu, Y.; Chiang, H.; Chen, S.; Lin, P.; Chu, J.; Su, C.; Lee, W.; Su, L.; Chen, C.

    2011-12-01

    A 3-D Differential Absorption Scanning Lidar (DIASL) system has been designed to be small and lightweight, suitable for installation in various vehicles and places for monitoring air pollutants, and to display detailed, real-time temporal and spatial variability of trace gases via Google Earth. The fast scanning techniques and visual information can rapidly identify the locations and sources of polluted gases and assess the most affected areas. This is helpful for the Environmental Protection Agency (EPA) to protect people's health and abate air pollution as quickly as possible. The distributions of atmospheric pollutants and their relationship with local meteorological parameters measured with ground-based instruments will also be discussed. Details will be presented in the upcoming symposium.

  14. Characterizing the influence of surface roughness and inclination on 3D vision sensor performance

    NASA Astrophysics Data System (ADS)

    Hodgson, John R.; Kinnell, Peter; Justham, Laura; Jackson, Michael R.

    2015-12-01

    This paper reports a methodology to evaluate the performance of 3D scanners, focusing on the influence of surface roughness and inclination on the number of acquired data points and measurement noise. Point clouds were captured of samples mounted on a robotic pan-tilt stage using an Ensenso active stereo 3D scanner. The samples have isotropic texture and range in surface roughness (Ra) from 0.09 to 0.46 μm. By extracting the point cloud quality indicators, point density and standard deviation, at a multitude of inclinations, maps of scanner performance are created. These maps highlight the performance envelopes of the sensor, the aim being to predict and compare scanner performance on real-world surfaces, rather than idealistic artifacts. The results highlight the need to characterize 3D vision sensors by their measurement limits as well as best-case performance, determined either by theoretical calculation or measurements in ideal circumstances.
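
    The two point-cloud quality indicators used above, point density and standard deviation, can be sketched as follows (the plane-fit noise proxy and all names are our assumptions, not the paper's exact definitions):

```python
import numpy as np

def cloud_quality(points, expected_points):
    """Point-cloud quality indicators in the paper's sense: point
    density (fraction of expected returns actually acquired) and
    measurement noise (std of residuals from a best-fit plane)."""
    density = len(points) / expected_points
    centered = points - points.mean(axis=0)
    # The singular vector for the smallest singular value of the
    # centered cloud is the best-fit plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    residuals = centered @ vt[-1]
    return density, float(residuals.std())

# Synthetic scan of a 10 cm flat patch with 0.5 mm measurement noise,
# where half of the expected returns were dropped
rng = np.random.default_rng(1)
xy = rng.uniform(0, 0.1, size=(5000, 2))
z = 0.0005 * rng.standard_normal(5000)
pts = np.column_stack([xy, z])
density, noise = cloud_quality(pts, expected_points=10000)
```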

  15. Toward 3D Reconstruction of Outdoor Scenes Using an MMW Radar and a Monocular Vision Sensor

    PubMed Central

    El Natour, Ghina; Ait-Aider, Omar; Rouveure, Raphael; Berry, François; Faure, Patrice

    2015-01-01

    In this paper, we introduce a geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera. We rely on the complementarity of these two sensors: the robustness to environmental conditions and depth detection ability of the radar, on the one hand, and the high spatial resolution of the vision sensor, on the other. Firstly, geometric modeling of each sensor and of the entire system is presented. Secondly, we address the global calibration problem, which consists of finding the exact transformation between the sensors’ coordinate systems. Two implementation methods are proposed and compared, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Unlike existing methods, no special configuration of the 3D points is required for calibration. This makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple, yet robust 3D reconstruction method based on the sensors’ geometry. This method enables one to reconstruct observed features in 3D from a single acquisition (static sensor), a condition not always met in the state of the art for outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data. PMID:26473874

  16. 3D-FBK Pixel Sensors: Recent Beam Tests Results with Irradiated Devices

    SciTech Connect

    Micelli, A.; Helle, K.; Sandaker, H.; Stugu, B.; Barbero, M.; Hugging, F.; Karagounis, M.; Kostyukhin, V.; Kruger, H.; Tsung, J.W.; Wermes, N.; Capua, M.; Fazio, S.; Mastroberardino, A.; Susinno, G.; Gallrapp, C.; Di Girolamo, B.; Dobos, D.; La Rosa, A.; Pernegger, H.; Roe, S.; et al.

    2012-04-30

    The Pixel Detector is the innermost part of the ATLAS experiment tracking device at the Large Hadron Collider, and plays a key role in the reconstruction of the primary vertices from the collisions and secondary vertices produced by short-lived particles. To cope with the high level of radiation produced during collider operation, it is planned to add a fourth layer of sensors (the Insertable B-Layer, or IBL) to the three layers of silicon pixel sensors that currently constitute the Pixel Detector. 3D silicon sensors are one of the technologies under study for the IBL. 3D silicon technology is an innovative combination of very-large-scale integration and Micro-Electro-Mechanical Systems, in which electrodes are fabricated inside the silicon bulk instead of being implanted on the wafer surfaces. 3D sensors, with electrodes fully or partially penetrating the silicon substrate, are currently fabricated at different processing facilities in Europe and the USA. This paper reports on the June 2010 beam test results for irradiated 3D devices produced at FBK (Trento, Italy). The performance of these devices, all bump-bonded with the ATLAS pixel FE-I3 read-out chip, is compared to that observed before irradiation in a previous beam test.

  17. A volumetric sensor for real-time 3D mapping and robot navigation

    NASA Astrophysics Data System (ADS)

    Fournier, Jonathan; Ricard, Benoit; Laurendeau, Denis

    2006-05-01

    The use of robots for (semi-) autonomous operations in complex terrains such as urban environments poses difficult mobility, mapping, and perception challenges. To be able to work efficiently, a robot should be provided with sensors and software such that it can perceive and analyze the world in 3D. Real-time 3D sensing and perception in this operational context are paramount. To address these challenges, DRDC Valcartier has developed over the past years a compact sensor that combines a wide baseline stereo camera and a laser scanner with a full 360 degree azimuth and 55 degree elevation field of view allowing the robot to view and manage overhang obstacles as well as obstacles at ground level. Sensing in 3D is common but to efficiently navigate and work in complex terrain, the robot should also perceive, decide and act in three dimensions. Therefore, 3D information should be preserved and exploited in all steps of the process. To achieve this, we use a multiresolution octree to store the acquired data, allowing mapping of large environments while keeping the representation compact and memory efficient. Ray tracing is used to build and update the 3D occupancy model. This model is used, via a temporary 2.5D map, for navigation, obstacle avoidance and efficient frontier-based exploration. This paper describes the volumetric sensor concept, describes its design features and presents an overview of the 3D software framework that allows 3D information persistency through all computation steps. Simulation and real-world experiments are presented at the end of the paper to demonstrate the key elements of our approach.
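
    The ray-tracing occupancy update described above can be sketched with a log-odds model. A flat dictionary keyed by voxel index stands in for the paper's multiresolution octree, and the update constants are illustrative:

```python
import numpy as np

def integrate_ray(grid, origin, endpoint, voxel=0.1,
                  l_free=-0.4, l_occ=0.85):
    """Log-odds occupancy update along one lidar ray: every voxel the
    beam traverses gets a 'free' update, the return voxel an
    'occupied' one. Constants and sampling scheme are illustrative."""
    origin = np.asarray(origin, float)
    endpoint = np.asarray(endpoint, float)
    end_key = tuple(np.floor(endpoint / voxel).astype(int))
    length = np.linalg.norm(endpoint - origin)
    steps = max(int(np.ceil(length / (0.5 * voxel))), 1)
    visited = set()
    for i in range(steps):
        p = origin + (endpoint - origin) * (i / steps)
        key = tuple(np.floor(p / voxel).astype(int))
        if key != end_key and key not in visited:
            visited.add(key)                         # one update per ray
            grid[key] = grid.get(key, 0.0) + l_free
    grid[end_key] = grid.get(end_key, 0.0) + l_occ

# One ray hitting an obstacle 1 m straight ahead
grid = {}
integrate_ray(grid, origin=(0.0, 0.0, 0.0), endpoint=(1.0, 0.0, 0.0))
occupied = [k for k, v in grid.items() if v > 0]
free = [k for k, v in grid.items() if v < 0]
```

    Repeating this for every return in a scan, and thresholding the accumulated log-odds, yields the occupancy model used for navigation and frontier-based exploration.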

  18. Virtual touch 3D interactive system for autostereoscopic display with embedded optical sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Pai; Wang, Guo-Zhen; Ma, Ming-Ching; Tung, Shang-Yu; Huang, Shu-Yi; Tseng, Hung-Wei; Kuo, Chung-Hong; Li, Chun-Huai

    2011-06-01

    The traditional 3D interactive system, which uses a CCD camera to capture images, is difficult to operate at near range for mobile applications. Therefore, a 3D interactive display with an embedded optical sensor was proposed. Based on the optical-sensor system, we propose four different methods to support different functions. The T-mark algorithm can obtain 5-axis information (x, y, z, θ, and φ) of an LED regardless of whether the LED is vertical or inclined to the panel, and however it is rotated. The sequential-mark algorithm and the color-filter-based algorithm can support multiple users. Finally, a bare-finger touch system with a sequential illuminator enables interaction with auto-stereoscopic images using a bare finger. Furthermore, the proposed methods were verified on a 4-inch panel with embedded optical sensors.

  19. 3D silicon sensors with variable electrode depth for radiation hard high resolution particle tracking

    NASA Astrophysics Data System (ADS)

    Da Vià, C.; Borri, M.; Dalla Betta, G.; Haughton, I.; Hasi, J.; Kenney, C.; Povoli, M.; Mendicino, R.

    2015-04-01

    3D sensors, with electrodes micro-processed inside the silicon bulk using Micro-Electro-Mechanical System (MEMS) technology, were industrialized in 2012 and were installed in the first detector upgrade at the LHC, the ATLAS IBL in 2014. They are the radiation hardest sensors ever made. A new idea is now being explored to enhance the three-dimensional nature of 3D sensors by processing collecting electrodes at different depths inside the silicon bulk. This technique uses the electric field strength to suppress the charge collection effectiveness of the regions outside the p-n electrodes' overlap. Evidence of this property is supported by test beam data of irradiated and non-irradiated devices bump-bonded with pixel readout electronics and simulations. Applications include High-Luminosity Tracking in the high multiplicity LHC forward regions. This paper will describe the technical advantages of this idea and the tracking application rationale.

  20. Retrieval of Vegetation Structural Parameters and 3-D Reconstruction of Forest Canopies Using Ground-Based Echidna® Lidar

    NASA Astrophysics Data System (ADS)

    Strahler, A. H.; Yao, T.; Zhao, F.; Yang, X.; Schaaf, C.; Woodcock, C. E.; Jupp, D. L.; Culvenor, D.; Newnham, G.; Lovell, J.

    2010-12-01

    A ground-based, scanning, near-infrared lidar, the Echidna® validation instrument (EVI), built by CSIRO Australia, retrieves structural parameters of forest stands rapidly and accurately, and by merging multiple scans into a single point cloud, the lidar also provides 3-D stand reconstructions. Echidna lidar technology scans with pulses of light at 1064 nm wavelength and digitizes the full return waveform sufficiently finely to recover and distinguish the differing shapes of return pulses as they are scattered by leaves, trunks, and branches. Deployments in New England in 2007 and the southern Sierra Nevada of California in 2008 tested the ability of the instrument to retrieve mean tree diameter, stem count density (stems/ha), basal area, and above-ground woody biomass from single scans at points beneath the forest canopy. Parameters retrieved from five scans located within six 1-ha stand sites matched manually-measured parameters with values of R2 = 0.94-0.99 in New England and 0.92-0.95 in the Sierra Nevada. Retrieved leaf area index (LAI) values were similar to those of LAI-2000 and hemispherical photography. In New England, an analysis of variance showed that EVI-retrieved values were not significantly different from other methods (power = 0.84 or higher). In the Sierra, R2 = 0.96 and 0.81 for hemispherical photos and LAI-2000, respectively. Foliage profiles, which measure leaf area with canopy height, showed distinctly different shapes for the stands, depending on species composition and age structure. New England stand heights, obtained from foliage profiles, were not significantly different (power = 0.91) from RH100 values observed by LVIS in 2003. Three-dimensional stand reconstruction identifies one or more “hits” along the pulse path coupled with the peak return of each hit expressed as apparent reflectance. Returns are classified as trunk, leaf, or ground returns based on the shape of the return pulse and its location. These data provide a point

  1. Automatic Extraction of Building Roof Planes from Airborne LIDAR Data Applying AN Extended 3d Randomized Hough Transform

    NASA Astrophysics Data System (ADS)

    Maltezos, Evangelos; Ioannidis, Charalabos

    2016-06-01

    This study aims to automatically extract building roof planes from airborne LIDAR data by applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps: detection of building points, plane detection and refinement. For the detection of the building points, the vegetated areas are first segmented from the scene content and the bare earth is extracted afterwards. The automatic plane detection of each building is performed by applying extensions of the RHT, associated with additional constraint criteria during the random selection of the three points aiming at optimum adaptation to the building rooftops, as well as a simple design of the accumulator that efficiently detects the prominent planes. The refinement of the plane detection is conducted based on the relationship between neighbouring planes, the locality of the point and the use of additional information. An indicative experimental comparison is carried out to verify the advantages of the extended RHT compared to the 3D Standard Hough Transform (SHT), and the sensitivity of the proposed extensions and accumulator design is examined in terms of quality and computational time compared to the default RHT. Further, a comparison between the extended RHT and RANSAC is carried out. The plane detection results illustrate the potential of the proposed extended RHT in terms of robustness and efficiency for several applications.
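
    The core of a 3D Randomized Hough Transform, sampling three points, forming their plane, and voting in a coarse accumulator, can be sketched as follows (bin sizes and the normal-vector parametrization are our choices, not the paper's extended design):

```python
import numpy as np

def rht_planes(points, iters=2000, n_bin=0.05, rho_bin=0.05, seed=0):
    """Minimal 3D Randomized Hough Transform: repeatedly sample three
    points, compute their plane (unit normal n, offset rho), and vote
    in a coarse accumulator. Returns the most-voted cell."""
    rng = np.random.default_rng(seed)
    acc = {}
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n /= norm
        if n[2] < 0:                      # pick a canonical hemisphere
            n = -n
        rho = float(n @ a)                # plane offset from the origin
        key = (int(round(n[0] / n_bin)), int(round(n[1] / n_bin)),
               int(round(rho / rho_bin)))
        acc[key] = acc.get(key, 0) + 1
    return max(acc, key=acc.get)

# Synthetic roof: noisy horizontal plane z = 2 plus random outliers
rng = np.random.default_rng(1)
roof = np.column_stack([rng.uniform(0, 10, (500, 2)),
                        2 + 0.01 * rng.standard_normal(500)])
outliers = rng.uniform(0, 10, (100, 3))
best = rht_planes(np.vstack([roof, outliers]))  # expect n ~ (0,0,1), rho ~ 2
```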

  2. 3-D periodic mesoporous nickel oxide for nonenzymatic uric acid sensors with improved sensitivity

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Cao, Yang; Chen, Yong; Zhou, Yang; Huang, Qingyou

    2015-12-01

    3-D periodic mesoporous nickel oxide (NiO) particles with crystalline walls have been synthesized through a microwave-assisted hard-template route using KIT-6 silica, and investigated as a nonenzymatic amperometric sensor for the detection of uric acid. The crystalline nickel oxide belongs to the Ia3d space group, and its structure was characterized by X-ray diffraction (XRD), N2 adsorption-desorption, and transmission electron microscopy (TEM). The analysis results showed that the microwave-assisted mesoporous NiO materials were more suitable as electrochemical sensors than traditional mesoporous NiO. Cyclic voltammetry (CV) revealed that the 3-D periodic NiO exhibited direct electrocatalytic activity for the oxidation of uric acid in sodium hydroxide solution. The enzymeless amperometric sensor detected uric acid with a detection limit of 0.005 μM (S/N = 3) over a wide linear range up to 0.374 mM and with a high sensitivity of 756.26 μA mM-1 cm-2; a possible reaction mechanism is also given in the paper.
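
    The reported detection limit at S/N = 3 follows the usual LOD = 3·σ/sensitivity relation; working backwards from the quoted figures gives the implied baseline noise (a sketch; electrode-area normalization is assumed):

```python
# Detection limit at S/N = 3: LOD = 3 * sigma_blank / sensitivity.
# Working backwards from the abstract's numbers gives the implied
# baseline current noise per unit electrode area.
sensitivity = 756.26                        # uA per mM per cm^2
lod_mM = 0.005e-3                           # 0.005 uM expressed in mM
sigma_blank = lod_mM * sensitivity / 3.0    # ~0.0013 uA/cm^2 of noise
```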

  3. Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    PubMed Central

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-01-01

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate. PMID:25375758

  4. Identifying Standing Dead Trees in Forest Areas Based on 3d Single Tree Detection from Full Waveform LIDAR Data

    NASA Astrophysics Data System (ADS)

    Yao, W.; Krzystek, P.; Heurich, M.

    2012-07-01

    In forest ecology, a snag refers to a standing, partly or completely dead tree, often missing its top or most of the smaller branches. The accurate estimation of live and dead biomass in forested ecosystems is important for studies of carbon dynamics, biodiversity, and forest management; therefore, an understanding of its availability and spatial distribution is required. So far, LiDAR remote sensing has been successfully used to assess live trees and their biomass, but studies focusing on dead trees are rare. This paper develops a methodology for retrieving individual dead trees in a mixed mountain forest using features derived from small-footprint airborne full-waveform LIDAR data. First, the 3D coordinates of the laser beam reflections and the pulse intensity and width are extracted by waveform decomposition. Secondly, 3D single trees are detected by an integrated approach that delineates both dominant tree crowns and small understory trees in the canopy height model (CHM), using the watershed algorithm followed by normalized-cuts segmentation applied to merged watershed areas. Thus, single trees are obtained as 3D point segments with waveform-specific features per point. The tree segments are then passed to a feature definition process to derive geometric and reflectional features at the single-tree level, e.g. volume and maximal diameter of the crown, mean intensity, gap fraction, etc. Finally, the spanned feature space for the tree segments is forwarded to a binary classifier using a support vector machine (SVM) in order to discriminate dead trees from living ones. The methodology is applied to datasets captured with the Riegl LMS-Q560 laser scanner at a point density of 25 points/m2 in the Bavarian Forest National Park, Germany, under leaf-on and leaf-off conditions, for Norway spruces, European beeches and Sycamore maples. The classification experiments lead in the best case to an overall accuracy of 73% in a leaf
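
    The final SVM classification step can be sketched with a toy linear SVM trained by sub-gradient descent on simulated segment features (the paper's real features and kernel choice may differ; the data here are synthetic):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    """Toy linear SVM via sub-gradient descent on the hinge loss,
    a stand-in for the SVM classifier used in the paper."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                      # inside margin: push out
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                               # correct: only regularize
                w = (1 - lr * lam) * w
    return w, b

# Simulated segments: dead trees (y = +1) have lower mean intensity and
# higher gap fraction than living trees (y = -1)
rng = np.random.default_rng(2)
dead = rng.normal([0.3, 0.7], 0.08, (100, 2))
live = rng.normal([0.7, 0.3], 0.08, (100, 2))
X = np.vstack([dead, live])
y = np.hstack([np.ones(100), -np.ones(100)])
w, b = train_linear_svm(X, y)
accuracy = (np.sign(X @ w + b) == y).mean()
```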

  5. NASA DC-8 Airborne Scanning Lidar Sensor Development

    NASA Technical Reports Server (NTRS)

    Nielsen, Norman B.; Uthe, Edward E.; Kaiser, Robert D.; Tucker, Michael A.; Baloun, James E.; Gorordo, Javier G.

    1996-01-01

    The NASA DC-8 aircraft is used to support a variety of in-situ and remote sensors for conducting environmental measurements over global regions. As part of the atmospheric effects of aviation program (AEAP) the DC-8 is scheduled to conduct atmospheric aerosol and gas chemistry and radiation measurements of subsonic aircraft contrails and cirrus clouds. A scanning lidar system is being developed for installation on the DC-8 to support and extend the domain of the AEAP measurements. Design and objectives of the DC-8 scanning lidar are presented.

  6. NASA DC-8 airborne scanning LIDAR sensor development

    SciTech Connect

    Nielsen, N.B.; Uthe, E.E.; Kaiser, R.D.

    1996-11-01

    The NASA DC-8 aircraft is used to support a variety of in-situ and remote sensors for conducting environmental measurements over global regions. As part of the atmospheric effects of aviation program (AEAP) the DC-8 is scheduled to conduct atmospheric aerosol and gas chemistry and radiation measurements of subsonic aircraft contrails and cirrus clouds. A scanning lidar system is being developed for installation on the DC-8 to support and extend the domain of the AEAP measurements. Design and objectives of the DC-8 scanning lidar are presented. 4 figs.

  7. Second generation airborne 3D imaging lidars based on photon counting

    NASA Astrophysics Data System (ADS)

    Degnan, John J.; Wells, David; Machan, Roman; Leventhal, Edward

    2007-09-01

    The first successful photon-counting airborne laser altimeter was demonstrated in 2001 under NASA's Instrument Incubator Program (IIP). This "micro-altimeter" flew at altitudes up to 22,000 ft (6.7 km) and, using single photon returns in daylight, successfully recorded high resolution images of the underlying topography including soil, low-lying vegetation, tree canopies, water surfaces, man-made structures, ocean waves, and moving vehicles. The lidar, which operated at a wavelength of 532 nm near the peak of the solar irradiance curve, was also able to see the underlying terrain through trees and thick atmospheric haze and performed shallow water bathymetry to depths of a few meters over the Atlantic Ocean and Assawoman Bay off the Virginia coast. Sigma Space Corporation has recently developed second generation systems suitable for use in a small aircraft or mini UAV. A frequency-doubled Nd:YAG microchip laser generates few microjoule, subnanosecond pulses at fire rates up to 22 kHz. A Diffractive Optical Element (DOE) breaks the transmit beam into a 10x10 array of quasi-uniform spots which are imaged by the receive optics onto individual anodes of a high efficiency 10x10 GaAsP segmented anode microchannel plate photomultiplier. Each anode is input to one channel of a 100 channel, multistop timer demonstrated to have a 100 picosecond timing (1.5 cm range) resolution and an event recovery time less than 2 nsec. The pattern and frequency of a dual wedge optical scanner, synchronized to the laser fire rate, are tailored to provide contiguous coverage of a ground scene in a single overflight.
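    The quoted timer figures follow directly from two-way time-of-flight: a timing resolution dt maps to a range resolution of c·dt/2, since the pulse travels out and back.

```python
# Two-way time-of-flight: range resolution dR = c * dt / 2.
# The 100 ps multistop timer quoted above corresponds to 1.5 cm.
c = 299_792_458.0          # speed of light, m/s
dt = 100e-12               # timer resolution, s
dR = c * dt / 2
print(f"{dR * 100:.2f} cm")  # ~1.50 cm
```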

  8. A sensor skid for precise 3D modeling of production lines

    NASA Astrophysics Data System (ADS)

    Elseberg, J.; Borrmann, D.; Schauer, J.; Nüchter, A.; Koriath, D.; Rautenberg, U.

    2014-05-01

    Motivated by the increasing need for rapid characterization of environments in 3D, we designed and built a sensor skid that automates the work of an operator of terrestrial laser scanners. The system combines terrestrial laser scanning with kinematic laser scanning and uses a novel semi-rigid SLAM method. It enables us to digitize factory environments without the need to stop production. The acquired 3D point clouds are precise and suitable for detecting objects that collide with items moved along the production line.

  9. Tracking naturally occurring indoor features in 2-D and 3-D with lidar range/amplitude data

    SciTech Connect

    Adams, M.D.; Kerstens, A.

    1998-09-01

    Sensor-data processing for the interpretation of a mobile robot's indoor environment, and the manipulation of this data for reliable localization, are still some of the most important issues in robotics. This article presents algorithms that determine the true position of a mobile robot, based on real 2-D and 3-D optical range and intensity data. The authors start with the physics of the particular type of sensor used, so that the extraction of reliable and repeatable information (namely, edge coordinates) can be determined, taking into account the noise associated with each range sample and the possibility of optical multiple-path effects. Again, applying the physical model of the sensor, the estimated positions of the mobile robot and the uncertainty in these positions are determined. They demonstrate real experiments using 2-D and 3-D scan data taken in indoor environments. To update the robot's position reliably, the authors address the problem of matching the information recorded in a scan to, first, an a priori map, and second, to information recorded in previous scans, eliminating the need for an a priori map.

  10. Nodes Localization in 3D Wireless Sensor Networks Based on Multidimensional Scaling Algorithm

    PubMed Central

    2014-01-01

    In recent years, there has been huge advancement in wireless sensor computing technology. Today, the wireless sensor network (WSN) has become a key technology for different types of smart environments. Node localization in WSNs has arisen as a very challenging problem in the research community. Most WSN applications are not useful without a priori known node positions. Adding GPS receivers to each node is an expensive solution and inapplicable for indoor environments. In this paper, we implemented and evaluated an algorithm based on the multidimensional scaling (MDS) technique for three-dimensional (3D) node localization in WSNs, using an improved heuristic method for distance calculation. Through extensive simulations, we investigated our approach with regard to various network parameters. We compared the results from the simulations with other approaches for 3D WSN localization and showed that our approach outperforms other techniques in terms of accuracy.
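    The core of MDS-based localization can be sketched as follows: given a complete matrix of inter-node distances, classical MDS recovers relative 3D coordinates, up to rotation, reflection, and translation. The paper's improved heuristic for estimating the distances themselves is not reproduced; the pure-Python power-iteration eigensolver and the synthetic network below are illustrative only.

```python
import math, random

def classical_mds(D, dim=3, iters=2000):
    """Recover coordinates (up to rotation/reflection/translation) from a
    full distance matrix D via classical MDS. A pure-Python power iteration
    with deflation stands in for a proper eigensolver; fine for small nets."""
    n = len(D)
    # Double-centered Gram matrix: B = -1/2 * J * D^2 * J
    D2 = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(D2[i]) / n for i in range(n)]
    tot = sum(row) / n
    B = [[-0.5 * (D2[i][j] - row[i] - row[j] + tot) for j in range(n)]
         for i in range(n)]
    coords = [[0.0] * dim for _ in range(n)]
    for k in range(dim):
        v = [random.random() for _ in range(n)]
        lam = 0.0
        for _ in range(iters):            # power iteration for top eigenpair
            w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
            norm = math.sqrt(sum(x * x for x in w))
            v = [x / norm for x in w]
            lam = norm
        for i in range(n):
            coords[i][k] = math.sqrt(max(lam, 0.0)) * v[i]
        for i in range(n):                # deflate: remove found component
            for j in range(n):
                B[i][j] -= lam * v[i] * v[j]
    return coords

# Synthetic 8-node network in a 10 m cube; exact distances in, layout out.
random.seed(0)
pts = [[random.uniform(0, 10) for _ in range(3)] for _ in range(8)]
dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
D = [[dist(p, q) for q in pts] for p in pts]
est = classical_mds(D)
err = max(abs(dist(est[i], est[j]) - D[i][j])
          for i in range(8) for j in range(8))
print(f"max pairwise-distance error: {err:.2e}")
```

    With exact distances the recovered layout preserves every pairwise distance; with noisy range estimates (the realistic WSN case) the same code degrades gracefully, which is why refinement steps are usually added on top.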

  11. Angle extended linear MEMS scanning system for 3D laser vision sensor

    NASA Astrophysics Data System (ADS)

    Pang, Yajun; Zhang, Yinxin; Yang, Huaidong; Zhu, Pan; Gai, Ye; Zhao, Jian; Huang, Zhanhua

    2016-09-01

    A scanning system is often considered the most important part of a 3D laser vision sensor. In this paper, we propose a method for the optical design of an angle-extended linear MEMS scanning system, which features a large scanning angle, small beam divergence angle and small spot size for a 3D laser vision sensor. The design principle and theoretical formulas are derived rigorously. With the help of the software ZEMAX, a linear scanning optical system based on MEMS has been designed. Results show that the designed system can extend the scanning angle from ±8° to ±26.5° with a divergence angle smaller than 3.5 mrad, and the spot size is reduced by a factor of 4.545.

  12. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-01-01

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate. PMID:25375758

  13. A 3D Model of the Thermoelectric Microwave Power Sensor by MEMS Technology.

    PubMed

    Yi, Zhenxiang; Liao, Xiaoping

    2016-01-01

    In this paper, a novel 3D model is proposed to describe the temperature distribution of the thermoelectric microwave power sensor. In this 3D model, the heat flux density decreases from the upper surface to the lower surface of the GaAs substrate while it was supposed to be a constant in the 2D model. The power sensor is fabricated by a GaAs monolithic microwave integrated circuit (MMIC) process and micro-electro-mechanical system (MEMS) technology. The microwave performance experiment shows that the S11 is less than -26 dB over the frequency band of 1-10 GHz. The power response experiment demonstrates that the output voltage increases from 0 mV to 27 mV, while the incident power varies from 1 mW to 100 mW. The measured sensitivity is about 0.27 mV/mW, and the calculated result from the 3D model is 0.28 mV/mW. The relative error has been reduced from 7.5% of the 2D model to 3.7% of the 3D model. PMID:27338395
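    The quoted relative error of the 3D model follows directly from the sensitivity figures:

```python
# Checking the abstract's figures: measured sensitivity vs. the 3D model
# prediction (both in mV/mW).
measured = 0.27
model_3d = 0.28
rel_err_3d = abs(model_3d - measured) / measured
print(f"3D model relative error: {rel_err_3d:.1%}")  # ~3.7%
```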

  14. A 3D Model of the Thermoelectric Microwave Power Sensor by MEMS Technology

    PubMed Central

    Yi, Zhenxiang; Liao, Xiaoping

    2016-01-01

    In this paper, a novel 3D model is proposed to describe the temperature distribution of the thermoelectric microwave power sensor. In this 3D model, the heat flux density decreases from the upper surface to the lower surface of the GaAs substrate while it was supposed to be a constant in the 2D model. The power sensor is fabricated by a GaAs monolithic microwave integrated circuit (MMIC) process and micro-electro-mechanical system (MEMS) technology. The microwave performance experiment shows that the S11 is less than −26 dB over the frequency band of 1–10 GHz. The power response experiment demonstrates that the output voltage increases from 0 mV to 27 mV, while the incident power varies from 1 mW to 100 mW. The measured sensitivity is about 0.27 mV/mW, and the calculated result from the 3D model is 0.28 mV/mW. The relative error has been reduced from 7.5% of the 2D model to 3.7% of the 3D model. PMID:27338395

  15. New light sources and sensors for active optical 3D inspection

    NASA Astrophysics Data System (ADS)

    Osten, Wolfgang; Jueptner, Werner P. O.

    1999-11-01

    The implementation of active processing strategies in optical 3D inspection requires flexible hardware solutions. The illumination and sensor/detector components are actively involved in the processing chain through a feedback loop controlled by the evaluation process. This article therefore deals with new light sources and sensors that have recently appeared on the market and can be applied successfully to implement active processing principles. Some applications in which such new components are used to implement an active measurement strategy are presented.

  16. Comprehensive nanostructure and defect analysis using a simple 3D light-scatter sensor.

    PubMed

    Herffurth, Tobias; Schröder, Sven; Trost, Marcus; Duparré, Angela; Tünnermann, Andreas

    2013-05-10

    Light scattering measurement and analysis is a powerful tool for the characterization of optical and nonoptical surfaces. A new 3D scatter measurement system based on a detector matrix is presented. A compact light-scatter sensor is used to characterize the scattering and nanostructures of surfaces and to identify the origins of anisotropic scattering features. The results from the scatter sensor are directly compared with white light interferometry to analyze surface defects as well as surface roughness and the corresponding scattering distributions. The scattering of surface defects is modeled based on the Kirchhoff integral equation and the approach of Beckmann for rough surfaces. PMID:23669841

  17. Quantification of inertial sensor-based 3D joint angle measurement accuracy using an instrumented gimbal.

    PubMed

    Brennan, A; Zhang, J; Deluzio, K; Li, Q

    2011-07-01

    This study quantified the accuracy of inertial sensors in 3D anatomical joint angle measurement with respect to an instrumented gimbal. The gimbal rotated about three axes and directly measured the angles in the ISB-recommended knee joint coordinate system. Through the use of sensor attachment devices physically fixed to the gimbal, the joint angle estimation error due to sensor attachment (the inaccuracy of the sensor attachment matrix) was essentially eliminated, leaving only error due to the inertial sensors. The angle estimation error (RMSE) corresponding to the sensors was found to be 3.20° in flexion/extension, 3.42° in abduction/adduction and 2.88° in internal/external rotation. Bland-Altman means of maximum absolute value were -1.63° in flexion/extension, 3.22° in abduction/adduction and -2.61° in internal/external rotation. The magnitude of the errors reported in this study implies that, even under ideal conditions irreproducible in human gait studies, inertial angle measurement will be subject to errors of a few degrees. At the same time, the reported errors are smaller than those reported previously in human gait studies, which suggests that sensor attachment is also a significant source of error in inertial gait measurement. The proposed apparatus and methodology could be used to quantify the performance of different sensor systems and orientation estimation algorithms, and to verify experimental protocols before human experimentation. PMID:21715167
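    For reference, the RMSE metric quoted above is the usual root-mean-square error over an angle trace; a minimal computation (the traces here are synthetic, not the study's data):

```python
import math

def rmse(estimated, reference):
    """Root-mean-square error between two angle series (degrees)."""
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimated, reference))
                     / len(reference))

# Synthetic flexion/extension traces, for illustration only:
ref = [0.0, 10.0, 20.0, 30.0, 20.0, 10.0]
est = [1.0, 12.0, 18.0, 33.0, 21.0, 8.0]
print(f"RMSE: {rmse(est, ref):.2f} deg")
```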

  18. DLP/DSP-based optical 3D sensors for the mass market in industrial metrology and life sciences

    NASA Astrophysics Data System (ADS)

    Frankowski, G.; Hainich, R.

    2011-03-01

    GFM has developed and constructed DLP-based optical 3D measuring devices based on structured light illumination. Over the years the devices have been used in industrial metrology and life sciences for different 3D measuring tasks. This lecture will discuss integration of DLP Pico technology and DSP technology from Texas Instruments for mass market optical 3D sensors. In comparison to existing mass market laser triangulation sensors, the new 3D sensors provide a full-field measurement of up to a million points in less than a second. The lecture will further discuss different fields of application and advantages of the new generation of 3D sensors for: OEM application in industrial measuring and inspection; 3D metrology in industry, life sciences and biometrics, and industrial image processing.

  19. The fast and accurate 3D-face scanning technology based on laser triangle sensors

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Chen, Yang; Kong, Bin

    2013-08-01

    A laser triangulation scanning method and the structure of a 3D face measurement system are introduced. In the presented system, a line laser source is used as the optical probe so that a whole line is scanned at once, and a CCD image sensor captures the image of the laser line modulated by the human face. The system parameters are obtained by calibration: the lens parameters of the imaging part are calibrated with a machine-vision method, and the triangulation structure parameters are calibrated with parallel-arranged fine wires. The CCD imaging part and the line laser indicator are mounted on a linear motor carriage that scans the laser line from the top of the head to the neck. Because the nose protrudes and the eyes are recessed, a single CCD sensor cannot capture a complete image of the laser line; in this system, two CCD image sensors are set symmetrically on either side of the laser indicator, so the structure comprises two laser triangulation units. Another design choice is that three laser indicators are arranged to reduce the scanning time, since it is difficult for a person to remain still for long. The 3D data are calculated after scanning, and further data processing includes 3D coordinate refinement, mesh generation, and surface rendering. Experiments show that the system has a simple structure, high scanning speed and good accuracy. The scanning range covers the whole head of an adult, and the typical resolution is 0.5 mm.
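    The underlying depth recovery can be sketched with a simplified parallel-axis triangulation model (the focal length, baseline, and pixel displacement below are assumed values, not the paper's calibrated parameters):

```python
# Minimal parallel-axis triangulation sketch: the laser line is projected
# parallel to the camera's optical axis at baseline b; a surface point at
# depth z images at displacement x = f*b/z, so z = f*b/x. Values assumed.
f = 0.016      # focal length, m (assumed)
b = 0.10       # laser-camera baseline, m (assumed)

def depth_from_pixel(x):
    """x: image-plane displacement of the laser line, in metres."""
    return f * b / x

x = 8e-3       # 8 mm displacement on the sensor
z = depth_from_pixel(x)
print(f"depth: {z:.3f} m")  # 0.016*0.10/0.008 = 0.200 m
```

    Real systems replace this idealized geometry with the calibrated lens and triangulation parameters mentioned in the abstract, but the inverse relation between displacement and depth is the same.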

  20. Characterization of the 3D resolution of topometric sensors based on fringe and speckle pattern projection by a 3D transfer function

    NASA Astrophysics Data System (ADS)

    Berssenbrügge, Philipp; Dekiff, Markus; Kemper, Björn; Denz, Cornelia; Dirksen, Dieter

    2012-03-01

    The increasing importance of optical 3D measurement techniques and the growing number of available methods and systems require a fast and simple method to characterize measurement accuracy. However, the conventional approach of comparing measured coordinates to known reference coordinates of a test target faces two major challenges: the precise fabrication of the target and, in the case of pattern-projecting systems, finding the position of the reference points in the obtained point cloud. The modulation transfer function (MTF), on the other hand, is an established instrument for describing the resolution characteristics of 2D imaging systems. Here, the MTF concept is applied to two different topometric systems based on fringe and speckle pattern projection to obtain a 3D transfer function. We demonstrate that in the present case fringe projection provides typically 3.5 times the 3D resolution achieved with speckle pattern projection. By combining measurements of the 3D transfer function with 2D MTF measurements, the dependency of the 2D and 3D resolutions is characterized. We show that the method allows for a simple comparison of the 3D resolution of two 3D sensors using a low-cost test target that is easy to manufacture.

  1. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    NASA Astrophysics Data System (ADS)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in photogrammetry and computer vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured-light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer, they constitute an attractive 3D digitization approach; consequently, although range-based methods are generally more accurate, image-based methods are low-cost and can easily be used by non-professional users. One of the factors affecting the accuracy of the model obtained with image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open-source software, and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors, and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method of generating three-dimensional models. Much research has been carried out to identify suitable software and algorithms for achieving an accurate and complete model; however, little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is to deliberate on and introduce an appropriate combination of sensor and software that provides a complete model with the highest accuracy.
To do this, different software, used in previous studies, were compared and

  2. Recent development of 3D imaging laser sensor in Mitsubishi Electric Corporation

    NASA Astrophysics Data System (ADS)

    Imaki, M.; Kotake, N.; Tsuji, H.; Hirai, A.; Kameyama, S.

    2013-09-01

    We have been developing 3-D imaging laser sensors for several years because, in addition to intensity, they acquire range data for the scene. Since this enhances the potential to detect unwanted people and objects, the sensors can be utilized in applications such as safety control and security surveillance. In this paper, we focus on two types of our sensors: a high-frame-rate type and a compact type. For the high-frame-rate system, we have developed two key devices: a linear array receiver with 256 single InAlAs-APD detectors and a read-out IC (ROIC) array fabricated in a SiGe BiCMOS process, electrically connected to each other. Each ROIC measures not only the intensity but also the distance to the scene by high-speed analog signal processing. In addition, by mechanically scanning a mirror in the direction perpendicular to the linear array receiver, we have realized high-speed operation with a frame rate over 30 Hz and 256 x 256 pixels. In the compact-type 3-D imaging laser sensor, we have succeeded in downsizing the transmitter by scanning only the laser beam with a two-dimensional MEMS scanner. To obtain a wide field-of-view image, a suitable receiving optical system and a large-area receiver are needed in addition to the MEMS scanner angle. We have developed a large-detecting-area receiver consisting of 32 rectangular detectors whose output signals are summed; our original circuit evaluates each signal level, removes the low-level signals, and sums the rest in order to improve the signal-to-noise ratio. In the following paper, we describe the system configurations and recent experimental results of the two types of our 3-D imaging laser sensors.

  3. 3D heterogeneous sensor system on a chip for defense and security applications

    NASA Astrophysics Data System (ADS)

    Bhansali, Shekhar; Chapman, Glenn H.; Friedman, Eby G.; Ismail, Yehea; Mukund, P. R.; Tebbe, Dennis; Jain, Vijay K.

    2004-09-01

    This paper describes a new concept for ultra-small, ultra-compact, unattended multi-phenomenological sensor systems for rapid deployment, with integrated classification-and-decision-information extraction capability from a sensed environment. We discuss a unique approach, namely a 3-D Heterogeneous System on a Chip (HSoC), to achieve a minimum 10X reduction in weight, volume, and power and a 10X or greater increase in capability and reliability over the alternative planar approaches. These gains will accrue from (a) the avoidance of long on-chip interconnects and chip-to-chip bonding wires, and (b) the cohabitation of sensors, preprocessing analog circuitry, digital logic and signal processing, and RF devices in the same compact volume. A specific scenario is discussed in detail wherein a set of four types of sensors, namely an array of acoustic and seismic sensors, an active pixel sensor array, and an uncooled IR imaging array, are placed on a common sensor plane. The other planes include an analog plane consisting of transducers and A/D converters. The digital processing planes provide the necessary processing and intelligence capability. The remaining planes provide wireless communications/networking capability. When appropriate, this processing and decision-making will be accomplished on a collaborative basis among the distributed sensor nodes through a wireless network.

  4. Dynamic 3-D chemical agent cloud mapping using a sensor constellation deployed on mobile platforms

    NASA Astrophysics Data System (ADS)

    Cosofret, Bogdan R.; Konno, Daisei; Rossi, David; Marinelli, William J.; Seem, Pete

    2014-05-01

    The need for standoff detection technology to provide early Chem-Bio (CB) threat warning is well documented. Much of the information obtained by a single passive sensor is limited to bearing and angular extent of the threat cloud. In order to obtain absolute geo-location, range to threat, 3-D extent and detailed composition of the chemical threat, fusion of information from multiple passive sensors is needed. A capability that provides on-the-move chemical cloud characterization is key to the development of real-time Battlespace Awareness. We have developed, implemented and tested algorithms and hardware to perform the fusion of information obtained from two mobile LWIR passive hyperspectral sensors. The implementation of the capability is driven by current Nuclear, Biological and Chemical Reconnaissance Vehicle operational tactics and represents a mission focused alternative of the already demonstrated 5-sensor static Range Test Validation System (RTVS).1 The new capability consists of hardware for sensor pointing and attitude information which is made available for streaming and aggregation as part of the data fusion process for threat characterization. Cloud information is generated using 2-sensor data ingested into a suite of triangulation and tomographic reconstruction algorithms. The approaches are amenable to using a limited number of viewing projections and unfavorable sensor geometries resulting from mobile operation. In this paper we describe the system architecture and present an analysis of results obtained during the initial testing of the system at Dugway Proving Ground during BioWeek 2013.
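    The triangulation step described above can be sketched in 2D: each passive sensor contributes a bearing to the cloud, and two bearings fix its location at the intersection of the rays. A minimal sketch with made-up geometry, not the RTVS algorithms:

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect two bearing rays (angles in radians from the +x axis)
    cast from sensor positions p1, p2; returns the 2D intersection.
    Solves p1 + t1*d1 = p2 + t2*d2 for t1 via Cramer's rule."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two sensors 1 km apart, both sighting a cloud at (600, 800) m
# (synthetic geometry for illustration):
target = (600.0, 800.0)
p1, p2 = (0.0, 0.0), (1000.0, 0.0)
th1 = math.atan2(target[1] - p1[1], target[0] - p1[0])
th2 = math.atan2(target[1] - p2[1], target[0] - p2[0])
est = triangulate(p1, th1, p2, th2)
print(est)  # ≈ (600.0, 800.0)
```

    Extending this to 3D clouds and many viewing angles leads to the tomographic reconstruction mentioned in the abstract; the two-ray intersection is the simplest special case.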

  5. 3D Measurement of Forearm and Upper Arm during Throwing Motion using Body Mounted Sensor

    NASA Astrophysics Data System (ADS)

    Koda, Hideharu; Sagawa, Koichi; Kuroshima, Kouta; Tsukamoto, Toshiaki; Urita, Kazutaka; Ishibashi, Yasuyuki

    The aim of this study is to propose a method for measuring the three-dimensional (3D) movement of the forearm and upper arm during the baseball pitching motion using inertial sensors, without strict requirements on sensor placement. Although highly accurate measurement of sports motion is currently achieved with optical motion capture systems, these have disadvantages such as camera calibration and restrictions on the measurement location. The proposed method provides the trajectory and orientation of the upper arm by integrating the acceleration and angular velocity measured on the upper limb. The forearm trajectory is derived so that the elbow joint axis of the forearm coincides with that of the upper arm. The spatial relation between the upper limb and the sensor system is obtained by performing predetermined upper-limb movements and utilizing the angular velocity and gravitational acceleration. The integration error is corrected so that the estimated final position, velocity and posture of the upper limb agree with the actual ones. Experimental measurements of the pitching motion show that the shoulder, elbow and wrist trajectories estimated by the proposed method are highly correlated with those from a motion capture system, with an estimation error of about 10%.
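    The end-state correction described above can be illustrated in one dimension. The sketch below assumes a linearly growing bias (not necessarily the authors' exact scheme): integrate acceleration to velocity, then subtract the drift so that the final velocity matches the known end state, here at rest.

```python
# Sketch of end-state drift correction (assumed linear-drift form):
# integrate acceleration to velocity, then remove a linearly growing
# bias so the final velocity equals the known end value.
def integrate_with_end_correction(acc, dt, v_end=0.0):
    """acc: world-frame acceleration samples (gravity removed);
    returns drift-corrected velocity samples."""
    v, vel = 0.0, [0.0]
    for a in acc:
        v += a * dt                       # plain rectangular integration
        vel.append(v)
    drift = vel[-1] - v_end               # residual at the known end state
    n = len(vel) - 1
    return [vi - drift * i / n for i, vi in enumerate(vel)]

# A throw-like profile that should end at rest: accelerate, then
# decelerate, plus a constant sensor bias that plain integration keeps.
dt = 0.01
acc = [5.0] * 50 + [-5.0] * 50
biased = [a + 0.2 for a in acc]           # 0.2 m/s^2 bias
vel = integrate_with_end_correction(biased, dt)
print(f"final velocity after correction: {vel[-1]:.3f} m/s")
```

    With a constant bias, the correction is exact: the mid-throw velocity (2.5 m/s in this synthetic profile) is recovered and the final velocity returns to zero.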

  6. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation

    PubMed Central

    Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco

    2009-01-01

    3D imaging sensors for the acquisition of three-dimensional (3D) shapes have attracted, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in achieving compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a “sensor fusion” approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state-of-the-art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications. PMID:22389618

  7. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation.

    PubMed

    Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco

    2009-01-01

    3D imaging sensors for the acquisition of three-dimensional (3D) shapes have attracted, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in achieving compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a "sensor fusion" approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state-of-the-art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications. PMID:22389618

  8. Advancing Lidar Sensors Technologies for Next Generation Landing Missions

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Hines, Glenn D.; Roback, Vincent E.; Petway, Larry B.; Barnes, Bruce W.; Brewster, Paul F.; Pierrottet, Diego F.; Bulyshev, Alexander

    2015-01-01

    Missions to solar systems bodies must meet increasingly ambitious objectives requiring highly reliable "precision landing", and "hazard avoidance" capabilities. Robotic missions to the Moon and Mars demand landing at pre-designated sites of high scientific value near hazardous terrain features, such as escarpments, craters, slopes, and rocks. Missions aimed at paving the path for colonization of the Moon and human landing on Mars need to execute onboard hazard detection and precision maneuvering to ensure safe landing near previously deployed assets. Asteroid missions require precision rendezvous, identification of the landing or sampling site location, and navigation to the highly dynamic object that may be tumbling at a fast rate. To meet these needs, NASA Langley Research Center (LaRC) has developed a set of advanced lidar sensors under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project. These lidar sensors can provide precision measurement of vehicle relative proximity, velocity, and orientation, and high resolution elevation maps of the surface during the descent to the targeted body. Recent flights onboard Morpheus free-flyer vehicle have demonstrated the viability of ALHAT lidar sensors for future landing missions to solar system bodies.

  9. Lidar as a complementary sensor technology for harbor security

    NASA Astrophysics Data System (ADS)

    Steele, Kenneth; Ulich, Bobby; Dietz, Anthony

    2005-05-01

    A comprehensive threat detection system is needed to protect critical assets and infrastructure in the nation's harbors. Such a system must necessarily rely on a variety of sensor technologies to provide protection against airborne, submerged, and surface threats. Although many threats can be detected with current technologies, which include sonar, radar, visible light, and infrared sensors, there is a substantive gap in harbor defense against quiet surface intruders such as swimmers. This threat cannot be reliably detected with current sensors. Waves and chop occlude visual detection and render sonar blind to relatively small surface objects. The dark of night is also sufficient to defeat visible light detection methods. Wetsuit materials are available that minimize the infrared signature, matching the surrounding water temperature while cloaking the body's heat. However, a range-gated lidar sensor can be used to detect the signature of the swimmer's shadow, which appears impossible to conceal because it depends only on the opaqueness of the swimmer's body to visible light. A spatially diffuse laser pulse of short duration is used to illuminate an area of interest. The photons are forward scattered in the water, and effectively illuminate the water to an appreciable depth. By range-gated imaging of the water column beneath the swimmer, the absence of backscattered photons is manifested as a shadow in the sensor image and easily detected with existing image processing algorithms. Lidar technology can therefore close the sensor gap, complementing existing systems and providing greater security coverage. The technology has been successfully demonstrated in the detection of moored and floating sea mines, and is readily scaled to a harbor defense system consisting of a network of imaging lidars.

  10. Package analysis of 3D-printed piezoresistive strain gauge sensors

    NASA Astrophysics Data System (ADS)

    Das, Sumit Kumar; Baptist, Joshua R.; Sahasrabuddhe, Ritvij; Lee, Woo H.; Popa, Dan O.

    2016-05-01

    Poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate), or PEDOT:PSS, is a flexible polymer which exhibits piezoresistive properties when subjected to structural deformation. PEDOT:PSS has a high conductivity and thermal stability, which makes it an ideal candidate for use as a pressure sensor. Applications of this technology include whole-body robot skin that can increase the safety and physical collaboration of robots in close proximity to humans. In this paper, we present a finite element model of strain gauge touch sensors which have been 3D-printed onto Kapton and silicone substrates using Electro-Hydro-Dynamic ink-jetting. Simulations of the piezoresistive and structural model for the entire packaged sensor were carried out using COMSOL and compared with experimental results for validation. The model will be useful in designing future robot skin with predictable performance.
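The first-order behavior of a piezoresistive strain gauge like the one modeled above follows the standard relation dR/R = GF·ε. The resistance and gauge-factor values below are illustrative placeholders, not measured PEDOT:PSS parameters from the paper:

```python
def gauge_resistance(r0_ohm, gauge_factor, strain):
    """First-order piezoresistive model: dR/R = GF * strain."""
    return r0_ohm * (1.0 + gauge_factor * strain)

r0 = 1000.0   # unstrained resistance, ohms (assumed)
gf = 2.0      # gauge factor (assumed; metals ~2, polymers vary widely)
r = gauge_resistance(r0, gf, 0.01)   # response to 1% strain
print(round(r, 6))                   # 1020.0
```

A finite element model such as the paper's COMSOL simulation effectively evaluates this relation over the full strain field of the packaged sensor rather than at a single lumped point.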

  11. Nonthreshold-based event detection for 3d environment monitoring in sensor networks

    SciTech Connect

    Li, M.; Liu, Y.H.; Chen, L.

    2008-12-15

    Event detection is a crucial task for wireless sensor network applications, especially environment monitoring. Existing approaches for event detection are mainly based on predefined threshold values and, thus, are often inaccurate and incapable of capturing complex events. For example, in coal mine monitoring scenarios, gas leakage or water osmosis can hardly be described by the overrun of specified attribute thresholds, but rather by some complex pattern in the full-scale view of the environmental data. To address this issue, we propose a nonthreshold-based approach for the real 3D sensor monitoring environment. We employ energy-efficient methods to collect a time series of data maps from the sensor network and detect complex events through matching the gathered data to spatiotemporal data patterns. Finally, we conduct trace-driven simulations to prove the efficacy and efficiency of this approach in detecting events of complex phenomena from real-life records.
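The contrast with threshold tests can be sketched as follows: instead of checking whether a reading exceeds a limit, the gathered series is compared against a stored event signature. Normalized correlation here is a simple stand-in for the paper's (unspecified) spatiotemporal matching procedure, and the sinusoidal "signature" is invented for illustration:

```python
import numpy as np

def matches_pattern(series, pattern, thresh=0.9):
    """Non-threshold event detection sketch: compare the sensed series
    against a stored event signature via normalized correlation.
    An illustrative stand-in for the paper's matching method."""
    s = (series - series.mean()) / (series.std() + 1e-12)
    p = (pattern - pattern.mean()) / (pattern.std() + 1e-12)
    return float((s * p).mean()) >= thresh

pattern = np.sin(np.linspace(0, 2 * np.pi, 50))      # stored "gas leak" signature
event = pattern + 0.02 * np.cos(np.linspace(0, 7, 50))
print(matches_pattern(event, pattern))               # True
print(matches_pattern(np.linspace(0, 1, 50), pattern))  # False
```

A slow monotonic drift (second call) never crosses a fixed threshold yet also fails to match the signature, illustrating why pattern matching and thresholding answer different questions.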

  12. Distributed network of integrated 3D sensors for transportation security applications

    NASA Astrophysics Data System (ADS)

    Hejmadi, Vic; Garcia, Fred

    2009-05-01

    The US Port Security Agency has strongly emphasized the need for tighter control at transportation hubs. Distributed arrays of miniature CMOS cameras provide some solutions today. However, due to the high bandwidth required and the low-value content of such cameras (a simple video feed), large computing power, analysis algorithms, and control software are needed, which makes such an architecture cumbersome, heavy, slow, and expensive. We present a novel technique that integrates cheap, mass-replicable stealth 3D sensing micro-devices in a distributed network. These micro-sensors are based on conventional structured illumination, projecting successive fringe patterns onto the object to be sensed. The communication bandwidth required by each sensor remains very small, but its content is of very high value. Key technologies to integrate such a sensor are digital optics and structured laser illumination.
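Structured illumination with successive fringe patterns is typically decoded by phase shifting. A minimal sketch of the standard four-step relation (I_k = A + B·cos(φ + k·π/2)) is below; the abstract does not specify how many fringe patterns are used, so the four-step scheme and the pixel values are assumptions:

```python
import math

def four_step_phase(i0, i1, i2, i3):
    """Recover fringe phase from four frames shifted by 90 degrees:
    I_k = A + B*cos(phi + k*pi/2). The standard phase-shifting
    relation; mapping phase to depth depends on the (unspecified)
    projector-camera geometry."""
    return math.atan2(i3 - i1, i0 - i2)

# Synthetic pixel: offset A, modulation B, true phase 0.7 rad.
A, B, phi = 10.0, 5.0, 0.7
frames = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
print(round(four_step_phase(*frames), 3))   # 0.7
```

Only the recovered phase per pixel needs to leave the sensor, which is why the communication bandwidth can stay small while the content remains high-value.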

  13. Multi-sourced, 3D geometric characterization of volcanogenic karst features: Integrating lidar, sonar, and geophysical datasets (Invited)

    NASA Astrophysics Data System (ADS)

    Sharp, J. M.; Gary, M. O.; Reyes, R.; Halihan, T.; Fairfield, N.; Stone, W. C.

    2009-12-01

    Karstic aquifers can form very complex hydrogeological systems, and 3-D mapping has been difficult, but Lidar, phased-array sonar, and improved earth resistivity techniques show promise in this and in linking metadata to models. Zacatón, perhaps the Earth's deepest cenote, has a sub-aquatic void space exceeding 7.5 × 10^6 m^3. It is the focus of this study, which has created detailed 3D maps of the system. These maps include data from above and beneath the water table and within the rock matrix to document the extent of the immense karst features and to interpret the geologic processes that formed them. Phase 1 used high-resolution (20 mm) Lidar scanning of surficial features of four large cenotes. Scan locations, selected to achieve full feature coverage once registered, were established atop surface benchmarks with UTM coordinates established using GPS and Total Stations. The combined datasets form a geo-registered mesh of surface features down to water level in the cenotes. Phase 2 conducted subsurface imaging using Earth Resistivity Imaging (ERI) geophysics. ERI identified void spaces isolated from open flow conduits. A unique travertine morphology exists in which some cenotes are dry or contain shallow lakes with flat travertine floors; some water-filled cenotes have flat floors without the cone of collapse material; and some have collapse cones. We hypothesize that the floors may have large water-filled voids beneath them. Three separate flat travertine caps were imaged: 1) La Pilita, which is partially open, exposing cap structure over a deep water-filled shaft; 2) Poza Seca, which is dry and vegetated; and 3) Tule, which contains a shallow (<1 m) lake. A fourth line was run adjacent to cenote Verde. La Pilita ERI, verified by SCUBA, documented the existence of large water-filled void zones. ERI at Poza Seca showed a thin cap overlying a conductive zone extending to at least 25 m depth beneath the cap, with no lower boundary of this zone evident.

  14. Pedestrian Navigation Using Foot-Mounted Inertial Sensor and LIDAR

    PubMed Central

    Pham, Duy Duong; Suh, Young Soo

    2016-01-01

    Foot-mounted inertial sensors can be used for indoor pedestrian navigation. In this paper, to improve the accuracy of pedestrian location, we propose a method using a distance sensor (LIDAR) in addition to an inertial measurement unit (IMU). The distance sensor is a time of flight range finder with 30 m measurement range (at 33.33 Hz). Using a distance sensor, walls on corridors are automatically detected. The detected walls are used to correct the heading of the pedestrian path. Through experiments, it is shown that the accuracy of the heading is significantly improved using the proposed algorithm. Furthermore, the system is shown to work robustly in indoor environments with many doors and passing people. PMID:26797619
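The wall-based heading correction can be sketched simply: in corridor environments walls run along a small set of dominant axes, so the angle of a detected wall, snapped to the nearest axis, reveals the accumulated heading drift. The 90-degree axis assumption and the snapping rule below are a simplified stand-in for the paper's LIDAR/IMU fusion, not its actual filter:

```python
import math

def heading_correction(wall_angle_rad):
    """Snap a measured wall direction to the nearest corridor axis
    (multiples of 90 degrees) and return the implied heading error.
    A simplified stand-in for the paper's wall-based heading update."""
    quarter = math.pi / 2
    nearest_axis = round(wall_angle_rad / quarter) * quarter
    return wall_angle_rad - nearest_axis

# IMU drift makes a straight wall appear rotated by about 3 degrees.
err = heading_correction(math.radians(3.0))
print(round(math.degrees(err), 6))   # 3.0
```

Subtracting this error from the estimated heading at each wall detection is what keeps the foot-mounted dead-reckoning path from slowly rotating away from the building frame.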

  15. Pedestrian Navigation Using Foot-Mounted Inertial Sensor and LIDAR.

    PubMed

    Pham, Duy Duong; Suh, Young Soo

    2016-01-01

    Foot-mounted inertial sensors can be used for indoor pedestrian navigation. In this paper, to improve the accuracy of pedestrian location, we propose a method using a distance sensor (LIDAR) in addition to an inertial measurement unit (IMU). The distance sensor is a time of flight range finder with 30 m measurement range (at 33.33 Hz). Using a distance sensor, walls on corridors are automatically detected. The detected walls are used to correct the heading of the pedestrian path. Through experiments, it is shown that the accuracy of the heading is significantly improved using the proposed algorithm. Furthermore, the system is shown to work robustly in indoor environments with many doors and passing people. PMID:26797619

  16. Modelling Sensor and Target effects on LiDAR Waveforms

    NASA Astrophysics Data System (ADS)

    Rosette, J.; North, P. R.; Rubio, J.; Cook, B. D.; Suárez, J.

    2010-12-01

    The aim of this research is to explore the influence of sensor characteristics and interactions with vegetation and terrain properties on the estimation of vegetation parameters from LiDAR waveforms. This is carried out using waveform simulations produced by the FLIGHT radiative transfer model which is based on Monte Carlo simulation of photon transport (North, 1996; North et al., 2010). The opportunities for vegetation analysis that are offered by LiDAR modelling are also demonstrated by other authors e.g. Sun and Ranson, 2000; Ni-Meister et al., 2001. Simulations from the FLIGHT model were driven using reflectance and transmittance properties collected from the Howland Research Forest, Maine, USA in 2003 together with a tree list for a 200m x 150m area. This was generated using field measurements of location, species and diameter at breast height. Tree height and crown dimensions of individual trees were calculated using relationships established with a competition index determined for this site. Waveforms obtained by the Laser Vegetation Imaging Sensor (LVIS) were used as validation of simulations. This provided a base from which factors such as slope, laser incidence angle and pulse width could be varied. This has enabled the effect of instrument design and laser interactions with different surface characteristics to be tested. As such, waveform simulation is relevant for the development of future satellite LiDAR sensors, such as NASA’s forthcoming DESDynI mission (NASA, 2010), which aim to improve capabilities of vegetation parameter estimation. ACKNOWLEDGMENTS We would like to thank scientists at the Biospheric Sciences Branch of NASA Goddard Space Flight Center, in particular to Jon Ranson and Bryan Blair. This work forms part of research funded by the NASA DESDynI project and the UK Natural Environment Research Council (NE/F021437/1). REFERENCES NASA, 2010, DESDynI: Deformation, Ecosystem Structure and Dynamics of Ice. http
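At its simplest, a full-waveform return is the outgoing laser pulse convolved with the vertical profile of scattering surfaces (canopy and ground). The toy model below illustrates only that convolution idea and is far simpler than FLIGHT's Monte Carlo photon transport; the profile, pulse width, and bin layout are all assumed:

```python
import numpy as np

def simulate_waveform(profile, pulse_sigma_bins=3.0):
    """Toy LiDAR waveform: convolve a vertical reflectance profile
    (canopy + ground returns) with a Gaussian outgoing pulse."""
    n = int(6 * pulse_sigma_bins) | 1          # odd kernel length
    t = np.arange(n) - n // 2
    pulse = np.exp(-0.5 * (t / pulse_sigma_bins) ** 2)
    pulse /= pulse.sum()
    return np.convolve(profile, pulse, mode="same")

profile = np.zeros(100)
profile[30] = 1.0    # canopy-top return
profile[80] = 2.0    # brighter ground return
wf = simulate_waveform(profile)
print(wf.argmax())   # 80
```

Widening the pulse or tilting the ground (slope) smears the two peaks together, which is exactly the sensor/target interaction the full radiative-transfer simulations are used to quantify.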

  17. Development of scanning laser sensor for underwater 3D imaging with the coaxial optics

    NASA Astrophysics Data System (ADS)

    Ochimizu, Hideaki; Imaki, Masaharu; Kameyama, Shumpei; Saito, Takashi; Ishibashi, Shoujirou; Yoshida, Hiroshi

    2014-06-01

    We have developed a scanning laser sensor for underwater 3-D imaging which has a wide scanning angle of 120° (horizontal) × 30° (vertical) in a compact package 25 cm in diameter and 60 cm long. Our system has a dome lens and coaxial optics to realize both the wide scanning angle and the compactness. The system also features a sensitivity time control (STC) circuit, in which the receiving gain is increased according to the time of flight. The STC circuit helps detect small signals by suppressing the unwanted signals backscattered by marine snow. We demonstrated the system performance in a pool and confirmed 3-D imaging at a distance of 20 m. Furthermore, the system was mounted on an autonomous underwater vehicle (AUV) and used to demonstrate seafloor mapping at a depth of 100 m in the ocean.
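The STC idea is that echo amplitude falls off with range, so receiver gain is ramped up with time of flight; a common form compensates geometric 1/R² spreading plus exponential water attenuation. The gain law and every constant below (sound-speed-like propagation constant, attenuation coefficient) are illustrative assumptions, not the paper's circuit parameters:

```python
import math

def stc_gain(t_ns, c_water=2.25e8, alpha_per_m=0.05, g0=1.0):
    """Sensitivity-time-control sketch: raise receiver gain with time
    of flight to offset geometric (1/R^2) spreading and two-way water
    attenuation. Constants are illustrative, not from the paper."""
    r = 0.5 * c_water * t_ns * 1e-9          # one-way range in metres
    if r <= 0:
        return g0
    return g0 * (r ** 2) * math.exp(2 * alpha_per_m * r)

# Gain grows steeply with range: compare echoes from 5 m and 20 m.
g5 = stc_gain(2 * 5 / 2.25e8 * 1e9)
g20 = stc_gain(2 * 20 / 2.25e8 * 1e9)
print(g20 > g5)   # True
```

Because near-range returns (where marine-snow backscatter is strongest) receive the lowest gain, the ramp itself suppresses that clutter relative to the distant target echo.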

  18. Multi-Camera Sensor System for 3D Segmentation and Localization of Multiple Mobile Robots

    PubMed Central

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence. PMID:22319297

  19. Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.

    PubMed

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence. PMID:22319297

  20. Research on Joint Parameter Inversion for an Integrated Underground Displacement 3D Measuring Sensor

    PubMed Central

    Shentu, Nanying; Qiu, Guohua; Li, Qing; Tong, Renyuan; Shentu, Nankai; Wang, Yanjie

    2015-01-01

    Underground displacement monitoring is a key means to monitor and evaluate geological disasters and geotechnical projects. Few practical instruments are able to monitor subsurface horizontal and vertical displacements simultaneously, owing to the invisibility and complexity of the monitoring environment. A novel underground displacement 3D measuring sensor was proposed in our previous studies, and great effort has been devoted to basic theoretical research on its sensing and measuring characteristics through modeling, simulation, and experiments. This paper presents an innovative underground displacement joint inversion method that combines a specific forward modeling approach with an approximate optimization inversion procedure. It realizes a joint inversion of underground horizontal and vertical displacement for the proposed 3D sensor. Comparative studies have been conducted between the measured and inverted parameters of underground horizontal and vertical displacements under a variety of experimental and inverse conditions. The results showed that when experimentally measured horizontal and vertical displacements are both varied within 0-30 mm, horizontal and vertical displacement inversion discrepancies are generally less than 3 mm and 1 mm, respectively, under three kinds of simulated underground displacement monitoring circumstances. This implies that the proposed joint inversion method is robust and efficient in predicting the measured values of underground horizontal and vertical displacements for the proposed sensor. PMID:25871714
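The forward-model-plus-inversion structure can be illustrated with a deliberately tiny example: a hypothetical linear forward model mapping (horizontal, vertical) displacement to two sensor outputs, inverted in closed form. The paper's actual forward model is a numerical electromagnetic simulation and its inversion is an iterative optimization; everything below, including the coefficients, is invented to show only the shape of the computation:

```python
def forward(h_mm, v_mm):
    """Hypothetical linear forward model: two sensor outputs as
    functions of horizontal and vertical displacement (coefficients
    are invented for illustration)."""
    return (2.0 * h_mm + 0.3 * v_mm, 0.4 * h_mm + 1.5 * v_mm)

def invert(m1, m2):
    """Closed-form inversion of the toy 2x2 linear model above; the
    paper instead couples a numerical forward model with an
    approximate optimization procedure."""
    det = 2.0 * 1.5 - 0.3 * 0.4
    return ((1.5 * m1 - 0.3 * m2) / det, (2.0 * m2 - 0.4 * m1) / det)

meas = forward(12.0, 7.0)            # true displacements: 12 mm, 7 mm
h, v = invert(*meas)
print(round(h, 6), round(v, 6))      # 12.0 7.0
```

The joint aspect matters because the two outputs each depend on both displacement components; inverting them together resolves the coupling that a per-channel calibration would miss.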

  1. 3D shape measurements with a single interferometric sensor for in-situ lathe monitoring

    NASA Astrophysics Data System (ADS)

    Kuschmierz, R.; Huang, Y.; Czarske, J.; Metschke, S.; Löffler, F.; Fischer, A.

    2015-05-01

    Temperature drifts, tool deterioration, unknown vibrations, and spindle play are major effects which decrease the achievable precision of computerized numerically controlled (CNC) lathes and lead to shape deviations between processed work pieces. Since currently no measurement system exists for fast, precise, in-situ 3D shape monitoring with keyhole access, much effort has to be made to simulate and compensate these effects. We therefore introduce an optical interferometric sensor for absolute 3D shape measurements, which was integrated into a working lathe. Matched to the spindle rotational speed, a measurement rate of 2,500 Hz was achieved. In-situ absolute shape, surface profile, and vibration measurements are presented. While thermal drifts of the sensor led to errors of several µm in the absolute shape, reference measurements with a coordinate measuring machine show that the surface profile could be measured with an uncertainty below one micron. Additionally, the spindle play of 0.8 µm was measured with the sensor.

  2. Beam test studies of 3D pixel sensors irradiated non-uniformly for the ATLAS forward physics detector

    NASA Astrophysics Data System (ADS)

    Grinstein, S.; Baselga, M.; Boscardin, M.; Christophersen, M.; Da Via, C.; Dalla Betta, G.-F.; Darbo, G.; Fadeyev, V.; Fleta, C.; Gemme, C.; Grenier, P.; Jimenez, A.; Lopez, I.; Micelli, A.; Nelist, C.; Parker, S.; Pellegrini, G.; Phlips, B.; Pohl, D.-L.; Sadrozinski, H. F.-W.; Sicho, P.; Tsiskaridze, S.

    2013-12-01

    Pixel detectors with cylindrical electrodes that penetrate the silicon substrate (so-called 3D detectors) offer advantages over standard planar sensors in terms of radiation hardness, since the electrode distance is decoupled from the bulk thickness. In recent years significant progress has been made in the development of 3D sensors, which culminated in the sensor production for the ATLAS Insertable B-Layer (IBL) upgrade carried out at CNM (Barcelona, Spain) and FBK (Trento, Italy). Based on this success, the ATLAS Forward Physics (AFP) experiment has selected the 3D pixel sensor technology for its tracking detector. The AFP project presents a new challenge due to the need for a reduced dead area with respect to IBL, and the inhomogeneous nature of the radiation dose distribution in the sensor. Electrical characterization of the first AFP prototypes and beam test studies of 3D pixel devices irradiated non-uniformly are presented in this paper.

  3. Deriving 3d Point Clouds from Terrestrial Photographs - Comparison of Different Sensors and Software

    NASA Astrophysics Data System (ADS)

    Niederheiser, Robert; Mokroš, Martin; Lange, Julia; Petschko, Helene; Prasicek, Günther; Oude Elberink, Sander

    2016-06-01

    Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive, and effective approach to 3D modelling. However, the important choice of which sensor and which software to use is not straightforward and needs consideration, as it will affect the resulting 3D point cloud and its derivatives. We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes, and price segments. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of a vegetated rock face from identical positions with all sensors. Then we compared the results of the different software packages regarding ease of workflow, visual appeal, and the similarity and quality of the point cloud. While PhotoScan and Pix4D offer the most user-friendly workflows, they are also "black-box" programmes giving only little insight into their processing. Unsatisfying results may only be changed by modifying settings within a module. The combined workflow of Visual SFM, SURE, and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software, as it is less user-friendly. However, MicMac offers the most possibilities to influence the processing workflow. The resulting point clouds of PhotoScan and MicMac are the most appealing.

  4. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single-beam, scanning, time-of-flight sensors, because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the non-zero delay between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but this leads to the motion-structure paradox: most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.
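The distortion-correction step at the end is straightforward once the motion is known: each range sample carries its own acquisition time, so mapping every sample back to a common reference time undoes the motion accumulated during the scan. The 2D rigid-motion sketch below (rotation rate plus x-translation) is a simplified illustration of that idea, not the paper's 3-D estimator:

```python
import math

def undistort(points, omega_z, vel, t0=0.0):
    """Remove scanning distortion: each sample (x, y, t) was taken
    while the object rotated at omega_z (rad/s) and translated at
    vel (m/s along x). Mapping every sample back to time t0
    recovers the rigid shape. 2D sketch of the 3-D correction."""
    out = []
    for x, y, t in points:
        dt = t - t0
        xs, ys = x - vel * dt, y                  # undo translation
        c, s = math.cos(-omega_z * dt), math.sin(-omega_z * dt)
        out.append((c * xs - s * ys, s * xs + c * ys))  # undo rotation
    return out

# A point imaged 0.5 s into the scan on a body translating at 2 m/s.
pts = undistort([(2.0, 0.0, 0.5)], omega_z=0.0, vel=2.0)
print(pts[0])   # (1.0, 0.0)
```

The paradox the abstract describes is that `omega_z` and `vel` must themselves be estimated from the distorted samples, which is why the method iterates between motion recovery and structure correction.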

  5. An Operational Wake Vortex Sensor Using Pulsed Coherent Lidar

    NASA Technical Reports Server (NTRS)

    Barker, Ben C., Jr.; Koch, Grady J.; Nguyen, D. Chi

    1998-01-01

    NASA and FAA initiated a program in 1994 to develop methods of setting spacings for landing aircraft by incorporating information on the real-time behavior of aircraft wake vortices. The current wake separation standards were developed in the 1970's when there was relatively light airport traffic and a logical break point by which to categorize aircraft. Today's continuum of aircraft sizes and increased airport packing densities have created a need for re-evaluation of wake separation standards. The goals of this effort are to ensure that separation standards are adequate for safety and to reduce aircraft spacing for higher airport capacity. Of particular interest are the different requirements for landing under visual flight conditions and instrument flight conditions. Over the years, greater spacings have been established for instrument flight than are allowed for visual flight conditions. Preliminary studies indicate that the airline industry would save considerable money and incur fewer passenger delays if a dynamic spacing system could reduce separations at major hubs during inclement weather to the levels routinely achieved under visual flight conditions. The sensor described herein may become part of this dynamic spacing system known as the "Aircraft VOrtex Spacing System" (AVOSS) that will interface with a future air traffic control system. AVOSS will use vortex behavioral models and short-term weather prediction models in order to predict vortex behavior sufficiently into the future to allow dynamic separation standards to be generated. The wake vortex sensor will periodically provide data to validate AVOSS predictions. Feasibility of measuring wake vortices using a lidar was first demonstrated using a continuous wave (CW) system from NASA Marshall Space Flight Center, tested at the Volpe National Transportation Systems Center's wake vortex test site at JFK International Airport. Other applications of CW lidar for wake vortex measurement have been made.

  6. 3D imaging of translucent media with a plenoptic sensor based on phase space optics

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzhe; Shu, Bohong; Du, Shaojun

    2015-05-01

    Traditional stereo imaging technology does not work for dynamic translucent media, because they show no obvious characteristic patterns and multiple cameras cannot be used in most cases. Phase space optics can solve the problem, extracting depth information directly from the "space-spatial frequency" distribution of the target obtained by a plenoptic sensor with a single lens. This paper discusses the representation of depth information in phase space data, and reconstruction algorithms for different degrees of transparency. A 3D imaging example of a waterfall is given at the end.

  7. Experimental Assessment of the Quanergy m8 LIDAR Sensor

    NASA Astrophysics Data System (ADS)

    Mitteta, M.-A.; Nouira, H.; Roynard, X.; Goulette, F.; Deschaud, J.-E.

    2016-06-01

    In this paper, experiments with the Quanergy M8 scanning LIDAR system are reported. The distance measurement obtained with the Quanergy M8 can be influenced by different factors, and measurement errors can originate from different sources. The environment in which the measurements are performed has an influence (temperature, light, humidity, etc.), and errors can also arise from the system itself. It is therefore necessary to determine the influence of these parameters on the quality of the distance measurements. For this purpose, different studies are presented and analyzed. First, we studied the temporal stability of the sensor by analyzing observations over time. Secondly, an assessment of the distance measurement quality was conducted; the aim of this step is to detect systematic errors in measurements with respect to range. Different series of measurements were conducted at different ranges and in different conditions (indoor and outdoor). Finally, we studied the consistency between the different beams of the LIDAR.
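A basic form of the temporal-stability study is to record a fixed target repeatedly and compare window means over time, which exposes warm-up drift. The windowing and the simulated drift values below are illustrative, not the authors' exact protocol:

```python
import statistics

def stability_report(ranges_m, window=5):
    """Split repeated range measurements of a fixed target into
    consecutive windows and report per-window means, exposing
    warm-up drift. Illustrative of a temporal-stability analysis."""
    return [statistics.mean(ranges_m[i:i + window])
            for i in range(0, len(ranges_m) - window + 1, window)]

# Simulated warm-up: readings drift from 10.02 m toward 10.00 m.
readings = [10.02] * 5 + [10.01] * 5 + [10.00] * 5
means = stability_report(readings)
print(means)   # [10.02, 10.01, 10.0]
```

The same windowed statistics, computed per beam, also support the beam-consistency comparison mentioned at the end of the abstract.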

  8. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.
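Normalized RMS tracking error is commonly defined as the RMS of the tracking error divided by the RMS of the target motion, so scores are comparable across tracking amplitudes. The abstract does not state the paper's exact normalization, so the definition below is a common convention, not necessarily theirs:

```python
import math

def normalized_rms_error(target, response):
    """RMS of (response - target) divided by RMS of the target,
    a common normalization for tracking-error scores."""
    n = len(target)
    rms_err = math.sqrt(sum((r - t) ** 2
                            for t, r in zip(target, response)) / n)
    rms_tgt = math.sqrt(sum(t ** 2 for t in target) / n)
    return rms_err / rms_tgt

target = [math.sin(0.1 * k) for k in range(100)]
lagged = [math.sin(0.1 * (k - 2)) for k in range(100)]  # latency adds error
print(normalized_rms_error(target, lagged) > 0.0)       # True
```

A pure time lag, as in this example, is exactly the kind of defect the latency manipulation introduces: the response shape is perfect, yet the normalized error is nonzero.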

  9. A contest of sensors in close range 3D imaging: performance evaluation with a new metric test object

    NASA Astrophysics Data System (ADS)

    Hess, M.; Robson, S.; Hosseininaveh Ahmadabadian, A.

    2014-06-01

    An independent means of 3D image quality assessment is introduced, addressing non-professional users of sensors and freeware, a field largely characterized by closed-source software and by the absence of quality metrics for processing steps such as alignment. A performance evaluation of commercially available, state-of-the-art close-range 3D imaging technologies is demonstrated with the help of a newly developed Portable Metric Test Artefact. The use of this test object provides quality control through a quantitative assessment of 3D imaging sensors. It will enable users to specify precisely which spatial resolution and geometric accuracy they expect as the outcome of their 3D digitizing process. This will lead to the creation of high-quality 3D digital surrogates and 3D digital assets. The paper is presented in the form of a competition of teams, from which a possible winner will emerge.

  10. Fast 3D modeling in complex environments using a single Kinect sensor

    NASA Astrophysics Data System (ADS)

    Yue, Haosong; Chen, Weihai; Wu, Xingming; Liu, Jingmeng

    2014-02-01

    Three-dimensional (3D) modeling technology has been widely used in inverse engineering, urban planning, robot navigation, and many other applications. How to build a dense model of the environment with limited processing resources is still a challenging topic. A fast 3D modeling algorithm that only uses a single Kinect sensor is proposed in this paper. For every color image captured by the Kinect, corner feature extraction is carried out first. Then a spiral search strategy is utilized to select a region of interest (ROI) that contains enough feature corners. Next, the iterative closest point (ICP) method is applied to the points in the ROI to align consecutive data frames. Finally, an analysis of which areas can be walked through by human beings is presented. Comparative experiments with the well-known KinectFusion algorithm have been carried out, and the results demonstrate that the accuracy of the proposed algorithm matches KinectFusion while the computing speed is nearly twice that of KinectFusion. 3D modeling of two public garden scenes and traversable-area analysis in these regions further verified the feasibility of our algorithm.
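The spiral search can be pictured as visiting grid cells outward from the image center until one contains enough feature corners; only that ROI is then passed to ICP, which is where the speedup over dense methods comes from. The generator below sketches the visiting order only (ring order within a radius is arbitrary here, and the paper's exact traversal is not specified):

```python
def spiral_offsets(max_ring):
    """Yield grid offsets outward from (0, 0): the center first, then
    rings of increasing Chebyshev radius. A sketch of the ROI search
    order; the paper's exact traversal is not specified."""
    yield (0, 0)
    for r in range(1, max_ring + 1):
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                if max(abs(dx), abs(dy)) == r:
                    yield (dx, dy)

offs = list(spiral_offsets(1))
print(offs[0], len(offs))   # (0, 0) 9
```

In use, each offset indexes a candidate window in the corner-response map, and the search stops at the first window whose corner count exceeds a threshold.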

  11. 3D imaging for ballistics analysis using chromatic white light sensor

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Hildebrandt, Mario; Dittmann, Jana; Clausing, Eric; Fischer, Robert; Vielhauer, Claus

    2012-03-01

    The novel application of sensing technology, based on chromatic white light (CWL), gives a new insight into ballistic analysis of cartridge cases. The CWL sensor uses a beam of white light to acquire highly detailed topography and luminance data simultaneously. The proposed 3D imaging system combines advantages of 3D and 2D image processing algorithms in order to automate the extraction of firearm specific toolmarks shaped on fired specimens. The most important characteristics of a fired cartridge case are the type of the breech face marking as well as size, shape and location of extractor, ejector and firing pin marks. The feature extraction algorithm normalizes the casing surface and consistently searches for the appropriate distortions on the rim and on the primer. The location of the firing pin mark in relation to the lateral scratches on the rim provides unique rotation invariant characteristics of the firearm mechanisms. Additional characteristics are the volume and shape of the firing pin mark. The experimental evaluation relies on the data set of 15 cartridge cases fired from three 9mm firearms of different manufacturers. The results show very high potential of 3D imaging systems for casing-based computer-aided firearm identification, which is prospectively going to support human expertise.

  12. 3D active edge silicon sensors with different electrode configurations: Radiation hardness and noise performance

    NASA Astrophysics Data System (ADS)

    Da Viá, C.; Bolle, E.; Einsweiler, K.; Garcia-Sciveres, M.; Hasi, J.; Kenney, C.; Linhart, V.; Parker, Sherwood; Pospisil, S.; Rohne, O.; Slavicek, T.; Watts, S.; Wermes, N.

    2009-06-01

    3D detectors, with electrodes penetrating the entire silicon wafer and active edges, were fabricated at the Stanford Nano Fabrication Facility (SNF), California, USA, with different electrode configurations. After irradiation with neutrons up to a fluence of 8.8×10^15 n_eq cm^-2, they were characterised using an infrared laser tuned to inject ~2 minimum ionising particles, showing signal efficiencies as high as 66% for the configuration with the shortest (56 μm) inter-electrode spacing. Sensors from the same wafer were also bump-bonded to the ATLAS FE-I3 pixel readout chip and their noise characterised. Most probable signal-to-noise ratios were calculated before and after irradiation to be as good as 38:1 after the highest irradiation level with a substrate thickness of 210 μm. These devices are promising candidates for applications at the LHC such as the very forward detectors at ATLAS and CMS, the ATLAS B-Layer replacement and the general pixel upgrade. Moreover, 3D sensors could play a role in applications where high-speed, high-resolution detectors are required, such as the vertex locators at the proposed Compact Linear Collider (CLIC) at CERN.

  13. Creation of 3D multi-body orthodontic models by using independent imaging sensors.

    PubMed

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-01-01

    In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision in identifying target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning. PMID:23385416

  14. 3D imaging of radiation damage in silicon sensor and spatial mapping of charge collection efficiency

    NASA Astrophysics Data System (ADS)

    Jakubek, M.; Jakubek, J.; Zemlicka, J.; Platkevic, M.; Havranek, V.; Semian, V.

    2013-03-01

    Radiation damage in semiconductor sensors alters the response and degrades the performance of many devices, ultimately limiting their stability and lifetime. In semiconductor radiation detectors the homogeneity of charge collection becomes distorted while the overall detection efficiency decreases. Moreover, the damage can significantly increase the detector noise and degrade other electrical properties such as leakage current. In this work we present a novel method for 3D mapping of the semiconductor radiation sensor volume, allowing display of the three-dimensional distribution of detector properties such as charge collection efficiency and charge diffusion rate. This technique can visualize spatially localized changes in detector performance after radiation damage. The sensors used were 300 μm and 1000 μm thick silicon, bump-bonded to a Timepix readout chip which serves as an imaging multichannel microprobe (256 × 256 square pixels with a pitch of 55 μm, i.e. altogether 65 thousand channels). The per-pixel energy sensitivity of the Timepix chip makes it possible to evaluate the local charge collection efficiency and also the charge diffusion rate. In this work we implement an X-ray line scanning technique for systematic evaluation of changes in the performance of a silicon sensor intentionally damaged by energetic protons.
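    As a rough illustration of the per-pixel charge-collection-efficiency (CCE) mapping described above (a hypothetical sketch, not the authors' Timepix processing chain), the per-pixel energy response of a damaged sensor can be divided by a reference measurement of the undamaged sensor:

    ```python
    import numpy as np

    # Hypothetical sketch: per-pixel CCE as the ratio of the energy measured
    # after damage to a reference (pre-damage) measurement. Pixels with no
    # reference signal are masked with NaN.
    def cce_map(measured_energy, reference_energy):
        """Per-pixel CCE = measured deposited energy / reference energy."""
        ref = np.asarray(reference_energy, dtype=float)
        ref = np.where(ref > 0, ref, np.nan)   # mask dead reference pixels
        return np.asarray(measured_energy, dtype=float) / ref
    ```
    
    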

  15. A High-Resolution 3D Weather Radar, MSG, and Lightning Sensor Observation Composite

    NASA Astrophysics Data System (ADS)

    Diederich, Malte; Senf, Fabian; Wapler, Kathrin; Simmer, Clemens

    2013-04-01

    Within the research group 'Object-based Analysis and SEamless prediction' (OASE) of the Hans Ertel Centre for Weather Research programme (HerZ), a data composite containing weather radar, lightning sensor, and Meteosat Second Generation observations is being developed for use in object-based weather analysis and nowcasting. At present, a 3D merging scheme combines measurements of the Bonn and Jülich dual polarimetric weather radar systems (data provided by the TR32 and TERENO projects) into a 3-dimensional polar-stereographic volume grid, with 500 meters horizontal and 250 meters vertical resolution. The merging takes into account and compensates for various observational error sources, such as attenuation through hydrometeors, beam blockage through topography and buildings, minimum detectable signal as a function of noise threshold, non-hydrometeor echoes like insects, and interference from other radar systems. In addition, the effect of convection during the radar 5-minute volume scan pattern is mitigated through the calculation of advection vectors from subsequent scans and their use for advection correction when projecting the measurements into space for any desired timestamp. The Meteosat Second Generation rapid scan service provides a scan in 12 visual and infrared spectral wavelengths every 5 minutes over Germany and Europe. These scans, together with the derived microphysical cloud parameters, are projected into the same polar-stereographic grid used for the radar data. Lightning counts from the LINET lightning sensor network are also provided for every 2D grid pixel. The combined 3D radar and 2D MSG/LINET data is stored in a fully documented netCDF file for every 5-minute interval, ready for tracking and object-based weather analysis. At the moment, the 3D data only covers the Bonn and Jülich area, but the algorithms are planned to be adapted to the newly conceived DWD polarimetric C-band 5-minute-interval volume scan strategy.
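    The advection correction mentioned above can be sketched as follows (a toy example with assumed grid spacing and vector values, not the OASE composite code): a measurement taken some seconds before the nominal analysis time is shifted by the estimated motion vector times the time offset before being merged into the composite grid.

    ```python
    # Toy advection-correction sketch: shift a grid index by the motion a
    # precipitation feature covers in dt seconds, given an advection vector
    # (u, v) in m/s and the 500 m horizontal grid spacing from the abstract.
    def advect_indices(ix, iy, u, v, dt, dx=500.0):
        """Shift grid indices by an advection vector (u, v) over dt seconds."""
        shift_x = int(round(u * dt / dx))   # whole grid cells moved eastward
        shift_y = int(round(v * dt / dx))   # whole grid cells moved northward
        return ix + shift_x, iy + shift_y
    ```
    
    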

  16. 3D silicon sensors: Design, large area production and quality assurance for the ATLAS IBL pixel detector upgrade

    NASA Astrophysics Data System (ADS)

    Da Via, Cinzia; Boscardin, Maurizio; Dalla Betta, Gian-Franco; Darbo, Giovanni; Fleta, Celeste; Gemme, Claudia; Grenier, Philippe; Grinstein, Sebastian; Hansen, Thor-Erik; Hasi, Jasmine; Kenney, Chris; Kok, Angela; Parker, Sherwood; Pellegrini, Giulio; Vianello, Elisa; Zorzi, Nicola

    2012-12-01

    3D silicon sensors, where electrodes penetrate the silicon substrate fully or partially, have successfully been fabricated in different processing facilities in Europe and the USA. The key to 3D fabrication is the use of plasma micro-machining to etch narrow, deep vertical openings, allowing dopants to be diffused in to form the electrodes of p-i-n junctions. Similar openings can be used at the sensor's edge to reduce the perimeter's dead volume to as little as ˜4 μm. Since 2009, four industrial partners of the 3D ATLAS R&D Collaboration have been engaged in a joint effort aimed at one common design and a compatible processing strategy for the production of 3D sensors for the LHC Upgrade, in particular for the ATLAS pixel Insertable B-Layer (IBL). In this project, scheduled for installation in 2013, a new layer will be inserted as close as 3.4 cm from the proton beams inside the existing pixel layers of the ATLAS experiment. The detector's proximity to the interaction point will therefore require new radiation-hard technologies for both sensors and front-end electronics. The latter, called FE-I4, is processed at IBM and is the biggest front end of this kind ever designed, with a surface of ˜4 cm2. The performance of 3D devices from several wafers was evaluated before and after bump-bonding. Key design aspects, device fabrication plans and quality assurance tests during the 3D sensor prototyping phase are discussed in this paper.

  17. Prototyping a Sensor Enabled 3d Citymodel on Geospatial Managed Objects

    NASA Astrophysics Data System (ADS)

    Kjems, E.; Kolář, J.

    2013-09-01

    One of the major development efforts within the GI Science domain points toward sensor-based information and the usage of real-time information coming from geographically referenced features in general. At the same time, 3D city models are mostly justified as objects for visualization purposes rather than constituting the foundation of a geographic data representation of the world. The combination of 3D city models and real-time information based systems, though, can provide a whole new setup for data fusion within an urban environment and provide time-critical information, preserving our limited resources in the most sustainable way. Using 3D models with consistent object definitions gives us the possibility to avoid troublesome abstractions of reality and to design even complex urban systems fusing information from various sources of data. These systems are difficult to design with the traditional software development approach based on major software packages and traditional data exchange. The data stream varies from urban domain to urban domain and from system to system, which makes it almost impossible to design a complete system taking care of all thinkable instances, now and in the future, within one constrained software design. On several occasions we have been advocating a new and advanced formulation of real-world features using the concept of Geospatial Managed Objects (GMO). This paper presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily by the Norwegian Research Council, in which the concept of GMOs has been applied in various situations on various running platforms of an urban system. The paper focuses on user experiences and interfaces rather than core technical and developmental issues. The project was primarily focused on prototyping rather than realistic implementations, although the results concerning applicability are quite clear.

  18. Particle-based optical pressure sensors for 3D pressure mapping.

    PubMed

    Banerjee, Niladri; Xie, Yan; Chalaseni, Sandeep; Mastrangelo, Carlos H

    2015-10-01

    This paper presents particle-based optical pressure sensors for in-flow pressure sensing, especially for microfluidic environments. Three generations of pressure-sensitive particles have been developed: flat planar particles, particles with integrated retroreflectors, and spherical microballoon particles. The first two versions suffer from the dependence of the pressure measurement on particle orientation in 3D space and on the angle of interrogation. The third generation of microspherical particles, with spherical symmetry, solves these problems, making particle-based manometry in a microfluidic environment a viable and efficient methodology. Static and dynamic pressure measurements have been performed in liquid medium for long periods of time over a pressure range of atmospheric to 40 psi. Spherical particles with a radius of 12 μm and a balloon-wall thickness of 0.5 μm are effective for more than 5 h in this pressure range with an error of less than 5%. PMID:26342493

  19. Quality Assessment of 3d Reconstruction Using Fisheye and Perspective Sensors

    NASA Astrophysics Data System (ADS)

    Strecha, C.; Zoller, R.; Rutishauser, S.; Brot, B.; Schneider-Zapp, K.; Chovancova, V.; Krull, M.; Glassey, L.

    2015-03-01

    Recent mathematical advances, growing alongside the use of unmanned aerial vehicles, have not only overcome the restriction of roll and pitch angles during flight but also enabled the use of non-metric cameras in photogrammetric methods, providing more flexibility for sensor selection. Fisheye cameras, for example, advantageously provide images with wide coverage; however, these images are extremely distorted and their non-uniform resolution makes them more difficult to use for mapping or terrestrial 3D modelling. In this paper, we compare the usability of different camera-lens combinations, using the complete workflow implemented in Pix4Dmapper to achieve the final terrestrial reconstruction of a well-known historical site in Switzerland: the Chillon Castle. We assess the accuracy of the outcome acquired by consumer cameras with perspective and fisheye lenses, comparing the results to a laser scanner point cloud.

  20. Insights from a 3-D temperature sensors mooring on stratified ocean turbulence

    NASA Astrophysics Data System (ADS)

    Haren, Hans; Cimatoribus, Andrea A.; Cyr, Frédéric; Gostiaux, Louis

    2016-05-01

    A unique small-scale 3-D mooring array has been designed consisting of five parallel lines, 100 m long and 4 m apart, and holding up to 550 high-resolution temperature sensors. It is built for quantitative studies on the evolution of stratified turbulence by internal wave breaking in geophysical flows at scales which go beyond that of a laboratory. Here we present measurements from above a steep slope of Mount Josephine, NE Atlantic where internal wave breaking occurs regularly. Vertical and horizontal coherence spectra show an aspect ratio of 0.25-0.5 near the buoyancy frequency, evidencing anisotropy. At higher frequencies, the transition to isotropy (aspect ratio of 1) is found within the inertial subrange. Above the continuous turbulence spectrum in this subrange, isolated peaks are visible that locally increase the spectral width, in contrast with open ocean spectra. Their energy levels are found to be proportional to the tidal energy level.

  1. Upper Extremity 3D Reachable Workspace Assessment in ALS by Kinect sensor

    PubMed Central

    Oskarsson, Bjorn; Joyce, Nanette C.; de Bie, Evan; Nicorici, Alina; Bajcsy, Ruzena; Kurillo, Gregorij; Han, Jay J.

    2016-01-01

    Introduction Reachable workspace is a measure that provides clinically meaningful information regarding arm function. In this study, a Kinect sensor was used to determine the spectrum of 3D reachable workspace encountered in a cross-sectional cohort of individuals with ALS. Method Bilateral 3D reachable workspace was recorded from 10 subjects with ALS and 23 healthy controls. The data were normalized by each individual's arm length to obtain a reachable workspace relative surface area (RSA). Concurrent validity was assessed by correlation with ALSFRSr scores. Results The Kinect-measured reachable workspace RSA differed significantly between the ALS and control subjects (0.579±0.226 vs. 0.786±0.069; P<0.001). The RSA demonstrated correlation with ALSFRSr upper extremity items (Spearman correlation ρ=0.569; P=0.009). With worsening upper extremity function as categorized by the ALSFRSr, the reachable workspace also decreased progressively. Conclusions This study demonstrates the feasibility and potential of using a novel Kinect-based reachable workspace outcome measure in ALS. PMID:25965847
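    A minimal sketch of the arm-length normalization behind the RSA metric follows; the helper name and the exact normalization convention are assumptions for illustration (the study's convention may differ), but the dimensional idea is simply to scale the measured envelope area by the square of arm length so subjects of different sizes are comparable.

    ```python
    # Hypothetical helper illustrating arm-length normalisation: dividing a
    # workspace surface area (m^2) by arm length squared (m^2) yields a
    # dimensionless relative surface area (RSA).
    def relative_surface_area(workspace_area_m2, arm_length_m):
        """Dimensionless RSA: envelope area scaled by arm length squared.
        (The study's exact normalisation convention may differ.)"""
        return workspace_area_m2 / arm_length_m ** 2
    ```
    
    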

  2. The valuable use of Microsoft Kinect™ sensor 3D kinematic in the rehabilitation process in basketball

    NASA Astrophysics Data System (ADS)

    Braidot, Ariel; Favaretto, Guillermo; Frisoli, Melisa; Gemignani, Diego; Gumpel, Gustavo; Massuh, Roberto; Rayan, Josefina; Turin, Matías

    2016-04-01

    Subjects who practice sports, either as professionals or amateurs, have a high incidence of knee injuries. A few publications present kinematic studies of lateral-structure knee injuries, including meniscal injuries (meniscal tears or chondral injury) without anterior cruciate ligament rupture. The use of standard motion capture systems for measuring outdoor sports is hard to implement due to many operational reasons. The recently released Microsoft Kinect™ is a sensor that was developed to track movements for gaming purposes and has seen increased use in clinical applications. The fact that this device is a simple and portable tool allows the acquisition of data on common sport movements in the field. The development and testing of a set of protocols for 3D kinematic measurement using the Microsoft Kinect™ system is presented in this paper. The 3D kinematic evaluation algorithms were developed from available information and with the use of Microsoft's Software Development Kit 1.8 (SDK). Along with this, an algorithm for calculating the lower-limb joint angles was implemented. Thirty healthy adult volunteers were measured, using five different recording protocols for sport-characteristic gestures which involve high knee-injury risk in athletes.

  3. Image synchronization for 3D application using the NanEye sensor

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Based on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal form factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame periods and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of smaller than 3 mm diameter 3D stereo vision equipment in the medical endoscopic context, such as endoscopic surgical robotics or minimally invasive surgery.
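    The voltage-based frequency regulation described above amounts to a simple feedback loop. The following is a hedged sketch with assumed numbers (target line period, proportional gain, and voltage limits are invented for illustration), not the authors' FPGA control core:

    ```python
    # Sketch of the frequency-locking idea: a self-timed sensor's clock is
    # steered by adjusting its supply voltage in proportion to the error
    # between the measured and target line periods. All constants assumed.
    TARGET_LINE_PERIOD_US = 28.0   # assumed target line period (us)
    KP = 0.05                      # assumed proportional gain (V per us of error)
    V_MIN, V_MAX = 1.6, 2.1        # assumed safe supply-voltage range (V)

    def adjust_supply(v_now: float, measured_period_us: float) -> float:
        """One control step: nudge the supply voltage toward the period target."""
        error = measured_period_us - TARGET_LINE_PERIOD_US
        # A longer-than-target period means the sensor runs slow: raise voltage.
        v_next = v_now + KP * error
        return min(V_MAX, max(V_MIN, v_next))
    ```

    Running this step once per frame for each Slave camera, with the Master's measured period as the target, is the essence of the phase- and frequency-locking scheme.
    
    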

  4. MBE based HgCdTe APDs and 3D LADAR sensors

    NASA Astrophysics Data System (ADS)

    Jack, Michael; Asbrock, Jim; Bailey, Steven; Baley, Diane; Chapman, George; Crawford, Gina; Drafahl, Betsy; Herrin, Eileen; Kvaas, Robert; McKeag, William; Randall, Valerie; De Lyon, Terry; Hunter, Andy; Jensen, John; Roberts, Tom; Trotta, Patrick; Cook, T. Dean

    2007-04-01

    Raytheon is developing HgCdTe APD arrays and sensor chip assemblies (SCAs) for scanning and staring LADAR systems. The nonlinear characteristics of APDs operating in moderate gain mode place severe requirements on layer thickness and doping uniformity as well as defect density. MBE-based HgCdTe APD arrays, engineered for high performance, meet the stringent requirements of low defects, excellent uniformity and reproducibility. In situ controls for alloy composition and substrate temperature have been implemented at HRL, LLC and Raytheon Vision Systems and enable consistent run-to-run results. The novel epitaxial design, using a separate absorption-multiplication (SAM) architecture, enables the realization of the unique advantages of HgCdTe, including tunable wavelength, low noise, high fill factor, low crosstalk, and ambient operation. Focal planes have been built by integrating MBE detector arrays processed in a 2 x 128 format with a 2 x 128 scanning ROIC. The ROIC reports both range and intensity and can detect multiple laser returns, with each pixel autonomously reporting its return. FPAs show exceptionally good bias uniformity, <1% at an average gain of 10. A recent breakthrough in device design has resulted in APDs operating at 300 K with essentially no excess noise at gains in excess of 100, low NEP <1 nW and GHz bandwidth. 3D LADAR sensors utilizing these FPAs have been integrated and demonstrated both at Raytheon Missile Systems and the Naval Air Warfare Center Weapons Division at China Lake. Excellent spatial and range resolution has been achieved, with 3D imagery demonstrated at both short range and long range. Ongoing development, under an Air Force sponsored MANTECH program, of high performance HgCdTe MBE APDs grown on large silicon wafers promises significant FPA cost reduction, both by increasing the number of arrays on a given wafer and by enabling automated processing.

  5. Spatio-temporal interpolation of soil moisture in 3D+T using automated sensor network data

    NASA Astrophysics Data System (ADS)

    Gasch, C.; Hengl, T.; Magney, T. S.; Brown, D. J.; Gräler, B.

    2014-12-01

    Soil sensor networks provide frequent in situ measurements of dynamic soil properties at fixed locations, producing data in 2- or 3-dimensions and through time (2D+T and 3D+T). Spatio-temporal interpolation of 3D+T point data produces continuous estimates that can then be used for prediction at unsampled times and locations, as input for process models, and can simply aid in visualization of properties through space and time. Regression-kriging with 3D and 2D+T data has successfully been implemented, but currently the field of geostatistics lacks an analytical framework for modeling 3D+T data. Our objective is to develop robust 3D+T models for mapping dynamic soil data that has been collected with high spatial and temporal resolution. For this analysis, we use data collected from a sensor network installed on the R.J. Cook Agronomy Farm (CAF), a 37-ha Long-Term Agro-Ecosystem Research (LTAR) site in Pullman, WA. For five years, the sensors have collected hourly measurements of soil volumetric water content at 42 locations and five depths. The CAF dataset also includes a digital elevation model and derivatives, a soil unit description map, crop rotations, electromagnetic induction surveys, daily meteorological data, and seasonal satellite imagery. The soil-water sensor data, combined with the spatial and temporal covariates, provide an ideal dataset for developing 3D+T models. The presentation will include preliminary results and address main implementation strategies.
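    As a stand-in for the regression-kriging framework the abstract discusses (which is far more principled), a minimal 3D+T interpolation can be sketched with inverse-distance weighting, where the space-time anisotropy ratio (how many metres one unit of time "counts as") is an assumed tuning parameter:

    ```python
    import numpy as np

    # Minimal 3D+T inverse-distance-weighting sketch (illustrative only, not
    # regression-kriging): observations are (x, y, z, t) tuples, and time is
    # rescaled by an assumed anisotropy factor before computing distances.
    def idw_3dt(obs_xyzt, obs_vals, query_xyzt, time_scale=10.0, power=2.0):
        """Interpolate at query points from (x, y, z, t) observations."""
        obs = np.asarray(obs_xyzt, dtype=float)
        q = np.asarray(query_xyzt, dtype=float)
        scale = np.array([1.0, 1.0, 1.0, time_scale])
        d = np.linalg.norm(obs[None, :, :] * scale - q[:, None, :] * scale, axis=2)
        d = np.maximum(d, 1e-9)              # avoid division by zero at data points
        w = 1.0 / d**power
        return (w @ np.asarray(obs_vals, dtype=float)) / w.sum(axis=1)
    ```
    
    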

  6. 3-D Flash Lidar Performance in Flight Testing on the Morpheus Autonomous, Rocket-Propelled Lander to a Lunar-Like Hazard Field

    NASA Technical Reports Server (NTRS)

    Roback, Vincent E.; Amzajerdian, Farzin; Bulyshev, Alexander E.; Brewster, Paul F.; Barnes, Bruce W.

    2016-01-01

    For the first time, a 3-D imaging Flash Lidar instrument has been used in flight to scan a lunar-like hazard field, build a 3-D Digital Elevation Map (DEM), identify a safe landing site, and, in concert with an experimental Guidance, Navigation, and Control (GN&C) system, help to guide the Morpheus autonomous, rocket-propelled, free-flying lander to that safe site on the hazard field. The flight tests served as the TRL 6 demonstration of the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) system and included launch from NASA-Kennedy, a lunar-like descent trajectory from an altitude of 250 m, and landing on a lunar-like hazard field of rocks, craters, hazardous slopes, and safe sites 400 m down-range. The ALHAT project developed a system capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar is a second-generation, compact, real-time, air-cooled instrument. Based upon extensive on-ground characterization at flight ranges, the Flash Lidar was shown to be capable of imaging hazards from a slant range of 1 km with an 8 cm range precision and a range accuracy better than 35 cm, both at 1-sigma. The Flash Lidar identified landing hazards as small as 30 cm from the maximum slant range which Morpheus could achieve (450 m); however, under certain wind conditions it was susceptible to scintillation arising from air heated by the rocket engine and to pre-triggering on a dust cloud created during launch and transported down-range by the wind.
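    The DEM-based hazard screening can be caricatured as a slope-and-roughness test over each candidate landing patch (thresholds, cell size, and the roughness proxy here are assumed for illustration; the ALHAT algorithms are far more sophisticated):

    ```python
    import numpy as np

    # Toy DEM hazard screen (not the ALHAT algorithm): a patch is "safe" when
    # local surface slope and a crude roughness proxy stay under assumed limits.
    def is_safe_site(dem_patch, cell_m=0.1, max_slope_deg=10.0, max_rough_m=0.3):
        """Check a square DEM patch (heights in metres) for slope/roughness hazards."""
        z = np.asarray(dem_patch, dtype=float)
        gy, gx = np.gradient(z, cell_m)                  # per-axis surface slope
        slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
        rough = z.max() - z.min()                        # crude roughness proxy
        return bool(slope_deg.max() <= max_slope_deg and rough <= max_rough_m)
    ```
    
    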

  7. Sensor fusion III: 3-D perception and recognition; Proceedings of the Meeting, Boston, MA, Nov. 5-8, 1990

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1991-01-01

    The volume on data fusion from multiple sources discusses fusing multiple views, temporal analysis and 3D motion interpretation, sensor fusion and eye-to-hand coordination, and integration in human shape perception. Attention is given to surface reconstruction, statistical methods in sensor fusion, fusing sensor data with environmental knowledge, computational models for sensor fusion, and evaluation and selection of sensor fusion techniques. Topics addressed include the structure of a scene from two and three projections, optical flow techniques for moving target detection, tactical sensor-based exploration in a robotic environment, and the fusion of human and machine skills for remote robotic operations. Also discussed are K-nearest-neighbor concepts for sensor fusion, surface reconstruction with discontinuities, a sensor-knowledge-command fusion paradigm for man-machine systems, coordinating sensing and local navigation, and terrain map matching using multisensing techniques for applications to autonomous vehicle navigation.

  8. Enhanced detection of 3D individual trees in forested areas using airborne full-waveform LiDAR data by combining normalized cuts with spatial density clustering

    NASA Astrophysics Data System (ADS)

    Yao, W.; Krzystek, P.; Heurich, M.

    2013-10-01

    A detailed understanding of the spatial distribution of forest understory is important but difficult to obtain. LiDAR remote sensing has been developing into a promising complement to conventional field work towards automated forest inventory. Unfortunately, understory (up to 50% of the top-tree height) in mixed and multilayered forests is often ignored due to a difficult observation scenario and the limitations of tree detection algorithms. Currently, full-waveform (FWF) LiDAR, with its high penetration ability through overstory crowns, gives new hope of resolving the forest understory. A former approach based on 3D segmentation confirmed that the tree detection rates in both the middle and lower forest layers are still low. Therefore, detecting sub-dominant and suppressed trees cannot be regarded as fully solved. In this work, we aim to improve the performance of the FWF laser scanner for the mapping of forest understory. The paper develops an enhanced methodology for detecting 3D individual trees by partitioning point clouds of airborne LiDAR. After extracting the 3D coordinates of the laser beam echoes, the pulse intensity and the pulse width by waveform decomposition, the newly developed approach resolves 3D single trees by an integrated method which delineates tree crowns by applying normalized cuts segmentation to the graph structure of local dense modes in point clouds constructed by mean shift clustering. In the context of our strategy, the mean shift clusters approximate primitives of (sub-)single trees in LiDAR data and allow more significant features to be defined, reflecting geometric and reflectional characteristics at the single tree level. The developed methodology can be regarded as an object-based point cloud analysis approach for tree detection and is applied to datasets captured with the Riegl LMS-Q560 laser scanner at a point density of 25 points/m2 in the Bavarian Forest National Park, Germany, under both leaf-on and leaf-off conditions.
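    The mean-shift step used above to find local density modes in the point cloud can be illustrated with a toy flat-kernel version (the bandwidth is an assumed parameter, and the full pipeline additionally applies normalized-cuts segmentation on the graph of these modes):

    ```python
    import numpy as np

    # Toy flat-kernel mean shift (illustrative stand-in for the paper's
    # clustering step): every point is iteratively moved to the centroid of
    # its neighbours within the bandwidth, converging on local density modes.
    def mean_shift_modes(points, bandwidth=2.0, iters=30):
        """Shift every point toward its local density mode (flat kernel)."""
        pts = np.asarray(points, dtype=float)
        modes = pts.copy()
        for _ in range(iters):
            for i, p in enumerate(modes):
                d = np.linalg.norm(pts - p, axis=1)
                nbrs = pts[d < bandwidth]      # points within the kernel window
                modes[i] = nbrs.mean(axis=0)   # move to their centroid
        return modes
    ```

    Points that converge to the same mode form one cluster; in the paper's strategy such clusters serve as the (sub-)tree primitives fed into normalized cuts.
    
    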

  9. Using a magnetite/thermoplastic composite in 3D printing of direct replacements for commercially available flow sensors

    NASA Astrophysics Data System (ADS)

    Leigh, S. J.; Purssell, C. P.; Billson, D. R.; Hutchins, D. A.

    2014-09-01

    Flow sensing is an essential technique required for a wide range of application environments, ranging from liquid dispensing to utility monitoring. A number of different methodologies and deployment strategies have been devised to cover the diverse range of potential application areas. The ability to easily create new bespoke sensors for new applications is therefore of natural interest. Fused deposition modelling is a 3D printing technology based upon the fabrication of 3D structures in a layer-by-layer fashion using extruded strands of molten thermoplastic. The technology was developed in the late 1980s but has only recently come to wider attention outside of specialist applications and rapid prototyping, due to the advent of low-cost 3D printing platforms such as the RepRap. Due to the relatively low cost of the printers and feedstock materials, these printers are ideal candidates for wide-scale installation as localized manufacturing platforms to quickly produce replacement parts when components fail. One of the current limitations of the technology is the availability of functional printing materials to facilitate the production of complex functional 3D objects and devices beyond mere concept prototypes. This paper presents the formulation of a simple magnetite nanoparticle-loaded thermoplastic composite and its incorporation into a 3D printed flow sensor in order to mimic the function of a commercially available flow-sensing device. Using the multi-material printing capability of the 3D printer allows a much smaller amount of functional material to be used in comparison to the commercial flow sensor, by only placing the material where it is specifically required. Analysis of the printed sensor also revealed a much more linear response to increasing flow rate of water, showing that 3D printed devices have the potential to perform at least as well as a conventionally produced sensor.

  10. Coherent Doppler Wind Lidar Development at NASA Langley Research Center for NASA Space-Based 3-D Winds Mission

    NASA Technical Reports Server (NTRS)

    Singh, Upendra N.; Kavaya, Michael J.; Yu, Jirong; Koch, Grady J.

    2012-01-01

    We review the 20-plus years of pulsed transmit laser development at NASA Langley Research Center (LaRC) to enable a coherent Doppler wind lidar to measure global winds from earth orbit. We briefly also discuss the many other ingredients needed to prepare for this space mission.

  11. Practical issues in automatic 3D reconstruction and navigation applications using man-portable or vehicle-mounted sensors

    NASA Astrophysics Data System (ADS)

    Harris, Chris; Stennett, Carl

    2012-09-01

    The navigation of an autonomous robot vehicle and person localisation in the absence of GPS both rely on using local sensors to build a model of the 3D environment. Accomplishing such capabilities is not straightforward - there are many choices to be made of sensor and processing algorithms. Roke Manor Research has broad experience in this field, gained from building and characterising real-time systems that operate in the real world. This includes developing localization for planetary and indoor rovers, model building of indoor and outdoor environments, and most recently, the building of texture-mapped 3D surface models.

  12. 3D active edge silicon sensors: Device processing, yield and QA for the ATLAS-IBL production

    NASA Astrophysics Data System (ADS)

    Da Vià, Cinzia; Boscardin, Maurizio; Dalla Betta, GianFranco; Darbo, Giovanni; Fleta, Celeste; Gemme, Claudia; Giacomini, Gabriele; Grenier, Philippe; Grinstein, Sebastian; Hansen, Thor-Erik; Hasi, Jasmine; Kenney, Christopher; Kok, Angela; La Rosa, Alessandro; Micelli, Andrea; Parker, Sherwood; Pellegrini, Giulio; Pohl, David-Leon; Povoli, Marco; Vianello, Elisa; Zorzi, Nicola; Watts, S. J.

    2013-01-01

    3D silicon sensors, where plasma micromachining is used to etch deep narrow apertures in the silicon substrate to form the electrodes of PIN junctions, were successfully manufactured in facilities in Europe and the USA. In 2011 the technology underwent a qualification process to establish its maturity for a medium-scale production for the construction of a pixel layer for vertex detection, the Insertable B-Layer (IBL), at the CERN-LHC ATLAS experiment. The IBL collaboration, following the recommendation of the review panel, decided to complete the production of planar and 3D sensors and endorsed the proposal to build enough modules for a mixed IBL sensor scenario in which 3D modules populate the forward and backward 25% of each stave. The production of planar sensors will also allow coverage of 100% of the IBL, should that option be required. This paper describes the processing strategy which allowed successful 3D sensor production, some of the Quality Assurance (QA) tests performed during the pre-production phase, and the production yield to date.

  13. Photon counting x-ray CT with 3D holograms by CdTe line sensor

    NASA Astrophysics Data System (ADS)

    Koike, A.; Yomori, M.; Morii, H.; Neo, Y.; Aoki, T.; Mimura, H.

    2008-08-01

    A novel 3-D display system is required in the medical treatment and non-destructive testing fields. In these fields, X-ray CT systems are used to obtain 3-D information. However, X-ray CT data are not presented as meaningful 3-D information, and no practical 3-D display system exists. In this paper, we therefore propose an X-ray 3-D CT display system that combines a photon-counting X-ray CT system with a holographic image display system. The advantage of this system was demonstrated by comparing the holographic calculation time and the recognizability of the reconstructed image.

  14. 3-D Real-Time Lidar scanning measurements to assess and determine the impact of aerosol sources in densely populated cities in the south of France

    NASA Astrophysics Data System (ADS)

    Lolli, S.; Sauvage, L.; Loaec, S.; Guinot, B.; Fofana, M.

    2009-12-01

    EZ Lidar, produced by LEOSPHERE, was deployed in several measurement campaigns from November 2008 to July 2009 in densely populated cities in the south of France to assess the impact of tunnels, highways, airports, industry and other pollution sources on aerosol emissions. PM2.5 and PM1 are particularly harmful to the population, so studying and fully characterizing pollution sources is crucial for reducing these emissions and improving air quality in large metropolitan areas. EZ Lidar can produce a 3-D scan almost instantaneously. The backscattered power is range corrected and mapped to false colors from blue to red in proportion to aerosol density (blue: low aerosol density; red: high aerosol density). Each scan can be treated as a frame superimposed on a satellite map of the site, and the temporal sequence of frames fully characterizes the pollution sources. The paper presents the 3-D scan analysis over the selected sites, the identification of the principal emission sources, and their temporal evolution.
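
    The range-correction and false-color mapping described above can be sketched as follows. This is a minimal illustration with synthetic data standing in for real EZ Lidar returns; the profile shape, plume parameters and percentile scaling are assumptions, not instrument specifics:

```python
import numpy as np

def range_corrected_signal(power, ranges):
    """Range-correct raw lidar backscatter: P_rc(r) = P(r) * r^2.

    The r^2 factor compensates for geometric spreading of the return,
    so the corrected value scales with aerosol density along the path."""
    return power * ranges**2

def to_false_color(p_rc):
    """Map range-corrected power onto a 0..1 scale for a blue-to-red
    colormap (low aerosol density -> blue, high -> red)."""
    lo, hi = np.percentile(p_rc, [2, 98])  # robust scaling against outliers
    return np.clip((p_rc - lo) / (hi - lo), 0.0, 1.0)

# Synthetic single-shot profile: exponential attenuation plus an
# assumed aerosol plume at 1500 m (illustrative values only).
ranges = np.linspace(100.0, 5000.0, 500)                   # m
power = np.exp(-ranges / 2000.0) / ranges**2
power += 5e-9 * np.exp(-((ranges - 1500.0) / 100.0) ** 2)  # plume

p_rc = range_corrected_signal(power, ranges)
colors = to_false_color(p_rc)
print(colors.min(), colors.max())  # → 0.0 1.0 (normalized color scale)
```

    Each such normalized profile would form one ray of a scan frame, which is then draped over the site's satellite map.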

  15. Combination of spaceborne sensor(s) and 3-D aerosol models to assess global daily near-surface air quality

    NASA Astrophysics Data System (ADS)

    Kacenelenbogen, M.; Redemann, J.; Russell, P. B.

    2009-12-01

    Aerosol Particulate Matter (PM), measured by ground-based monitoring stations, is used as a standard by the EPA (Environmental Protection Agency) to evaluate daily air quality. PM monitoring is particularly important for human health protection because exposure to suspended particles can contribute, among other effects, to lung and respiratory diseases and even premature death. However, most PM monitoring stations are located close to cities, leaving large areas without any operational data. Satellite remote sensing is well suited to global coverage of the aerosol load and can provide an independent and supplemental data source to in situ monitoring. Nevertheless, PM at the ground cannot easily be determined from satellite AOD (Aerosol Optical Depth) without additional information on the optical/microphysical properties and vertical distribution of the aerosols. The objective of this study is to explore the efficacy and accuracy of combining a 3-D aerosol transport model and satellite remote sensing as a cost-effective approach for estimating ground-level PM on a global and daily basis. The estimation of near-surface PM will use the vertical distribution (and, if possible, the physicochemical properties) of the aerosols inferred from a transport model, together with the total column aerosol load retrieved by satellite sensor(s). The first step is to select a chemical transport model (CTM) that provides “good” simulated aerosol vertical profiles. A few global (e.g., WRF-Chem-GOCART) and regional (e.g., MM5-CMAQ, PM-CAMx) CTMs will be compared during selected airborne campaigns such as ARCTAS-CARB (Arctic Research of the Composition of the Troposphere from Aircraft and Satellites - California Air Resources Board). The next step will be to devise an algorithm that combines the satellite and model data to infer PM mass estimates at the ground, after evaluating different spaceborne instruments and possible multi-sensor combinations.

  16. Piezoresistive Sensor with High Elasticity Based on 3D Hybrid Network of Sponge@CNTs@Ag NPs.

    PubMed

    Zhang, Hui; Liu, Nishuang; Shi, Yuling; Liu, Weijie; Yue, Yang; Wang, Siliang; Ma, Yanan; Wen, Li; Li, Luying; Long, Fei; Zou, Zhengguang; Gao, Yihua

    2016-08-31

    Pressure sensors with high elasticity are in great demand for the realization of intelligent sensing, but a simple, inexpensive, and scalable method for manufacturing such sensors is still needed. Here, we report an efficient, simple, and repeatable "dipping and coating" process to manufacture a piezoresistive sensor with high elasticity, based on a homogeneous 3D hybrid network of carbon nanotubes@silver nanoparticles (CNTs@Ag NPs) anchored on a skeleton sponge. Highly elastic, sensitive, and wearable sensors are obtained using the porous structure of the sponge and the synergistic effect of the CNTs/Ag NPs. Our sensor was also tested over 2000 compression-release cycles, exhibiting excellent elasticity and cycling stability. Sensors with high performance and a simple fabrication process are promising devices for commercial production in various electronic devices, for example, sport performance monitoring and man-machine interfaces. PMID:27482721

  17. Development of 3D carbon nanotube interdigitated finger electrodes on polymer substrate for flexible capacitive sensor application

    NASA Astrophysics Data System (ADS)

    Hu, Chih-Fan; Wang, Jhih-Yu; Liu, Yu-Chia; Tsai, Ming-Han; Fang, Weileun

    2013-11-01

    This study reports a novel approach to the implementation of 3D carbon nanotube (CNT) interdigitated finger electrodes on flexible polymer, and the detection of strain, bending curvature, tactile force and proximity distance is demonstrated. The merits of the presented CNT-based flexible sensor are as follows: (1) the silicon substrate is patterned to enable the formation of 3D vertically aligned CNTs on the substrate surface; (2) polymer molding on the silicon substrate with 3D CNTs is further employed to transfer the 3D CNTs to the flexible polymer substrate; (3) the CNT-polymer composite (˜70 μm in height) is employed to form interdigitated finger electrodes to increase the sensing area and initial capacitance; (4) other structures such as electrical routings, resistors and mechanical supports are also available using the CNT-polymer composite. The preliminary fabrication results demonstrate a flexible capacitive sensor with 50 μm high CNT interdigitated electrodes on a poly-dimethylsiloxane substrate. Tests show that the typical capacitance change is several tens of fF and the gauge factor is in the range of 3.44-4.88 for strain and bending curvature measurement; the sensitivity of the tactile sensor is 1.11% N-1, and a proximity distance of about 2 mm from the sensor can be detected.
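
    As an illustration of how the reported gauge factor relates to the measured quantities: for a capacitive strain sensor, the gauge factor is the relative capacitance change divided by the applied strain. The numbers below are hypothetical, chosen only to land inside the 3.44-4.88 range quoted in the abstract:

```python
def gauge_factor(c0_fF, dc_fF, strain):
    """Gauge factor of a capacitive strain sensor: GF = (dC/C0) / strain."""
    return (dc_fF / c0_fF) / strain

# Assumed values: a few-pF initial capacitance, a few tens of fF
# capacitance change, 0.5% strain (none of these are from the paper).
c0 = 2000.0   # fF, assumed initial capacitance
dc = 40.0     # fF, assumed capacitance change
strain = 0.005
gf = gauge_factor(c0, dc, strain)
print(round(gf, 2))  # → 4.0, inside the reported 3.44-4.88 range
```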

  18. 3D imaging with a single-sided sensor: an open tomograph

    NASA Astrophysics Data System (ADS)

    Perlo, J.; Casanova, F.; Blümich, B.

    2004-02-01

    An open tomograph to image volume regions near the surface of large objects is described. The central achievement in getting such a tomograph to work is the design of a fast two-dimensional pure phase encoding imaging method to produce a cross-sectional image in the presence of highly inhomogeneous fields. The method takes advantage of the multi-echo acquisition in a Carr-Purcell-Meiboom-Gill (CPMG)-like sequence to significantly reduce the experimental time needed to obtain a 2D image or to spatially resolve relaxation times across the sensitive volume in a single imaging experiment. Depending on T2, the imaging time can be reduced by up to two orders of magnitude compared to that needed by the single-echo imaging technique. The complete echo train decay has also been used to produce T2 contrast in the images and to spatially resolve the T2 distribution of an inhomogeneous object, showing that variations of structural properties such as the cross-link density of rubber samples can be distinguished by this method. The sequence has been implemented on a single-sided sensor equipped with an optimized magnet geometry and a suitable gradient coil system that provides two perpendicular pulsed gradient fields. The static magnetic field defines flat planes of constant frequency parallel to the surface of the scanner that can be selected by retuning the probe frequency to achieve slice selection in the object. Combining the slice selection obtained under the static gradient of the open magnet with the two perpendicular pulsed gradient fields, 3D spatial resolution is obtained.
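
    The quoted speed-up of the multi-echo CPMG acquisition can be estimated from how many echoes stay above a usable signal level before T2 decay. The sketch below uses hypothetical values (T2 = 100 ms, 1 ms echo spacing, 5% amplitude floor) purely to illustrate why the reduction can approach two orders of magnitude:

```python
import math

def usable_echoes(t2_ms, echo_spacing_ms, min_fraction=0.05):
    """Largest echo count n with exp(-n * tE / T2) >= min_fraction,
    i.e. how many phase-encoded echoes one excitation can supply
    before the CPMG train decays below a usable amplitude."""
    return int(t2_ms / echo_spacing_ms * math.log(1.0 / min_fraction))

# Assumed parameters, not taken from the paper:
n = usable_echoes(100.0, 1.0)
print(n)  # → 299 echoes per excitation, i.e. roughly two orders of
          #   magnitude fewer excitations than single-echo imaging
```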

  19. Parallel robot for micro assembly with integrated innovative optical 3D-sensor

    NASA Astrophysics Data System (ADS)

    Hesselbach, Juergen; Ispas, Diana; Pokar, Gero; Soetebier, Sven; Tutsch, Rainer

    2002-10-01

    Recent advances in the fields of MEMS and MOEMS often require precise assembly of very small parts with an accuracy of a few microns. In order to meet this demand, a new approach using a robot based on parallel mechanisms in combination with a novel 3D-vision system has been chosen. The planar parallel robot structure with 2 DOF provides a high resolution in the XY-plane. It carries two additional serial axes for linear and rotational movement in/about the z direction. In order to achieve high precision as well as good dynamic capabilities, the drive concept for the parallel (main) axes incorporates air bearings in combination with linear electric servo motors. High-accuracy position feedback is provided by optical encoders with a resolution of 0.1 μm. To allow for visualization and visual control of assembly processes, a camera module fits into the hollow tool head. It consists of a miniature CCD camera and a light source. In addition, a modular gripper support is integrated into the tool head. To increase accuracy, a control loop based on an optoelectronic sensor will be implemented. As a result of an in-depth analysis of different approaches, a photogrammetric system using one single camera and special beam-splitting optics was chosen. A pattern of elliptical marks is applied to the surfaces of workpiece and gripper. Using a model-based recognition algorithm, the image processing software identifies the gripper and the workpiece and determines their relative position. A deviation vector is calculated and fed into the robot control to guide the gripper.

  20. Using LiDAR data to measure the 3D green biomass of Beijing urban forest in China.

    PubMed

    He, Cheng; Convertino, Matteo; Feng, Zhongke; Zhang, Siyu

    2013-01-01

    The purpose of this paper is to find a new approach to measuring the 3D green biomass of urban forest and to test its precision. In this study, the 3D green biomass was acquired on the basis of a remote sensing inversion model in which each standing tree was first scanned by a Terrestrial Laser Scanner to capture its point cloud data; the point cloud was then opened in a digital mapping data acquisition system to obtain elevations in an independent coordinate system; and finally the individual volumes were associated with the SPOT5 (Système Probatoire d'Observation de la Terre) remote sensing image by means of tools such as SPSS (Statistical Product and Service Solutions), GIS (Geographic Information System), RS (Remote Sensing) and spatial analysis software (FARO SCENE and Geomagic Studio 11). The results showed that the 3D green biomass of the Beijing urban forest was 399.1295 million m(3), of which coniferous accounted for 28.7871 million m(3) and broad-leaf for 370.3424 million m(3). The accuracy of the 3D green biomass was over 85% in comparison with values from 235 field samples collected by a typical sampling scheme, suggesting that the precision of the 3D forest green biomass estimated from the SPOT5 image meets requirements. This represents an improvement over the conventional method because it not only provides a basis for evaluating indices of Beijing urban greening, but also introduces a new technique to assess 3D green biomass in other cities. PMID:24146792

  1. Using LiDAR Data to Measure the 3D Green Biomass of Beijing Urban Forest in China

    PubMed Central

    He, Cheng; Convertino, Matteo; Feng, Zhongke; Zhang, Siyu

    2013-01-01

    The purpose of this paper is to find a new approach to measuring the 3D green biomass of urban forest and to test its precision. In this study, the 3D green biomass was acquired on the basis of a remote sensing inversion model in which each standing tree was first scanned by a Terrestrial Laser Scanner to capture its point cloud data; the point cloud was then opened in a digital mapping data acquisition system to obtain elevations in an independent coordinate system; and finally the individual volumes were associated with the SPOT5 (Système Probatoire d'Observation de la Terre) remote sensing image by means of tools such as SPSS (Statistical Product and Service Solutions), GIS (Geographic Information System), RS (Remote Sensing) and spatial analysis software (FARO SCENE and Geomagic Studio 11). The results showed that the 3D green biomass of the Beijing urban forest was 399.1295 million m3, of which coniferous accounted for 28.7871 million m3 and broad-leaf for 370.3424 million m3. The accuracy of the 3D green biomass was over 85% in comparison with values from 235 field samples collected by a typical sampling scheme, suggesting that the precision of the 3D forest green biomass estimated from the SPOT5 image meets requirements. This represents an improvement over the conventional method because it not only provides a basis for evaluating indices of Beijing urban greening, but also introduces a new technique to assess 3D green biomass in other cities. PMID:24146792

  2. An Inspire-Konform 3d Building Model of Bavaria Using Cadastre Information, LIDAR and Image Matching

    NASA Astrophysics Data System (ADS)

    Roschlaub, R.; Batscheider, J.

    2016-06-01

    The federal governments of Germany endeavour to create a harmonized 3D building data set based on a common application schema (the AdV-CityGML-Profile). The Bavarian Agency for Digitisation, High-Speed Internet and Surveying has launched a statewide 3D Building Model with standardized roof shapes for all 8.1 million buildings in Bavaria. For the acquisition of the 3D Building Model, LiDAR data or data from image matching are used as a basis, together with the building ground plans of the official cadastral map. The data management of the 3D Building Model is carried out by a central database using the nationwide standardized CityGML-Profile of the AdV. The 3D Building Model is updated for new buildings by terrestrial building measurements within the maintenance process of the cadastre and by image matching. In a joint research project, the Bavarian State Agency for Surveying and Geoinformation and the TUM, Chair of Geoinformatics, transformed an AdV-CityGML-Profile-based test data set of Bavarian LoD2 building models into an INSPIRE-compliant schema. For the purpose of such a transformation, the AdV provides a data specification, a test plan for 3D Building Models and a mapping table. The research project examined whether the transformation rules defined in the mapping table were unambiguous and sufficient for implementing a transformation of LoD2 data based on the AdV-CityGML-Profile into the INSPIRE schema. The proof of concept was carried out by transforming production data of the Bavarian 3D Building Model in LoD2 into the INSPIRE BU schema. In order to assure the quality of the data to be transformed, the tests specified in the AdV test plan for 3D Building Models were carried out. The AdV mapping table was checked for completeness and correctness, and amendments were made accordingly.

  3. Spatial and Spectral Characterization, Mapping, and 3D Reconstructing of Ice-wedge Polygons Using High Resolution LiDAR Data

    NASA Astrophysics Data System (ADS)

    Gangodagamage, C.; Rowland, J. C.; Skurikhin, A. N.; Wilson, C. J.; Brumby, S. P.; Painter, S. L.; Gable, C. W.; Bui, Q.; Short, L. S.; Liljedahl, A.; Hubbard, S. S.; Wainwright, H. M.; Dafflon, B.; Tweedie, C. E.; Kumar, J.; Wullschleger, S. D.

    2013-12-01

    In landscapes with ice-wedge polygons, fine-scale land surface characterization is critically important because the processes that govern the carbon cycle and hydrological dynamics are controlled by features on the order of a few to tens of meters. To characterize the fine-scale features of polygonal ground in Barrow, Alaska, we use high-resolution LiDAR-derived topographic data (elevation, slope, curvature, and a novel 'directed distance (DD)') to develop quantitative metrics that allow for the discretization and characterization of polygons (formed by seasonal freeze and thaw processes). First, we used high-resolution (0.25 m) LiDAR to show that high- and low-centered polygon features exhibit a unique signature in the Fourier power spectrum, where the landscape signature of freeze and thaw processes (~5 to 100 m) is superimposed on the signature of the coarse-scale fluvially eroded landscape (rudimentary river network). We next convolve LiDAR elevations with multiscale wavelets and objectively choose appropriate scales to map the interconnected troughs of high- and low-centered polygons. For ice wedges whose LiDAR surface expressions (troughs) are not well developed, we used a Delaunay triangulation to connect the ice-wedge network and map the topologically connected polygons. This analysis allows us to explore the 3D morphometry of these high- and low-centered polygons and develop a supervised set of ensemble characteristic templates for each polygon type as a function of directed distance (DD). These templates are used to classify the ice-wedge polygon landscape into low-centered polygons with limited troughs, and high- and low-centered polygons with well-developed trough networks. We further extend the characteristic templates to polygon ensemble slopes and curvatures as a function of DD and develop a classification scheme for microtopographic features including troughs, rims, elevated ridges, and centers for both high-centered and low-centered polygon
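
    The Fourier-spectrum signature mentioned above can be illustrated with a toy example: a synthetic DEM containing a periodic ridge pattern shows a spectral peak at the pattern's spacing. The tile size, amplitudes and noise level below are assumptions for illustration only, not values from the study:

```python
import numpy as np

def dominant_wavelength(dem, dx=0.25):
    """Locate the strongest periodic component of a square gridded DEM
    via the 2D Fourier power spectrum. Periodic microtopography (e.g.
    ice-wedge polygon networks) appears as a spectral peak at its
    characteristic spacing, superimposed on the low-frequency signature
    of the coarser fluvially eroded landscape."""
    f = np.fft.fft2(dem - dem.mean())          # remove DC before FFT
    power = np.abs(f) ** 2
    freqs = np.fft.fftfreq(dem.shape[0], d=dx)  # cycles per meter
    i, j = np.unravel_index(np.argmax(power), power.shape)
    return 1.0 / np.hypot(freqs[i], freqs[j])

# Synthetic 64 m x 64 m tile at 0.25 m resolution: an 8 m ridge
# pattern plus small-scale noise, a stand-in for real LiDAR data.
x = np.arange(256) * 0.25
xx, yy = np.meshgrid(x, x)
dem = 0.3 * (np.cos(2 * np.pi * xx / 8.0) + np.cos(2 * np.pi * yy / 8.0))
dem += 0.02 * np.random.default_rng(1).standard_normal(dem.shape)
print(dominant_wavelength(dem))  # → 8.0 (meters)
```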

  4. 3D sensor placement strategy using the full-range pheromone ant colony system

    NASA Astrophysics Data System (ADS)

    Shuo, Feng; Jingqing, Jia

    2016-07-01

    An optimized sensor placement strategy is extremely beneficial for ensuring the safety and reducing the cost of structural health monitoring (SHM) systems. Sensors must be placed such that important dynamic information is obtained while the number of sensors is minimized. Common practice is to select individual sensor directions using 1D sensor methods and to place triaxial sensors in those directions for monitoring; however, this may lead to non-optimal placement of many triaxial sensors. In this paper, a new method called FRPACS is proposed, based on the ant colony system (ACS), to solve the optimal placement of triaxial sensors: the triaxial sensors are placed as single units in an optimal fashion. The new method is then compared with other algorithms using the Dalian North Bridge. The computational precision and iteration efficiency of FRPACS are greatly improved compared with the original ACS and the EFI method.
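
    FRPACS itself is not specified in the abstract, but the EFI (Effective Independence) baseline it is compared against has a standard form: iteratively delete the candidate degree of freedom that contributes least to the linear independence of the target mode shapes. A minimal sketch, with random numbers standing in for finite-element mode shapes:

```python
import numpy as np

def efi_select(phi, n_sensors):
    """Effective Independence (EFI) sensor placement.

    phi: (n_dof, n_modes) mode-shape matrix. At each step, delete the
    candidate DOF with the smallest effective-independence value, i.e.
    the smallest diagonal entry of the projection P = p (p^T p)^-1 p^T,
    until only n_sensors locations remain."""
    keep = list(range(phi.shape[0]))
    while len(keep) > n_sensors:
        p = phi[keep]
        ed = np.einsum('ij,ji->i', p, np.linalg.solve(p.T @ p, p.T))
        keep.pop(int(np.argmin(ed)))
    return keep

# Toy example: 10 candidate DOFs, 3 target modes (random stand-in).
rng = np.random.default_rng(0)
phi = rng.standard_normal((10, 3))
print(efi_select(phi, 4))  # indices of the 4 retained sensor locations
```

    Pheromone-based methods such as ACS instead search the combinatorial space of placements directly, which is why they can place triaxial sensors as single units rather than axis by axis.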

  5. Sensor based 3D conformal cueing for safe and reliable HC operation specifically for landing in DVE

    NASA Astrophysics Data System (ADS)

    Münsterer, Thomas; Kress, Martin; Klasen, Stephanus

    2013-05-01

    The paper describes the approach of a sensor-based landing aid for helicopters in degraded visual environments. The system concept presented employs a long-range, high-resolution ladar sensor that identifies obstacles in the flight and approach paths and measures landing-site conditions such as slope, roughness and precise position relative to the helicopter during the long final approach. All these measurements are visualized to the pilot. Cueing is done by 3D conformal symbology displayed in a head-tracked HMD, enhanced by 2D symbols for data that are perceived more easily from 2D symbols than from 3D cueing. All 3D conformal symbology is placed on the measured landing-site surface, which is further visualized by a grid structure displaying landing-site slope, roughness and small obstacles. Due to the limited resolution of the employed HMD, a specific scheme of blending in the information during the approach is used. The interplay of in-flight and in-approach obstacle warning and CFIT warning symbology with this landing-aid symbology is also investigated and evaluated by example for the NH90 helicopter, which already has obstacle warning and CFIT symbology based on a long-range, high-resolution ladar sensor. The paper further describes the results of simulator and flight tests performed with this system, employing a ladar sensor and a head-tracked head-mounted display system. In the simulator trials, a full model of the ladar sensor producing 3D measurement points was used, running the same algorithms as in the flight tests.

  6. Diborane Electrode Response in 3D Silicon Sensors for the CMS and ATLAS Experiments

    SciTech Connect

    Brown, Emily R.; /Reed Coll. /SLAC

    2011-06-22

    Unusually high leakage currents have been measured in test wafers produced by the manufacturer SINTEF containing 3D pixel silicon sensor chips designed for the ATLAS (A Toroidal LHC ApparatuS) and CMS (Compact Muon Solenoid) experiments. Previous data showed the CMS chips as having a lower leakage current after processing than the ATLAS chips. Proposed causes of the leakage currents include the dicing process and the use of copper in bump bonding, with differences in packaging and handling between the ATLAS and CMS chips suggested as the cause of the disparity between the two. Data taken at SLAC from a SINTEF wafer with electrodes doped with diborane and filled with polysilicon, before dicing, and with indium bumps added contradict this earlier data: the ATLAS chips showed a lower leakage current than the CMS chips. Since this wafer was neither diced nor had any copper added for bump bonding, these data also argue against the dicing process and copper in bump bonding as main causes of the leakage current. However, the chips still display an extremely high leakage current, whose source is largely unknown. The SINTEF wafer shows completely different behavior from the others, as the FEI3s actually performed better than the CMS chips; this argues against differences in packaging and handling, or the intrinsic geometry of the two designs, as the cause of the disparity between the leakage currents of the chips. Even though the leakage current in the FEI3s is lower overall, it is still significant enough to cause problems. To complement this information, more data will be taken on the efficiency of the individual electrodes of the ATLAS and CMS chips on this wafer. The electrodes will be shot perpendicularly with a laser to test the efficiency across the width of the electrode. A mask with pinholes has been made to focus the laser to a beam smaller than the

  7. Increasing the effective aperture of a detector and enlarging the receiving field of view in a 3D imaging lidar system through hexagonal prism beam splitting.

    PubMed

    Lee, Xiaobao; Wang, Xiaoyi; Cui, Tianxiang; Wang, Chunhui; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-07-11

    The detector in a highly accurate, high-definition scanning 3D imaging lidar system requires a high frequency bandwidth and a sufficient photosensitive area. To overcome the small photosensitive area of existing indium gallium arsenide detectors of a given frequency bandwidth, this study proposes a method for increasing the receiving field of view (FOV) and enlarging the effective photosensitive aperture of such a detector through hexagonal prism beam splitting. The principle and construction of hexagonal prism beam splitting are also discussed. Accordingly, a receiving optical system with two hexagonal prisms is presented and the beam-splitting effect of the simulation experiment is analyzed. Using this novel method, the receiving optical system's FOV can be effectively extended to ±5°, and the effective photosensitive aperture of the detector is increased from 0.5 mm to 1.5 mm. PMID:27410800

  8. Retrieving Leaf Area Index and Foliage Profiles Through Voxelized 3-D Forest Reconstruction Using Terrestrial Full-Waveform and Dual-Wavelength Echidna Lidars

    NASA Astrophysics Data System (ADS)

    Strahler, A. H.; Yang, X.; Li, Z.; Schaaf, C.; Wang, Z.; Yao, T.; Zhao, F.; Saenz, E.; Paynter, I.; Douglas, E. S.; Chakrabarti, S.; Cook, T.; Martel, J.; Howe, G.; Hewawasam, K.; Jupp, D.; Culvenor, D.; Newnham, G.; Lowell, J.

    2013-12-01

    Measuring and monitoring canopy biophysical parameters provide a baseline for carbon flux studies related to deforestation and disturbance in forest ecosystems. Terrestrial full-waveform lidar systems, such as the Echidna Validation Instrument (EVI) and its successor Dual-Wavelength Echidna Lidar (DWEL), offer rapid, accurate, and automated characterization of forest structure. In this study, we apply a methodology based on voxelized 3-D forest reconstructions built from EVI and DWEL scans to directly estimate two important biophysical parameters: Leaf Area Index (LAI) and foliage profile. Gap probability, apparent reflectance, and volume associated with the laser pulse footprint at the observed range are assigned to the foliage scattering events in the reconstructed point cloud. Leaf angle distribution is accommodated with a simple model based on gap probability with zenith angle as observed in individual scans of the stand. The DWEL instrument, which emits simultaneous laser pulses at 1064 nm and 1548 nm wavelengths, provides a better capability to separate trunk and branch hits from foliage hits due to water absorption by leaf cellular contents at 1548 nm band. We generate voxel datasets of foliage points using a classification methodology solely based on pulse shape for scans collected by EVI and with pulse shape and band ratio for scans collected by DWEL. We then compare the LAIs and foliage profiles retrieved from the voxel datasets of the two instruments at the same red fir site in Sierra National Forest, CA, with each other and with observations from airborne and field measurements. This study further tests the voxelization methodology in obtaining LAI and foliage profiles that are largely free of clumping effects and returns from woody materials in the canopy. These retrievals can provide a valuable 'ground-truth' validation data source for large-footprint spaceborne or airborne lidar systems retrievals.
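
    The core step of retrieving LAI from gap probability is a Beer-Lambert inversion. The sketch below uses the standard relation with an assumed spherical leaf angle distribution (G = 0.5) and made-up numbers; it is not the voxelized EVI/DWEL processing chain itself:

```python
import math

def lai_from_gap(p_gap, zenith_deg, G=0.5):
    """Beer-Lambert inversion of canopy gap probability:
    LAI = -ln(P_gap) * cos(theta) / G, where G is the leaf projection
    function (0.5 for a spherical leaf angle distribution)."""
    theta = math.radians(zenith_deg)
    return -math.log(p_gap) * math.cos(theta) / G

# Hypothetical observation: 13.5% gap probability at 57.5 deg zenith
# (the "hinge angle", where G is close to 0.5 regardless of the actual
# leaf angle distribution).
print(round(lai_from_gap(0.135, 57.5), 2))  # ≈ 2.15
```

    In the voxelized approach, such inversions are applied per zenith ring and per height layer, which is what yields a foliage profile rather than a single canopy-level LAI.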

  9. A nano-microstructured artificial-hair-cell-type sensor based on topologically graded 3D carbon nanotube bundles

    NASA Astrophysics Data System (ADS)

    Yilmazoglu, O.; Yadav, S.; Cicek, D.; Schneider, J. J.

    2016-09-01

    A design for a unique artificial-hair-cell-type sensor (AHCTS) based entirely on 3D-structured, vertically aligned carbon nanotube (CNT) bundles is introduced. Standard microfabrication techniques were used for the straightforward micro-nano integration of vertically aligned carbon nanotube arrays composed of low-layer multi-walled CNTs (two to six layers). The mechanical properties of the carbon nanotube bundles were intensively characterized with regard to various substrates and CNT morphologies, e.g. bundle height. The CNT bundles display excellent flexibility and mechanical stability under lateral bending, showing high tear resistance. The integrated 3D CNT sensor can detect three-dimensional forces using the deflection or compression of a central CNT bundle, which changes the contact resistance to the shorter neighboring bundles. The complete sensor system can be fabricated in a single chemical vapor deposition (CVD) process step. Moreover, sophisticated external contacts to the surroundings are not necessary for signal detection, and no additional sensors or external bias are required. This simplifies the miniaturization and integration of these nanostructures in future microsystem set-ups. The new nanostructured sensor system exhibits an average sensitivity of 2100 ppm μm‑1 in the linear regime, i.e. relative resistance change (in ppm) per micron of individual CNT bundle tip deflection. Furthermore, experiments have shown highly sensitive piezoresistive behavior, with an electrical resistance decrease of up to ∼11% at 50 μm mechanical deflection. The detection threshold is as low as 1 μm of deflection, and thus highly comparable with the tactile hair sensors of insects, which have typical thresholds on the order of 30-50 μm. The AHCTS can easily be adapted and applied as a flow, tactile or acceleration sensor as well as a vibration sensor. Potential applications of the latter might come up in artificial cochlear systems. In
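
    The reported figures can be cross-checked with simple arithmetic: the relative resistance change per micron implied by an ~11% decrease over 50 μm of deflection is close to the quoted average sensitivity of 2100 ppm per micron (the input values below are taken from the abstract):

```python
def sensitivity_ppm_per_um(delta_r_over_r, deflection_um):
    """Average piezoresistive sensitivity: relative resistance change
    (in ppm) per micron of tip deflection."""
    return delta_r_over_r * 1e6 / deflection_um

# From the abstract: ~11% resistance change at 50 um deflection.
print(sensitivity_ppm_per_um(0.11, 50.0))  # → 2200.0 ppm/um, consistent
                                           #   with the ~2100 ppm reported
```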

  10. Using a 2D displacement sensor to derive 3D displacement information

    NASA Technical Reports Server (NTRS)

    Soares, Schubert F. (Inventor)

    2002-01-01

    A 2D displacement sensor is used to measure displacement in three dimensions. For example, the sensor can be used in conjunction with a pulse-modulated or frequency-modulated laser beam to measure displacement caused by deformation of an antenna on which the sensor is mounted.

  11. Multipath estimation in urban environments from joint GNSS receivers and LiDAR sensors.

    PubMed

    Ali, Khurram; Chen, Xin; Dovis, Fabio; De Castro, David; Fernández, Antonio J

    2012-01-01

    In this paper, multipath error on Global Navigation Satellite System (GNSS) signals in urban environments is characterized with the help of Light Detection and Ranging (LiDAR) measurements. For this purpose, LiDAR equipment and Global Positioning System (GPS) receiver implementing a multipath estimating architecture were used to collect data in an urban environment. This paper demonstrates how GPS and LiDAR measurements can be jointly used to model the environment and obtain robust receivers. Multipath amplitude and delay are estimated by means of LiDAR feature extraction and multipath mitigation architecture. The results show the feasibility of integrating the information provided by LiDAR sensors and GNSS receivers for multipath mitigation. PMID:23202177

  12. Multipath Estimation in Urban Environments from Joint GNSS Receivers and LiDAR Sensors

    PubMed Central

    Ali, Khurram; Chen, Xin; Dovis, Fabio; De Castro, David; Fernández, Antonio J.

    2012-01-01

    In this paper, multipath error on Global Navigation Satellite System (GNSS) signals in urban environments is characterized with the help of Light Detection and Ranging (LiDAR) measurements. For this purpose, LiDAR equipment and Global Positioning System (GPS) receiver implementing a multipath estimating architecture were used to collect data in an urban environment. This paper demonstrates how GPS and LiDAR measurements can be jointly used to model the environment and obtain robust receivers. Multipath amplitude and delay are estimated by means of LiDAR feature extraction and multipath mitigation architecture. The results show the feasibility of integrating the information provided by LiDAR sensors and GNSS receivers for multipath mitigation. PMID:23202177

  13. New insights into 3D calving investigations: use of Terrestrial LiDAR for monitoring the Perito Moreno glacier front (Southern Patagonian Ice Fields, Argentina)

    NASA Astrophysics Data System (ADS)

    Abellan, Antonio; Penna, Ivanna; Daicz, Sergio; Carrea, Dario; Derron, Marc-Henri; Guerin, Antoine; Jaboyedoff, Michel

    2015-04-01

    There is great uncertainty concerning the processes that control and lead to the disintegration of glacier fronts, including the laws governing ice calving phenomena. Recording the surface processes occurring at a glacier front has proven problematic due to the highly dynamic nature of calving, leaving the processes and forms that control discrete calving events poorly constrained. For instance, common observational limitations in quantifying the sudden occurrence of calving include the insufficient spatial and/or temporal resolution of conventional photogrammetric techniques and satellite missions. Furthermore, the lack of high-quality four-dimensional data on failures currently limits our ability to analyse and predict glacier dynamics in a straightforward way. To overcome these limitations, we used a terrestrial LiDAR sensor (Optech Ilris 3D-LR) to intensively monitor the changes occurring at one of the most impressive calving glacier fronts: the Perito Moreno glacier, located in the Southern Patagonian Ice Fields (Argentina). Using this system, we captured the three-dimensional geometry of the glacier front at an unprecedented level of detail over five days (10th to 14th of March 2014). Each data collection, acquired at a mean interval of 20 minutes, consisted of the automatic acquisition of several million points at a mean density of 100-200 points per square meter. The maximum attainable range of the Ilris-LR system at its 1064 nm wavelength was around 500 meters over massive ice (with no significant loss of information); this distance was considerably reduced over crystalline or wet ice shortly after the occurrence of calving events. 
By comparing successive three-dimensional datasets, we have investigated not only the magnitude and frequency of several ice failures at the glacier's terminus, but

  14. Flying triangulation - A motion-robust optical 3D sensor for the real-time shape acquisition of complex objects

    NASA Astrophysics Data System (ADS)

    Willomitzer, Florian; Ettl, Svenja; Arold, Oliver; Häusler, Gerd

    2013-05-01

    The three-dimensional shape acquisition of objects has become more and more important in recent years. Several well-established methods already yield impressive results. However, even under quite common conditions such as object movement or complex shapes, most methods become unsatisfactory. Thus, 3D shape acquisition remains a difficult and non-trivial task. We present our measurement principle "Flying Triangulation", which enables motion-robust 3D acquisition of complex-shaped object surfaces by a freely movable handheld sensor. Since "Flying Triangulation" is scalable, a whole sensor zoo for different object sizes is presented. Concluding, an overview of current and future fields of investigation is given.

  15. Light-Weight Sensor Package for Precision 3D Measurement with Micro UAVs, e.g. Power-Line Monitoring

    NASA Astrophysics Data System (ADS)

    Kuhnert, K.-D.; Kuhnert, L.

    2013-08-01

    The paper describes a new sensor package for micro or mini UAVs and one application that has been successfully implemented with it. The package is intended for 3D measurement of landscape or large outdoor structures for mapping or monitoring purposes. It can be composed modularly into several configurations and may contain a laser scanner, camera, IMU, GPS and other sensors as required by the application. Different products of the same sensor type have also been integrated. The package always contains its own computing infrastructure and may also be used for intelligent navigation. It can be operated in cooperation with different drones, but also completely independently of the type of drone it is attached to. To show the usability of the system, an application in monitoring high-voltage power lines that has been successfully realised with the package is described in detail.

  16. Combination of TLS Point Clouds and 3D Data from Kinect v2 Sensor to Complete Indoor Models

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    The combination of data coming from multiple sensors is increasingly applied in remote sensing (multi-sensor imagery) but also in cultural heritage or robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented. Their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor make it possible to reach a better level of detail in the final model, an adapted acquisition protocol may also provide several benefits, such as time savings. The paper aims at analyzing whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.

  17. Rapid 3D Patterning of Poly(acrylic acid) Ionic Hydrogel for Miniature pH Sensors.

    PubMed

    Yin, Ming-Jie; Yao, Mian; Gao, Shaorui; Zhang, A Ping; Tam, Hwa-Yaw; Wai, Ping-Kong A

    2016-02-17

    Poly(acrylic acid) (PAA), as a highly ionic conductive hydrogel, can reversibly swell/deswell according to the surrounding pH conditions. An optical maskless stereolithography technology is presented to rapidly 3D-pattern PAA for device fabrication. A highly sensitive miniature pH sensor is demonstrated by in situ printing of periodic PAA micropads on a tapered optical microfiber. PMID:26643765

  18. 3-D Deformation Field Of The 2010 El Mayor-Cucapah (Mexico) Earthquake From Matching Before To After Aerial Lidar Point Clouds

    NASA Astrophysics Data System (ADS)

    Hinojosa-Corona, A.; Nissen, E.; Arrowsmith, R.; Krishnan, A. K.; Saripalli, S.; Oskin, M. E.; Arregui, S. M.; Limon, J. F.

    2012-12-01

    The Mw 7.2 El Mayor-Cucapah earthquake (EMCE) of 4 April 2010 generated a ~110 km long, NW-SE trending rupture with normal and right-lateral slip on the order of 2-3 m in the Sierra Cucapah, the northern half, where the surface rupture has its most outstanding expression. Vertical and horizontal surface displacements produced by the EMCE have been addressed separately by other authors with a variety of aerial and satellite remote sensing techniques. Slip variation along the fault and post-seismic scarp erosion and diffusion have been estimated in other studies using terrestrial LiDAR (TLS) on segments of the rupture. To complement these studies, we computed the 3D deformation field by comparing pre- to post-event point clouds from aerial LiDAR surveys. The pre-event LiDAR, with lower point density (0.013-0.033 pts m-2), required filtering and post-processing before comparison with the denser (9-18 pts m-2), more accurate post-event dataset. The 3-dimensional surface displacement field was determined using an adaptation of the Iterative Closest Point (ICP) algorithm, implemented in the open source Point Cloud Library (PCL). The LiDAR datasets are first split into a grid of windows, and for each one, ICP iteratively converges on the rigid body transformation (comprising a translation and a rotation) that best aligns the pre- to post-event points. Testing on synthetic datasets perturbed with displacements of known magnitude showed that windows with dimensions of 100-200 m gave the best results for datasets with these densities. Here we present the deformation field with detailed displacements in segments of the surface rupture where its expression was recognized by ICP from the point cloud matching, mainly in the scarcely vegetated Sierra Cucapah, with the Borrego and Paso Superior fault segments the most outstanding; there we are able to compare our results with values measured in the field and TLS results reported in other works. 
EMC simulated displacement field for a
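    The windowed ICP matching described above rests on finding, per window, the rigid transformation that best aligns two point sets. Its core step, solving for the optimal rotation and translation once points are paired, can be sketched with the standard SVD-based Kabsch solution; full ICP additionally iterates a nearest-neighbour correspondence search. This is an illustrative sketch, not the PCL implementation used in the study.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/Procrustes solution: rotation R and translation t minimising
    ||R @ src_i + t - dst_i|| over paired points (the core of one ICP
    iteration, with correspondences assumed given)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the least-squares solution.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = cd - R @ cs
    return R, t
```

    Applying a known rotation and translation to a synthetic point set and recovering them, as the authors did with perturbed synthetic datasets, is a direct way to validate the solver.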

  19. 3D Geological Outcrop Characterization: Automatic Detection of 3D Planes (Azimuth and Dip) Using LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Anders, K.; Hämmerle, M.; Miernik, G.; Drews, T.; Escalona, A.; Townsend, C.; Höfle, B.

    2016-06-01

    Terrestrial laser scanning constitutes a powerful method of spatial data acquisition and allows geological outcrops to be captured with high resolution and accuracy. A crucial aspect for numerous geologic applications is the extraction of rock surface orientations from the data. This paper focuses on the detection of planes in rock surface data by applying a segmentation algorithm directly to a 3D point cloud. Its performance is assessed considering (1) reduced spatial resolution of the data and (2) smoothing in the course of data pre-processing. The methodology is tested on simulations of progressively reduced spatial resolution defined by varying point cloud density. Smoothing of the point cloud data is implemented by modifying the neighborhood criteria during normals estimation. The considerable alteration of the resulting planes emphasizes the influence of smoothing on plane detection prior to the actual segmentation. Therefore, this parameter needs to be set in accordance with individual purposes and the respective scales of the studies. Furthermore, it is concluded that the quality of segmentation results does not decline even when the data volume is significantly reduced, down to 10%. The azimuth and dip values of individual segments are determined for planes fit to the points belonging to one segment. Based on these results, the azimuth, dip and strike character of the surface planes in the outcrop are assessed. Thereby, this paper contributes to a fully automatic and straightforward workflow for a comprehensive geometric description of outcrops in 3D.
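    Once a segment's best-fit plane normal is known, azimuth (dip direction) and dip follow from simple trigonometry. A minimal sketch, assuming an east-north-up frame and the dip-direction/dip convention (the paper's exact conventions may differ):

```python
import numpy as np

def azimuth_dip(normal):
    """Azimuth of the dip direction (degrees clockwise from north) and dip
    (degrees from horizontal) of a plane, given its normal vector in an
    east-north-up frame (x = east, y = north, z = up)."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:                      # make the normal point upwards
        n = -n
    dip = np.degrees(np.arccos(n[2]))
    # The horizontal component of the upward normal points in the
    # dip direction (direction of steepest descent of the plane).
    azimuth = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return azimuth, dip
```

    For example, a plane dipping 45 degrees towards the east has the upward normal (1, 0, 1)/sqrt(2), giving azimuth 90 and dip 45.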

  20. Development of lidar sensor for cloud-based measurements during convective conditions

    NASA Astrophysics Data System (ADS)

    Vishnu, R.; Bhavani Kumar, Y.; Rao, T. Narayana; Nair, Anish Kumar M.; Jayaraman, A.

    2016-05-01

    Atmospheric convection is a natural phenomenon associated with heat transport. Convection is strong during daylight periods and vigorous in the summer months, when severe ground heating associated with strong winds is experienced. The tropics are considered source regions for strong convection, and the formation of thunderstorm clouds is common during this period. The location of the cloud base and its associated dynamics are important for understanding the influence of convection on the atmosphere. Lidars are sensitive to Mie scattering and are more suitable instruments for locating clouds in the atmosphere than instruments utilizing the radio frequency spectrum. Thunderstorm clouds are composed of hydrometeors and strongly scatter laser light. Recently, a lidar technique was developed at the National Atmospheric Research Laboratory (NARL), a Department of Space (DOS) unit located at Gadanki near Tirupati. The technique employs slant-path operation and provides high resolution measurements of the cloud base location in real time. This laser-based remote sensing technique allows measurement of the atmosphere every second at 7.5 m range resolution. The high resolution data permit assessment of updrafts at the cloud base. The lidar also provides the convective boundary layer height in real time, using aerosols as tracers of atmospheric dynamics. The developed lidar sensor is planned to be upgraded with a scanning facility to study cloud dynamics in the spatial dimension. In this presentation, we describe the lidar sensor technology and its use for high resolution cloud base measurements during convective conditions over the lidar site at Gadanki.
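    The quoted 7.5 m range resolution is simply the range covered per timing bin on the two-way light path, c*dt/2. A one-line illustrative check:

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(sample_interval_s):
    """Lidar range-bin size: the pulse travels out and back, so one timing
    bin of width dt spans a range of c * dt / 2."""
    return C * sample_interval_s / 2.0

# A 50 ns sampling interval gives a ~7.5 m range bin, matching the
# resolution quoted for the NARL system.
```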

  1. 3D Lasers Increase Efficiency, Safety of Moving Machines

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Canadian company Neptec Design Group Ltd. developed its Laser Camera System, used by shuttles to render 3D maps of their hulls for assessing potential damage. Using NASA funding, the firm incorporated LiDAR technology and created the TriDAR 3D sensor. Its commercial arm, Neptec Technologies Corp., has sold the technology to Orbital Sciences, which uses it to guide its Cygnus spacecraft during rendezvous and dock operations at the International Space Station.

  2. A novel method for assessing the 3-D orientation accuracy of inertial/magnetic sensors.

    PubMed

    Faber, Gert S; Chang, Chien-Chi; Rizun, Peter; Dennerlein, Jack T

    2013-10-18

    A novel method for assessing the accuracy of inertial/magnetic sensors is presented. The method, referred to as the "residual matrix" method, is advantageous because it decouples the sensor's error with respect to Earth's gravity vector (attitude residual error: pitch and roll) from the sensor's error with respect to magnetic north (heading residual error), while remaining insensitive to singularity problems when the second Euler rotation is close to ±90°. As a demonstration, the accuracy of an inertial/magnetic sensor mounted to a participant's forearm was evaluated during a reaching task in a laboratory. Sensor orientation was measured internally (by the inertial/magnetic sensor) and externally using an optoelectronic measurement system with a marker cluster rigidly attached to the sensor's enclosure. Roll, pitch and heading residuals were calculated using the proposed novel method, as well as using a common orientation assessment method where the residuals are defined as the difference between the Euler angles measured by the inertial sensor and those measured by the optoelectronic system. Using the proposed residual matrix method, the roll and pitch residuals remained less than 1° and, as expected, no statistically significant difference between these two measures of attitude accuracy was found; the heading residuals were significantly larger than the attitude residuals but remained below 2°. Using the direct Euler angle comparison method, the residuals were in general larger due to singularity issues, and the expected significant difference between inertial/magnetic sensor attitude and heading accuracy was not present. PMID:24016678
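    One way to realise a "residual matrix" style decomposition, splitting a residual rotation into a tilt (attitude) error about the vertical and a heading error around it, is sketched below. This is an assumption-laden illustration of the idea, not necessarily the paper's exact formulation.

```python
import numpy as np

def residual_angles(R_ref, R_sensor):
    """Decompose the residual rotation between a reference orientation
    (e.g. optoelectronic) and a sensor orientation (inertial/magnetic)
    into a tilt error w.r.t. the vertical and a heading error about it."""
    R_err = R_ref.T @ R_sensor               # residual rotation matrix
    # Tilt error: angle by which R_err rotates the world vertical axis,
    # i.e. the attitude (pitch/roll) disagreement w.r.t. gravity.
    tilt = np.degrees(np.arccos(np.clip(R_err[2, 2], -1.0, 1.0)))
    # Heading error: residual rotation of the horizontal axes about the
    # vertical, insensitive to the tilt component.
    heading = np.degrees(np.arctan2(R_err[1, 0] - R_err[0, 1],
                                    R_err[0, 0] + R_err[1, 1]))
    return tilt, heading
```

    A pure yaw disagreement then shows up only in the heading residual, and a pure tilt only in the tilt residual, which is the decoupling the method aims for.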

  3. Optimized design of a LED-array-based TOF range imaging sensor for fast 3-D shape measurement

    NASA Astrophysics Data System (ADS)

    Wang, Huanqin; Wang, Ying; Xu, Jun; He, Deyong; Zhao, Tianpeng; Ming, Hai; Kong, Deyi

    2011-06-01

    A LED-array-based range imaging sensor using time-of-flight (TOF) distance measurement was developed to capture the depth information of three-dimensional (3-D) objects. By time-division electronic scanning of the LED heterodyne phase-shift TOF range finders in the array, range images were obtained quickly without any mechanical moving parts. The design of the LED-array-based range imaging sensor is described in detail, and a range imaging theoretical model based on photoelectric signal processing is built, which shows a mutual constraint among the measurement time of a depth pixel, the bandwidth of the receiver, and the sensor's signal-to-noise ratio (SNR). To improve key parameters of the sensor such as range resolution and measurement speed simultaneously, several optimized designs were made for the proposed range imaging sensor, including choosing proper parameters for the filters in the receiver and adopting a feedback automatic gain control (AGC) circuit with a special structure and short response time. The final experimental results showed that the optimized sensor could acquire range images at a rate of 10 frames per second with a range resolution as high as ±2 mm in the range of 50-1200 mm. The essential advantages of the proposed range imaging sensor are its simple structure, high range resolution, short measurement time and low cost, which are sufficient for many robotic and industrial automation applications.
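    The heterodyne phase-shift ranging underlying such sensors converts a measured phase shift of the modulation envelope into distance via d = c*phi/(4*pi*f_mod), unambiguous only within c/(2*f_mod). A minimal sketch of the conversion:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(phase_rad, f_mod_hz):
    """Continuous-wave phase-shift ranging: the modulation envelope returns
    delayed by 2d/c, producing a phase shift of 4*pi*f_mod*d/c; invert for d."""
    return C * phase_rad / (4.0 * np.pi * f_mod_hz)

def ambiguity_range(f_mod_hz):
    """Maximum unambiguous distance: a full 2*pi of phase corresponds to
    one round trip of the modulation wavelength, i.e. c / (2 * f_mod)."""
    return C / (2.0 * f_mod_hz)
```

    For a 10 MHz modulation, the unambiguous range is about 15 m; higher modulation frequencies improve range resolution at the cost of a shorter unambiguous interval.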

  4. A 3D scaffold for ultra-sensitive reduced graphene oxide gas sensors.

    PubMed

    Yun, Yong Ju; Hong, Won G; Choi, Nak-Jin; Park, Hyung Ju; Moon, Seung Eon; Kim, Byung Hoon; Song, Ki-Bong; Jun, Yongseok; Lee, Hyung-Kun

    2014-06-21

    An ultra-sensitive gas sensor based on a reduced graphene oxide nanofiber mat was successfully fabricated using a combination of an electrospinning method and graphene oxide wrapping through an electrostatic self-assembly, followed by a low-temperature chemical reduction. The sensor showed excellent sensitivity to NO2 gas. PMID:24839129

  5. Development of patterned carbon nanotubes on a 3D polymer substrate for the flexible tactile sensor application

    NASA Astrophysics Data System (ADS)

    Hu, Chih-Fan; Su, Wang-Shen; Fang, Weileun

    2011-11-01

    This study reports an improved approach to implementing a carbon nanotube (CNT)-based flexible tactile sensor, which is integrated with a flexible print circuit (FPC) connector and is capable of detecting normal and shear forces. The merits of the presented tactile sensor and its integration process are as follows: (1) 3D polymer tactile bump structures are naturally formed by the use of an anisotropically etched silicon mold; (2) planar and 3D distributed CNTs are adopted as piezoresistive sensing elements to enable the detection of shear and normal forces; (3) the processes of patterning CNTs and metal routing can be easily batch fabricated on rigid silicon instead of flexible polymer; (4) robust electrical routing is realized using parylene encapsulation to avoid delamination; (5) the patterned CNTs, electrical routing and FPC connector are integrated and transferred to a polydimethylsiloxane (PDMS) substrate by a molding process. In application, the CNT-based flexible tactile sensor and its integration with the FPC connector are implemented. Preliminary tests show the feasibility of detecting both normal and shear forces with the presented flexible sensor.

  6. Development of 3D Force Sensors for Nanopositioning and Nanomeasuring Machine

    PubMed Central

    Tibrewala, Arti; Hofmann, Norbert; Phataralaoha, Anurak; Jäger, Gerd; Büttgenbach, Stephanus

    2009-01-01

    In this contribution, we report on different miniaturized, bulk-micromachined, three-axis piezoresistive force sensors for a nanopositioning and nanomeasuring machine (NPMM). Various boss membrane structures, such as one-boss full/cross, five-boss full/cross and swastika membranes, were used as basic structures for the force sensors. All designs have 16 p-type diffused piezoresistors on the surface of the membrane. Sensitivities in the x, y and z directions are measured, and the simulated and measured horizontal-to-vertical stiffness ratios are compared for each design. The effect of the stylus length on the H:V stiffness ratio is studied. The minimum and maximum deflection and the resonance frequency are measured for all designs. The sensors were placed in a nanopositioning and nanomeasuring machine and single-point measurements were performed for all designs. Lastly, an application of the sensor is shown in which the dimensions of a cube are measured. PMID:22412308

  7. Monitoring time course of human whole blood coagulation using a microfluidic dielectric sensor with a 3D capacitive structure.

    PubMed

    Maji, Debnath; Suster, Michael A; Stavrou, Evi; Gurkan, Umut A; Mohseni, Pedram

    2015-08-01

    This paper reports on the design, fabrication, and testing of a microfluidic sensor for dielectric spectroscopy (DS) of human whole blood during coagulation. The sensor employs a three-dimensional (3D), parallel-plate, capacitive sensing structure with a floating electrode integrated into a microfluidic channel. Using an impedance analyzer and after a 5-point calibration, the sensor is shown to measure the real part of the complex relative dielectric permittivity of human whole blood in a frequency range of 10 kHz to 100 MHz. The temporal variation of dielectric permittivity at 1 MHz for human whole blood from three different healthy donors shows a peak in permittivity at ~4 to 5 minutes, which also corresponds to the onset of CaCl2-initiated coagulation of the blood sample, as verified visually. PMID:26737635
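    The link between a measured capacitance and the reported relative permittivity can be illustrated with the ideal parallel-plate model. This is a sketch only; the actual 3D floating-electrode cell requires the 5-point calibration mentioned in the abstract, and the dimensions below are placeholders.

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def relative_permittivity(capacitance_f, gap_m, area_m2):
    """Ideal parallel-plate model: C = eps0 * eps_r * A / d, so the sample's
    relative permittivity follows from the measured capacitance, the plate
    gap d and the plate area A (fringing fields ignored)."""
    return capacitance_f * gap_m / (EPS0 * area_m2)
```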

  8. Detecting and Analyzing Corrosion Spots on the Hull of Large Marine Vessels Using Colored 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Aijazi, A. K.; Malaterre, L.; Tazir, M. L.; Trassoudaine, L.; Checchin, P.

    2016-06-01

    This work presents a new method that automatically detects and analyzes surface defects, such as corrosion spots of different shapes and sizes, on large ship hulls. In the proposed method, several scans from different positions and viewing angles around the ship are registered together to form a complete 3D point cloud. The R, G, B values associated with each scan, obtained with the help of an integrated camera, are converted into HSV space to separate the illumination-invariant color component from the intensity. Using this color component, surface defects such as corrosion spots of different shapes and sizes are automatically detected within a selected zone using two different methods, depending upon the level of corrosion/defects. The first method relies on a histogram-based distribution, whereas the second relies on adaptive thresholds. The detected corrosion spots are then analyzed and quantified to help better plan and estimate the cost of repair and maintenance. Results are evaluated on real data using different standard evaluation metrics to demonstrate the efficacy as well as the technical strength of the proposed method.
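    The illumination-invariant colour test can be sketched as an RGB-to-HSV conversion followed by a hue/saturation gate on each coloured point. The band limits below are illustrative placeholders for a reddish-brown "rust" band, not the paper's calibrated thresholds.

```python
import colorsys

def hue_mask(rgb_points, hue_lo=0.0, hue_hi=0.12, min_sat=0.3):
    """Flag points whose hue falls in a reddish-brown band typical of
    corrosion.  HSV separates chromaticity (H, S) from intensity (V), so
    the test is largely invariant to illumination changes across scans."""
    flagged = []
    for r, g, b in rgb_points:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        flagged.append(hue_lo <= h <= hue_hi and s >= min_sat)
    return flagged
```

    A rust-coloured point passes the gate while grey paint (low saturation) and blue antifouling (wrong hue) do not, regardless of how brightly each is lit.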

  9. An orientation measurement method based on Hall-effect sensors for permanent magnet spherical actuators with 3D magnet array.

    PubMed

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-01-01

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method could effectively avoid friction torque and additional inertial moment existing in conventional approaches. Curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators. PMID:25342000

  10. An Orientation Measurement Method Based on Hall-effect Sensors for Permanent Magnet Spherical Actuators with 3D Magnet Array

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I.-Ming

    2014-10-01

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method could effectively avoid friction torque and additional inertial moment existing in conventional approaches. Curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators.

  11. Intensifying the response of distributed optical fibre sensors using 2D and 3D image restoration

    NASA Astrophysics Data System (ADS)

    Soto, Marcelo A.; Ramírez, Jaime A.; Thévenaz, Luc

    2016-03-01

    Distributed optical fibre sensors possess the unique capability of measuring the spatial and temporal map of environmental quantities, which can be of great interest for several field applications. Although existing methods for performance enhancement have enabled important progress in the field, they do not take full advantage of all the information present in the measured data, still leaving room for substantial improvement over the state of the art. Here we propose and experimentally demonstrate an approach for performance enhancement that exploits the high level of similarity and redundancy contained in the multidimensional information measured by distributed fibre sensors. Exploiting conventional image and video processing, an unprecedented boost in signal-to-noise ratio and measurement contrast is experimentally demonstrated. The method can be applied to any white-noise-limited distributed fibre sensor and can remarkably provide a 100-fold improvement in sensor performance with no hardware modification.
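    As a toy stand-in for the image-restoration idea, a simple k x k moving average over the distance-versus-time map already trades the redundancy between neighbouring traces for noise reduction; the paper's 2D/3D restoration filters are considerably more sophisticated.

```python
import numpy as np

def box_filter_2d(trace_map, k=3):
    """Denoise a distance-vs-time map from a distributed fibre sensor with
    a k x k moving average.  For white noise, averaging k*k neighbouring
    samples reduces the noise variance by roughly a factor of k**2."""
    pad = k // 2
    padded = np.pad(trace_map, pad, mode='edge')
    out = np.zeros_like(trace_map, dtype=float)
    # Sum the k x k shifted copies of the map, then normalise.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + trace_map.shape[0],
                          dx:dx + trace_map.shape[1]]
    return out / (k * k)
```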

  12. A 3D scaffold for ultra-sensitive reduced graphene oxide gas sensors

    NASA Astrophysics Data System (ADS)

    Yun, Yong Ju; Hong, Won G.; Choi, Nak-Jin; Park, Hyung Ju; Moon, Seung Eon; Kim, Byung Hoon; Song, Ki-Bong; Jun, Yongseok; Lee, Hyung-Kun

    2014-05-01

    An ultra-sensitive gas sensor based on a reduced graphene oxide nanofiber mat was successfully fabricated using a combination of an electrospinning method and graphene oxide wrapping through an electrostatic self-assembly, followed by a low-temperature chemical reduction. The sensor showed excellent sensitivity to NO2 gas. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr00332b

  13. Intensifying the response of distributed optical fibre sensors using 2D and 3D image restoration

    PubMed Central

    Soto, Marcelo A.; Ramírez, Jaime A.; Thévenaz, Luc

    2016-01-01

    Distributed optical fibre sensors possess the unique capability of measuring the spatial and temporal map of environmental quantities, which can be of great interest for several field applications. Although existing methods for performance enhancement have enabled important progress in the field, they do not take full advantage of all the information present in the measured data, still leaving room for substantial improvement over the state of the art. Here we propose and experimentally demonstrate an approach for performance enhancement that exploits the high level of similarity and redundancy contained in the multidimensional information measured by distributed fibre sensors. Exploiting conventional image and video processing, an unprecedented boost in signal-to-noise ratio and measurement contrast is experimentally demonstrated. The method can be applied to any white-noise-limited distributed fibre sensor and can remarkably provide a 100-fold improvement in sensor performance with no hardware modification. PMID:26927698

  14. 3D-calibration of three- and four-sensor hot-film probes based on collocated sonic using neural networks

    NASA Astrophysics Data System (ADS)

    Kit, Eliezer; Liberzon, Dan

    2016-09-01

    High resolution measurements of turbulence in the atmospheric boundary layer (ABL) are critical to the understanding of physical processes and the parameterization of important quantities, such as the turbulent kinetic energy dissipation. The low spatio-temporal resolution of standard atmospheric instruments, sonic anemometers and LIDARs, limits their suitability for fine-scale measurements of the ABL. The use of miniature hot-films is an alternative technique, although such probes require frequent calibration, which is logistically untenable in field setups. Accurate and truthful calibration is crucial for multi-hot-film applications in atmospheric studies, because the ability to conduct calibration in situ ultimately determines the quality of the turbulence measurements. Kit et al (2010 J. Atmos. Ocean. Technol. 27 23–41) described a novel methodology for calibration of hot-film probes using a collocated sonic anemometer combined with a neural network (NN) approach. An important step in the algorithm is the generation of a calibration set for NN training by appropriate low-pass filtering of the high resolution voltages measured by the hot-film sensors and the low resolution velocities acquired by the sonic. In Kit et al (2010 J. Atmos. Ocean. Technol. 27 23–41), Kit and Grits (2011 J. Atmos. Ocean. Technol. 28 104–10) and Vitkin et al (2014 Meas. Sci. Technol. 25 75801), the authors reported on the successful use of this approach for in situ calibration, but also on the method's limitations and restricted range of applicability. In their earlier work, a jet facility and a probe comprised of two orthogonal x-hot-films were used for calibration and for full dataset generation. In the current work, a comprehensive laboratory study of the 3D-calibration of two multi-hot-film probes (triple- and four-sensor) using a grid flow was conducted. The probes were embedded in a collocated sonic, and their relative pitch and yaw orientation to the mean flow was changed by means of
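    The calibration-set generation step, low-pass filtering the high-rate hot-film voltages down to the sonic anemometer rate so that voltage-velocity pairs can be formed for NN training, can be sketched with a simple block average. This is an illustrative stand-in; the filtering used in the cited papers is more careful.

```python
import numpy as np

def downsample_to_sonic(hf_voltages, hf_rate_hz, sonic_rate_hz):
    """Block-average high-rate hot-film voltage samples down to the sonic
    anemometer rate.  Each output sample can then be paired with the
    simultaneous sonic velocity to build an NN calibration set."""
    ratio = int(hf_rate_hz // sonic_rate_hz)
    n = (len(hf_voltages) // ratio) * ratio   # drop the incomplete tail
    return np.asarray(hf_voltages[:n], float).reshape(-1, ratio).mean(axis=1)
```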

  15. DVE flight test results of a sensor enhanced 3D conformal pilot support system

    NASA Astrophysics Data System (ADS)

    Münsterer, Thomas; Völschow, Philipp; Singer, Bernhard; Strobel, Michael; Kramper, Patrick

    2015-06-01

    The paper presents results and findings of flight tests of the Airbus Defence and Space DVE system SFERION performed at Yuma Proving Grounds. During the flight tests ladar information was fused with a priori DB knowledge in real-time and 3D conformal symbology was generated for display on an HMD. The test flights included low level flights as well as numerous brownout landings.

  16. Pre-Processing of Point-Data from Contact and Optical 3D Digitization Sensors

    PubMed Central

    Budak, Igor; Vukelić, Djordje; Bračun, Drago; Hodolič, Janko; Soković, Mirko

    2012-01-01

    Contemporary 3D digitization systems employed by reverse engineering (RE) feature ever-growing scanning speeds with the ability to generate a large quantity of points in a unit of time. Although advantageous for the quality and efficiency of RE modelling, the huge number of points can turn into a serious practical problem later on, when the CAD model is generated. In addition, 3D digitization processes are very often plagued by measuring errors, which can be attributed to the very nature of measuring systems, various characteristics of the digitized objects, and subjective errors by the operator; these also contribute to problems in the CAD model generation process. This paper presents an integral system for the pre-processing of point data, i.e., filtering, smoothing and reduction, based on a cross-sectional RE approach. In the course of the proposed system's development, major emphasis was placed on the module for point data reduction, which was designed according to a novel approach with integrated deviation analysis and fuzzy logic reasoning. The developed system was verified through its application in three case studies, on point data from objects of versatile geometries obtained by contact and laser 3D digitization systems. The obtained results demonstrate the effectiveness of the system. PMID:22368513

  17. Optical fiber sensor system for oil contamination measurement based on 3D fluorescence spectrum parameterization

    NASA Astrophysics Data System (ADS)

    Shang, Liping; Shi, Jinshan

    2000-10-01

    In recent years, oil contamination in water has become more serious, damaging aquatic environments and the life that depends on them. The excitation-fluorescence method is one of the main approaches to monitoring oil contamination on line. However, the average intensity of oil fluorescence indicates only the concentration of the oil, not its type. Two-dimensional fluorescence spectra also make it difficult to determine the kind of oil, because the fluorescence spectra of different oils overlap to a great extent. In this paper, 3D fluorescence spectrum parameterization is introduced; it extracts several characteristic parameters that identify the kind of oil being measured. A prototype optical-fiber 3D fluorescence spectrometer we developed achieves the identification of different oil types, such as crude oil, diesel oil and kerosene. The experimental arrangement was conceived to measure pulsed-xenon-lamp-induced fluorescence of the oil component in water. The experimental results clearly show that the 3D fluorescence spectrum parameterization and software successfully measure oil concentration and identify the type of oil in situ.

  18. Lidar Sensor Performance in Closed-Loop Flight Testing of the Morpheus Rocket-Propelled Lander to a Lunar-Like Hazard Field

    NASA Technical Reports Server (NTRS)

    Roback, Vincent E.; Pierrottet, Diego F.; Amzajerdian, Farzin; Barnes, Bruce W.; Hines, Glenn D.; Petway, Larry B.; Brewster, Paul F.; Kempton, Kevin S.; Bulyshev, Alexander E.

    2015-01-01

    For the first time, a suite of three lidar sensors has been used in flight to scan a lunar-like hazard field, identify a safe landing site, and, in concert with an experimental Guidance, Navigation, and Control (GN&C) system, guide the Morpheus autonomous, rocket-propelled, free-flying test bed to a safe landing on the hazard field. The lidar sensors and GN&C system are part of the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project which has been seeking to develop a system capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The 3-D imaging flash lidar is a second generation, compact, real-time, air-cooled instrument developed from a number of cutting-edge components from industry and NASA and is used as part of the ALHAT Hazard Detection System (HDS) to scan the hazard field and build a 3-D Digital Elevation Map (DEM) in near-real time for identifying safe sites. The flash lidar is capable of identifying a 30 cm hazard from a slant range of 1 km with its 8 cm range precision at 1 sigma. The flash lidar is also used in Hazard Relative Navigation (HRN) to provide position updates down to a 250 m slant range to the ALHAT navigation filter as it guides Morpheus to the safe site. The Doppler Lidar system has been developed within NASA to provide velocity measurements with an accuracy of 0.2 cm/sec and range measurements with an accuracy of 17 cm both from a maximum range of 2,200 m to a minimum range of several meters above the ground. The Doppler Lidar's measurements are fed into the ALHAT navigation filter to provide lander guidance to the safe site. The Laser Altimeter, also developed within NASA, provides range measurements with an accuracy of 5 cm from a maximum operational range of 30 km down to 1 m and, being a separate sensor from the flash lidar, can provide range along a separate vector.
The Laser Altimeter measurements are also
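
The hazard-detection idea, building a DEM and flagging cells that break a height threshold, can be sketched crudely as below; this local-roughness test is an assumption for illustration, not the actual HDS slope/roughness algorithms, and the DEM values are invented:

```python
import numpy as np

def hazard_map(dem, threshold=0.30):
    """Flag DEM cells whose height deviates from the local 3x3 mean by more
    than `threshold` metres (a crude stand-in for real hazard tests)."""
    pad = np.pad(dem, 1, mode="edge")          # replicate edges for the border
    hazards = np.zeros_like(dem, dtype=bool)
    for i in range(dem.shape[0]):
        for j in range(dem.shape[1]):
            local = pad[i:i + 3, j:j + 3]      # 3x3 neighbourhood of (i, j)
            hazards[i, j] = abs(dem[i, j] - local.mean()) > threshold
    return hazards

dem = np.zeros((5, 5))   # flat 5x5 terrain patch (heights in metres)
dem[2, 2] = 0.5          # a 50 cm rock, larger than the 30 cm hazard limit
haz = hazard_map(dem)    # only the rock cell is flagged
```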

  19. Separating Leaves from Trunks and Branches with Dual-Wavelength Terrestrial Lidar Scanning: Improving Canopy Structure Characterization in 3-D Space

    NASA Astrophysics Data System (ADS)

    Li, Z.; Strahler, A. H.; Schaaf, C.; Howe, G.; Martel, J.; Hewawasam, K.; Douglas, E. S.; Chakrabarti, S.; Cook, T.; Paynter, I.; Saenz, E.; Wang, Z.; Yang, X.; Yao, T.; Zhao, F.; Woodcock, C.; Jupp, D.; Schaefer, M.; Culvenor, D.; Newnham, G.; Lowell, J.

    2013-12-01

    Leaf area index (LAI) is an important parameter characterizing forest structure, used in models regulating the exchange of carbon, water and energy between the land and the atmosphere. However, optical methods in common use cannot separate leaf area from the area of upper trunks and branches, and thus retrieve only plant area index (PAI), which is adjusted to LAI using an appropriate empirical woody-to-total index. An additional problem is that the angular distributions of leaf normals and normals to woody surfaces are quite different, and thus leafy and woody components project quite different areas with varying zenith angle of view. This effect also causes error in LAI retrieval using optical methods. Full-waveform scans at both the NIR (1064 nm) and SWIR (1548 nm) wavelengths from the new terrestrial Lidar, the Dual-Wavelength Echidna Lidar (DWEL), which pulses in both wavelengths simultaneously, easily separate returns of leaves from trunks and branches in 3-D space. In DWEL scans collected at two different forest sites, Sierra National Forest in June 2013 and Brisbane Karawatha Forest Park in July 2013, the power returned from leaves is similar to power returned from trunks/branches at the NIR wavelength, whereas the power returned from leaves is much lower (only about half as large) at the SWIR wavelength. At the SWIR wavelength, the leaf scattering is strongly attenuated by liquid water absorption. Normalized difference index (NDI) images from the waveform mean intensity at the two wavelengths demonstrate a clear contrast between leaves and trunks/branches. The attached image shows NDI from a part of a scan of an open red fir stand in the Sierra National Forest. Leaves appear light, while other objects are darker. Dual-wavelength point clouds generated from the full waveform data show weaker returns from leaves than from trunks/branches. A simple threshold classification of the NDI value of each scattering point readily separates leaves from trunks and
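
The NDI-based separation can be sketched in a few lines; the intensities below mimic the reported behaviour (leaf returns roughly halved at SWIR), and the classification threshold is an assumption:

```python
import numpy as np

def ndi(nir, swir):
    """Normalized difference index per return: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir)

# Toy mean return intensities for four scattering points (values invented):
nir  = np.array([1.00, 1.00, 0.90, 1.10])
swir = np.array([0.50, 0.95, 0.45, 1.05])  # foliage returns ~half power at SWIR
# Leaves have high NDI (water absorption suppresses SWIR); wood is near zero.
labels = np.where(ndi(nir, swir) > 0.15, "leaf", "wood")  # threshold assumed
```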

  20. A nano-microstructured artificial-hair-cell-type sensor based on topologically graded 3D carbon nanotube bundles.

    PubMed

    Yilmazoglu, O; Yadav, S; Cicek, D; Schneider, J J

    2016-09-01

    A design for a unique artificial-hair-cell-type sensor (AHCTS) based entirely on 3D-structured, vertically aligned carbon nanotube (CNT) bundles is introduced. Standard microfabrication techniques were used for the straightforward micro-nano integration of vertically aligned carbon nanotube arrays composed of low-layer multi-walled CNTs (two to six layers). The mechanical properties of the carbon nanotube bundles were intensively characterized with regard to various substrates and CNT morphology, e.g. bundle height. The CNT bundles display excellent flexibility and mechanical stability for lateral bending, showing high tear resistance. The integrated 3D CNT sensor can detect three-dimensional forces using the deflection or compression of a central CNT bundle, which changes the contact resistance to the shorter neighboring bundles. The complete sensor system can be fabricated in a single chemical vapor deposition (CVD) process step. Moreover, sophisticated external contacts to the surroundings are not necessary for signal detection, and no additional sensors or external bias are required. This simplifies the miniaturization and the integration of these nanostructures in future microsystem set-ups. The new nanostructured sensor system exhibits, in the linear regime, an average sensitivity of 2100 ppm μm⁻¹ (relative resistance change per micron of CNT bundle tip deflection). Furthermore, experiments have shown highly sensitive piezoresistive behavior, with an electrical resistance decrease of up to ∼11% at 50 μm mechanical deflection. The detection threshold is as low as 1 μm of deflection, comparing favorably with the tactile hair sensors of insects, which have typical thresholds on the order of 30-50 μm. The AHCTS can easily be adapted and applied as a flow, tactile or acceleration sensor, as well as a vibration sensor. Potential applications of the latter might come up in artificial cochlear systems. In

  1. Real-time processor for 3-D information extraction from image sequences by a moving area sensor

    NASA Astrophysics Data System (ADS)

    Hattori, Tetsuo; Nakada, Makoto; Kubo, Katsumi

    1990-11-01

    This paper presents a real-time image processor for obtaining three-dimensional (3-D) distance information from the image sequence produced by a moving area sensor. The processor has been developed for an automated visual inspection robot system (pilot system) with an autonomous vehicle which moves around a power plant avoiding obstacles and checks whether there are defects or abnormal phenomena such as steam leakage from valves. The processor detects the distance between objects in the input image and the area sensor, deciding corresponding points (pixels) between the first input image and the last one by tracing the loci of edges through the sequence of sixteen images. The key hardware comprises two kinds of boards: mapping boards which can transform the X-coordinate (horizontal direction) and Y-coordinate (vertical direction) for each horizontal row of images, and a regional labelling board which extracts the connected loci of edges through the image sequence. This paper also shows the whole processing flow of the distance detection algorithm. Since the processor can continuously process images (512×512 pixels, 8 bits per pixel) at the NTSC video rate, it takes about 0.7 s to measure the 3-D distance from sixteen input images. The measurement error is at most 10 percent when the area sensor moves laterally over a range of 20 centimeters and the measured scene, including complicated background, is at a distance of 4 meters from
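
The reported figures (20 cm of lateral sensor travel, a 4 m scene distance) are consistent with simple motion-stereo triangulation, sketched below with an assumed focal length and edge disparity:

```python
# Depth from lateral sensor motion (motion stereo): z = f * B / d,
# where B is the sensor travel and d the pixel disparity of a traced edge.
f_px = 800.0          # focal length in pixels (assumed camera model)
baseline_m = 0.20     # 20 cm lateral travel of the area sensor
disparity_px = 40.0   # edge displacement traced across the image sequence
z_m = f_px * baseline_m / disparity_px   # estimated object distance in metres
```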

  2. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged along with a high level of detail for selected objects. Thus, the measuring systems used are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process becomes possible. However, common low-cost sensors are limited by a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolution scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolution David objects are automatically assigned to their corresponding Kinect objects by the use of surface feature histograms and SVM classification. The corresponding objects are fitted using an ICP implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255
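
The ICP fitting step can be illustrated with a minimal 2D rigid ICP (nearest-neighbour correspondences plus a Kabsch/SVD alignment); this is a generic sketch with toy data, not the paper's implementation:

```python
import numpy as np

def icp(src, dst, iters=20):
    """Minimal rigid ICP: alternate nearest-neighbour matching and the
    best-fit rotation/translation (Kabsch via SVD)."""
    src = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(src[:, None] - dst[None], axis=2)
        nn = dst[d.argmin(axis=1)]            # nearest neighbour of each point
        cs, cn = src.mean(0), nn.mean(0)
        H = (src - cs).T @ (nn - cn)          # cross-covariance of centred sets
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cn - R @ cs
        src = src @ R.T + t
    return src

dst = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
theta = 0.3                                    # toy misalignment: rotate + shift
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
src = dst @ R.T + np.array([0.2, -0.1])
aligned = icp(src, dst)
err = np.linalg.norm(aligned - dst, axis=1).max()
```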

  3. A Robust MEMS Based Multi-Component Sensor for 3D Borehole Seismic Arrays

    SciTech Connect

    Paulsson Geophysical Services

    2008-03-31

    The objective of this project was to develop, prototype and test a robust multi-component sensor that combines both fiber-optic and MEMS technology for use in a borehole seismic array. The use of such FOMEMS-based sensors allows a dramatic increase in the number of sensors that can be deployed simultaneously in a borehole seismic array. Therefore, denser sampling of the seismic wave field can be afforded, which in turn allows us to efficiently and adequately sample P-waves as well as S-waves for high-resolution imaging purposes. Design, packaging and integration of the multi-component sensors and deployment system target a maximum operating temperature of 350-400 °F and a maximum pressure of 15,000-25,000 psi, thus allowing operation under conditions encountered in deep gas reservoirs. This project aimed at using existing pieces of deployment technology as well as MEMS and fiber-optic technology. A sensor design and analysis study has been carried out, and a laboratory prototype of an interrogator for a robust borehole seismic array system has been assembled and validated.

  4. Discriminating crop, weeds and soil surface with a terrestrial LIDAR sensor.

    PubMed

    Andújar, Dionisio; Rueda-Ayala, Victor; Moreno, Hugo; Rosell-Polo, Joan Ramón; Escolá, Alexandre; Valero, Constantino; Gerhards, Roland; Fernández-Quintanilla, César; Dorado, José; Griepentrog, Hans-Werner

    2013-01-01

    In this study, the accuracy and performance of a light detection and ranging (LIDAR) sensor for vegetation were evaluated, using distance and reflection measurements to detect and discriminate maize plants and weeds from the soil surface. The study continues previous work carried out in a maize field in Spain with a LIDAR sensor using exclusively one index, the height profile. The current system uses a combination of the two indices. The experiment was carried out in a maize field at growth stage 12-14, at 16 different locations selected to represent the widest possible density of the weeds Echinochloa crus-galli (L.) P.Beauv., Lamium purpureum L., Galium aparine L. and Veronica persica Poir. A terrestrial LIDAR sensor was mounted on a tripod pointing to the inter-row area, with its horizontal axis and the field of view pointing vertically downwards to the ground, scanning a vertical plane with the potential presence of vegetation. Immediately after the LIDAR data acquisition (distances and reflection measurements), actual heights of plants were estimated using an appropriate methodology; for that purpose, digital images were taken of each sampled area. Data showed a high correlation between LIDAR-measured height and actual plant heights (R² = 0.75). Binary logistic regression between weed presence/absence and the sensor readings (LIDAR height and reflection values) was used to validate the accuracy of the sensor. This permitted the discrimination of vegetation from the ground with an accuracy of up to 95%. In addition, a Canonical Discriminant Analysis (CDA) was able to discriminate mostly between soil and vegetation and, to a far lesser extent, between crop and weeds. The studied methodology thus emerges as a good system for weed detection which, in combination with other principles such as vision-based technologies, could improve the efficiency and accuracy of herbicide spraying. PMID:24172283
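
The binary logistic regression between vegetation presence and the sensor readings (height, reflection) can be sketched as below; the training data are synthetic, and the plain gradient-descent fit is a stand-in for whatever statistics package the authors used:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic samples (illustrative only): columns are [height_cm, reflection]
n = 200
soil = np.column_stack([rng.normal(0.5, 0.3, n), rng.normal(0.20, 0.05, n)])
veg  = np.column_stack([rng.normal(8.0, 2.0, n), rng.normal(0.60, 0.10, n)])
X = np.vstack([soil, veg])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = soil, 1 = vegetation

# Logistic regression on [1, height, reflection] via gradient descent
Xb = np.column_stack([np.ones(len(X)), X])
w = np.zeros(3)
for _ in range(2000):
    z = np.clip(Xb @ w, -30, 30)                # avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.05 * Xb.T @ (p - y) / len(y)

pred = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30))) > 0.5
accuracy = (pred == (y > 0.5)).mean()
```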

  6. 3D-information fusion from very high resolution satellite sensors

    NASA Astrophysics Data System (ADS)

    Krauss, T.; d'Angelo, P.; Kuschk, G.; Tian, J.; Partovi, T.

    2015-04-01

    In this paper we show the pre-processing, and the potential for environmental applications, of very high resolution (VHR) satellite stereo imagery such as that from WorldView-2 or Pléiades, with ground sampling distances (GSD) of half a metre to a metre. To process such data, first a dense digital surface model (DSM) has to be generated. Afterwards, from this a digital terrain model (DTM) representing the ground and a so-called normalized digital elevation model (nDEM) representing off-ground objects are derived. Combining these elevation-based data with a spectral classification allows detection and extraction of objects from the satellite scenes. Besides object extraction, the DSM and DTM can also be used directly for simulation and monitoring of environmental issues. Examples are the simulation of flooding, building-volume and population estimation, simulation of noise from roads, wave propagation for cellphones, wind and light for estimating renewable energy sources, 3D change detection, earthquake preparedness and crisis relief, urban development and sprawl of informal settlements, and much more. Also outside of urban areas, volume information brings literally a new dimension to earth observation tasks such as volume estimation of forests and illegal logging, volume of (illegal) open-pit mining activities, estimation of flooding or tsunami risks, dike planning, etc. In this paper we present the pre-processing from the original level-1 satellite data to digital surface models (DSMs), corresponding VHR ortho images and derived digital terrain models (DTMs). From these components we present how a monitoring and decision-fusion-based 3D change detection can be realized by using different acquisitions. The results are analyzed and assessed to derive quality parameters for the presented method. Finally, the usability of 3D information fusion from VHR satellite imagery is discussed and evaluated.
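
The DSM/DTM/nDEM relationship can be illustrated in a few lines: subtracting the terrain from the surface model leaves only off-ground objects, from which, e.g., building volume follows. Grid values and the building threshold are invented:

```python
import numpy as np

# Toy 4x4 DSM (surface heights, m) and DTM (bare ground, m); 1 m grid cells
dsm = np.array([[10., 10., 18., 18.],
                [10., 10., 18., 18.],
                [10., 10., 10., 25.],
                [10., 10., 10., 25.]])
dtm = np.full((4, 4), 10.0)

ndem = dsm - dtm                        # off-ground heights only
buildings = ndem > 2.0                  # threshold for building candidates
volume = ndem[buildings].sum() * 1.0    # cell area 1 m^2 -> volume in m^3
```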

  7. Use of a terrestrial LIDAR sensor for drift detection in vineyard spraying.

    PubMed

    Gil, Emilio; Llorens, Jordi; Llop, Jordi; Fàbregas, Xavier; Gallart, Montserrat

    2013-01-01

    The use of a scanning Light Detection and Ranging (LIDAR) system to characterize drift during pesticide application is described. The LIDAR system is compared with an ad hoc test bench used to quantify the amount of spray liquid moving beyond the canopy. Two sprayers were used during the field test: a conventional mist blower at two air flow rates (27,507 and 34,959 m³·h⁻¹) equipped with two different nozzle types (conventional and air injection), and a multi-row sprayer with individually oriented air outlets. A simple model based on a linear function was used to predict spray deposit from LIDAR measurements and to compare with the deposits measured over the test bench. Results showed differences in the effectiveness of the LIDAR sensor depending on the sprayed droplet size (nozzle type) and air intensity. For the conventional mist blower at the low air flow rate, the sensor detects a greater number of drift drops, obtaining a better correlation (r = 0.91; p < 0.01) than in the case of coarse droplets or the high air flow rate. In the case of the multi-row sprayer, drift deposition on the test bench was very poor. In general, the use of the LIDAR sensor presents an interesting and easy technique to establish the potential drift of a specific spray situation, as an adequate alternative for the evaluation of drift potential. PMID:23282583
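
The simple linear model relating LIDAR drift measurements to bench deposits can be sketched as an ordinary least-squares fit; all numbers below are invented for illustration:

```python
import numpy as np

# Illustrative paired observations (values invented): drift drops detected by
# the LIDAR per scan vs. deposit collected on the test bench (uL/cm^2)
lidar_counts = np.array([120.0, 180.0, 260.0, 310.0, 400.0, 470.0])
bench_deposit = np.array([0.8, 1.3, 1.9, 2.2, 3.0, 3.4])

slope, intercept = np.polyfit(lidar_counts, bench_deposit, 1)   # linear model
r = np.corrcoef(lidar_counts, bench_deposit)[0, 1]              # Pearson r
predicted = slope * lidar_counts + intercept   # deposit predicted from LIDAR
```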

  9. 3D integration technology for sensor application using less than 5 μm-pitch gold cone-bump connection

    NASA Astrophysics Data System (ADS)

    Motoyoshi, M.; Miyoshi, T.; Ikebec, M.; Arai, Y.

    2015-03-01

    Three-dimensional (3D) integrated circuit (IC) technology is an effective solution to reduce the manufacturing costs of advanced two-dimensional (2D) large-scale integration (LSI) while ensuring equivalent device performance and functionalities. This technology allows a new device architecture using stacked detector/sensor devices with a small dead sensor area and high-speed operation that facilitates hyper-parallel data processing. In pixel detectors or focal-plane sensor devices, each pixel area must accommodate many transistors without increasing the pixel size. Consequently, many methods to realize 3D-LSI devices have been developed to meet this requirement by focusing on the unit processes of 3D-IC technology, such as through-silicon via formation and electrical and mechanical bonding between tiers of the stack. The bonding process consists of several unit processes such as bump or metal contact formation, chip/wafer alignment, chip/wafer bonding, and underfill formation; many process combinations have been reported. Our research focuses on a versatile bonding technology for silicon LSI, compound semiconductor, and microelectromechanical system devices at temperatures of less than 200 °C for heterogeneous integration. A gold (Au) cone bump formed by nanoparticle deposition is one of the promising candidates for this purpose. This paper presents the experimental result of a fabricated prototype with 3-μm-diameter Au cone-bump connections with adhesive injection, and compares it with that of an indium microbump (μ-bump). The resistance of the 3-μm-diameter Au cone bump is approximately 6 Ω. We also investigated the influence of stress caused by the bump junction on the MOS characteristics.

  10. Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors

    PubMed Central

    Ge, Song; Fan, Guoliang

    2015-01-01

    We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673

  11. Development of Lidar Sensor Systems for Autonomous Safe Landing on Planetary Bodies

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Pierottet, Diego F.; Petway, Larry B.; Vanek, Michael D.

    2010-01-01

    Lidar has been identified by NASA as a key technology for enabling autonomous safe landing of future robotic and crewed lunar landing vehicles. NASA LaRC has been developing three laser/lidar sensor systems under the ALHAT project. The capabilities of these Lidar sensor systems were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard helicopters and a fixed wing aircraft. The airborne tests were performed over Moon-like terrain in the California and Nevada deserts. These tests provided the necessary data for the development of signal processing software, and algorithms for hazard detection and navigation. The tests helped identify technology areas needing improvement and will also help guide future technology advancement activities.

  12. Reducing the influence of direct reflection on return signal detection in a 3D imaging lidar system by rotating the polarizing beam splitter.

    PubMed

    Wang, Chunhui; Lee, Xiaobao; Cui, Tianxiang; Qu, Yang; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-03-01

    The direction rule of a laser beam traveling through a deflected polarizing beam splitter (PBS) cube is derived. It reveals that, owing to end-face reflection of the PBS at the detector side, the emergent beam from the incident beam remains parallel to its direction in the original, unrotated case, with only a very small translation between them; the formula for this translation interval is also given. Meanwhile, the emergent beam from the return signal at the detector side is deflected by an angle twice the PBS rotation angle. This has been verified experimentally. The intensity transmittance of the emergent beam propagating in the PBS changes very little if the rotation angle is less than 35 deg. In a 3D imaging lidar system, rotating the PBS cube by an angle separates the optical axis of the return signal from the original one, which can reduce or eliminate the influence of direct reflection from the prism end face on target return signal detection. This has also been checked by experiment. PMID:26974613
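
The stated geometry, with the return-signal axis deflected by twice the PBS rotation angle, implies a lateral separation at the detector that can be computed directly; the rotation angle and detector distance below are assumed values:

```python
import math

def return_beam_offset(theta_deg, detector_dist_mm):
    """Lateral separation of the return-signal axis at the detector plane:
    rotating the PBS by theta deflects the reflected return beam by 2*theta,
    so the offset grows as distance * tan(2*theta)."""
    return detector_dist_mm * math.tan(math.radians(2.0 * theta_deg))

offset_mm = return_beam_offset(5.0, 50.0)  # 5 deg rotation, 50 mm to detector
```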

  13. Characterization of the first double-sided 3D radiation sensors fabricated at FBK on 6-inch silicon wafers

    NASA Astrophysics Data System (ADS)

    Sultan, D. M. S.; Mendicino, R.; Boscardin, M.; Ronchin, S.; Zorzi, N.; Dalla Betta, G.-F.

    2015-12-01

    Following 3D pixel sensor production for the ATLAS Insertable B-Layer, Fondazione Bruno Kessler (FBK) fabrication facility has recently been upgraded to process 6-inch wafers. In 2014, a test batch was fabricated to check for possible issues relevant to this upgrade. While maintaining a double-sided fabrication technology, some process modifications have been investigated. We report here on the technology and the design of this batch, and present selected results from the electrical characterization of sensors and test structures. Notably, the breakdown voltage is shown to exceed 200 V before irradiation, much higher than in earlier productions, demonstrating robustness in terms of radiation hardness for forthcoming productions aimed at High Luminosity LHC upgrades.

  14. 3D vision sensor and its algorithm on clone seedlings plant system

    NASA Astrophysics Data System (ADS)

    Hayashi, Jun-ichiro; Hiroyasu, Takehisa; Hojo, Hirotaka; Hata, Seiji; Okada, Hiroshi

    2007-01-01

    Today, vision systems for robots have been widely applied in many important applications, but 3-D vision systems for industrial use face many practical problems. Here, a vision system for bio-production is introduced. Clone seedling plants are one of the important applications of biotechnology. Most of the production processes for clone seedling plants are highly automated, but the transplanting of the small seedling plants cannot be automated, because their shapes are not stable, and handling them requires observing the shape of each small plant. In this research, a robot vision system has been introduced for the transplanting process in a plant factory.

  15. A method of improving the dynamic response of 3D force/torque sensors

    NASA Astrophysics Data System (ADS)

    Osypiuk, Rafał; Piskorowski, Jacek; Kubus, Daniel

    2016-02-01

    In this paper, attention is drawn to the adverse dynamic properties of filters implemented in commercial force/torque sensors, measurement systems which are increasingly used in industrial robotics. To remedy the problem, it is proposed to employ a time-variant filter with appropriately modulated parameters, owing to which it is possible to suppress the amplitude of the transient response and, at the same time, to increase the pulsation of the damped oscillations; this results in improved dynamic properties in terms of reduced transient duration. This property plays a key role in force control and in the fundamental problem of a robot establishing contact with a rigid environment. The parametric filters have been verified experimentally and compared with the filters available for force/torque sensors manufactured by JR3. The obtained results clearly indicate the advantages of the proposed solution, which may be an interesting alternative to classic filtering methods.
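
The core idea, modulating the damping of a second-order filter during the transient to suppress overshoot, can be sketched with a small simulation; the filter parameters and modulation schedule below are assumptions, not the commercial JR3 filter or the authors' design:

```python
import numpy as np

def step_response(zeta_fn, wn=50.0, dt=1e-4, T=0.3):
    """Semi-implicit Euler simulation of  y'' + 2*zeta(t)*wn*y' + wn^2*y = wn^2
    (unit-step input) with a possibly time-varying damping ratio zeta(t)."""
    n = int(T / dt)
    y, v = 0.0, 0.0
    out = np.empty(n)
    for k in range(n):
        a = wn * wn * (1.0 - y) - 2.0 * zeta_fn(k * dt) * wn * v
        v += a * dt
        y += v * dt
        out[k] = y
    return out

fixed = step_response(lambda t: 0.4)                       # constant damping
varied = step_response(lambda t: 1.2 if t < 0.1 else 0.4)  # damped transient
overshoot_fixed = fixed.max() - 1.0
overshoot_varied = varied.max() - 1.0   # markedly smaller overshoot
```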

  16. 3D gait assessment in young and elderly subjects using foot-worn inertial sensors.

    PubMed

    Mariani, Benoit; Hoskovec, Constanze; Rochat, Stephane; Büla, Christophe; Penders, Julien; Aminian, Kamiar

    2010-11-16

    This study describes the validation of a new wearable system for the assessment of 3D spatial parameters of gait. The new method is based on the detection of temporal parameters, coupled with optimized fusion and de-drifted integration of inertial signals. Composed of two wireless inertial modules attached to the feet, the system provides stride length, stride velocity, foot clearance, and turning angle at each gait cycle, based on the computation of 3D foot kinematics. Accuracy and precision of the proposed system were evaluated against an optical motion capture system as reference. Its repeatability across measurements (test-retest reliability) was also evaluated. Measurements were performed in 10 young (mean age 26.1±2.8 years) and 10 elderly volunteers (mean age 71.6±4.6 years) who were asked to perform U-shaped and 8-shaped walking trials, and then a 6-min walking test (6MWT). A total of 974 gait cycles were used to compare gait parameters with the reference system. Mean accuracy±precision was 1.5±6.8 cm for stride length, 1.4±5.6 cm/s for stride velocity, 1.9±2.0 cm for foot clearance, and 1.6±6.1° for turning angle. A difference in gait performance was observed between young and elderly volunteers during the 6MWT, particularly in foot clearance. The proposed method allows the analysis of various aspects of gait, including turns, gait initiation and termination, and inter-cycle variability. The system is lightweight, easy to wear and use, and suitable for clinical applications requiring objective evaluation of gait outside the lab environment. PMID:20656291
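A minimal sketch of one ingredient of the stride-length estimation described in this record: drift-corrected ("de-drifted") integration of foot acceleration between two successive zero-velocity (foot-flat) events. The linear de-drifting scheme and the synthetic signal below are illustrative assumptions, not the authors' exact fusion algorithm.

```python
import numpy as np

def stride_length(acc, dt):
    """acc: forward acceleration samples (m/s^2) over one gait cycle,
    bounded by two foot-flat (zero-velocity) events."""
    v = np.cumsum(acc) * dt                      # raw velocity by integration
    # velocity must be zero again at the next foot-flat, so the residual
    # v[-1] is attributed to sensor bias and removed linearly (ZUPT-style)
    v -= np.linspace(0.0, v[-1], len(v))
    return float(np.sum(v) * dt)                 # integrate to displacement

# synthetic cycle: bell-shaped true velocity plus a constant 0.3 m/s^2 bias
dt = 0.01
t = np.arange(0.0, 1.0, dt)
v_true = np.sin(np.pi * t) ** 2                  # zero at both foot-flats
acc = np.gradient(v_true, dt) + 0.3              # biased accelerometer signal
print(stride_length(acc, dt))                    # close to the true 0.5 m stride
```

Despite the constant accelerometer bias, the de-drifted integral recovers the true stride length to within discretization error; without the correction the bias alone would add roughly 0.15 m per cycle.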

  17. The Bubble Box: Towards an Automated Visual Sensor for 3D Analysis and Characterization of Marine Gas Release Sites.

    PubMed

    Jordt, Anne; Zelenka, Claudius; von Deimling, Jens Schneider; Koch, Reinhard; Köser, Kevin

    2015-01-01

    Several acoustic and optical techniques have been used for characterizing natural and anthropogenic gas leaks (carbon dioxide, methane) from the ocean floor. Here, single-camera based methods for bubble stream observation have become an important tool, as they help estimate flux and bubble sizes under certain assumptions. However, they record only a projection of a bubble into the camera and therefore cannot capture the full 3D shape, which is particularly important for larger, non-spherical bubbles. The unknown distance of the bubble to the camera (making it appear larger or smaller than expected) as well as refraction at the camera interface introduce extra uncertainties. In this article, we introduce our wide-baseline stereo-camera deep-sea sensor, the Bubble Box, which overcomes these limitations, as it observes bubbles from two orthogonal directions using calibrated cameras. Besides the setup and the hardware of the system, we discuss appropriate calibration and the different automated processing steps (deblurring, detection, tracking, and 3D fitting) that are crucial to arrive at a 3D ellipsoidal shape and rise speed for each bubble. The obtained values for single bubbles can be aggregated into statistical bubble size distributions or fluxes for extrapolation based on diffusion and dissolution models and large-scale acoustic surveys. We demonstrate and evaluate the wide-baseline stereo measurement model using a controlled test setup with ground truth information. PMID:26690168

  18. The Bubble Box: Towards an Automated Visual Sensor for 3D Analysis and Characterization of Marine Gas Release Sites

    PubMed Central

    Jordt, Anne; Zelenka, Claudius; Schneider von Deimling, Jens; Koch, Reinhard; Köser, Kevin

    2015-01-01

    Several acoustic and optical techniques have been used for characterizing natural and anthropogenic gas leaks (carbon dioxide, methane) from the ocean floor. Here, single-camera based methods for bubble stream observation have become an important tool, as they help estimate flux and bubble sizes under certain assumptions. However, they record only a projection of a bubble into the camera and therefore cannot capture the full 3D shape, which is particularly important for larger, non-spherical bubbles. The unknown distance of the bubble to the camera (making it appear larger or smaller than expected) as well as refraction at the camera interface introduce extra uncertainties. In this article, we introduce our wide-baseline stereo-camera deep-sea sensor, the Bubble Box, which overcomes these limitations, as it observes bubbles from two orthogonal directions using calibrated cameras. Besides the setup and the hardware of the system, we discuss appropriate calibration and the different automated processing steps (deblurring, detection, tracking, and 3D fitting) that are crucial to arrive at a 3D ellipsoidal shape and rise speed for each bubble. The obtained values for single bubbles can be aggregated into statistical bubble size distributions or fluxes for extrapolation based on diffusion and dissolution models and large-scale acoustic surveys. We demonstrate and evaluate the wide-baseline stereo measurement model using a controlled test setup with ground truth information. PMID:26690168

  19. An analogue contact probe using a compact 3D optical sensor for micro/nano coordinate measuring machines

    NASA Astrophysics Data System (ADS)

    Li, Rui-Jun; Fan, Kuang-Chao; Miao, Jin-Wei; Huang, Qiang-Xian; Tao, Sheng; Gong, Er-min

    2014-09-01

    This paper presents a new high-precision analogue contact probe based on a compact 3D optical sensor. The sensor comprises an autocollimator and a polarizing Michelson interferometer, which can detect two angles and one displacement of a plane mirror at the same time. In this probe system, a tungsten stylus with a ruby tip-ball is attached to a floating plate, which is supported by four V-shaped leaf springs fixed to the outer case. When a contact force is applied to the tip, the leaf springs undergo elastic deformation and the plane mirror mounted on the floating plate is displaced. The force-motion characteristics of this probe were investigated and optimum parameters were obtained under the constraint of the allowable physical size of the probe. Simulation results show that the probe is uniform in 3D and its contact-force gradient is within 1 mN/µm. Experimental results indicate that the probe has 1 nm resolution, a ±10 µm measuring range in the X-Y plane, a 10 µm measuring range in the Z direction, and a measurement standard deviation within 30 nm. The feasibility of the probe was preliminarily verified by testing the flatness and step height of high-precision gauge blocks.
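A hedged arithmetic check of the quoted specifications (illustrative only, not from the paper): a contact-force gradient of 1 mN/µm combined with the ±10 µm in-plane range implies a maximum in-plane contact force of about 10 mN at full deflection.

```python
# force at full deflection = stiffness gradient x range
grad_mN_per_um = 1.0    # quoted contact-force gradient, mN/µm
range_um = 10.0         # quoted in-plane measuring range, µm
max_force_mN = grad_mN_per_um * range_um
print(max_force_mN)     # 10.0 (mN at full in-plane deflection)
```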

  20. Microwave and camera sensor fusion for the shape extraction of metallic 3D space objects

    NASA Technical Reports Server (NTRS)

    Shaw, Scott W.; Defigueiredo, Rui J. P.; Krishen, Kumar

    1989-01-01

    The vacuum of space presents special problems for optical image sensors. Metallic objects in this environment can produce intense specular reflections and deep shadows. By combining the polarized RCS with an incomplete camera image, it has become possible to better determine the shape of some simple three-dimensional objects. The radar data are used in an iterative procedure that generates successive approximations to the target shape by minimizing the error between computed scattering cross-sections and the observed radar returns. Favorable results have been obtained for simulations and experiments reconstructing plates, ellipsoids, and arbitrary surfaces.

  1. Retrieval of Vegetation Structure and Carbon Balance Parameters Using Ground-Based Lidar and Scaling to Airborne and Spaceborne Lidar Sensors

    NASA Astrophysics Data System (ADS)

    Strahler, A. H.; Ni-Meister, W.; Woodcock, C. E.; Li, X.; Jupp, D. L.; Culvenor, D.

    2006-12-01

    This research uses a ground-based, upward hemispherical scanning lidar to retrieve forest canopy structural information, including tree height, mean tree diameter, basal area, stem count density, crown diameter, woody biomass, and green biomass. These parameters are then linked to airborne and spaceborne lidars to provide large-area mapping of structural and biomass parameters. The terrestrial lidar instrument, Echidna(TM), developed by CSIRO Australia, allows rapid acquisition of vegetation structure data that can be readily integrated with downward-looking airborne lidar, such as LVIS (Laser Vegetation Imaging Sensor), and spaceborne lidar, such as GLAS (Geoscience Laser Altimeter System) on ICESat. Lidar waveforms and vegetation structure are linked for these three sensors through the hybrid geometric-optical radiative-transfer (GORT) model, which uses basic vegetation structure parameters and principles of geometric optics, coupled with radiative transfer theory, to model scattering and absorption of light by collections of individual plant crowns. Use of a common model for lidar waveforms at ground, airborne, and spaceborne levels facilitates integration and scaling of the data to provide large-area maps and inventories of vegetation structure and carbon stocks. Our research plan includes acquisition of Echidna(TM) under-canopy hemispherical lidar scans at North American test sites where LVIS and GLAS data have been or are being acquired; analysis and modeling of spatially coincident lidar waveforms acquired by the three sensor systems; linking of the three data sources using the GORT model; and mapping of vegetation structure and carbon-balance parameters at LVIS and GLAS resolutions based on Echidna(TM) measurements.

  2. A Compact 3D Omnidirectional Range Sensor of High Resolution for Robust Reconstruction of Environments

    PubMed Central

    Marani, Roberto; Renò, Vito; Nitti, Massimiliano; D'Orazio, Tiziana; Stella, Ettore

    2015-01-01

    In this paper, an accurate range sensor for the three-dimensional reconstruction of environments is designed and developed. Following the principles of laser profilometry, the device exploits a set of optical transmitters able to project a laser line on the environment. A high-resolution and high-frame-rate camera assisted by a telecentric lens collects the laser light reflected by a parabolic mirror, whose shape is designed ad hoc to achieve a maximum measurement error of 10 mm when the target is placed 3 m away from the laser source. Measurements are derived by means of an analytical model, whose parameters are estimated during a preliminary calibration phase. Geometrical parameters, analytical modeling and image processing steps are validated through several experiments, which indicate the capability of the proposed device to recover the shape of a target with high accuracy. Experimental measurements show Gaussian statistics, having standard deviation of 1.74 mm within the measurable range. Results prove that the presented range sensor is a good candidate for environmental inspections and measurements. PMID:25621605
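The record above is built on laser profilometry. As a hedged illustration of the underlying principle only (a generic pinhole triangulation, not the paper's catadioptric model with parabolic mirror and telecentric lens), range can be computed from the pixel offset of the projected laser line; all numbers below are made up.

```python
def triangulate_range(b_m, f_px, offset_px):
    """Classic laser triangulation: range z = b * f / d for a
    laser/camera baseline b (metres), focal length f (pixels) and
    measured laser-line disparity d (pixels)."""
    return b_m * f_px / offset_px

# illustrative values: 10 cm baseline, 1000 px focal length, 40 px offset
z = triangulate_range(b_m=0.1, f_px=1000.0, offset_px=40.0)
print(z)  # 2.5 (metres)
```

The inverse dependence on the pixel offset is why range error grows with distance, consistent with the paper designing its mirror shape around a maximum error budget at 3 m.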

  3. Integrating Sensors into a Marine Drone for Bathymetric 3D Surveys in Shallow Waters.

    PubMed

    Giordano, Francesco; Mattei, Gaia; Parente, Claudio; Peluso, Francesco; Santamaria, Raffaele

    2015-01-01

    This paper demonstrates that accurate data concerning bathymetry as well as environmental conditions in shallow waters can be acquired using sensors that are integrated into the same marine vehicle. An open prototype of an unmanned surface vessel (USV) named MicroVeGA is described. The focus is on the main instruments installed on board: a differential Global Position System (GPS) and single-beam echo sounder; an inertial platform for attitude control; an ultrasound obstacle-detection system with temperature control; and an emerged and submerged video acquisition system. The results of two case studies are presented, both concerning areas (Sorrento Marina Grande and Marechiaro Harbour, both in the Gulf of Naples) characterized by a coastal physiography that impedes the execution of a bathymetric survey with traditional boats. In addition, these areas are critical because of the presence of submerged archaeological remains that produce rapid changes in depth values. The experiments confirm that the integration of the sensors improves the instruments' performance and survey accuracy. PMID:26729117
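A minimal sketch of a standard single-beam sounding reduction, the kind of correction such a USV survey must apply (the exact corrections used aboard MicroVeGA are not given in the abstract, so the formula and numbers are illustrative): charted depth = measured depth + transducer draft - tide height above datum.

```python
def reduce_sounding(measured_m, draft_m, tide_m):
    """Reduce a raw echo-sounder depth to chart datum.
    measured_m: depth below the transducer (m)
    draft_m:    transducer depth below the waterline (m)
    tide_m:     tide height above chart datum at survey time (m)"""
    return measured_m + draft_m - tide_m

d = reduce_sounding(measured_m=4.20, draft_m=0.15, tide_m=0.35)
print(round(d, 2))  # 4.0
```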

  4. Integrating Sensors into a Marine Drone for Bathymetric 3D Surveys in Shallow Waters

    PubMed Central

    Giordano, Francesco; Mattei, Gaia; Parente, Claudio; Peluso, Francesco; Santamaria, Raffaele

    2015-01-01

    This paper demonstrates that accurate data concerning bathymetry as well as environmental conditions in shallow waters can be acquired using sensors that are integrated into the same marine vehicle. An open prototype of an unmanned surface vessel (USV) named MicroVeGA is described. The focus is on the main instruments installed on board: a differential Global Position System (GPS) and single-beam echo sounder; an inertial platform for attitude control; an ultrasound obstacle-detection system with temperature control; and an emerged and submerged video acquisition system. The results of two case studies are presented, both concerning areas (Sorrento Marina Grande and Marechiaro Harbour, both in the Gulf of Naples) characterized by a coastal physiography that impedes the execution of a bathymetric survey with traditional boats. In addition, these areas are critical because of the presence of submerged archaeological remains that produce rapid changes in depth values. The experiments confirm that the integration of the sensors improves the instruments’ performance and survey accuracy. PMID:26729117

  5. Efficient Data Gathering in 3D Linear Underwater Wireless Sensor Networks Using Sink Mobility

    PubMed Central

    Akbar, Mariam; Javaid, Nadeem; Khan, Ayesha Hussain; Imran, Muhammad; Shoaib, Muhammad; Vasilakos, Athanasios

    2016-01-01

    Due to the harsh and unpredictable underwater environment, designing an energy-efficient routing protocol for underwater wireless sensor networks (UWSNs) demands greater accuracy and extra computation. In the proposed scheme, we introduce a mobile sink (MS), i.e., an autonomous underwater vehicle (AUV), as well as courier nodes (CNs), to minimize the energy consumption of nodes. The MS and CNs stop at specific points for data gathering; the CNs then forward the received data to the MS for further transmission. Through the mobility of the CNs and MS, the overall energy consumption of nodes is minimized. We perform simulations to investigate the performance of the proposed scheme and compare it to preexisting techniques. Simulation results are compared in terms of network lifetime, throughput, path loss, transmission loss and packet drop ratio. The results show that the proposed technique performs better in terms of network lifetime, throughput, path loss and scalability. PMID:27007373
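A toy energy model showing why relaying through mobile courier nodes saves energy, as the record above reports. This is an illustrative assumption (a distance-power-law transmission cost), not the paper's protocol or channel model.

```python
def tx_energy(distance, k=2.0, e_per_unit=1.0):
    """Toy model: transmission energy grows as distance**k, where k > 1
    is a path-loss exponent (underwater acoustic losses grow faster
    than linearly with range)."""
    return e_per_unit * distance ** k

d = 100.0
direct = tx_energy(d)                 # sensor transmits straight to the sink
via_courier = 2 * tx_energy(d / 2)    # hop via a courier node at the midpoint
print(direct, via_courier)            # 10000.0 5000.0
```

Because `2 * (d/2)**k < d**k` whenever `k > 1`, inserting intermediate courier stops always reduces per-packet radio energy in this model; the scheme's mobile sink pushes this further by bringing the sink to the data.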

  6. A Robust Method to Detect Zero Velocity for Improved 3D Personal Navigation Using Inertial Sensors

    PubMed Central

    Xu, Zhengyi; Wei, Jianming; Zhang, Bo; Yang, Weijun

    2015-01-01

    This paper proposes a robust zero-velocity (ZV) detector algorithm to accurately identify the stationary periods in a gait cycle. The proposed algorithm adopts an effective gait-cycle segmentation method and introduces a Bayesian network (BN) model, based on inertial sensor measurements and kinesiology knowledge, to infer the ZV period. During the detected ZV period, an Extended Kalman Filter (EKF) is used to estimate the error states and calibrate the position error. The experiments reveal that the removal rate of false ZV detections by the proposed method increases by 80% compared with the traditional method at high walking speeds. Furthermore, based on the detected ZV, the Personal Inertial Navigation System (PINS) algorithm aided by the EKF performs better, especially in the altitude component. PMID:25831086
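For context, a sketch of the "traditional method" this record improves on: a fixed-threshold stance test that flags the foot as stationary when the accelerometer magnitude stays near gravity and the gyroscope magnitude stays near zero. This is NOT the paper's Bayesian-network detector; thresholds and sample values are made up.

```python
import numpy as np

G = 9.81  # gravity, m/s^2

def zero_velocity_mask(acc, gyro, acc_tol=0.4, gyro_tol=0.5):
    """acc, gyro: (N, 3) arrays in m/s^2 and rad/s.
    Returns a boolean mask, True where the foot appears stationary."""
    acc_ok = np.abs(np.linalg.norm(acc, axis=1) - G) < acc_tol
    gyro_ok = np.linalg.norm(gyro, axis=1) < gyro_tol
    return acc_ok & gyro_ok

# two stance-phase samples followed by one swing-phase sample
acc = np.array([[0.0, 0.0, 9.81], [0.1, 0.0, 9.7], [3.0, 1.0, 12.0]])
gyro = np.array([[0.01, 0.0, 0.0], [0.0, 0.02, 0.0], [2.0, 1.0, 0.5]])
print(zero_velocity_mask(acc, gyro))  # [ True  True False]
```

Fixed thresholds like these break down at high walking speeds, which is exactly the regime where the record reports its BN-based detector removing far more false detections.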

  7. Efficient Data Gathering in 3D Linear Underwater Wireless Sensor Networks Using Sink Mobility.

    PubMed

    Akbar, Mariam; Javaid, Nadeem; Khan, Ayesha Hussain; Imran, Muhammad; Shoaib, Muhammad; Vasilakos, Athanasios

    2016-01-01

    Due to the unpleasant and unpredictable underwater environment, designing an energy-efficient routing protocol for underwater wireless sensor networks (UWSNs) demands more accuracy and extra computations. In the proposed scheme, we introduce a mobile sink (MS), i.e., an autonomous underwater vehicle (AUV), and also courier nodes (CNs), to minimize the energy consumption of nodes. MS and CNs stop at specific stops for data gathering; later on, CNs forward the received data to the MS for further transmission. By the mobility of CNs and MS, the overall energy consumption of nodes is minimized. We perform simulations to investigate the performance of the proposed scheme and compare it to preexisting techniques. Simulation results are compared in terms of network lifetime, throughput, path loss, transmission loss and packet drop ratio. The results show that the proposed technique performs better in terms of network lifetime, throughput, path loss and scalability. PMID:27007373

  8. A robust method to detect zero velocity for improved 3D personal navigation using inertial sensors.

    PubMed

    Xu, Zhengyi; Wei, Jianming; Zhang, Bo; Yang, Weijun

    2015-01-01

    This paper proposes a robust zero-velocity (ZV) detector algorithm to accurately identify the stationary periods in a gait cycle. The proposed algorithm adopts an effective gait-cycle segmentation method and introduces a Bayesian network (BN) model, based on inertial sensor measurements and kinesiology knowledge, to infer the ZV period. During the detected ZV period, an Extended Kalman Filter (EKF) is used to estimate the error states and calibrate the position error. The experiments reveal that the removal rate of false ZV detections by the proposed method increases by 80% compared with the traditional method at high walking speeds. Furthermore, based on the detected ZV, the Personal Inertial Navigation System (PINS) algorithm aided by the EKF performs better, especially in the altitude component. PMID:25831086

  9. Non-Enzymatic Glucose Sensor Based on 3D Graphene Oxide Hydrogel Crosslinked by Various Diamines.

    PubMed

    Hoa, Le Thuy; Hur, Seung Hyun

    2015-11-01

    A non-enzymatic glucose sensor was fabricated from well-controlled, chemically crosslinked graphene oxide hydrogels (GOHs). By using various diamines, such as ethylenediamine (EDA), p-phenylenediamine (pPDA) and o-phenylenediamine (oPDA), which have different amine-to-amine distances, the structure of the GOHs, such as surface area and pore volume, can be controlled. The GOH crosslinked with pPDA (pPDA-GOH) exhibited the largest surface area and pore volume due to pPDA's longest amine-to-amine distance, which resulted in the highest sensitivity for glucose and for other saccharides such as fructose (C6H12O6), galactose (C6H12O6) and sucrose (C12H22O11). It also showed fast, wide-range glucose sensing ability in amperometric tests, and excellent selectivity against interfering species such as ascorbic acid. PMID:26726578

  10. A New Automatic System Calibration of Multi-Cameras and LIDAR Sensors

    NASA Astrophysics Data System (ADS)

    Hassanein, M.; Moussa, A.; El-Sheimy, N.

    2016-06-01

    In the last few years, multi-camera and LIDAR systems have drawn the attention of the mapping community. They have been deployed on different mobile mapping platforms. The different uses of these platforms, especially UAVs, have offered new applications and developments which require fast and accurate results. The successful calibration of such systems is a key factor in achieving accurate results and in the successful processing of the system measurements, especially given the different types of measurements provided by the LIDAR and the cameras. System calibration aims to estimate the geometric relationships between the different system components. A number of applications require the systems to be ready for operation in a short time, especially for disaster monitoring applications. Also, many of the present system calibration techniques are constrained by the need for special lab arrangements for the calibration procedures. In this paper, a new technique for the calibration of integrated LIDAR and multi-camera systems is presented. The proposed technique offers a calibration solution that overcomes the need for special labs for standard calibration procedures. In the proposed technique, 3D reconstruction of automatically detected and matched image points is used to generate a sparse image-driven point cloud; then, a registration between the LIDAR-generated 3D point cloud and the image-driven 3D point cloud takes place to estimate the geometric relationships between the cameras and the LIDAR. In the presented technique a simple 3D artificial target is used to simplify the lab requirements for the calibration procedure. The target is composed of three intersecting plates. This target geometry was chosen to ensure enough conditions for the convergence of the registration between the 3D point clouds constructed from the two systems. The achieved results of the proposed approach prove its ability to provide an adequate and fully automated calibration without
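The core step in the record above is registering the image-driven point cloud against the LIDAR point cloud. A hedged sketch of that step, assuming point correspondences are already known (the paper's automatic detection and matching is not reproduced here), is the closed-form Kabsch/SVD solution for the rigid transform:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform: returns R (3x3), t (3,) with
    dst ≈ R @ src + t, via the Kabsch/SVD method on centred points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

# synthetic check: recover a known rotation and translation
rng = np.random.default_rng(0)
src = rng.standard_normal((30, 3))
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

In practice the two clouds have no known correspondences, so this solve is iterated inside an ICP-style loop; the paper's three-plate target exists precisely to make that registration converge.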

  11. Ultrasonic and LIDAR sensors for electronic canopy characterization in vineyards: advances to improve pesticide application methods.

    PubMed

    Llorens, Jordi; Gil, Emilio; Llop, Jordi; Escolà, Alexandre

    2011-01-01

    Canopy characterization is a key factor to improve pesticide application methods in tree crops and vineyards. Development of quick, easy and efficient methods to determine the fundamental parameters used to characterize canopy structure is thus an important need. In this research the use of ultrasonic and LIDAR sensors has been compared with the traditional manual and destructive canopy measurement procedure. For both methods the values of key parameters such as crop height, crop width, crop volume or leaf area have been compared. The obtained results indicate that an ultrasonic sensor is an appropriate tool to determine the average canopy characteristics, while a LIDAR sensor provides more accurate and detailed information about the canopy. Good correlations have been obtained between crop volume (C(VU)) values measured with ultrasonic sensors and leaf area index, LAI (R(2) = 0.51). A good correlation has also been obtained between the canopy volume measured with ultrasonic and LIDAR sensors (R(2) = 0.52). Laser measurements of crop height (C(HL)) allow one to accurately predict the canopy volume. The proposed new technologies seem very appropriate as complementary tools to improve the efficiency of pesticide applications, although further improvements are still needed. PMID:22319405
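The coefficients of determination reported in this record (e.g. R² = 0.51 between ultrasonic crop volume and LAI) come from linear regression on paired field measurements. A minimal sketch of that computation follows; the arrays are made-up numbers, not the study's data.

```python
import numpy as np

def r_squared(x, y):
    """R^2 of a least-squares line fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# hypothetical paired measurements (e.g. sensor volume vs. reference LAI)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
print(round(r_squared(x, y), 3))  # 0.989
```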

  12. Ultrasonic and LIDAR Sensors for Electronic Canopy Characterization in Vineyards: Advances to Improve Pesticide Application Methods

    PubMed Central

    Llorens, Jordi; Gil, Emilio; Llop, Jordi; Escolà, Alexandre

    2011-01-01

    Canopy characterization is a key factor to improve pesticide application methods in tree crops and vineyards. Development of quick, easy and efficient methods to determine the fundamental parameters used to characterize canopy structure is thus an important need. In this research the use of ultrasonic and LIDAR sensors has been compared with the traditional manual and destructive canopy measurement procedure. For both methods the values of key parameters such as crop height, crop width, crop volume or leaf area have been compared. The obtained results indicate that an ultrasonic sensor is an appropriate tool to determine the average canopy characteristics, while a LIDAR sensor provides more accurate and detailed information about the canopy. Good correlations have been obtained between crop volume (CVU) values measured with ultrasonic sensors and leaf area index, LAI (R2 = 0.51). A good correlation has also been obtained between the canopy volume measured with ultrasonic and LIDAR sensors (R2 = 0.52). Laser measurements of crop height (CHL) allow one to accurately predict the canopy volume. The proposed new technologies seem very appropriate as complementary tools to improve the efficiency of pesticide applications, although further improvements are still needed. PMID:22319405

  13. Direct Growth of Graphene Films on 3D Grating Structural Quartz Substrates for High-Performance Pressure-Sensitive Sensors.

    PubMed

    Song, Xuefen; Sun, Tai; Yang, Jun; Yu, Leyong; Wei, Dacheng; Fang, Liang; Lu, Bin; Du, Chunlei; Wei, Dapeng

    2016-07-01

    Conformal graphene films have been directly synthesized on the surface of grating-microstructured quartz substrates by a simple chemical vapor deposition process. The excellent conformality and relatively high quality of the as-prepared graphene on the three-dimensional substrate were verified by scanning electron microscopy and Raman spectroscopy. This conformal graphene film possesses excellent electrical and optical properties, with a sheet resistance of <2000 Ω·sq(-1) and a transmittance of >80% (at 550 nm). Paired with a flat graphene film on a poly(dimethylsiloxane) substrate, it can work as a pressure-sensitive sensor. This device possesses a high pressure sensitivity of -6.524 kPa(-1) in the low-pressure range of 0-200 Pa. Meanwhile, the sensor exhibits super-reliability (≥5000 cycles) and an ultrafast response time (≤4 ms). Owing to these features, this pressure-sensitive sensor based on 3D conformal graphene was applied to measuring wind pressure, showing higher accuracy and a lower background noise level than a commercial anemometer. PMID:27269362

  14. Multi-sensor super-resolution for hybrid range imaging with application to 3-D endoscopy and open surgery.

    PubMed

    Köhler, Thomas; Haase, Sven; Bauer, Sebastian; Wasza, Jakob; Kilgus, Thomas; Maier-Hein, Lena; Stock, Christian; Hornegger, Joachim; Feußner, Hubertus

    2015-08-01

    In this paper, we propose a multi-sensor super-resolution framework for hybrid imaging to super-resolve data from one modality by taking advantage of additional guidance images of a complementary modality. This concept is applied to hybrid 3-D range imaging in image-guided surgery, where high-quality photometric data is exploited to enhance range images of low spatial resolution. We formulate super-resolution based on the maximum a-posteriori (MAP) principle and reconstruct high-resolution range data from multiple low-resolution frames and complementary photometric information. Robust motion estimation as required for super-resolution is performed on photometric data to derive displacement fields of subpixel accuracy for the associated range images. For improved reconstruction of depth discontinuities, a novel adaptive regularizer exploiting correlations between both modalities is embedded into the MAP estimation. We evaluated our method on synthetic data as well as ex-vivo images in open surgery and endoscopy. The proposed multi-sensor framework improves the peak signal-to-noise ratio by 2 dB and structural similarity by 0.03 on average compared to conventional single-sensor approaches. In ex-vivo experiments on porcine organs, our method achieves substantial improvements in terms of depth discontinuity reconstruction. PMID:26201876

  15. Underwater monitoring experiment using hyperspectral sensor, LiDAR and high resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Yang, Chan-Su; Kim, Sun-Hwa

    2014-10-01

    In general, hyperspectral sensors, LiDAR and high-spatial-resolution satellite imagery for underwater monitoring depend on water clarity or water transparency, which can be measured using a Secchi disk or satellite ocean color data. Optical properties in the waters around South Korea are influenced mainly by strong tides and oceanic currents, and by diurnal, daily and seasonal variations in water transparency. A satellite-based Secchi depth (ZSD) analysis showed the applicability of hyperspectral sensors, LiDAR and optical satellites, determined by location in connection with the local distribution of Case 1 and Case 2 waters. The southeast coastal areas of Jeju Island were selected as test sites for a combined underwater experiment, because these areas represent Case 1 water. The study area is a small port (<15 m) in the southeast of the island, and a linear underwater target, a sewage pipe, is located in this area. Our experiments are as follows: 1. atmospheric and sun-glint correction methods to improve the underwater monitoring ability; 2. intercomparison of water depths obtained from three different sensors. The three sensors used here are the CASI-1500 Wide-Array Airborne Hyperspectral VNIR Imager (0.38-1.05 microns), the Coastal Zone Mapping and Imaging Lidar (CZMIL) and the Korean Multi-purpose Satellite-3 (KOMPSAT-3) with 2.8 m multispectral resolution. The experimental results were affected by water clarity and surface conditions, and the bathymetric results of the three sensors show some differences caused by the sensors themselves, the bathymetric algorithms and the tide level. It is shown that the CASI-1500 was applicable for bathymetry and underwater target detection in this area, but KOMPSAT-3 results should be improved for Case 1 water. Although this experiment was designed to compare the underwater monitoring ability of LiDAR, CASI-1500 and KOMPSAT-3 data, this paper is based on initial results and presents only the bathymetry and underwater target detection.

  16. On-machine measurement of the grinding wheels' 3D surface topography using a laser displacement sensor

    NASA Astrophysics Data System (ADS)

    Pan, Yongcheng; Zhao, Qingliang; Guo, Bing

    2014-08-01

    A method for non-contact, on-machine measurement of the three-dimensional surface topography of a grinding wheel's whole surface was developed in this paper, focusing on an electroplated coarse-grained diamond grinding wheel. The measuring system consists of a Keyence laser displacement sensor, a Keyence controller and an NI PCI-6132 data acquisition card. A resolution of 0.1 µm in the vertical direction and 8 µm in the horizontal direction could be achieved. After processing the data with LabVIEW and MATLAB, the 3D topography of the grinding wheel's whole surface could be reconstructed. The reconstructed 3D topography of a marked area of the grinding wheel closely matched its real topography captured by a high-depth-of-field optical digital microscope (HDF-ODM) and a scanning electron microscope (SEM), proving that the method is accurate and effective. By subsequent data processing, the topography of every grain could be extracted, and the active grain number, the active grain volume and the active grains' bearing ratio could be calculated. These three parameters can serve as criteria to evaluate the grinding performance of coarse-grained diamond grinding wheels, so that the performance of the grinding wheel can be evaluated on-machine, accurately and quantitatively.
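A hedged sketch of the grain-extraction step: counting "active" grains as connected regions of the height map protruding above a bond-level threshold. The simple 4-connected flood-fill labelling below is an illustrative stand-in for the paper's (unspecified) MATLAB processing, and the height map is synthetic.

```python
import numpy as np

def count_active_grains(height, threshold):
    """Count connected regions of the height map above `threshold`
    (4-connectivity), each region taken as one active grain."""
    mask = height > threshold
    labels = np.zeros(mask.shape, dtype=int)
    n_grains = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                n_grains += 1
                stack = [(i, j)]            # flood-fill one grain
                while stack:
                    a, b = stack.pop()
                    if (0 <= a < mask.shape[0] and 0 <= b < mask.shape[1]
                            and mask[a, b] and labels[a, b] == 0):
                        labels[a, b] = n_grains
                        stack += [(a + 1, b), (a - 1, b),
                                  (a, b + 1), (a, b - 1)]
    return n_grains

h = np.zeros((6, 6))        # heights in µm above the bond level
h[1:3, 1:3] = 50.0          # one protruding grain
h[4, 4] = 80.0              # another isolated grain tip
print(count_active_grains(h, threshold=20.0))  # 2
```

Summing the above-threshold heights of each labelled region would give the active grain volume in the same pass.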

  17. Digital holographic interferometer using simultaneously three lasers and a single monochrome sensor for 3D displacement measurements.

    PubMed

    Saucedo-A, Tonatiuh; De la Torre-Ibarra, M H; Santoyo, F Mendoza; Moreno, Ivan

    2010-09-13

    The use of digital holographic interferometry for 3D measurements using simultaneously three illumination directions was demonstrated by Saucedo et al. (Optics Express 14(4) 2006). The technique records two consecutive images where each one contains three holograms in it, e.g., one before the deformation and one after the deformation. A short coherence length laser must be used to obtain the simultaneous 3D information from the same laser source. In this manuscript we present an extension of this technique now illuminating simultaneously with three different lasers at 458, 532 and 633 nm, and using only one high resolution monochrome CMOS sensor. This new configuration gives the opportunity to use long coherence length lasers allowing the measurement of large object areas. A series of digital holographic interferograms are recorded and the information corresponding to each laser is isolated in the Fourier spectral domain where the corresponding phase difference is calculated. Experimental results render the orthogonal displacement components u, v and w during a simple load deformation. PMID:20940878
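A hedged 1-D illustration of the Fourier-domain separation described in this record: each laser produces a fringe carrier at a different spatial frequency, so windowing around each carrier in the spectrum isolates that laser's contribution. Real holograms are 2-D and the carrier frequencies below are made up, standing in for the 458/532/633 nm channels.

```python
import numpy as np

def isolate_carrier(signal, carrier_bin, half_width):
    """Keep only one side-band around `carrier_bin` in the FFT and
    return the corresponding complex (analytic) signal."""
    spec = np.fft.fft(signal)
    keep = np.zeros(len(signal), dtype=complex)
    lo, hi = carrier_bin - half_width, carrier_bin + half_width + 1
    keep[lo:hi] = spec[lo:hi]
    return np.fft.ifft(keep)

n = 256
x = np.arange(n)
# three superimposed fringe carriers at distinct spatial frequencies
sig = (np.cos(2 * np.pi * 20 * x / n) + np.cos(2 * np.pi * 45 * x / n)
       + np.cos(2 * np.pi * 70 * x / n))
ch = isolate_carrier(sig, carrier_bin=45, half_width=5)
# a unit-amplitude cosine carrier yields an analytic signal of amplitude 0.5
print(round(float(np.abs(ch).mean()), 2))  # 0.5
```

The phase of the recovered complex signal, differenced before and after loading, is what yields the per-laser phase maps and hence the u, v, w displacement components.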

  18. 3D geometrical inspection of complex geometry parts using a novel laser triangulation sensor and a robot.

    PubMed

    Brosed, Francisco Javier; Aguilar, Juan José; Guillomía, David; Santolaria, Jorge

    2011-01-01

    This article discusses different non-contact 3D measuring strategies and presents a model for measuring complex geometry parts, manipulated by a robot arm, using a novel vision system consisting of a laser triangulation sensor and a motorized linear stage. First, the geometric model, incorporating a simple automatic module for long-term stability improvement, is outlined. The new method used in the automatic module allows the sensor, including the motorized linear stage, to be set up for scanning without external measurement devices. In the measurement model the robot is just a positioning device for parts, with high repeatability. Its position and orientation data are not used for the measurement, and it is therefore not directly "coupled" as an active component in the model. The function of the robot is to present the various surfaces of the workpiece within the measurement range of the vision system, which is responsible for the measurement. Thus, the whole system is not affected by the robot's own errors in following a trajectory, except those due to a lack of static repeatability. For the indirect link between the vision system and the robot, the original model developed needs only the measurement of one first piece as a "zero" or master piece, known through its accurate measurement using, for example, a Coordinate Measuring Machine. The strategy proposed presents a different approach from traditional laser triangulation systems on board the robot in order to improve the measurement accuracy, and several important cues for self-recalibration are explored using only a master piece. Experimental results are also presented to demonstrate the technique and the final 3D measurement accuracy. PMID:22346569

  19. 3D Geometrical Inspection of Complex Geometry Parts Using a Novel Laser Triangulation Sensor and a Robot

    PubMed Central

    Brosed, Francisco Javier; Aguilar, Juan José; Guillomía, David; Santolaria, Jorge

    2011-01-01

    This article discusses different non-contact 3D measuring strategies and presents a model for measuring complex geometry parts, manipulated by a robot arm, using a novel vision system consisting of a laser triangulation sensor and a motorized linear stage. First, the geometric model, incorporating a simple automatic module for long-term stability improvement, is outlined. The new method used in the automatic module allows the sensor, including the motorized linear stage, to be set up for scanning without external measurement devices. In the measurement model the robot is just a positioning device for parts, with high repeatability. Its position and orientation data are not used for the measurement, and it is therefore not directly “coupled” as an active component in the model. The function of the robot is to present the various surfaces of the workpiece within the measurement range of the vision system, which is responsible for the measurement. Thus, the whole system is not affected by the robot's own errors in following a trajectory, except those due to a lack of static repeatability. For the indirect link between the vision system and the robot, the original model developed needs only the measurement of one first piece as a “zero” or master piece, known through its accurate measurement using, for example, a Coordinate Measuring Machine. The strategy proposed presents a different approach from traditional laser triangulation systems on board the robot in order to improve the measurement accuracy, and several important cues for self-recalibration are explored using only a master piece. Experimental results are also presented to demonstrate the technique and the final 3D measurement accuracy. PMID:22346569

  20. FLASH LIDAR Based Relative Navigation

    NASA Technical Reports Server (NTRS)

    Brazzel, Jack; Clark, Fred; Milenkovic, Zoran

    2014-01-01

    Relative navigation remains the most challenging part of spacecraft rendezvous and docking. In recent years, flash LIDARs have been increasingly selected as the go-to sensors for proximity operations and docking. Flash LIDARs are generally lighter and require less power than scanning LIDARs. Flash LIDARs have no moving parts, and they are capable of tracking multiple targets as well as generating a 3D map of a given target. However, there are some significant drawbacks of flash LIDARs that must be resolved if their use is to be of long-term significance. Overcoming the challenges of flash LIDARs for navigation, namely low technology readiness level, lack of historical performance data, target identification, existence of false positives, and the performance of vision processing algorithms as intermediaries between the raw sensor data and the Kalman filter, requires a world-class testing facility, such as the Lockheed Martin Space Operations Simulation Center (SOSC). Ground-based testing is a critical step for maturing next-generation flash LIDAR-based spacecraft relative navigation. This paper will focus on the tests of an integrated relative navigation system conducted at the SOSC in January 2014. The intent of the tests was to characterize and then improve the performance of relative navigation, while addressing many of the flash LIDAR challenges mentioned above. A section on navigation performance and future recommendations completes the discussion.

  1. Method for optimal sensor deployment on 3D terrains utilizing a steady state genetic algorithm with a guided walk mutation operator based on the wavelet transform.

    PubMed

    Unaldi, Numan; Temel, Samil; Asari, Vijayan K

    2012-01-01

    One of the most critical issues in Wireless Sensor Networks (WSNs) is the deployment of a limited number of sensors in order to achieve maximum coverage of a terrain. Optimal sensor deployment, which minimizes the consumed energy, communication time and manpower needed for network maintenance, has attracted growing interest, with an increasing number of studies on the subject in the last decade. Most of the studies in the literature today address two-dimensional (2D) surfaces; however, real-world sensor deployments often arise in three-dimensional (3D) environments. In this paper, a guided wavelet transform (WT) based deployment strategy (WTDS) for 3D terrains, in which the sensor movements are carried out within the mutation phase of a genetic algorithm (GA), is proposed. The proposed algorithm aims to maximize the Quality of Coverage (QoC) of a WSN by deploying a limited number of sensors on a 3D surface, utilizing a probabilistic sensing model and Bresenham's line-of-sight (LOS) algorithm. In addition, the method followed in this paper is novel to the literature, and the performance of the proposed algorithm is compared with the Delaunay Triangulation (DT) method as well as a standard genetic algorithm based method; the results reveal that the proposed method is more powerful and more successful for sensor deployment on 3D terrains. PMID:22666078
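
    The line-of-sight test at the heart of such terrain coverage models can be sketched with Bresenham's algorithm on a heightmap (a minimal illustration, not the paper's implementation; the grid, heights and sensor coordinates are invented):

```python
def bresenham(x0, y0, x1, y1):
    """Yield the grid cells on the Bresenham line from (x0, y0) to (x1, y1)."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        yield x0, y0
        if (x0, y0) == (x1, y1):
            return
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy

def has_los(heightmap, sensor, target):
    """True if the straight sight line from sensor to target clears the terrain.

    sensor and target are (x, y, z) tuples; z is an absolute height.
    """
    (x0, y0, z0), (x1, y1, z1) = sensor, target
    cells = list(bresenham(x0, y0, x1, y1))
    n = len(cells) - 1
    for i, (cx, cy) in enumerate(cells[1:-1], start=1):
        # Height of the sight line above cell i, by linear interpolation.
        line_z = z0 + (z1 - z0) * i / n
        if heightmap[cy][cx] > line_z:
            return False  # terrain blocks the ray
    return True

terrain = [
    [0, 0, 0, 0],
    [0, 5, 0, 0],  # a 5-unit ridge between sensor and target
    [0, 0, 0, 0],
]
print(has_los(terrain, (0, 0, 1), (2, 2, 1)))  # False: the ridge blocks the ray
print(has_los(terrain, (0, 0, 9), (2, 2, 9)))  # True: high enough to clear it
```

    A coverage objective would sum a probabilistic sensing term over all target cells for which has_los is true, which is the quantity the genetic algorithm's mutation phase then tries to maximize.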

  2. Method for Optimal Sensor Deployment on 3D Terrains Utilizing a Steady State Genetic Algorithm with a Guided Walk Mutation Operator Based on the Wavelet Transform

    PubMed Central

    Unaldi, Numan; Temel, Samil; Asari, Vijayan K.

    2012-01-01

    One of the most critical issues in Wireless Sensor Networks (WSNs) is the deployment of a limited number of sensors in order to achieve maximum coverage of a terrain. Optimal sensor deployment, which minimizes the consumed energy, communication time and manpower needed for network maintenance, has attracted growing interest, with an increasing number of studies on the subject in the last decade. Most of the studies in the literature today address two-dimensional (2D) surfaces; however, real-world sensor deployments often arise in three-dimensional (3D) environments. In this paper, a guided wavelet transform (WT) based deployment strategy (WTDS) for 3D terrains, in which the sensor movements are carried out within the mutation phase of a genetic algorithm (GA), is proposed. The proposed algorithm aims to maximize the Quality of Coverage (QoC) of a WSN by deploying a limited number of sensors on a 3D surface, utilizing a probabilistic sensing model and Bresenham's line-of-sight (LOS) algorithm. In addition, the method followed in this paper is novel to the literature, and the performance of the proposed algorithm is compared with the Delaunay Triangulation (DT) method as well as a standard genetic algorithm based method; the results reveal that the proposed method is more powerful and more successful for sensor deployment on 3D terrains. PMID:22666078

  3. Triboelectric nanogenerator built on suspended 3D spiral structure as vibration and positioning sensor and wave energy harvester.

    PubMed

    Hu, Youfan; Yang, Jin; Jing, Qingshen; Niu, Simiao; Wu, Wenzhuo; Wang, Zhong Lin

    2013-11-26

    An unstable mechanical structure that can self-balance when perturbed is a superior choice for vibration energy harvesting and vibration detection. In this work, a suspended 3D spiral structure is integrated with a triboelectric nanogenerator (TENG) for energy harvesting and sensor applications. The newly designed vertical contact-separation mode TENG has a wide working bandwidth of 30 Hz in the low-frequency range, with a maximum output power density of 2.76 W/m² on a load of 6 MΩ. The position of an in-plane vibration source was identified by placing TENGs at multiple positions as multichannel, self-powered active sensors, and the location of the vibration source was determined with an error of less than 6%. The magnitude of the vibration is also measured from the output voltage and current signals of the TENG. By integrating the TENG inside a buoy ball, wave energy harvesting at the water surface has been demonstrated and used to power illumination lighting, which shows great potential for applications in marine science and environmental/infrastructure monitoring. PMID:24168315

  4. Definition of the fundamentals for the automatic generation of digitalization processes with a 3D laser sensor

    NASA Astrophysics Data System (ADS)

    Davillerd, Stephane; Sidot, Benoit; Bernard, Alain; Ris, Gabriel

    1998-12-01

    This paper introduces the first results of a research work on the automation of the digitizing process for complex parts using a precision 3D laser sensor. Indeed, most digitizing operations are still performed manually, so redundancies, gaps or omissions in point acquisition are possible. Moreover, the digitizing time for a part, i.e. the immobilization time of the machine, is thus not optimized overall. After introducing the context in which reverse engineering evolves, we briefly present the non-contact sensors and machines that can be used to digitize a part. The digitizing environment considered is also modeled, but in a general way in order to preserve the system's upgrading capability. The machine and sensor actually used are then presented and their integration described. The current digitizing process is then detailed, after which a critical analysis from the considered point of view is carried out and some solutions are suggested. The paper concludes with the prospects laid out and the next planned developments.

  5. Capturing 3D resistivity of semi-arid karstic subsurface in varying moisture conditions using a wireless sensor network

    NASA Astrophysics Data System (ADS)

    Barnhart, K.; Oden, C. P.

    2012-12-01

    The dissolution of soluble bedrock results in surface and subterranean karst channels, which comprise 7-10% of the Earth's dry land surface. Karst serves as a preferential conduit that focuses surface and subsurface water, but it is difficult to exploit as a water resource or protect from pollution because of its irregular structure and nonlinear hydrodynamic behavior. Geophysical characterization of karst commonly employs resistivity and seismic methods, but difficulties arise due to low resistivity contrast in arid environments and insufficient resolution of complex heterogeneous structures. To help reduce these difficulties, we employ a state-of-the-art wireless geophysical sensor array, which combines low-power radio telemetry and solar energy harvesting to enable long-term in-situ monitoring. The wireless aspect removes the topological constraints common with standard wired resistivity equipment, which facilitates better coverage and/or sensor density to help improve aspect ratio and resolution. Continuous in-situ deployment allows data to be recorded on nature's time scale; measurements are made during infrequent precipitation events, which can increase resistivity contrast. The array is coordinated by a smart wireless bridge that continuously monitors local soil moisture content to detect when precipitation occurs, schedules resistivity surveys, and periodically relays data to the cloud via 3G cellular service. Traditional 2D/3D gravity and seismic reflection surveys have also been conducted to clarify and corroborate the results.

  6. Relative Navigation Light Detection and Ranging (LIDAR) Sensor Development Test Objective (DTO) Performance Verification

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.

    2013-01-01

    The NASA Engineering and Safety Center (NESC) received a request from the NASA Associate Administrator (AA) for the Human Exploration and Operations Mission Directorate (HEOMD) to quantitatively evaluate the individual performance of three light detection and ranging (LIDAR) rendezvous sensors flown as an orbiter development test objective on Space Transportation System (STS)-127, STS-133, STS-134, and STS-135. This document contains the outcome of the NESC assessment.

  7. Doppler Lidar Sensor for Precision Navigation in GPS-Deprived Environment

    NASA Technical Reports Server (NTRS)

    Amzajerdian, F.; Pierrottet, D. F.; Hines, G. D.; Petway, L. B.; Barnes, B. W.

    2013-01-01

    Landing mission concepts that are being developed for exploration of solar system bodies are increasingly ambitious in their implementations and objectives. Most of these missions require accurate position and velocity data during their descent phase in order to ensure safe, soft landing at the pre-designated sites. Data from the vehicle's Inertial Measurement Unit will not be sufficient due to significant drift error after extended travel time in space. Therefore, an onboard sensor is required to provide the necessary data for landing in the GPS-deprived environment of space. For this reason, NASA Langley Research Center has been developing an advanced Doppler lidar sensor capable of providing accurate and reliable data suitable for operation in the highly constrained environment of space. The Doppler lidar transmits three laser beams in different directions toward the ground. The signal from each beam provides the platform velocity and range to the ground along the laser line-of-sight (LOS). The six LOS measurements are then combined in order to determine the three components of the vehicle velocity vector, and to accurately measure altitude and attitude angles relative to the local ground. These measurements are used by an autonomous Guidance, Navigation, and Control system to accurately navigate the vehicle from a few kilometers above the ground to the designated location and to execute a gentle touchdown. A prototype version of our lidar sensor has been completed for a closed-loop demonstration onboard a rocket-powered terrestrial free-flyer vehicle.
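
    The step in which the line-of-sight (LOS) Doppler measurements are combined into a full velocity vector reduces to a small linear system. A minimal sketch (the beam geometry and the numbers are assumed for illustration only, not the actual NDL design):

```python
import numpy as np

# Assumed beam geometry (illustrative): three beams canted 22.5 degrees
# off the vertical, spaced 120 degrees apart in azimuth.
cant = np.deg2rad(22.5)
az = np.deg2rad([0.0, 120.0, 240.0])
D = np.column_stack([
    np.sin(cant) * np.cos(az),
    np.sin(cant) * np.sin(az),
    np.cos(cant) * np.ones_like(az),
])  # row i is the unit direction of beam i

# Each beam measures the projection of the vehicle velocity v onto its
# direction: v_los = D @ v. With three independent beams, invert for v.
v_true = np.array([1.5, -0.3, 2.0])  # made-up ground-relative velocity, m/s
v_los = D @ v_true                   # the three LOS Doppler measurements
v_est = np.linalg.solve(D, v_los)    # recovered 3-component velocity vector
```

    With more than three beams, np.linalg.lstsq would give the least-squares velocity estimate instead; the ranges along the same beam directions constrain altitude and attitude analogously.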

  8. Doppler lidar sensor for precision navigation in GPS-deprived environment

    NASA Astrophysics Data System (ADS)

    Amzajerdian, F.; Pierrottet, D. F.; Hines, G. D.; Petway, L. B.; Barnes, B. W.

    2013-05-01

    Landing mission concepts that are being developed for exploration of solar system bodies are increasingly ambitious in their implementations and objectives. Most of these missions require accurate position and velocity data during their descent phase in order to ensure safe, soft landing at the pre-designated sites. Data from the vehicle's Inertial Measurement Unit will not be sufficient due to significant drift error after extended travel time in space. Therefore, an onboard sensor is required to provide the necessary data for landing in the GPS-deprived environment of space. For this reason, NASA Langley Research Center has been developing an advanced Doppler lidar sensor capable of providing accurate and reliable data suitable for operation in the highly constrained environment of space. The Doppler lidar transmits three laser beams in different directions toward the ground. The signal from each beam provides the platform velocity and range to the ground along the laser line-of-sight (LOS). The six LOS measurements are then combined in order to determine the three components of the vehicle velocity vector, and to accurately measure altitude and attitude angles relative to the local ground. These measurements are used by an autonomous Guidance, Navigation, and Control system to accurately navigate the vehicle from a few kilometers above the ground to the designated location and to execute a gentle touchdown. A prototype version of our lidar sensor has been completed for a closed-loop demonstration onboard a rocket-powered terrestrial free-flyer vehicle.

  9. Fusion of Multi-Angle Imaging Spectrometer and LIDAR Data for Forest Structural Parameter Retrieval Using 3D Radiative Transfer Modeling

    NASA Astrophysics Data System (ADS)

    Rubio, J.; Sun, G.; Koetz, B.; Ranson, K. J.; Kimes, D.; Gastellu-Etchegorry, J.

    2008-12-01

    The potential of combined multi-angle/multi-spectral optical imagery and LIDAR waveform data to retrieve forest structural parameters is explored. Our approach relies on two physically based radiative transfer models (RTMs): the Discrete Anisotropic Radiative Transfer (DART) model for the generation of the BRF images, and Sun and Ranson's LIDAR waveform model for the large-footprint LIDAR data. These RTMs are based on the same basic physical principles and share common input parameters. We use the Zelig forest growth model to provide a synthetic but realistic data set to the two RTMs. The forest canopy biophysical variables being investigated include the maximal tree height, fractional cover, LAI and vertical crown extension. We assess the inversion of forest structural parameters when considering each model separately; we then investigate the accuracy of a coupled inversion. Keywords: Forest, Radiative Transfer Model, Inversion, Fusion, Multi-Angle, LAI, Fractional cover, Tree height, Canopy structure, Biomass, LIDAR, Forest growth model

  10. Hybrid 3D structures of ZnO nanoflowers and PdO nanoparticles as a highly selective methanol sensor.

    PubMed

    Acharyya, D; Huang, K Y; Chattopadhyay, P P; Ho, M S; Fecht, H-J; Bhattacharyya, P

    2016-05-10

    The present study concerns the enhancement of the methanol selectivity of three-dimensional (3D) nanoflowers (NFs) of ZnO by dispersing nickel oxide (NiO) and palladium oxide (PdO) nanoparticles on the surface of the nanoflowers to form localized hybrid nano-junctions. The nanoflowers were fabricated through a liquid phase deposition technique, and the modification was achieved by the addition of NiCl2 and PdCl2 solutions. In addition to the detailed structural (X-ray diffraction (XRD), energy dispersive spectroscopy (EDS), X-ray mapping, XPS) and morphological characterization (by field emission scanning electron microscopy (FESEM)), the existence of different defect states (viz. oxygen vacancies) was also confirmed by photoluminescence (PL) spectroscopy. The sensing properties of the pristine and metal oxide nanoparticle (NiO/PdO)-ZnO NF hybrid sensor structures towards different alcohol vapors (methanol, ethanol, 2-propanol) were investigated in the concentration range of 0.5-700 ppm at 100-350 °C. Methanol selectivity against other interfering species, viz. ethanol, 2-propanol, acetone, benzene, xylene and toluene, was also investigated. It was found that the PdO-ZnO NF hybrid system offered enhanced selectivity towards methanol at low temperature (150 °C) compared to the NiO-ZnO NF and pristine ZnO NF counterparts. The underlying mechanism for this improvement is discussed with reference to the respective energy band diagrams and the preferential dissociation of target species on such 3D hybrid structures. The corresponding improvement in transient characteristics has also been correlated with the proposed model. PMID:27048794

  11. Using Arduinos and 3D-printers to Build Research-grade Weather Stations and Environmental Sensors

    NASA Astrophysics Data System (ADS)

    Ham, J. M.

    2013-12-01

    Many plant, soil, and surface-boundary-layer processes in the geosphere are governed by the microclimate at the land-air interface. Environmental monitoring is needed at smaller scales and higher frequencies than provided by existing weather monitoring networks. The objective of this project was to design, prototype, and test a research-grade weather station based on open-source hardware/software and off-the-shelf components. The idea is that anyone could build these systems with only elementary skills in fabrication and electronics. The first prototypes included measurements of air temperature, humidity, pressure, global irradiance, wind speed, and wind direction. The best approach for measuring precipitation is still being investigated. The data acquisition system was designed around the Arduino microcontroller and included an LCD-based user interface, SD card data storage, and solar power. Sensors were sampled at 5 s intervals, and means, standard deviations, and maximums/minimums were stored at user-defined intervals (5, 30, or 60 min). Several of the sensor components were printed in plastic using a hobby-grade 3D printer (e.g., the RepRap Project). Both passive and aspirated radiation shields for measuring air temperature were printed in white Acrylonitrile Butadiene Styrene (ABS). A housing for measuring solar irradiance using a photodiode-based pyranometer was printed in opaque ABS. The prototype weather station was co-deployed with commercial research-grade instruments at an agricultural research unit near Fort Collins, Colorado, USA. Excellent agreement was found between the Arduino-based system and the commercial weather instruments. The technology was also used to support air quality research and automated air sampling. The next step is to incorporate remote access and station-to-station networking using Wi-Fi, cellular phone, and radio communications (e.g., XBee).
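
    The interval statistics described in this record (means, standard deviations, and maximums/minimums from 5 s samples) can be accumulated in constant memory, which matters on a microcontroller. A sketch of the bookkeeping in Python, using Welford's online algorithm (the class and the sample values are illustrative):

```python
import math

class IntervalStats:
    """Streaming mean/std/min/max over one storage interval."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations
        self.lo = math.inf
        self.hi = -math.inf

    def add(self, x):
        # Welford's update: numerically stable, no sample buffer needed.
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
        self.lo = min(self.lo, x)
        self.hi = max(self.hi, x)

    def std(self):
        return math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0

s = IntervalStats()
for t in [20.1, 20.4, 19.8, 20.0]:  # simulated 5 s air-temperature samples
    s.add(t)
print(s.mean, s.std(), s.lo, s.hi)
```

    At the end of each user-defined interval the four numbers are written to the SD card and the accumulator is reset, so memory use is independent of the sampling rate.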

  12. Navigation Doppler Lidar Sensor for Precision Altitude and Vector Velocity Measurements Flight Test Results

    NASA Technical Reports Server (NTRS)

    Pierrottet, Diego F.; Lockhard, George; Amzajerdian, Farzin; Petway, Larry B.; Barnes, Bruce; Hines, Glenn D.

    2011-01-01

    An all-fiber Navigation Doppler Lidar (NDL) system is under development at NASA Langley Research Center (LaRC) for precision descent and landing applications on planetary bodies. The sensor produces high-resolution line-of-sight range, altitude above ground, ground-relative attitude, and high-precision velocity vector measurements. Previous helicopter flight test results demonstrated the NDL measurement concepts, including measurement precision, accuracies, and operational range. This paper discusses the results obtained from a recent campaign to test the improved sensor hardware, and various signal processing algorithms applicable to real-time processing. The NDL was mounted in an instrumentation pod aboard an Erickson Air-Crane helicopter and flown over vegetation-free terrain. The sensor was one of several tested in this field test by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project.

  13. Navigation Doppler lidar sensor for precision altitude and vector velocity measurements: flight test results

    NASA Astrophysics Data System (ADS)

    Pierrottet, Diego; Amzajerdian, Farzin; Petway, Larry; Barnes, Bruce; Lockard, George; Hines, Glenn

    2011-06-01

    An all fiber Navigation Doppler Lidar (NDL) system is under development at NASA Langley Research Center (LaRC) for precision descent and landing applications on planetary bodies. The sensor produces high-resolution line of sight range, altitude above ground, ground relative attitude, and high precision velocity vector measurements. Previous helicopter flight test results demonstrated the NDL measurement concepts, including measurement precision, accuracies, and operational range. This paper discusses the results obtained from a recent campaign to test the improved sensor hardware, and various signal processing algorithms applicable to real-time processing. The NDL was mounted in an instrumentation pod aboard an Erickson Air-Crane helicopter and flown over various terrains. The sensor was one of several sensors tested in this field test by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project.

  14. Validity, Reliability, and Sensitivity of a 3D Vision Sensor-based Upper Extremity Reachable Workspace Evaluation in Neuromuscular Diseases

    PubMed Central

    Han, Jay J.; Kurillo, Gregorij; Abresch, R. Ted; Nicorici, Alina; Bajcsy, Ruzena

    2013-01-01

    Introduction: One of the major challenges in the neuromuscular field has been the lack of upper extremity outcome measures useful for clinical therapeutic efficacy studies. Using a vision-based sensor system and customized software, 3-dimensional (3D) upper extremity motion analysis can reconstruct a reachable workspace as a valid, reliable and sensitive outcome measure in various neuromuscular conditions where proximal upper extremity range of motion and function is impaired. Methods: Using a stereo-camera sensor system, the 3D reachable workspace envelope surface area, normalized to an individual’s arm length (relative surface area: RSA) to allow comparison between subjects, was determined for 20 healthy controls and 9 individuals with varying degrees of upper extremity dysfunction due to neuromuscular conditions. All study subjects were classified based on the Brooke upper extremity function scale. Right and left upper extremity reachable workspaces were determined based on three repeated measures. The RSAs for each frontal hemisphere quadrant and the total reachable workspace were determined with and without a loading condition (500 gram wrist weight). Data were analyzed to assess the developed system and the validity, reliability, and sensitivity to change of the reachable workspace outcome. Results: The mean total RSAs of the reachable workspace for the healthy controls and the individuals with NMD were significantly different (0.586 ± 0.085 and 0.299 ± 0.198, respectively; p<0.001). All quadrant RSAs were reduced for individuals with NMDs compared to the healthy controls, and these reductions correlated with reduced upper limb function as measured by Brooke grade. The upper quadrants of the reachable workspace (above shoulder level) demonstrated the greatest reductions in RSA among subjects with progressively severe upper extremity impairment. Evaluation of the developed outcomes system with the Bland-Altman method demonstrated narrow 95% limits of agreement (LOA

  15. Compact Optical Fiber 3D Shape Sensor Based on a Pair of Orthogonal Tilted Fiber Bragg Gratings.

    PubMed

    Feng, Dingyi; Zhou, Wenjun; Qiao, Xueguang; Albert, Jacques

    2015-01-01

    In this work, a compact fiber-optic 3D shape sensor consisting of two serially connected 2° tilted fiber Bragg gratings (TFBGs) is proposed, where the orientations of the grating planes of the two TFBGs are orthogonal. The measurement of the reflective transmission spectrum from the pair of TFBGs was implemented by Fresnel reflection from the cleaved fiber end. The two groups of cladding mode resonances in the reflection spectrum respond differentially to bending, which allows for the unique determination of the magnitude and orientation of the bend plane (i.e. with a ±180 degree uncertainty). Bending responses ranging from -0.33 to +0.21 dB/m⁻¹ (depending on orientation) are experimentally demonstrated for bending from 0 to 3.03 m⁻¹. In the third (axial) direction, the strain is obtained directly from the shift of the TFBG Bragg wavelengths, with a sensitivity of 1.06 pm/με. PMID:26617191
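
    As a quick worked example of the quoted axial sensitivity (1.06 pm/με), a measured Bragg wavelength shift converts to strain as follows (the function name is ours; only the sensitivity value comes from the abstract):

```python
# Axial strain sensitivity of the TFBG Bragg wavelength, from the abstract.
SENSITIVITY_PM_PER_MICROSTRAIN = 1.06

def strain_from_shift(shift_pm):
    """Axial strain (in microstrain) from a Bragg wavelength shift in pm."""
    return shift_pm / SENSITIVITY_PM_PER_MICROSTRAIN

# A 106 pm shift corresponds to roughly 100 microstrain.
print(round(strain_from_shift(106.0), 6))
```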

  16. Compact Optical Fiber 3D Shape Sensor Based on a Pair of Orthogonal Tilted Fiber Bragg Gratings

    PubMed Central

    Feng, Dingyi; Zhou, Wenjun; Qiao, Xueguang; Albert, Jacques

    2015-01-01

    In this work, a compact fiber-optic 3D shape sensor consisting of two serially connected 2° tilted fiber Bragg gratings (TFBGs) is proposed, where the orientations of the grating planes of the two TFBGs are orthogonal. The measurement of the reflective transmission spectrum from the pair of TFBGs was implemented by Fresnel reflection of the cleaved fiber end. The two groups of cladding mode resonances in the reflection spectrum respond differentially to bending, which allows for the unique determination of the magnitude and orientation of the bend plane (i.e. with a ± 180 degree uncertainty). Bending responses ranging from −0.33 to + 0.21 dB/m−1 (depending on orientation) are experimentally demonstrated with bending from 0 to 3.03 m−1. In the third (axial) direction, the strain is obtained directly by the shift of the TFBG Bragg wavelengths with a sensitivity of 1.06 pm/με. PMID:26617191

  17. 3D Radiative Transfer Effects in Multi-Angle/Multi-Spectral Radio-Polarimetric Signals from a Mixture of Clouds and Aerosols Viewed by a Non-Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Davis, Anthony B.; Garay, Michael J.; Xu, Feng; Qu, Zheng; Emde, Claudia

    2013-01-01

    When observing a spatially complex mix of aerosols and clouds in a single, relatively large field-of-view, nature entangles their signals non-linearly through polarized radiation transport processes that unfold in the 3D position and direction spaces. In contrast, any practical forward model in a retrieval algorithm will use only 1D vector radiative transfer (vRT) in a linear mixing technique. We assess the difference between the observed and predicted signals using synthetic data from a high-fidelity 3D vRT model, with clouds generated using a Large Eddy Simulation model and an aerosol climatology. We find that this difference is signal, not noise, for the Aerosol Polarimetry Sensor (APS), an instrument developed by NASA. Moreover, the worst-case scenario is also the most interesting case, namely when the aerosol burden is large and hence has the most impact on the cloud microphysics and dynamics. Based on our findings, we formulate a mitigation strategy for these unresolved cloud adjacency effects, assuming that some spatial information is available about the structure of the clouds at higher resolution from "context" cameras, as was planned for NASA's ill-fated Glory mission, which was to carry the APS but failed to reach orbit. Application to POLDER (POLarization and Directionality of Earth Reflectances) data from the period when PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) was in the A-train is briefly discussed.

  18. Portable digital lidar: a compact stand-off bioagent aerosol sensor

    NASA Astrophysics Data System (ADS)

    Prasad, Coorg R.; Lee, Hyo Sang; Hwang, In H.; Nam, Matthew; Mathur, Savyasachee L.; Ranganayakamma, Belthur

    2001-08-01

    Remote detection of biological warfare agents (BWA) is crucial for providing early warning to ensure maximum survivability of personnel on the battlefield and in other sensitive areas. Although the current generation of stand-off aerosol and fluorescence lidars has demonstrated stand-off detection and identification of BWA, their large size and cost make them difficult to use in the field. We have introduced a new eye-safe portable digital lidar (PDL) technique based on digital detection that achieves orders-of-magnitude reductions in size, cost and complexity over conventional lidar, while providing equal or better sensitivity and range. Excellent performance was obtained with two of our PDL sensors during two bio-aerosol measurement campaigns carried out at Dugway Proving Ground. In the JFT 4.5 (Oct 98) tests, a high aerosol sensitivity of 300 ppl of biosimulant particles at up to 3 km was demonstrated with an eye-safe-wavelength (523 nm) aerosol micro-PDL that utilized an 8-inch telescope, <10 μJ/pulse energy at 2.5 kHz, photon-counting digital detection and 2 s averaging. For the JBREWS DFT (June 99) tests, an eye-safe, two-wavelength (523 nm and 1.047 μm), horizontally scanned aerosol micro-PDL with the same 8-inch telescope was utilized. With this lidar, high sensitivity, preliminary differentiation between natural and unusual clouds, and the ability to track aerosol cloud locations and their wind speed and direction were also demonstrated. Lidar simulations of both PDL and conventional analog detection have been performed. Based on these model calculations and experimental results, an analysis and comparison of the inherent capabilities of the two types of systems is given.

  19. A fluorescence LIDAR sensor for hyper-spectral time-resolved remote sensing and mapping.

    PubMed

    Palombi, Lorenzo; Alderighi, Daniele; Cecchi, Giovanna; Raimondi, Valentina; Toci, Guido; Lognoli, David

    2013-06-17

    In this work we present a LIDAR sensor devised for the acquisition of time resolved laser induced fluorescence spectra. The gating time for the acquisition of the fluorescence spectra can be sequentially delayed in order to achieve fluorescence data that are resolved both in the spectral and temporal domains. The sensor can provide sub-nanometric spectral resolution and nanosecond time resolution. The sensor has also imaging capabilities by means of a computer-controlled motorized steering mirror featuring a biaxial angular scanning with 200 μrad angular resolution. The measurement can be repeated for each point of a geometric grid in order to collect a hyper-spectral time-resolved map of an extended target. PMID:23787661

  20. Sensor-enhanced 3D conformal cueing for safe and reliable HC operation in DVE in all flight phases

    NASA Astrophysics Data System (ADS)

    Münsterer, Thomas; Schafhitzel, Tobias; Strobel, Michael; Völschow, Philipp; Klasen, Stephanus; Eisenkeil, Ferdinand

    2014-06-01

    Low-level helicopter operations in Degraded Visual Environment (DVE) remain a major challenge and bear the risk of potentially fatal accidents. DVE encompasses all degradations of the pilot's visual perception, ranging from night conditions through rain and snowfall to fog, and even blinding sunlight or unstructured outside scenery. Each of these conditions reduces the pilot's ability to perceive visual cues in the outside world, degrading performance and ultimately increasing the risk of mission failure and accidents such as Controlled Flight Into Terrain (CFIT). This paper reports on a pilot assistance system aiming at giving back the essential visual cues to the pilot by displaying 3D conformal cues and symbols in a head-tracked Helmet Mounted Display (HMD), combined with a synthetic view on a head-down Multi-Function Display (MFD). The basis for the presented solution is a fusion of processed and classified high-resolution ladar data with database information, with the potential to also include other sensor data such as forward-looking or 360° radar data. Each flight phase and flight envelope requires different symbology sets and different possibilities for the pilots to select specific support functions. Several functionalities have been implemented and tested in a simulator as well as in flight. The symbology ranges from obstacle warning symbology via terrain enhancements through grids or ridge lines to different waypoint symbols supporting navigation. While some adaptations can be automated, it emerged as essential that symbology characteristics and completeness can be selected by the pilot to match the relevant flight envelope and outside visual conditions.

  1. Doppler Lidar Sensor for Precision Landing on the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Petway, Larry; Hines, Glenn; Barnes, Bruce; Pierrottet, Diego; Lockhard, George

    2012-01-01

    Landing mission concepts that are being developed for exploration of planetary bodies are increasingly ambitious in their implementations and objectives. Most of these missions require accurate position and velocity data during their descent phase in order to ensure safe soft landing at the pre-designated sites. To address this need, a Doppler lidar is being developed by NASA under the Autonomous Landing and Hazard Avoidance (ALHAT) project. This lidar sensor is a versatile instrument capable of providing precision velocity vectors, vehicle ground relative altitude, and attitude. The capabilities of this advanced technology have been demonstrated through two helicopter flight test campaigns conducted over a vegetation-free terrain in 2008 and 2010. Presently, a prototype version of this sensor is being assembled for integration into a rocket-powered terrestrial free-flyer vehicle. Operating in a closed loop with vehicle's guidance and navigation system, the viability of this advanced sensor for future landing missions will be demonstrated through a series of flight tests in 2012.
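    Recovering a full velocity vector from individual line-of-sight Doppler measurements is a small linear-algebra exercise: with three beams along known unit vectors u_i, the measured Doppler velocities d_i = u_i · v determine v exactly. The sketch below is a generic illustration of that principle (the beam geometry shown is hypothetical, not the actual sensor configuration):

```python
def los_to_velocity(units, dopplers):
    """Recover the 3D velocity vector v from three line-of-sight
    Doppler velocities d_i = u_i . v along known unit vectors u_i,
    by solving the 3x3 linear system with Cramer's rule."""
    (a, b, c) = units

    def det(r0, r1, r2):
        return (r0[0] * (r1[1] * r2[2] - r1[2] * r2[1])
              - r0[1] * (r1[0] * r2[2] - r1[2] * r2[0])
              + r0[2] * (r1[0] * r2[1] - r1[1] * r2[0]))

    D = det(a, b, c)

    def col_replaced(i):
        # Replace column i of the beam matrix with the measured Dopplers
        rows = []
        for row, d in zip((a, b, c), dopplers):
            r = list(row)
            r[i] = d
            rows.append(r)
        return rows

    return tuple(det(*col_replaced(i)) / D for i in range(3))
```

    The beams must not be coplanar (determinant nonzero), which is why multi-beam Doppler lidars point their telescopes along deliberately diverse directions.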

  2. Overview of the first Multicenter Airborne Coherent Atmospheric Wind Sensor (MACAWS) experiment: conversion of a ground-based lidar for airborne applications

    NASA Astrophysics Data System (ADS)

    Howell, James N.; Hardesty, R. Michael; Rothermel, Jeffrey; Menzies, Robert T.

    1996-11-01

    The first Multicenter Airborne Coherent Atmospheric Wind Sensor (MACAWS) field experiment demonstrated an airborne high energy TEA CO2 Doppler lidar system for measurement of atmospheric wind fields and aerosol structure. The system was deployed on the NASA DC-8 during September 1995 in a series of checkout flights to observe several important atmospheric phenomena, including upper level winds in a Pacific hurricane, marine boundary layer winds, cirrus cloud properties, and land-sea breeze structure. The instrument, with its capability to measure 3D winds and backscatter fields, promises to be a valuable tool for climate and global change, severe weather, and air quality research. In this paper, we describe the airborne instrument, assess its performance, discuss future improvements, and show some preliminary results from the September experiments.

  3. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increases. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at scan edges, for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. Future
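    The qualitative trend reported above (errors growing with scan angle) follows from first-order error propagation through the ranging geometry. A minimal sketch, considering only range and pointing errors in the vertical coordinate z = r·cos(θ) of a single return; the full sub-system model in the paper also includes GPS, IMU and terrain terms, and the sigma values below are assumed for illustration:

```python
import math

def vertical_error(range_m, scan_angle_deg, sigma_range_m, sigma_angle_rad):
    """First-order propagation of ranging and pointing errors into the
    vertical coordinate z = r*cos(theta) of a single lidar return."""
    th = math.radians(scan_angle_deg)
    dz_dr = math.cos(th)               # sensitivity of z to range error
    dz_dth = range_m * math.sin(th)    # sensitivity of z to angle error
    return math.hypot(dz_dr * sigma_range_m, dz_dth * sigma_angle_rad)

# At nadir only the range error contributes; at the scan edge the
# pointing error is levered by the slant range and dominates.
nadir = vertical_error(1200.0, 0.0, 0.03, 1e-4)
edge = vertical_error(1242.0, 15.0, 0.03, 1e-4)
```

    The same machinery explains the large errors on steep glacial slopes: a horizontal position error projected onto a steep terrain gradient becomes an additional, often dominant, vertical error term.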

  4. Using genetic algorithms to optimize an active sensor network on a stiffened aerospace panel with 3D scanning laser vibrometry data

    NASA Astrophysics Data System (ADS)

    Marks, R.; Clarke, A.; Featherston, C.; Kawashita, L.; Paget, C.; Pullin, R.

    2015-07-01

    With the increasing complexity of aircraft structures and materials there is an essential need to continually monitor the structure for damage. This also drives the requirement for optimizing the location of sensors for damage detection, to ensure full damage detection coverage of the structure whilst minimizing the number of sensors required, hence reducing costs, weight and data processing. An experiment was carried out to investigate the optimal sensor locations of an active sensor network for detecting adhesive disbonds of a stiffened panel. A piezoelectric transducer was coupled to two different stiffened aluminium panels; one healthy and one with a 25.4 mm long disbond. The transducer was positioned at five individual locations to assess the effectiveness of damage detection at different transmission locations. One excitation frequency of 100 kHz was used for this study. The panels were scanned with a 3D scanning laser vibrometer which represented a network of 'ideal' receiving transducers. The responses measured on the disbonded panel were cross-correlated with those measured on the healthy panel at a large number of potential sensor locations. This generated a cost surface which a genetic algorithm could interrogate in order to find the optimal sensor locations for a given size of sensor network. Probabilistic techniques were used to consider multiple disbond location scenarios, in order to optimise the sensor network for maximum probability of detection across a range of disbond locations.
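    The optimization loop described above — a precomputed per-location detection score interrogated by a genetic algorithm to pick a fixed-size sensor set — can be sketched generically. This is a minimal illustration, not the authors' implementation; the population size, operators and scores are assumed:

```python
import random

def ga_sensor_placement(cost, k, pop_size=30, gens=60, seed=0):
    """Pick k sensor locations (indices into a per-location detection
    score array `cost`) maximizing total score, via a tiny GA."""
    rng = random.Random(seed)
    n = len(cost)

    def fitness(g):
        return sum(cost[i] for i in set(g))   # dedupe repeated sites

    pop = [rng.sample(range(n), k) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:4]                     # elitism: keep the best
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)   # select among the fittest
            cut = rng.randrange(1, k) if k > 1 else 0
            child = p1[:cut] + [i for i in p2 if i not in p1[:cut]]
            child = (child + rng.sample(range(n), k))[:k]  # repair length
            if rng.random() < 0.2:             # mutation
                child[rng.randrange(k)] = rng.randrange(n)
            next_pop.append(child)
        pop = next_pop
    best = max(pop, key=fitness)
    return sorted(set(best)), fitness(best)
```

    In the experiment, the vibrometer-derived cross-correlation cost surface plays the role of `cost`; a probability-of-detection surface averaged over disbond scenarios slots in the same way.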

  5. Efficient workflows for 3D building full-color model reconstruction using LIDAR long-range laser and image-based modeling techniques

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong

    2004-12-01

    Two efficient workflows are developed for the reconstruction of a 3D full-color building model. One uses a point-wise sensing device to sample an unknown object densely and attaches color textures from a digital camera separately. The other uses an image-based approach to reconstruct the model with color texture attached automatically. The point-wise sensing device reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point cloud data are then glued together using a common face between two consecutive views. Typical overlapping-mesh removal and coarsening procedures are applied to generate a unified 3D mesh shell structure. A post-processing step then combines the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv-mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in a group are then adjusted using the uv map as guidance. The final assembled image is then glued back onto the 3D mesh to present a fully colored building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view planning algorithm is developed to guide the photo-taking procedure. This algorithm generates a minimum set of view angles; the set of pictures taken at these view angles guarantees that each model face appears in at least two of the pictures and no more than three. The 3D model can then be reconstructed with a minimum amount of labor spent in correlating picture pairs. The finished model is compared with the original object in both topological and dimensional aspects. All the test cases show the exact same topology and
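    The first step of the indirect uv-mapping procedure — partitioning faces into groups of shared normal direction — is a simple clustering pass. A minimal sketch, assuming faces are given as a mapping from face id to unit normal (the data layout and tolerance are assumptions, not the paper's):

```python
def group_faces_by_normal(faces, tol=1e-3):
    """Group mesh faces whose unit normals agree within tol, as a
    precursor to assigning one uv chart per coplanar group."""
    groups = []
    for fid, normal in faces.items():
        for g in groups:
            ref = g["normal"]
            # Compare normals by squared Euclidean distance
            if sum((a - b) ** 2 for a, b in zip(normal, ref)) < tol ** 2:
                g["faces"].append(fid)
                break
        else:
            groups.append({"normal": normal, "faces": [fid]})
    return groups
```

    Each resulting group can then receive a single image patch, so that texture adjustment happens per planar region rather than per face.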

  6. Efficient workflows for 3D building full-color model reconstruction using LIDAR long-range laser and image-based modeling techniques

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong

    2005-01-01

    Two efficient workflows are developed for the reconstruction of a 3D full-color building model. One uses a point-wise sensing device to sample an unknown object densely and attaches color textures from a digital camera separately. The other uses an image-based approach to reconstruct the model with color texture attached automatically. The point-wise sensing device reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point cloud data are then glued together using a common face between two consecutive views. Typical overlapping-mesh removal and coarsening procedures are applied to generate a unified 3D mesh shell structure. A post-processing step then combines the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv-mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in a group are then adjusted using the uv map as guidance. The final assembled image is then glued back onto the 3D mesh to present a fully colored building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view planning algorithm is developed to guide the photo-taking procedure. This algorithm generates a minimum set of view angles; the set of pictures taken at these view angles guarantees that each model face appears in at least two of the pictures and no more than three. The 3D model can then be reconstructed with a minimum amount of labor spent in correlating picture pairs. The finished model is compared with the original object in both topological and dimensional aspects. All the test cases show the exact same topology and

  7. A Distributed Fiber Optic Sensor Network for Online 3-D Temperature and Neutron Fluence Mapping in a VHTR Environment

    SciTech Connect

    Tsvetkov, Pavel; Dickerson, Bryan; French, Joseph; McEachern, Donald; Ougouag, Abderrafi

    2014-04-30

    Robust sensing technologies that allow real-time 3D in-core performance monitoring are of paramount importance for established LWRs, enhancing their reliability and annual availability and thereby their economic competitiveness through predictive assessment of in-core conditions.

  8. Study of texture stitching in 3D modeling of lidar point cloud based on per-pixel linear interpolation along loop line buffer

    NASA Astrophysics Data System (ADS)

    Xu, Jianxin; Liang, Hong

    2013-07-01

    Terrestrial laser scanning creates a point cloud composed of thousands or millions of 3D points. Through pre-processing, TIN generation and texture mapping, a 3D model of a real object is obtained. When the object is too large, it is separated into several parts. This paper focuses on the problem of uneven gray levels where two adjacent textures meet. A new algorithm is presented: per-pixel linear interpolation along a loop-line buffer. The experimental data derive from the point cloud of a stone lion situated in front of the west gate of Henan Polytechnic University. The modeling flow has three parts: first, the large object is separated into two parts; then each part is modeled; finally, the whole 3D model of the stone lion is composed from the two part models. When the two part models are combined, there is an obvious fissure line in the overlapping section of the two adjacent textures. Some researchers decrease the brightness values of all pixels in the two adjacent textures with various algorithms; however, some of these algorithms are ineffective and the fissure line persists. The algorithm in this paper addresses the uneven gray levels of two adjacent textures: the fissure line in the overlapping textures is eliminated and the gray transition in the overlapping section becomes smoother.
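    The core idea of per-pixel linear interpolation across an overlap can be sketched on a single overlap strip: each output pixel is a weighted average of the two textures, with the weight ramping linearly from one side to the other so the seam disappears. This is a generic illustration of linear texture feathering, not the paper's loop-line-buffer implementation:

```python
def blend_overlap(tex_a, tex_b):
    """Per-pixel linear interpolation across an overlap strip: the
    weight ramps from 1 (pure A) at the left edge to 0 (pure B) at
    the right, removing the visible seam between adjacent textures."""
    width = len(tex_a[0])
    out = []
    for row_a, row_b in zip(tex_a, tex_b):
        out_row = []
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            w = 1.0 - x / max(width - 1, 1)   # linear ramp along the strip
            out_row.append(w * a + (1.0 - w) * b)
        out.append(out_row)
    return out
```

    Because the weights sum to one at every pixel, the brightness of each texture is preserved at its own side of the strip, unlike approaches that globally darken both textures.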

  9. Tropospheric Airborne Meteorological Data Reporting (TAMDAR) Sensor Validation and Verification on National Oceanographic and Atmospheric Administration (NOAA) Lockheed WP-3D Aircraft

    NASA Technical Reports Server (NTRS)

    Tsoucalas, George; Daniels, Taumi S.; Zysko, Jan; Anderson, Mark V.; Mulally, Daniel J.

    2010-01-01

    As part of the National Aeronautics and Space Administration's Aviation Safety and Security Program, the Tropospheric Airborne Meteorological Data Reporting project (TAMDAR) developed a low-cost sensor for aircraft flying in the lower troposphere. This activity was a joint effort with support from Federal Aviation Administration, National Oceanic and Atmospheric Administration, and industry. This paper reports the TAMDAR sensor performance validation and verification, as flown on board NOAA Lockheed WP-3D aircraft. These flight tests were conducted to assess the performance of the TAMDAR sensor for measurements of temperature, relative humidity, and wind parameters. The ultimate goal was to develop a small low-cost sensor, collect useful meteorological data, downlink the data in near real time, and use the data to improve weather forecasts. The envisioned system will initially be used on regional and package carrier aircraft. The ultimate users of the data are National Centers for Environmental Prediction forecast modelers. Other users include air traffic controllers, flight service stations, and airline weather centers. NASA worked with an industry partner to develop the sensor. Prototype sensors were subjected to numerous tests in ground and flight facilities. As a result of these earlier tests, many design improvements were made to the sensor. The results of tests on a final version of the sensor are the subject of this report. The sensor is capable of measuring temperature, relative humidity, pressure, and icing. It can compute pressure altitude, indicated air speed, true air speed, ice presence, wind speed and direction, and eddy dissipation rate. Summary results from the flight test are presented along with corroborative data from aircraft instruments.
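    The wind computation mentioned above is the classical wind-triangle calculation: the wind vector is the difference between the aircraft's ground velocity (from GPS) and its air velocity (true airspeed along heading). A minimal 2D sketch, not the TAMDAR algorithm itself; angle conventions (degrees clockwise from north, wind direction reported as the direction the wind blows from) are the usual aviation ones:

```python
import math

def wind_from_vectors(gs_knots, track_deg, tas_knots, heading_deg):
    """Wind = ground velocity - true-airspeed velocity (2D).
    Angles are degrees clockwise from north; the returned direction
    is where the wind blows FROM, per aviation convention."""
    def to_xy(speed, ang_deg):
        a = math.radians(ang_deg)
        return speed * math.sin(a), speed * math.cos(a)   # (east, north)

    ge, gn = to_xy(gs_knots, track_deg)
    ae, an = to_xy(tas_knots, heading_deg)
    we, wn = ge - ae, gn - an
    speed = math.hypot(we, wn)
    blowing_to = math.degrees(math.atan2(we, wn)) % 360.0
    return speed, (blowing_to + 180.0) % 360.0
```

    The accuracy of the derived wind therefore depends jointly on the airspeed, heading, and GPS track/groundspeed measurements, which is why the flight tests compare against the WP-3D's own instrumentation.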

  10. Automatic Construction of 3D Basic-Semantic Models of Inhabited Interiors Using Laser Scanners and RFID Sensors

    PubMed Central

    Valero, Enrique; Adan, Antonio; Cerrada, Carlos

    2012-01-01

    This paper is focused on the automatic construction of 3D basic-semantic models of inhabited interiors using laser scanners with the help of RFID technologies. This is an innovative approach, in whose field scarce publications exist. The general strategy consists of carrying out a selective and sequential segmentation from the cloud of points by means of different algorithms which depend on the information that the RFID tags provide. The identification of basic elements of the scene, such as walls, floor, ceiling, windows, doors, tables, chairs and cabinets, and the positioning of their corresponding models can then be calculated. The fusion of both technologies thus allows a simplified 3D semantic indoor model to be obtained. This method has been tested in real scenes under difficult clutter and occlusion conditions, and has yielded promising results. PMID:22778609

  11. Coherent lidar airborne wind sensor II: flight-test results at 2 and 10 μm.

    PubMed

    Targ, R; Steakley, B C; Hawley, J G; Ames, L L; Forney, P; Swanson, D; Stone, R; Otto, R G; Zarifis, V; Brockman, P; Calloway, R S; Klein, S H; Robinson, P A

    1996-12-20

    The use of airborne laser radar (lidar) to measure wind velocities and to detect turbulence in front of an aircraft in real time can significantly increase fuel efficiency, flight safety, and terminal area capacity. We describe the flight-test results for two coherent lidar airborne shear sensor (CLASS) systems and discuss their agreement with our theoretical simulations. The 10.6-μm CO2 system (CLASS-10) is a flying brassboard; the 2.02-μm Tm:YAG solid-state system (CLASS-2) is configured in a rugged, lightweight, high-performance package. Both lidars have shown a wind measurement accuracy of better than 1 m/s. PMID:21151317
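    A coherent lidar senses wind through the round-trip Doppler shift f = 2v/λ, so the operating wavelength directly sets the frequency shift a given wind speed produces. A one-line sketch makes the 2-μm vs 10-μm comparison concrete (illustrative only):

```python
def doppler_shift_hz(v_mps, wavelength_m):
    """Round-trip Doppler shift for a coherent lidar: f = 2*v/lambda."""
    return 2.0 * v_mps / wavelength_m

# 1 m/s of line-of-sight wind:
f_2um = doppler_shift_hz(1.0, 2.02e-6)    # ~1 MHz at 2.02 um
f_10um = doppler_shift_hz(1.0, 10.6e-6)   # ~0.19 MHz at 10.6 um
```

    The shorter 2.02-μm wavelength thus yields roughly five times the Doppler shift per unit wind speed, one reason solid-state 2-μm systems can match the measurement accuracy of the larger CO2 lidar in a compact package.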

  12. Lidar Sensor Performance in Closed-Loop Flight Testing of the Morpheus Rocket-Propelled Lander to a Lunar-Like Hazard Field

    NASA Technical Reports Server (NTRS)

    Roback, V. Eric; Pierrottet, Diego F.; Amzajerdian, Farzin; Barnes, Bruce W.; Bulyshev, Alexander E.; Hines, Glenn D.; Petway, Larry B.; Brewster, Paul F.; Kempton, Kevin S.

    2015-01-01

    For the first time, a suite of three lidar sensors has been used in flight to scan a lunar-like hazard field, identify a safe landing site, and, in concert with an experimental Guidance, Navigation, and Control (GN&C) system, help to guide the Morpheus autonomous, rocket-propelled, free-flying lander to that safe site on the hazard field. The lidar sensors and GN&C system are part of the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project, which has been seeking to develop a system capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The 3-D imaging Flash Lidar is a second-generation, compact, real-time, air-cooled instrument developed from a number of components from industry and NASA and is used as part of the ALHAT Hazard Detection System (HDS) to scan the hazard field and build a 3-D Digital Elevation Map (DEM) in near-real time for identifying safe sites. The Flash Lidar is capable of identifying a 30 cm hazard from a slant range of 1 km with its 8 cm range precision (1-σ). The Flash Lidar is also used in Hazard Relative Navigation (HRN) to provide position updates down to a 250 m slant range to the ALHAT navigation filter as it guides Morpheus to the safe site. The Navigation Doppler Lidar (NDL) system has been developed within NASA to provide velocity measurements with an accuracy of 0.2 cm/sec and range measurements with an accuracy of 17 cm, both from a maximum range of 2,200 m to a minimum range of several meters above the ground. The NDL's measurements are fed into the ALHAT navigation filter to provide lander guidance to the safe site. The Laser Altimeter (LA), also developed within NASA, provides range measurements with an accuracy of 5 cm from a maximum operational range of 30 km down to 1 m and, being a separate sensor from the Flash Lidar, can provide range along a separate vector. The LA measurements are also fed
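    Hazard detection on the DEM ultimately reduces to checking each candidate landing patch for elevation features larger than the lander can tolerate. The sketch below is a deliberately simplified stand-in for the HDS algorithms (a single roughness test against an assumed 30 cm hazard height, ignoring slope and lander geometry):

```python
def site_is_safe(dem_patch, hazard_height_m=0.3):
    """Flag a candidate landing patch as safe if no cell deviates
    from the patch mean elevation by more than the hazard height
    (a crude roughness test; real HDS logic also evaluates slope)."""
    cells = [h for row in dem_patch for h in row]
    mean = sum(cells) / len(cells)
    return all(abs(h - mean) <= hazard_height_m for h in cells)
```

    The 8 cm (1-σ) range precision quoted above matters here: measurement noise must be comfortably below the 30 cm hazard threshold, or flat terrain would be misclassified as rough.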

  13. System performance and modeling of a bioaerosol detection lidar sensor utilizing polarization diversity

    NASA Astrophysics Data System (ADS)

    Glennon, John J.; Nichols, Terry; Gatt, Phillip; Baynard, Tahllee; Marquardt, John H.; Vanderbeek, Richard G.

    2009-05-01

    The weaponization and dissemination of biological warfare agents (BWA) constitute a high threat to civilians and military personnel. An aerosol release, disseminated from a single point, can directly affect large areas and many people in a short time. Because of this threat, real-time standoff detection of BWAs is a key requirement for national and military security. BWAs are a general class of material that can refer to spores, bacteria, toxins, or viruses. These bioaerosols have a tremendous size, shape, and chemical diversity that, at present, are not well characterized [1]. Lockheed Martin Coherent Technologies (LMCT) has developed a standoff lidar sensor with high sensitivity and robust discrimination capabilities with a size and ruggedness appropriate for military use. This technology utilizes multiwavelength backscatter polarization diversity to discriminate between biological threats and naturally occurring interferents such as dust, smoke, and pollen. The optical design and hardware selection of the system have been driven by performance modeling, leading to an understanding of measured system sensitivity. Here we briefly discuss the challenges of standoff bioaerosol discrimination and the approach used by LMCT to overcome these challenges. We review the radiometric calculations involved in modeling direct detection of a distributed aerosol target and methods for accurately estimating wavelength-dependent plume backscatter coefficients. Key model parameters and their validation are discussed and outlined. Metrics for sensor sensitivity are defined, modeled, and compared directly to data taken at Dugway Proving Ground, UT in 2008. Sensor sensitivity is modeled to predict performance changes between day and night operation and in various challenging environmental conditions.

  14. Neutron measurements with ultra-thin 3D silicon sensors in a radiotherapy treatment room using a Siemens PRIMUS linac

    NASA Astrophysics Data System (ADS)

    Guardiola, C.; Gómez, F.; Fleta, C.; Rodríguez, J.; Quirion, D.; Pellegrini, G.; Lousa, A.; Martínez-de-Olcoz, L.; Pombar, M.; Lozano, M.

    2013-05-01

    The accurate detection and dosimetry of neutrons in mixed and pulsed radiation fields is a demanding instrumental issue of great interest to both the industrial and medical communities. In recent studies of neutron contamination around medical linacs, there is a growing concern about the secondary cancer risk for radiotherapy patients undergoing treatment in photon modalities at energies greater than 6 MV. In this work we present a promising alternative to standard detectors: an active method to measure neutrons around a medical linac using a novel ultra-thin silicon detector with 3D electrodes adapted for neutron detection. The active volume of this planar device is only 10 µm thick, allowing a high gamma rejection, which is necessary to discriminate the neutron signal in the radiotherapy peripheral radiation field with its high gamma background. Different tests have been performed in a clinical facility using a Siemens PRIMUS linac at 6 and 15 MV. The results show a thermal neutron detection efficiency of around 2% and a high gamma rejection factor.

  15. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured by the sensor's receiver, and the return signal, together with the position and orientation of the sensor, is recorded. These recorded data are solved with GNSS/IMU data in post-processing, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne lidar sensor, the Optech Titan system collects point cloud data from three channels: 532 nm visible (green), 1064 nm near infrared (NIR) and 1550 nm intermediate infrared (IR). It has become a new data source for 3D land cover classification. The paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. First, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Second, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy is assessed by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types; over 90% overall accuracy is achieved using multispectral lidar point clouds for 3D land cover classification.
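    With intensities available at several wavelengths, normalized band-ratio indices become usable as classification features directly from the lidar data. As a hedged illustration (a pseudo-NDVI from the NIR and green channel intensities; the specific indices and thresholds used in the paper's nine-class scheme are not reproduced here):

```python
def pseudo_ndvi(nir_intensity, green_intensity):
    """Normalized difference of two channel intensities,
    (NIR - green) / (NIR + green); vegetation, which reflects
    strongly in the NIR, yields high positive values."""
    total = nir_intensity + green_intensity
    return (nir_intensity - green_intensity) / total if total else 0.0
```

    Combining such spectral indices with per-object height statistics (e.g. separating tall vegetation from grass) is what lets a single multispectral lidar dataset support land cover classification without imagery.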

  16. Tooteko: a Case Study of Augmented Reality for AN Accessible Cultural Heritage. Digitization, 3d Printing and Sensors for AN Audio-Tactile Experience

    NASA Astrophysics Data System (ADS)

    D'Agnano, F.; Balletti, C.; Guerra, F.; Vernier, P.

    2015-02-01

    Tooteko is a smart ring that allows the user to navigate any 3D surface with the fingertips and receive in return audio content relevant to the part of the surface being touched at that moment. Tooteko can be applied to any tactile surface, object or sheet. In a more specific domain, it aims to make traditional art venues accessible to the blind, while supporting the reading of the work for everyone through the recovery of the tactile dimension, in order to facilitate an experience of contact with art that is not only "under glass." The system is made of three elements: a high-tech ring, a tactile surface tagged with NFC sensors, and an app for tablet or smartphone. The ring detects and reads the NFC tags and, thanks to the Tooteko app, communicates wirelessly with the smart device. During tactile navigation of the surface, when the finger reaches a hotspot, the ring identifies the NFC tag and activates, through the app, the audio track related to that specific hotspot. Thus a relevant audio content corresponds to each hotspot. The production process of the tactile surfaces involves scanning, digitization of data and 3D printing. The first experiment was modelled on the facade of the church of San Michele in Isola, built by Mauro Codussi in the late fifteenth century, which marks the beginning of the Renaissance in Venice. Owing to the absence of recent documentation on the church, the Correr Museum asked the Laboratorio di Fotogrammetria to provide it, with the aim of setting up an exhibition about the order of the Camaldolesi, owners of the San Michele island and church. The Laboratorio surveyed the facade through laser scanning and UAV photogrammetry. The point clouds were the starting point for prototyping and 3D printing on different supports. The idea of integrating a 3D-printed tactile surface with sensors was born as a final thesis project at the Postgraduate Mastercourse in Digital

  17. Height reconstruction techniques for synthetic aperture lidar systems

    NASA Technical Reports Server (NTRS)

    Chen, Curtis W.; Hensley, Scott

    2003-01-01

    The data-processing techniques and acquisition modes of a synthetic aperture lidar (SAL) instrument operating at optical wavelengths are closely related to the analogous modes of a synthetic aperture radar (SAR) instrument operating at microwave frequencies. It is consequently natural to explore the applicability of SAR processing techniques to SAL sensors. In this paper, we examine the feasibility of adopting SAR height-reconstruction techniques with SAL sensors to obtain high-resolution 3-D imagery at optical wavelengths.

  18. Real-time 3D change detection of IEDs

    NASA Astrophysics Data System (ADS)

    Wathen, Mitch; Link, Norah; Iles, Peter; Jinkerson, John; Mrstik, Paul; Kusevic, Kresimir; Kovats, David

    2012-06-01

    Road-side bombs are a real and continuing threat to soldiers in theater. CAE USA recently developed a prototype Volume-based Intelligence Surveillance Reconnaissance (VISR) sensor platform for IED detection. This vehicle-mounted prototype sensor system uses a high data rate LiDAR (1.33 million range measurements per second) to generate a 3D mapping of roadways. The mapped data is used as a reference to generate real-time change detection on future trips over the same roadways. The prototype VISR system is briefly described. The focus of this paper is the methodology used to process the 3D LiDAR data, in real time, to detect small changes on and near the roadway ahead of a vehicle traveling at moderate speeds, with sufficient warning to stop the vehicle at a safe distance from the threat. The system relies on accurate navigation equipment to geo-reference the reference run and the change-detection run. Since it was recognized early in the project that detection of small changes could not be achieved with accurate navigation solutions alone, a scene alignment algorithm was developed to register the reference run with the change-detection run prior to applying the change detection algorithm. Good success was achieved in simultaneous real-time processing of scene alignment plus change detection.
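    Once the two runs are registered, change detection reduces to asking which points in the current scan have no nearby counterpart in the reference map. A minimal brute-force sketch of that final step (the alignment itself, and the spatial indexing needed at 1.33 million points per second, are out of scope here; the threshold is an assumed value):

```python
def changed_points(reference, current, threshold_m=0.1):
    """Return points from the current run that lie farther than the
    threshold from every reference point (brute-force nearest-neighbor;
    a real-time system would use a spatial index such as a k-d tree)."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    t2 = threshold_m ** 2
    return [p for p in current
            if all(d2(p, q) > t2 for q in reference)]
```

    The threshold encodes the residual registration error: set it too low and navigation jitter floods the output with false changes, too high and small threat-sized objects are missed, which is why the scene alignment step is essential.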

  19. 3D integration approaches for MEMS and CMOS sensors based on a Cu through-silicon-via technology and wafer level bonding

    NASA Astrophysics Data System (ADS)

    Hofmann, L.; Dempwolf, S.; Reuter, D.; Ecke, R.; Gottfried, K.; Schulz, S. E.; Knechtel, R.; Geßner, T.

    2015-05-01

    Technologies for 3D integration are described in this paper with respect to devices that have to retain a specific minimum wafer thickness for handling purposes (CMOS) and for the integrity of mechanical elements (MEMS). This implies Through-Silicon Vias (TSVs) with large dimensions and high aspect ratios (HAR). Moreover, as a main objective, the TSV technology sought had to be universal and scalable, with the designated utilization in a MEMS/CMOS foundry. Two TSV approaches are investigated and discussed, in which the TSVs were fabricated either before or after wafer thinning. One distinctive feature is an incomplete TSV Cu-filling, which avoids long processing and complex process control, while minimizing the thermomechanical stress between Cu and Si and related adverse effects in the device. However, the incomplete filling also brings various challenges regarding process integration. A method based on pattern plating is described, in which TSVs are metalized at the same time as the redistribution layer and which eliminates the need for additional planarization and patterning steps. For MEMS, the realization of a protective hermetically sealed capping is crucial, which is addressed in this paper by glass frit wafer level bonding and is discussed for hermetic sealing of MEMS inertial sensors. The TSV based 3D integration technologies are demonstrated on a CMOS-like test vehicle and on a MEMS device fabricated in Air Gap Insulated Microstructure (AIM) technology.

  20. 2D and 3D soil moisture imaging using a sensor-based platform moving inside a subsurface network of pipes

    NASA Astrophysics Data System (ADS)

    Gravalos, I.; Moshou, D.; Loutridis, S.; Gialamas, Th.; Kateris, D.; Bompolas, E.; Tsiropoulos, Z.; Xyradakis, P.; Fountas, S.

    2013-08-01

    In this study, a prototype sensor-based platform that moves inside a subsurface network of pipes to monitor soil moisture content is presented. It comprises a mobile platform, a modified commercial soil moisture sensor (Diviner 2000), a network of subsurface polyvinylchloride (PVC) access pipes, driving hardware and image processing software. The software allows the composition of two-dimensional (2D) or three-dimensional (3D) images with high accuracy and at a large scale. The 3D soil moisture images are created from 2D slices for better illustration of the soil moisture variability. Three case studies of varying soil moisture content using an experimental soil tank were examined. In the first case study, the irrigation water was applied uniformly over the entire tank surface. In the second and third case studies, the irrigation water was applied uniformly only on the surface of the intermediate and last parts of the soil tank, respectively. The processed images give a detailed description of the soil moisture distribution of a layer 15 cm below the soil surface in the tank. In all case studies investigated, the distribution of soil moisture was characterized by significant variability between poorly drained and well-drained regions of the soil tank. A very poorly drained region was located in the middle of the soil tank, while well-drained areas were located to the southwest and northeast. Knowledge of the spatial and temporal distribution of soil moisture is a valuable tool for proper management of crop irrigation.

  1. Turbulent CO2 Flux Measurements by Lidar: Length Scales, Results and Comparison with In-Situ Sensors

    NASA Technical Reports Server (NTRS)

    Gilbert, Fabien; Koch, Grady J.; Beyon, Jeffrey Y.; Hilton, Timothy W.; Davis, Kenneth J.; Andrews, Arlyn; Ismail, Syed; Singh, Upendra N.

    2009-01-01

    The vertical CO2 flux in the atmospheric boundary layer (ABL) is investigated with a Doppler differential absorption lidar (DIAL). The instrument was operated next to the WLEF instrumented tall tower in Park Falls, Wisconsin during three days and nights in June 2007. Profiles of turbulent CO2 mixing ratio and vertical velocity fluctuations are measured by in-situ sensors and Doppler DIAL. Time and space scales of turbulence are precisely defined in the ABL. The eddy-covariance method is applied to calculate turbulent CO2 flux both by lidar and in-situ sensors. We show preliminary mean lidar CO2 flux measurements in the ABL with a time and space resolution of 6 h and 1500 m respectively. The flux instrumental errors decrease linearly with the standard deviation of the CO2 data, as expected. Although turbulent fluctuations of CO2 are negligible with respect to the mean (0.1 %), we show that the eddy-covariance method can provide 2-h, 150-m range resolved CO2 flux estimates as long as the CO2 mixing ratio instrumental error is no greater than 10 ppm and the vertical velocity error is lower than the natural fluctuations over a time resolution of 10 s.
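
    For one range gate and averaging period, the eddy-covariance calculation mentioned above reduces to the covariance of the fluctuating parts of vertical velocity and CO2 mixing ratio. A minimal sketch (the function name is ours, and detrending beyond mean removal is omitted):

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Kinematic turbulent flux F = <w'c'>: the covariance of
    vertical-velocity and CO2 mixing-ratio fluctuations, obtained
    by removing the averaging-period means before multiplying."""
    w, c = np.asarray(w, float), np.asarray(c, float)
    return np.mean((w - w.mean()) * (c - c.mean()))
```

    Correlated fluctuations of w and c yield a non-zero flux; uncorrelated series average to zero, which is why the instrumental errors quoted above must stay below the natural fluctuation level for the covariance to be detectable.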

  2. Tracking Efficiency And Charge Sharing of 3D Silicon Sensors at Different Angles in a 1.4T Magnetic Field

    SciTech Connect

    Gjersdal, H.; Bolle, E.; Borri, M.; Da Via, C.; Dorholt, O.; Fazio, S.; Grenier, P.; Grinstein, S.; Hansson, P.; Hasi, J.; Hugging, F.; Jackson, P.; Kenney, C.; Kocian, M.; La Rosa, A.; Mastroberardino, A.; Nordahl, P.; Rivero, F.; Rohne, O.; Sandaker, H.; Sjobaek, K.; /Oslo U. /Prague, Tech. U. /SLAC /Bonn U. /SUNY, Stony Brook /Bonn U. /SLAC

    2012-05-07

    A 3D silicon sensor fabricated at Stanford with electrodes penetrating the entire silicon wafer and with active edges was tested in a 1.4 T magnetic field with a 180 GeV/c pion beam at the CERN SPS in May 2009. The device under test was bump-bonded to the ATLAS pixel FE-I3 readout electronics chip. Three readout electrodes were used to cover the 400 µm long pixel side, resulting in a p-n inter-electrode distance of ~71 µm. Its behavior was compared with that of a planar sensor of the type presently installed in the ATLAS inner tracker. Time over threshold, charge sharing and tracking efficiency data were collected at 0° and 15° angles, with and without magnetic field. The latter is the angular configuration expected for the modules of the Insertable B-Layer (IBL) currently under study for the LHC phase 1 upgrade expected in 2014.

  3. Lidar-Equipped UAV for Building Information Modelling

    NASA Astrophysics Data System (ADS)

    Roca, D.; Armesto, J.; Lagüela, S.; Díaz-Vilariño, L.

    2014-06-01

    The trend to miniaturize electronic devices over the last decades applies to Unmanned Aerial Vehicles (UAVs) as well as to sensor technologies and imaging devices, and has driven a strong revolution in the surveying and mapping industries. However, only within the last few years has LiDAR sensor technology achieved a sufficient reduction in size and weight to be considered for UAV platforms. This paper presents an innovative solution to capture point cloud data from a LiDAR-equipped UAV and to perform 3D modelling of the whole envelope of buildings in BIM format. A mini-UAV platform is used (weighing less than 5 kg, with up to 1.5 kg of sensor payload), and data from two different acquisition methodologies are processed and compared with the aim of finding the optimal configuration for the generation of 3D models of buildings for energy studies.

  4. Comparison of Riparian Evapotranspiration Estimated Using Raman LIDAR and Water Balance Based Estimates from a Soil Moisture Sensor Network

    NASA Astrophysics Data System (ADS)

    Solis, J. A.; Rajaram, H.; Whittemore, D. O.; Butler, J. J.; Eichinger, W. E.; Reboulet, E. C.

    2013-12-01

    Riparian evapotranspiration (RET) is an important component of basin-wide evapotranspiration (ET), especially in subhumid to semi-arid regions, with significant impacts on water management and conservation. A common method of measuring ET is the eddy correlation technique. However, since most riparian zones are narrow, eddy correlation techniques are not applicable because of limited fetch distance. Techniques based on surface-subsurface water balance are applicable in these situations, but their accuracy is not well constrained. In this study, we estimated RET within a 100 m long and 40 m wide riparian zone along Rock Creek in the Whitewater Basin in central Kansas using a water balance approach and Raman LIDAR measurements. A total of six soil moisture profiles (with six soil moisture sensors in each profile) and water-table measurements were used to estimate subsurface water storage (total soil moisture, TSM). The Los Alamos National Laboratory (LANL)-University of Iowa (UI) Raman LIDAR was used to measure water vapor concentrations in three dimensions, and Monin-Obukhov similarity theory was used to obtain the spatially resolved fluxes. The LIDAR system included a 1.064 micron Nd:YAG laser with a Cassegrain telescope, operating at a pulse repetition rate of 50 Hz with 25 mJ of energy per pulse. Estimates of RET obtained from TSM changes were compared to LIDAR estimates obtained from three-dimensional water vapor concentrations of the atmosphere directly above and downwind of the riparian vegetation. The LIDAR measurements help to validate the TSM-based estimates of RET and constrain their accuracy. RET estimates obtained from TSM changes in individual soil moisture profiles exhibited a large variability (up to a factor of 8). This variability results from the highly heterogeneous soils in the vadose zone (2-3 m thick), where soil moisture (rather than groundwater) is the major source of water for riparian vegetation. Variable vegetation density and species also

  5. Calibration of a water vapour Raman lidar with a kite-based humidity sensor

    NASA Astrophysics Data System (ADS)

    Totems, Julien; Chazette, Patrick

    2016-03-01

    We present a calibration method for a water vapour Raman lidar using a meteorological probe lifted by a kite, flown steadily above the lidar site, within the framework of the Hydrological Cycle in the Mediterranean Experiment (HyMeX) and Chemistry-Aerosol Mediterranean Experiment (ChArMEx) campaigns. The experiment was carried out in Menorca (Spain) during June 2013, using the mobile water vapour and aerosol lidar WALI. Calibration using a kite demonstrated a much better degree of co-location with the lidar system than that which could be achieved with radiosondes, and it allowed us to determine the overlap function and calibration factor simultaneously. The range-dependent water vapour lidar calibration was thus determined with an uncertainty of 2 % in the 90-8000 m altitude range. Lidar water vapour measurements are further compared with radiosondes, showing very good agreement in the lower troposphere (1-5 km) and a relative difference and standard deviation of 5 and 9 % respectively. Moreover, a reasonable agreement with MODIS-integrated water vapour content is found, with a relative mean and standard deviation of 3 and 16 % respectively. However, a discrepancy is found with AERONET retrievals, showing the latter to be underestimated by 28 %. Reanalyses by the ECMWF/IFS numerical weather prediction model also agree with the temporal evolution highlighted with the lidar, with no measurable drift in integrated water vapour content over the period.
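
    A common way to obtain a single lidar calibration constant of the kind described here is a least-squares fit through the origin of co-located in-situ humidity values against the lidar water-vapour/nitrogen signal ratio. The sketch below works under that assumption; the paper's actual range-dependent calibration (simultaneous overlap and calibration retrieval) is more involved, and the names are ours.

```python
import numpy as np

def calibration_constant(lidar_ratio, reference_wvmr, weights=None):
    """Fit q = K * (S_H2O / S_N2) through the origin by weighted
    least squares against co-located reference (e.g. kite-probe)
    water-vapour mixing ratios; returns K."""
    x = np.asarray(lidar_ratio, float)
    y = np.asarray(reference_wvmr, float)
    w = np.ones_like(x) if weights is None else np.asarray(weights, float)
    return float(np.sum(w * x * y) / np.sum(w * x * x))
```

    Weights can down-weight range bins with low signal-to-noise, which matters when the probe only samples part of the lidar profile.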

  6. Modeling Diurnal and Seasonal 3D Light Profiles in Amazon Forests

    NASA Astrophysics Data System (ADS)

    Morton, D. C.; Rubio, J.; Gastellu-Etchegorry, J.; Cook, B. D.; Hunter, M. O.; Yin, T.; Nagol, J. R.; Keller, M. M.

    2013-12-01

    The complex horizontal and vertical structure in tropical forests generates a diversity of light environments for canopy and understory trees. These 3D light profiles are dynamic on diurnal and seasonal time scales based on changes in solar illumination and the fraction of diffuse light. Understanding this variability is critical for improving ecosystem models and interpreting optical and LiDAR remote sensing data from tropical forests. Here, we initialized the Discrete Anisotropic Radiative Transfer (DART) model using dense airborne LiDAR data (>20 returns per m²) from three forest sites in the central and eastern Amazon. Forest scenes derived from airborne LiDAR data were tested using modeled and observed large-footprint LiDAR data from the ICESat-GLAS sensor. Next, diurnal and seasonal profiles of photosynthetically active radiation (PAR) for each forest site were simulated under clear sky and cloudy conditions using DART. Incident PAR was summarized for canopy, understory, and ground levels. Our study illustrates the importance of realistic canopy models for accurate representation of LiDAR and optical radiative transfer. In particular, canopy rugosity and ground topography information from airborne LiDAR data provided critical 3D information that cannot be recreated using stem maps and allometric relationships for crown dimensions. The spatial arrangement of canopy trees altered PAR availability, even for dominant individuals, compared to downwelling measurements from nearby eddy flux towers. Pseudo-realistic branch and leaf architecture was also essential for recreating multiple scattering within canopies at near-infrared wavelengths commonly used for LiDAR remote sensing and quantifying PAR attenuation from shading within and between canopies. These findings point to the need for more spatial information on forest structure to improve the representation of light availability in models of tropical forest productivity.

  7. High-Fidelity Flash Lidar Model Development

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Pierrottet, Diego F.; Amzajerdian, Farzin

    2014-01-01

    NASA's Autonomous Landing and Hazard Avoidance Technologies (ALHAT) project is currently developing the critical technologies to safely and precisely navigate and land crew, cargo and robotic spacecraft vehicles on and around planetary bodies. One key element of this project is a high-fidelity Flash Lidar sensor that can generate three-dimensional (3-D) images of the planetary surface. These images are processed with hazard detection and avoidance and hazard relative navigation algorithms, and then are subsequently used by the Guidance, Navigation and Control subsystem to generate an optimal navigation solution. A complex, high-fidelity model of the Flash Lidar was developed in order to evaluate the performance of the sensor and its interaction with the interfacing ALHAT components on vehicles with different configurations and under different flight trajectories. The model contains a parameterized, general approach to Flash Lidar detection and reflects physical attributes such as range and electronic noise sources, and laser pulse temporal and spatial profiles. It also provides the realistic interaction of the laser pulse with terrain features that include varying albedo, boulders, craters, slopes and shadows. This paper gives a description of the Flash Lidar model and presents results from the Lidar operating under different scenarios.

  8. High-fidelity flash lidar model development

    NASA Astrophysics Data System (ADS)

    Hines, Glenn D.; Pierrottet, Diego F.; Amzajerdian, Farzin

    2014-06-01

    NASA's Autonomous Landing and Hazard Avoidance Technologies (ALHAT) project is currently developing the critical technologies to safely and precisely navigate and land crew, cargo and robotic spacecraft vehicles on and around planetary bodies. One key element of this project is a high-fidelity Flash Lidar sensor that can generate three-dimensional (3-D) images of the planetary surface. These images are processed with hazard detection and avoidance and hazard relative navigation algorithms, and then are subsequently used by the Guidance, Navigation and Control subsystem to generate an optimal navigation solution. A complex, high-fidelity model of the Flash Lidar was developed in order to evaluate the performance of the sensor and its interaction with the interfacing ALHAT components on vehicles with different configurations and under different flight trajectories. The model contains a parameterized, general approach to Flash Lidar detection and reflects physical attributes such as range and electronic noise sources, and laser pulse temporal and spatial profiles. It also provides the realistic interaction of the laser pulse with terrain features that include varying albedo, boulders, craters, slopes and shadows. This paper gives a description of the Flash Lidar model and presents results from the Lidar operating under different scenarios.

  9. Potential use of Spaceborne Lidar Measurements to Improve Atmospheric Temperature Retrievals from Passive Sensors.

    PubMed

    Chazette, P; Mégie, G; Pelon, J

    1998-11-20

    A preliminary study of the synergism between active and passive spaceborne remote sensing systems has been conducted on the basis of new prospects for the implementation of lidar systems on space platforms for global scale measurements. Assuming a quasi-simultaneity in the measurements performed with an active backscatter lidar and with operational meteorological packages such as the Television Infrared Operational Satellite (TIROS)-N Operational Vertical Sounder radiometers, it is shown that combining both measurements could lead to an improvement in the accuracy of the retrieved vertical temperature profile in the lower troposphere. We used a modified version of the improved initialization inversion operational algorithm to process the TIROS-N Operational Vertical Sounder data, taking into account the lidar measurements of cloud heights to define a temperature reference. New perspectives for the coupling of lidar and passive radiometers are discussed. PMID:18301603

  10. Simulated lidar waveforms for understanding factors affecting waveform shape

    NASA Astrophysics Data System (ADS)

    Kim, Angela M.; Olsen, Richard C.

    2011-06-01

    Full-waveform LIDAR is a technology which enables the analysis of the 3-D structure and arrangement of objects. An in-depth understanding of the factors that affect the shape of the full-waveform signal is required in order to extract as much information as possible from the signal. A simple model of LIDAR propagation has been created which simulates the interaction of LIDAR energy with objects in a scene. A 2-dimensional model tree allows controlled manipulation of the geometric arrangement of branches and leaves with varying spectral properties. Results suggest complex interactions of the LIDAR energy with the tree canopy, including the occurrence of multiple bounces for energy reaching the ground under the canopy. Idealized sensor instrument response functions incorporated in the simulation have a large impact on waveform shape. A waveform-recording laser rangefinder has been built which will allow validation of model results; preliminary collection results are presented here.
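
    A waveform simulator of the sort described can be reduced, in its simplest form, to a sum of Gaussian echoes, one per illuminated surface, broadened by a Gaussian instrument response (Gaussian widths add in quadrature under convolution). A minimal sketch with assumed names; multiple scattering and footprint geometry are omitted:

```python
import numpy as np

C = 2.99792458e8  # speed of light, m/s

def simulate_waveform(t, surfaces, pulse_sigma, response_sigma=0.0):
    """Received full waveform as a sum of Gaussian echoes.

    surfaces: iterable of (range_m, effective_reflectance) pairs.
    Convolving the Gaussian outgoing pulse with a Gaussian sensor
    response yields a Gaussian whose sigma adds in quadrature.
    """
    sigma = np.hypot(pulse_sigma, response_sigma)
    w = np.zeros_like(t, dtype=float)
    for rng, refl in surfaces:
        delay = 2.0 * rng / C  # two-way travel time
        w += refl * np.exp(-0.5 * ((t - delay) / sigma) ** 2)
    return w
```

    This makes the abstract's point directly visible: widening the instrument response broadens every return, smearing closely spaced canopy and ground echoes together.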

  11. Lidar Systems for Precision Navigation and Safe Landing on Planetary Bodies

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Pierrottet, Diego F.; Petway, Larry B.; Hines, Glenn D.; Roback, Vincent E.

    2011-01-01

    The ability of lidar technology to provide three-dimensional elevation maps of the terrain, high precision distance to the ground, and approach velocity can enable safe landing of robotic and manned vehicles with a high degree of precision. Currently, NASA is developing novel lidar sensors aimed at needs of future planetary landing missions. These lidar sensors are a 3-Dimensional Imaging Flash Lidar, a Doppler Lidar, and a Laser Altimeter. The Flash Lidar is capable of generating elevation maps of the terrain that indicate hazardous features such as rocks, craters, and steep slopes. The elevation maps collected during the approach phase of a landing vehicle, at about 1 km above the ground, can be used to determine the most suitable safe landing site. The Doppler Lidar provides highly accurate ground relative velocity and distance data allowing for precision navigation to the landing site. Our Doppler lidar utilizes three laser beams pointed in different directions to measure line of sight velocities and ranges to the ground from altitudes of over 2 km. Throughout the landing trajectory starting at altitudes of about 20 km, the Laser Altimeter can provide very accurate ground relative altitude measurements that are used to improve the vehicle position knowledge obtained from the vehicle navigation system. At altitudes from approximately 15 km to 10 km, either the Laser Altimeter or the Flash Lidar can be used to generate contour maps of the terrain, identifying known surface features such as craters, to perform Terrain Relative Navigation, thus further reducing the vehicle's relative position error. This paper describes the operational capabilities of each lidar sensor and provides a status of their development. Keywords: Laser Remote Sensing, Laser Radar, Doppler Lidar, Flash Lidar, 3-D Imaging, Laser Altimeter, Precision Landing, Hazard Detection
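
    The three-beam Doppler arrangement described above determines the full velocity vector because each beam measures the projection of the vehicle velocity onto its pointing direction; with three non-coplanar beams this is a 3x3 linear system. A sketch (function and variable names are ours):

```python
import numpy as np

def velocity_from_los(beam_dirs, v_los):
    """Recover the 3D platform-relative velocity from three
    line-of-sight Doppler measurements, v_los[i] = b_i . v, where
    the rows b_i of beam_dirs are the beam pointing unit vectors."""
    B = np.asarray(beam_dirs, dtype=float)
    return np.linalg.solve(B, np.asarray(v_los, dtype=float))
```

    The beams must not be coplanar, otherwise the matrix is singular and the component of velocity normal to their common plane is unobservable.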

  12. The development and test of a device for the reconstruction of 3-D position and orientation by means of a kinematic sensor assembly with rate gyroscopes and accelerometers.

    PubMed

    Giansanti, Daniele; Maccioni, Giovanni; Macellari, Velio

    2005-07-01

    In this paper, we propose a device for the Position and Orientation (P&O) reconstruction of human segmental locomotion tasks. It is based on three mono-axial accelerometers and three angular velocity sensors, geometrically arranged to form two orthogonal terns. The device was bench tested using step-by-step motor-based equipment. The characteristics of the six channels under bench test conditions were: crosstalk absent, nonlinearity < ±0.1% fs, hysteresis < 0.1% fs, accuracy 0.3% fs, overall resolution better than 0.04 deg/s (angular rate) and 2 x 10^-4 g (acceleration). The device was validated against a stereophotogrammetric body motion analyzer during the execution of three different locomotion tasks: stand-to-sit, sit-to-stand and gait initiation. Comparison of the trajectories from the two methods showed that the errors were lower than 3 x 10^-2 m and 2 deg over a 4 s acquisition, and lower than 6 x 10^-3 m and 0.2 deg during the effective duration of a locomotion task, showing that the wearable device presented here permits the 3-D reconstruction of the movement of the body segment to which it is affixed for time-limited clinical applications. PMID:16041990
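
    The orientation half of such a reconstruction typically integrates the rate-gyro outputs step by step. A minimal strap-down sketch using the Rodrigues rotation formula per sample; this is the generic textbook integration, not necessarily the authors' estimator, and the names are assumptions:

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def integrate_gyro(omega, dt, R0=None):
    """Propagate a body-to-world rotation matrix from body-frame
    angular rates (rad/s), one exact axis-angle step per sample."""
    R = np.eye(3) if R0 is None else np.array(R0, dtype=float)
    for w in np.asarray(omega, dtype=float):
        angle = np.linalg.norm(w) * dt
        if angle < 1e-12:
            continue
        K = skew(w / np.linalg.norm(w))
        # Rodrigues: exp(angle * K)
        R = R @ (np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K)
    return R
```

    In practice gyro bias makes this drift, which is why the paper validates against stereophotogrammetry and restricts use to time-limited tasks.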

  13. A simulation of air pollution model parameter estimation using data from a ground-based LIDAR remote sensor

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.; Suttles, J. T.

    1977-01-01

    One way to obtain estimates of the unknown parameters in a pollution dispersion model is to compare the model predictions with remotely sensed air quality data. A ground-based LIDAR sensor provides relative pollution concentration measurements as a function of space and time. The measured sensor data are compared with the dispersion model output through a numerical estimation procedure to yield parameter estimates which best fit the data. This overall process is tested in a computer simulation to study the effects of various measurement strategies. Such a simulation is useful prior to a field measurement exercise to maximize the information content in the collected data. Parametric studies of simulated data matched to a Gaussian plume dispersion model indicate the trade-offs available between estimation accuracy and data acquisition strategy.
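
    For a Gaussian plume, the parameters of a crosswind concentration transect (which is what a horizontal lidar scan provides) can be estimated from its moments: the integral gives Q/u, the first moment the plume centerline, and the second central moment sigma_y^2. The sketch below uses this moment method as one illustrative estimator; the paper's numerical estimation procedure is not specified here, and the names are ours:

```python
import numpy as np

def _trapz(f, x):
    """Trapezoidal integral (kept local to avoid NumPy-version helpers)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

def fit_crosswind_profile(y, conc, wind_speed):
    """Moment-based estimates (Q, sigma_y, y0) for a Gaussian
    crosswind profile:
        C(y) = Q / (sqrt(2 pi) sigma_y u) * exp(-(y - y0)^2 / (2 sigma_y^2))
    """
    y, conc = np.asarray(y, float), np.asarray(conc, float)
    area = _trapz(conc, y)                        # integral of C dy = Q / u
    y0 = _trapz(y * conc, y) / area               # first moment: centerline
    var = _trapz((y - y0) ** 2 * conc, y) / area  # second central moment
    return area * wind_speed, float(np.sqrt(var)), float(y0)
```

    Moment estimators are noise-sensitive in the profile tails, which is one reason such simulations are run before committing to a measurement strategy.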

  14. A comparison of Doppler lidar wind sensors for Earth-orbit global measurement applications

    NASA Technical Reports Server (NTRS)

    Menzies, Robert T.

    1985-01-01

    At present, four Doppler lidar configurations are being promoted for the measurement of tropospheric winds: (1) the coherent CO2 lidar, operating in the 9 micrometer region, using a pulsed, atmospheric pressure CO2 gas discharge laser transmitter and heterodyne detection; (2) the coherent neodymium-doped YAG or glass lidar, operating at 1.06 micrometers, using flashlamp or diode laser optical pumping of the solid state laser medium and heterodyne detection; (3) the neodymium-doped YAG/glass lidar, operating at the doubled frequency (530 nm wavelength), again using flashlamp or diode laser pumping of the laser transmitter, with a high resolution tandem Fabry-Perot filter and direct detection; and (4) the Raman-shifted xenon chloride lidar, operating at 350 nm wavelength, using a pulsed, atmospheric pressure XeCl gas discharge laser transmitter at 308 nm, Raman shifted in a high pressure hydrogen cell to 350 nm to avoid strong stratospheric ozone absorption, also with a high resolution tandem Fabry-Perot filter and direct detection. Comparisons of these four systems can include many factors and trade-offs; the major portion of this comparison is devoted to efficiency. Efficiency comparisons are made by estimating the number of transmitted photons required for a single pulse wind velocity estimate of ±1 m/s accuracy in the middle troposphere, from an altitude of 800 km, which is assumed to be reasonable for a polar orbiting platform.

  15. Hyperspectral-LIDAR system and data product integration for terrestrial applications

    NASA Astrophysics Data System (ADS)

    Corp, Lawrence A.; Cheng, Yen-Ben; Middleton, Elizabeth M.; Parker, Geoffrey G.; Huemmrich, K. Fred; Campbell, Petya K. E.

    2009-08-01

    This manuscript details the development and validation of a unique forward-thinking instrument and methodology for monitoring terrestrial carbon dynamics through synthesis of existing hyperspectral sensing and Light Detection and Ranging (LIDAR) technologies. This technology demonstration is directly applicable to linking target mission concepts identified as scientific priorities in the National Research Council (NRC, 2007) Earth Science Decadal Survey; namely, DESDynI and HyspIRI. The primary components of the Hyperspec-LIDAR system are the ruggedized imaging spectrometer and a small footprint LIDAR system. The system is mounted on a heavy duty motorized pan-tilt unit programmed to support both push-broom style hyperspectral imaging and 3-D canopy LIDAR structural profiling. The integrated Hyperspec-LIDAR sensor system yields a hyperspectral data cube with up to 800 bands covering the spectral range of 400 to 1000 nm and a 3-D scanning LIDAR system accurately measuring the vertical distribution of intercepted surfaces within a range of 150 m with an accuracy of 15 mm. Preliminary field tests of the Hyperspec-LIDAR sensor system were conducted at a mature deciduous mixed forest tower site located at the Smithsonian Environmental Research Center in Edgewater, MD. The goal of this research is to produce integrated science and data products from ground observations that will support satellite-based hybrid spectral/structural profiles linked through appropriate models to monitor Net Ecosystem Exchange and related parameters such as ecosystem Light Use Efficiency.

  16. 3D Feature Point Extraction from LIDAR Data Using a Neural Network

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Schlichting, A.; Brenner, C.

    2016-06-01

    Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data, which were then matched to improve the positioning accuracy of the vehicles. However, there are environments which contain only a limited number of poles. 3D feature points are a proper alternative for use as landmarks: they can be assumed to be present in the environment, independent of particular object classes. To match online LiDAR data to another LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is first detected by the Shi-Tomasi corner detector on range images of the LiDAR point cloud. Using a back-propagation algorithm for the training, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on the 2D range images, but also 3D features such as the curvature value and the surface-normal component along the z axis, which are calculated directly from the LiDAR point cloud. Subsequently, the feature points extracted on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in 3D space. Our tests show that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for better localization of vehicles.
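
    The Shi-Tomasi score used for candidate detection is the smaller eigenvalue of the local structure tensor of image gradients: it is large only where the range image varies in two directions at once. A minimal NumPy sketch on a range image (border handling simplified; the paper's subsequent neural-network filtering of candidates is not shown, and the names are ours):

```python
import numpy as np

def box_sum(a, r):
    """(2r+1)x(2r+1) windowed sum via an integral image; borders clipped."""
    s = np.cumsum(np.cumsum(np.pad(a, ((1, 0), (1, 0))), axis=0), axis=1)
    h, w = a.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        i0, i1 = max(i - r, 0), min(i + r + 1, h)
        for j in range(w):
            j0, j1 = max(j - r, 0), min(j + r + 1, w)
            out[i, j] = s[i1, j1] - s[i0, j1] - s[i1, j0] + s[i0, j0]
    return out

def shi_tomasi_response(range_img, r=1):
    """Smaller eigenvalue of the local gradient structure tensor:
    ~0 on flat areas and straight edges, large at corners."""
    Iy, Ix = np.gradient(np.asarray(range_img, dtype=float))
    Sxx = box_sum(Ix * Ix, r)
    Syy = box_sum(Iy * Iy, r)
    Sxy = box_sum(Ix * Iy, r)
    mean = (Sxx + Syy) / 2.0
    return mean - np.sqrt((Sxx - mean) ** 2 + Sxy ** 2)  # lambda_min
```

    Candidates are then taken as local maxima of the response above a threshold; the trained network decides which candidates are stable enough to serve as landmarks.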

  17. Doppler lidar atmospheric wind sensors - A comparative performance evaluation for global measurement applications from earth orbit

    NASA Technical Reports Server (NTRS)

    Menzies, R. T.

    1986-01-01

    A comparison is made of four prominent Doppler lidar systems, ranging in wavelength from the near UV to the middle IR, which are presently being studied for their potential in an earth-orbiting global tropospheric wind field measurement application. The comparison is restricted to relative photon efficiencies, i.e., the required number of transmitted photons per pulse is calculated for each system for midtropospheric velocity estimate uncertainties ranging from ±1 to ±4 m/s. The results are converted to laser transmitter pulse energy and power requirements. The analysis indicates that a coherent CO2 Doppler lidar operating at 9.11-micron wavelength is the most efficient.
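
    The conversion from a required photon count per pulse to transmitter pulse energy follows from E = N h c / lambda, which is also why, for a fixed photon budget, longer wavelengths cost less pulse energy. A small helper (the function name is an assumption):

```python
def pulse_energy_joules(n_photons, wavelength_m):
    """Laser pulse energy for a required transmitted photon count:
    E = N * h * c / lambda (single-photon energy times count)."""
    h = 6.62607015e-34  # Planck constant, J s
    c = 2.99792458e8    # speed of light, m/s
    return n_photons * h * c / wavelength_m
```

    At 9.11 µm a photon carries roughly 2.2e-20 J, about 26 times less than a 350 nm photon, so the same photon count translates into a far smaller pulse-energy requirement in the mid-IR.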

  18. Hybrid 3D laser sensor based on a high-performance long-range wide-field-of-view laser scanner and a calibrated high-resolution digital camera

    NASA Astrophysics Data System (ADS)

    Ullrich, Andreas; Studnicka, Nikolaus; Riegl, Johannes

    2004-09-01

    We present a hybrid sensor consisting of a high-performance 3D imaging laser sensor and a high-resolution digital camera. The laser sensor uses the time-of-flight principle based on near-infrared pulses. We demonstrate the performance capabilities of the system by presenting example data, and we describe the software package used for data acquisition, data merging and visualization. The advantages of using both near-range photogrammetry and laser scanning for data registration and data extraction are discussed.

  19. Study of Droplet Activation in Thin Clouds Using Ground-Based Raman Lidar and Ancillary Remote Sensors

    NASA Astrophysics Data System (ADS)

    Rosoldi, Marco; Madonna, Fabio; Gumà Claramunt, Pilar; Pappalardo, Gelsomina

    2016-06-01

    A methodology for the study of cloud droplet activation, based on measurements performed with ground-based multi-wavelength Raman lidars and ancillary remote sensors at the CNR-IMAA observatory, Potenza, southern Italy, is presented. The study focuses on the observation of thin warm clouds. Thin clouds are often also optically thin, which allows cloud top detection and full profiling of cloud layers using ground-based Raman lidar. Moreover, broken clouds are inspected to take advantage of their discontinuous structure in order to study the variability of optical properties and water vapor content in the transition from cloudy to cloudless regions close to the cloud boundaries. A statistical study of this variability identifies threshold values for the optical properties, enabling the discrimination between cloudy and cloudless regions. These values can be used to evaluate and improve parameterizations of droplet activation within numerical models. A statistical study of the co-located Doppler radar moments allows retrieval of droplet size and vertical velocities close to the cloud base. First evidence of a correlation between droplet vertical velocities measured at the cloud base and the aerosol effective radius observed in the cloud-free regions of broken clouds is found.

  20. Development of PM2.5 density distribution visualization system using ground-level sensor network and Mie lidar

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Akaho, Taiga; Kojiro, Yu; Uchino, Osamu; Morino, Isamu; Yokota, Tatsuya; Nagai, Tomohiro; Sakai, Tetsu; Maki, Takashi; Yamazaki, Akihiro; Arai, Kohei

    2014-10-01

    Atmospheric particulate matter (PM) consists of tiny pieces of solid or liquid matter suspended in the Earth's atmosphere as aerosol. Recently, the density of fine particles (PM2.5, with diameters of 2.5 micrometers or less) transported from China has become a serious environmental issue in eastern Asia. In this study, the authors have developed a PM2.5 density distribution visualization system using a ground-level sensor network dataset and a Mie lidar dataset. The former dataset is used for visualization and movement analysis of the horizontal PM2.5 density distribution; the latter is used for visualization and movement analysis of the vertical PM2.5 density distribution.

  1. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D representations of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has tremendous potential as a multi-faceted virtual museum. Finally, 3D has great potential to act as a hub of information, linking to related 2D imagery, texts, video and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user, and what metadata is needed to support the semantic linking.

  2. Space-Based Erbium-Doped Fiber Amplifier Transmitters for Coherent, Ranging, 3D-Imaging, Altimetry, Topology, and Carbon Dioxide Lidar and Earth and Planetary Optical Laser Communications

    NASA Astrophysics Data System (ADS)

    Storm, Mark; Engin, Doruk; Mathason, Brian; Utano, Rich; Gupta, Shantanu

    2016-06-01

    This paper describes Fibertek, Inc.'s progress in developing space-qualified Erbium-doped fiber amplifier (EDFA) transmitters for laser communications and ranging/topology, and CO2 integrated path differential absorption (IPDA) lidar. High peak power (1 kW) and 6 W of average power supporting multiple communications formats has been demonstrated with 17% efficiency in a compact 3 kg package. The unit has been tested to Technology Readiness Level (TRL) 6 standards. A 20 W EDFA suitable for CO2 lidar has been demonstrated with ~14% efficiency (electrical to optical [e-o]) and its performance optimized for 1571 nm operation.

  3. Real-time monitoring of 3D cell culture using a 3D capacitance biosensor.

    PubMed

    Lee, Sun-Mi; Han, Nalae; Lee, Rimi; Choi, In-Hong; Park, Yong-Beom; Shin, Jeon-Soo; Yoo, Kyung-Hwa

    2016-03-15

    Three-dimensional (3D) cell cultures have recently received attention because they represent a more physiologically relevant environment compared to conventional two-dimensional (2D) cell cultures. However, 2D-based imaging techniques or cell sensors are insufficient for real-time monitoring of cellular behavior in 3D cell culture. Here, we report investigations conducted with a 3D capacitance cell sensor consisting of vertically aligned pairs of electrodes. When GFP-expressing human breast cancer cells (GFP-MCF-7) encapsulated in alginate hydrogel were cultured in a 3D cell culture system, cellular activities, such as cell proliferation and apoptosis at different heights, could be monitored non-invasively and in real-time by measuring the change in capacitance with the 3D capacitance sensor. Moreover, we were able to monitor cell migration of human mesenchymal stem cells (hMSCs) with our 3D capacitance sensor. PMID:26386332

  4. A Fuzzy-Based Fusion Method of Multimodal Sensor-Based Measurements for the Quantitative Evaluation of Eye Fatigue on 3D Displays

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Heo, Hwan; Park, Kang Ryoung

    2015-01-01

    With the rapid increase of 3-dimensional (3D) content, considerable research related to the 3D human factor has been undertaken for quantitatively evaluating visual discomfort, including eye fatigue and dizziness, caused by viewing 3D content. Various modalities such as electroencephalograms (EEGs), biomedical signals, and eye responses have been investigated. However, the majority of the previous research has analyzed each modality separately to measure user eye fatigue. This cannot guarantee the credibility of the resulting eye fatigue evaluations. Therefore, we propose a new method for quantitatively evaluating eye fatigue related to 3D content by combining multimodal measurements. This research is novel for the following four reasons: first, for the evaluation of eye fatigue with high credibility on 3D displays, a fuzzy-based fusion method (FBFM) is proposed based on the multimodalities of EEG signals, eye blinking rate (BR), facial temperature (FT), and subjective evaluation (SE); second, to measure a more accurate variation of eye fatigue (before and after watching a 3D display), we obtain the quality scores of the EEG signals, eye BR, FT and SE; third, for combining the values of the four modalities, we obtain the optimal weights of the EEG signals, BR, FT and SE using a fuzzy system based on quality scores; fourth, the quantitative level of the variation of eye fatigue is finally obtained using the weighted sum of the values measured by the four modalities. Experimental results confirm that the effectiveness of the proposed FBFM is greater than that of other conventional multimodal measurements. Moreover, the credibility of the variations of the eye fatigue using the FBFM before and after watching the 3D display is proven using a t-test and descriptive statistical analysis using effect size. PMID:25961382

  5. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  6. Carbon dioxide Doppler lidar wind sensor on a Space Station polar platform

    NASA Technical Reports Server (NTRS)

    Petheram, John C.; Frohbeiter, Greta; Rosenberg, A.

    1989-01-01

    A study has been performed of the feasibility of accommodating a carbon dioxide Doppler lidar on a Space Station polar platform. Results show that such an instrument could be accommodated on a single 1.5 x 2.25-m optical bench, mounted centrally on the earth facing side of the satellite. The power, weight, and thermal issues appear resolvable. However, the question of servicing the instrument remains open, until more data are available on the lifetime of an isotopic CO2 laser.

  7. An underwater chaotic lidar sensor based on synchronized blue laser diodes

    NASA Astrophysics Data System (ADS)

    Rumbaugh, Luke K.; Dunn, Kaitlin J.; Bollt, Erik M.; Cochenour, Brandon; Jemison, William D.

    2016-05-01

    We present a novel chaotic lidar system designed for underwater impulse response measurements. The system uses two recently introduced, low-cost, commercially available 462 nm multimode InGaN laser diodes, which are synchronized by a bi-directional optical link. This synchronization results in a noise-like chaotic intensity modulation with over 1 GHz bandwidth and strong modulation depth. An advantage of this approach is its simple transmitter architecture, which uses no electrical signal generator, electro-optic modulator, or optical frequency doubler.

  8. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The second product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  9. Validate and update of 3D urban features using multi-source fusion

    NASA Astrophysics Data System (ADS)

    Arrington, Marcus; Edwards, Dan; Sengers, Arjan

    2012-06-01

    As forecast by the United Nations in May 2007, the population of the world transitioned from a rural to an urban demographic majority with more than half living in urban areas.1 Modern urban environments are complex 3-dimensional (3D) landscapes with 4-dimensional patterns of activity that challenge various traditional 1-dimensional and 2-dimensional sensors to accurately sample these man-made terrains. Depending on geographic location, data resulting from LIDAR, multi-spectral, electro-optical, thermal, and ground-based static and mobile sensors may be available with multiple collects, along with more traditional 2D GIS features. Reconciling differing data sources over time to correctly portray the dynamic urban landscape raises significant fusion and representational challenges, particularly as higher levels of spatial resolution become available and expected by users. This paper presents a framework for integrating the imperfect answers of our differing sensors and data sources into a powerful representation of the complex urban environment. A case study is presented involving the integration of temporally diverse 2D, 2.5D and 3D spatial data sources over Kandahar, Afghanistan. In this case study we present a methodology for validating and augmenting 2D/2.5D urban feature and attribute data with LIDAR to produce validated 3D objects. We demonstrate that nearly 15% of buildings in Kandahar require understanding nearby vegetation before 3D validation can be successful. We also address urban temporal change detection at the object level. Finally, we address issues involved with increased sampling resolution, since urban features are rarely simple cubes but in the case of Kandahar involve balconies, TV dishes, rooftop walls, small rooms, and domes, among other things.

  10. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation based on the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to the 1990s, the holographic concept is spreading into all scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do the seen and the unseen interfere? What else must be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? For what purpose? At what level? In which subject matter? For whom?

  11. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  12. Airborne lidar intensity calibration and application for land use classification

    NASA Astrophysics Data System (ADS)

    Li, Dong; Wang, Cheng; Luo, She-Zhou; Zuo, Zheng-Li

    2014-11-01

    Airborne Light Detection and Ranging (LiDAR) is an active remote sensing technology which can acquire topographic information efficiently. It can record the accurate 3D coordinates of the targets and also the signal intensity (the amplitude of backscattered echoes), which represents the reflectance characteristics of targets. The intensity data has been used in land use classification, vegetation fractional cover and leaf area index (LAI) estimation. Apart from the reflectance characteristics of the targets, the intensity data can also be influenced by many other factors, such as flying height, incident angle, atmospheric attenuation, laser pulse power and laser beam width. It is therefore necessary to calibrate intensity values before further applications. In this study, we first analyze the factors affecting LiDAR intensity based on the radar range equation, and then apply an intensity calibration method, which accounts for the sensor-to-target distance and incident angle, to the laser intensity data over the study area. Finally, the raw LiDAR intensity and the normalized intensity data are each used for land use classification along with LiDAR elevation data. The results show that the classification accuracy from the normalized intensity data is higher than that from the raw LiDAR intensity data, indicating that calibration of LiDAR intensity data is necessary in land use classification applications.
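    The range and incidence-angle normalization described above can be sketched as a single correction step derived from the radar range equation: received intensity falls off with the square of range and, for an extended Lambertian target, with the cosine of the incident angle. The function name, the 1000 m reference range, and the sample values below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def normalize_intensity(raw_intensity, sensor_range, incidence_angle_rad,
                        ref_range=1000.0):
    """Range- and angle-normalize a LiDAR return intensity.

    Based on the radar range equation: quadratic range falloff plus a
    Lambertian cosine term for the incidence angle. ref_range is an
    arbitrary reference distance (an assumed value, here 1000 m).
    """
    return (raw_intensity * (sensor_range / ref_range) ** 2
            / np.cos(incidence_angle_rad))

# Illustrative return: 120 counts at 1200 m range, 20 deg incidence
i_norm = normalize_intensity(120.0, 1200.0, np.radians(20.0))
```

Classification would then be run on the normalized values rather than the raw counts, so that identical surfaces observed at different ranges and angles produce comparable intensities.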

  13. Joint Temperature-Lasing Mode Compensation for Time-of-Flight LiDAR Sensors

    PubMed Central

    Alhashimi, Anas; Varagnolo, Damiano; Gustafsson, Thomas

    2015-01-01

    We propose an expectation maximization (EM) strategy for improving the precision of time of flight (ToF) light detection and ranging (LiDAR) scanners. The novel algorithm statistically accounts not only for the bias induced by temperature changes in the laser diode, but also for the multi-modality of the measurement noises that is induced by mode-hopping effects. Instrumental to the proposed EM algorithm, we also describe a general thermal dynamics model that can be learned either from just input-output data or from a combination of simple temperature experiments and information from the laser’s datasheet. We test the strategy on a SICK LMS 200 device and improve its average absolute error by a factor of three. PMID:26690445
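    As a rough illustration of how EM can separate multi-modal range noise such as mode hopping, the sketch below fits a two-component 1-D Gaussian mixture to simulated range residuals. This is a deliberate simplification of the paper's algorithm (it omits the temperature-bias dynamics entirely), and all names and numeric values are assumptions.

```python
import numpy as np

def em_two_mode(z, iters=50):
    """Fit a two-component 1-D Gaussian mixture to residuals z with EM,
    as a stand-in for multi-modal mode-hopping noise.
    Returns (weights, means, variances)."""
    mu = np.array([z.min(), z.max()], dtype=float)   # crude initialisation
    var = np.array([z.var(), z.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each lasing mode for each sample
        lik = (w / np.sqrt(2 * np.pi * var)
               * np.exp(-(z[:, None] - mu) ** 2 / (2 * var)))
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        n = r.sum(axis=0)
        w = n / len(z)
        mu = (r * z[:, None]).sum(axis=0) / n
        var = (r * (z[:, None] - mu) ** 2).sum(axis=0) / n + 1e-9
    return w, mu, var

# Synthetic residuals from two hypothetical lasing modes
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0.0, 0.01, 500),
                    rng.normal(0.05, 0.01, 300)])
weights, means, variances = em_two_mode(z)
```

With the two modes separated, each return can be de-biased by the mean of its most responsible component, which is the intuition behind the precision gain reported above.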

  14. The Windvan pulsed CO2 Doppler lidar wide-area wind sensor

    NASA Technical Reports Server (NTRS)

    Lawrence, Rhidian

    1990-01-01

    Wind sensing using a Doppler lidar is achieved by sensing the Doppler content of narrow-frequency laser light backscattered by the ambient atmospheric aerosols. The derived radial wind components along several directions are used to generate wind vectors, typically using the Velocity Azimuth Display (VAD) method described below. Range-resolved information is obtained by range gating the continuous scattered return. For a CO2 laser (10.6 μm) the Doppler velocity scaling factor is 188 kHz per m s⁻¹. In the VAD scan method the zenith angle of the pointing direction is fixed and its azimuth is continuously varied through 2π. A spatially uniform wind field at a particular altitude yields a sinusoidal variation of the radial component vs. azimuth. The amplitude, phase and dc component of this sinusoid yield the horizontal wind speed, direction and vertical component of the wind, respectively. In a nonuniform wind field the Fourier components of the variation yield the required information.
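    The VAD retrieval described above amounts to fitting a sinusoid to the radial velocities: at fixed zenith angle θ, v_r(φ) = w·cos θ + V_h·sin θ·cos(φ − φ_wind), so a least-squares fit on the basis [1, cos φ, sin φ] recovers all three wind quantities. The sketch below uses synthetic, noiseless data; the zenith angle and wind values are assumptions for illustration, not from the record. (The quoted scaling factor follows from f = 2v/λ with λ = 10.6 μm.)

```python
import numpy as np

# VAD retrieval sketch on synthetic data: fixed zenith angle theta,
# radial velocity sampled every 10 degrees of azimuth.
theta = np.radians(30.0)                      # zenith angle (assumed)
phi = np.radians(np.arange(0.0, 360.0, 10.0))

# Synthetic "truth": 8 m/s horizontal wind toward 60 deg, 0.2 m/s updraft
V_h, phi_wind, w = 8.0, np.radians(60.0), 0.2
v_r = w * np.cos(theta) + V_h * np.sin(theta) * np.cos(phi - phi_wind)

# Linear least squares on the basis [1, cos(phi), sin(phi)]
A = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
c0, c1, c2 = np.linalg.lstsq(A, v_r, rcond=None)[0]

speed = np.hypot(c1, c2) / np.sin(theta)   # horizontal wind speed
direction = np.arctan2(c2, c1)             # wind azimuth
vertical = c0 / np.cos(theta)              # vertical wind component
```

Because the amplitude, phase and dc offset map one-to-one onto wind speed, direction and vertical velocity, the fit recovers the synthetic wind exactly in the noiseless case.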

  15. Joint Temperature-Lasing Mode Compensation for Time-of-Flight LiDAR Sensors.

    PubMed

    Alhashimi, Anas; Varagnolo, Damiano; Gustafsson, Thomas

    2015-01-01

    We propose an expectation maximization (EM) strategy for improving the precision of time of flight (ToF) light detection and ranging (LiDAR) scanners. The novel algorithm statistically accounts not only for the bias induced by temperature changes in the laser diode, but also for the multi-modality of the measurement noises that is induced by mode-hopping effects. Instrumental to the proposed EM algorithm, we also describe a general thermal dynamics model that can be learned either from just input-output data or from a combination of simple temperature experiments and information from the laser's datasheet. We test the strategy on a SICK LMS 200 device and improve its average absolute error by a factor of three. PMID:26690445

  16. Flash LIDAR Emulator for HIL Simulation

    NASA Technical Reports Server (NTRS)

    Brewster, Paul F.

    2010-01-01

    NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project is building a system for detecting hazards and automatically landing controlled vehicles safely anywhere on the Moon. The Flash Light Detection And Ranging (LIDAR) sensor is used to create on-the-fly a 3D map of the unknown terrain for hazard detection. As part of the ALHAT project, a hardware-in-the-loop (HIL) simulation testbed was developed to test the data processing, guidance, and navigation algorithms in real-time to prove their feasibility for flight. Replacing the Flash LIDAR camera with an emulator in the testbed provided a cheaper, safer, more feasible way to test the algorithms in a controlled environment. This emulator must have the same hardware interfaces as the LIDAR camera, have the same performance characteristics, and produce images similar in quality to the camera. This presentation describes the issues involved and the techniques used to create a real-time flash LIDAR emulator to support HIL simulation.

  17. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  18. On detailed 3D reconstruction of large indoor environments

    NASA Astrophysics Data System (ADS)

    Bondarev, Egor

    2015-03-01

    In this paper we present techniques for highly detailed 3D reconstruction of extra large indoor environments. We discuss the benefits and drawbacks of low-range, far-range and hybrid sensing and reconstruction approaches. The proposed techniques for low-range and hybrid reconstruction, enabling a reconstruction density of 125 points/cm³ on large 100,000 m³ models, are presented in detail. The techniques tackle the core challenges for the above requirements, such as multi-modal data fusion (fusion of LIDAR data with Kinect data), accurate sensor pose estimation, high-density scanning and depth data noise filtering. Other important aspects for extra large 3D indoor reconstruction are point cloud decimation and real-time rendering. In this paper, we present a method for planar-based point cloud decimation, allowing for reduction of point cloud size by 80-95%. Besides this, we introduce a method for online rendering of extra large point clouds, enabling real-time visualization of huge cloud spaces in conventional web browsers.

  19. a Multi-Data Source and Multi-Sensor Approach for the 3d Reconstruction and Visualization of a Complex Archaelogical Site: the Case Study of Tolmo de Minateda

    NASA Astrophysics Data System (ADS)

    Torres-Martínez, J. A.; Seddaiu, M.; Rodríguez-Gonzálvez, P.; Hernández-López, D.; González-Aguilera, D.

    2015-02-01

    The complexity of archaeological sites makes it difficult to obtain an integral model using current geomatic techniques (i.e. aerial and close-range photogrammetry and terrestrial laser scanning) individually, so a multi-sensor approach is proposed as the best solution to provide a 3D reconstruction and visualization of these complex sites. Sensor registration represents a key milestone when automation is required and when aerial and terrestrial datasets must be integrated. To this end, several problems must be solved: coordinate system definition, geo-referencing, co-registration of point clouds, geometric and radiometric homogeneity, etc. Last but not least, safeguarding of tangible archaeological heritage and its associated intangible expressions entails a multi-source data approach in which heterogeneous material (historical documents, drawings, archaeological techniques, habits of living, etc.) is collected and combined with the resulting hybrid 3D models. The proposed multi-data source and multi-sensor approach is applied to the case study of the "Tolmo de Minateda" archaeological site. A total extension of 9 ha is reconstructed, with an adapted level of detail, by an ultralight aerial platform (paratrike), an unmanned aerial vehicle, a terrestrial laser scanner and terrestrial photogrammetry. In addition, the defensive nature of the site (i.e. the presence of three different defensive walls) together with its considerable stratification (i.e. different archaeological surfaces and constructive typologies) requires that tangible and intangible archaeological heritage expressions be integrated with the resulting hybrid 3D models, so that different experts and heritage stakeholders can analyse, understand and exploit the archaeological site.

  20. Automatic registration of UAV-borne sequent images and LiDAR data

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Chen, Chi

    2015-03-01

    Use of direct geo-referencing data leads to registration failure between sequent images and LiDAR data captured by mini-UAV platforms because of low-cost sensors. This paper therefore proposes a novel automatic registration method for sequent images and LiDAR data captured by mini-UAVs. First, the proposed method extracts building outlines from LiDAR data and images and estimates the exterior orientation parameters (EoPs) of the images with building objects in the LiDAR data coordinate framework based on corresponding corner points derived indirectly by using linear features. Second, the EoPs of the sequent images in the image coordinate framework are recovered using a structure from motion (SfM) technique, and the transformation matrices between the LiDAR coordinate and image coordinate frameworks are calculated using corresponding EoPs, resulting in a coarse registration between the images and the LiDAR data. Finally, 3D points are generated from sequent images by multi-view stereo (MVS) algorithms. Then the EoPs of the sequent images are further refined by registering the LiDAR data and the 3D points using an iterative closest-point (ICP) algorithm with the initial results from coarse registration, resulting in a fine registration between sequent images and LiDAR data. Experiments were performed to check the validity and effectiveness of the proposed method. The results show that the proposed method achieves high-precision robust co-registration of sequent images and LiDAR data captured by mini-UAVs.
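    The fine-registration step above relies on ICP. A minimal point-to-point ICP sketch is shown below: brute-force nearest-neighbour correspondences plus the Kabsch/SVD rigid-transform fit, run under the same assumption as the paper's pipeline, namely that a coarse alignment already exists. It is illustrative only, not the authors' implementation, and the toy grid and transform are invented values.

```python
import numpy as np

def icp_point_to_point(src, dst, iters=10):
    """Minimal point-to-point ICP: brute-force nearest neighbours and
    a Kabsch/SVD rigid-transform fit per iteration. Assumes src is
    already coarsely aligned to dst. Returns accumulated rotation R
    and translation t such that src @ R.T + t approximates dst."""
    cur = src.copy()
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # nearest-neighbour correspondences (O(N*M), fine for a toy)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # best-fit rigid transform via Kabsch/SVD
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# Toy example: a 3x3x3 grid, rotated 3 degrees about z and shifted
g = np.arange(3, dtype=float)
src = np.array([[x, y, z] for x in g for y in g for z in g])
a = np.radians(3.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.05, -0.02, 0.03])
dst = src @ R_true.T + t_true
R_est, t_est = icp_point_to_point(src, dst)
```

With a small initial misalignment the nearest-neighbour correspondences are already correct, so the estimate converges in effectively one iteration; this is exactly why the coarse SfM-based registration matters before ICP refinement.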

  1. Highly-Sensitive Surface-Enhanced Raman Spectroscopy (SERS)-based Chemical Sensor using 3D Graphene Foam Decorated with Silver Nanoparticles as SERS substrate

    PubMed Central

    Srichan, Chavis; Ekpanyapong, Mongkol; Horprathum, Mati; Eiamchai, Pitak; Nuntawong, Noppadon; Phokharatkul, Ditsayut; Danvirutai, Pobporn; Bohez, Erik; Wisitsoraat, Anurat; Tuantranont, Adisorn

    2016-01-01

    In this work, a novel platform for surface-enhanced Raman spectroscopy (SERS)-based chemical sensors utilizing three-dimensional microporous graphene foam (GF) decorated with silver nanoparticles (AgNPs) is developed and applied for methylene blue (MB) detection. The results demonstrate that silver nanoparticles significantly enhance cascaded amplification of the SERS effect on multilayer graphene foam (GF). The enhancement factor of the AgNPs/GF sensor is found to be four orders of magnitude larger than that of an AgNPs/Si substrate. In addition, the sensitivity of the sensor could be tuned by controlling the size of the silver nanoparticles. The highest SERS enhancement factor of ∼5 × 10⁴ is achieved at the optimal nanoparticle size of 50 nm. Moreover, the sensor is capable of detecting MB over broad concentration ranges from 1 nM to 100 μM. Therefore, AgNPs/GF is a highly promising SERS substrate for detection of chemical substances with ultra-low concentrations. PMID:27020705

  2. Highly-Sensitive Surface-Enhanced Raman Spectroscopy (SERS)-based Chemical Sensor using 3D Graphene Foam Decorated with Silver Nanoparticles as SERS substrate

    NASA Astrophysics Data System (ADS)

    Srichan, Chavis; Ekpanyapong, Mongkol; Horprathum, Mati; Eiamchai, Pitak; Nuntawong, Noppadon; Phokharatkul, Ditsayut; Danvirutai, Pobporn; Bohez, Erik; Wisitsoraat, Anurat; Tuantranont, Adisorn

    2016-03-01

    In this work, a novel platform for surface-enhanced Raman spectroscopy (SERS)-based chemical sensors utilizing three-dimensional microporous graphene foam (GF) decorated with silver nanoparticles (AgNPs) is developed and applied for methylene blue (MB) detection. The results demonstrate that silver nanoparticles significantly enhance cascaded amplification of the SERS effect on multilayer graphene foam (GF). The enhancement factor of the AgNPs/GF sensor is found to be four orders of magnitude larger than that of an AgNPs/Si substrate. In addition, the sensitivity of the sensor could be tuned by controlling the size of the silver nanoparticles. The highest SERS enhancement factor of ∼5 × 10⁴ is achieved at the optimal nanoparticle size of 50 nm. Moreover, the sensor is capable of detecting MB over broad concentration ranges from 1 nM to 100 μM. Therefore, AgNPs/GF is a highly promising SERS substrate for detection of chemical substances with ultra-low concentrations.

  3. TRACE 3-D documentation

    SciTech Connect

    Crandall, K.R.

    1987-08-01

    TRACE 3-D is an interactive beam-dynamics program that calculates the envelopes of a bunched beam, including linear space-charge forces, through a user-defined transport system. TRACE 3-D provides an immediate graphics display of the envelopes and the phase-space ellipses and allows nine types of beam-matching options. This report describes the beam-dynamics calculations and gives detailed instruction for using the code. Several examples are described in detail.

  4. High-performance, mechanically flexible, and vertically integrated 3D carbon nanotube and InGaZnO complementary circuits with a temperature sensor.

    PubMed

    Honda, Wataru; Harada, Shingo; Ishida, Shohei; Arie, Takayuki; Akita, Seiji; Takei, Kuniharu

    2015-08-26

    A vertically integrated inorganic-based flexible complementary metal-oxide-semiconductor (CMOS) inverter with a temperature sensor, exhibiting a high inverter gain of ≈50 and a low power consumption of <7 nW mm⁻¹, is demonstrated using a layer-by-layer assembly process. In addition, the negligible influence of mechanical flexing on the performance of the CMOS inverter and the temperature dependence of the CMOS inverter characteristics are discussed. PMID:26177598

  5. Imaging Flash Lidar for Safe Landing on Solar System Bodies and Spacecraft Rendezvous and Docking

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Roback, Vincent E.; Bulyshev, Alexander E.; Brewster, Paul F.; Carrion, William A; Pierrottet, Diego F.; Hines, Glenn D.; Petway, Larry B.; Barnes, Bruce W.; Noe, Anna M.

    2015-01-01

    NASA has been pursuing flash lidar technology for autonomous, safe landing on solar system bodies and for automated rendezvous and docking. During the final stages of the landing, from about 1 kilometer to 500 meters above the ground, the flash lidar can generate 3-dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes. The onboard flight computer can then use the 3-D map of the terrain to guide the vehicle to a safe location. As an automated rendezvous and docking sensor, the flash lidar can provide relative range, velocity, and bearing from an approaching spacecraft to another spacecraft or a space station. NASA Langley Research Center has developed and demonstrated a flash lidar sensor system capable of generating 16,000-pixel range images with 7-centimeter precision, at a 20 Hertz frame rate, from a maximum slant range of 1800 m from the target area. This paper describes the lidar instrument and presents the results of recent flight tests onboard a rocket-propelled free-flyer vehicle (Morpheus) built by NASA Johnson Space Center. The flights were conducted at a simulated lunar terrain site, consisting of realistic hazard features and designated landing areas, built at NASA Kennedy Space Center specifically for this demonstration test. This paper also provides an overview of the plan for continued advancement of the flash lidar technology aimed at enhancing its performance to meet both landing and automated rendezvous and docking applications.

  6. Rapid 360 degree imaging and stitching of 3D objects using multiple precision 3D cameras

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Yin, Stuart; Zhang, Jianzhong; Li, Jiangan; Wu, Frank

    2008-02-01

    In this paper, we present the system architecture of a 360 degree view 3D imaging system. The system consists of multiple 3D sensors synchronized to take 3D images around the object. Each 3D camera employs a single high-resolution digital camera and a color-coded light projector. The cameras are synchronized to rapidly capture the 3D and color information of a static object or a live person. The color-encoded structured lighting ensures precise reconstruction of the depth of the object. A 3D imaging system architecture is presented. The architecture employs the displacement of the camera and the projector to triangulate the depth information. The 3D camera system has achieved high depth resolution down to 0.1 mm on a human-head-sized object and 360 degree imaging capability.
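
    The camera-projector displacement mentioned above yields depth by triangulation. A sketch with an idealized parallel geometry and hypothetical numbers (the paper's actual calibration model is not reproduced here):

```python
def triangulate_depth(disparity_px, baseline_m, focal_px):
    """Depth by triangulation between a camera and a coded-light projector.

    Idealized parallel-geometry model: depth z = f * b / d, where d is the
    pixel disparity between where a color-coded stripe is projected and
    where the camera observes it, b the camera-projector baseline, and f
    the focal length in pixels (all values illustrative).
    """
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 0.3 m baseline, 2000 px focal length
z = triangulate_depth(disparity_px=1200.0, baseline_m=0.3, focal_px=2000.0)
print(z)  # 0.5 -> surface point about 0.5 m from the rig
```

    The color coding serves to identify *which* projected stripe a camera pixel sees, which is what makes the disparity measurement unambiguous.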

  7. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently as films of higher sensitivity have become available. The two principal advantages of radiochromic dosimetry are greater (radiological) tissue equivalence and the absence of any requirement to develop the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and its potential as a more comprehensive solution for the verification of complex radiation therapy treatments and for 3D dose measurement in general.

  8. Lidar Analyses

    NASA Technical Reports Server (NTRS)

    Spiers, Gary D.

    1995-01-01

    A brief description of enhancements made to the NASA MSFC coherent lidar model is provided. Notable improvements are the addition of routines to automatically determine the 3 dB misalignment loss angle and the backscatter value at which the probability of a good estimate (for a maximum likelihood estimator) falls to 50%. The ability to automatically generate energy/aperture parametrization (EAP) plots which include the effects of angular misalignment has been added. These EAP plots make it very easy to see that for any practical system with some degree of misalignment, there is an optimum telescope diameter for which the laser pulse energy required to achieve a particular sensitivity is minimized; increasing the telescope diameter beyond this reduces sensitivity. These parametrizations also clearly show that the alignment tolerances at shorter wavelengths are much stricter than those at longer wavelengths. A brief outline of the NASA MSFC AEOLUS program is given and a summary of the lidar designs considered during the program is presented. A discussion of some of the design trades is provided both in the text and in a conference publication attached as an appendix.

  9. Bootstrapping 3D fermions

    NASA Astrophysics Data System (ADS)

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  10. Wind Field Measurements With Airborne Doppler Lidar

    NASA Technical Reports Server (NTRS)

    Menzies, Robert T.

    1999-01-01

    In collaboration with lidar atmospheric remote sensing groups at NASA Marshall Space Flight Center and National Oceanic and Atmospheric Administration (NOAA) Environmental Technology Laboratory, we have developed and flown the Multi-center Airborne Coherent Atmospheric Wind Sensor (MACAWS) lidar on the NASA DC-8 research aircraft. The scientific motivations for this effort are: to obtain measurements of subgrid scale (i.e. 2-200 km) processes and features which may be used to improve parameterizations in global/regional-scale models; to improve understanding and predictive capabilities on the mesoscale; and to assess the performance of Earth-orbiting Doppler lidar for global tropospheric wind measurements. MACAWS is a scanning Doppler lidar using a pulsed transmitter and coherent detection; the use of the scanner allows 3-D wind fields to be produced from the data. The instrument can also be radiometrically calibrated and used to study aerosol, cloud, and surface scattering characteristics at the lidar wavelength in the thermal infrared. MACAWS was used to study surface winds off the California coast near Point Arena, with an example depicted in the figure below. The northerly flow here is due to the Pacific subtropical high. The coastal topography interacts with the northerly flow in the marine inversion layer, and when the flow passes a cape or point that juts into the winds, structures called "hydraulic expansion fans" are observed. These are marked by strong variation along the vertical and cross-shore directions. The plots below show three horizontal slices at different heights above sea level (ASL). Bottom plots are enlargements of the area marked by dotted boxes above. The terrain contours are in 200-m increments, with the white spots being above 600-m elevation. Additional information is contained in the original.

  11. How We 3D-Print Aerogel

    SciTech Connect

    2015-04-23

    A new type of graphene aerogel will make for better energy storage, sensors, nanoelectronics, catalysis and separations. Lawrence Livermore National Laboratory researchers have made graphene aerogel microlattices with an engineered architecture via a 3D printing technique known as direct ink writing. The research appears in the April 22 edition of the journal Nature Communications. The 3D printed graphene aerogels have high surface area, excellent electrical conductivity, are lightweight, have mechanical stiffness and exhibit supercompressibility (up to 90 percent compressive strain). In addition, the 3D printed graphene aerogel microlattices show an order-of-magnitude improvement over bulk graphene materials and much better mass transport.

  12. A 3D analysis algorithm to improve interpretation of heat pulse sensor results for the determination of small-scale flow directions and velocities in the hyporheic zone

    NASA Astrophysics Data System (ADS)

    Angermann, Lisa; Lewandowski, Jörg; Fleckenstein, Jan H.; Nützmann, Gunnar

    2012-12-01

    The hyporheic zone is strongly influenced by the adjacent surface water and groundwater systems. It is subject to hydraulic head and pressure fluctuations at different space and time scales, causing dynamic and heterogeneous flow patterns. These patterns are crucial for many biogeochemical processes in the shallow sediment and need to be considered in investigations of this hydraulically dynamic and biogeochemically active interface. For this purpose a device employing heat as an artificial tracer and a data analysis routine were developed. The method aims at measuring hyporheic flow direction and velocity in three dimensions at a scale of a few centimeters. A short heat pulse is injected into the sediment by a point source and its propagation is detected by up to 24 temperature sensors arranged cylindrically around the heater. The resulting breakthrough curves are analyzed using an analytical solution of the heat transport equation. The device was tested in two laboratory flow-through tanks with defined flow velocities and directions. Using different flow situations and sensor arrays the sensitivity of the method was evaluated. After operational reliability was demonstrated in the laboratory, its applicability in the field was tested in the hyporheic zone of a low-gradient stream with a sandy streambed in NE Germany. Median and maximum flow velocities in the hyporheic zone at the site were determined as 0.9 × 10⁻⁴ and 2.1 × 10⁻⁴ m s⁻¹ respectively. Horizontal flow components were found to be spatially very heterogeneous, while vertical flow components appear to be predominantly driven by the streambed morphology.
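
    The breakthrough-curve analysis rests on an analytical point-source solution of the advection-conduction equation. A sketch under simplifying assumptions (uniform flow along x, a lumped amplitude constant, and illustrative values for velocity and effective thermal diffusivity, none taken from the paper):

```python
import numpy as np

def point_source_temp(x, y, z, t, v, D, A=1.0):
    """Temperature rise from an instantaneous point heat pulse in uniform
    flow v along x with effective thermal diffusivity D. A lumps injected
    energy and heat capacity (illustrative units)."""
    r2 = (x - v * t) ** 2 + y ** 2 + z ** 2
    return A * t ** -1.5 * np.exp(-r2 / (4 * D * t))

# Sensor 3 cm downstream of the heater; hypothetical v = 1e-4 m/s, D = 1e-7 m^2/s
t = np.linspace(1.0, 2000.0, 20000)
curve = point_source_temp(0.03, 0.0, 0.0, t, v=1e-4, D=1e-7)
t_peak = t[np.argmax(curve)]
v_est = 0.03 / t_peak   # crude advective estimate from peak-arrival time
print(f"{v_est:.1e} m/s")  # slightly above the true 1.0e-04, since diffusion advances the peak
```

    Fitting the full curve shape (rather than just the peak time) is what lets the actual method separate the advective velocity from thermal dispersion, and comparing amplitudes across the cylindrical sensor array gives the flow direction.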

  13. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    PubMed

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  14. Reconstruction of 3D tree stem models from low-cost terrestrial laser scanner data

    NASA Astrophysics Data System (ADS)

    Kelbe, Dave; Romanczyk, Paul; van Aardt, Jan; Cawse-Nicholson, Kerry

    2013-05-01

    With the development of increasingly advanced airborne sensing systems, there is a growing need to support sensor system design, modeling, and product-algorithm development with explicit 3D structural ground truth commensurate to the scale of acquisition. Terrestrial laser scanning is one such technique which could provide this structural information. Commercial instrumentation to suit this purpose has existed for some time now, but cost can be a prohibitive barrier for some applications. As such we recently developed a unique laser scanning system from readily-available components, supporting low cost, highly portable, and rapid measurement of below-canopy 3D forest structure. Tools were developed to automatically reconstruct tree stem models as an initial step towards virtual forest scene generation. The objective of this paper is to assess the potential of this hardware/algorithm suite to reconstruct 3D stem information for a single scan of a New England hardwood forest site. Detailed tree stem structure (e.g., taper, sweep, and lean) is recovered for trees of varying diameter, species, and range from the sensor. Absolute stem diameter retrieval accuracy is 12.5%, with a 4.5% overestimation bias likely due to the LiDAR beam divergence.
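
    Stem-diameter retrieval of the kind evaluated above is commonly bootstrapped by fitting a circle to a horizontal slice of stem points. A minimal sketch using the algebraic (Kasa) least-squares fit on synthetic data; this is a generic technique, not necessarily the authors' algorithm:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit: returns center and radius.
    Solves the linear system 2*cx*x + 2*cy*y + c = x^2 + y^2."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return (cx, cy), r

# Synthetic stem slice: 0.15 m radius, with only the side facing the
# scanner visible (a single-scan occlusion effect noted in TLS work)
theta = np.linspace(-1.0, 1.0, 50)
pts = np.column_stack([0.15 * np.cos(theta) + 2.0,
                       0.15 * np.sin(theta) + 3.0])
center, radius = fit_circle(pts)
print(round(2 * radius, 3))  # 0.3 -> recovered diameter in meters
```

    Repeating the fit at several heights yields taper, and the drift of the fitted centers gives sweep and lean. Beam divergence inflates the apparent arc, which is one plausible source of the overestimation bias reported above.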

  15. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  16. Real-time full-motion color Flash lidar for target detection and identification

    NASA Astrophysics Data System (ADS)

    Nelson, Roy; Coppock, Eric; Craig, Rex; Craner, Jeremy; Nicks, Dennis; von Niederhausern, Kurt

    2015-05-01

    Greatly improved understanding of areas and objects of interest can be gained when real-time, full-motion Flash LiDAR is fused with inertial navigation data and multi-spectral context imagery. On its own, full-motion Flash LiDAR provides the opportunity to exploit the z dimension for improved intelligence vs. 2-D full-motion video (FMV). The intelligence value of this data is enhanced when it is combined with inertial navigation data to produce an extended, georegistered data set suitable for a variety of analyses. Further, when fused with multispectral context imagery the typical point cloud becomes a rich 3-D scene which is intuitively obvious to the user and allows rapid cognitive analysis with little or no training. Ball Aerospace has developed and demonstrated a real-time, full-motion LiDAR system that fuses context imagery (VIS to MWIR demonstrated) and inertial navigation data in real time, and can stream these information-rich geolocated/fused 3-D scenes from an airborne platform. In addition, since the higher-resolution context camera is boresighted and frame-synchronized to the LiDAR camera and the LiDAR camera is an array sensor, techniques have been developed to rapidly interpolate the LiDAR pixel values, creating a point cloud that has the same resolution as the context camera, effectively creating a high definition (HD) LiDAR image. This paper presents a design overview of the Ball TotalSight™ LiDAR system along with typical results over urban and rural areas collected from both rotary and fixed-wing aircraft. We conclude with a discussion of future work.

  17. 3D Elevation Program: summary for Vermont

    USGS Publications Warehouse

    Carswell, William J., Jr.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  18. 3D Elevation Program: summary for Nebraska

    USGS Publications Warehouse

    Carswell, William J., Jr.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  19. 3D microscope

    NASA Astrophysics Data System (ADS)

    Iizuka, Keigo

    2008-02-01

    In order to circumvent the fact that only one observer can view the image from a stereoscopic microscope, an attachment was devised for displaying the 3D microscopic image on a large LCD monitor for viewing by multiple observers in real time. The principle of operation, design, fabrication, and performance are presented, along with tolerance measurements relating to the properties of the cellophane half-wave plate used in the design.

  20. On-site sensor recalibration of a spinning multi-beam LiDAR system using automatically-detected planar targets.

    PubMed

    Chen, Chia-Yen; Chien, Hsiang-Jen

    2012-01-01

    This paper presents a fully-automated method to establish a calibration dataset from on-site scans and recalibrate the intrinsic parameters of a spinning multi-beam 3-D scanner. The proposed method has been tested on a Velodyne HDL-64E S2 LiDAR system, which contains 64 rotating laser rangefinders. By time series analysis, we found that the collected range data have random measurement errors of around ±25 mm. In addition, the layered misalignment of scans among the rangefinders, which is identified as a systematic error, also increases the difficulty of accurately locating planar surfaces. We propose a temporal-spatial range data fusion algorithm, along with a robust RANSAC-based plane detection algorithm, to address these issues. Furthermore, we formulate an alternative geometric interpretation of sensory data using linear parameters, which is advantageous for the calibration procedure. The linear representation allows the proposed method to be generalized to any LiDAR system that follows the rotating beam model. We also confirmed in this paper that, given effective calibration datasets, the pre-calibrated factory parameters can be further tuned to achieve significantly improved performance. After the optimization, the systematic error is noticeably lowered, and evaluation shows that the recalibrated parameters outperform the factory parameters, with the RMS planar errors reduced by up to 49%. PMID:23202019
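
    RANSAC-based plane detection of the kind used here can be sketched in a few lines. This is a generic minimal implementation with illustrative thresholds, not the paper's tuned algorithm:

```python
import numpy as np

def ransac_plane(pts, n_iter=200, tol=0.02, seed=0):
    """Minimal RANSAC plane detector. Returns (unit normal n, offset d,
    inlier count), with n . p + d ~ 0 for inlier points p.
    n_iter and tol are illustrative, not tuned values."""
    rng = np.random.default_rng(seed)
    best = (None, None, 0)
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(pts @ n + d) < tol
        if inliers.sum() > best[2]:
            best = (n, d, int(inliers.sum()))
    return best

# Noisy horizontal plane z = 1 plus scattered outliers
rng = np.random.default_rng(1)
plane = np.column_stack([rng.uniform(0, 5, 300), rng.uniform(0, 5, 300),
                         1 + rng.normal(0, 0.005, 300)])
outliers = rng.uniform(0, 5, (60, 3))
n, d, count = ransac_plane(np.vstack([plane, outliers]))
print(count >= 280, abs(abs(n[2]) - 1) < 0.01)  # True True
```

    In the calibration setting, each detected plane contributes constraints tying the per-laser intrinsic parameters to a common surface, which is what makes the subsequent optimization well-posed.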

  1. On-Site Sensor Recalibration of a Spinning Multi-Beam LiDAR System Using Automatically-Detected Planar Targets

    PubMed Central

    Chen, Chia-Yen; Chien, Hsiang-Jen

    2012-01-01

    This paper presents a fully-automated method to establish a calibration dataset from on-site scans and recalibrate the intrinsic parameters of a spinning multi-beam 3-D scanner. The proposed method has been tested on a Velodyne HDL-64E S2 LiDAR system, which contains 64 rotating laser rangefinders. By time series analysis, we found that the collected range data have random measurement errors of around ±25 mm. In addition, the layered misalignment of scans among the rangefinders, which is identified as a systematic error, also increases the difficulty of accurately locating planar surfaces. We propose a temporal-spatial range data fusion algorithm, along with a robust RANSAC-based plane detection algorithm, to address these issues. Furthermore, we formulate an alternative geometric interpretation of sensory data using linear parameters, which is advantageous for the calibration procedure. The linear representation allows the proposed method to be generalized to any LiDAR system that follows the rotating beam model. We also confirmed in this paper that, given effective calibration datasets, the pre-calibrated factory parameters can be further tuned to achieve significantly improved performance. After the optimization, the systematic error is noticeably lowered, and evaluation shows that the recalibrated parameters outperform the factory parameters, with the RMS planar errors reduced by up to 49%. PMID:23202019

  2. The Use of a Lidar Forward-Looking Turbulence Sensor for Mixed-Compression Inlet Unstart Avoidance and Gross Weight Reduction on a High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Soreide, David; Bogue, Rodney K.; Ehernberger, L. J.; Seidel, Jonathan

    1997-01-01

    Inlet unstart causes a disturbance akin to severe turbulence for a supersonic commercial airplane. Consequently, the current goal for the frequency of unstarts is a few times per fleet lifetime. For a mixed-compression inlet, there is a tradeoff between propulsion system efficiency and unstart margin. As the unstart margin decreases, propulsion system efficiency increases, but so does the unstart rate. This paper intends, first, to quantify that tradeoff for the High Speed Civil Transport (HSCT) and, second, to examine the benefits of using a sensor to detect turbulence ahead of the airplane. When the presence of turbulence is known with sufficient lead time to allow the propulsion system to adjust the unstart margin, inlet unstarts can be minimized while overall efficiency is maximized. The NASA Airborne Coherent Lidar for Advanced In-Flight Measurements program is developing a lidar system to serve as a prototype of the forward-looking sensor. This paper reports on the progress of this development program and its application to the prevention of inlet unstart in a mixed-compression supersonic inlet. Quantified benefits include significantly reduced takeoff gross weight (TOGW), which could increase payload, reduce direct operating costs, or increase range for the HSCT.

  3. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real time. The data are acquired in motion and thus provide multiple points of view. When processed, these multiple points of view are adaptively combined, taking into account the reliability of each individual measurement, which can vary with factors such as the angle of incidence, the distance between the device and the subject, environmental sensor data, or other factors influencing the confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.

  4. A 3D Cloud-Construction Algorithm for the EarthCARE Satellite Mission

    NASA Technical Reports Server (NTRS)

    Barker, H. W.; Jerg, M. P.; Wehr, T.; Kato, S.; Donovan, D. P.; Hogan, R. J.

    2011-01-01

    This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data.

  5. Multiviewer 3D monitor

    NASA Astrophysics Data System (ADS)

    Kostrzewski, Andrew A.; Aye, Tin M.; Kim, Dai Hyun; Esterkin, Vladimir; Savant, Gajendra D.

    1998-09-01

    Physical Optics Corporation has developed an advanced 3-D virtual reality system for use with simulation tools for training technical and military personnel. This system avoids such drawbacks of other virtual reality (VR) systems as eye fatigue, headaches, and alignment for each viewer, all of which are due to the need to wear special VR goggles. The new system is based on direct viewing of an interactive environment. This innovative holographic multiplexed screen technology makes it unnecessary for the viewer to wear special goggles.

  6. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  7. 3-D MAPPING TECHNOLOGIES FOR HIGH LEVEL WASTE TANKS

    SciTech Connect

    Marzolf, A.; Folsom, M.

    2010-08-31

    This research investigated four techniques that could be applicable for mapping of solids remaining in radioactive waste tanks at the Savannah River Site: stereo vision, LIDAR, flash LIDAR, and Structure from Motion (SfM). Stereo vision is the least appropriate technique for the solids mapping application. Although the equipment cost is low and repackaging would be fairly simple, the algorithms to create a 3D image from stereo vision would require significant further development and may not even be applicable since stereo vision works by finding disparity in feature point locations from the images taken by the cameras. When minimal variation in visual texture exists for an area of interest, it becomes difficult for the software to detect correspondences for that object. SfM appears to be appropriate for solids mapping in waste tanks. However, equipment development would be required for positioning and movement of the camera in the tank space to enable capturing a sequence of images of the scene. Since SfM requires the identification of distinctive features and associates those features to their corresponding instantiations in the other image frames, mockup testing would be required to determine the applicability of SfM technology for mapping of waste in tanks. There may be too few features to track between image frame sequences to employ the SfM technology since uniform appearance may exist when viewing the remaining solids in the interior of the waste tanks. Although scanning LIDAR appears to be an adequate solution, the expense of the equipment ($80,000-$120,000) and the need for further development to allow tank deployment may prohibit utilizing this technology. The development would include repackaging of equipment to permit deployment through the 4-inch access ports and to keep the equipment relatively uncontaminated to allow use in additional tanks. 3D flash LIDAR has a number of advantages over stereo vision, scanning LIDAR, and SfM, including full frame

  8. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  9. 3D Technology for intelligent trackers

    SciTech Connect

    Lipton, Ronald; /Fermilab

    2010-09-01

    At Super-LHC luminosity it is expected that the standard suite of level 1 triggers for CMS will saturate. Information from the tracker will be needed to reduce trigger rates to satisfy the level 1 bandwidth. Tracking trigger modules which correlate information from closely-spaced sensor layers to form an on-detector momentum filter are being developed by several groups. We report on a trigger module design which utilizes three dimensional integrated circuit technology incorporating chips which are connected both to the top and bottom sensor, providing the ability to filter information locally. A demonstration chip, the VICTR, has been submitted to the Chartered/Tezzaron two-tier 3D run coordinated by Fermilab. We report on the 3D design concept, the status of the VICTR chip and associated sensor integration utilizing oxide bonding.
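
    The on-detector momentum filter described above works because a track's bend between two closely spaced layers shrinks with transverse momentum. A back-of-the-envelope sketch using the small-angle relation R = pT / (0.3 B) and an assumed CMS-like 3.8 T field (the numbers are illustrative, not the module's design values):

```python
def stub_offset_mm(pt_gev, r_m, dr_m, b_tesla=3.8):
    """Approximate transverse offset between hits in two layers separated
    by dr at radius r, for a track of transverse momentum pt (GeV).
    Small-angle formula; bend radius R = pt / (0.3 * B) in meters."""
    R = pt_gev / (0.3 * b_tesla)
    return 1000 * dr_m * r_m / (2 * R)   # offset in mm

# Hypothetical module at r = 1.0 m with 1 mm sensor-layer spacing:
for pt in (1, 2, 5, 10):
    print(pt, round(stub_offset_mm(pt, 1.0, 0.001), 3))
# 1 GeV -> 0.57 mm, 10 GeV -> 0.057 mm: a fixed correlation window
# between the top and bottom sensor passes only high-pT tracks.
```

    This is exactly the kind of local correlation the vertically interconnected 3D chip can apply on-detector, discarding low-momentum hits before they consume trigger bandwidth.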

  10. Diode laser lidar wind velocity sensor using a liquid-crystal retarder for non-mechanical beam-steering.

    PubMed

    Rodrigo, Peter John; Iversen, Theis F Q; Hu, Qi; Pedersen, Christian

    2014-11-01

    We extend the functionality of a low-cost CW diode laser coherent lidar from radial wind speed (scalar) sensing to wind velocity (vector) measurements. Both speed and horizontal direction of the wind at ~80 m remote distance are derived from two successive radial speed estimates by alternately steering the lidar probe beam into two different lines-of-sight (LOS) with a 60° angular separation. Dual-LOS beam-steering is implemented optically with no moving parts by means of a controllable liquid-crystal retarder (LCR). The LCR switches the polarization between two orthogonal linear states of the lidar beam so it either transmits through or reflects off a polarization splitter. The room-temperature switching time between the two LOS is measured to be on the order of 100 μs in one switch direction but 16 ms in the opposite transition. Radial wind speed measurement (at 33 Hz rate) while the lidar beam is repeatedly steered from one LOS to the other every half a second is experimentally demonstrated, resulting in 1 Hz estimates of wind velocity magnitude and direction at better than 0.1 m/s and 1° resolution, respectively. PMID:25401817
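
    Recovering the horizontal wind vector from the two radial speeds is a small linear inversion. A sketch assuming horizontal beams, steady wind over the switching interval, and hypothetical azimuths and wind values:

```python
import numpy as np

def wind_from_two_los(vr1, vr2, az1_deg, az2_deg):
    """Horizontal wind speed and direction from two radial-speed estimates
    along lines-of-sight at the given azimuths (dual-LOS geometry sketch).
    The radial projection is vr = u*sin(az) + v*cos(az) for wind (u east,
    v north); two LOS give a 2x2 system."""
    a1, a2 = np.deg2rad([az1_deg, az2_deg])
    A = np.array([[np.sin(a1), np.cos(a1)],
                  [np.sin(a2), np.cos(a2)]])
    u, v = np.linalg.solve(A, [vr1, vr2])
    speed = np.hypot(u, v)
    # Azimuth the wind blows *toward* (swap by 180 deg for the
    # meteorological "from" convention).
    direction = np.degrees(np.arctan2(u, v)) % 360
    return speed, direction

# Synthetic check: 5 m/s wind toward azimuth 30 deg, LOS at 0 and 60 deg
vr1 = 5 * np.cos(np.deg2rad(30 - 0))
vr2 = 5 * np.cos(np.deg2rad(30 - 60))
speed, direction = wind_from_two_los(vr1, vr2, 0.0, 60.0)
print(round(speed, 2), round(direction, 1))  # 5.0 30.0
```

    The 60° LOS separation keeps the system well-conditioned; as the separation shrinks, the two rows of the matrix become nearly parallel and small radial-speed errors blow up in the recovered vector.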

  11. Multi-sensor Calibration and Validation of the NASA-ALVICE and UWO-PCL NDACC Water Vapour Lidars

    NASA Astrophysics Data System (ADS)

    Wing, R.; Sica, R. J.; Argall, S.; Whiteman, D.; Walker, M.; Rodrigues, P.; McCullough, E. M.; Cadriola, M.

    2012-12-01

    The Purple Crow Lidar (PCL) has recently participated in a water vapour validation campaign with the NASA/GSFC Atmospheric Laboratory for Validation Inter-agency Collaboration and Education (ALVICE) Lidar. The purpose of this calibration exercise is to ensure that water vapour measurements submitted to the Network for the Detection of Atmospheric Composition Change (NDACC) database are of sufficient quality for use in detecting long-term changes in water vapour mixing ratio, particularly in the upper troposphere and lower stratosphere (UTLS). The field campaign took place at the University of Western Ontario Environmental Research Field Station, near London, Ontario, Canada, from May 23 to June 10, 2012, and resulted in 57 hours of measurements taken over 12 clear nights. On each night a minimum of one RS92 radiosonde was launched. In addition, 3 cryogenic frost-point hygrometer (CFH) sondes were launched on clear nights over the course of the campaign. Measurements were obtained from near the surface up to ~20 km by both lidar systems, the radiosondes, and the CFH balloons. These measurements will be used to calibrate profiles of water vapour mixing ratio by the newly relocated PCL. Initial comparisons between the sondes and lidars will be presented, as well as derived corrections for the retrieval of water vapour mixing ratio in both the troposphere and lower stratosphere.

  12. Velodyne HDL-64E lidar for unmanned surface vehicle obstacle detection

    NASA Astrophysics Data System (ADS)

    Halterman, Ryan; Bruch, Michael

    2010-04-01

    The Velodyne HDL-64E is a 64-laser 3D (360° × 26.8°) scanning LIDAR. It was designed to fill the perception needs of DARPA Urban Challenge vehicles, and as such was principally intended for ground use. This paper presents the performance of the HDL-64E as it relates to the marine environment for unmanned surface vehicle (USV) obstacle detection and avoidance. We describe the sensor's capacity for discerning relevant objects at sea, both through subjective observations of the raw data and through a rudimentary automated obstacle detection algorithm. We also discuss some of the complications that have arisen with the sensor.

  13. 3D polarimetric purity

    NASA Astrophysics Data System (ADS)

    Gil, José J.; San José, Ignacio

    2010-11-01

    From our previous definition of the indices of polarimetric purity for 3D light beams [J.J. Gil, J.M. Correas, P.A. Melero and C. Ferreira, Monogr. Semin. Mat. G. de Galdeano 31, 161 (2004)], an analysis of their geometric and physical interpretation is presented. It is found that, in agreement with previous results, the first parameter is a measure of the degree of polarization, whereas the second parameter (called the degree of directionality) is a measure of the mean angular aperture of the direction of propagation of the corresponding light beam. This pair of invariant, non-dimensional, indices of polarimetric purity contains complete information about the polarimetric purity of a light beam. The overall degree of polarimetric purity is obtained as a weighted quadratic average of the degree of polarization and the degree of directionality.

  14. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r,{theta},z) inside the magnet bore. The same conductor geometry which is used to simulate line currents is also used in CAD, with modifications more readily available. It is our hope that the format used here for magnetic fields can serve not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.
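
    For intuition, a 2D cross-section of a harmonic field representation can be evaluated with the standard accelerator multipole series; the paper's representation is fully three dimensional inside the windings, so this is only an illustrative stand-in with made-up coefficients:

```python
def field_from_harmonics(x, y, normals, skews, r_ref=1.0):
    """Evaluate transverse field components (Bx, By) from normal/skew
    multipole coefficients using the complex-series convention
        By + i*Bx = sum_n (B_n + i*A_n) * ((x + iy)/r_ref)**(n-1).
    n = 1 is the dipole term, n = 2 the quadrupole, and so on."""
    z = complex(x, y) / r_ref
    total = 0j
    for n, (bn, an) in enumerate(zip(normals, skews), start=1):
        total += complex(bn, an) * z ** (n - 1)
    return total.imag, total.real  # (Bx, By)
```

A pure normal quadrupole (`normals=[0, 1]`) yields the familiar linear field Bx = y/r_ref, By = x/r_ref.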

  15. 'Bonneville' in 3-D!

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Mars Exploration Rover Spirit took this 3-D navigation camera mosaic of the crater called 'Bonneville' after driving approximately 13 meters (42.7 feet) to get a better vantage point. Spirit's current position is close enough to the edge to see the interior of the crater, but high enough and far enough back to get a view of all of the walls. Because scientists and rover controllers are so pleased with this location, they will stay here for at least two more martian days, or sols, to take high resolution panoramic camera images of 'Bonneville' in its entirety. Just above the far crater rim, on the left side, is the rover's heatshield, which is visible as a tiny reflective speck.

  16. In situ correlative measurements for the ultraviolet differential absorption lidar and the high spectral resolution lidar air quality remote sensors: 1980 PEPE/NEROS program

    NASA Technical Reports Server (NTRS)

    Gregory, G. L.; Beck, S. M.; Mathis, J. J., Jr.

    1981-01-01

    In situ correlative measurements were obtained with a NASA aircraft in support of two NASA airborne remote sensors participating in the Environmental Protection Agency's 1980 persistent elevated pollution episode (PEPE) and Northeast regional oxidant study (NEROS) field program, in order to provide data for evaluating the capability of the two remote sensors for measuring mixing layer height and ozone and aerosol concentrations in the troposphere. The in situ aircraft was instrumented to measure temperature, dewpoint temperature, ozone concentration, and light scattering coefficient. In situ measurements for ten correlative missions are given and discussed. Each data set is presented in graphical and tabular format; aircraft flight plans are included.

  17. 3-D laser radar simulation for autonomous spacecraft landing

    NASA Technical Reports Server (NTRS)

    Reiley, Michael F.; Carmer, Dwayne C.; Pont, W. F.

    1991-01-01

    A sophisticated 3D laser radar sensor simulation, developed and applied to the task of autonomous hazard detection and avoidance, is presented. This simulation includes a backward ray trace to sensor subpixels, incoherent subpixel integration, range dependent noise, sensor point spread function effects, digitization noise, and AM-CW modulation. Specific sensor parameters, spacecraft lander trajectory, and terrain type have been selected to generate simulated sensor data.

  18. Oceanic Lidar

    NASA Technical Reports Server (NTRS)

    Carder, K. L. (Editor)

    1981-01-01

    Instrument concepts which measure ocean temperature, chlorophyll, sediment and Gelbstoffe concentrations in three dimensions on a quantitative, quasi-synoptic basis were considered. Coastal zone color scanner chlorophyll imagery, laser-stimulated Raman temperature and fluorescence spectroscopy, existing airborne Lidar and laser fluorosensing instruments, and their accuracies in quantifying concentrations of chlorophyll, suspended sediments and Gelbstoffe are presented. Lidar applications to phytoplankton dynamics and photochemistry, Lidar radiative transfer and signal interpretation, and Lidar technology are discussed.

  19. Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.

    PubMed

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204
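
    The voxel quantization and lowermost-heightmap steps described above can be sketched as follows; the voxel size and height threshold are arbitrary illustrative values, not the paper's parameters:

```python
def segment_ground(points, voxel=0.5, height_tol=0.3):
    """Toy sketch of voxel-based ground segmentation: quantize a
    sparse (x, y, z) point cloud into voxel columns, keep only the
    lowest height per (x, y) column (the "lowermost heightmap"),
    then label points within height_tol of that floor as ground."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel))
        if key not in lowest or z < lowest[key]:
            lowest[key] = z
    ground, obstacles = [], []
    for p in points:
        key = (int(p[0] // voxel), int(p[1] // voxel))
        (ground if p[2] - lowest[key] <= height_tol else obstacles).append(p)
    return ground, obstacles
```

The real framework additionally compares neighboring voxel groups to follow sloped terrain; this sketch treats each column independently.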

  20. Automatic 3d Building Reconstruction from a Dense Image Matching Dataset

    NASA Astrophysics Data System (ADS)

    McClune, Andrew P.; Mills, Jon P.; Miller, Pauline E.; Holland, David A.

    2016-06-01

    Over the last 20 years the demand for three dimensional (3D) building models has resulted in a vast amount of research being conducted in attempts to automate the extraction and reconstruction of models from airborne sensors. Recent results have shown that current methods tend to favour planar fitting procedures from lidar data, which are able to successfully reconstruct simple roof structures automatically but fail to reconstruct more complex structures or roofs with small artefacts. Current methods have also not fully explored the potential of recent developments in digital photogrammetry. Large format digital aerial cameras can now capture imagery with increased overlap and a higher spatial resolution, increasing the number of pixel correspondences between images. Every pixel in each stereo pair can also now be matched using per-pixel algorithms, which has given rise to the approach known as dense image matching. This paper presents an approach to 3D building reconstruction to try to overcome some of the limitations of planar fitting procedures. Roof vertices, extracted from true-orthophotos using edge detection, are refined and converted to roof corner points. By determining the connections between extracted corner points, a roof plane can be defined as a closed cycle of points. The presented results demonstrate the potential of this method for the reconstruction of complex 3D building models at CityGML LoD2 specification.

  1. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    PubMed Central

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204

  2. Fusion of multisensor passive and active 3D imagery

    NASA Astrophysics Data System (ADS)

    Fay, David A.; Verly, Jacques G.; Braun, Michael I.; Frost, Carl E.; Racamato, Joseph P.; Waxman, Allen M.

    2001-08-01

    We have extended our previous capabilities for fusion of multiple passive imaging sensors to now include 3D imagery obtained from a prototype flash ladar. Real-time fusion of low-light visible + uncooled LWIR + 3D LADAR, and SWIR + LWIR + 3D LADAR is demonstrated. Fused visualization is achieved by opponent-color neural networks for passive image fusion, which is then textured upon segmented object surfaces derived from the 3D data. An interactive viewer, coded in Java3D, is used to examine the 3D fused scene in stereo. Interactive designation, learning, recognition and search for targets, based on fused passive + 3D signatures, is achieved using Fuzzy ARTMAP neural networks with a Java-coded GUI. A client-server web-based architecture enables remote users to interact with fused 3D imagery via a wireless palmtop computer.

  3. Point Cloud Refinement with a Target-Free Intrinsic Calibration of a Mobile Multi-Beam LIDAR System

    NASA Astrophysics Data System (ADS)

    Nouira, H.; Deschaud, J. E.; Goulette, F.

    2016-06-01

    LIDAR sensors are widely used in mobile mapping systems. Mobile mapping platforms allow fast acquisition in cities, for example, which would take much longer with static mapping systems. LIDAR sensors provide reliable and precise 3D information, which can be used in various applications: mapping of the environment, localization of objects, and detection of changes. With recent developments, multi-beam LIDAR sensors have appeared and are able to provide a large amount of data with a high level of detail. A mono-beam LIDAR sensor mounted on a mobile platform requires an extrinsic calibration so that the data acquired and registered in the sensor reference frame can be represented in the body reference frame modeling the mobile system. For a multi-beam LIDAR sensor, the calibration can be separated into two distinct parts: on one hand, an extrinsic calibration, in common with mono-beam LIDAR sensors, which gives the transformation between the sensor Cartesian reference frame and the body reference frame; on the other hand, an intrinsic calibration, which gives the relations between the beams of the multi-beam sensor. This calibration depends on a model given by the manufacturer, but the model can be non-optimal, which introduces errors and noise into the acquired point clouds. In the literature, some optimizations of the calibration parameters have been proposed, but they need a specific routine or environment, which can be constraining and time-consuming. In this article, we present an automatic method for improving the intrinsic calibration of a multi-beam LIDAR sensor, the Velodyne HDL-32E. The proposed approach does not need any calibration target and only uses information from the acquired point clouds, which makes it simple and fast to use. A corrected model for the Velodyne sensor is also proposed. An energy function which penalizes points far from local planar surfaces is used to optimize the different proposed parameters.
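
    A toy version of such a planarity energy, scoring a single hypothetical per-beam range-offset parameter against a known flat plane (the actual method fits local planar surfaces from the point cloud itself and optimizes the full intrinsic model):

```python
def plane_energy(points, normal=(0.0, 0.0, 1.0), d=0.0):
    """Sum of squared distances from points to the unit-normal plane
    n . p = d. The paper's energy penalizes distance to *local*
    planar fits; a single known plane keeps this sketch short."""
    nx, ny, nz = normal
    return sum((nx * x + ny * y + nz * z - d) ** 2 for x, y, z in points)

def best_range_offset(beam_points, offsets):
    """Grid-search one hypothetical per-beam offset parameter by
    picking the value whose corrected points minimize the plane
    energy; the range correction is simplified to a pure z shift."""
    return min(offsets,
               key=lambda dz: plane_energy([(x, y, z + dz)
                                            for x, y, z in beam_points]))
```

In the target-free setting, the same idea applies with the planes replaced by local plane fits around each point's neighborhood, and a proper optimizer in place of the grid search.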

  4. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  5. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  6. Lidar instruments proposed for Eos

    NASA Technical Reports Server (NTRS)

    Grant, William B.; Browell, Edward V.

    1990-01-01

    Lidar, an acronym for light detection and ranging, represents a class of instruments that utilize lasers to send probe beams into the atmosphere or onto the surface of the Earth and detect the backscattered return in order to measure properties of the atmosphere or surface. The associated technology has matured to the point where two lidar facilities, the Geodynamics Laser Ranging System (GLRS) and the Laser Atmospheric Wind Sensor (LAWS), were accepted for Phase 2 studies for Eos. A third lidar facility, the Laser Atmospheric Sounder and Altimeter (LASA), with the lidar experiment EAGLE (Eos Atmospheric Global Lidar Experiment), was proposed for Eos. The generic lidar system has a number of components, including controlling electronics, laser transmitters, collimating optics, a receiving telescope, spectral filters, detectors, signal chain electronics, and a data system. Lidar systems that measure atmospheric constituents or meteorological parameters record the signal versus time as the beam propagates through the atmosphere. The backscatter arises from molecular (Rayleigh) and aerosol (Mie) scattering, while attenuation arises from molecular and aerosol scattering and absorption. Lidar systems that measure distance to the Earth's surface or retroreflectors in a ranging mode record signals with high temporal resolution over a short time period. The overall characteristics and measurement objectives of the three lidar systems proposed for Eos are given.
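
    The "signal versus time" record such systems produce follows the single-scattering elastic lidar equation, P(R) = K * beta(R)/R^2 * exp(-2 * integral of alpha from 0 to R), with backscatter beta and extinction alpha; a minimal numerical sketch on a uniform range grid:

```python
import math

def lidar_return(ranges, beta, alpha, k=1.0):
    """Evaluate the single-scattering elastic lidar equation on a
    uniform range grid. `beta` and `alpha` are per-bin backscatter
    and extinction profiles; the two-way transmission is accumulated
    with a simple rectangle rule."""
    dr = ranges[1] - ranges[0]
    power, tau = [], 0.0
    for r, b, a in zip(ranges, beta, alpha):
        tau += a * dr                      # one-way optical depth
        power.append(k * b / r**2 * math.exp(-2.0 * tau))
    return power
```

With zero extinction the return falls off purely as 1/R^2, which is why raw profiles are usually range-corrected before interpretation.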

  7. Optical-microphysical properties of Saharan dust aerosols and composition relationship using a multi-wavelength Raman lidar, in situ sensors and modelling: a case study analysis

    NASA Astrophysics Data System (ADS)

    Papayannis, A.; Mamouri, R. E.; Amiridis, V.; Remoundaki, E.; Tsaknakis, G.; Kokkalis, P.; Veselovskii, I.; Kolgotin, A.; Nenes, A.; Fountoukis, C.

    2012-05-01

    A strong Saharan dust event that occurred over the city of Athens, Greece (37.9° N, 23.6° E) between 27 March and 3 April 2009 was followed by a synergy of three instruments: a 6-wavelength Raman lidar, a CIMEL sun-sky radiometer and the MODIS sensor. The BSC-DREAM model was used to forecast the dust event and to simulate the vertical profiles of the aerosol concentration. Due to mixing of dust particles with low clouds during most of the reported period, the dust event could be followed by the lidar only during the cloud-free day of 2 April 2009. The lidar data obtained were used to retrieve the vertical profile of the optical properties (extinction and backscatter coefficients) of aerosols in the troposphere. The aerosol optical depth (AOD) values derived from the CIMEL ranged from 0.33-0.91 (355 nm) to 0.18-0.60 (532 nm), while the lidar ratio (LR) values retrieved from the Raman lidar ranged within 75-100 sr (355 nm) and 45-75 sr (532 nm). Inside a selected dust layer region, between 1.8 and 3.5 km height, mean LR values were 83 ± 7 and 54 ± 7 sr, at 355 and 532 nm, respectively, while the Ångström backscatter-related (ABR_355/532) and Ångström extinction-related (AER_355/532) exponents were found to be larger than 1 (1.17 ± 0.08 and 1.11 ± 0.02, respectively), indicating mixing of dust with other particles. Additionally, a retrieval technique representing dust as a mixture of spheres and spheroids was used to derive the mean aerosol microphysical properties (mean and effective radius, number, surface and volume density, and mean refractive index) inside the selected atmospheric layers. Thus, the mean value of the retrieved refractive index was found to be 1.49( ± 0.10) + 0.007( ± 0.007)i, and that of the effective radius was 0.30 ± 0.18 μm. The final data set of the aerosol optical and microphysical properties along with the water vapor profiles obtained by Raman lidar were incorporated into the ISORROPIA II model to provide a possible aerosol composition
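
    The Ångström-related exponents quoted above compare an optical quantity (backscatter or extinction) at the two wavelengths; a one-line implementation:

```python
import math

def angstrom_exponent(val_355, val_532):
    """Angstrom-related exponent between 355 nm and 532 nm:
        a = ln(x_355 / x_532) / ln(532 / 355).
    Applied to extinction this gives AER_355/532, applied to
    backscatter ABR_355/532; values near or above 1 point to finer
    particles mixed with the coarse dust."""
    return math.log(val_355 / val_532) / math.log(532.0 / 355.0)
```

Equal values at both wavelengths give an exponent of 0, the wavelength-neutral behaviour expected of pure coarse-mode dust.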

  8. Lidar Report

    SciTech Connect

    Woolpert

    2009-04-01

    This report provides an overview of the LiDAR acquisition methodology employed by Woolpert on the 2009 USDA - Savannah River LiDAR Site Project. LiDAR system parameters and flight and equipment information are also included. The LiDAR data acquisition was executed in ten sessions from February 21 through final reflights on March 2, 2009, using two Leica ALS50-II 150 kHz multi-pulse-enabled LiDAR systems. Specific details about the ALS50-II systems are included in Section 4 of this report.

  9. A novel 3D Cu(I) coordination polymer based on Cu6Br2 and Cu2(CN)2 SBUs: in situ ligand formation and use as a naked-eye colorimetric sensor for NB and 2-NT.

    PubMed

    Song, Jiang-Feng; Li, Yang; Zhou, Rui-Sha; Hu, Tuo-Ping; Wen, Yan-Liang; Shao, Jia; Cui, Xiao-Bing

    2016-01-14

    A novel coordination polymer with the chemical formula [Cu4Br(CN)(mtz)2]n (mtz = 5-methyl tetrazole) (), has been synthesized under solvothermal conditions and characterized by elemental analysis, infrared (IR) spectroscopy, thermal gravimetric analysis, powder X-ray diffraction and single-crystal X-ray diffraction. Interestingly, the Cu(I), CN(-) and mtz(-) in compound are all generated from an in situ transformation of the original precursors: Cu(2+), acetonitrile and 1-methyl-5-mercapto-1,2,3,4-tetrazole (Hmnt). The in situ ring-to-ring conversion of Hmnt into mtz(-) was found for the first time. Structural analysis reveals that compound is a novel 3D tetrazole-based Cu(I) coordination polymer, containing both metal halide cluster Cu6Br2 and metal pseudohalide cluster Cu2(CN)2 secondary building units (SBUs), which shows an unprecedented (3,6,10)-connected topology. Notably, a pseudo-porphyrin structure with 16-membered rings constructed by four mtz(-) anions and four copper(I) ions was observed in compound . The fluorescence properties of compound were investigated in the solid state and in various solvent emulsions; the results show that compound is a highly sensitive naked-eye colorimetric sensor for NB and 2-NT (NB = nitrobenzene and 2-NT = 2-nitrotoluene). PMID:26600452

  10. Performance testing of 3D point cloud software

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-10-01

    LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available in the market, including open-source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD 3D Civil and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in point cloud loading time and CPU usage. However, it is not as strong as the commercial suites in the working-set and commit-size tests.
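
    A generic harness for the loading-time and memory tests described above might look like the following sketch. Note the paper benchmarked desktop suites with OS process counters (working set, commit size); `tracemalloc` here only tracks Python-heap allocations, so it is a rough proxy:

```python
import time
import tracemalloc

def benchmark(load_fn, *args):
    """Time a point-cloud processing call and record the peak Python
    heap allocation during it. Returns (result, seconds, peak_bytes)."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = load_fn(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak
```

Usage: `benchmark(read_point_cloud, "scan.las")` for some hypothetical reader function, repeated over files of increasing size to plot the scaling behaviour.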

  11. 3D IC for Future HEP Detectors

    SciTech Connect

    Thom, J.; Lipton, R.; Heintz, U.; Johnson, M.; Narain, M.; Badman, R.; Spiegel, L.; Triphati, M.; Deptuch, G.; Kenney, C.; Parker, S.; Ye, Z.; Siddons, D.

    2014-11-07

    Three dimensional integrated circuit technologies offer the possibility of fabricating large area arrays of sensors integrated with complex electronics with minimal dead area, which makes them ideally suited for applications at the LHC upgraded detectors and other future detectors. Here we describe ongoing R&D efforts to demonstrate functionality of components of such detectors. This also includes the study of integrated 3D electronics with active edge sensors to produce "active tiles" which can be tested and assembled into arrays of arbitrary size with high yield.

  12. 3D IC for future HEP detectors

    NASA Astrophysics Data System (ADS)

    Thom, J.; Lipton, R.; Heintz, U.; Johnson, M.; Narain, M.; Badman, R.; Spiegel, L.; Triphati, M.; Deptuch, G.; Kenney, C.; Parker, S.; Ye, Z.; Siddons, D. P.

    2014-11-01

    Three dimensional integrated circuit technologies offer the possibility of fabricating large area arrays of sensors integrated with complex electronics with minimal dead area, which makes them ideally suited for applications at the LHC upgraded detectors and other future detectors. We describe ongoing R&D efforts to demonstrate functionality of components of such detectors. This includes the study of integrated 3D electronics with active edge sensors to produce "active tiles" which can be tested and assembled into arrays of arbitrary size with high yield.

  13. A Conceptual Design For A Spaceborne 3D Imaging Lidar

    NASA Technical Reports Server (NTRS)

    Degnan, John J.; Smith, David E. (Technical Monitor)

    2002-01-01

    First-generation spaceborne altimetric approaches are not well suited to generating the few-meter horizontal resolution and decimeter-accuracy vertical (range) resolution on the global scale desired by many in the Earth and planetary science communities. The present paper discusses the major technological impediments to achieving few-meter transverse resolution globally using conventional approaches and offers a feasible conceptual design which utilizes modest-power kHz-rate lasers, array detectors, photon-counting multi-channel timing receivers, and dual-wedge optical scanners with transmitter point-ahead correction.

  14. Rapid high-fidelity visualisation of multispectral 3D mapping

    NASA Astrophysics Data System (ADS)

    Tudor, Philip M.; Christy, Mark

    2011-06-01

    Mobile LIDAR scanning typically provides captured 3D data in the form of 3D 'Point Clouds'. Combined with colour imagery these data produce coloured point clouds or, if further processed, polygon-based 3D models. The use of point clouds is simple and rapid, but visualisation can appear ghostly and diffuse. Textured 3D models provide high fidelity visualisation, but their creation is time consuming, difficult to automate and can modify key terrain details. This paper describes techniques for the visualisation of fused multispectral 3D data that approach the visual fidelity of polygon-based models with the rapid turnaround and detail of 3D point clouds. The general approaches to data capture and data fusion are identified as well as the central underlying mathematical transforms, data management and graphics processing techniques used to support rapid, interactive visualisation of very large multispectral 3D datasets. Performance data with respect to real-world 3D mapping as well as illustrations of visualisation outputs are included.

  15. STELLOPT Modeling of the 3D Diagnostic Response in ITER

    SciTech Connect

    Lazerson, Samuel A

    2013-05-07

    The ITER three dimensional diagnostic response to an n=3 resonant magnetic perturbation is modeled using the STELLOPT code. The in-vessel coils apply a resonant magnetic perturbation (RMP) field which generates a 4 cm edge displacement from axisymmetry as modeled by the VMEC 3D equilibrium code. Forward modeling of flux loop and magnetic probe response with the DIAGNO code indicates up to 20% changes in measured plasma signals. Simulated LIDAR measurements of electron temperature indicate 2 cm shifts on the low field side of the plasma. This suggests that the ITER diagnostic will be able to diagnose the 3D structure of the equilibria.

  16. The use of lidar as optical remote sensors in the assessment of air quality near oil refineries and petrochemical sites

    NASA Astrophysics Data System (ADS)

    Steffens, Juliana; Landulfo, Eduardo; Guardani, Roberto; Oller do Nascimento, Cláudio A.; Moreira, Andréia

    2008-10-01

    Petrochemical and oil refining facilities play an increasingly important role in the industrial context. The corresponding need to monitor emissions from these facilities, as well as in their neighborhoods, has grown in importance, leading to the present tendency of creating real-time data acquisition and analysis systems. The use of LIDAR-based techniques, both for air quality and for emissions monitoring purposes, is currently being developed for the area of Cubatão, São Paulo, one of the largest petrochemical and industrial sites in Brazil. In a partnership with the University of São Paulo (USP), the Brazilian oil company PETROBRAS has implemented an Environmental Research Center (CEPEMA) located in the industrial site, in which the fieldwork will be carried out. The current joint R&D project focuses on the development of a real-time acquisition system, together with automated multicomponent chemical analysis. Additionally, fugitive emissions from oil processing and storage sites will be measured, together with the main greenhouse gases (CO2, CH4) and aerosols. Our first effort is to assess the potential chemical species coming out of an oil refinery site and to verify which LIDAR technique (DIAL, Raman, or fluorescence) would be most efficient in detecting and quantifying the specific atmospheric emissions.

  17. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  18. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses, with new ones being discovered all the time.

  19. Flexible building primitives for 3D building modeling

    NASA Astrophysics Data System (ADS)

    Xiong, B.; Jancosek, M.; Oude Elberink, S.; Vosselman, G.

    2015-03-01

    3D building models, being the main part of a digital city scene, are essential to all applications related to human activities in urban environments. The development of range sensors and Multi-View Stereo (MVS) technology facilitates our ability to automatically reconstruct level of detail 2 (LoD2) models of buildings. However, because of the high complexity of building structures, no fully automatic system is currently available for producing building models. In order to simplify the problem, much research focuses only on particular, relatively simple building shapes. In this paper, we analyze the properties of topology graphs of object surfaces, and find that roof topology graphs have three basic elements: loose nodes, loose edges, and minimum cycles. These elements have interesting physical meanings: a loose node is a building with one roof face; a loose edge is a ridge line between two roof faces whose end points are not defined by a third roof face; and a minimum cycle represents a roof corner of a building. Building primitives, which introduce building shape knowledge, are defined according to these three basic elements. All buildings can then be represented by combining such building primitives. The building parts are searched for according to the predefined building primitives, reconstructed independently, and grouped into a complete building model in CSG style. The shape knowledge is inferred via the building primitives and used as constraints to improve the building models, in which all roof parameters are simultaneously adjusted. Experiments show the flexibility of building primitives on both lidar and stereo point clouds.
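    The three graph elements described above can be identified with a few lines of graph analysis. The sketch below is a hypothetical simplification, not the authors' implementation (the function name and the triangle-only treatment of minimum cycles are assumptions): nodes are roof faces, edges are ridge lines, and a "minimum cycle" is detected here only as a 3-cycle.

```python
from collections import defaultdict

def classify_topology(nodes, edges):
    """Classify roof-topology graph elements into the three primitives
    described in the abstract: loose nodes, loose edges, minimum cycles.
    Simplifying assumption: a 'minimum cycle' is found only as a triangle;
    real roof graphs may require a general minimum cycle basis."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    # a loose node is an isolated face: a building with a single roof face
    loose_nodes = [n for n in nodes if not adj[n]]
    # a loose edge is a ridge whose endpoints share no third face
    loose_edges = [(a, b) for a, b in edges if not (adj[a] & adj[b])]
    # a shared neighbour of an edge's endpoints closes a 3-cycle (roof corner)
    triangles = set()
    for a, b in edges:
        for c in adj[a] & adj[b]:
            triangles.add(tuple(sorted((a, b, c))))
    return loose_nodes, loose_edges, sorted(triangles)
```

    For example, a flat-roofed shed, a gable roof (two faces, one ridge) and a three-faced hip corner decompose into one loose node, one loose edge, and one minimum cycle respectively.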

  20. Study of Droplet Activation in Thin Clouds Using Ground-based Raman Lidar and Ancillary Remote Sensors

    NASA Astrophysics Data System (ADS)

    Rosoldi, Marco; Madonna, Fabio; Gumà Claramunt, Pilar; Pappalardo, Gelsomina

    2015-04-01

    Studies on global climate change show that the effects of aerosol-cloud interactions (ACI) on the Earth's radiation balance and climate, also known as indirect aerosol effects, are the most uncertain among all the effects involving atmospheric constituents and processes (Stocker et al., IPCC, 2013). Droplet activation is the most important and challenging process in the understanding of ACI. It represents the direct microphysical link between aerosols and clouds and is probably the largest source of uncertainty in estimating indirect aerosol effects. An accurate estimation of aerosol and cloud microphysical and optical properties near and within the cloud boundaries provides a good framework for the study of droplet activation. This can be obtained by using ground-based profiling remote sensing techniques. In this work, a methodology for the experimental investigation of droplet activation, based on ground-based multi-wavelength Raman lidar and Doppler radar techniques, is presented. The study is focused on the observation of thin liquid water clouds, which are low- or mid-level supercooled clouds characterized by a liquid water path (LWP) lower than about 100 g m-2 (Turner et al., 2007). These clouds are often optically thin, which means that ground-based Raman lidar allows the detection of the cloud top and of the cloud structure above. Broken clouds are primarily inspected to take advantage of their discontinuous structure using ground-based remote sensing. Observations are performed simultaneously with multi-wavelength Raman lidars, a cloud Doppler radar and a microwave radiometer at CIAO (CNR-IMAA Atmospheric Observatory: www.ciao.imaa.cnr.it), in Potenza, Southern Italy (40.60°N, 15.72°E, 760 m a.s.l.). A statistical study of the variability of optical properties and humidity in the transition from cloudy regions to cloud-free regions surrounding the clouds leads to the identification of threshold values for the optical properties, enabling the

  1. Cloud Property Retrieval and 3D Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Cahalan, Robert F.

    2003-01-01

    Cloud thickness and photon mean free path together determine the scale of "radiative smoothing" of cloud fluxes and radiances. This scale is observed as a change in the spatial spectrum of cloud radiances, and also as the "halo size" seen by off-beam lidar such as THOR and WAIL. Such off-beam lidar returns are now being used to retrieve cloud layer thickness and the vertical scattering extinction profile. We illustrate with recent measurements taken at the Oklahoma ARM site, comparing these to time-dependent 3D simulations. These and other measurements sensitive to 3D transfer in clouds, coupled with Monte Carlo and other 3D transfer methods, are providing a better understanding of the dependence of radiation on cloud inhomogeneity, and suggest new retrieval algorithms appropriate for inhomogeneous clouds. The international Intercomparison of 3D Radiation Codes (I3RC) program is coordinating and evaluating the variety of 3D radiative transfer methods now available, and making them more widely available. Information is on the Web at: http://i3rc.gsfc.nasa.gov/. Input consists of selected cloud fields derived from data sources such as radar, microwave and satellite, and from models involved in the GEWEX Cloud Systems Studies. Output is selected radiative quantities that characterize the large-scale properties of the fields of radiative fluxes and heating. Several example cloud fields will be used to illustrate. I3RC is currently implementing an "open source" 3D code capable of solving the baseline cases. Maintenance of this effort is one of the goals of a new 3DRT Working Group under the International Radiation Commission. It is hoped that the 3DRT WG will also include active participation by land and ocean modelers, such as the 3D vegetation modelers participating in RAMI.

  2. Point Cloud Visualization in AN Open Source 3d Globe

    NASA Astrophysics Data System (ADS)

    De La Calle, M.; Gómez-Deck, D.; Koehler, O.; Pulido, F.

    2011-09-01

    During the last years the usage of 3D applications in GIS has become more popular. Since the appearance of Google Earth, users are familiar with 3D environments. Moreover, computers with 3D acceleration are now common, broadband access is widespread, and the public information that can be used in Internet-enabled GIS clients is constantly increasing. There are currently several libraries suitable for this kind of application. Based on these facts, and using libraries that are already developed and connected to our own developments, we are working on the implementation of a real 3D GIS with analysis capabilities. Since such a 3D GIS can be very interesting for tasks like rendering and analysis of LiDAR or laser scanner point clouds, special attention is given to optimal handling of very large data sets. Glob3 will be a multidimensional GIS in which 3D point clouds can be explored and analysed, even if they consist of several million points. The latest addition to our visualization libraries is the development of a point cloud server that works regardless of the cloud's size. The server receives and processes requests from a 3D client (for example glob3, but it could be any other, such as one based on WebGL) and delivers the data in the form of pre-processed tiles, depending on the required level of detail.
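    The tile-and-level-of-detail delivery scheme described above can be sketched as a quadtree addressing problem. The functions below are illustrative assumptions, not the actual glob3 server API: one maps a point to its tile index at a given level of detail, the other picks the level a client might request based on viewing distance.

```python
import math

def tile_key(x, y, level, extent=1000.0):
    """Quadtree tile index (level, col, row) of a point at a given level of
    detail; the tile grid doubles in resolution at each level."""
    size = extent / (2 ** level)
    return (level, int(x // size), int(y // size))

def level_for_distance(d, near=10.0, max_level=8):
    """Choose the LOD a client should request: drop one level each time the
    viewing distance doubles beyond the 'near' distance."""
    if d <= near:
        return max_level
    return max(0, max_level - int(math.log2(d / near)))
```

    A server organized this way can pre-process each tile once and serve the same tile to any client, whether a desktop globe or a WebGL viewer.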

  3. SAR and LIDAR fusion: experiments and applications

    NASA Astrophysics Data System (ADS)

    Edwards, Matthew C.; Zaugg, Evan C.; Bradley, Joshua P.; Bowden, Ryan D.

    2013-05-01

    In recent years ARTEMIS, Inc. has developed a series of compact, versatile Synthetic Aperture Radar (SAR) systems which have been operated on a variety of small manned and unmanned aircraft. The multi-frequency-band SlimSAR has demonstrated a variety of capabilities including maritime and littoral target detection, ground moving target indication, polarimetry, interferometry, change detection, and foliage penetration. ARTEMIS also continues to build upon the radar's capabilities through fusion with other sensors, such as electro-optical and infrared camera gimbals and light detection and ranging (LIDAR) devices. In this paper we focus on experiments and applications employing SAR and LIDAR fusion. LIDAR is similar to radar in that it transmits a signal which, after being reflected or scattered by a target area, is recorded by the sensor. The differences are that a LIDAR uses a laser as a transmitter and optical sensors as a receiver, and the wavelengths used exhibit a very different scattering phenomenology than the microwaves used in radar, making SAR and LIDAR good complementary technologies. LIDAR is used in many applications including agriculture, archeology, geoscience, and surveying. Typical data products include digital elevation maps of a target area and features and shapes extracted from the data. The set of experiments conducted to demonstrate the fusion of SAR and LIDAR data includes a LIDAR DEM used to accurately process the SAR data of a high-relief area (mountainous, urban). Feature extraction is also used to improve the geolocation accuracy of the SAR and LIDAR data.

  4. Modular 3-D Transport model

    EPA Science Inventory

    MT3D was first developed by Chunmiao Zheng in 1990 at S.S. Papadopulos & Associates, Inc. with partial support from the U.S. Environmental Protection Agency (USEPA). Starting in 1990, MT3D was released as a public domain code from the USEPA. Commercial versions with enhanced capab...

  5. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  6. LLNL-Earth3D

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  7. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging beyond real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the volumes estimated from 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for the 3D imaging of processed data. Great differences were found among the volumes of the liver findings estimated by the three different techniques applied. 3D ultrasound represents a valuable method to judge morphological appearance in abdominal findings. The possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to find out whether definite differentiation between benign and malignant findings is possible. PMID:7919882
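    A volumetric program of the kind mentioned in the abstract can be approximated by voxel counting over a segmented 3D data cube. The sketch below is hypothetical; the study's actual volumetry method is not described, and the threshold segmentation and function name here are assumptions:

```python
import numpy as np

def lesion_volume(volume, threshold, voxel_mm3):
    """Estimate the volume of a finding in a 3D ultrasound data cube by
    counting voxels above an intensity threshold and multiplying by the
    known physical volume of one voxel (in mm^3)."""
    return float((volume > threshold).sum() * voxel_mm3)
```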

  8. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  9. 3D World Building System

    ScienceCinema

    None

    2014-02-26

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  10. Continuous measurements of PM at ground level over an industrial area of Evia (Greece) using synergy of a scanning Lidar system and in situ sensors during TAMEX campaign

    NASA Astrophysics Data System (ADS)

    Georgoussis, G.; Papayannis, A.; Remoudaki, E.; Tsaknakis, G.; Mamouri, R.; Avdikos, G.; Chontidiadis, C.; Kokkalis, P.; Tzezos, M.; Veenstra, M.

    2009-09-01

    During the TAMEX (Tamyneon Air pollution Mini EXperiment) field campaign, which took place in the industrial site of Aliveri (38°24'N, 24°01'E), Evia (Greece) between June 25 and September 25, 2008, continuous measurements of airborne particulate matter (PM) were performed by in situ sensors at ground level. Additional aerosol measurements were performed by a single-wavelength (355 nm) eye-safe scanning lidar, operating in Range-Height Indicator (RHI) mode between July 22 and 23, 2008. The industrial site of the city of Aliveri is located south-east of the city area at a distance of about 2.5 km. The in situ aerosol sampling site was located at the Lykeio area, 62 m above sea level (ASL), at a distance of 2.8 km from the Public Power Corporation complex area (DEI Corporation) and 3.3 km from a large cement industrial complex owned by the Hercules/Lafarge SA Group of Companies (HLGC) and located at the Milaki area. According to the European Environment Agency (EEA) report for the year 2004, the first industry emits about 302 tons per year of PM10, 967,000 tons of CO2, 16,700 tons of SOx and 1,410 tons of NOx, while the second industrial complex (HLGC) emits about 179 tons per year of PM10, 1,890 tons of CO, 1,430,000 tons of CO2, 3,510 tons of NOx, 15.4 kg of cadmium and its compounds, 64.2 kg of mercury and its compounds, and 2.2 tons of benzene. The measuring site was equipped with a full meteorological station (Davis Inc., USA) and three aerosol samplers: two DustTrak optical sensors from TSI Inc. (USA) and one Skypost PM sequential atmospheric particulate matter sampler. The DustTrak sensors monitored the PM10, PM2.5 and PM1.0 concentration levels, with time resolution ranging from 1 to 3 minutes, while a Tecora sensor performed continuous PM monitoring by sampling on a 47 mm diameter filter membrane. The analysis of the PM sensor data showed that large quantities of PM2.5 particles (e.g. exceeding 50 µg/m3) were systematically detected during nighttime. During daytime

  11. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU-funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community, and its use in the armoury of the observational astronomer is viewed as highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition, the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift.
Several talks were devoted to reporting recent observations with newly

  12. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  13. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  14. Finite sampling corrected 3D noise with confidence intervals.

    PubMed

    Haefner, David P; Burks, Stephen D

    2015-05-20

    When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density for noise in imaging systems known as 3D noise. The goal was to decompose the 3D noise process into spatial and temporal components to identify potential sources of origin. To characterize a sensor in terms of its 3D noise values, a finite number of samples is taken in each of the three dimensions (two spatial, one temporal). In this correspondence, we developed the full sampling-corrected 3D noise measurement and the corresponding confidence bounds. The accuracy of these methods was demonstrated through Monte Carlo simulations. Both the sampling correction and the confidence intervals can be applied a posteriori to the classic 3D noise calculation. The Matlab functions associated with this work can be found on the MathWorks File Exchange ["Finite sampling corrected 3D noise with confidence intervals," https://www.mathworks.com/matlabcentral/fileexchange/49657-finite-sampling-corrected-3d-noise-with-confidence-intervals]. PMID:26192530
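    The classic (uncorrected) decomposition into spatial and temporal components can be sketched with directional averaging over a (T, V, H) data cube. This is an illustrative partial decomposition only; the finite-sampling correction and confidence bounds that are the paper's contribution are not reproduced here:

```python
import numpy as np

def three_d_noise(cube):
    """Partial 3D-noise decomposition of a (T, V, H) cube via directional
    averaging. Returns (sigma_t, sigma_vh, sigma_tvh): frame-to-frame
    temporal noise, fixed spatial pattern noise, and the residual
    spatio-temporal random component."""
    c = cube - cube.mean()                 # remove the global mean level
    spatial_pattern = c.mean(axis=0)       # time average -> fixed pattern (V, H)
    frame_means = c.mean(axis=(1, 2))      # spatial average per frame -> (T,)
    sigma_vh = spatial_pattern.std()       # fixed-pattern (spatial) noise
    sigma_t = frame_means.std()            # frame-to-frame (temporal) noise
    residual = c - spatial_pattern[None, :, :] - frame_means[:, None, None]
    sigma_tvh = residual.std()             # random spatio-temporal noise
    return sigma_t, sigma_vh, sigma_tvh
```

    With finite sample counts these plug-in standard deviations are biased, which is precisely what the paper's sampling correction and confidence intervals address.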

  15. Three-Dimensional Air Quality System (3D-AQS)

    NASA Astrophysics Data System (ADS)

    Engel-Cox, J.; Hoff, R.; Weber, S.; Zhang, H.; Prados, A.

    2007-12-01

    The 3-Dimensional Air Quality System (3D-AQS) integrates remote sensing observations from a variety of platforms into air quality decision support systems at the U.S. Environmental Protection Agency (EPA), with a focus on particulate air pollution. The decision support systems are the Air Quality System (AQS)/AirQuest database at EPA, the Infusing satellite Data into Environmental Applications (IDEA) system, the U.S. Air Quality weblog (Smog Blog) at UMBC, and the Regional East Atmospheric Lidar Mesonet (REALM). The project includes an end-user advisory group with representatives from the air quality community providing ongoing feedback. The 3D-AQS data sets are UMBC ground-based LIDAR, and NASA and NOAA satellite data from MODIS, OMI, AIRS, CALIPSO, MISR, and GASP. Based on end-user input, we are co-locating these measurements to the EPA's ground-based air pollution monitors as well as re-gridding to the Community Multiscale Air Quality (CMAQ) model grid. These data provide forecasters and the scientific community with a tool for assessment, analysis, and forecasting of U.S. air quality. The third dimension and the ability to analyze the vertical transport of particulate pollution are provided by aerosol extinction profiles from the UMBC LIDAR and CALIPSO. We present examples of a 3D visualization tool we are developing to facilitate use of these data. We also present two specific applications of 3D-AQS data. The first is comparisons between PM2.5 monitor data and remote sensing aerosol optical depth (AOD) data, which show moderate agreement but variation with EPA region. The second is a case study for Baltimore, Maryland, as an example of 3D analysis for a metropolitan area. In that case, some improvement is found in the PM2.5/LIDAR correlations when using vertical aerosol information to calculate an AOD below the boundary layer.
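    The below-boundary-layer AOD used in the Baltimore case study is, in essence, the integral of the lidar extinction profile up to the boundary-layer top. A minimal sketch, assuming an extinction profile in units of km^-1 on a height grid in km (the function name and grid layout are assumptions, not the project's actual code):

```python
import numpy as np

def aod_below(z, extinction, z_pbl):
    """Trapezoid-integrate a lidar extinction profile (km^-1) over the
    height grid z (km) up to the boundary-layer top z_pbl, giving the
    dimensionless below-PBL aerosol optical depth."""
    mask = z <= z_pbl
    zi, ei = z[mask], extinction[mask]
    return float(0.5 * ((ei[1:] + ei[:-1]) * np.diff(zi)).sum())
```

    Restricting the column integral to the boundary layer is what couples the optical measurement more tightly to surface PM2.5 monitors, which sample only near-ground air.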

  16. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  17. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  18. Rapid topographic and bathymetric reconnaissance using airborne LiDAR

    NASA Astrophysics Data System (ADS)

    Axelsson, Andreas

    2010-10-01

    Today airborne LiDAR (Light Detection And Ranging) systems have gained acceptance as a powerful tool to rapidly collect invaluable information to assess the impact of either natural disasters, such as hurricanes, earthquakes and flooding, or human-inflicted disasters such as terrorist/enemy activities. Whereas satellite-based imagery provides an excellent tool to remotely detect changes in the environment, LiDAR systems, being active remote sensors, provide an unsurpassed method to quantify these changes. The strength of active laser-based systems is especially evident in areas covered by occluding vegetation or in the shallow coastal zone, as the laser can penetrate the vegetation or water body to unveil what is below. The purpose of this paper is to address the task of surveying complex areas with the help of state-of-the-art airborne LiDAR systems and also to discuss scenarios where the method is used today and where it may be used tomorrow. Whether it is a post-hurricane survey or a preparation stage for a landing operation in uncharted waters, it is today possible to collect, process and present a dense 3D model of the area of interest within just a few hours from deployment. By utilizing the advancement in processing power and wireless network capabilities, real-time presentation would be feasible.

  19. Bioprinting of 3D hydrogels.

    PubMed

    Stanton, M M; Samitier, J; Sánchez, S

    2015-08-01

    Three-dimensional (3D) bioprinting has recently emerged as an extension of 3D material printing, using biocompatible or cellular components to build structures in an additive, layer-by-layer methodology for encapsulation and culture of cells. These 3D systems allow for cell culture in suspension for the formation of highly organized tissue or controlled spatial orientation of cell environments. In vitro 3D cellular environments simulate the complexity of an in vivo environment and natural extracellular matrices (ECM). This paper focuses on bioprinting utilizing hydrogels as 3D scaffolds. Hydrogels are advantageous for cell culture as they are highly permeable to cell culture media, nutrients, and waste products generated during metabolic cell processes. They can be fabricated in customized shapes with various material properties, with dimensions at the micron scale. 3D hydrogels are a reliable method for biocompatible 3D printing and have applications in tissue engineering, drug screening, and organ-on-a-chip models. PMID:26066320

  20. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, the pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drops. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
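    The estimation step can be illustrated with a least-squares fit of vertical disparity against keypoint position. The model below is a hypothetical linear simplification, not the paper's estimator: the intercept absorbs a vertical offset (pitch), the x-coefficient loosely reflects a roll difference, and the y-coefficient a scale difference.

```python
import numpy as np

def vertical_disparity_model(left_pts, right_pts):
    """Fit dy = a + b*x + c*y to the vertical disparity of matched
    keypoints, where (x, y) are left-image coordinates and
    dy = y_right - y_left. Returns the coefficients (a, b, c)."""
    L = np.asarray(left_pts, dtype=float)
    R = np.asarray(right_pts, dtype=float)
    dy = R[:, 1] - L[:, 1]
    A = np.column_stack([np.ones(len(L)), L[:, 0], L[:, 1]])
    (a, b, c), *_ = np.linalg.lstsq(A, dy, rcond=None)
    return a, b, c
```

    In a full pipeline, frames whose matches fit such a model poorly (erroneous matches) or whose keypoints span too small an area would be discarded before the estimate is trusted.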

  1. Arena3D: visualization of biological networks in 3D

    PubMed Central

    Pavlopoulos, Georgios A; O'Donoghue, Seán I; Satagopam, Venkata P; Soldatos, Theodoros G; Pafilis, Evangelos; Schneider, Reinhard

    2008-01-01

    Background Complexity is a key problem when visualizing biological networks; as the number of entities increases, most graphical views become incomprehensible. Our goal is to enable many thousands of entities to be visualized meaningfully and with high performance. Results We present a new visualization tool, Arena3D, which introduces a new concept of staggered layers in 3D space. Related data – such as proteins, chemicals, or pathways – can be grouped onto separate layers and arranged via layout algorithms, such as Fruchterman-Reingold, distance geometry, and a novel hierarchical layout. Data on a layer can be clustered via k-means, affinity propagation, Markov clustering, neighbor joining, tree clustering, or UPGMA ('unweighted pair-group method with arithmetic mean'). A simple input format defines the name and URL for each node, and defines connections or similarity scores between pairs of nodes. The use of Arena3D is illustrated with datasets related to Huntington's disease. Conclusion Arena3D is a user-friendly visualization tool that is able to visualize biological or any other networks in 3D space. It is free for academic use and runs on any platform. It can be downloaded or launched directly from . The Java3D library and Java 1.5 need to be pre-installed for the software to run. PMID:19040715

  2. A framework for mapping tree species combining hyperspectral and LiDAR data: Role of selected classifiers and sensor across three spatial scales

    NASA Astrophysics Data System (ADS)

    Ghosh, Aniruddha; Fassnacht, Fabian Ewald; Joshi, P. K.; Koch, Barbara

    2014-02-01

    Knowledge of tree species distribution is important worldwide for sustainable forest management and resource evaluation. The accuracy and information content of species maps produced using remote sensing images vary with scale, sensor (optical, microwave, LiDAR), classification algorithm, verification design and natural conditions like tree age, forest structure and density. Imaging spectroscopy reduces the inaccuracies by making use of the detailed spectral response. However, the scale effect still has a strong influence and cannot be neglected. This study aims to bridge the knowledge gap in understanding the scale effect in imaging spectroscopy when moving from 4 to 30 m pixel size for tree species mapping, keeping in mind that most current and future hyperspectral satellite-based sensors work with spatial resolutions around 30 m or more. Two airborne (HyMAP) and one spaceborne (Hyperion) imaging spectroscopy datasets with pixel sizes of 4, 8 and 30 m, respectively, were available to examine the effect of scale over a central European forest. The forest under examination is a typical managed forest with relatively homogeneous stands featuring mostly two canopy layers. A normalized digital surface model (nDSM) derived from LiDAR data was used additionally to examine the effect of height information in tree species mapping. Six different sets of predictor variables (reflectance values of all bands, selected components of a Minimum Noise Fraction (MNF), Vegetation Indices (VI) and each of these sets combined with LiDAR-derived height) were explored at each scale. Supervised kernel-based (Support Vector Machines) and ensemble-based (Random Forest) machine learning algorithms were applied to the dataset to investigate the effect of the classifier. Iterative bootstrap-validation with 100 iterations was performed for classification model building and testing in all the trials. For scale, analysis of overall classification accuracy and kappa values indicated that 8 m spatial
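
    The iterative bootstrap validation with overall accuracy and kappa can be sketched as follows. A nearest-centroid classifier stands in for the SVM/Random Forest models, and the synthetic "spectra" are assumptions for illustration only:

```python
import numpy as np

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa from a confusion matrix."""
    classes = np.unique(np.r_[y_true, y_pred])
    k = len(classes)
    cm = np.zeros((k, k))
    for t, p in zip(y_true, y_pred):
        cm[np.searchsorted(classes, t), np.searchsorted(classes, p)] += 1
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(0) @ cm.sum(1)) / n**2         # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical spectra: 3 "species", 10 bands, Gaussian class signatures.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, (60, 10)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 60)

oa, kappa = [], []
for _ in range(100):                            # iterative bootstrap validation
    idx = rng.choice(len(y), len(y), replace=True)
    oob = np.setdiff1d(np.arange(len(y)), idx)  # out-of-bag test samples
    cents = np.vstack([X[idx][y[idx] == c].mean(0) for c in (0, 1, 2)])
    pred = np.linalg.norm(X[oob][:, None] - cents[None], axis=2).argmin(1)
    oa.append(np.mean(pred == y[oob]))
    kappa.append(cohen_kappa(y[oob], pred))
print(f"OA {np.mean(oa):.2f} +/- {np.std(oa):.2f}, kappa {np.mean(kappa):.2f}")
```

Each iteration trains on a bootstrap sample and tests on the out-of-bag samples, so the 100 iterations yield a distribution of accuracy and kappa rather than a single point estimate.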

  3. FDF in US3D

    NASA Astrophysics Data System (ADS)

    Otis, Collin; Ferrero, Pietro; Candler, Graham; Givi, Peyman

    2013-11-01

    The scalar filtered mass density function (SFMDF) methodology is implemented into the computer code US3D. This is an unstructured Eulerian finite volume hydrodynamic solver that has proven very effective for the simulation of compressible turbulent flows. The resulting SFMDF-US3D code is employed for large eddy simulation (LES) on unstructured meshes. Simulations are conducted of subsonic and supersonic flows under non-reacting and reacting conditions. The consistency and accuracy of the simulated results are assessed, along with an appraisal of the overall performance of the methodology. The SFMDF-US3D code is now capable of simulating high-speed flows in complex configurations.

  4. Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling

    NASA Astrophysics Data System (ADS)

    Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.

    2016-04-01

    Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded to the mobile device, allowing rendering in a 3D environment. Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured.

  5. A 3D diamond detector for particle tracking

    NASA Astrophysics Data System (ADS)

    Artuso, M.; Bachmair, F.; Bäni, L.; Bartosik, M.; Beacham, J.; Bellini, V.; Belyaev, V.; Bentele, B.; Berdermann, E.; Bergonzo, P.; Bes, A.; Brom, J.-M.; Bruzzi, M.; Cerv, M.; Chau, C.; Chiodini, G.; Chren, D.; Cindro, V.; Claus, G.; Collot, J.; Costa, S.; Cumalat, J.; Dabrowski, A.; D`Alessandro, R.; de Boer, W.; Dehning, B.; Dobos, D.; Dünser, M.; Eremin, V.; Eusebi, R.; Forcolin, G.; Forneris, J.; Frais-Kölbl, H.; Gan, K. K.; Gastal, M.; Goffe, M.; Goldstein, J.; Golubev, A.; Gonella, L.; Gorišek, A.; Graber, L.; Grigoriev, E.; Grosse-Knetter, J.; Gui, B.; Guthoff, M.; Haughton, I.; Hidas, D.; Hits, D.; Hoeferkamp, M.; Hofmann, T.; Hosslet, J.; Hostachy, J.-Y.; Hügging, F.; Jansen, H.; Janssen, J.; Kagan, H.; Kanxheri, K.; Kasieczka, G.; Kass, R.; Kassel, F.; Kis, M.; Kramberger, G.; Kuleshov, S.; Lacoste, A.; Lagomarsino, S.; Lo Giudice, A.; Maazouzi, C.; Mandic, I.; Mathieu, C.; McFadden, N.; McGoldrick, G.; Menichelli, M.; Mikuž, M.; Morozzi, A.; Moss, J.; Mountain, R.; Murphy, S.; Oh, A.; Olivero, P.; Parrini, G.; Passeri, D.; Pauluzzi, M.; Pernegger, H.; Perrino, R.; Picollo, F.; Pomorski, M.; Potenza, R.; Quadt, A.; Re, A.; Riley, G.; Roe, S.; Sapinski, M.; Scaringella, M.; Schnetzer, S.; Schreiner, T.; Sciortino, S.; Scorzoni, A.; Seidel, S.; Servoli, L.; Sfyrla, A.; Shimchuk, G.; Smith, D. S.; Sopko, B.; Sopko, V.; Spagnolo, S.; Spanier, S.; Stenson, K.; Stone, R.; Sutera, C.; Taylor, A.; Traeger, M.; Tromson, D.; Trischuk, W.; Tuve, C.; Uplegger, L.; Velthuis, J.; Venturi, N.; Vittone, E.; Wagner, S.; Wallny, R.; Wang, J. C.; Weilhammer, P.; Weingarten, J.; Weiss, C.; Wengler, T.; Wermes, N.; Yamouni, M.; Zavrtanik, M.

    2016-07-01

    In the present study, results towards the development of a 3D diamond sensor are presented. Conductive channels are produced inside the sensor bulk using a femtosecond laser. This electrode geometry allows full charge collection even for low-quality diamond sensors. Results from a test beam show that charge is collected by these electrodes. In order to understand the channel growth parameters, with the goal of producing low-resistivity channels, conductive channels produced with a different laser setup are evaluated by Raman spectroscopy.

  6. Scalable lidar technique for fire detection

    NASA Astrophysics Data System (ADS)

    Utkin, Andrei B.; Piedade, Fernando; Beixiga, Vasco; Mota, Pedro; Lousã, Pedro

    2014-08-01

    Lidar (light detection and ranging) presents better sensitivity than fire surveillance based on imaging. However, the price of conventional lidar equipment is often too high as compared to passive fire detection instruments. We describe possibilities to downscale the technology. First, a conventional lidar, capable of smoke-plume detection up to ~10 km, may be replaced by an industrially manufactured solid-state laser rangefinder. This reduces the detection range to about 5 km, but decreases the purchase price by one order of magnitude. Further downscaling is possible by constructing the lidar smoke sensor on the basis of a low-cost laser diode.

  7. Integrated multi-sensor fusion for mapping and localization in outdoor environments for mobile robots

    NASA Astrophysics Data System (ADS)

    Emter, Thomas; Petereit, Janko

    2014-05-01

    An integrated multi-sensor fusion framework for localization and mapping for autonomous navigation in unstructured outdoor environments based on extended Kalman filters (EKF) is presented. The sensors for localization include an inertial measurement unit, a GPS, a fiber optic gyroscope, and wheel odometry. Additionally, a 3D LIDAR is used for simultaneous localization and mapping (SLAM). A 3D map is built while concurrently a localization in the 2D map established so far is estimated from the current scan of the LIDAR. Despite the longer run-time of the SLAM algorithm compared to the EKF update, a high update rate is still guaranteed by carefully joining and synchronizing two parallel localization estimators.
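
    The predict/update structure of such a filter can be sketched in one dimension (a constant-velocity Kalman filter with assumed noise values; the paper's full EKF fuses many more sensors and states):

```python
import numpy as np

# Minimal 1D constant-velocity Kalman filter: odometry velocity drives the
# prediction, a slower GPS position fix drives the update. All noise values
# and rates below are illustrative assumptions.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition, x = [pos, vel]
H = np.array([[1.0, 0.0]])             # GPS measures position only
Q = np.diag([0.01, 0.1])               # process noise
R = np.array([[4.0]])                  # GPS noise (2 m std)

rng = np.random.default_rng(0)
x = np.array([0.0, 1.0])               # filter state
P = np.eye(2)
true_pos = 0.0
errs = []
for k in range(500):
    true_pos += 1.0 * dt               # ground truth: 1 m/s
    # Predict; inject noisy odometry velocity (a shortcut for a control input).
    x = F @ x
    x[1] = 1.0 + rng.normal(0, 0.05)
    P = F @ P @ F.T + Q
    if k % 10 == 0:                    # GPS arrives at 1 Hz
        z = np.array([true_pos + rng.normal(0, 2.0)])
        S = H @ P @ H.T + R            # innovation covariance
        K = P @ H.T @ np.linalg.inv(S) # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
    errs.append(abs(x[0] - true_pos))
print(f"mean |position error|: {np.mean(errs):.2f} m")
```

The fused estimate tracks position far better than either the drifting odometry integral or the noisy GPS alone, which is the essential benefit of the EKF combination.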

  8. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point clouds generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.
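
    The geometry underlying EOP/IOP-based reconstruction can be sketched with a linear (DLT) two-view triangulation; the camera parameters below are assumptions for illustration, not the paper's values:

```python
import numpy as np

def projection_matrix(K, R, t):
    """P = K [R | t], mapping homogeneous world points to pixels."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point seen in two images."""
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)        # null vector of A is the 3D point
    X = Vt[-1]
    return X[:3] / X[3]

# Assumed IOPs: 1500 px focal length, principal point at (960, 540).
K = np.array([[1500, 0, 960], [0, 1500, 540], [0, 0, 1]], float)
# Assumed EOPs: first camera at the origin, second 0.5 m to its right.
P1 = projection_matrix(K, np.eye(3), np.zeros(3))
P2 = projection_matrix(K, np.eye(3), np.array([-0.5, 0.0, 0.0]))

X_true = np.array([1.0, 0.5, 4.0])     # a point 4 m in front of the cameras
uv1 = P1 @ np.r_[X_true, 1.0]; uv1 = uv1[:2] / uv1[2]
uv2 = P2 @ np.r_[X_true, 1.0]; uv2 = uv2[:2] / uv2[2]
print(triangulate(P1, P2, uv1, uv2))   # recovers ~[1.0, 0.5, 4.0]
```

With noisy EOPs the triangulated points drift, which is exactly what the linear-feature constraints in the bundle adjustment are meant to correct.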

  9. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  11. Wavefront construction in 3-D

    SciTech Connect

    Chilcoat, S.R.; Hildebrand, S.T.

    1995-12-31

    Travel time computation in inhomogeneous media is essential for pre-stack Kirchhoff imaging in areas such as the sub-salt province in the Gulf of Mexico. The 2D algorithm published by Vinje et al. has been extended to 3D to compute wavefronts in complicated inhomogeneous media. The 3D wavefront construction algorithm provides many advantages over conventional ray tracing and other methods of computing travel times in 3D. The algorithm dynamically maintains a reasonably consistent ray density without making a priori guesses at the number of rays to shoot. The determination of caustics in 3D is a straightforward geometric procedure. The wavefront algorithm also enables the computation of multi-valued travel time surfaces.

  12. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three dimensional (3D) ghost imaging measures the range of a target based on a pulse flight-time measurement method. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. In order to remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high range resolution images with a low sampling rate.
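
    The spatial-correlation step common to ghost imaging reconstructions can be sketched as follows. This is a conventional intensity-correlation example in NumPy with an assumed synthetic target; the heterodyne ranging itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, shots = 16, 20000
obj = np.zeros((n, n))
obj[4:12, 6:10] = 1.0                 # simple transmissive "target"

# Random illumination patterns and the single-pixel "bucket" signal.
patterns = rng.random((shots, n, n))
bucket = (patterns * obj).sum(axis=(1, 2))

# Second-order intensity correlation: G2 = <I*B> - <I><B>.
G = (patterns * bucket[:, None, None]).mean(0) - patterns.mean(0) * bucket.mean()
G = (G - G.min()) / (G.max() - G.min())   # normalize for display
print(f"mean inside target {G[obj > 0].mean():.2f}, outside {G[obj == 0].mean():.2f}")
```

Averaging over many shots makes pixels belonging to the object correlate with the bucket signal, so the target emerges from the correlation map even though no pixelated detector ever saw it directly.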

  13. The 3D Elevation Program: summary for Michigan

    USGS Publications Warehouse

    Carswell, William J., Jr.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation's natural and constructed features. The Michigan Statewide Authoritative Imagery and Lidar (MiSAIL) program provides statewide lidar coordination with local, State, and national groups in support of 3DEP for Michigan.

  14. Mapping Understory Trees Using Airborne Discrete-Return LIDAR Data

    NASA Astrophysics Data System (ADS)

    Korpela, I.; Hovi, A.; Morsdorf, F.

    2011-09-01

    Understory trees in multi-layer stands are often ignored in forest inventories. Information about them would benefit silviculture, wood procurement and biodiversity management. Cost-efficient inventory methods for the assessment of the presence, density, and species- and size-distributions are called for. LiDAR remote sensing is a promising addition to field work. Unlike in passive image data, in which the signals from multiple layers mix, the 3D position of each hot-spot reflection is known in LiDAR data. The overstory, however, prevents obtaining a wall-to-wall sample of the understory, and measurements are subject to transmission losses. Discriminating between the crowns of dominant and suppressed trees can also be challenging. We examined the potential of LiDAR for the mapping of understory trees in Scots pine stands (62°N, 24°E), using carefully georeferenced reference data and several LiDAR data sets. We present results that highlight differences in echo-triggering between sensors that affect the near-ground height data. A conceptual model for the transmission losses in the overstory was created and formulated into simple compensation models that reduced the intensity variation in second- and third-return data. The task is highly ill-posed in discrete-return LiDAR data, and our models employed the geometry of the overstory as well as the intensity of previous returns. We showed that even first-return data in the understory are subject to losses from overstory interactions that did not trigger an echo. Even with compensation for the losses, the intensity data were deemed of low value for species discrimination. Area-based LiDAR height metrics derived from the data belonging to the crown volume of the understory showed reasonable correlation with the density and mean height of the understory trees. Assessment of the species seems out of reach in discrete-return LiDAR data, which is a drastic drawback.
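
    The compensation idea can be illustrated with one simple assumed form (an illustration, not the paper's fitted model): a later return is rescaled by the squared two-way transmission remaining after the first return took its share of the pulse energy.

```python
import numpy as np

def compensate_second_return(i1, i2, i_ref=1.0):
    """i1, i2: first/second return intensities; i_ref: intensity a fully
    reflecting target would return. Returns loss-compensated i2 under the
    assumed model i2_comp = i2 / (1 - i1/i_ref)**2."""
    i1 = np.asarray(i1, float)
    transmission = np.clip(1.0 - i1 / i_ref, 1e-6, 1.0)  # one-way transmission
    return np.asarray(i2, float) / transmission**2       # two-way correction

# Same understory target under increasing overstory interception above it:
i1 = np.array([0.0, 0.2, 0.4, 0.6])
i2 = 0.3 * (1 - i1) ** 2              # simulated attenuated second returns
print(compensate_second_return(i1, i2))   # → 0.3 everywhere
```

Under this toy model the compensated intensity of an identical understory target becomes independent of the overstory above it, which is the goal of the paper's (more elaborate) compensation models.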

  15. Large aperture scanning airborne lidar

    NASA Technical Reports Server (NTRS)

    Smith, J.; Bindschadler, R.; Boers, R.; Bufton, J. L.; Clem, D.; Garvin, J.; Melfi, S. H.

    1988-01-01

    A large aperture scanning airborne lidar facility is being developed to provide important new capabilities for airborne lidar sensor systems. The proposed scanning mechanism allows for a large aperture telescope (25 in. diameter) in front of an elliptical flat (25 x 36 in.) turning mirror positioned at a 45 degree angle with respect to the telescope optical axis. The lidar scanning capability will provide opportunities for acquiring new data sets for atmospheric, earth resources, and oceans communities. This completed facility will also make available the opportunity to acquire simulated EOS lidar data on a near global basis. The design and construction of this unique scanning mechanism presents exciting technological challenges of maintaining the turning mirror optical flatness during scanning while exposed to extreme temperatures, ambient pressures, aircraft vibrations, etc.

  16. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  17. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of bi-dimensional printing, which makes it possible to obtain a solid object from a 3D model, realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer allows one to realize, in a simple way, very complex shapes, which would be quite difficult to produce with dedicated conventional facilities. Because a 3D print is built up layer by layer, no particular workflow is needed: it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A commonly used material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions, respectively), it was necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  18. Automatic Model Selection for 3D Reconstruction of Buildings from Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Partovi, T.; Arefi, H.; Krauß, T.; Reinartz, P.

    2013-09-01

    Through the improvements of satellite sensor and matching technology, the derivation of 3D models from spaceborne stereo data has obtained a lot of interest for various applications such as mobile navigation, urban planning, telecommunication, and tourism. The automatic reconstruction of 3D building models from spaceborne point cloud data is still an active research topic. The challenging problem in this field is the relatively low quality of the Digital Surface Model (DSM) generated by stereo matching of satellite data compared to airborne LiDAR data. In order to establish an efficient method to achieve high quality models and complete automation from the mentioned DSM, in this paper a new method based on a model-driven strategy is proposed. For improving the results, refined orthorectified panchromatic images are introduced into the process as additional data. The idea of this method is based on ridge line extraction and analysing height values along, and perpendicular to, the ridge line direction. After applying pre-processing to the orthorectified data, some feature descriptors are extracted from the DSM to improve the automatic ridge line detection. Applying RANSAC, a line is fitted to each group of ridge points. Finally, these ridge lines are refined by matching them or closing gaps. In order to select the type of roof model, the heights of points along the extension of the ridge line and the height differences perpendicular to the ridge line are analysed. After roof model selection, building edge information is extracted from Canny edge detection and parameters are derived from the roof parts. Then the best model is fitted to the extracted roofs based on the detected model type. Each roof is modelled independently and the final 3D buildings are reconstructed by merging the roof models with the corresponding walls.
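
    The RANSAC line-fitting step can be sketched as follows (plain NumPy, with hypothetical ridge-point data standing in for the DSM-derived candidates):

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.5, seed=0):
    """Fit a 2D line to candidate ridge points with RANSAC.
    Returns (centroid, unit direction, inlier mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), 2, replace=False)
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line.
        r = points - points[i]
        dist = np.abs(r[:, 0] * d[1] - r[:, 1] * d[0])
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine on the inliers by total least squares (PCA direction).
    p = points[best_inliers]
    c = p.mean(0)
    _, _, Vt = np.linalg.svd(p - c)
    return c, Vt[0], best_inliers

# Hypothetical ridge candidates: 80 on-line points plus 20 gross outliers.
rng = np.random.default_rng(1)
t = rng.uniform(0, 50, 80)
line_pts = np.c_[t, 0.4 * t + 2] + rng.normal(0, 0.1, (80, 2))
outliers = rng.uniform(0, 50, (20, 2))
c, d, inl = ransac_line(np.vstack([line_pts, outliers]))
print(inl.sum(), abs(d[1] / d[0]))    # ~80 inliers, slope ~0.4
```

The random two-point hypotheses make the fit robust to the gross outliers that a plain least-squares line would be dragged towards.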

  19. 3D modeling of optically challenging objects.

    PubMed

    Park, Johnny; Kak, Avinash

    2008-01-01

    We present a system for constructing 3D models of real-world objects with optically challenging surfaces. The system utilizes a new range imaging concept called multi-peak range imaging, which stores multiple candidates of range measurements for each point on the object surface. The multiple measurements include the erroneous range data caused by various surface properties that are not ideal for structured-light range sensing. False measurements generated by spurious reflections are eliminated by applying a series of constraint tests. The constraint tests based on local surface and local sensor visibility are applied first to individual range images. The constraint tests based on global consistency of coordinates and visibility are then applied to all range images acquired from different viewpoints. We show the effectiveness of our method by constructing 3D models of five different optically challenging objects. To evaluate the performance of the constraint tests and to examine the effects of the parameters used in the constraint tests, we acquired the ground truth data by painting those objects to suppress the surface-related properties that cause difficulties in range sensing. Experimental results indicate that our method significantly improves upon the traditional methods for constructing reliable 3D models of optically challenging objects. PMID:18192707

  20. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, for example interactive 3D games. In this paper, we present a case study that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  1. Above-ground biomass estimation from lidar and hyperspectral airborne data in West African moist forests.

    NASA Astrophysics Data System (ADS)

    Vaglio Laurin, Gaia; Chen, Qi; Lindsell, Jeremy; Coomes, David; Cazzolla-Gatti, Roberto; Grieco, Elisa; Valentini, Riccardo

    2013-04-01

    The development of sound methods for the estimation of forest parameters such as Above Ground Biomass (AGB), and the need for data from different world regions and ecosystems, are widely recognized issues due to their relevance for both carbon cycle modeling and conservation and policy initiatives, such as the UN REDD+ program (Gibbs et al., 2007). The moist forests of the Upper Guinean Belt are poorly studied ecosystems (Vaglio Laurin et al. 2013), but their role is important due to the drier conditions expected along the West African coasts according to future climate change scenarios (Gonzales, 2001). Remote sensing has proven to be an effective tool for AGB retrieval when coupled with field data. Lidar, with its ability to penetrate the canopy, provides 3D information and the best results. Nevertheless, very limited research has been conducted with lidar in African tropical forests, and none to our knowledge in West Africa. Hyperspectral sensors also offer promising data, being able to reveal very fine radiometric differences in vegetation reflectance. Their usefulness in estimating forest parameters is still under evaluation, with contrasting findings (Andersen et al. 2008, Latifi et al. 2012), and additional studies are especially relevant in view of forthcoming satellite hyperspectral missions. In the framework of the EU ERC Africa GHG grant #247349, an airborne campaign collecting lidar and hyperspectral data was conducted in March 2012 over forest reserves in Sierra Leone and Ghana, characterized by different logging histories and rainfall patterns, and including Gola Rainforest National Park, Ankasa National Park, and the Bia and Boin Forest Reserves. An Optech Gemini sensor collected the lidar dataset, while an AISA Eagle sensor collected hyperspectral data over 244 VIS-NIR bands. The lidar dataset, with a point density >10 ppm, was processed using the TIFFS software (Toolbox for LiDAR Data Filtering and Forest Studies) (Chen 2007). The hyperspectral dataset, geo
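
    A typical lidar-plus-field-data AGB workflow of this kind can be sketched as below; the height metric, allometric form, and all numbers are assumptions for illustration, not the campaign's fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_plots = 40
mean_h = rng.uniform(10, 45, n_plots)               # plot-level canopy height (m)

# Per-plot height metric from a simulated point cloud: 90th percentile.
h_p90 = np.array([
    np.percentile(np.clip(rng.normal(h, 0.15 * h, 500), 0, None), 90)
    for h in mean_h
])

# "Field" AGB following an assumed power-law allometry with scatter.
agb_true = 0.8 * mean_h ** 1.7 * rng.lognormal(0, 0.1, n_plots)

# Fit log(AGB) = log(a) + b*log(h_p90), i.e. AGB = a * h_p90**b.
b, log_a = np.polyfit(np.log(h_p90), np.log(agb_true), 1)
pred = np.exp(log_a) * h_p90 ** b
r2 = 1 - np.sum((agb_true - pred) ** 2) / np.sum((agb_true - agb_true.mean()) ** 2)
print(f"b = {b:.2f}, R^2 = {r2:.2f}")
```

The fitted power law can then be applied wall-to-wall to the lidar height metrics to map AGB beyond the field plots.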

  2. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  3. The 3D Elevation Program: summary for Kentucky

    USGS Publications Warehouse

    Carswell, William J., Jr.

    2014-01-01

    Elevation data are essential to a broad range of applications, including forest resources management, wildlife and habitat management, national security, recreation, and many others. For the Commonwealth of Kentucky, elevation data are critical for agriculture and precision farming, natural resources conservation, flood risk management, infrastructure and construction management, forest resources management, geologic resource assessment and hazards mitigation, and other business uses. Today, high-density light detection and ranging (lidar) data are the primary sources for deriving elevation models and other datasets. Federal, State, Tribal, and local agencies work in partnership to (1) replace data that are older and of lower quality and (2) provide coverage where publicly accessible data do not exist. A joint goal of State and Federal partners is to acquire consistent, statewide coverage to support existing and emerging applications enabled by lidar data. “Kentucky from Above,” the Kentucky Aerial Photography and Elevation Data Program (http://kygeonet.ky.gov/kyfromabove/), provides statewide lidar coordination with local, Commonwealth, and national groups in support of 3DEP for the Commonwealth. The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high

  4. The 3D Elevation Program: summary for Oregon

    USGS Publications Warehouse

    Carswell, William J., Jr.

    2014-01-01

    Elevation data are essential to a broad range of business uses, including forest resources management, wildlife and habitat management, national security, recreation, and many others. In the State of Oregon, elevation data are critical for river and stream resource management; forest resources management; water supply and quality; infrastructure and construction management; wildfire management, planning and response; natural resources conservation; and other business uses. Today, high-density light detection and ranging (lidar) data are the primary source for deriving elevation models and other datasets. The Oregon Lidar Consortium (OLC), led by the Oregon Department of Geology and Mineral Industries (DOGAMI), has developed partnerships with Federal, State, Tribal, and local agencies to acquire quality level 1 data in areas of shared interest. The goal of OLC partners is to acquire consistent, high-resolution and high-quality statewide coverage to support existing and emerging applications enabled by lidar data. The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  5. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia. PMID:26854878

  6. Advances in animal ecology from 3D-LiDAR ecosystem mapping.

    PubMed

    Davies, Andrew B; Asner, Gregory P

    2014-12-01

    The advent and recent advances of Light Detection and Ranging (LiDAR) have enabled accurate measurement of 3D ecosystem structure. Here, we review insights gained through the application of LiDAR to animal ecology studies, revealing the fundamental importance of structure for animals. Structural heterogeneity is most conducive to increased animal richness and abundance, and increased complexity of vertical vegetation structure is more positively influential compared with traditionally measured canopy cover, which produces mixed results. However, different taxonomic groups interact with a variety of 3D canopy traits and some groups with 3D topography. To develop a better understanding of animal dynamics, future studies will benefit from considering 3D habitat effects in a wider variety of ecosystems and with more taxa. PMID:25457158

  7. Improving Semantic Updating Method on 3d City Models Using Hybrid Semantic-Geometric 3d Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Sharkawi, K.-H.; Abdul-Rahman, A.

    2013-09-01

    Entities in cities and urban areas, such as building structures, are becoming more complex as modern human civilization continues to evolve. The ability to plan and manage every territory, especially urban areas, is very important to every government in the world. Planning and managing cities and urban areas based on printed maps and 2D data is becoming insufficient and inefficient to cope with the complexity of new developments in big cities. The emergence of 3D city models has boosted efficiency in analysing and managing urban areas, as 3D data are proven to represent real-world objects more accurately. They have since been adopted as the new trend in building and urban management and planning applications. Nowadays, many countries around the world generate virtual 3D representations of their major cities. The growing interest in improving the usability of 3D city models has resulted in the development of various analysis tools based on them. Today, 3D city models are generated for various purposes such as tourism, location-based services, disaster management and urban planning. Meanwhile, modelling 3D objects is getting easier with the emergence of user-friendly 3D modelling tools available on the market. Generating 3D buildings with high accuracy has also become easier with the availability of airborne lidar and terrestrial laser scanning equipment. The availability and accessibility of this technology make it more sensible to analyse buildings in urban areas using 3D data, as they accurately represent real-world objects. The Open Geospatial Consortium (OGC) has accepted the CityGML specification as one of the international standards for representing and exchanging spatial data, making it easier to visualize, store and manage 3D city model data efficiently. CityGML is able to represent the semantics, geometry, topology and appearance of 3D city models in five well-defined Levels-of-Detail (LoD), namely LoD0

  8. Simulation of 3D infrared scenes using random fields model

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Zhang, Jianqi

    2001-09-01

    Analysis and simulation of smart munitions requires imagery for the munition's sensor to view. Traditional infrared background simulations have been limited to planar scene studies. A new method is described to synthesize images in a 3D view and with varied terrain texture. We develop random field models and temperature fields to simulate 3D infrared scenes. The generalized long-correlation (GLC) model, one of the random field models, generates both the 3D terrain skeleton data and the terrain texture in this work. To build the terrain mesh from the random fields, digital elevation models (DEM) are introduced in the paper, and texture mapping technology performs the task of pasting the texture onto the concavo-convex surfaces of the 3D scene. Simulation using the random fields model is an effective method for producing 3D infrared scenes with a high degree of randomness and realism.
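    A long-correlation random field like the one described above can be realized by spectral (Fourier) synthesis: filter white noise with a power-law spectrum, then use the result as a DEM-style height map. The sketch below is an illustrative stand-in, not the paper's GLC model; the exponent `beta` and grid size are assumed parameters.

    ```python
    import numpy as np

    def gaussian_random_field(n=128, beta=3.0, seed=0):
        """Synthesize an n x n terrain height field by filtering white
        noise with a 1/f^(beta/2) power-law spectrum (spectral synthesis,
        one common way to realize a long-correlation random field)."""
        rng = np.random.default_rng(seed)
        spectrum = np.fft.fft2(rng.standard_normal((n, n)))
        fx = np.fft.fftfreq(n)[:, None]
        fy = np.fft.fftfreq(n)[None, :]
        f = np.sqrt(fx**2 + fy**2)
        f[0, 0] = 1.0                 # avoid division by zero at DC
        filt = f ** (-beta / 2)
        filt[0, 0] = 0.0              # zero-mean field
        field = np.real(np.fft.ifft2(spectrum * filt))
        # normalize to [0, 1] for use as a height map
        field -= field.min()
        field /= field.max()
        return field

    dem = gaussian_random_field()
    print(dem.shape)  # (128, 128)
    ```

    Larger `beta` concentrates power at low frequencies, giving smoother, more strongly correlated terrain; the normalized field can then be meshed and draped with an infrared texture as the abstract describes.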

  9. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  10. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2003-05-12

    This project is in its first full year after the merger of two previously funded projects: ''3D Code Development'' and ''Dynamic Material Properties''. The motivation behind this move was to emphasize and strengthen the ties between the experimental work and the computational model development in the materials area. The coming year's activities will reflect the merging of the two efforts. The current activity is structured in two tasks. Task A, ''Simulations and Measurements'', combines all the material model development and associated numerical work with the materials-oriented experimental activities. Task B, ''ALE3D Development'', is a continuation of the non-materials-related activities from the previous project.

  11. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, and have inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which otherwise would hinder the detection of fast-moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40°×20° field-of-view. The whole system is very rugged and compact, a perfect solution for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source, based on five laser driver cards, with three 808 nm lasers each. We present the full characterization of
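    The iTOF principle mentioned in the abstract recovers distance from the phase shift of the modulated illumination. A minimal sketch of the standard four-bucket phase estimator follows; the idealized cosine sampling model is an assumption, and 25 MHz is simply the maximum modulation frequency quoted above.

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def itof_depth(a0, a1, a2, a3, f_mod=25e6):
        """Depth from four correlation samples taken 90 degrees apart
        in modulation phase (the classic 4-bucket iTOF estimator)."""
        phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
        return C * phase / (4 * np.pi * f_mod)

    # Ideal samples for a target at 3 m: a_k = cos(phi + k*pi/2),
    # where phi = 4*pi*f_mod*d / c is the round-trip phase shift.
    d = 3.0
    phi = 4 * np.pi * 25e6 * d / C
    samples = [np.cos(phi + k * np.pi / 2) for k in range(4)]
    print(round(itof_depth(*samples), 6))  # 3.0
    ```

    At 25 MHz the unambiguous range is c/(2·f_mod) ≈ 6 m; lower modulation frequencies extend the range at the cost of depth resolution, which is why such cameras often support several modulation settings.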

  12. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial featuresmore » of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by alighning each face, represented by a set of XYZ coordinated, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation an rate of deformation. A variety of options are