Science.gov

Sample records for 3d lidar sensor

  1. Lidar on small UAV for 3D mapping

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. Michael; Larsson, Håkan

    2014-10-01

    Small UAVs (Unmanned Aerial Vehicles) are currently in an explosive technical development phase. The performance of UAV system components such as inertial navigation sensors, propulsion, control processors and algorithms is gradually improving. Simultaneously, lidar technologies are continuously developing in terms of reliability, accuracy, and speed of data collection, storage and processing. The lidar development towards miniature systems with high data rates has, together with recent UAV development, great potential for new three-dimensional (3D) mapping capabilities. Compared to lidar mapping from manned full-size aircraft, a small unmanned aircraft can be cost-efficient over small areas and more flexible to deploy. An advantage of high-resolution lidar compared to 3D mapping from passive (multi-angle) photogrammetry is the ability to penetrate through vegetation and detect partially obscured targets. Another advantage is the ability to obtain 3D data over the whole survey area, without the limited performance of passive photogrammetry in low-contrast areas. The purpose of our work is to demonstrate 3D lidar mapping capability from a small multirotor UAV. We present the first experimental results and the mechanical and electrical integration of the Velodyne HDL-32E lidar on a six-rotor aircraft with a total weight of 7 kg. The rotating lidar is mounted at an angle of 20 degrees from the horizontal plane, giving a vertical field of view of 10-50 degrees below the horizon in the aircraft's forward direction. For absolute positioning of the 3D data, accurate positioning and orientation of the lidar sensor are of high importance. We evaluate the lidar data position accuracy based both on inertial navigation system (INS) data alone and on INS data combined with lidar data. The INS sensors consist of accelerometers, gyroscopes, GPS, magnetometers, and a pressure sensor for altimetry. The lidar range resolution and accuracy are documented as well as the
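
    To make the georeferencing step concrete, the sketch below maps a single lidar return into world coordinates using the 20-degree mount and an INS pose. This is a minimal illustration, not the authors' code: the frame conventions, tilt axis and all numeric values are assumptions.

        # Illustrative sketch: georeference one lidar return from a sensor
        # tilted 20 degrees off horizontal, given an INS position/attitude.
        import numpy as np

        def rot_x(a):
            c, s = np.cos(a), np.sin(a)
            return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

        def rot_y(a):
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

        def rot_z(a):
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

        MOUNT_TILT = np.radians(20.0)      # lidar tilted 20 deg from horizontal
        R_body_lidar = rot_y(MOUNT_TILT)   # sensor-to-body rotation (assumed axis)

        def georeference(p_lidar, ins_pos, roll, pitch, yaw):
            """Map a point from the lidar frame to world coordinates."""
            R_world_body = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
            return ins_pos + R_world_body @ (R_body_lidar @ p_lidar)

        # Example: a return 30 m ahead of the sensor, UAV hovering at 40 m.
        print(georeference(np.array([30.0, 0.0, 0.0]),
                           np.array([0.0, 0.0, 40.0]), 0.0, 0.0, 0.0))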

  2. Accuracy evaluation of 3D lidar data from small UAV

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav

    2015-10-01

    A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collecting high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental prerequisite for detection and recognition of objects in a single-flight dataset, as well as for change detection using two or more data collections over the same scene. Our work presented here has two purposes: first, to relate the point cloud accuracy to data processing parameters and, second, to examine the influence of the UAV platform parameters on accuracy. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor are based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and lightweight MEMS-based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
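
    One of the accuracy metrics above, local surface smoothness on planar surfaces, can be approximated by the residuals of a least-squares plane fit, as in the sketch below. The PCA-based fit is a common formulation and an assumption here; the paper's exact definition may differ.

        # A minimal sketch of a "local surface smoothness" metric: RMS of
        # point-to-plane residuals from a least-squares (PCA) plane fit.
        import numpy as np

        def plane_rms(points):
            """RMS distance of 3D points to their best-fit plane."""
            centered = points - points.mean(axis=0)
            # Smallest right singular vector of the centered cloud = normal.
            _, s, vt = np.linalg.svd(centered, full_matrices=False)
            normal = vt[-1]
            residuals = centered @ normal
            return np.sqrt(np.mean(residuals ** 2))

        rng = np.random.default_rng(0)
        xy = rng.uniform(-1, 1, size=(500, 2))
        z = 0.02 * rng.standard_normal(500)          # 2 cm noise on a flat patch
        print(plane_rms(np.column_stack([xy, z])))   # ~0.02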

  3. 3D flash lidar imager onboard UAV

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Liu, Yilong; Yang, Jiazhi; Zhang, Rongting; Su, Chengjie; Shi, Yujun; Zhou, Xiang

    2014-11-01

    A new generation of flash LiDAR sensor called GLidar-I is presented in this paper. GLidar-I is being developed by Guilin University of Technology in cooperation with the Guilin Institute of Optical Communications. It consists of a control and processing system, a transmitting system and a receiving system. Each component has been designed and implemented, and tests, experiments and validation have been conducted for each of them. The experimental results demonstrate that GLidar-I can effectively measure distances of about 13 m with an accuracy of about 11 cm in the laboratory.

  4. Structure-From-Motion in 3D Space Using 2D Lidars

    PubMed Central

    Choi, Dong-Geol; Bok, Yunsu; Kim, Jun-Sik; Shim, Inwook; Kweon, In So

    2017-01-01

    This paper presents a novel structure-from-motion methodology using 2D lidars (Light Detection And Ranging). In 3D space, 2D lidars do not provide sufficient information for pose estimation. For this reason, additional sensors have been used along with the lidar measurement. In this paper, we use a sensor system that consists of only 2D lidars, without any additional sensors. We propose a new method of estimating both the 6D pose of the system and the surrounding 3D structures. We compute the pose of the system using line segments of scan data and their corresponding planes. After discarding the outliers, both the pose and the 3D structures are refined via nonlinear optimization. Experiments with both synthetic and real data show the accuracy and robustness of the proposed method. PMID:28165372

  5. 3D imaging lidar for lunar robotic exploration

    NASA Astrophysics Data System (ADS)

    Hussein, Marwan W.; Tripp, Jeffrey W.

    2009-05-01

    Part of the requirements of the future Constellation program is to optimize lunar surface operations and reduce hazards to astronauts. Toward this end, many robotic platforms, rovers in particular, are being sought to carry out a multitude of missions involving potential EVA site surveys, surface reconnaissance, path planning, and obstacle detection and classification. 3D imaging lidar technology provides an enabling capability that allows fast, accurate and detailed collection of three-dimensional information about the rover's environment. The lidar images the region of interest by scanning a laser beam and measuring the pulse time-of-flight and the bearing. The accumulated set of laser ranges and bearings constitutes the three-dimensional image. As part of the ongoing NASA Ames Research Center activities in lunar robotics, the utility of 3D imaging lidar was evaluated by testing Optech's ILRIS-3D lidar on board the K-10 Red rover during the recent Human-Robotic Systems (HRS) field trials in Moses Lake, WA. This paper examines the results of the ILRIS-3D trials, presents the data obtained and discusses its application in lunar surface robotic surveying and scouting.

  6. Calibration between Color Camera and 3D LIDAR Instruments with a Polygonal Planar Board

    PubMed Central

    Park, Yoonsu; Yun, Seokmin; Won, Chee Sun; Cho, Kyungeun; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    Calibration between a color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low-resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. Since the lengths of the adjacent sides are known, we can estimate the vertices of the board as the meeting points of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results. PMID:24643005
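
    Once the board vertices are known both in lidar coordinates and in the image, the extrinsic calibration reduces to a standard 2D-3D pose problem. The sketch below solves it with OpenCV's solvePnP; the vertex coordinates and camera intrinsics are invented for illustration, and this stands in for, rather than reproduces, the authors' estimation procedure.

        # Hedged sketch of the final step: camera-lidar extrinsics from
        # 2D-3D vertex correspondences (all values are made up).
        import numpy as np
        import cv2

        object_pts = np.array([[0.0, 0.0, 0.0],   # board vertices, lidar frame (m)
                               [0.5, 0.0, 0.0],
                               [0.5, 0.5, 0.0],
                               [0.0, 0.5, 0.0]])
        image_pts = np.array([[320.0, 240.0],     # same vertices in the image (px)
                              [420.0, 238.0],
                              [424.0, 338.0],
                              [318.0, 342.0]])
        K = np.array([[800.0, 0.0, 320.0],        # assumed pinhole intrinsics
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])

        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
        R, _ = cv2.Rodrigues(rvec)                # rotation lidar -> camera
        print(ok, R, tvec, sep="\n")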

  7. Georeferenced LiDAR 3D Vine Plantation Map Generation

    PubMed Central

    Llorens, Jordi; Gil, Emilio; Llop, Jordi; Queraltó, Meritxell

    2011-01-01

    The use of electronic devices for canopy characterization has recently been widely discussed. Among such devices, LiDAR sensors appear to be the most accurate and precise. Information obtained with a LiDAR sensor while driving a tractor along a crop row can be managed and transformed into canopy density maps by evaluating the frequency of LiDAR returns. This paper describes a proposed methodology to obtain a georeferenced canopy map by combining the information obtained with LiDAR with that generated by a GPS receiver installed on top of the tractor. Data regarding the velocity of LiDAR measurements and the UTM coordinates of each measured point on the canopy were obtained by applying the proposed transformation process. The process allows the generated canopy density map to be overlaid on an image of the measured area using Google Earth®, providing accurate information about the canopy distribution and/or the location of damage along the rows. This methodology was applied and tested on different vine varieties and crop stages in two important vine production areas in Spain. The results indicate that the georeferenced information obtained with LiDAR sensors appears to be an interesting tool with the potential to improve crop management processes. PMID:22163952
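
    In the spirit of the method described above, the sketch below bins LiDAR return counts into UTM grid cells to form a canopy density raster. The grid size, names and synthetic data are assumptions, not the authors' implementation.

        # Illustrative sketch: LiDAR return counts + GPS/UTM positions
        # accumulated into a georeferenced canopy density grid.
        import numpy as np

        def canopy_density_map(easting, northing, returns, cell=0.5):
            """2D histogram of LiDAR returns in UTM cells (cell size in m)."""
            e_idx = ((easting - easting.min()) / cell).astype(int)
            n_idx = ((northing - northing.min()) / cell).astype(int)
            grid = np.zeros((n_idx.max() + 1, e_idx.max() + 1))
            for i, j, r in zip(n_idx, e_idx, returns):
                grid[i, j] += r
            return grid

        rng = np.random.default_rng(1)
        e = rng.uniform(500000, 500010, 1000)   # UTM easting along a row (m)
        n = rng.uniform(4600000, 4600002, 1000)
        hits = rng.integers(0, 5, 1000)         # lidar returns per measurement
        print(canopy_density_map(e, n, hits).shape)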

  8. Georeferenced LiDAR 3D vine plantation map generation.

    PubMed

    Llorens, Jordi; Gil, Emilio; Llop, Jordi; Queraltó, Meritxell

    2011-01-01

    The use of electronic devices for canopy characterization has recently been widely discussed. Among such devices, LiDAR sensors appear to be the most accurate and precise. Information obtained with a LiDAR sensor while driving a tractor along a crop row can be managed and transformed into canopy density maps by evaluating the frequency of LiDAR returns. This paper describes a proposed methodology to obtain a georeferenced canopy map by combining the information obtained with LiDAR with that generated by a GPS receiver installed on top of the tractor. Data regarding the velocity of LiDAR measurements and the UTM coordinates of each measured point on the canopy were obtained by applying the proposed transformation process. The process allows the generated canopy density map to be overlaid on an image of the measured area using Google Earth®, providing accurate information about the canopy distribution and/or the location of damage along the rows. This methodology was applied and tested on different vine varieties and crop stages in two important vine production areas in Spain. The results indicate that the georeferenced information obtained with LiDAR sensors appears to be an interesting tool with the potential to improve crop management processes.

  9. Fabrication of 3D Silicon Sensors

    SciTech Connect

    Kok, A.; Hansen, T.E.; Hansen, T.A.; Lietaer, N.; Summanwar, A.; Kenney, C.; Hasi, J.; Da Via, C.; Parker, S.I.; /Hawaii U.

    2012-06-06

    Silicon sensors with a three-dimensional (3-D) architecture, in which the n and p electrodes penetrate through the entire substrate, have many advantages over planar silicon sensors, including radiation hardness, fast time response, active edges and dual readout capabilities. The fabrication of 3D sensors is, however, rather complex. In recent years, there have been worldwide activities on 3D fabrication. SINTEF, in collaboration with the Stanford Nanofabrication Facility, has successfully fabricated the original (single-sided double-column type) 3D detectors in two prototype runs, and a third run is now ongoing. This paper reports the status of this fabrication work and the resulting yield. The work of other groups, such as the development of double-sided 3D detectors, is also briefly reported.

  10. Estimating the relationship between urban 3D morphology and land surface temperature using airborne LiDAR and Landsat-8 Thermal Infrared Sensor data

    NASA Astrophysics Data System (ADS)

    Lee, J. H.

    2015-12-01

    Urban forests are known for mitigating the urban heat island effect and heat-related health issues by reducing air and surface temperatures. Beyond the amount of canopy area, however, little is known about what kinds of spatial patterns and structures of urban forests best contribute to reducing temperatures and mitigating urban heat effects. Previous studies attempted to find the relationship between land surface temperature and various indicators of vegetation abundance using remotely sensed data, but the majority of those studies relied on two-dimensional, area-based metrics such as tree canopy cover, impervious surface area, and the Normalized Difference Vegetation Index. This study investigates the relationship between the three-dimensional spatial structure of urban forests and urban surface temperature, focusing on vertical variance. We use a Landsat-8 Thermal Infrared Sensor image (acquired on July 24, 2014) to estimate the land surface temperature of the City of Sacramento, CA. We extract the height and volume of urban features (both vegetation and non-vegetation) using airborne LiDAR (Light Detection and Ranging) and high-spatial-resolution aerial imagery. Using regression analysis, we apply an empirical approach to find the relationship between the land surface temperature and different sets of variables that describe the spatial patterns and structures of various urban features, including trees. Our analysis demonstrates that incorporating vertical variance parameters improves the accuracy of the model. The results of the study suggest that urban tree planting is an effective and viable solution for mitigating urban heat, by increasing the variance of the urban surface as well as through the evaporative cooling effect.
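
    As a toy illustration of the regression approach described above, the following sketch compares the fit of a cover-only model with one that adds a vertical-variance predictor. The data are synthetic and the variable names are assumptions; it mirrors only the shape of the analysis, not the study's actual data or model.

        # Synthetic example: does adding a 3D (vertical variance) predictor
        # improve a land-surface-temperature regression?
        import numpy as np

        rng = np.random.default_rng(2)
        n = 200
        canopy_cover = rng.uniform(0, 1, n)
        height_var = rng.uniform(0, 25, n)          # variance of feature heights
        lst = 45 - 8 * canopy_cover - 0.3 * height_var + rng.normal(0, 1, n)

        def r2(X, y):
            """Coefficient of determination of an ordinary least-squares fit."""
            X1 = np.column_stack([np.ones(len(y)), X])
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            resid = y - X1 @ beta
            return 1 - resid.var() / y.var()

        print("cover only     :", r2(canopy_cover[:, None], lst))
        print("cover + 3D var :", r2(np.column_stack([canopy_cover, height_var]), lst))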

  11. Evaluation of single photon and Geiger mode Lidar for the 3D Elevation Program

    USGS Publications Warehouse

    Stoker, Jason M.; Abdullah, Qassim; Nayegandhi, Amar; Winehouse, Jayna

    2016-01-01

    Data acquired by Harris Corporation’s (Melbourne, FL, USA) Geiger-mode IntelliEarth™ sensor and Sigma Space Corporation’s (Lanham-Seabrook, MD, USA) Single Photon HRQLS sensor were evaluated and compared to accepted 3D Elevation Program (3DEP) data and survey ground control to assess the suitability of these new technologies for the 3DEP. While these sensors are not currently able to collect data that meet the USGS lidar base specification, this is partly because the specification was written specifically for linear-mode systems. With little effort on the part of the manufacturers of the new lidar systems and the USGS lidar specifications team, data from these systems could soon serve the 3DEP program and its users. Many of the shortcomings noted in this study are reported to have been corrected or improved upon in the next generation of sensors.

  12. Pedestrian and car detection and classification for unmanned ground vehicle using 3D lidar and monocular camera

    NASA Astrophysics Data System (ADS)

    Cho, Kuk; Baeg, Seung-Ho; Lee, Kimin; Lee, Hae Seok; Park, SangDeok

    2011-05-01

    This paper describes an object detection and classification method for an Unmanned Ground Vehicle (UGV) using a range sensor and an image sensor: a 3D Light Detection And Ranging (LIDAR) sensor and a monocular camera, respectively. For safe driving of the UGV, pedestrians and cars should be detected along the vehicle's route. Object detection and classification techniques based on a camera alone have an inherent problem: the algorithm must extract features from, and compare them against, the full input image, which contains both objects and background, making classification decisions difficult. Ideally, the image region passed to the classifier should contain only one reliable object. In this paper, we introduce a newly developed 3D LIDAR sensor and apply a fusion method to both the 3D LIDAR data and the camera data. The 3D LIDAR sensor, named KIDAR-B25, was developed by the LG Innotek Consortium in Korea. The 3D LIDAR sensor detects objects, determines each object's Region of Interest (ROI) based on 3D information and maps it into the camera image for classification. In the 3D LIDAR domain, we recognize breakpoints using a Kalman filter and then form clusters using a line-segment method to determine an object's ROI. In the image domain, we extract the object's feature data from the ROI using a Haar-like feature method. Finally, the object is classified as a pedestrian or a car using a database trained with an AdaBoost algorithm. To verify our system, we evaluate its performance mounted on a ground vehicle through field tests in an urban area.
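
    The camera-side classification stage can be sketched as follows: a lidar-derived ROI is cropped from the image and passed to a boosted Haar classifier. OpenCV's stock full-body cascade stands in here for the authors' trained AdaBoost database; the function and ROI format are illustrative assumptions.

        # Sketch: classify a lidar-projected ROI with a boosted Haar cascade.
        import cv2

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_fullbody.xml")

        def classify_roi(image, roi):
            """roi = (x, y, w, h) projected from a 3D LIDAR cluster."""
            x, y, w, h = roi
            crop = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
            hits = cascade.detectMultiScale(crop, scaleFactor=1.05,
                                            minNeighbors=3)
            return len(hits) > 0    # pedestrian found inside the lidar ROI?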

  13. Multi-resolution optical 3D sensor

    NASA Astrophysics Data System (ADS)

    Kühmstedt, Peter; Heinze, Matthias; Schmidt, Ingo; Breitbarth, Martin; Notni, Gunther

    2007-06-01

    A new multi-resolution, self-calibrating optical 3D measurement system using the fringe projection technique, named "kolibri FLEX multi", will be presented. It can be utilised to acquire the all-around shape of small to medium objects. The basic measurement principle is the phasogrammetric approach /1,2,3/ in combination with the method of virtual landmarks for merging the 3D single views. The system consists of a minimum of two fringe projection sensors, mounted on a rotation stage and illuminating the object from different directions. The measurement fields of the sensors can be chosen differently; here, as an example, 40 mm and 180 mm in diameter. In a measurement, the object can be scanned with these two resolutions at the same time. Using the method of virtual landmarks, both point clouds are calculated within the same world coordinate system, resulting in a common 3D point cloud. The final point cloud includes an overview of the object with low point density (wide field) and a region with high point density (focussed view) at the same time. The advantage of the new method is the possibility to measure with different resolutions in the same object region without any mechanical changes in the system or data post-processing. Typical parameters of the system are a measurement time of 2 min for 12 images and a measurement accuracy from below 3 μm up to 10 μm. This flexibility makes the measurement system useful for a wide range of applications, such as quality control, rapid prototyping, design and CAD/CAM, which will be shown in the paper.

  14. Vertical Corner Feature Based Precise Vehicle Localization Using 3D LIDAR in Urban Area.

    PubMed

    Im, Jun-Hyuck; Im, Sung-Hyuck; Jee, Gyu-In

    2016-08-10

    Tall buildings are concentrated in urban areas. The outer walls of buildings are erected vertically to the ground and are almost flat. Therefore, vertical corners where vertical planes meet are present everywhere in urban areas. These corners act as convenient landmarks, which can be extracted using a light detection and ranging (LIDAR) sensor. A vertical-corner-feature-based precise vehicle localization method is proposed in this paper and implemented using a 3D LIDAR (Velodyne HDL-32E). The vehicle motion is predicted by accumulating the pose increments output by the iterative closest point (ICP) algorithm, based on the geometric relations between the scan data of the 3D LIDAR. Vertical corners are extracted using the proposed corner extraction method. The vehicle position is then corrected by matching the extracted corners against a prebuilt corner map. The experiment was carried out in the Gangnam area of Seoul, South Korea. In the experimental results, the maximum horizontal position error is about 0.46 m and the 2D Root Mean Square (RMS) horizontal error is about 0.138 m.
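
    The motion-prediction step, accumulating per-scan ICP pose increments, can be sketched in two dimensions as below (the paper works with full 3D scans; the increments here are invented):

        # Dead reckoning by chaining ICP pose increments (2D for brevity).
        import numpy as np

        def se2(dx, dy, dtheta):
            """Homogeneous 2D rigid transform for one ICP increment."""
            c, s = np.cos(dtheta), np.sin(dtheta)
            return np.array([[c, -s, dx], [s, c, dy], [0, 0, 1]])

        pose = np.eye(3)                        # start at the origin
        for inc in [(1.0, 0.0, 0.02), (1.0, 0.05, 0.02), (0.9, 0.0, -0.01)]:
            pose = pose @ se2(*inc)             # accumulate scan-to-scan motion
        print(pose[:2, 2], np.arctan2(pose[1, 0], pose[0, 0]))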

  15. Vertical Corner Feature Based Precise Vehicle Localization Using 3D LIDAR in Urban Area

    PubMed Central

    Im, Jun-Hyuck; Im, Sung-Hyuck; Jee, Gyu-In

    2016-01-01

    Tall buildings are concentrated in urban areas. The outer walls of buildings are erected vertically to the ground and are almost flat. Therefore, vertical corners where vertical planes meet are present everywhere in urban areas. These corners act as convenient landmarks, which can be extracted using a light detection and ranging (LIDAR) sensor. A vertical-corner-feature-based precise vehicle localization method is proposed in this paper and implemented using a 3D LIDAR (Velodyne HDL-32E). The vehicle motion is predicted by accumulating the pose increments output by the iterative closest point (ICP) algorithm, based on the geometric relations between the scan data of the 3D LIDAR. Vertical corners are extracted using the proposed corner extraction method. The vehicle position is then corrected by matching the extracted corners against a prebuilt corner map. The experiment was carried out in the Gangnam area of Seoul, South Korea. In the experimental results, the maximum horizontal position error is about 0.46 m and the 2D Root Mean Square (RMS) horizontal error is about 0.138 m. PMID:27517936

  16. 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach

    PubMed Central

    Vlaminck, Michiel; Luong, Hiep; Goeman, Werner; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel approach to obtain accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner, as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind, and hence, it does not make any assumption about the scene or about the sensor set-up. The main novelty of this work is that the proposed LiDAR mapping approach deals explicitly with the inhomogeneous density of point clouds produced by LiDAR scanners. To this end, we keep track of a global 3D map of the environment, which is continuously improved and refined by means of a surface reconstruction technique. Moreover, we perform surface analysis on consecutively generated point clouds in order to assure a perfect alignment with the global 3D map. In order to cope with drift, the system incorporates loop closure by determining the pose error and propagating it back in the pose graph. Our algorithm was exhaustively tested on data captured at a conference building, a university campus and an industrial site of a chemical company. Experiments demonstrate that it is capable of generating highly accurate 3D maps in very challenging environments. We can state that the average distance of corresponding point pairs between the ground truth and the estimated point cloud approximates one centimeter for an area covering approximately 4000 m². To prove the genericity of the system, it was tested on the well-known KITTI vision benchmark. The results show that our approach competes with state-of-the-art methods without making any additional assumptions. PMID:27854315

  17. 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach.

    PubMed

    Vlaminck, Michiel; Luong, Hiep; Goeman, Werner; Philips, Wilfried

    2016-11-16

    In this paper, we propose a novel approach to obtain accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner, as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind, and hence, it does not make any assumption about the scene or about the sensor set-up. The main novelty of this work is that the proposed LiDAR mapping approach deals explicitly with the inhomogeneous density of point clouds produced by LiDAR scanners. To this end, we keep track of a global 3D map of the environment, which is continuously improved and refined by means of a surface reconstruction technique. Moreover, we perform surface analysis on consecutively generated point clouds in order to assure a perfect alignment with the global 3D map. In order to cope with drift, the system incorporates loop closure by determining the pose error and propagating it back in the pose graph. Our algorithm was exhaustively tested on data captured at a conference building, a university campus and an industrial site of a chemical company. Experiments demonstrate that it is capable of generating highly accurate 3D maps in very challenging environments. We can state that the average distance of corresponding point pairs between the ground truth and the estimated point cloud approximates one centimeter for an area covering approximately 4000 m². To prove the genericity of the system, it was tested on the well-known KITTI vision benchmark. The results show that our approach competes with state-of-the-art methods without making any additional assumptions.

  18. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-09-09

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing in intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each small element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents a complete 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real time and performs robustly and effectively.
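
    A minimal sketch of the per-grid estimation step: each occupied grid cell keeps a constant-velocity Kalman filter over position and velocity. The matrices follow the standard textbook formulation; the tuning values and measurement model are assumptions, not the paper's.

        # One grid cell's constant-velocity Kalman filter over (x, y, vx, vy).
        import numpy as np

        dt = 0.1
        F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                      [0, 0, 1, 0], [0, 0, 0, 1.]])
        H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.]])
        Q = 0.01 * np.eye(4)                          # process noise (assumed)
        R = 0.05 * np.eye(2)                          # measurement noise (assumed)

        def kf_step(x, P, z):
            x, P = F @ x, F @ P @ F.T + Q             # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
            x = x + K @ (z - H @ x)                   # update with grid centroid
            P = (np.eye(4) - K @ H) @ P
            return x, P

        x, P = np.zeros(4), np.eye(4)
        for z in [np.array([0.1, 0.0]), np.array([0.2, 0.01]),
                  np.array([0.3, 0.0])]:
            x, P = kf_step(x, P, z)
        print(x)   # estimated position and velocity of one moving grid cell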

  19. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR

    PubMed Central

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing in intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each small element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents a complete 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real time and performs robustly and effectively. PMID:25207868

  20. Characterization of 3-D imaging lidar for hazard avoidance and autonomous landing on the Moon

    NASA Astrophysics Data System (ADS)

    Pierrottet, Diego F.; Amzajerdian, Farzin; Meadows, Byron L.; Estes, Robert; Noe, Anna M.

    2007-04-01

    Future robotic and crewed lunar missions will require safe and precise soft landings at scientifically interesting sites near hazardous terrain features, such as craters and rocks, or near pre-deployed assets. Presently, NASA is studying the ability of various 3-dimensional imaging sensors, particularly lidar/ladar techniques, to meet its lunar landing needs. For this reason, a Sensor Test Range facility has been developed at NASA Langley Research Center for calibration and characterization of potential 3-D imaging sensors. This paper describes the Sensor Test Range facility and its application in characterizing a 3-D imaging ladar. The results of the ladar measurements are reported and compared with simulated image frames generated by a ladar model that was also developed as part of this effort. In addition to allowing for characterization and evaluation of different ladar systems, the ladar measurements at the Sensor Test Range will support further advancement of ladar systems and development of more efficient and accurate image reconstruction algorithms.

  1. 3D lidar imaging for detecting and understanding plant responses and canopy structure.

    PubMed

    Omasa, Kenji; Hosoi, Fumiki; Konishi, Atsumi

    2007-01-01

    Understanding and diagnosing plant responses to stress will benefit greatly from three-dimensional (3D) measurement and analysis of plant properties because plant responses are strongly related to their 3D structures. Light detection and ranging (lidar) has recently emerged as a powerful tool for direct 3D measurement of plant structure. Here the use of 3D lidar imaging to estimate plant properties such as canopy height, canopy structure, carbon stock, and species is demonstrated, and plant growth and shape responses are assessed by reviewing the development of lidar systems and their applications from the leaf level to canopy remote sensing. In addition, the recent creation of accurate 3D lidar images combined with natural colour, chlorophyll fluorescence, photochemical reflectance index, and leaf temperature images is demonstrated, thereby providing information on responses of pigments, photosynthesis, transpiration, stomatal opening, and shape to environmental stresses; these data can be integrated with 3D images of the plants using computer graphics techniques. Future lidar applications that provide more accurate dynamic estimation of various plant properties should improve our understanding of plant responses to stress and of interactions between plants and their environment. Moreover, combining 3D lidar with other passive and active imaging techniques will potentially improve the accuracy of airborne and satellite remote sensing, and make it possible to analyse 3D information on ecophysiological responses and levels of various substances in agricultural and ecological applications and in observations of the global biosphere.

  2. Innovative LIDAR 3D Dynamic Measurement System to estimate fruit-tree leaf area.

    PubMed

    Sanz-Cortiella, Ricardo; Llorens-Calveras, Jordi; Escolà, Alexandre; Arnó-Satorra, Jaume; Ribes-Dasi, Manel; Masip-Vilalta, Joan; Camp, Ferran; Gràcia-Aguilá, Felip; Solanelles-Batlle, Francesc; Planas-DeMartí, Santiago; Pallejà-Cabré, Tomàs; Palacin-Roca, Jordi; Gregorio-Lopez, Eduard; Del-Moral-Martínez, Ignacio; Rosell-Polo, Joan R

    2011-01-01

    In this work, a LIDAR-based 3D Dynamic Measurement System is presented and evaluated for the geometric characterization of tree crops. Using this measurement system, trees were scanned from two opposing sides to obtain two three-dimensional point clouds. After registration of the point clouds, a simple and easily obtainable parameter is the number of impacts received by the scanned vegetation. This study is based on the hypothesis that a linear relationship exists between the number of impacts of the LIDAR sensor's laser beam on the vegetation and the tree leaf area. Tests performed under laboratory conditions using an ornamental tree and, subsequently, in a pear tree orchard demonstrate the correct operation of the measurement system presented in this paper. The results from both the laboratory and field tests confirm the initial hypothesis, and the 3D Dynamic Measurement System is validated in field operation. This opens the door to new lines of research centred on the geometric characterization of tree crops in the field of agriculture and, more specifically, in precision fruit growing.

  3. Advances in animal ecology from 3D ecosystem mapping with LiDAR

    NASA Astrophysics Data System (ADS)

    Davies, A.; Asner, G. P.

    2015-12-01

    The advent and recent advances of Light Detection and Ranging (LiDAR) have enabled accurate measurement of 3D ecosystem structure. Although the use of LiDAR data is widespread in vegetation science, it has only recently (< 14 years) been applied to animal ecology. Despite such recent application, LiDAR has enabled new insights in the field and revealed the fundamental importance of 3D ecosystem structure for animals. We reviewed the studies to date that have used LiDAR in animal ecology, synthesising the insights gained. Structural heterogeneity is most conducive to increased animal richness and abundance, and increased complexity of vertical vegetation structure is more positively influential than traditionally measured canopy cover, which produces mixed results. However, different taxonomic groups interact with a variety of 3D canopy traits and some groups with 3D topography. LiDAR technology can be applied to animal ecology studies in a wide variety of environments to answer an impressive array of questions. Drawing on case studies from vastly different groups, termites and lions, we further demonstrate the applicability of LiDAR and highlight new understanding, ranging from habitat preference to predator-prey interactions, that would not have been possible from studies restricted to field based methods. We conclude with discussion of how future studies will benefit by using LiDAR to consider 3D habitat effects in a wider variety of ecosystems and with more taxa to develop a better understanding of animal dynamics.

  4. High definition 3D imaging lidar system using CCD

    NASA Astrophysics Data System (ADS)

    Jo, Sungeun; Kong, Hong Jin; Bang, Hyochoong

    2016-10-01

    In this study we propose and demonstrate a novel technique for measuring distance with high-definition three-dimensional imaging. To meet the stringent requirements of various missions, spatial resolution and range precision are important properties for flash LIDAR systems. The proposed LIDAR system employs a polarization modulator and a CCD. When a laser pulse is emitted from the laser, it triggers the polarization modulator. The laser pulse is scattered by the target and is reflected back to the LIDAR system while the polarization modulator is rotating, so its polarization state is a function of time. The laser-return pulse passes through the polarization modulator in a certain polarization state, and that polarization state is calculated from the intensities of the laser pulses measured by the CCD. Because the mapping between time and polarization state is already known, the polarization state can be converted to time-of-flight. By adopting a polarization modulator and a CCD, and measuring only the energy of a laser pulse to obtain range, a high-resolution three-dimensional image can be acquired by the proposed three-dimensional imaging LIDAR system. Since this system only measures the energy of the laser pulse, a high-bandwidth detector and a high-resolution TDC are not required for high range precision. The proposed method is expected to be an alternative for many three-dimensional imaging LIDAR applications that require high resolution.
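
    A back-of-the-envelope version of the range recovery: the modulator maps elapsed time to polarization angle at a known rate, the CCD intensities give the angle back, and range follows from the two-way time of flight. The linear mapping, the simplified detection model and all numbers below are illustrative assumptions.

        # Toy range recovery from two polarization-channel intensities.
        import numpy as np

        C = 3.0e8                      # speed of light (m/s)
        RATE = np.pi / 200e-9          # assumed modulation: pi rad per 200 ns

        def range_from_intensities(i_parallel, i_perpendicular):
            # Malus-law-style recovery of the polarization angle from the
            # two CCD-measured intensities (assumed, simplified model).
            theta = np.arctan2(np.sqrt(i_perpendicular), np.sqrt(i_parallel))
            tof = theta / RATE                  # invert the known angle(time) map
            return C * tof / 2.0                # two-way travel -> one-way range

        print(range_from_intensities(0.5, 0.5))   # 45 deg -> 50 ns -> 7.5 m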

  5. 3D Multi-Spectrum Sensor System with Face Recognition

    PubMed Central

    Kim, Joongrock; Yu, Sunjin; Kim, Ig-Jae; Lee, Sangyoun

    2013-01-01

    This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system, which we will refer to as a 3D multi-spectrum sensor system, which comprises three types of sensors, visible, thermal-IR and time-of-flight (ToF), is proposed. Since the proposed system integrates information from each sensor into one calibrated framework, the optimal sensor combination for an application can be easily selected, taking into account all combinations of sensors information. To demonstrate the effectiveness of the proposed system, a face recognition system with light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new effectively fused features for a face recognition system, is obtained. PMID:24072025

  6. Flexible Piezoresistive Sensors Embedded in 3D Printed Tires

    PubMed Central

    Emon, Md Omar Faruk; Choi, Jae-Won

    2017-01-01

    In this article, we report the development of a flexible, 3D-printable piezoresistive pressure sensor capable of measuring force and detecting the location of the force. The multilayer sensor comprises an ionic-liquid-based piezoresistive intermediate layer between carbon nanotube (CNT)-based stretchable electrodes. A sensor containing an array of different sensing units was embedded on the inner liner surface of a 3D printed tire to provide force information at different points of contact between the tire and the road. Four scaled tires, as well as wheels, were 3D printed using a flexible and a rigid material, respectively, and were later assembled with a 3D-printed chassis. Only one tire was equipped with a sensor, and the chassis was driven on a motorized linear stage at different speeds and load conditions to evaluate the sensor performance. The sensor was fabricated via molding and screen printing processes using a commercially available 3D-printable photopolymer, as 3D printing is our target manufacturing technique for fabricating the entire tire assembly with the sensor. Results show that the proposed sensors, inserted in the 3D printed tire assembly, could properly detect forces as well as their locations. PMID:28327533

  7. Flexible Piezoresistive Sensors Embedded in 3D Printed Tires.

    PubMed

    Emon, Md Omar Faruk; Choi, Jae-Won

    2017-03-22

    In this article, we report the development of a flexible, 3D-printable piezoresistive pressure sensor capable of measuring force and detecting the location of the force. The multilayer sensor comprises an ionic-liquid-based piezoresistive intermediate layer between carbon nanotube (CNT)-based stretchable electrodes. A sensor containing an array of different sensing units was embedded on the inner liner surface of a 3D printed tire to provide force information at different points of contact between the tire and the road. Four scaled tires, as well as wheels, were 3D printed using a flexible and a rigid material, respectively, and were later assembled with a 3D-printed chassis. Only one tire was equipped with a sensor, and the chassis was driven on a motorized linear stage at different speeds and load conditions to evaluate the sensor performance. The sensor was fabricated via molding and screen printing processes using a commercially available 3D-printable photopolymer, as 3D printing is our target manufacturing technique for fabricating the entire tire assembly with the sensor. Results show that the proposed sensors, inserted in the 3D printed tire assembly, could properly detect forces as well as their locations.

  8. TARGET CHARACTERIZATION IN 3D USING INFRARED LIDAR

    SciTech Connect

    B. FOY; B. MCVEY; R. PETRIN; J. TIEE; C. WILSON

    2001-04-01

    We report examples of the use of a scanning tunable CO2 laser lidar system in the 9-11 μm region to construct images of vegetation and rocks at ranges of up to 5 km from the instrument. Range information is combined with horizontal and vertical distances to yield an image with three spatial dimensions simultaneous with the classification of target type. Object classification is made possible by the distinct spectral signatures of both natural and man-made objects. Several multivariate statistical methods are used to illustrate the degree of discrimination possible among the natural variability of objects in both spectral shape and amplitude.

  9. Voxel-Based 3-D Tree Modeling from Lidar Images for Extracting Tree Structural Information

    NASA Astrophysics Data System (ADS)

    Hosoi, F.

    2014-12-01

    Recently, lidar (light detection and ranging) has been used for extracting tree structural information. Portable scanning lidar systems can capture the complex shape of individual trees as a 3-D point-cloud image. 3-D tree models reproduced from the lidar-derived 3-D image can be used to estimate tree structural parameters. We have proposed voxel-based 3-D modeling for extracting tree structural parameters. One of the tree parameters derived from the voxel modeling is leaf area density (LAD). We refer to the method as the voxel-based canopy profiling (VCP) method. In this method, several measurement points surrounding the canopy and optimally inclined laser beams are adopted for full laser-beam illumination of the whole canopy, including its interior. From the obtained lidar image, the 3-D information is reproduced as voxel attributes in a 3-D voxel array. Based on the voxel attributes, the contact frequency of laser beams on leaves is computed and the LAD in each horizontal layer is obtained. This method offered accurate LAD estimation for individual trees and woody canopy trees. For more accurate LAD estimation, the voxel model was constructed by combining airborne and portable ground-based lidar data. The profiles obtained by the two types of lidar complemented each other, thus eliminating blind regions and yielding more accurate LAD profiles than could be obtained by using each type of lidar alone. Based on the estimation results, we proposed an index named the laser beam coverage index, Ω, which relates to the lidar's laser-beam settings and a laser-beam attenuation factor. It was shown that this index can be used for adjusting the measurement set-up of lidar systems and for explaining the LAD estimation error of different types of lidar systems. Moreover, we proposed a method to estimate woody material volume as another application of voxel tree modeling. In this method, a voxel solid model of a target tree was produced from the lidar image, which is composed of
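
    The contact-frequency idea behind the VCP method can be sketched as below: LAD in each horizontal layer is estimated from the fraction of beams intercepted in that layer. The correction factors of the actual method are omitted, so this is only a schematic of the computation.

        # Schematic LAD profile from per-layer laser-beam contact frequency.
        import numpy as np

        def lad_profile(intercepted, passed, dz=0.5):
            """intercepted/passed: per-layer voxel counts; dz: layer height (m)."""
            contact = intercepted / np.maximum(intercepted + passed, 1)
            return contact / dz      # leaf area density per layer (m^2/m^3)

        hits = np.array([5, 40, 80, 60, 10])       # beams stopped in each layer
        misses = np.array([95, 60, 20, 40, 90])    # beams passing through
        print(lad_profile(hits, misses))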

  10. The role of terrestrial 3D LiDAR scan in bridge health monitoring

    NASA Astrophysics Data System (ADS)

    Liu, Wanqiu; Chen, Shen-En; Sajedi, Allen; Hauser, Edd

    2010-04-01

    This paper addresses the potential applications of terrestrial 3D LiDAR scanning technologies for bridge monitoring. High-resolution ground-based optical-photonic images from LiDAR scans can provide detailed geometric information about a bridge. Application of simple algorithms can retrieve damage information from the geometric point cloud data, which can be correlated to possible damage quantification, including concrete mass loss due to vehicle collisions, large permanent steel deformations, and surface erosion. However, any proposed damage detection technology should provide information that is relevant and useful to bridge managers for their decision-making process. This paper summarizes bridge issues that can be detected with 3D LiDAR technologies, establishes a general approach to using 3D point clouds for damage evaluation and suggests possible bridge state ratings that can be used as supplements to existing bridge management systems (BMS).

  11. 3D city models completion by fusing lidar and image data

    NASA Astrophysics Data System (ADS)

    Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Stentoumis, C.

    2015-05-01

    A fundamental step in the generation of visually detailed 3D city models is the acquisition of high-fidelity 3D data. Typical approaches employ DSM representations usually derived from Lidar (Light Detection and Ranging) airborne scanning or image-based procedures. In this contribution, we focus on the fusion of data from both these methods in order to enhance or complete them. Particularly, we combine an existing Lidar and orthomosaic dataset (used as reference) with a new aerial image acquisition (including both vertical and oblique imagery) of higher resolution, which was carried out in the area of Kallithea, in Athens, Greece. In a preliminary step, a digital orthophoto and a DSM are generated from the aerial images in an arbitrary reference system, by employing a Structure from Motion and dense stereo matching framework. The image-to-Lidar registration is performed by 2D feature (SIFT and SURF) extraction and matching between the two orthophotos. The established point correspondences are assigned 3D coordinates through interpolation on the reference Lidar surface, are then backprojected onto the aerial images, and are finally matched with 2D image features located in the vicinity of the backprojected 3D points. Consequently, these points serve as Ground Control Points with appropriate weights for the final orientation and calibration of the images through a bundle adjustment solution. By these means, the aerial imagery, which is optimally aligned to the reference dataset, can be used for the generation of an enhanced and more accurately textured 3D city model.
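
    The orthophoto-to-orthophoto matching step might look like the following sketch, using SIFT with Lowe's ratio test in OpenCV. The synthetic shapes stand in for the Lidar orthomosaic and the new aerial orthophoto; the thresholds are common defaults, not the paper's settings.

        # Sketch: tentative 2D correspondences between two orthophotos.
        import cv2
        import numpy as np

        # Synthetic stand-ins for the two orthophotos.
        img_ref = np.zeros((480, 640), np.uint8)
        cv2.rectangle(img_ref, (100, 100), (300, 240), 255, -1)
        cv2.circle(img_ref, (450, 300), 60, 180, -1)
        img_new = cv2.GaussianBlur(img_ref, (5, 5), 0)   # slightly altered copy

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img_ref, None)
        kp2, des2 = sift.detectAndCompute(img_new, None)

        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des1, des2, k=2)
        good = [m[0] for m in matches
                if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

        # Each surviving match links a reference-orthophoto pixel (whose 3D
        # position is interpolated from the Lidar surface) to a pixel in the
        # new imagery, i.e. a candidate Ground Control Point.
        print(len(good), "tentative correspondences")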

  12. 3D sensors for the HL-LHC

    NASA Astrophysics Data System (ADS)

    Vázquez Furelos, D.; Carulla, M.; Cavallaro, E.; Förster, F.; Grinstein, S.; Lange, J.; López Paz, I.; Manna, M.; Pellegrini, G.; Quirion, D.; Terzo, S.

    2017-01-01

    In order to increase its discovery potential, the Large Hadron Collider (LHC) accelerator will be upgraded in the next decade. The high luminosity LHC (HL-LHC) period requires new sensor technologies to cope with increasing radiation fluences and particle rates. The ATLAS experiment will replace the entire inner tracking detector with a completely new silicon-only system. 3D pixel sensors are promising candidates for the innermost layers of the Pixel detector due to their excellent radiation hardness at low operation voltages and low power dissipation at moderate temperatures. Recent developments of 3D sensors for the HL-LHC are presented.

  13. A real-time noise filtering strategy for photon counting 3D imaging lidar.

    PubMed

    Zhang, Zijing; Zhao, Yuan; Zhang, Yong; Wu, Long; Su, Jianzhong

    2013-04-22

    For a direct-detection 3D imaging lidar, the use of a Geiger-mode avalanche photodiode (Gm-APD) can greatly enhance the detection sensitivity of the lidar system, since each range measurement requires only a single detected photon. Furthermore, the Gm-APD offers significant advantages in reducing the size, mass, power and complexity of the system. However, the inevitable noise, including background noise, dark count noise and so on, remains a significant challenge in obtaining a clear 3D image of the target of interest. This paper presents a smart strategy that filters out false alarms at the stage of acquiring the raw time-of-flight (TOF) data and obtains a clear 3D image in real time. As a result, a clear 3D image is obtained with the experimental system despite the background noise of a sunny day.
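
    One common way to realize this kind of false-alarm rejection, and only a guess at the paper's specific strategy, is to histogram the raw photon TOF events and keep range bins whose counts clear a noise threshold:

        # Histogram-threshold filtering of photon-counting TOF events.
        import numpy as np

        def filter_tof(events_ns, bin_ns=1.0, k_sigma=5.0):
            bins = np.arange(0, events_ns.max() + bin_ns, bin_ns)
            hist, edges = np.histogram(events_ns, bins=bins)
            thresh = hist.mean() + k_sigma * hist.std()
            keep = np.flatnonzero(hist > thresh)
            return edges[keep]       # TOF bins attributed to the target

        rng = np.random.default_rng(3)
        noise = rng.uniform(0, 1000, 2000)      # uniform background counts (ns)
        signal = rng.normal(400, 0.3, 300)      # target return near 400 ns
        print(filter_tof(np.concatenate([noise, signal])))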

  14. Vegetation Structure and 3-D Reconstruction of Forests Using Ground-Based Echidna® Lidar

    NASA Astrophysics Data System (ADS)

    Strahler, A. H.; Yao, T.; Zhao, F.; Yang, X.

    2009-12-01

    A ground-based, scanning, near-infrared lidar, the Echidna® validation instrument (EVI), built by CSIRO Australia, retrieves structural parameters of forest stands rapidly and accurately and, by merging multiple scans into a single point cloud, provides 3-D stand reconstructions. Echidna lidar technology scans with pulses of light at 1064 nm wavelength and digitizes the light returns sufficiently finely to recover and distinguish the differing shapes of return pulses as they are scattered by leaves and trunks or larger branches. Instrument deployments in the New England region in 2007 and 2009 and in the southern Sierra Nevada of California in 2008 provided the opportunity to test the ability of the instrument to retrieve tree diameters, stem count density (stems/ha), basal area, and above-ground woody biomass from single scans at points beneath the forest canopy. In New England in 2007, mean parameters retrieved from five scans located within six 1-ha stand sites matched manually measured parameters with values of R² = 0.94-0.99. Processing the scans to retrieve leaf area index (LAI) provided values within the range of those retrieved with other optical instruments and hemispherical photography. Foliage profiles, which measure leaf area with canopy height, showed distinctly different shapes for the stands, depending on species composition and age structure. Stand heights, obtained from foliage profiles, were not significantly different from RH100 values observed by the Laser Vegetation Imaging Sensor in 2003. Data from the California 2008 and New England 2009 deployments were still being processed at the time of abstract submission. With further hardware and software development, Echidna® technology will provide rapid and accurate measurements of forest canopy structure that can replace manual field measurements, leading to more rapid and more accurate calibration and validation of structure mapping techniques using airborne and spaceborne remote sensors. Three

  15. The 2011 Eco3D Flight Campaign: Vegetation Structure and Biomass Estimation from Simultaneous SAR, Lidar and Radiometer Measurements

    NASA Technical Reports Server (NTRS)

    Fatoyinbo, Temilola; Rincon, Rafael; Harding, David; Gatebe, Charles; Ranson, Kenneth Jon; Sun, Guoqing; Dabney, Phillip; Roman, Miguel

    2012-01-01

    The Eco3D campaign was conducted in the summer of 2011. As part of the campaign, three unique and innovative NASA Goddard Space Flight Center airborne sensors were flown simultaneously: the Digital Beamforming Synthetic Aperture Radar (DBSAR), the Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) and the Cloud Absorption Radiometer (CAR). The campaign covered sites from Quebec to southern Florida and thereby acquired data over forests ranging from boreal to tropical wetlands. This paper describes the instruments and sites covered and presents the first images resulting from the campaign.

  16. An Algorithm to Identify and Localize Suitable Dock Locations from 3-D LiDAR Scans

    DTIC Science & Technology

    2013-05-10

    Three-dimensional (3-D) LiDARs have proved themselves very useful on many autonomous ground vehicles, such as the Google Driverless Car Project, the DARPA, Defense...appear in a typical point cloud data set, relative to other clusters such as cars, trees, boulders, etc. In this algorithm, these values were

  17. Geometric-model-free tracking of extended targets using 3D lidar measurements

    NASA Astrophysics Data System (ADS)

    Steinemann, Philipp; Klappstein, Jens; Dickmann, Juergen; von Hundelshausen, Felix; Wünsche, Hans-Joachim

    2012-06-01

    Tracking of extended targets in high-definition, 360-degree 3D LIDAR (Light Detection and Ranging) measurements is a challenging task and a current research topic. It is a key component in robotic applications and is relevant to path planning and collision avoidance. This paper proposes a new method, without a geometric model, to simultaneously track and accumulate 3D LIDAR measurements of an object. The method itself is based on a particle filter and uses an object-related local 3D grid for each object; no geometric object hypothesis is needed. Accumulation allows coping with occlusions. The prediction step of the particle filter is governed by a motion model consisting of a deterministic and a probabilistic part. Since this paper is focused on tracking ground vehicles, a bicycle model is used for the deterministic part. The probabilistic part depends on the current state of each particle. A function for calculating the current probability density function for the state transition is developed; it is derived in detail and based on a database of vehicle dynamics measurements covering several hundred kilometers. The adaptive probability density function narrows down the gating area for measurement data association. The second part of the proposed method addresses weighting the particles with a cost function. Different 3D-grid-dependent cost functions are presented and evaluated. Evaluations with real 3D LIDAR measurements show the performance of the proposed method, and the results are compared to ground truth data.
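
    The deterministic part of the prediction step can be sketched with a kinematic bicycle model advancing each particle's pose; the wheelbase and state layout below are assumptions for illustration, not the paper's parameters.

        # Kinematic bicycle model: one prediction step per particle.
        import numpy as np

        L = 2.7                                  # assumed wheelbase (m)

        def bicycle_predict(state, v, steer, dt):
            """state = (x, y, heading); v = speed (m/s); steer = front angle."""
            x, y, th = state
            x += v * np.cos(th) * dt
            y += v * np.sin(th) * dt
            th += v / L * np.tan(steer) * dt     # yaw rate from steering geometry
            return np.array([x, y, th])

        s = np.zeros(3)
        for _ in range(10):                      # 1 s at 10 m/s, slight left steer
            s = bicycle_predict(s, 10.0, 0.05, 0.1)
        print(s)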

  18. Fusion of terrestrial LiDAR and tomographic mapping data for 3D karst landform investigation

    NASA Astrophysics Data System (ADS)

    Höfle, B.; Forbriger, M.; Siart, C.; Nowaczinski, E.

    2012-04-01

    Highly detailed topographic information has gained in importance for studying Earth surface landforms and processes. LiDAR has evolved into the state-of-the-art technology for 3D data acquisition on various scales. This multi-sensor system can be operated from several platforms, such as airborne laser scanning (ALS), mobile laser scanning (MLS) from moving vehicles, or stationary on the ground (terrestrial laser scanning, TLS). In karst research, the integral investigation of surface and subsurface components of solution depressions (e.g. sediment-filled dolines) is required to gather and quantify the linked geomorphic processes, such as sediment flux and limestone dissolution. To acquire the depth of the different subsurface layers, a combination of seismic refraction tomography (SRT) and electrical resistivity tomography (ERT) is increasingly applied. This multi-method approach allows modeling the extension of different subsurface media (i.e. colluvial fill, epikarst zone and underlying basal bedrock). Subsequent fusion of the complementary techniques, LiDAR surface and tomographic subsurface data, enables, for the first time, 3D prospection and visualization as well as quantification of geomorphometric parameters (e.g. depth, volume, slope and aspect). This study introduces a novel GIS-based method for semi-automated fusion of TLS and geophysical data. The study area is located in the Dikti Mountains of East Crete and covers two adjacent dolines. The TLS data were acquired with a Riegl VZ-400 scanner from 12 scan positions located mainly at the doline divide. The scan positions were co-registered using the iterative closest point (ICP) algorithm of RiSCAN PRO. For the digital elevation rasters, a resolution of 0.5 m was defined. The digital surface model (DSM) of the study was derived by moving-plane interpolation of all laser points (including objects) using the OPALS software. The digital terrain model (DTM) was generated by iteratively "eroding" objects in the DSM with a minimum filter, which additionally accounts for

  19. Testbeam and laboratory characterization of CMS 3D pixel sensors

    NASA Astrophysics Data System (ADS)

    Bubna, M.; Bortoletto, D.; Alagoz, E.; Krzywda, A.; Arndt, K.; Shipsey, I.; Bolla, G.; Hinton, N.; Kok, A.; Hansen, T.-E.; Summanwar, A.; Brom, J. M.; Boscardin, M.; Chramowicz, J.; Cumalat, J.; Dalla Betta, G. F.; Dinardo, M.; Godshalk, A.; Jones, M.; Krohn, M. D.; Kumar, A.; Lei, C. M.; Mendicino, R.; Moroni, L.; Perera, L.; Povoli, M.; Prosser, A.; Rivera, R.; Solano, A.; Obertino, M. M.; Kwan, S.; Uplegger, L.; Vigani, L.; Wagner, S.

    2014-07-01

    The pixel detector is the innermost tracking device in CMS, reconstructing interaction vertices and charged particle trajectories. The sensors located in the innermost layers of the pixel detector must be upgraded for the ten-fold increase in luminosity expected at the High-Luminosity LHC (HL-LHC). As a possible replacement for planar sensors, 3D silicon technology is under consideration due to its good performance after high radiation fluence. In this paper, we report on pre- and post-irradiation measurements of CMS 3D pixel sensors with different electrode configurations from different vendors. The effects of irradiation on electrical properties, charge collection efficiency, and position resolution are discussed. Measurements of various test structures for monitoring the fabrication process and studying the bulk and surface properties of silicon sensors, such as MOS capacitors and planar and gate-controlled diodes, are also presented.

  20. Optical Sensors and Methods for Underwater 3D Reconstruction

    PubMed Central

    Massot-Campos, Miquel; Oliver-Codina, Gabriel

    2015-01-01

    This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389

  1. Increased Speed: 3D Silicon Sensors. Fast Current Amplifiers

    SciTech Connect

    Parker, Sherwood; Kok, Angela; Kenney, Christopher; Jarron, Pierre; Hasi, Jasmine; Despeisse, Matthieu; Da Via, Cinzia; Anelli, Giovanni

    2012-05-07

    The authors describe techniques to make fast, sub-nanosecond time resolution solid-state detector systems using sensors with 3D electrodes, current amplifiers, constant-fraction comparators or fast wave-form recorders, and some of the next steps to reach still faster results.

  2. Helicopter flight test of 3D imaging flash LIDAR technology for safe, autonomous, and precise planetary landing

    NASA Astrophysics Data System (ADS)

    Roback, Vincent; Bulyshev, Alexander; Amzajerdian, Farzin; Reisse, Robert

    2013-05-01

    Two flash lidars, integrated from a number of cutting-edge components from industry and NASA, are lab characterized and flight tested for determination of maximum operational range under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project (in its fourth development and field test cycle) which is seeking to develop a guidance, navigation, and control (GNC) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The flash lidars incorporate pioneering 3-D imaging cameras based on Indium-Gallium-Arsenide Avalanche Photo Diode (InGaAs APD) and novel micro-electronic technology for a 128 x 128 pixel array operating at 30 Hz, high pulse-energy 1.06 μm Nd:YAG lasers, and high performance transmitter and receiver fixed and zoom optics. The two flash lidars are characterized on the NASA-Langley Research Center (LaRC) Sensor Test Range, integrated with other portions of the ALHAT GNC system from partner organizations into an instrument pod at NASA-JPL, integrated onto an Erickson Aircrane Helicopter at NASA-Dryden, and flight tested at the Edwards AFB Rogers dry lakebed over a field of human-made geometric hazards during the summer of 2010. Results show that the maximum operational range goal of 1 km is met and exceeded up to a value of 1.2 km. In addition, calibrated 3-D images of several hazards are acquired in real-time for later reconstruction into Digital Elevation Maps (DEMs).

  3. Helicopter Flight Test of 3-D Imaging Flash LIDAR Technology for Safe, Autonomous, and Precise Planetary Landing

    NASA Technical Reports Server (NTRS)

    Roback, Vincent; Bulyshev, Alexander; Amzajerdian, Farzin; Reisse, Robert

    2013-01-01

    Two flash lidars, integrated from a number of cutting-edge components from industry and NASA, are lab characterized and flight tested for determination of maximum operational range under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project (in its fourth development and field test cycle) which is seeking to develop a guidance, navigation, and control (GN&C) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The flash lidars incorporate pioneering 3-D imaging cameras based on Indium-Gallium-Arsenide Avalanche Photo Diode (InGaAs APD) and novel micro-electronic technology for a 128 x 128 pixel array operating at 30 Hz, high pulse-energy 1.06 micrometer Nd:YAG lasers, and high performance transmitter and receiver fixed and zoom optics. The two flash lidars are characterized on the NASA-Langley Research Center (LaRC) Sensor Test Range, integrated with other portions of the ALHAT GN&C system from partner organizations into an instrument pod at NASA-JPL, integrated onto an Erickson Aircrane Helicopter at NASA-Dryden, and flight tested at the Edwards AFB Rogers dry lakebed over a field of human-made geometric hazards during the summer of 2010. Results show that the maximum operational range goal of 1 km is met and exceeded up to a value of 1.2 km. In addition, calibrated 3-D images of several hazards are acquired in real-time for later reconstruction into Digital Elevation Maps (DEMs).

  4. Study on 3D CFBG vibration sensor and its application

    NASA Astrophysics Data System (ADS)

    Nan, Qiuming; Li, Sheng

    2016-03-01

    A novel three-dimensional (3D) vibration sensor based on chirped fiber Bragg gratings (CFBG) is developed to measure 3D vibration of mechanical equipment. The sensor is composed of three independent vibration sensing units. Each unit uses double matched chirped gratings as sensing elements, and the sensing signal is processed by the edge filtering demodulation method. The structure and principle of the sensor are theoretically analyzed, and its performance is characterized experimentally as follows: the operating frequency range is 10 Hz‒500 Hz; the acceleration measurement range is 2 m·s-2‒30 m·s-2; the sensitivity is about 70 mV/m·s-2; the crosstalk coefficient is greater than 22 dB; and self-compensation for temperature is available. The sensor was then applied to monitor the vibration state of a radiation pump. The experiments and applications show that the sensor has good sensing performance and meets the requirements of engineering measurement.

  5. Testbeam and laboratory characterization of 3D CMS pixel sensors

    NASA Astrophysics Data System (ADS)

    Bubna, Mayur; Krzywda, Alex; Alagoz, Enver; Bortoletto, Daniela

    2013-04-01

    Future generations of colliders, like the High Luminosity Large Hadron Collider (HL-LHC) at CERN, will deliver much higher radiation doses to the particle detectors, specifically those closer to the beam line. Inner tracker detectors will be the most affected, suffering increased occupancy and radiation damage to the silicon sensors. Planar silicon sensors have not shown enough radiation hardness for the innermost layers, where radiation doses can reach values around 10^16 neq/cm^2. As a possible replacement for planar pixel sensors, 3D silicon technology is under consideration, as it shows higher radiation hardness and efficiency comparable to planar sensors. Several 3D CMS pixel designs were fabricated at FBK, CNM, and SINTEF. They were bump bonded to the CMS pixel readout chip and characterized in the laboratory using a radioactive source (Sr90) and at the Fermilab MTEST beam test facility. Sensors were also irradiated with 800 MeV protons at Los Alamos National Lab to study post-irradiation behavior. In addition, several diodes and test structures from FBK were studied before and after irradiation. We report the laboratory and testbeam measurement results for the irradiated 3D devices.

  6. Feature extraction from 3D lidar point clouds using image processing methods

    NASA Astrophysics Data System (ADS)

    Zhu, Ling; Shortridge, Ashton; Lusch, David; Shi, Ruoming

    2011-10-01

    Airborne LiDAR data have become cost-effective to produce at local and regional scales across the United States and internationally. These data are typically collected and processed into surface data products by contractors for state and local communities. Current algorithms for advanced processing of LiDAR point cloud data are normally implemented in specialized, expensive software that is not available for many users, and these users are therefore unable to experiment with the LiDAR point cloud data directly for extracting desired feature classes. The objective of this research is to identify and assess automated, readily implementable GIS procedures to extract features like buildings, vegetated areas, parking lots and roads from LiDAR data using standard image processing tools, as such tools are relatively mature with many effective classification methods. The final procedure adopted employs four distinct stages. First, interpolation is used to transfer the 3D points to a high-resolution raster. Raster grids of both height and intensity are generated. Second, multiple raster maps - a normalized surface model (nDSM), difference of returns, slope, and the LiDAR intensity map - are conflated to generate a multi-channel image. Third, a feature space of this image is created. Finally, supervised classification on the feature space is implemented. The approach is demonstrated in both a conceptual model and on a complex real-world case study, and its strengths and limitations are addressed.
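
    A condensed sketch of the four stages in Python (NumPy/SciPy/scikit-learn); the channels shown and the use of a random forest as the supervised classifier are illustrative assumptions, not the paper's exact tool chain:

        import numpy as np
        from scipy.interpolate import griddata
        from sklearn.ensemble import RandomForestClassifier

        def lidar_to_channels(points, intensity, cell=1.0):
            """Stages 1-2: interpolate 3D points to height/intensity rasters."""
            xy = points[:, :2]
            xi = np.arange(xy[:, 0].min(), xy[:, 0].max(), cell)
            yi = np.arange(xy[:, 1].min(), xy[:, 1].max(), cell)
            gx, gy = np.meshgrid(xi, yi)
            height = griddata(xy, points[:, 2], (gx, gy), method='linear')
            intens = griddata(xy, intensity, (gx, gy), method='linear')
            # the nDSM, difference-of-returns and slope rasters would be
            # stacked here as further channels
            return np.dstack([height, intens])

        def classify(channels, labels_mask):
            """Stages 3-4: feature space + supervised classification.

            labels_mask holds training class ids per cell, -1 = unlabeled.
            """
            X = np.nan_to_num(channels.reshape(-1, channels.shape[-1]))
            y = labels_mask.reshape(-1)
            clf = RandomForestClassifier(n_estimators=100).fit(X[y >= 0], y[y >= 0])
            return clf.predict(X).reshape(channels.shape[:2])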

  7. Helicopter Flight Test of a Compact, Real-Time 3-D Flash Lidar for Imaging Hazardous Terrain During Planetary Landing

    NASA Technical Reports Server (NTRS)

    Roback, VIncent E.; Amzajerdian, Farzin; Brewster, Paul F.; Barnes, Bruce W.; Kempton, Kevin S.; Reisse, Robert A.; Bulyshev, Alexander E.

    2013-01-01

    A second generation, compact, real-time, air-cooled 3-D imaging Flash Lidar sensor system, developed from a number of cutting-edge components from industry and NASA, is lab characterized and helicopter flight tested under the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project. The ALHAT project is seeking to develop a guidance, navigation, and control (GN&C) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar incorporates a 3-D imaging video camera based on Indium-Gallium-Arsenide Avalanche Photo Diode and novel micro-electronic technology for a 128 x 128 pixel array operating at a video rate of 20 Hz, a high pulse-energy 1.06 µm Neodymium-doped: Yttrium Aluminum Garnet (Nd:YAG) laser, a remote laser safety termination system, high performance transmitter and receiver optics with one- and five-degree fields-of-view (FOV), enhanced onboard thermal control, as well as a compact and self-contained suite of support electronics housed in a single box and built around a PC-104 architecture to enable autonomous operations. The Flash Lidar was developed and then characterized at two NASA-Langley Research Center (LaRC) outdoor laser test range facilities both statically and dynamically, integrated with other ALHAT GN&C subsystems from partner organizations, and installed onto a Bell UH-1H Iroquois "Huey" helicopter at LaRC. The integrated system was flight tested at the NASA-Kennedy Space Center (KSC) on simulated lunar approach to a custom hazard field consisting of rocks, craters, hazardous slopes, and safe-sites near the Shuttle Landing Facility runway starting at slant ranges of 750 m. In order to evaluate different methods of achieving hazard detection, the lidar, in conjunction with the ALHAT hazard detection and GN&C system, operates in both a narrow 1-degree FOV raster

  8. An omnidirectional 3D sensor with line laser scanning

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Gao, Bingtuan; Liu, Chuande; Wang, Peng; Gao, Shuanglei

    2016-09-01

    Active omnidirectional vision offers the advantage of wide field-of-view (FOV) imaging, capturing an entire 3D environment scene, which is promising in the field of robot navigation. However, existing omnidirectional vision sensors based on a line laser can measure only points located on the optical plane of the laser beam, resulting in low-resolution reconstruction. To improve resolution, other omnidirectional vision sensors project a 2D encoded pattern via a projector and a curved mirror; however, the astigmatism of the curved mirror causes low-accuracy reconstruction. To solve these problems, a rotating polygon scanning mirror is used to scan the object in the vertical direction so that an entire profile of the observed scene can be obtained at high accuracy, free of astigmatism. The proposed method is calibrated with a conventional 2D checkerboard plate. The experimental results show that the measurement error of the 3D omnidirectional sensor is approximately 1 mm. The reconstruction of objects with different shapes based on the developed sensor is also verified.

  9. Utilization of 3D imaging flash lidar technology for autonomous safe landing on planetary bodies

    NASA Astrophysics Data System (ADS)

    Amzajerdian, Farzin; Vanek, Michael; Petway, Larry; Pierrottet, Diego; Busch, George; Bulyshev, Alexander

    2010-01-01

    NASA considers Flash Lidar a critical technology for enabling autonomous safe landing of future large robotic and crewed vehicles on the surface of the Moon and Mars. Flash Lidar can generate 3-Dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes during the final stages of descent and landing. The onboard flight computer can use the 3-D map of terrain to guide the vehicle to a safe site. The capabilities of Flash Lidar technology were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard a helicopter and a fixed wing aircraft. The aircraft flight tests were performed over Moon-like terrain in the California and Nevada deserts. This paper briefly describes the Flash Lidar static and aircraft flight test results. These test results are analyzed against the landing application requirements to identify the areas of technology improvement. The ongoing technology advancement activities are then explained and their goals are described.

  10. Utilization of 3-D Imaging Flash Lidar Technology for Autonomous Safe Landing on Planetary Bodies

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Vanek, Michael; Petway, Larry; Pierrotter, Diego; Busch, George; Bulyshev, Alexander

    2010-01-01

    NASA considers Flash Lidar a critical technology for enabling autonomous safe landing of future large robotic and crewed vehicles on the surface of the Moon and Mars. Flash Lidar can generate 3-Dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes during the final stages of descent and landing. The onboard flight computer can use the 3-D map of terrain to guide the vehicle to a safe site. The capabilities of Flash Lidar technology were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard a helicopter and a fixed wing aircraft. The aircraft flight tests were performed over Moon-like terrain in the California and Nevada deserts. This paper briefly describes the Flash Lidar static and aircraft flight test results. These test results are analyzed against the landing application requirements to identify the areas of technology improvement. The ongoing technology advancement activities are then explained and their goals are described.

  11. 3D sensor algorithms for spacecraft pose determination

    NASA Astrophysics Data System (ADS)

    Trenkle, John M.; Tchoryk, Peter, Jr.; Ritter, Greg A.; Pavlich, Jane C.; Hickerson, Aaron S.

    2006-05-01

    Researchers at the Michigan Aerospace Corporation have developed accurate and robust 3-D algorithms for pose determination (position and orientation) of satellites as part of an on-going effort supporting autonomous rendezvous, docking and space situational awareness activities. 3-D range data from a LAser Detection And Ranging (LADAR) sensor is the expected input; however, the approach is unique in that the algorithms are designed to be sensor independent. Parameterized inputs allow the algorithms to be readily adapted to any sensor of opportunity. The cornerstone of our approach is the ability to simulate realistic range data that may be tailored to the specifications of any sensor. We were able to modify an open-source raytracing package to produce point cloud information from which high-fidelity simulated range images are generated. The assumptions made in our experimentation are as follows: 1) we have access to a CAD model of the target including information about the surface scattering and reflection characteristics of the components; 2) the satellite of interest may appear at any 3-D attitude; 3) the target is not necessarily rigid, but does have a limited number of configurations; and, 4) the target is not obscured in any way and is the only object in the field of view of the sensor. Our pose estimation approach then involves rendering a large number of exemplars (100k to 5M), extracting 2-D (silhouette- and projection-based) and 3-D (surface-based) features, and then training ensembles of decision trees to predict: a) the 4-D regions on a unit hypersphere into which the unit quaternion that represents the vehicle [Qx, Qy, Qz, Qw] is pointing, and, b) the components of that unit quaternion. Results have been quite promising and the tools and simulation environment developed for this application may also be applied to non-cooperative spacecraft operations, Autonomous Hazard Detection and Avoidance (AHDA) for landing craft, terrain mapping, vehicle
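
    As a simplified illustration of the exemplar-based idea - the paper trains ensembles of decision trees on features extracted from rendered range-image exemplars - the sketch below collapses the hypersphere-region classification and component regression into a single multi-output regressor; the inputs and model choice are assumptions, not the authors' exact pipeline:

        import numpy as np
        from sklearn.ensemble import ExtraTreesRegressor

        # X: feature vectors (2-D silhouette/projection and 3-D surface
        #    features) extracted from rendered exemplars
        # Q: unit quaternions [Qx, Qy, Qz, Qw] used to render each exemplar
        def train_pose_regressor(X, Q):
            return ExtraTreesRegressor(n_estimators=200).fit(X, Q)

        def estimate_attitude(model, features):
            q = model.predict(features.reshape(1, -1))[0]
            return q / np.linalg.norm(q)  # re-normalise to a unit quaternion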

  12. Multi-sensor 3D volumetric reconstruction using CUDA

    NASA Astrophysics Data System (ADS)

    Aliakbarpour, Hadi; Almeida, Luis; Menezes, Paulo; Dias, Jorge

    2011-12-01

    This paper presents a full-body volumetric reconstruction of a person in a scene using a sensor network, where some of the sensors can be mobile. The network is composed of camera-inertial sensor (IS) pairs. By taking advantage of the IS, the 3D reconstruction is performed without a planar-ground assumption. Moreover, the IS in each pair is used to define a virtual camera whose image plane is horizontal and aligned with the earth cardinal directions. The IS is furthermore used to define a set of inertial planes in the scene. The image plane of each virtual camera is projected onto this set of parallel horizontal inertial planes using adapted homography functions. A parallel processing architecture is proposed in order to perform real-time human volumetric reconstruction. The real-time characteristic is obtained by implementing the reconstruction algorithm on a graphics processing unit (GPU) using the Compute Unified Device Architecture (CUDA). In order to show the effectiveness of the proposed algorithm, a variety of gestures of a person acting in the scene are reconstructed and demonstrated. Analyses have been carried out to measure the performance of the algorithm in terms of processing time. The proposed framework has potential to be used in applications such as smart rooms, human behavior analysis and 3D teleconferencing.

  13. Multidimensional measurement by using 3-D PMD sensors

    NASA Astrophysics Data System (ADS)

    Ringbeck, T.; Möller, T.; Hagebeuker, B.

    2007-06-01

    Optical time-of-flight measurement makes it possible to enhance 2-D sensors by adding a third dimension using the PMD principle. Various applications in the automotive (e.g. pedestrian safety), industrial, robotics and multimedia fields require robust three-dimensional data (Schwarte et al., 2000). These applications, however, all have different requirements in terms of resolution, speed, distance and target characteristics. PMDTechnologies has developed 3-D sensors based on standard CMOS processes that can provide an optimized solution for a wide field of applications combined with high integration and cost-effective production. These sensors are realized in various layout formats, from single-pixel solutions for basic applications to low-, middle- and high-resolution matrices for applications requiring more detailed data. Pixel pitches ranging from 10 micrometers up to 300 micrometers or larger can be realized and give the opportunity to optimize the sensor chip depending on the application. One aspect of all optical sensors based on a time-of-flight principle is the necessity of handling background illumination. This can be achieved by various techniques, such as optical filters and active circuits on chip. The sensors' in-pixel SBI circuitry (suppression of background illumination) makes it possible to overcome even the effects of bright ambient light. This paper focuses on this technical requirement. In Sect. 2 we briefly describe the basic operation principle of PMD sensors. The technical challenges related to the system characteristics of an active optical ranging technique are described in Sect. 3; technical solutions and measurement results are then presented in Sect. 4. We finish this work with an overview of current PMD sensors and their key parameters (Sect. 5) and some concluding remarks in Sect. 6.

  14. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, supporting applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in a database are generally encoded compactly using a shape descriptor. However, most geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by a Light Detection and Ranging (LiDAR) system, chosen for its efficient scene scanning and spatial information collection. Using point clouds - sparse, noisy, and incompletely sampled - as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in databases. The main goal of the data encoding is that the models in the database and the input point clouds are consistently encoded. First, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Second, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors are extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In the experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show a clear superiority
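
    A minimal sketch of the top-view encoding step (Python/NumPy); the cell size, max-z rasterisation and plain height histogram are simplifying assumptions - the paper's descriptor also incorporates edge and plane features:

        import numpy as np

        def topview_depth_image(points, cell=0.5):
            """Rasterise a building point cloud into a top-view depth image
            by keeping the maximum z per cell (a rough stand-in for the
            paper's roof-oriented encoding)."""
            xy = points[:, :2] - points[:, :2].min(axis=0)
            ij = np.floor(xy / cell).astype(int)
            img = np.full(ij.max(axis=0) + 1, np.nan)
            for (i, j), z in zip(ij, points[:, 2]):
                if np.isnan(img[i, j]) or z > img[i, j]:
                    img[i, j] = z
            return img

        def height_histogram(img, bins=32):
            """Simple histogram descriptor; retrieval would match these
            coefficients between query point clouds and database models."""
            v = img[~np.isnan(img)]
            h, _ = np.histogram(v, bins=bins, range=(v.min(), v.max()))
            return h / h.sum()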

  15. Sensorized Garment Augmented 3D Pervasive Virtual Reality System

    NASA Astrophysics Data System (ADS)

    Gulrez, Tauseef; Tognetti, Alessandro; de Rossi, Danilo

    Virtual reality (VR) technology has matured to a point where humans can navigate in virtual scenes; however, providing them with a comfortable, fully immersive role in VR remains a challenge. Currently available sensing solutions do not provide ease of deployment, particularly in the seated position due to sensor placement restrictions over the body, and optic sensing requires a restricted indoor environment to track body movements. Here we present a 52-sensor laden garment interfaced with VR, which offers both portability and unencumbered user movement in a VR environment. This chapter addresses the systems engineering aspects of our pervasive computing solution of the interactive sensorized 3D VR and presents the initial results and future research directions. Participants navigated in a virtual art gallery using natural body movements that were detected by their wearable sensor shirt and then mapped to electrical control signals responsible for VR scene navigation. The initial results are positive, and offer many opportunities for use in computationally intelligent man-machine multimedia control.

  16. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    NASA Astrophysics Data System (ADS)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses a canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy using very high-resolution panchromatic (PAN) and 8-band multispectral WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, was used to detect tree and building features by classifying point elevation values. The workflow includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model, generation of a bare-earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in the ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file, and ortho-rectification was carried out in ERDAS LPS by incorporating well-distributed GCPs; the root mean square error (RMSE) for the WV-2 data was estimated to be 0.25 m using more than 10 well-distributed GCPs. In the next stage, we generated the bare-earth DEM from the LiDAR point cloud data. In most cases a bare-earth DEM does not represent the true ground elevation, so the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud data was normalized based on the DTM to reduce the effect of undulating terrain: the vegetation point cloud values were normalized by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM) or CHM was calculated from the LiDAR data by subtracting the DEM from the DSM.
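
    The nDSM/CHM step reduces to a per-cell raster difference; a minimal sketch (Python/NumPy, with the DSM and DEM assumed to be co-registered arrays):

        import numpy as np

        def canopy_height_model(dsm, dem):
            """nDSM / CHM: first-surface model minus bare-earth model.
            Negative cells (noise or edited terrain) are clamped to zero."""
            return np.clip(dsm - dem, 0.0, None)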

  17. Handheld underwater 3D sensor based on fringe projection technique

    NASA Astrophysics Data System (ADS)

    Bräuer-Burchardt, Christian; Heinze, Matthias; Schmidt, Ingo; Meng, Lichun; Ramm, Roland; Kühmstedt, Peter; Notni, Gunther

    2015-05-01

    A new handheld 3D surface scanner was developed especially for underwater use down to a diving depth of about 40 meters. Additionally, the sensor is suitable for outdoor use under bad weather conditions such as splashing water, wind, and poor illumination. The optical components of the sensor are two cameras and one projector. The measurement field is about 250 mm x 200 mm. The depth resolution is about 50 μm and the lateral resolution is approximately 150 μm. The weight of the scanner is about 10 kg. The housing was produced from synthetic powder using a 3D printing technique. The measurement time for one scan is between a third and a half of a second. The computer for measurement control and data analysis is integrated into the housing of the scanner. A display on the back presents the results of each measurement graphically, enabling real-time evaluation by the user during the recording of the measurement data.

  18. Compact 3D lidar based on optically coupled horizontal and vertical scanning mechanism for the autonomous navigation of robots

    NASA Astrophysics Data System (ADS)

    Lee, Min-Gu; Baeg, Seung-Ho; Lee, Ki-Min; Lee, Hae-Seok; Baeg, Moon-Hong; Park, Jong-Ok; Kim, Hong-Ki

    2011-06-01

    The purpose of this research is to develop a new 3D LIDAR sensor, named KIDAR-B25, for measuring 3D image information with high range accuracy, high speed and compact size. To measure the distance to a target object, we developed a range measurement unit implemented with the direct time-of-flight (TOF) method using a TDC chip, a pulsed laser transmitter as the illumination source (pulse width: 10 ns, wavelength: 905 nm, repetition rate: 30 kHz, peak power: 20 W), and an Si APD receiver with high sensitivity and wide bandwidth. We also devised a horizontal and vertical scanning mechanism, climbing in a spiral and coupled with the laser optical path. In addition, control electronics such as the motor controller, the signal processing unit and the power distributor were developed and integrated into a compact assembly. The key point of the 3D LIDAR design proposed in this paper is the compact scanning mechanism, coupled with the optical module both horizontally and vertically. The KIDAR-B25 uses the same beam propagation axis for emitting the pulsed laser and receiving the reflected one, with no mutual optical interference. The scanning performance of the KIDAR-B25 has been proven with stable operation up to 20 Hz (vertical) and 40 Hz (horizontal), reaching maximum speed in about 1.7 s. The vertical field of view (FOV) extends to +/-10 degrees with a 0.25-degree angular resolution, and the whole horizontal plane (360 degrees) is covered with a 0.125-degree angular resolution. Since the KIDAR-B25 sensor was planned and developed for the navigation of mobile robots, we conducted an outdoor test to evaluate its performance. The experimental results show that the captured 3D imaging data are applicable to robot navigation for detecting and avoiding moving objects in real time.

  19. Robust Curb Detection with Fusion of 3D-Lidar and Camera Data

    PubMed Central

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-01-01

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
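
    The normal-image step can be sketched as follows (Python/NumPy); fx and fy are hypothetical pinhole intrinsics, and the paper's actual filter design may differ:

        import numpy as np

        def normals_from_depth(depth, fx, fy):
            """Filter-based surface normal estimation from a dense depth image.

            Approximates metric depth gradients from pixel gradients (metric
            pixel size at depth z is z/fx), then normalises the resulting
            normal field; zero-depth pixels are assumed to have been filled.
            """
            dzdx = np.gradient(depth, axis=1) * fx / depth
            dzdy = np.gradient(depth, axis=0) * fy / depth
            n = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
            return n / np.linalg.norm(n, axis=2, keepdims=True)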

  20. A new time-to-digital converter for the 3D imaging Lidar

    NASA Astrophysics Data System (ADS)

    Hu, Chunsheng; Huang, Zongsheng; Qin, Shiqiao; Hu, Feng

    2012-10-01

    In order to reduce the negative influence caused by temperature and voltage variations of the FPGA (Field Programmable Gate Array), we propose a new FPGA-based time-to-digital converter. The proposed converter adopts a high-stability TCXO (Temperature Compensated Crystal Oscillator), an FPGA and a new algorithm, which together significantly decrease the negative influence of FPGA temperature and voltage variations. This paper introduces the measurement principle, main framework, delay chain structure and delay variation compensation method of the proposed converter, and analyzes its measurement precision and maximum measurement frequency. The proposed converter was successfully implemented with a Cyclone I FPGA chip and a TCXO, and the implementation method is discussed in detail. The measurement precision of the converter is also validated by experiments. The results show that the mean measurement error is less than 260 ps, the standard deviation is less than 300 ps, and the maximum measurement frequency is above 10 million measurements per second. The precision and frequency of measurement of the proposed converter are adequate for 3D imaging lidar (light detection and ranging). Besides the 3D imaging lidar, the converter can be applied to pulsed laser range finders and other time interval measuring areas.
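
    As a toy illustration of the coarse-counter-plus-delay-line principle such converters build on (all parameters hypothetical; the paper's contribution is the compensation of tap-delay variation with temperature and voltage, which this sketch ignores):

        def tdc_interval(coarse_counts, start_fine, stop_fine, f_clk=100e6, taps=64):
            """Combine a coarse clock counter with delay-line interpolation.

            coarse_counts: whole clock periods counted between start and stop
            start_fine/stop_fine: delay-line tap counts from each event to
            the next clock edge (classic Nutt interpolation).
            """
            t_clk = 1.0 / f_clk      # one period of the TCXO-derived clock
            t_tap = t_clk / taps     # nominal per-tap delay; in a real FPGA
                                     # this drifts with temperature/voltage
            return coarse_counts * t_clk + (start_fine - stop_fine) * t_tap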

  1. Robust curb detection with fusion of 3D-Lidar and camera data.

    PubMed

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-05-21

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.

  2. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the Earth's surface. Classification of LiDAR data to extract ground, vegetation, and buildings is a very important step in numerous applications such as 3D city modelling, extraction of derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is required to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items - buildings, trees, roads, linear objects and soil - using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform and non-uniform surfaces, linear objects, and others. This primary classification is used, on the one hand, to identify the upper and lower parts of each building in an urban scene, needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces, which contains roofs, roads and ground, used in the second phase of classification. A second algorithm, also based on topological relationships and height variation analysis, is developed to segment the uniform surfaces into building roofs, roads and ground. The proposed approach has been tested on two areas: the first is a housing complex and the second is a primary school. The proposed approach led to successful classification of the building, vegetation and road classes.

  3. Evaluation of Kinect 3D Sensor for Healthcare Imaging.

    PubMed

    Pöhlmann, Stefanie T L; Harkness, Elaine F; Taylor, Christopher J; Astley, Susan M

    2016-01-01

    Microsoft Kinect is a three-dimensional (3D) sensor originally designed for gaming that has received growing interest as a cost-effective and safe device for healthcare imaging. Recent applications of Kinect in health monitoring, screening, rehabilitation, assistance systems, and intervention support are reviewed here. The suitability of available technologies for healthcare imaging applications is assessed. The performance of Kinect I, based on structured light technology, is compared with that of the more recent Kinect II, which uses time-of-flight measurement, under conditions relevant to healthcare applications. The accuracy, precision, and resolution of 3D images generated with Kinect I and Kinect II are evaluated using flat cardboard models representing different skin colors (pale, medium, and dark) at distances ranging from 0.5 to 1.2 m and measurement angles of up to 75°. Both sensors demonstrated high accuracy (majority of measurements <2 mm) and precision (mean point to plane error <2 mm) at an average resolution of at least 390 points per cm². Kinect I is capable of imaging at shorter measurement distances, but Kinect II enables structures angled at over 60° to be evaluated. Kinect II showed significantly higher precision and Kinect I showed significantly higher resolution (both p < 0.001). The choice of object color can influence measurement range and precision. Although Kinect is not a medical imaging device, both sensor generations show performance adequate for a range of healthcare imaging applications. Kinect I is more appropriate for short-range imaging and Kinect II is more appropriate for imaging highly curved surfaces such as the face or breast.
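
    A short sketch of the mean point-to-plane error used here as the precision metric (Python/NumPy, best-fit plane via SVD over a patch of 3D points from the flat target):

        import numpy as np

        def point_to_plane_error(points):
            """Mean distance of an (N, 3) point patch to its best-fit plane."""
            c = points.mean(axis=0)
            # the plane normal is the singular vector of the smallest
            # singular value of the centred point matrix
            _, _, vt = np.linalg.svd(points - c)
            n = vt[-1]
            return np.abs((points - c) @ n).mean()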

  4. Cordless hand-held optical 3D sensor

    NASA Astrophysics Data System (ADS)

    Munkelt, Christoph; Bräuer-Burchardt, Christian; Kühmstedt, Peter; Schmidt, Ingo; Notni, Gunther

    2007-07-01

    A new mobile optical 3D measurement system using a phase-correlation-based fringe projection technique is presented. The sensor consists of a digital projection unit and two cameras in a stereo arrangement, both battery powered. Data transfer to a base station is done via WLAN. This makes it possible to use the system in complicated, remote measurement situations, which are typical in archaeology and architecture. In the measurement procedure the sensor is hand-held by the user, illuminating the object with a sequence of fewer than 10 fringe patterns within a time below 200 ms. This short sequence duration was achieved by a new approach which combines the epipolar constraint with robust phase correlation, utilizing a pre-calibrated sensor head containing two cameras and a digital fringe projector. Furthermore, the system can be utilized to acquire the all-around shape of objects by using the phasogrammetric approach with virtual landmarks introduced by the authors [1, 2]. In this way no matching procedures or markers are necessary for the registration of multiple views, which makes the system very flexible in accomplishing different measurement tasks. The realized measurement field is approx. 100 mm up to 400 mm in diameter. The mobile character makes the measurement system useful for a wide range of applications in arts, architecture, archaeology and criminology, which will be shown in the paper.

  5. Automatic extraction of insulators from 3D LiDAR data of an electrical substation

    NASA Astrophysics Data System (ADS)

    Arastounia, M.; Lichti, D. D.

    2013-10-01

    A considerable percentage of power outages are caused by animals that come into contact with conductive elements of electrical substations. These can be prevented by insulating conductive electrical objects, for which a 3D as-built plan of the substation is crucial. This research aims to create such a 3D as-built plan using terrestrial LiDAR data; in this paper the aim is to extract insulators, which are key objects in electrical substations. We propose a segmentation method based on a new approach for finding the principal direction of a point distribution. This is done by forming and analysing a distribution matrix whose elements are the ranges of the points along 9 different directions in 3D space. Comparison of the computational performance of our method with PCA (principal component analysis) shows that our approach is 25% faster, since it utilizes zero-order moments while PCA computes the first- and second-order moments, which is more time-consuming. A knowledge-based approach has been developed to automatically recognize points on insulators. The method utilizes known insulator properties such as diameter and the number and spacing of their rings. The results achieved indicate that 24 out of 27 insulators could be recognized, while the 3 unrecognized ones were highly occluded. Check point analysis was performed by manually cropping all points on insulators. The results of the check point analysis show that the accuracy, precision and recall of insulator recognition are 98%, 86% and 81%, respectively. It is concluded that automatic object extraction from electrical substations using only LiDAR data is not only possible but also promising. Moreover, our approach to determining the directional distribution of points is computationally more efficient for segmentation of objects in electrical substations than PCA. Finally, our knowledge-based method is promising for recognizing points on electrical objects, as it was successfully applied for
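
    A minimal sketch of the range-based alternative to PCA (Python/NumPy); the particular set of 9 directions below is an assumption, as the paper's exact direction set is not reproduced here:

        import numpy as np

        def directional_ranges(points):
            """Approximate the principal direction of an (N, 3) point cluster
            from the spread (max - min) along 9 fixed directions - a
            zero-order-moment alternative to PCA's covariance analysis."""
            dirs = np.array([
                [1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 0], [1, 0, 1], [0, 1, 1],
                [1, -1, 0], [1, 0, -1], [0, 1, -1],
            ], dtype=float)
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
            proj = points @ dirs.T                  # projection onto each direction
            ranges = proj.max(axis=0) - proj.min(axis=0)
            return dirs[np.argmax(ranges)], ranges  # dominant direction + ranges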

  6. High Time Resolution Photon Counting 3D Imaging Sensors

    NASA Astrophysics Data System (ADS)

    Siegmund, O.; Ertley, C.; Vallerga, J.

    2016-09-01

    Novel sealed-tube microchannel plate (MCP) detectors using next generation cross strip (XS) anode readouts and high performance electronics have been developed to provide photon counting imaging sensors for astronomy and high time resolution 3D remote sensing. 18 mm aperture sealed tubes with MCPs and high efficiency Super-GenII or GaAs photocathodes have been implemented to access the visible/NIR regimes for ground based research, astronomical and space sensing applications. The cross strip anode readouts in combination with PXS-II high speed event processing electronics can process high single photon counting event rates at >5 MHz (~80 ns dead-time per event), and time stamp events to better than 25 ps. Furthermore, we are developing a high speed ASIC version of the electronics for low power/low mass spaceflight applications. For a GaAs tube the peak quantum efficiency has degraded from 30% (at 560-850 nm) to 25% over 4 years, but for Super-GenII tubes the peak quantum efficiency of 17% (peak at 550 nm) has remained unchanged for over 7 years. The Super-GenII tubes have a uniform spatial resolution of <30 μm FWHM (~1 x 10^6 gain) and single event timing resolution of ~100 ps (FWHM). The relatively low MCP gain photon counting operation also permits longer overall sensor lifetimes and high local counting rates. Using the high timing resolution, we have demonstrated 3D object imaging with laser pulse (630 nm, 45 ps jitter Pilas laser) reflections in single photon counting mode with spatial and depth sensitivity of the order of a few millimeters. A 50 mm Planacon sealed tube was also constructed using atomic layer deposited microchannel plates, which potentially offer better overall sealed-tube lifetime, quantum efficiency and gain stability. This tube achieves standard bialkali quantum efficiency levels, is stable, and has been coupled to the PXS-II electronics and used to detect and image fast laser pulse signals.

  7. 3D Vegetation Mapping Using UAVSAR, LVIS, and LIDAR Data Acquisition Methods

    NASA Technical Reports Server (NTRS)

    Calderon, Denice

    2011-01-01

    The overarching objective of this ongoing project is to assess the role of vegetation in climate change. Forests capture carbon, a greenhouse gas, from the atmosphere. Thus, any change, whether natural (e.g. growth, fire, death) or due to anthropogenic activity (e.g. logging, burning, urbanization), may have a significant impact on the Earth's carbon cycle. Through the use of the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) and NASA's Laser Vegetation Imaging Sensor (LVIS), airborne radar and Light Detection and Ranging (LIDAR) remote sensing technologies, we gather data to estimate the amount of carbon contained in forests and how that content changes over time. The UAVSAR and LVIS sensors were deployed around the world with the objective of mapping terrain to gather tree canopy height and biomass data; this data is in turn used to correlate vegetation with the global carbon cycle.

  8. Study of City Landscape Heritage Using Lidar Data and 3d-City Models

    NASA Astrophysics Data System (ADS)

    Rubinowicz, P.; Czynska, K.

    2015-04-01

    In contemporary town planning, protection of the urban landscape is a significant issue. It especially concerns cities where urban structures are the result of ages of evolution and the layering of historical development. Specific panoramas and other strategic views with historic city dominants can be an important part of the cultural heritage and genius loci. On the other hand, protection of such expositions introduces limitations for future city development. Digital Earth observation techniques create new possibilities for more accurate urban studies, monitoring of urbanization processes and measurement of city landscape parameters. The paper examines possibilities for the application of Lidar data and digital 3D-city models for: a) evaluation of strategic city views, b) mapping landscape absorption limits, and c) determining protection zones where urbanization and building heights should be limited. In reference to this goal, the paper introduces a method of computational analysis of the city landscape called Visual Protection Surface (VPS). The method emulates a virtual surface above the city that protects a set of selected strategic views. The surface defines the maximum height of buildings in such a way that no new facility can be seen in any of the selected views. The research also includes analyses of the quality of the simulations according to the form and precision of the input data: airborne Lidar / DSM models and more advanced 3D-city models (incl. semantics of the geometry, as in the CityGML format). The outcome can support professional planning of tall building development. The VPS method has been implemented in a computer program developed by the authors (C++). Simulations were carried out on the example of the city of Dresden.

  9. Measuring Complete 3D Vegetation Structure With Airborne Waveform Lidar: A Calibration and Validation With Terrestrial Lidar Derived Voxels

    NASA Astrophysics Data System (ADS)

    Hancock, S.; Anderson, K.; Disney, M.; Gaston, K. J.

    2015-12-01

    Accurate measurements of vegetation are vital to understand habitats and their provision of ecosystem services as well as having applications in satellite calibration, weather modelling and forestry. The majority of humans now live in urban areas and so understanding vegetation structure in these very heterogeneous areas is of importance. A number of previous studies have used airborne lidar (ALS) to characterise canopy height and canopy cover, but very few have fully characterised 3D vegetation, including understorey. Those that have either relied on leaf-off scans to allow unattenuated measurement of understorey or else did not validate. A method for creating a detailed voxel map of urban vegetation, in which the surface area of vegetation within a grid of cuboids (1.5 m by 1.5 m by 25 cm) is defined, from full-waveform ALS is presented. The ALS was processed with deconvolution and attenuation correction methods. The signal processing was calibrated and validated against synthetic waveforms generated from terrestrial laser scanning (TLS) data, taken as "truth". The TLS data was corrected for partial hits and attenuation using a voxel approach and these steps were validated and found to be accurate. The ALS results were benchmarked against the more common discrete return ALS products (produced automatically by the lidar manufacturer's algorithms) and Gaussian decomposition of full-waveform ALS. The true vegetation profile was accurately recreated by deconvolution. Far more detail was captured by the deconvolved waveform than either the discrete return or Gaussian decomposed ALS, particularly detail within the canopy; vital information for understanding habitats. In the paper, we will present the results with a focus on the methodological steps towards generating the voxel model, and the subsequent quantitative calibration and validation of the modelling approach using TLS. We will discuss the implications of the work for complete vegetation canopy descriptions in

  10. GPS 3-D cockpit displays: Sensors, algorithms, and flight testing

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew Kevin

    Tunnel-in-the-Sky 3-D flight displays have been investigated for several decades as a means of enhancing aircraft safety and utility. However, high costs have prevented commercial development and seriously hindered research into their operational benefits. The rapid development of Differential Global Positioning Systems (DGPS), inexpensive computing power, and ruggedized displays is now changing this situation. A low-cost prototype system was built and flight tested to investigate implementation and operational issues. The display provided an "out the window" 3-D perspective view of the world, letting the pilot see the horizon, runway, and desired flight path even in instrument flight conditions. The flight path was depicted as a tunnel through which the pilot flew the airplane, while predictor symbology provided guidance to minimize path-following errors. Positioning data was supplied by various DGPS sources including the Stanford Wide Area Augmentation System (WAAS) testbed. A combination of GPS and low-cost inertial sensors provided vehicle heading, pitch, and roll information. Architectural and sensor fusion tradeoffs made during system implementation are discussed. Computational algorithms used to provide guidance on curved paths over the earth geoid are outlined along with display system design issues. It was found that current technology enables low-cost Tunnel-in-the-Sky display systems with a target cost of $20,000 for large-scale commercialization. Extensive testing on Piper Dakota and Beechcraft Queen Air aircraft demonstrated enhanced accuracy and operational flexibility on a variety of complex flight trajectories. These included curved and segmented approaches, traffic patterns flown on instruments, and skywriting by instrument reference. Overlays to existing instrument approaches at airports in California and Alaska were flown and compared with current instrument procedures. These overlays demonstrated improved utility and situational awareness for

  11. Optical 3D sensor for large objects in industrial application

    NASA Astrophysics Data System (ADS)

    Kuhmstedt, Peter; Heinze, Matthias; Himmelreich, Michael; Brauer-Burchardt, Christian; Brakhage, Peter; Notni, Gunther

    2005-06-01

    A new self-calibrating optical 3D measurement system using the fringe projection technique, named "kolibri 1500", is presented. It can be utilised to acquire the all-around shape of large objects. The basic measuring principle is the phasogrammetric approach introduced by the authors /1, 2/. The "kolibri 1500" consists of a stationary system with a translation unit for the handling of objects. Automatic whole-body measurement is achieved by using sensor head rotation and a changeable object position, fully computer-controlled. Multi-view measurement is realised using the concept of virtual reference points, so no matching procedures or markers are necessary for the registration of the different images. This makes the system very flexible for realising different measurement tasks. Furthermore, due to the self-calibrating principle, mechanical alterations are compensated. Typical parameters of the system are: the measurement volume extends from 400 mm up to 1500 mm max. length, the measurement time is between 2 min for 12 images and 20 min for 36 images, and the measurement accuracy is below 50 μm. The flexibility makes the measurement system useful for a wide range of applications such as quality control, rapid prototyping, design and CAD/CAM, which will be shown in the paper.

  12. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Mostafavi, Mir Abolfazl; Wang, Chen

    2016-06-01

    Topological relations are fundamental for qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging, and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D models represented by the Boundary Representation model in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element of the 3x3 matrix records the details of connection through the common parts of two regions and the intersection line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of the topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of point clouds automatically.
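
    For intuition, a 2D analogue of such an intersection-matrix test using the shapely library; the authors' dimension-extended model for planar regions in R3 is not available as a library call, so the classical DE-9IM shown here only illustrates the interior/boundary/exterior matrix structure the model extends:

        from shapely.geometry import Polygon

        a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
        b = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])

        # DE-9IM matrix: dimensions of the pairwise intersections of the
        # interiors, boundaries and exteriors of the two regions
        print(a.relate(b))      # '212101212' -> the regions intersect
        print(a.relate(b)[-1])  # last element: exterior/exterior interaction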

  13. Comparison of 3D point clouds produced by LIDAR and UAV photoscan in the Rochefort cave (Belgium)

    NASA Astrophysics Data System (ADS)

    Watlet, Arnaud; Triantafyllou, Antoine; Kaufmann, Olivier; Le Mouelic, Stéphane

    2016-04-01

    Amongst today's techniques that are able to produce 3D point clouds, LIDAR and UAV (Unmanned Aerial Vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high-resolution and high-precision 3D point clouds, but such methods are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different kinds of environments. Moreover, the photogrammetric processing workflow for digital images taken with UAVs is becoming easier with the rise of many affordable software packages (e.g. Agisoft, PhotoModeler3D, VisualSFM). We present here a challenging study made at the Rochefort Cave Laboratory (South Belgium) comprising surface and underground surveys. The site is located in the Belgian Variscan fold-and-thrust belt, a region that shows many karstic networks within Devonian limestone units. A LIDAR scan was acquired in the main chamber of the cave (~15000 m³) to spatialize a 3D point cloud of its inner walls and infer geological beds and structures. Even though the LIDAR instrument was not easy to use in such a caving environment, the collected data showed remarkable precision when checked against a few control points. We also performed another challenging survey of the same cave chamber by modelling a 3D point cloud using photogrammetry of a set of DSLR camera pictures taken from the ground and UAV pictures. The aim was to compare both techniques in terms of (i) implementation of data acquisition and processing, (ii) quality of the resulting 3D point clouds (point density, field vs. cloud recovery and point precision), and (iii) their application for geological purposes. Through the Rochefort case study, the main conclusions are that the LIDAR technique provides higher-density point clouds with slightly higher precision than the photogrammetry method. However, 3D data modelled by photogrammetry provide visible light spectral information

  14. Alignment of a 3-D Sensor and a 2-D Sensor Measuring Azimuth and Elevation

    DTIC Science & Technology

    1992-04-01

    alignment algorithm discussed in this report were developed by the Combat System Technologies Branch (N35) of the Engineering and Technology Division (N30) ... removal of alignment errors in dissimilar sensors (e.g., active and passive sensors, 2-D and 3-D sensors, etc.). However, the alignment of dissimilar...

  15. An efficient approach to 3D single tree-crown delineation in LiDAR data

    NASA Astrophysics Data System (ADS)

    Mongus, Domen; Žalik, Borut

    2015-10-01

    This paper proposes a new method for 3D delineation of single tree crowns in LiDAR data that exploits the complementarity of treetop and tree trunk detection. A unified mathematical framework based on graph theory is provided, allowing all the segmentations to be achieved using marker-controlled watersheds. Treetops are defined by detecting concave neighbourhoods within the canopy height model using locally fitted surfaces. These serve as markers for watershed segmentation of the canopy layer, where possible oversegmentation is reduced by merging regions based on their heights, areas, and shapes. Additional tree crowns are delineated from the mid- and under-storey layers based on tree trunk detection; a new approach for estimating the verticality of point distributions is proposed for this purpose. Watershed segmentation is then applied to a density function within the voxel space, while the boundaries of trees already delineated from the canopy layer are used to prevent regions from overspreading. The experiments show an approximately 6% increase in efficiency for the proposed treetop definition based on locally fitted surfaces in comparison with the traditionally used local maxima of the smoothed canopy height model. In addition, a 4% increase in efficiency is achieved by the proposed tree trunk detection. Although tree trunk detection alone depends on the data density, when supplemented with treetop detection the proposed approach is efficient even when dealing with low-density point clouds.
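
    As a point of reference for the comparison above, here is a minimal sketch of the baseline approach the paper is measured against: marker-controlled watershed segmentation of a canopy height model seeded with local maxima of the smoothed CHM. The input raster, smoothing and thresholds are hypothetical.

```python
# Baseline marker-controlled watershed delineation of tree crowns from a
# canopy height model (CHM). `chm` is a hypothetical 2D array of canopy
# heights in meters; thresholds are illustrative.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

chm = np.load("chm.npy")                       # hypothetical input raster
chm_smooth = ndi.gaussian_filter(chm, sigma=1.0)

# Treetop markers: local maxima of the smoothed CHM above a height cutoff.
coords = peak_local_max(chm_smooth, min_distance=3, threshold_abs=2.0)
markers = np.zeros(chm.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

# Watershed on the inverted CHM, restricted to canopy pixels (> 2 m).
crowns = watershed(-chm_smooth, markers=markers, mask=chm > 2.0)
print("delineated crowns:", crowns.max())
```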

  16. A Novel Method for Automation of 3D Hydro Break Line Generation from LIDAR Data Using MATLAB

    NASA Astrophysics Data System (ADS)

    Toscano, G. J.; Gopalam, U.; Devarajan, V.

    2013-08-01

    Water body detection is necessary to generate hydro break lines, which are in turn useful in creating deliverables such as TINs, contours, and DEMs from LiDAR data. Hydro flattening follows the detection and delineation of water bodies (lakes, rivers, ponds, reservoirs, streams, etc.) with hydro break lines. Manual hydro break line generation is time consuming and expensive, and its accuracy and processing time depend on the number of vertices marked for delineation of the break lines. Automation with minimal human intervention is therefore desired for this operation. This paper proposes a novel histogram analysis of LiDAR elevation data and LiDAR intensity data to automatically detect water bodies. Detection of water bodies using elevation information was verified by checking against LiDAR intensity data, since the spectral reflectance of water bodies is very small compared with that of land and vegetation in the near-infrared wavelength range; detection using LiDAR intensity data was likewise verified against the elevation data. False detections were removed using morphological operations, and 3D break lines were generated. Finally, the automatically generated break lines were compared with their semi-automated/manual counterparts to assess the accuracy of the proposed method, and the results are discussed.
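
    A simplified stand-in for the paper's histogram analysis is sketched below: gridded LiDAR intensity and elevation are thresholded with Otsu's method, the two masks are cross-checked as described above, and false detections are removed morphologically. File names and the structuring-element size are hypothetical.

```python
# Threshold LiDAR intensity and elevation rasters (hypothetical inputs) with
# Otsu's method, cross-check the two masks, and clean speckle morphologically.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, disk

intensity = np.load("intensity.npy")   # gridded LiDAR return intensity
elevation = np.load("elevation.npy")   # gridded LiDAR elevation

# Water returns are weak in the near infrared -> low intensity mode.
water_by_intensity = intensity < threshold_otsu(intensity)
# Water surfaces are locally flat and low -> low elevation mode.
water_by_elevation = elevation < threshold_otsu(elevation)

# Keep pixels where both cues agree, then remove small false detections.
water = binary_opening(water_by_intensity & water_by_elevation, disk(2))
print("water fraction:", water.mean())
```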

  17. Lidar Sensors for Autonomous Landing and Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Petway, Larry B.; Hines, Glenn D.; Roback, Vincent E.; Reisse, Robert A.; Pierrottet, Diego F.

    2013-01-01

    Lidar technology will play an important role in enabling the highly ambitious missions being envisioned for exploration of solar system bodies. Currently, NASA is developing a set of advanced lidar sensors under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project, aimed at safe landing of robotic and manned vehicles at designated sites with a high degree of precision. These lidar sensors are an Imaging Flash Lidar capable of generating high-resolution three-dimensional elevation maps of the terrain, a Doppler Lidar for providing precision vehicle velocity and altitude, and a Laser Altimeter for measuring distance to the ground and ground contours from high altitudes. The capabilities of these lidar sensors have been demonstrated through four helicopter and one fixed-wing aircraft flight test campaigns conducted from 2008 through 2012 during different phases of their development. Recently, prototype versions of these landing lidars were completed for integration into a rocket-powered terrestrial free-flyer vehicle (Morpheus) being built by NASA Johnson Space Center. Operating in closed loop with other ALHAT avionics, the lidars will demonstrate their viability for future landing missions. This paper describes the ALHAT lidar sensors and assesses their capabilities and impacts on future landing missions.

  18. Compact, High Energy 2-micron Coherent Doppler Wind Lidar Development for NASA's Future 3-D Winds Measurement from Space

    NASA Technical Reports Server (NTRS)

    Singh, Upendra N.; Koch, Grady; Yu, Jirong; Petros, Mulugeta; Beyon, Jeffrey; Kavaya, Michael J.; Trieu, Bo; Chen, Songsheng; Bai, Yingxin; Petzar, paul; Modlin, Edward A.; Barnes, Bruce W.; Demoz, Belay B.

    2010-01-01

    This paper presents an overview of 2-micron laser transmitter development at NASA Langley Research Center for coherent-detection lidar profiling of winds. The novel high-energy, 2-micron, Ho:Tm:LuLiF laser technology developed at NASA Langley was employed to study laser technology currently envisioned by NASA for future global coherent Doppler lidar wind measurement. The 250 mJ, 10 Hz laser was designed as an integral part of a compact lidar transceiver developed for future aircraft flight. Ground-based wind profiles made with this transceiver will be presented. NASA Langley is currently funded to build complete Doppler lidar systems using this transceiver for the DC-8 aircraft in autonomous operation. Recently, the LaRC 2-micron coherent Doppler wind lidar system was selected to contribute to the NASA Science Mission Directorate (SMD) Earth Science Division (ESD) hurricane field experiment in 2010, titled Genesis and Rapid Intensification Processes (GRIP). The Doppler lidar system will measure vertical profiles of horizontal vector winds from the DC-8 aircraft using NASA Langley's existing 2-micron, pulsed, coherent-detection Doppler wind lidar system that is ready for DC-8 integration. The measurements will typically extend from the DC-8 to the earth's surface and will be highly accurate in both wind magnitude and direction. Displays of the data will be provided in real time on the DC-8. The pulsed Doppler wind lidar of NASA Langley Research Center is much more powerful than past Doppler lidars; the operating range, accuracy, range resolution, and time resolution will be unprecedented. We expect the data to play a key role, combined with the other sensors, in improving understanding and predictive algorithms for hurricane strength and track.

  19. 3D Modeling of Landslide in Open-pit Mining on Basis of Ground-based LIDAR Data

    NASA Astrophysics Data System (ADS)

    Hu, H.; Fernandez-Steeger, T. M.; Azzam, R.; Arnhardt, C.

    2009-04-01

    Slope stability is not only an important problem related to production and safety in open-pit mining, but also a very complex task. Three main groups of factors affect slope stability: geotechnical factors (geological structure, lithologic characteristics, water, cohesion, friction, etc.); climate factors (rainfall and temperature); and external factors (the open-pit mining process, blast vibration, dynamic loads, etc.). The third group, specific to open-pit mining, not only causes dynamic problems but also induces fast geometry changes that must be considered in subsequent numerical simulation and stability analysis. Recently, LIDAR technology has been applied in many fields worldwide. Ground-based LIDAR, with accuracies up to 3 mm, is increasingly suited to monitoring landslides and detecting change. LIDAR data collection and preprocessing research has been carried out by the Department of Engineering Geology and Hydrogeology at RWTH Aachen University. Using ground-based LIDAR, a high-density point cloud can be obtained in a short time for the sensitive open-pit mining area. To obtain a consistent surface model, it is necessary to set up multiple scans with the ground-based LIDAR. The data preprocessing framework, which can be implemented in PolyWorks, comprises the following steps: gross error detection and elimination, integration into a common reference frame, fusion of the different scans (re-sampled in overlap regions), and data reduction without removing useful information, which remains a challenge and a research front in LIDAR data processing. After preprocessing, a 3D surface model can be generated directly in PolyWorks or in other software by building triangular meshes. The 3D surface landslide model can be applied to further research such as real-time monitoring of landslide geometry, enabled by the fast data collection.
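
    A minimal sketch of such a preprocessing chain, using the open-source Open3D library as a stand-in for PolyWorks; file names and parameter values are hypothetical.

```python
# Gross-error removal, registration of multiple scans, fusion and reduction,
# sketched with Open3D (file names and parameters are hypothetical).
import open3d as o3d

scans = [o3d.io.read_point_cloud(f"scan_{i}.ply") for i in range(3)]

# Gross error elimination: drop statistical outliers from each scan.
cleaned = [s.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)[0]
           for s in scans]

# Integration into a common reference frame: ICP-align each scan to the
# growing fused model (a coarse manual pre-alignment is assumed).
merged = cleaned[0]
for scan in cleaned[1:]:
    reg = o3d.pipelines.registration.registration_icp(
        scan, merged, max_correspondence_distance=0.05)
    scan.transform(reg.transformation)
    merged += scan

# Data reduction: voxel downsampling of the fused cloud (overlap re-sampling).
reduced = merged.voxel_down_sample(voxel_size=0.02)
o3d.io.write_point_cloud("fused_reduced.ply", reduced)
```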

  20. 3-D water vapor field in the atmospheric boundary layer observed with scanning differential absorption lidar

    NASA Astrophysics Data System (ADS)

    Späth, Florian; Behrendt, Andreas; Muppa, Shravan Kumar; Metzendorf, Simon; Riede, Andrea; Wulfmeyer, Volker

    2016-04-01

    High-resolution three-dimensional (3-D) water vapor data of the atmospheric boundary layer (ABL) are required to improve our understanding of land-atmosphere exchange processes. For this purpose, the scanning differential absorption lidar (DIAL) of the University of Hohenheim (UHOH) was developed, as well as new analysis tools and visualization methods. The instrument determines 3-D fields of the atmospheric water vapor number density with a temporal resolution of a few seconds and a spatial resolution of up to a few tens of meters. We present three case studies from two field campaigns. In spring 2013, the UHOH DIAL was operated within the scope of the HD(CP)2 Observational Prototype Experiment (HOPE) in western Germany. HD(CP)2 stands for High Definition of Clouds and Precipitation for advancing Climate Prediction and is a German research initiative. Range-height indicator (RHI) scans of the UHOH DIAL show the water vapor heterogeneity within a range of a few kilometers up to an altitude of 2 km, and its impact on the formation of clouds at the top of the ABL. The uncertainty of the measured data was assessed for the first time by extending to scanning data a technique formerly applied to vertical time series. Typically, the accuracy of the DIAL measurements is between 0.5 and 0.8 g m-3 (or < 6 %) within the ABL, even during daytime. This allows for performing an RHI scan from the surface to an elevation angle of 90° within 10 min. In summer 2014, the UHOH DIAL participated in the Surface Atmosphere Boundary Layer Exchange (SABLE) campaign in southwestern Germany. Conical volume scans were made which reveal multiple water vapor layers in three dimensions; differences in their heights in different directions can be attributed to differences in surface elevation. With low-elevation scans in the surface layer, the humidity profiles and gradients can be related to different land cover such as maize, grassland, and forest as well as different surface layer
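
    The retrieval underlying such measurements is the standard DIAL equation, which converts the ratio of on-line and off-line backscatter powers at two ranges into a number density. A minimal worked sketch with hypothetical example values:

```python
# Standard DIAL retrieval: water vapor number density between two range
# gates from on-line/off-line power ratios. All values are hypothetical.
import numpy as np

delta_sigma = 3.0e-27    # m^2, on-line minus off-line absorption cross section
r1, r2 = 1000.0, 1075.0  # m, range gate boundaries

# Received powers at the two ranges for on-line and off-line wavelengths.
p_on_r1, p_on_r2 = 1.00, 0.75
p_off_r1, p_off_r2 = 1.00, 0.93

# DIAL equation: n = 1 / (2 * delta_sigma * (r2 - r1)) * ln(ratio)
n = (1.0 / (2.0 * delta_sigma * (r2 - r1))
     * np.log((p_on_r1 * p_off_r2) / (p_on_r2 * p_off_r1)))
print(f"water vapor number density ~ {n:.3e} m^-3")
```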

  1. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high-resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and timely updated 3D models of buildings, a key element of city structures, for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for automatic recognition of building roof models, such as flat, gable, hip, and pyramid hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR and aerial orthophotos. The main steps of this approach are building segmentation, feature extraction and learning, and finally building roof labeling in a supervised, pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that allow the convolutional neural network to localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network, based on the multilayer perceptron concept, that consists of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the computation time of learning can be decreased significantly by using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach for detecting and extracting the pattern of building roofs automatically, considering the complementary nature of height and RGB information.
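
    A minimal sketch of a CNN roof-type classifier of this general kind, taking image patches that stack an RGB orthophoto with a normalized-height channel; the architecture and patch size are illustrative assumptions, not the authors' exact network.

```python
# Small CNN classifying roof patches into four classes
# (flat, gable, hip, pyramid hip). Sizes are hypothetical.
import torch
import torch.nn as nn

class RoofCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1),  # RGB + height channel
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                 # x: (batch, 4, 64, 64)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = RoofCNN()
logits = model(torch.randn(8, 4, 64, 64))
print(logits.shape)                       # torch.Size([8, 4])
```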

  2. 3-D earthquake surface displacements from differencing pre- and post-event LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Krishnan, A. K.; Nissen, E.; Arrowsmith, R.; Saripalli, S.

    2012-12-01

    The explosion in aerial LiDAR surveying along active faults across the western United States and elsewhere provides a high-resolution topographic baseline against which to compare repeat LiDAR datasets collected after future earthquakes. We present a new method for determining 3-D coseismic surface displacements and rotations by differencing pre- and post-earthquake LiDAR point clouds using an adaptation of the Iterative Closest Point (ICP) algorithm, a point set registration technique widely used in medical imaging, computer vision and graphics. There is no need for any gridding or smoothing of the LiDAR data and the method works well even with large mismatches in the density of the two point clouds. To explore the method's performance, we simulate pre- and post-event point clouds using real ("B4") LiDAR data on the southern San Andreas Fault perturbed with displacements of known magnitude. For input point clouds with ~2 points per square meter, we are able to reproduce displacements with a 50 m grid spacing and with horizontal and vertical accuracies of ~20 cm and ~4 cm. In the future, finer grids and improved precisions should be possible with higher shot densities and better survey geo-referencing. By capturing near-fault deformation in 3-D, LiDAR differencing with ICP will complement satellite-based techniques such as InSAR which map only certain components of the surface deformation and which often break down close to surface faulting or in areas of dense vegetation. It will be especially useful for mapping shallow fault slip and rupture zone deformation, helping inform paleoseismic studies and better constrain fault zone rheology. Because ICP can image rotations directly, the technique will also help resolve the detailed kinematics of distributed zones of faulting where block rotations may be common.
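
    A minimal sketch of the windowed differencing idea, tiling the pre-event cloud into 50 m cells and registering each tile to the post-event cloud with ICP; Open3D is used here as one possible ICP implementation, and the file names, cell size and thresholds are hypothetical.

```python
# Per-cell ICP differencing: register 50 m tiles of the pre-event cloud to
# the post-event cloud and read displacement off the ICP translation.
import numpy as np
import open3d as o3d

pre = o3d.io.read_point_cloud("pre_event.ply")
post = o3d.io.read_point_cloud("post_event.ply")
pts = np.asarray(pre.points)

cell = 50.0
for x0 in np.arange(pts[:, 0].min(), pts[:, 0].max(), cell):
    for y0 in np.arange(pts[:, 1].min(), pts[:, 1].max(), cell):
        m = ((pts[:, 0] >= x0) & (pts[:, 0] < x0 + cell) &
             (pts[:, 1] >= y0) & (pts[:, 1] < y0 + cell))
        if m.sum() < 100:                 # skip sparsely covered cells
            continue
        tile = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts[m]))
        reg = o3d.pipelines.registration.registration_icp(
            tile, post, max_correspondence_distance=2.0)
        dx, dy, dz = reg.transformation[:3, 3]
        print(f"cell ({x0:.0f},{y0:.0f}): d = ({dx:.2f}, {dy:.2f}, {dz:.2f}) m")
```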

  3. Using Lidar-derived 3-D Vegetation Structure Maps to Assist in the Search for the Ivory-billed Woodpecker

    NASA Astrophysics Data System (ADS)

    Hofton, M. A.; Blair, J. B.; Rabine, D.; Dubayah, R.; Greim, H.

    2006-12-01

    Averaging about 20 inches in length, the ivory-billed woodpecker is among the world's largest woodpeckers. It once ranged through swampy forests in the southeastern and lower Mississippi valley states, and until recently was believed to have become extinct in the 1940s when commercial logging destroyed its last known habitat. Recent sightings, however, may indicate the bird's survival in remaining bottomland hardwood forest adjacent to the Cache and White Rivers in Arkansas. In June-July 2006, NASA's Laser Vegetation Imaging Sensor (LVIS) was used to map approximately 5000 km2 of the White River National Wildlife Refuge in Arkansas, including sites of recent possible sightings of the bird. LVIS is an airborne, medium-footprint (5- to 25-meter diameter), full-waveform-recording, scanning lidar system which has been used extensively for mapping forest structure, habitat, carbon and natural hazards. The system digitally records the shape of the returning laser echo, or waveform, after its interaction with the various reflecting surfaces of the earth (leaves, branches, ground, etc.), providing a true 3-dimensional record of the surface structure. Data collected included ground elevation and canopy height measurements for each laser footprint, as well as the vertical distribution of intercepted surfaces (the return waveform). Experimental metrics, such as canopy structure metrics based on energy quartiles as well as ground energy/canopy cover and waveform complexity metrics, will be derived from each waveform. The project is a collaborative effort between the University of Maryland, NASA, USGS, and the US Fish and Wildlife Service. The LVIS-generated data on the 3-D vegetation structure and underlying terrain will be used to guide local, ground-based search efforts in the upcoming field season, as well as to identify the remaining areas of habitat suitable for protection should the bird be found.

  4. 3D, Flash, Induced Current Readout for Silicon Sensors

    SciTech Connect

    Parker, Sherwood I.

    2014-06-07

    A new method for silicon microstrip and pixel detector readout using (1) 65 nm-technology current amplifiers which can, for the first time with silicon microstrip and pixel detectors, have response times far shorter than the charge collection time, (2) 3D trench electrodes large enough to subtend a reasonable solid angle at most track locations, and so have adequate sensitivity over a substantial volume of the pixel, and (3) induced signals in addition to, or in place of, collected charge.

  5. A Lidar Point Cloud Based Procedure for Vertical Canopy Structure Analysis And 3D Single Tree Modelling in Forest

    PubMed Central

    Wang, Yunsheng; Weinacker, Holger; Koch, Barbara

    2008-01-01

    A procedure for both vertical canopy structure analysis and 3D single tree modelling based on Lidar point clouds is presented in this paper. The whole research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud whose point heights represent the absolute heights of the ground objects is generated from the original Lidar raw point cloud. The main tree canopy layers and the height ranges of the layers are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling of individual trees, trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space, and a series of horizontal 2D projection images at the different height levels are then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal process through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analyses of the 3D models of individual tree crowns, important parameters such as crown height range, crown volume and crown contours at the different height levels can be derived. PMID:27879916

  6. A Lidar Point Cloud Based Procedure for Vertical Canopy Structure Analysis And 3D Single Tree Modelling in Forest.

    PubMed

    Wang, Yunsheng; Weinacker, Holger; Koch, Barbara

    2008-06-12

    A procedure for both vertical canopy structure analysis and 3D single tree modelling based on Lidar point clouds is presented in this paper. The whole research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud whose point heights represent the absolute heights of the ground objects is generated from the original Lidar raw point cloud. The main tree canopy layers and the height ranges of the layers are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling of individual trees, trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space, and a series of horizontal 2D projection images at the different height levels are then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal process through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analyses of the 3D models of individual tree crowns, important parameters such as crown height range, crown volume and crown contours at the different height levels can be derived.
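
    A minimal sketch of the voxel-space step described in the two records above: resampling a normalized point cloud into voxels and forming horizontal 2D projection images at successive height levels. The input array and voxel size are hypothetical.

```python
# Voxelize a normalized point cloud (x, y, height) and form horizontal
# projection images per height level. Inputs are hypothetical.
import numpy as np

pts = np.load("normalized_points.npy")   # (N, 3): x, y, height in meters
voxel = 0.5

ijk = np.floor((pts - pts.min(axis=0)) / voxel).astype(int)
shape = ijk.max(axis=0) + 1
occupancy = np.zeros(shape, dtype=np.int32)
np.add.at(occupancy, tuple(ijk.T), 1)    # points per voxel

# One horizontal projection image per height level (z slice).
for z in range(shape[2]):
    projection = occupancy[:, :, z]      # 2D image at this height level
    if projection.any():
        print(f"level {z}: {np.count_nonzero(projection)} occupied cells")
```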

  7. Application of Lidar Data and 3D-City Models in Visual Impact Simulations of Tall Buildings

    NASA Astrophysics Data System (ADS)

    Czynska, K.

    2015-04-01

    The paper examines the possibilities and limitations of applying Lidar data and digital 3D city models to specialist urban analyses of tall buildings. The location and height of tall buildings is a subject of discussion, conflict and controversy in many cities. The most important aspect is the visual influence of tall buildings on the city landscape, significant panoramas and other strategic city views. It is a topical issue in contemporary town planning worldwide: over 50% of high-rise buildings on Earth were built in the last 15 years. Tall buildings may be a threat especially for historically developed cities, typical of Europe. Contemporary Earth observation, increasingly available Lidar scanning and 3D city models provide a new tool for more accurate urban analysis of the impact of tall buildings. The article presents appropriate simulation techniques and the general assumptions of the geometric and computational algorithms: available methodologies and individual methods developed by the author. The goal is to develop geometric computation methods for a GIS representation of the visual impact of a selected tall building on the structure of a large city. In this connection, the article introduces a Visual Impact Size (VIS) method. The presented analyses were developed by application of an airborne Lidar / DSM model and more processed models (like CityGML) containing both geometry and semantics. The included simulations were carried out on the example of the Berlin agglomeration.

  8. First Experiences with Kinect v2 Sensor for Close Range 3d Modelling

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Macher, H.; Mittet, M.-A.; Landes, T.; Grussenmeyer, P.

    2015-02-01

    RGB-D cameras, also known as range imaging cameras, are a recent generation of sensors. As they are suitable for measuring distances to objects at high frame rates, such sensors are increasingly used for 3D acquisition, and more generally for applications in robotics and computer vision. This kind of sensor became popular especially after the Kinect v1 (Microsoft) arrived on the market in November 2010. In July 2014, Microsoft released a new sensor, the Kinect for Windows v2, based on a different technology from the first device. However, because it was initially developed for video games, the quality assessment of this new device for 3D modelling represents a major investigation axis. In this paper, first experiences with the Kinect v2 sensor are related, and its ability for close-range 3D modelling is investigated. For this purpose, error sources on the output data as well as a calibration approach are presented.

  9. 3D-TOF sensors in the automobile

    NASA Astrophysics Data System (ADS)

    Kahlmann, Timo; Oggier, Thierry; Lustenberger, Felix; Blanc, Nicolas; Ingensand, Hilmar

    2005-02-01

    In recent years, pervasive computing has become an important topic in the automobile industry. Besides well-known driving assistance systems such as ABS, ASR and ESP, several smaller tools that support driving activities have been developed. The most important reason for integrating new technologies is to increase the safety of passengers as well as other road users. The Centre Suisse d'Electronique et de Microtechnique SA (CSEM) Zurich presented the CMOS/CCD real-time range-imaging technology, a measurement principle with a wide field of applications in automobiles. The measuring system is based on the time-of-flight principle using actively modulated radiation: the radiation is emitted by the camera's illumination system, reflected by objects in the field of view, and finally imaged onto the CMOS/CCD sensor by the optics. From the acquired radiation, the phase delay, and hence the target distance, is derived within each individual pixel. From these distance measurements, three-dimensional coordinates can then be calculated. The imaging sensor acquires its environment data at high frequency and is therefore appropriate for real-time applications; the basis for decisions which contribute to increased safety is thus available. In this contribution, the operational principle of the sensor technology is first outlined, and some implementations of the technology are presented. At the laboratories of the Institute of Geodesy and Photogrammetry (IGP) at ETH Zurich, an implementation of the above-mentioned measurement principle, the SwissRanger, was investigated in detail, with special attention focused on the characteristics of this sensor and its calibration. Finally, sample applications within the automobile are introduced.
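
    A minimal sketch of the phase-delay distance computation behind such continuous-wave time-of-flight sensors; the modulation frequency and measured phase are hypothetical example values.

```python
# Distance from the phase shift of actively modulated illumination,
# as measured per pixel in a CW time-of-flight sensor.
import numpy as np

C = 299_792_458.0            # speed of light, m/s
f_mod = 20e6                 # modulation frequency, Hz

phase = np.deg2rad(45.0)     # measured phase delay in one pixel
distance = C * phase / (4 * np.pi * f_mod)
ambiguity = C / (2 * f_mod)  # unambiguous range at this modulation frequency

print(f"distance = {distance:.3f} m (unambiguous up to {ambiguity:.1f} m)")
```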

  10. 3D Underwater Imaging Using Vector Acoustic Sensors

    DTIC Science & Technology

    2007-12-01

    infidelity. Directionality also can be lost when two waves from different directions arrive simultaneously. Figure 3 shows a hodograph of the direct... (red) deviated substantially from the axis... [Figure 3: hodograph of the x-axis response] ...the sensor motions caused by the scattered waves from the targets. This hodograph illustrates the directional information in vector acoustic data

  11. Vertical profiles of the 3-D wind velocity retrieved from multiple wind lidars performing triple range-height-indicator scans

    DOE PAGES

    Debnath, Mithu; Iungo, G. Valerio; Ashton, Ryan; ...

    2017-02-06

    Vertical profiles of 3-D wind velocity are retrieved from triple range-height-indicator (RHI) scans performed with multiple simultaneous scanning Doppler wind lidars. This test is part of the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign carried out at the Boulder Atmospheric Observatory. The three wind velocity components are retrieved and then compared with the data acquired through various profiling wind lidars and high-frequency wind data obtained from sonic anemometers installed on a 300 m meteorological tower. The results show that the magnitude of the horizontal wind velocity and the wind direction obtained from the triple RHI scans are generally retrieved with good accuracy. However, poor accuracy is obtained for the evaluation of the vertical velocity, which is mainly due to its typically smaller magnitude and to the error propagation connected with the data retrieval procedure and accuracy in the experimental setup.

  12. Vertical profiles of the 3-D wind velocity retrieved from multiple wind lidars performing triple range-height-indicator scans

    SciTech Connect

    Debnath, Mithu; Iungo, G. Valerio; Ashton, Ryan; Brewer, W. Alan; Choukulkar, Aditya; Delgado, Ruben; Lundquist, Julie K.; Shaw, William J.; Wilczak, James M.; Wolfe, Daniel

    2017-01-01

    Vertical profiles of 3-D wind velocity are retrieved from triple range-height-indicator (RHI) scans performed with multiple simultaneous scanning Doppler wind lidars. This test is part of the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign carried out at the Boulder Atmospheric Observatory. The three wind velocity components are retrieved and then compared with the data acquired through various profiling wind lidars and high-frequency wind data obtained from sonic anemometers installed on a 300 m meteorological tower. The results show that the magnitude of the horizontal wind velocity and the wind direction obtained from the triple RHI scans are generally retrieved with good accuracy. However, poor accuracy is obtained for the evaluation of the vertical velocity, which is mainly due to its typically smaller magnitude and to the error propagation connected with the data retrieval procedure and accuracy in the experimental setup.

  13. Vertical profiles of the 3-D wind velocity retrieved from multiple wind lidars performing triple range-height-indicator scans

    NASA Astrophysics Data System (ADS)

    Debnath, Mithu; Valerio Iungo, G.; Ashton, Ryan; Brewer, W. Alan; Choukulkar, Aditya; Delgado, Ruben; Lundquist, Julie K.; Shaw, William J.; Wilczak, James M.; Wolfe, Daniel

    2017-02-01

    Vertical profiles of 3-D wind velocity are retrieved from triple range-height-indicator (RHI) scans performed with multiple simultaneous scanning Doppler wind lidars. This test is part of the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign carried out at the Boulder Atmospheric Observatory. The three wind velocity components are retrieved and then compared with the data acquired through various profiling wind lidars and high-frequency wind data obtained from sonic anemometers installed on a 300 m meteorological tower. The results show that the magnitude of the horizontal wind velocity and the wind direction obtained from the triple RHI scans are generally retrieved with good accuracy. However, poor accuracy is obtained for the evaluation of the vertical velocity, which is mainly due to its typically smaller magnitude and to the error propagation connected with the data retrieval procedure and accuracy in the experimental setup.
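
    A minimal sketch of the retrieval that such multi-Doppler scans rely on: each lidar measures the projection of the wind vector onto its beam, so three (or more) radial velocities determine the 3-D wind by least squares. The beam geometry and radial speeds are hypothetical.

```python
# Retrieve the 3-D wind vector from three Doppler lidar radial velocities.
import numpy as np

def beam(az_deg, el_deg):
    """Unit vector of a lidar beam from azimuth and elevation angles."""
    az, el = np.deg2rad(az_deg), np.deg2rad(el_deg)
    return np.array([np.cos(el) * np.sin(az),   # east
                     np.cos(el) * np.cos(az),   # north
                     np.sin(el)])               # up

# Three lidars probing the same volume from different directions.
A = np.vstack([beam(30, 10), beam(150, 12), beam(270, 8)])
v_radial = np.array([4.2, -1.3, -3.6])          # measured radial speeds, m/s

wind, *_ = np.linalg.lstsq(A, v_radial, rcond=None)
print("u, v, w =", np.round(wind, 2), "m/s")
```

    Note that with low elevation angles the vertical column of the geometry matrix is small, so radial-velocity errors are amplified in the retrieved vertical component, consistent with the poorer vertical-velocity accuracy reported above.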

  14. Colored 3D surface reconstruction using Kinect sensor

    NASA Astrophysics Data System (ADS)

    Guo, Lian-peng; Chen, Xiang-ning; Chen, Ying; Liu, Bin

    2015-03-01

    A colored 3D surface reconstruction method that effectively fuses the information of both depth and color images from a Microsoft Kinect is proposed and demonstrated experimentally. Kinect depth images are processed with an improved joint-bilateral filter based on region segmentation, which efficiently combines the depth and color data to improve depth quality. The registered depth data are integrated into a surface reconstruction through the colored truncated signed distance fields presented in this paper. Finally, improved ray casting for rendering the fully colored surface is implemented to estimate the color texture of the reconstructed object. For depth and color images of a toy car, the improved joint-bilateral filter based on region segmentation improves the quality of the depth images with a peak signal-to-noise ratio (PSNR) gain of approximately 4.57 dB, compared with 1.16 dB for the standard joint-bilateral filter. The colored reconstruction results for the toy car demonstrate the suitability and capability of the proposed method.
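
    The color-guided depth filtering idea (without the paper's region-segmentation refinement) can be sketched with the joint bilateral filter available in OpenCV's ximgproc module (requires opencv-contrib-python); the input images and filter parameters are hypothetical.

```python
# Smooth a Kinect-style depth map while preserving edges present in the
# registered color image, via a joint (cross) bilateral filter.
import cv2
import numpy as np

depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)
color = cv2.imread("color.png").astype(np.float32)  # guidance image

filtered = cv2.ximgproc.jointBilateralFilter(
    color, depth, d=9, sigmaColor=25.0, sigmaSpace=7.0)

cv2.imwrite("depth_filtered.png", filtered.astype(np.uint16))
```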

  15. Design of 3D scanner for surface contour mapping by ultrasonic sensor

    NASA Astrophysics Data System (ADS)

    Munir, Muhammad Miftahul; Billah, Mohammad Aziz; Surachman, Arif; Budiman, Maman; Khairurrijal

    2015-04-01

    Surface mapping systems have attracted great attention due to their potential applications in many areas. In this paper, a simple 3D scanner based on ultrasonic sensors was designed for mapping the contour of an object's surface. The scanner uses SRF02 ultrasonic sensors, a microcontroller and a radio frequency (RF) module to collect coordinates of the object surface (a point cloud) and send the data to a computer. The point cloud collection is performed by moving two ultrasonic sensors in the y and x directions; both sensors measure the distance from the object surface to a reference point on each sensor. The measurement results represent the point cloud of the object surface, and the data are sent to the computer via the RF module. The point cloud is then converted to a 3D model using MATLAB. It was found that object contours can be reconstructed very well by the developed 3D scanner system.

  16. Design Principles for Rapid Prototyping Forces Sensors using 3D Printing

    PubMed Central

    Kesner, Samuel B.; Howe, Robert D.

    2011-01-01

    Force sensors provide critical information for robot manipulators, manufacturing processes, and haptic interfaces. Commercial force sensors, however, are generally not adapted to specific system requirements, resulting in sensors with excess size, cost, and fragility. To overcome these issues, 3D printers can be used to create components for the quick and inexpensive development of force sensors. Limitations of this rapid prototyping technology, however, require specialized design principles. In this paper, we discuss techniques for rapidly developing simple force sensors, including selecting and attaching metal flexures, using inexpensive and simple displacement transducers, and 3D printing features to aid in assembly. These design methods are illustrated through the design and fabrication of a miniature force sensor for the tip of a robotic catheter system. The resulting force sensor prototype can measure forces with an accuracy of as low as 2% of the 10 N measurement range. PMID:21874102

  17. 3D position estimation using a single coil and two magnetic field sensors.

    PubMed

    Tadayon, P; Staude, G; Felderhoff, T

    2015-01-01

    This paper presents an algorithm that estimates the relative 3D position of a sensor module containing two magnetic sensors with respect to a magnetic field source using a single transmitting coil. Starting from a description of the ambiguity problem caused by using a single coil, a system concept comprising two sensors in a fixed spatial relation to each other is introduced, which enables the unique determination of the sensors' position in 3D space. For this purpose, an iterative two-step algorithm is presented: in the first step, the data of one sensor are used to limit the number of possible position solutions; in the second step, the spatial relation between the sensors is used to determine the correct sensor position.

  18. Design Principles for Rapid Prototyping Forces Sensors using 3D Printing.

    PubMed

    Kesner, Samuel B; Howe, Robert D

    2011-07-21

    Force sensors provide critical information for robot manipulators, manufacturing processes, and haptic interfaces. Commercial force sensors, however, are generally not adapted to specific system requirements, resulting in sensors with excess size, cost, and fragility. To overcome these issues, 3D printers can be used to create components for the quick and inexpensive development of force sensors. Limitations of this rapid prototyping technology, however, require specialized design principles. In this paper, we discuss techniques for rapidly developing simple force sensors, including selecting and attaching metal flexures, using inexpensive and simple displacement transducers, and 3D printing features to aid in assembly. These design methods are illustrated through the design and fabrication of a miniature force sensor for the tip of a robotic catheter system. The resulting force sensor prototype can measure forces with an accuracy of as low as 2% of the 10 N measurement range.

  19. Optimized data processing for an optical 3D sensor based on flying triangulation

    NASA Astrophysics Data System (ADS)

    Ettl, Svenja; Arold, Oliver; Häusler, Gerd; Gurov, Igor; Volkov, Mikhail

    2013-05-01

    We present data processing methods for an optical 3D sensor based on the measurement principle "Flying Triangulation". The principle enables a motion-robust acquisition of the 3D shape of even complex objects: a hand-held sensor is freely guided around the object while real-time feedback of the measurement progress is delivered during the capture. Although of high precision, the resulting 3D data may exhibit some weaknesses: e.g., outliers might be present and the data size might be too large. We describe the measurement principle and the data processing, and conclude with measurement results.

  20. 3D-Modeling of Vegetation from Lidar Point Clouds and Assessment of its Impact on Façade Solar Irradiation

    NASA Astrophysics Data System (ADS)

    Peronato, G.; Rey, E.; Andersen, M.

    2016-10-01

    The presence of vegetation can significantly affect the solar irradiation received on building surfaces. Due to the complex shape and seasonal variability of vegetation geometry, this topic has gained much attention from researchers. However, existing methods are limited to rooftops, as they are based on 2.5D geometry and use simplified radiation algorithms based on view-sheds. This work contributes to overcoming some of these limitations by providing support for 3D geometry so as to include facades. Thanks to the use of ray-tracing-based simulations and detailed characterization of the 3D surfaces, we can also account for inter-reflections, which might have a significant impact on façade irradiation. In order to construct confidence intervals on our results, we modeled vegetation from LiDAR point clouds as 3D convex hulls, which provide the largest volume and hence the most conservative obstruction scenario. The limits of the confidence intervals were characterized with some extreme scenarios (e.g. opaque trees and absence of trees). Results show that the uncertainty can vary significantly depending on the characteristics of the urban area and the granularity of the analysis (sensor, building and group of buildings). We argue that this method can give us a better understanding of the uncertainties due to vegetation in the assessment of solar irradiation in urban environments, and therefore of the potential for the installation of solar energy systems.
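
    A minimal sketch of the vegetation-modeling step described above: a 3D convex hull around one tree's LiDAR points, giving the largest, and hence most obstructive, plausible crown volume. The clustered input array is hypothetical.

```python
# 3D convex hull around one tree's LiDAR points (hypothetical input).
import numpy as np
from scipy.spatial import ConvexHull

tree_points = np.load("tree_cluster.npy")   # (N, 3) points of one tree
hull = ConvexHull(tree_points)

print(f"hull volume: {hull.volume:.1f} m^3")
print(f"hull facets: {len(hull.simplices)}")
# hull.points[hull.vertices] gives the hull's corner points; the simplices
# can be exported as a triangle mesh for the ray-tracing simulation.
```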

  1. UAS Topographic Mapping with Velodyne LiDAR Sensor

    NASA Astrophysics Data System (ADS)

    Jozkow, G.; Toth, C.; Grejner-Brzezinska, D.

    2016-06-01

    Unmanned Aerial System (UAS) technology is now widely used for small-area topographic mapping due to its low cost and the good quality of the derived products. Since the cameras typically used with UAS have some limitations, e.g. they cannot penetrate vegetation, LiDAR sensors are increasingly getting attention in UAS mapping. Sensor development has reached the point where cost and size suit the UAS platform; still, LiDAR on UAS is an emerging technology. One issue related to using LiDAR sensors on UAS is the limited performance of the navigation sensors used on UAS platforms. Therefore, various hardware and software solutions are being investigated to increase the quality of UAS LiDAR point clouds. This work analyses several aspects of UAS LiDAR point cloud generation performance based on UAS flights conducted with the Velodyne laser scanner and cameras. Attention was primarily paid to the trajectory reconstruction performance that is essential for accurate point cloud georeferencing. Since the navigation sensors, especially Inertial Measurement Units (IMUs), may not be of sufficient performance, the estimated camera poses could help increase the robustness of the estimated trajectory and, subsequently, the accuracy of the point cloud. The accuracy of the final UAS LiDAR point cloud was evaluated on the basis of the generated DSM, including comparison with point clouds obtained from dense image matching. The results showed the need for more investigation of the MEMS IMU sensors used for UAS trajectory reconstruction. The accuracy of the UAS LiDAR point cloud, though lower than that of the point cloud obtained from images, may still be sufficient for certain mapping applications where optical imagery is not useful.

  2. Test beam results of 3D silicon pixel sensors for the ATLAS upgrade

    NASA Astrophysics Data System (ADS)

    Grenier, P.; Alimonti, G.; Barbero, M.; Bates, R.; Bolle, E.; Borri, M.; Boscardin, M.; Buttar, C.; Capua, M.; Cavalli-Sforza, M.; Cobal, M.; Cristofoli, A.; Dalla Betta, G.-F.; Darbo, G.; Da Vià, C.; Devetak, E.; DeWilde, B.; Di Girolamo, B.; Dobos, D.; Einsweiler, K.; Esseni, D.; Fazio, S.; Fleta, C.; Freestone, J.; Gallrapp, C.; Garcia-Sciveres, M.; Gariano, G.; Gemme, C.; Giordani, M.-P.; Gjersdal, H.; Grinstein, S.; Hansen, T.; Hansen, T.-E.; Hansson, P.; Hasi, J.; Helle, K.; Hoeferkamp, M.; Hügging, F.; Jackson, P.; Jakobs, K.; Kalliopuska, J.; Karagounis, M.; Kenney, C.; Köhler, M.; Kocian, M.; Kok, A.; Kolya, S.; Korokolov, I.; Kostyukhin, V.; Krüger, H.; La Rosa, A.; Lai, C. H.; Lietaer, N.; Lozano, M.; Mastroberardino, A.; Micelli, A.; Nellist, C.; Oja, A.; Oshea, V.; Padilla, C.; Palestri, P.; Parker, S.; Parzefall, U.; Pater, J.; Pellegrini, G.; Pernegger, H.; Piemonte, C.; Pospisil, S.; Povoli, M.; Roe, S.; Rohne, O.; Ronchin, S.; Rovani, A.; Ruscino, E.; Sandaker, H.; Seidel, S.; Selmi, L.; Silverstein, D.; Sjøbæk, K.; Slavicek, T.; Stapnes, S.; Stugu, B.; Stupak, J.; Su, D.; Susinno, G.; Thompson, R.; Tsung, J.-W.; Tsybychev, D.; Watts, S. J.; Wermes, N.; Young, C.; Zorzi, N.

    2011-05-01

    Results on beam tests of 3D silicon pixel sensors aimed at the ATLAS Insertable B-Layer and High Luminosity LHC (HL-LHC) upgrades are presented. Measurements include charge collection, tracking efficiency and charge sharing between pixel cells, as a function of track incident angle, and were performed with and without a 1.6 T magnetic field oriented as the ATLAS inner detector solenoid field. Sensors were bump-bonded to the front-end chip currently used in the ATLAS pixel detector. Full 3D sensors, with electrodes penetrating through the entire wafer thickness and active edge, and double-sided 3D sensors with partially overlapping bias and read-out electrodes were tested and showed comparable performance.

  3. Volumetric LiDAR scanning of a wind turbine wake and comparison with a 3D analytical wake model

    NASA Astrophysics Data System (ADS)

    Carbajo Fuertes, Fernando; Porté-Agel, Fernando

    2016-04-01

    A correct estimation of future power production is of capital importance whenever the feasibility of a future wind farm is being studied. This power estimation relies mostly on three aspects: (1) a reliable measurement of the wind resource in the area, (2) a well-established power curve of the future wind turbines and (3) an accurate characterization of the wake effects; the latter is arguably the most challenging one due to the complexity of the phenomenon and the lack of extensive full-scale data sets that could be used to validate analytical or numerical models. The current project addresses the problem of obtaining a volumetric description of a full-scale wake of a 2 MW wind turbine, in terms of velocity deficit and turbulence intensity, using three scanning wind LiDARs and two sonic anemometers. The characterization of the upstream flow conditions is done by one scanning LiDAR and two sonic anemometers, which have been used to calculate incoming vertical profiles of horizontal wind speed, wind direction and an approximation to turbulence intensity, as well as the thermal stability of the atmospheric boundary layer. The characterization of the wake is done by two scanning LiDARs working simultaneously and pointing downstream from the base of the wind turbine. The direct LiDAR measurements, in terms of radial wind speed, can be corrected using the upstream conditions in order to provide good estimations of the horizontal wind speed at any point downstream of the wind turbine. All these data combined allow for the volumetric reconstruction of the wake in terms of velocity deficit as well as turbulence intensity. Finally, the predictions of a 3D analytical model [1] are compared to the 3D LiDAR measurements of the wind turbine wake. The model is derived by applying the laws of conservation of mass and momentum and assuming a Gaussian distribution for the velocity deficit in the wake; it has already been validated using high-resolution wind-tunnel measurements.
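
    A minimal sketch of a Gaussian wake model of this kind, with the velocity deficit derived from mass and momentum conservation and a Gaussian radial profile (following the parameterization of Bastankhah and Porté-Agel, 2014, which may or may not be the exact model [1] compared in this study); the turbine parameters are hypothetical.

```python
# Gaussian wake model sketch: fractional velocity deficit downstream of a
# turbine of diameter d with thrust coefficient ct (hypothetical values).
import numpy as np

def gaussian_wake_deficit(x, r, d=80.0, ct=0.8, k_star=0.035):
    """Fractional velocity deficit at downwind distance x and radial offset r."""
    beta = 0.5 * (1 + np.sqrt(1 - ct)) / np.sqrt(1 - ct)
    sigma = k_star * x + 0.2 * np.sqrt(beta) * d      # wake width, m
    c = 1 - np.sqrt(1 - ct / (8 * (sigma / d) ** 2))  # centerline deficit
    return c * np.exp(-r**2 / (2 * sigma**2))

u_inf = 10.0                                          # inflow wind speed, m/s
for x_d in (3, 5, 7):                                 # downwind distance in D
    deficit = gaussian_wake_deficit(x_d * 80.0, r=0.0)
    print(f"x = {x_d}D: centerline speed ~ {u_inf * (1 - deficit):.2f} m/s")
```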

  4. 3D turbulence measurements in inhomogeneous boundary layers with three wind LiDARs

    NASA Astrophysics Data System (ADS)

    Carbajo Fuertes, Fernando; Valerio Iungo, Giacomo; Porté-Agel, Fernando

    2014-05-01

    One of the most challenging tasks in atmospheric anemometry is obtaining reliable turbulence measurements of inhomogeneous boundary layers at heights or in locations where it is not possible or convenient to install tower-based measurement systems, e.g. mountainous terrain, cities, wind farms, etc. Wind LiDARs are being used extensively for the measurement of averaged vertical wind profiles, but they can only accomplish this task successfully under the limiting conditions of flat terrain and horizontally homogeneous flow. Moreover, it has been shown that common scanning strategies introduce large systematic errors in turbulence measurements, regardless of the characteristics of the flow addressed. From a research point of view, a variety of techniques and scanning strategies exist to estimate different turbulence quantities, but most of them rely on the combination of raw measurements with atmospheric models, and most of those models are only valid under the assumption of horizontal homogeneity. The limitations stated above can be overcome by a new triple-LiDAR technique which uses simultaneous measurements from three intersecting Doppler wind LiDARs. It allows for the reconstruction of the three-dimensional velocity vector in time, as well as local velocity gradients, without the need for any turbulence model and with minimal assumptions [EGU2013-9670]. The triple-LiDAR technique has been applied to the study of the flow over the EPFL campus in Lausanne (Switzerland). The results show the potential of the technique for the measurement of turbulence in highly complex boundary layer flows; it is particularly useful for micrometeorology and wind engineering studies.

  5. Three-dimensional (3D) GIS-based coastline change analysis and display using LIDAR series data

    NASA Astrophysics Data System (ADS)

    Zhou, G.

    This paper presents a method to visualize and analyze topography and topographic changes in a coastline area. The study area, Assateague Island National Seashore (AINS), extends along a 37-mile stretch of Assateague Island on the Eastern Shore of Virginia. DEM data sets from 1996 through 2000 were created for various time intervals, e.g., year-to-year, season-to-season, date-to-date, and the full four years (1996-2000). The spatial patterns and volumetric amounts of erosion and deposition of each part were calculated on a cell-by-cell basis. A 3D dynamic display system for visualizing dynamic coastal landforms has been developed using ArcView Avenue. The system comprises five functional modules: Dynamic Display, Analysis, Chart Analysis, Output, and Help. The Display module includes five types of displays: Shoreline Display, Shore Topographic Profile, Shore Erosion Display, Surface TIN Display, and 3D Scene Display. Visualized data include rectified and co-registered multispectral Landsat digital imagery and NOAA/NASA ATM LIDAR data. The system is demonstrated using multitemporal digital satellite and LIDAR data displaying changes on the Assateague Island National Seashore, Virginia. The results demonstrate that further study and comparison of the complex morphological changes that occur naturally or are human-induced on barrier islands is required.
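
    A minimal sketch of the cell-by-cell erosion/deposition computation described above: two co-registered DEMs are differenced and the positive and negative changes are integrated separately. The inputs and cell size are hypothetical.

```python
# Cell-by-cell DEM differencing with erosion/deposition volume integration.
import numpy as np

dem_1996 = np.load("dem_1996.npy")    # elevation grids, meters
dem_2000 = np.load("dem_2000.npy")
cell_area = 2.0 * 2.0                 # m^2 per grid cell

dz = dem_2000 - dem_1996              # per-cell elevation change
deposition = dz[dz > 0].sum() * cell_area    # m^3 gained
erosion = -dz[dz < 0].sum() * cell_area      # m^3 lost

print(f"deposition: {deposition:.0f} m^3, erosion: {erosion:.0f} m^3")
print(f"net change: {deposition - erosion:.0f} m^3")
```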

  6. 3D Wind Reconstruction and Turbulence Estimation in the Boundary Layer from Doppler Lidar Measurements using Particle Method

    NASA Astrophysics Data System (ADS)

    Rottner, L.; Baehr, C.

    2014-12-01

    Turbulent phenomena in the atmospheric boundary layer (ABL) are characterized by small spatial and temporal scales which make them difficult to observe and to model. New remote sensing instruments, like Doppler Lidar, give access to fine, high-frequency observations of wind in the ABL. This study proposes using a method of nonlinear estimation based on these observations to reconstruct the 3D wind in a hemispheric volume and to estimate atmospheric turbulence parameters. The wind observations are associated with particle systems which are driven by a local turbulence model; the particles have both fluid and stochastic properties, so spatial averages and covariances may be deduced from them. Among the innovative aspects, we point out the absence of the common hypothesis of stationary-ergodic turbulence and the non-use of a particle model closure hypothesis. Every time observations are available, the 3D wind is reconstructed and turbulence parameters such as turbulent kinetic energy, dissipation rate, and turbulence intensity (TI) are provided. This study presents results obtained using real wind measurements provided by a five-line-of-sight Lidar. Compared with classical methods (e.g. eddy covariance), our technique yields equivalent long-term results; moreover, it provides finer, real-time turbulence estimates. To assess this new method, we computed TI independently using different observation types. First, anemometer data were used to provide a TI reference; then raw and filtered Lidar observations were compared. The TI obtained from raw data is significantly higher than the reference, whereas the TI estimated with the new algorithm is of the same order. In this study we have presented a new class of algorithms to reconstruct local random media. It offers a new way to understand turbulence in the ABL, in both stable and convective conditions. Later, it could be used to refine turbulence parametrizations in meteorological meso-scale models.
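
    For reference, the turbulence intensity used in the comparison above is simply the ratio of the standard deviation to the mean of the horizontal wind speed over an averaging window; a minimal sketch with a hypothetical input series:

```python
# Turbulence intensity (TI) from a horizontal wind speed time series.
import numpy as np

u = np.load("wind_speed.npy")        # wind speed samples over one window, m/s

ti = u.std() / u.mean()
print(f"turbulence intensity: {ti:.3f}")
```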

  7. Study on embedding fiber Bragg grating sensor into the 3D printing structure for health monitoring

    NASA Astrophysics Data System (ADS)

    Li, Ruiya; Tan, Yuegang; Zhou, Zude; Fang, Liang; Chen, Yiyang

    2016-10-01

    3D printing technology is a rapidly developing manufacturing technology, known as a core technology of the third industrial revolution. With the continuous improvement of 3D-printed products, health monitoring of 3D-printed structures is becoming particularly important. Fiber Bragg grating (FBG) sensing is a new type of optical sensing technology with unique advantages over traditional sensing technologies, and it has great application prospects in structural health monitoring. In this paper, FBG sensors embedded in the internal structure of a 3D print were used to monitor the static and dynamic strain variation of the 3D-printed structure during loading. The theoretical and experimental results show good consistency, and the characteristic frequency detected by the FBG sensor in the dynamic experiment is consistent with the results from a traditional accelerometer. These results preliminarily validate that FBGs embedded in a 3D-printed structure can effectively detect its static and dynamic strain changes, providing guidance for the health monitoring of 3D-printed structures.
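
    A minimal sketch of how strain is read from an FBG: the Bragg wavelength shift is proportional to strain through the effective photo-elastic coefficient. The values below are typical textbook numbers, not from the paper.

```python
# Strain from an FBG wavelength shift: delta_lambda / lambda = (1 - p_e) * strain.
lambda_b = 1550.0e-9     # nominal Bragg wavelength, m
p_e = 0.22               # effective photo-elastic coefficient of silica fiber

delta_lambda = 0.12e-9   # measured wavelength shift, m
strain = delta_lambda / (lambda_b * (1 - p_e))
print(f"strain ~ {strain * 1e6:.0f} microstrain")
```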

  8. Comparison of 2D and 3D Displays and Sensor Fusion for Threat Detection, Surveillance, and Telepresence

    DTIC Science & Technology

    2003-05-19

    ...camouflaged threats are compared on a two-dimensional (2D) display and a three-dimensional (3D) display. A 3D display is compared alongside a 2D... technologies that take advantage of 3D and sensor fusion will be discussed... Computer-driven interactive 3D imaging has made...

  9. Implicit Regularization for Reconstructing 3D Building Rooftop Models Using Airborne LiDAR Data

    PubMed Central

    Jung, Jaewook; Jwa, Yoonseok; Sohn, Gunho

    2017-01-01

    With rapid urbanization, highly accurate and semantically rich virtualization of building assets in 3D becomes more critical for supporting various applications, including urban planning, emergency response and location-based services. Many research efforts have been conducted to automatically reconstruct building models at city scale from remotely sensed data. However, developing a fully automated photogrammetric computer vision system enabling the massive generation of highly accurate building models still remains a challenging task. One of the most challenging tasks in 3D building model reconstruction is to regularize the noise introduced into the boundary of a building object retrieved from raw data, in the absence of knowledge of its true shape. This paper proposes a data-driven modeling approach to reconstruct 3D rooftop models at city scale from airborne laser scanning (ALS) data. The focus of the proposed method is to implicitly derive the shape regularity of 3D building rooftops from given noisy information of the building boundary in a progressive manner. This study covers a full chain of 3D building modeling, from low-level processing to realistic 3D building rooftop modeling. In the element clustering step, building-labeled point clouds are clustered into homogeneous groups by applying height similarity and plane similarity. Based on the segmented clusters, linear modeling cues including outer boundaries, intersection lines, and step lines are extracted. Topology elements among the modeling cues are recovered by the Binary Space Partitioning (BSP) technique. The regularity of the building rooftop model is achieved by an implicit regularization process in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). The parameters governing the MDL optimization are automatically estimated based on Min-Max optimization and an entropy-based weighting method. The performance of the proposed method is tested over the International Society for

  10. Toward 3D reconstruction of outdoor scenes using an MMW radar and a monocular vision sensor.

    PubMed

    Natour, Ghina El; Ait-Aider, Omar; Rouveure, Raphael; Berry, François; Faure, Patrice

    2015-10-14

    In this paper, we introduce a geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera. We rely on the complementarity of these two sensors, considering the robustness to environmental conditions and the depth detection ability of the radar on the one hand, and the high spatial resolution of a vision sensor on the other. Firstly, geometric modeling of each sensor and of the entire system is presented. Secondly, we address the global calibration problem, which consists of finding the exact transformation between the sensors' coordinate systems. Two implementation methods are proposed and compared, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Unlike existing methods, no special configuration of the 3D points is required for calibration, which makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple yet robust 3D reconstruction method based on the sensors' geometry. This method enables one to reconstruct observed features in 3D from a single acquisition (static sensor), a condition that is not always met in the state of the art for outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data.

  11. 3D-FBK pixel sensors with CMS readout: First test results

    NASA Astrophysics Data System (ADS)

    Obertino, M.; Solano, A.; Vilela Pereira, A.; Alagoz, E.; Andresen, J.; Arndt, K.; Bolla, G.; Bortoletto, D.; Boscardin, M.; Brosius, R.; Bubna, M.; Dalla Betta, G.-F.; Jensen, F.; Krzywda, A.; Kumar, A.; Kwan, S.; Lei, C. M.; Menasce, D.; Moroni, L.; Ngadiuba, J.; Osipenkov, I.; Perera, L.; Povoli, M.; Prosser, A.; Rivera, R.; Shipsey, I.; Tan, P.; Terzo, S.; Uplegger, L.; Wagner, S.; Dinardo, M.

    2013-08-01

    Silicon 3D detectors consist of an array of columnar electrodes of both doping types which penetrate entirely into the detector bulk, perpendicular to the surface. They are emerging as one of the most promising technologies for the innermost layers of tracking devices in the foreseen upgrades of the LHC. Until recently, the properties of 3D sensors had been investigated mostly with ATLAS readout electronics. 3D pixel sensors compatible with the CMS readout were first fabricated at SINTEF (Oslo, Norway), and more recently at FBK (Trento, Italy) and CNM (Barcelona, Spain). Several sensors with different electrode configurations, bump-bonded to the CMS pixel PSI46 readout chip, were characterized in the laboratory and tested at Fermilab with a proton beam of 120 GeV/c. Preliminary results of the data analysis are presented.

  12. Simulations of 3D-Si sensors for the innermost layer of the ATLAS pixel upgrade

    NASA Astrophysics Data System (ADS)

    Baselga, M.; Pellegrini, G.; Quirion, D.

    2017-03-01

    The LHC is expected to deliver integrated luminosities of up to 3000 fb-1, and the innermost layer of the ATLAS upgrade must cope with higher occupancy and a smaller pixel size. 3D-Si sensors are a good candidate for the innermost layer of the ATLAS pixel upgrade since they exhibit good performance under high fluences, and the new designs will have a smaller pixel size to meet the expectations of the electronics. This paper reports TCAD simulations of the 3D-Si sensors designed at IMB-CNM with non-passing-through columns that are being fabricated for the next innermost layer of the ATLAS pixel upgrade. It shows the charge collection response before and after irradiation, and the response of 3D-Si sensors located at large η angles.

  13. Airborne Coherent Lidar for Advanced In-Flight Measurements (ACLAIM) Flight Testing of the Lidar Sensor

    NASA Technical Reports Server (NTRS)

    Soreide, David C.; Bogue, Rodney K.; Ehernberger, L. J.; Hannon, Stephen M.; Bowdle, David A.

    2000-01-01

    The purpose of the ACLAIM program is ultimately to establish the viability of light detection and ranging (lidar) as a forward-looking sensor for turbulence. The goals of this flight test are to: 1) demonstrate that the ACLAIM lidar system operates reliably in a flight test environment, 2) measure the performance of the lidar as a function of the aerosol backscatter coefficient (beta), 3) use the lidar system to measure atmospheric turbulence and compare these measurements to onboard gust measurements, and 4) make measurements of the aerosol backscatter coefficient, its probability distribution and spatial distribution. The scope of this paper is to briefly describe the ACLAIM system and present examples of ACLAIM operation in flight, including comparisons with independent measurements of wind gusts, gust-induced normal acceleration, and the derived eddy dissipation rate.

  14. A Simple, Low-Cost Conductive Composite Material for 3D Printing of Electronic Sensors

    PubMed Central

    Leigh, Simon J.; Bradley, Robert J.; Purssell, Christopher P.; Billson, Duncan R.; Hutchins, David A.

    2012-01-01

    3D printing technology can produce complex objects directly from computer-aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes (‘rapid prototyping’) before production. In recent years, however, there has been a move to adopt the technology as a full-scale manufacturing solution. The advent of low-cost, desktop 3D printers such as the RepRap and Fab@Home has meant that a wider user base is now able to access desktop manufacturing platforms, enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present the formulation of a simple conductive thermoplastic composite we term ‘carbomorph’ and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices along with printed objects with embedded sensing capability. This advance in low-cost 3D printing will offer a new paradigm in the 3D printing field, with printed sensors and electronics embedded inside 3D printed objects in a single build process, without requiring complex or expensive materials incorporating additives such as carbon nanotubes. PMID:23185319

  15. Pt nanoparticles functionalized 3D SnO2 nanoflowers for gas sensor application

    NASA Astrophysics Data System (ADS)

    Liu, Yinglin; Huang, Jing; Yang, Jiedi; Wang, Shurong

    2017-04-01

    3D SnO2 nanoflowers (NFs) assembled from rod-like nanostructures were synthesized by a facile hydrothermal method using only simple and inexpensive SnCl4·5H2O and NaOH as the starting materials, without any surfactants or templates. The as-synthesized 3D SnO2 NFs were further functionalized with Pt nanoparticles (NPs) by a simple ammonia precipitation method, and the derived Pt NP-functionalized 3D SnO2 NFs were investigated for gas sensor application using ethanol as a probe gas. The results showed that the Pt NP-functionalized 3D SnO2 NF sensor exhibited a much higher response than the pure SnO2 sensor, together with short response/recovery times and good reproducibility. The enhanced gas sensing performance can be attributed to the spill-over effect of the Pt NPs in promoting gas sensing reactions, the synergistic electronic interaction between the Pt NPs and the SnO2 support, the high surface-to-volume ratio and good electron mobility of the 1D SnO2 nanorod units, and the unique 3D hierarchical flower-like nanostructure. It is also expected that the as-prepared 3D SnO2 NFs and the Pt NP-functionalized product can be used in other fields such as optoelectronic devices, Li-ion batteries and dye-sensitized solar cells.

  16. A simple, low-cost conductive composite material for 3D printing of electronic sensors.

    PubMed

    Leigh, Simon J; Bradley, Robert J; Purssell, Christopher P; Billson, Duncan R; Hutchins, David A

    2012-01-01

    3D printing technology can produce complex objects directly from computer-aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes ('rapid prototyping') before production. In recent years, however, there has been a move to adopt the technology as a full-scale manufacturing solution. The advent of low-cost, desktop 3D printers such as the RepRap and Fab@Home has meant that a wider user base is now able to access desktop manufacturing platforms, enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present the formulation of a simple conductive thermoplastic composite we term 'carbomorph' and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices along with printed objects with embedded sensing capability. This advance in low-cost 3D printing will offer a new paradigm in the 3D printing field, with printed sensors and electronics embedded inside 3D printed objects in a single build process, without requiring complex or expensive materials incorporating additives such as carbon nanotubes.

  17. Automatic 3D Building Model Generation by Integrating LiDAR and Aerial Images Using a Hybrid Approach

    NASA Astrophysics Data System (ADS)

    Kwak, Eunju

    The development of sensor technologies and the increase in user requirements have resulted in many different approaches for efficient building model generation. Three-dimensional building models are important in various applications, such as disaster management and urban planning. Despite this importance, the generation of these models lacks economical and reliable techniques that take advantage of the available multi-sensory data from single and multiple platforms. Therefore, this research develops a framework for fully automated building model generation by integrating data-driven and model-driven methods and exploiting the advantages of both image and LiDAR datasets. The building model generation starts by employing LiDAR data for building detection and approximate boundary determination. The generated building boundaries are then integrated into a model-based image processing strategy, because LiDAR-derived planes show irregular boundaries due to the nature of LiDAR point acquisition. The focus of the research is generating models for buildings with right-angled corners, which can be described by a collection of rectangles (e.g., L-shape, T-shape, U-shape, gable roofs, and more complex building shapes which are combinations of the aforementioned shapes), under the assumption that the majority of buildings in urban areas belong to this category. Therefore, by applying the Minimum Bounding Rectangle (MBR) algorithm recursively, the LiDAR boundaries are decomposed into sets of rectangles for further processing, as sketched below. At the same time, the quality of the MBRs is examined to verify that the buildings from which the boundaries were generated are buildings with right-angled corners. These rectangles are preliminary model primitives. The parameters that define the model primitives are adjusted using detected edges in the imagery through a least-squares adjustment procedure, i.e., model-based image fitting. The level of detail in the final Digital Building Model
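
    One plausible reading of the recursive MBR decomposition, as a Python sketch using Shapely; the fit threshold, the signed-rectangle bookkeeping and all parameter names are illustrative assumptions, not the thesis's exact algorithm.

        from shapely.geometry import Polygon

        def decompose(poly, sign=1, min_area=1.0, fit=0.95):
            # Recursively approximate a right-angled footprint by signed
            # rectangles: the MBR counts positively; parts of the MBR not
            # covered by the footprint are decomposed with opposite sign.
            mbr = poly.minimum_rotated_rectangle
            rects = [(mbr, sign)]
            if poly.area / mbr.area >= fit:   # MBR already fits well enough
                return rects
            diff = mbr.difference(poly)
            parts = list(diff.geoms) if hasattr(diff, "geoms") else [diff]
            for part in parts:
                if part.area > min_area:
                    rects += decompose(part, -sign, min_area, fit)
            return rects

        # e.g. an L-shaped footprint yields one positive rectangle (the MBR)
        # and one negative rectangle (the notch).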

  18. Investigation of leakage current and breakdown voltage in irradiated double-sided 3D silicon sensors

    NASA Astrophysics Data System (ADS)

    Dalla Betta, G.-F.; Ayllon, N.; Boscardin, M.; Hoeferkamp, M.; Mattiazzo, S.; McDuff, H.; Mendicino, R.; Povoli, M.; Seidel, S.; Sultan, D. M. S.; Zorzi, N.

    2016-09-01

    We report on an experimental study aimed at gaining deeper insight into the leakage current and breakdown voltage of irradiated double-sided 3D silicon sensors from FBK, so as to improve both the design and the fabrication technology for use at future hadron colliders such as the High Luminosity LHC. Several 3D diode samples of different technologies and layouts are considered, as well as several irradiations with different particle types. While the leakage current follows the expected linear trend with radiation fluence, the breakdown voltage is found to depend on both the bulk damage and the surface damage, and its value can vary significantly with sensor geometry and process details.
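
    For orientation, the linear fluence dependence mentioned above is commonly parameterized in the radiation-damage literature as ΔI = α·Φeq·V, with α the current-related damage constant (roughly 4×10^-17 A/cm at 20 °C after standard annealing). The back-of-the-envelope sketch below uses that textbook relation with hypothetical cell dimensions; none of the numbers come from this paper.

        # Literature parameterization (not a result of this paper):
        #     delta_I = alpha * Phi_eq * V
        ALPHA = 4e-17      # current-related damage constant, A/cm (approx.,
                           # 20 C, after standard annealing)
        phi_eq = 1e15      # 1 MeV neutron-equivalent fluence, cm^-2
        volume = 250e-4 * 50e-4 * 230e-4   # hypothetical pixel cell, cm^3
        print(f"expected leakage increase: {ALPHA * phi_eq * volume * 1e6:.3f} uA")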

  19. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars.

    PubMed

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-06-18

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor.
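
    A minimal quaternion-based complementary filter in the spirit of the one described, as a Python sketch. The gain k and the Mahony-style gravity correction are illustrative assumptions; the paper's exact filter is not reproduced here.

        import numpy as np

        def quat_mult(q, r):
            w1, x1, y1, z1 = q
            w2, x2, y2, z2 = r
            return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                             w1*x2 + x1*w2 + y1*z2 - z1*y2,
                             w1*y2 - x1*z2 + y1*w2 + z1*x2,
                             w1*z2 + x1*y2 - y1*x2 + z1*w2])

        def complementary_update(q, gyro, acc, dt, k=0.02):
            # Predict: integrate the gyroscope rate (rad/s) into q.
            q = q + 0.5 * dt * quat_mult(q, np.array([0.0, *gyro]))
            q /= np.linalg.norm(q)
            # Correct: compare gravity predicted from q with the normalized
            # accelerometer reading (valid when external acceleration is small).
            w, x, y, z = q
            g_pred = np.array([2*(x*z - w*y), 2*(w*x + y*z),
                               w*w - x*x - y*y + z*z])
            err = np.cross(acc / np.linalg.norm(acc), g_pred)  # small-angle error
            q = quat_mult(q, np.array([1.0, *(0.5 * k * err)]))
            return q / np.linalg.norm(q)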

  20. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629

  1. How integrating 3D LiDAR data in the dike surveillance protocol: The French case

    NASA Astrophysics Data System (ADS)

    Bretar, F.; Mériaux, P.; Fauchard, C.

    2012-04-01

    carried out. A LiDAR system is able to acquire data over up to 80 km of dike structure per day, which makes the technique valuable in emergency situations as well. It provides additional valuable products, such as information on dike slopes and crests and on their near environment (river banks, etc.). Moreover, where vegetation is present, LiDAR data make it possible to study structures or defects hidden in images, such as the erosion of riverbanks under forest vegetation. The possibility of studying the vegetation is also of high importance: the development of woody vegetation near or on the dike is a major risk factor. Surface singularities are often signs of disorder, or suspected disorder, in the dike itself: for example, a subsidence or a sinkhole on a ridge may result from an internal erosion collapse. Finally, high-resolution topographic data contribute to building specific geomechanical models of the dike that, after incorporating data provided by geophysical and geotechnical surveys, are integrated into calculations of the structure's stability. Integrating the regular use of LiDAR data into the dike surveillance protocol is not yet operational in France. However, the large number of French stakeholders at the national level (on average, there is one stakeholder for every 8-9 km of dike!) and the real added value of LiDAR data make a spatial data infrastructure valuable (web services for processing the data, and for consulting and populating the database in the field when performing local diagnoses).

  2. Incorporation of 3-D Scanning Lidar Data into Google Earth for Real-time Air Pollution Observation

    NASA Astrophysics Data System (ADS)

    Chiang, C.; Nee, J.; Das, S.; Sun, S.; Hsu, Y.; Chiang, H.; Chen, S.; Lin, P.; Chu, J.; Su, C.; Lee, W.; Su, L.; Chen, C.

    2011-12-01

    The 3-D Differential Absorption Scanning Lidar (DIASL) system has been designed to be small and lightweight, suitable for installation in various vehicles and places for monitoring air pollutants, and it displays detailed real-time temporal and spatial variability of trace gases via Google Earth. The fast scanning techniques and visual information can rapidly identify the locations and sources of polluted gases and assess the most affected areas. This helps the Environmental Protection Agency (EPA) protect public health and abate air pollution as quickly as possible. The distributions of the atmospheric pollutants and their relationship with local meteorological parameters measured with ground-based instruments will also be discussed. Details will be presented at the upcoming symposium.

  3. 3D Scan of Ornamental Column (huabiao) Using Terrestrial LiDAR and Hand-held Imager

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Wang, C.; Xi, X.

    2015-08-01

    In ancient China, a Huabiao was a type of ornamental column used to decorate important buildings. We carried out a 3D scan of a Huabiao located at Peking University, China. This Huabiao was built no later than 1742. It is carved from white marble and is 8 meters in height; clouds and dragons in various postures are carved on its body. Two instruments were used to acquire the point cloud of this Huabiao: a terrestrial LiDAR (Riegl VZ-1000) and a hand-held imager (Mantis Vision F5). In this paper, the details of the experiment are described, including the differences between the two instruments, such as working principle, spatial resolution, accuracy, instrument dimensions and workflow. The point clouds obtained by the two instruments are compared, and the registered point cloud of the Huabiao is also presented. These results should be of interest and helpful to the archaeology and heritage research communities.

  4. Sensitivity and 3 dB Bandwidth in Single and Series-Connected Tunneling Magnetoresistive Sensors

    PubMed Central

    Dąbek, Michał; Wiśniowski, Piotr; Stobiecki, Tomasz; Wrona, Jerzy; Cardoso, Susana; Freitas, Paulo P.

    2016-01-01

    As single tunneling magnetoresistive (TMR) sensor performance in modern high-speed applications is limited by breakdown voltage and saturation of the sensitivity, for higher voltage applications (i.e., compatible with 1.8 V, 3.3 V or 5 V standards) practically only a series connection can be applied. Thus, in this study we focused on the sensitivity, 3 dB bandwidth and sensitivity-bandwidth product (SBP) dependence on the DC bias voltage in single and series-connected TMR sensors. We show that, below breakdown voltage, the strong bias influence on sensitivity and the 3 dB frequency of a single sensor results in a higher SBP than in a series connection. However, the sensitivity saturation limits the single-sensor SBP which, under 1 V, reaches the same level of 2000 MHz∙V/T as in a series connection. Above the single-sensor breakdown voltage, the linear sensitivity dependence on the bias and the constant 3 dB bandwidth of the series connection enable increasing its SBP up to nearly 10,000 MHz∙V/T under 5 V. Thus, although it is possible to control the sensitivity-bandwidth product by tuning the bias voltage, the choice between a single TMR sensor and a series connection is crucial for optimal performance in the high frequency range. PMID:27809223

  5. Sensitivity and 3 dB Bandwidth in Single and Series-Connected Tunneling Magnetoresistive Sensors.

    PubMed

    Dąbek, Michał; Wiśniowski, Piotr; Stobiecki, Tomasz; Wrona, Jerzy; Cardoso, Susana; Freitas, Paulo P

    2016-10-31

    As single tunneling magnetoresistive (TMR) sensor performance in modern high-speed applications is limited by breakdown voltage and saturation of the sensitivity, for higher voltage applications (i.e., compatible with 1.8 V, 3.3 V or 5 V standards) practically only a series connection can be applied. Thus, in this study we focused on the sensitivity, 3 dB bandwidth and sensitivity-bandwidth product (SBP) dependence on the DC bias voltage in single and series-connected TMR sensors. We show that, below breakdown voltage, the strong bias influence on sensitivity and the 3 dB frequency of a single sensor results in a higher SBP than in a series connection. However, the sensitivity saturation limits the single-sensor SBP which, under 1 V, reaches the same level of 2000 MHz∙V/T as in a series connection. Above the single-sensor breakdown voltage, the linear sensitivity dependence on the bias and the constant 3 dB bandwidth of the series connection enable increasing its SBP up to nearly 10,000 MHz∙V/T under 5 V. Thus, although it is possible to control the sensitivity-bandwidth product by tuning the bias voltage, the choice between a single TMR sensor and a series connection is crucial for optimal performance in the high frequency range.

  6. A Soft Sensor-Based Three-Dimensional (3-D) Finger Motion Measurement System

    PubMed Central

    Park, Wookeun; Ro, Kyongkwan; Kim, Suin; Bae, Joonbum

    2017-01-01

    In this study, a soft sensor-based three-dimensional (3-D) finger motion measurement system is proposed. The sensors, made of the soft material Ecoflex, comprise embedded microchannels filled with a conductive liquid metal (EGaIn). The superior elasticity, light weight, and sensitivity of soft sensors allow them to be embedded in environments in which conventional sensors cannot. Complicated finger joints, such as the carpometacarpal (CMC) joint of the thumb, are modeled to specify the locations of the sensors. Algorithms to decouple the signals from the soft sensors are proposed to extract the pure flexion, extension, abduction, and adduction joint angles. The performance of the proposed system and algorithms is verified by comparison with a camera-based motion capture system. PMID:28241414

  7. Toward 3D Reconstruction of Outdoor Scenes Using an MMW Radar and a Monocular Vision Sensor

    PubMed Central

    El Natour, Ghina; Ait-Aider, Omar; Rouveure, Raphael; Berry, François; Faure, Patrice

    2015-01-01

    In this paper, we introduce a geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera. We rely on the complementarity of these two sensors considering the robustness to the environmental conditions and depth detection ability of the radar, on the one hand, and the high spatial resolution of a vision sensor, on the other. Firstly, geometric modeling of each sensor and of the entire system is presented. Secondly, we address the global calibration problem, which consists of finding the exact transformation between the sensors’ coordinate systems. Two implementation methods are proposed and compared, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Unlike existing methods, no special configuration of the 3D points is required for calibration. This makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple, yet robust 3D reconstruction method based on the sensors’ geometry. This method enables one to reconstruct observed features in 3D using one acquisition (static sensor), which is not always met in the state of the art for outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data. PMID:26473874

  8. Fusing inertial sensor data in an extended Kalman filter for 3D camera tracking.

    PubMed

    Erdem, Arif Tanju; Ercan, Ali Özer

    2015-02-01

    In a setup where camera measurements are used to estimate 3D egomotion in an extended Kalman filter (EKF) framework, it is well known that inertial sensors (i.e., accelerometers and gyroscopes) are especially useful when the camera undergoes fast motion. Inertial sensor data can be fused in the EKF with the camera measurements in either the correction stage (as measurement inputs) or the prediction stage (as control inputs). In the literature, generally only one type of inertial sensor is employed in the EKF, or, when both are employed, they are fused in the same stage. In this paper, we provide an extensive performance comparison of every possible combination of fusing accelerometer and gyroscope data as control or measurement inputs, using the same data set collected at different motion speeds. In particular, we compare the performances of the different approaches based on 3D pose errors, in addition to the camera reprojection errors commonly found in the literature, which provides further insight into the strengths and weaknesses of each approach. We show, using both simulated and real data, that it is always better to fuse both sensors in the measurement stage and that, in particular, the accelerometer helps more with 3D position tracking accuracy, whereas the gyroscope helps more with 3D orientation tracking accuracy. We also propose a simulated data generation method, which is beneficial for the design and validation of tracking algorithms involving both camera and inertial measurement unit measurements in general.
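
    The two fusion options can be made concrete with a generic EKF skeleton; the state layout, matrices and names below are hypothetical, the point being only where the inertial data enters (prediction, as control input u, versus correction, as measurement z).

        import numpy as np

        class FusionEKF:
            # Generic EKF skeleton illustrating the two fusion options.
            def __init__(self, dim):
                self.x = np.zeros(dim)   # hypothetical state (pose, velocity, ...)
                self.P = np.eye(dim)

            def predict(self, u, F, B, Q):
                # Option 1: inertial data as *control input* (prediction stage).
                self.x = F @ self.x + B @ u
                self.P = F @ self.P @ F.T + Q

            def update(self, z, h, H, R):
                # Option 2: inertial/camera data as *measurement input*
                # (correction stage); h is the measurement function, H its Jacobian.
                y = z - h(self.x)
                S = H @ self.P @ H.T + R
                K = self.P @ H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(len(self.x)) - K @ H) @ self.P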

  9. 3D-FBK Pixel Sensors: Recent Beam Tests Results with Irradiated Devices

    SciTech Connect

    Micelli, A.; Helle, K.; Sandaker, H.; Stugu, B.; Barbero, M.; Hugging, F.; Karagounis, M.; Kostyukhin, V.; Kruger, H.; Tsung, J.W.; Wermes, N.; Capua, M.; Fazio, S.; Mastroberardino, A.; Susinno, G.; Gallrapp, C.; Di Girolamo, B.; Dobos, D.; La Rosa, A.; Pernegger, H.; Roe, S.; et al.

    2012-04-30

    The Pixel Detector is the innermost part of the ATLAS experiment's tracking device at the Large Hadron Collider, and plays a key role in the reconstruction of primary vertices from the collisions and of secondary vertices produced by short-lived particles. To cope with the high level of radiation produced during collider operation, it is planned to add an additional layer of sensors (the Insertable B-Layer, or IBL) to the present three layers of silicon pixel sensors which constitute the Pixel Detector. 3D silicon sensors are one of the technologies under study for the IBL. 3D silicon technology is an innovative combination of very-large-scale integration and micro-electro-mechanical systems in which electrodes are fabricated inside the silicon bulk instead of being implanted on the wafer surfaces. 3D sensors, with electrodes fully or partially penetrating the silicon substrate, are currently fabricated at different processing facilities in Europe and the USA. This paper reports on the June 2010 beam test results for irradiated 3D devices produced at FBK (Trento, Italy). The performance of these devices, all bump-bonded with the ATLAS pixel FE-I3 read-out chip, is compared to that observed before irradiation in a previous beam test.

  10. New 3-D vision-sensor for shape-measurement applications

    NASA Astrophysics Data System (ADS)

    Moring, Ilkka; Myllyla, Risto A.; Honkanen, Esa; Kaisto, Ilkka P.; Kostamovaara, Juha T.; Maekynen, Anssi J.; Manninen, Markku

    1990-04-01

    In this paper we describe a new 3D-vision sensor developed in cooperation with the Technical Research Centre of Finland, the University of Oulu, and Prometrics Oy Co. The sensor is especially intended for the non-contact measurement of the shapes and dimensions of large industrial objects. It consists of a pulsed time-of-flight laser rangefinder, a target point detection system, a mechanical scanner, and a PC-based computer system. Our 3D-sensor has two operational modes: one for range image acquisition and the other for the search and measurement of single coordinate points. In the range image mode a scene is scanned and a 3D-image of the desired size is obtained. In the single point mode the sensor automatically searches for cooperative target points on the surface of an object and measures their 3D-coordinates. This mode can be used, e.g. for checking the dimensions of objects and for calibration. The results of preliminary performance tests are presented in the paper.
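
    The pulsed time-of-flight principle underlying the rangefinder reduces to halving the round-trip delay of the light pulse; a trivial sketch, with illustrative values:

        # Pulsed time-of-flight ranging: distance is half the round-trip path.
        C = 299_792_458.0                 # speed of light in vacuum, m/s

        def tof_range_m(round_trip_ns):
            return 0.5 * C * round_trip_ns * 1e-9

        # Example: a 66.7 ns round-trip delay corresponds to ~10 m.
        print(tof_range_m(66.7))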

  11. Display of real-time 3D sensor data in a DVE system

    NASA Astrophysics Data System (ADS)

    Völschow, Philipp; Münsterer, Thomas; Strobel, Michael; Kuhn, Michael

    2016-05-01

    This paper describes the implementation of displaying real-time processed LiDAR 3D data in a DVE pilot assistance system. The goal is to display to the pilot a comprehensive image of the surrounding world without misleading or cluttering information. 3D data which can be attributed, i.e. classified, to terrain or to predefined obstacle classes is depicted differently from data belonging to elevated objects which could not be classified. Display techniques may differ between head-down and head-up displays to avoid cluttering the outside view in the latter case. While terrain is shown as shaded surfaces with grid structures, or as grid structures alone, classified obstacles are typically displayed with obstacle symbols only. Data from objects elevated above ground are displayed as shaded 3D points in space. In addition, the displayed 3D points are accumulated over a certain time frame, allowing, on the one hand, a cohesive structure to be displayed and, on the other, moving objects to be displayed correctly. Finally, color coding or texturing can be applied based on known terrain features such as land use.

  12. Fiber optic vibration sensor for high-power electric machines realized using 3D printing technology

    NASA Astrophysics Data System (ADS)

    Igrec, Bojan; Bosiljevac, Marko; Sipus, Zvonimir; Babic, Dubravko; Rudan, Smiljko

    2016-03-01

    The objective of this work was to demonstrate a lightweight and inexpensive fiber-optic vibration sensor, built using 3D printing technology, for high-power electric machines and similar applications. The working principle is based on modulating the light intensity with a blade attached to a bendable membrane. The sensor prototype was manufactured using PolyJet Matrix technology with DM 8515 Grey 35 polymer. The sensor shows a linear response and the expected bandwidth (< 150 Hz), and from our measurements we estimated the damping ratio of the polymer used to be ζ ≈ 0.019. The developed prototype is simple to assemble, adjust, calibrate and repair.

  13. Retrieval of Vegetation Structural Parameters and 3-D Reconstruction of Forest Canopies Using Ground-Based Echidna® Lidar

    NASA Astrophysics Data System (ADS)

    Strahler, A. H.; Yao, T.; Zhao, F.; Yang, X.; Schaaf, C.; Woodcock, C. E.; Jupp, D. L.; Culvenor, D.; Newnham, G.; Lovell, J.

    2010-12-01

    A ground-based, scanning, near-infrared lidar, the Echidna® validation instrument (EVI), built by CSIRO Australia, retrieves structural parameters of forest stands rapidly and accurately; by merging multiple scans into a single point cloud, the lidar also provides 3-D stand reconstructions. Echidna lidar technology scans with pulses of light at 1064 nm wavelength and digitizes the full return waveform sufficiently finely to recover and distinguish the differing shapes of return pulses as they are scattered by leaves, trunks, and branches. Deployments in New England in 2007 and the southern Sierra Nevada of California in 2008 tested the ability of the instrument to retrieve mean tree diameter, stem count density (stems/ha), basal area, and above-ground woody biomass from single scans at points beneath the forest canopy. Parameters retrieved from five scans located within six 1-ha stand sites matched manually measured parameters with values of R2 = 0.94-0.99 in New England and 0.92-0.95 in the Sierra Nevada. Retrieved leaf area index (LAI) values were similar to those of the LAI-2000 and hemispherical photography. In New England, an analysis of variance showed that EVI-retrieved values were not significantly different from other methods (power = 0.84 or higher). In the Sierra, R2 = 0.96 and 0.81 for hemispherical photos and the LAI-2000, respectively. Foliage profiles, which measure leaf area as a function of canopy height, showed distinctly different shapes for the stands, depending on species composition and age structure. New England stand heights, obtained from foliage profiles, were not significantly different (power = 0.91) from RH100 values observed by LVIS in 2003. Three-dimensional stand reconstruction identifies one or more “hits” along the pulse path, with the peak return of each hit expressed as apparent reflectance. Returns are classified as trunk, leaf, or ground returns based on the shape of the return pulse and its location. These data provide a point

  14. 3D MEMS sensor for application on earthquakes early detection and Nowcast

    NASA Astrophysics Data System (ADS)

    Wu, Jerry; Liang, Jing; Szu, Harold

    2016-05-01

    This paper presents a 3D microelectromechanical systems (MEMS) sensor system to quickly and reliably identify the precursors that precede every earthquake. When a precursor is detected and is expected to be followed by a major earthquake, the sensor system will analyze and determine the magnitude of the earthquake. The proposed 3D MEMS sensor can provide P-wave, S-wave, and surface-wave measurements, along with timing measurements, to a data processing unit. The resulting data are processed and filtered continuously by a set of built-in programmable digital signal processing (DSP) filters in order to remove noise and other disturbances and to determine an earthquake pattern. Our goal is to reliably initiate an alarm before the arrival of the destructive waves.

  15. Virtual touch 3D interactive system for autostereoscopic display with embedded optical sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Pai; Wang, Guo-Zhen; Ma, Ming-Ching; Tung, Shang-Yu; Huang, Shu-Yi; Tseng, Hung-Wei; Kuo, Chung-Hong; Li, Chun-Huai

    2011-06-01

    The traditional 3D interactive system, which uses a CCD camera to capture images, is difficult to operate at close range in mobile applications. Therefore, a 3D interactive display with an embedded optical sensor was proposed. Based on the optical-sensor system, we propose four different methods to support different functions. The T-mark algorithm can obtain 5-axis information (x, y, z, θ, and φ) of an LED regardless of whether the LED is vertical or inclined to the panel and however it is rotated. The sequential-mark algorithm and the color-filter-based algorithm can support multiple users. Finally, the bare-finger touch system with a sequential illuminator makes it possible to interact with auto-stereoscopic images using a bare finger. The proposed methods were verified on a 4-inch panel with embedded optical sensors.

  16. Automatic Extraction of Building Roof Planes from Airborne LIDAR Data Applying AN Extended 3d Randomized Hough Transform

    NASA Astrophysics Data System (ADS)

    Maltezos, Evangelos; Ioannidis, Charalabos

    2016-06-01

    This study aims to automatically extract building roof planes from airborne LIDAR data by applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps, namely detection of building points, plane detection and refinement. For the detection of the building points, the vegetated areas are first segmented from the scene content and the bare earth is extracted afterwards. The automatic plane detection for each building is performed by applying extensions of the RHT with additional constraint criteria during the random selection of the three points, aiming at optimal adaptation to the building rooftops, together with a simple accumulator design that efficiently detects the prominent planes. The refinement of the plane detection is conducted based on the relationship between neighbouring planes, the locality of each point and the use of additional information. An indicative experimental comparison is carried out to verify the advantages of the extended RHT over the 3D Standard Hough Transform (SHT), and the sensitivity of the proposed extensions and accumulator design is examined in terms of quality and computational time compared to the default RHT. Further, a comparison between the extended RHT and RANSAC is carried out. The plane detection results illustrate the potential of the proposed extended RHT in terms of robustness and efficiency for several applications.
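
    The core random-sampling step of an RHT plane detector can be sketched as follows in Python; the accumulator discretization, vote threshold and omitted point-selection constraints are simplified placeholders for the paper's extensions.

        import numpy as np

        def rht_planes(pts, n_iter=20000, min_votes=30):
            # Minimal randomized Hough transform for plane detection.
            rng = np.random.default_rng(0)
            acc = {}
            for _ in range(n_iter):
                p1, p2, p3 = pts[rng.choice(len(pts), 3, replace=False)]
                n = np.cross(p2 - p1, p3 - p1)
                norm = np.linalg.norm(n)
                if norm < 1e-9:
                    continue                  # degenerate (collinear) triple
                n /= norm
                d = n @ p1
                if n[2] < 0:                  # canonical orientation
                    n, d = -n, -d
                key = (*np.round(n, 1), round(d, 1))  # coarse accumulator cell
                acc[key] = acc.get(key, 0) + 1
            # Return the accumulator cells with enough votes as plane candidates.
            return [k for k, v in acc.items() if v >= min_votes]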

  17. Compressive full-waveform LIDAR with low-cost sensor

    NASA Astrophysics Data System (ADS)

    Yang, Weiyi; Ke, Jun

    2016-10-01

    Full-waveform LiDAR is a method that digitizes the complete waveform of backscattered pulses to obtain range information for multiple targets. To avoid the expensive sensors of conventional full-waveform LiDAR systems, a new system based on compressive sensing is presented in this paper. A non-coherent continuous-wave laser is modulated by an electro-optical modulator with pseudo-random sequences. A low-bandwidth detector and a low-bandwidth analog-to-digital converter are used to acquire the returned signal. The OMP algorithm is employed to reconstruct the high-resolution range information.
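
    A minimal orthogonal matching pursuit (OMP) routine of the kind referred to above, as a Python sketch; the measurement matrix A, which would model the pseudo-random modulation, is assumed given, and x holds the sparse range profile.

        import numpy as np

        def omp(A, y, k):
            # Orthogonal matching pursuit: recover a k-sparse x from y = A @ x.
            residual, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching column
                support.append(j)
                x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ x_s          # re-fit on support
            x = np.zeros(A.shape[1])
            x[support] = x_s
            return x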

  18. Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    PubMed Central

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-01-01

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate. PMID:25375758

  19. A sensor skid for precise 3D modeling of production lines

    NASA Astrophysics Data System (ADS)

    Elseberg, J.; Borrmann, D.; Schauer, J.; Nüchter, A.; Koriath, D.; Rautenberg, U.

    2014-05-01

    Motivated by the increasing need for rapid characterization of environments in 3D, we designed and built a sensor skid that automates the work of an operator of terrestrial laser scanners. The system combines terrestrial laser scanning with kinematic laser scanning and uses a novel semi-rigid SLAM method. It enables us to digitize factory environments without the need to stop production. The acquired 3D point clouds are precise and suitable for detecting objects that collide with items moved along the production line.

  20. Integration of camera and range sensors for 3D pose estimation in robot visual servoing

    NASA Astrophysics Data System (ADS)

    Hulls, Carol C. W.; Wilson, William J.

    1998-10-01

    Range-vision sensor systems can incorporate range images or single point measurements. Research incorporating point range measurements has focused on the area of map generation for mobile robots. These systems can utilize the fact that the objects sensed tend to be large and planar. The approach presented in this paper fuses information obtained from a point range measurement with visual information to produce estimates of the relative 3D position and orientation of a small, non-planar object with respect to a robot end-effector. The paper describes a real-time sensor fusion system for performing dynamic visual servoing using a camera and a point laser range sensor. The system is based upon the object model reference approach. This approach, which can be used to develop multi-sensor fusion systems that fuse dynamic sensor data from diverse sensors in real-time, uses a description of the object to be sensed in order to develop a combined observation-dependency sensor model. The range-vision sensor system is evaluated in terms of accuracy and robustness. The results show that the use of a range sensor significantly improves the system performance when there is poor or insufficient camera information. The system developed is suitable for visual servoing applications, particularly robot assembly operations.

  1. 3D terrestrial lidar data classification of complex natural scenes using a multi-scale dimensionality criterion: Applications in geomorphology

    NASA Astrophysics Data System (ADS)

    Brodu, N.; Lague, D.

    2012-03-01

    result is given at each point, allowing the user to remove the points for which the classification is uncertain. The process can be either fully automated (minimal user input given once, with all scenes treated in large computation batches) or fully customized by the user, including a graphical definition of the classifiers if so desired. Working classifiers can be exchanged between users independently of the instrument used to acquire the data, avoiding the need to go through full training of the classifier. Although developed for fully 3D data, the method can be readily applied to 2.5D airborne lidar data.
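
    The multi-scale dimensionality features underlying such a classifier can be sketched as follows; this is one common variant based on normalized covariance eigenvalues, and the neighborhood radii, feature definitions and names are illustrative rather than the paper's exact formulation.

        import numpy as np
        from scipy.spatial import cKDTree

        def dimensionality(points, queries, scales):
            # For each query point and scale, the sorted PCA eigenvalues of the
            # local neighborhood yield linear (1D), planar (2D) and volumetric
            # (3D) proportions -- cf. the multi-scale dimensionality criterion.
            tree = cKDTree(points)
            feats = []
            for q in queries:
                row = []
                for r in scales:
                    nb = points[tree.query_ball_point(q, r)]
                    if len(nb) < 3:
                        row += [0.0, 0.0, 1.0]     # too few neighbors: placeholder
                        continue
                    lam = np.sort(np.linalg.eigvalsh(np.cov(nb.T)))[::-1]
                    lam = lam / lam.sum()
                    row += [lam[0] - lam[1], lam[1] - lam[2], lam[2]]
                feats.append(row)
            return np.array(feats)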

  2. Angle extended linear MEMS scanning system for 3D laser vision sensor

    NASA Astrophysics Data System (ADS)

    Pang, Yajun; Zhang, Yinxin; Yang, Huaidong; Zhu, Pan; Gai, Ye; Zhao, Jian; Huang, Zhanhua

    2016-09-01

    The scanning system is often considered the most important part of a 3D laser vision sensor. In this paper, we propose a method for the optical design of an angle-extended linear MEMS scanning system, which features a large scanning angle, a small beam divergence angle and a small spot size for 3D laser vision sensors. The design principle and theoretical formulas are derived rigorously. With the help of the software ZEMAX, a linear scanning optical system based on MEMS has been designed. Results show that the designed system can extend the scanning angle from ±8° to ±26.5° with a divergence angle smaller than 3.5 mrad, while the spot size is reduced by a factor of 4.545.

  3. Design and verification of diffractive optical elements for speckle generation of 3-D range sensors

    NASA Astrophysics Data System (ADS)

    Du, Pei-Qin; Shih, Hsi-Fu; Chen, Jenq-Shyong; Wang, Yi-Shiang

    2016-12-01

    The optical projection using speckles is one of the structured light methods that have been applied to three-dimensional (3-D) range sensors. This paper investigates the design and fabrication of diffractive optical elements (DOEs) for generating the light field with uniformly distributed speckles. Based on the principles of computer generated holograms, the iterative Fourier transform algorithm was adopted for the DOE design. It was used to calculate the phase map for diffracting the incident laser beam into a goal pattern with distributed speckles. Four patterns were designed in the study. Their phase maps were first examined by a spatial light modulator and then fabricated on glass substrates by microfabrication processes. Finally, the diffraction characteristics of the fabricated devices were verified. The experimental results show that the proposed methods are applicable to the DOE design of 3-D range sensors. Furthermore, any expected diffraction area and speckle density could be possibly achieved according to the relations presented in the paper.
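
    In its simplest Gerchberg-Saxton form, the iterative Fourier transform algorithm mentioned above is a few lines of numpy; fabrication-specific constraints (phase quantization, diffraction-order handling) are omitted from this sketch.

        import numpy as np

        def ifta_phase(target, n_iter=50, seed=0):
            # Find a phase-only DOE map whose far field approximates `target`,
            # a nonnegative speckle-pattern amplitude on the same grid.
            rng = np.random.default_rng(seed)
            phase = rng.uniform(0, 2 * np.pi, target.shape)
            for _ in range(n_iter):
                far = np.fft.fft2(np.exp(1j * phase))       # propagate to far field
                far = target * np.exp(1j * np.angle(far))   # impose target amplitude
                near = np.fft.ifft2(far)                    # back to the DOE plane
                phase = np.angle(near)                      # keep phase only
            return phase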

  4. EEG-MRI co-registration and sensor labeling using a 3D laser scanner.

    PubMed

    Koessler, L; Cecchin, T; Caspary, O; Benhadid, A; Vespignani, H; Maillard, L

    2011-03-01

    This paper deals with the co-registration of an MRI scan with EEG sensors. We set out to evaluate the effectiveness of a 3D handheld laser scanner, a device that is not widely used for co-registration, applying a semi-automatic procedure that also labels the EEG sensors. The scanner acquired the sensors' positions and the face shape, and the scalp mesh was obtained from the MRI scan. A pre-alignment step, using the positions of three fiducial landmarks, provided an initial value for co-registration, and the sensors were automatically labeled. Co-registration was then performed using an iterative closest point algorithm applied to the face shape. The procedure was conducted on five subjects, with two scans of EEG sensors and one MRI scan each. The mean time for digitization of the 64 sensors and three landmarks was 53 s. The average scanning time for the face shape was 2 min 6 s, for an average of 5,263 points. The mean residual error of the sensor co-registration was 2.11 mm. These results suggest that the laser scanner, associated with an efficient co-registration and sensor labeling algorithm, is sufficiently accurate, fast and user-friendly for longitudinal and retrospective brain source imaging studies.
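
    The iterative closest point step can be sketched with a standard Kabsch-based rigid update; this is a generic point-to-point ICP, not the authors' implementation, and it assumes the rough fiducial-based pre-alignment described above has already been applied.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp(src, dst, n_iter=30):
            # Iteratively align src (digitized face/sensor points) to dst
            # (scalp mesh vertices from MRI).
            tree = cKDTree(dst)
            cur = src.copy()
            for _ in range(n_iter):
                nn = dst[tree.query(cur)[1]]        # closest scalp point for each
                mu_c, mu_n = cur.mean(0), nn.mean(0)
                H = (cur - mu_c).T @ (nn - mu_n)
                U, _, Vt = np.linalg.svd(H)
                R = Vt.T @ U.T                      # optimal rotation (Kabsch)
                if np.linalg.det(R) < 0:            # guard against reflection
                    Vt[-1] *= -1
                    R = Vt.T @ U.T
                cur = (cur - mu_c) @ R.T + mu_n     # apply rigid update
            return cur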

  5. 3D handheld laser scanner based approach for automatic identification and localization of EEG sensors.

    PubMed

    Koessler, Laurent; Cecchin, Thierry; Ternisien, Eric; Maillard, Louis

    2010-01-01

    This paper describes and assesses for the first time the use of a handheld 3D laser scanner for scalp EEG sensor localization and co-registration with magnetic resonance images. A study on five subjects showed that the scanner had equivalent accuracy and better repeatability, and was faster, than the reference electromagnetic digitizer. Somatosensory evoked potential experiments using electrical source imaging validated its ability to give precise sensor localization. With our automatic labeling method, the data provided by the scanner could be directly introduced into source localization studies.

  6. Triboelectric Nanogenerators as a Self-Powered 3D Acceleration Sensor.

    PubMed

    Pang, Yao Kun; Li, Xiao Hui; Chen, Meng Xiao; Han, Chang Bao; Zhang, Chi; Wang, Zhong Lin

    2015-09-02

    A novel self-powered acceleration sensor based on a triboelectric nanogenerator is proposed, which consists of an outer transparent shell and an inner mass-spring-damper mechanical system. The PTFE films on the mass surfaces can slide between two aluminum electrodes on an inner wall owing to acceleration in the axis direction. On the basis of the coupling of triboelectric and electrostatic effects, a potential difference between the two aluminum electrodes is generated in proportion to the mass displacement, which can be used to characterize the acceleration in the axis direction, with a detection range from about 13.0 to 40.0 m/s² at a sensitivity of 0.289 V·s²/m. With the integration of acceleration sensors in three axes, a self-powered 3D acceleration sensor is developed for vector acceleration measurement in any direction. The self-powered 3D acceleration sensor showed excellent performance in the stability test, with the output voltages decreasing only slightly (∼6%) after 4000 cycles. Moreover, the self-powered acceleration sensor can be used to measure high collision accelerations, which has potential practicability in automobile security systems.

  7. A 3D Model of the Thermoelectric Microwave Power Sensor by MEMS Technology

    PubMed Central

    Yi, Zhenxiang; Liao, Xiaoping

    2016-01-01

    In this paper, a novel 3D model is proposed to describe the temperature distribution of the thermoelectric microwave power sensor. In this 3D model, the heat flux density decreases from the upper surface to the lower surface of the GaAs substrate while it was supposed to be a constant in the 2D model. The power sensor is fabricated by a GaAs monolithic microwave integrated circuit (MMIC) process and micro-electro-mechanical system (MEMS) technology. The microwave performance experiment shows that the S11 is less than −26 dB over the frequency band of 1–10 GHz. The power response experiment demonstrates that the output voltage increases from 0 mV to 27 mV, while the incident power varies from 1 mW to 100 mW. The measured sensitivity is about 0.27 mV/mW, and the calculated result from the 3D model is 0.28 mV/mW. The relative error has been reduced from 7.5% of the 2D model to 3.7% of the 3D model. PMID:27338395

  8. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  9. First production of new thin 3D sensors for HL-LHC at FBK

    NASA Astrophysics Data System (ADS)

    Sultan, D. M. S.; Dalla Betta, G.-F.; Mendicino, R.; Boscardin, M.; Ronchin, S.; Zorzi, N.

    2017-01-01

    Owing to their intrinsic (geometry-dependent) radiation hardness, 3D pixel sensors are promising candidates for the innermost tracking layers of the forthcoming experiment upgrades at the "Phase 2" High-Luminosity LHC (HL-LHC). To this purpose, extreme radiation hardness up to the expected maximum fluence of 2 × 10^16 neq/cm^2 must come along with several technological improvements in a new generation of 3D pixels, i.e., increased pixel granularity (50×50 or 25×100 μm² cell size), a thinner active region (~100 μm), narrower columnar electrodes (~5 μm diameter) with reduced inter-electrode spacing (~30 μm), and very slim edges (~100 μm). The fabrication of the first batch of these new 3D sensors was recently completed at FBK on Si-Si direct-wafer-bonded 6" substrates. Initial electrical test results, performed at wafer level on sensors and test structures, highlighted very promising performance, in good agreement with TCAD simulations: low leakage current (< 1 pA/column), intrinsic breakdown voltage of more than 150 V, and capacitance of about 50 fF/column, thus assessing the validity of the design approach. A large variety of pixel sensors compatible with both existing (e.g., ATLAS FEI4 and CMS PSI46) and future (e.g., RD53) read-out chips were fabricated, and these were also electrically tested on wafer using a temporary metal layer patterned as strips shorting rows of pixels together. This allowed a statistically significant distribution of the relevant electrical quantities to be obtained, thus gaining insight into the impact of process-induced defects. A few 3D strip test structures were irradiated with X-rays, showing inter-strip resistance of at least several GΩ even after a 50 Mrad(Si) dose, thus proving the p-spray robustness. We present the most important design and technological aspects, and results obtained from the initial investigations.

  10. An approach for the calibration of a combined RGB-sensor and 3D-camera device

    NASA Astrophysics Data System (ADS)

    Schulze, M.

    2011-07-01

    The fields of application for 3D cameras are very diverse, owing to their high image frequency and direct determination of 3D data. Often, 3D cameras are used in mobile robotics, for obstacle detection or object recognition, so they are also interesting for applications in agriculture in combination with mobile robots. Here, in addition to 3D data, there is often a need to obtain color information for each 3D point. Unfortunately, 3D cameras do not capture any color information; therefore, an additional sensor is necessary, such as RGB and possibly NIR. To combine data from two different sensors, referencing them to each other via calibration is important. This paper presents several calibration methods and discusses their accuracy potential. Based on a spatial resection, the algorithm determines the translation and rotation between the two sensors and the inner orientation of the sensor used.

  11. DLP/DSP-based optical 3D sensors for the mass market in industrial metrology and life sciences

    NASA Astrophysics Data System (ADS)

    Frankowski, G.; Hainich, R.

    2011-03-01

    GFM has developed and constructed DLP-based optical 3D measuring devices based on structured light illumination. Over the years the devices have been used in industrial metrology and life sciences for different 3D measuring tasks. This lecture will discuss integration of DLP Pico technology and DSP technology from Texas Instruments for mass market optical 3D sensors. In comparison to existing mass market laser triangulation sensors, the new 3D sensors provide a full-field measurement of up to a million points in less than a second. The lecture will further discuss different fields of application and advantages of the new generation of 3D sensors for: OEM application in industrial measuring and inspection; 3D metrology in industry, life sciences and biometrics, and industrial image processing.

  12. A 3D Chemically Modified Graphene Hydrogel for Fast, Highly Sensitive, and Selective Gas Sensor

    PubMed Central

    Wu, Jin; Tao, Kai; Guo, Yuanyuan; Li, Zhong; Wang, Xiaotian; Luo, Zhongzhen; Du, Chunlei; Chen, Di; Norford, Leslie K.

    2016-01-01

    Reduced graphene oxide (RGO) has proved to be a promising candidate in high‐performance gas sensing in ambient conditions. However, trace detection of different kinds of gases with simultaneously high sensitivity and selectivity is challenging. Here, a chemiresistor‐type sensor based on 3D sulfonated RGO hydrogel (S‐RGOH) is reported, which can detect a variety of important gases with high sensitivity, boosted selectivity, fast response, and good reversibility. The NaHSO3 functionalized RGOH displays remarkable 118.6 and 58.9 times higher responses to NO2 and NH3, respectively, compared with its unmodified RGOH counterpart. In addition, the S‐RGOH sensor is highly responsive to volatile organic compounds. More importantly, the characteristic patterns on the linearly fitted response–temperature curves are employed to distinguish various gases for the first time. The temperature of the sensor is elevated rapidly by an imbedded microheater with little power consumption. The 3D S‐RGOH is characterized and the sensing mechanisms are proposed. This work gains new insights into boosting the sensitivity of detecting various gases by combining chemical modification and 3D structural engineering of RGO, and improving the selectivity of gas sensing by employing temperature dependent response characteristics of RGO for different gases. PMID:28331786

  13. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    NASA Astrophysics Data System (ADS)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in Photogrammetry and Computer Vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can easily be used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open source software and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Due to the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method to generate three-dimensional models. Much research has been conducted to identify suitable software and algorithms to achieve an accurate and complete model; however, little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is to deliberate on and introduce an appropriate combination of sensor and software to provide a complete model with the highest accuracy. To do this, different software packages, used in previous studies, were compared and

  14. Dynamic 3-D chemical agent cloud mapping using a sensor constellation deployed on mobile platforms

    NASA Astrophysics Data System (ADS)

    Cosofret, Bogdan R.; Konno, Daisei; Rossi, David; Marinelli, William J.; Seem, Pete

    2014-05-01

    The need for standoff detection technology to provide early Chem-Bio (CB) threat warning is well documented. The information obtained by a single passive sensor is largely limited to the bearing and angular extent of the threat cloud. In order to obtain absolute geo-location, range to threat, 3-D extent and detailed composition of the chemical threat, fusion of information from multiple passive sensors is needed. A capability that provides on-the-move chemical cloud characterization is key to the development of real-time Battlespace Awareness. We have developed, implemented and tested algorithms and hardware to perform the fusion of information obtained from two mobile LWIR passive hyperspectral sensors. The implementation of the capability is driven by current Nuclear, Biological and Chemical Reconnaissance Vehicle operational tactics and represents a mission-focused alternative to the already demonstrated 5-sensor static Range Test Validation System (RTVS) [1]. The new capability consists of hardware for sensor pointing and attitude measurement, whose output is streamed and aggregated as part of the data fusion process for threat characterization. Cloud information is generated by ingesting 2-sensor data into a suite of triangulation and tomographic reconstruction algorithms. The approaches are amenable to using a limited number of viewing projections and the unfavorable sensor geometries resulting from mobile operation. In this paper we describe the system architecture and present an analysis of results obtained during the initial testing of the system at Dugway Proving Ground during BioWeek 2013.
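
    To make the two-sensor geometry concrete, here is a minimal, hypothetical sketch of the basic triangulation step: two passive sensors at known positions each report a bearing to the cloud, and the rays are intersected with a small linear solve. The real system fuses full hyperspectral imagery and tomographic reconstruction; this shows only the bearing-intersection idea.

    import numpy as np

    def triangulate_bearings(p1, az1, p2, az2):
        """Intersect two bearing rays from sensors at p1 and p2 (2D map
        coordinates); az1, az2 are azimuths in radians. Returns the crossing
        point, i.e. a coarse geo-location of the observed cloud."""
        d1 = np.array([np.cos(az1), np.sin(az1)])
        d2 = np.array([np.cos(az2), np.sin(az2)])
        # Solve p1 + s*d1 = p2 + t*d2 for the ray parameters (s, t).
        s, _ = np.linalg.solve(np.column_stack([d1, -d2]), np.subtract(p2, p1))
        return np.asarray(p1, dtype=float) + s * d1

    # Example: sensors 10 km apart, bearings 45 and 135 degrees -> cloud at (5, 5) km.
    print(triangulate_bearings((0, 0), np.radians(45), (10, 0), np.radians(135)))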

  15. Advancing Lidar Sensors Technologies for Next Generation Landing Missions

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Hines, Glenn D.; Roback, Vincent E.; Petway, Larry B.; Barnes, Bruce W.; Brewster, Paul F.; Pierrottet, Diego F.; Bulyshev, Alexander

    2015-01-01

    Missions to solar system bodies must meet increasingly ambitious objectives requiring highly reliable "precision landing" and "hazard avoidance" capabilities. Robotic missions to the Moon and Mars demand landing at pre-designated sites of high scientific value near hazardous terrain features, such as escarpments, craters, slopes, and rocks. Missions aimed at paving the path for colonization of the Moon and human landing on Mars need to execute onboard hazard detection and precision maneuvering to ensure safe landing near previously deployed assets. Asteroid missions require precision rendezvous, identification of the landing or sampling site location, and navigation relative to a highly dynamic object that may be tumbling at a fast rate. To meet these needs, NASA Langley Research Center (LaRC) has developed a set of advanced lidar sensors under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project. These lidar sensors can provide precision measurements of vehicle relative proximity, velocity, and orientation, and high-resolution elevation maps of the surface during the descent to the targeted body. Recent flights onboard the Morpheus free-flyer vehicle have demonstrated the viability of ALHAT lidar sensors for future landing missions to solar system bodies.

  16. 3D Measurement of Forearm and Upper Arm during Throwing Motion using Body Mounted Sensor

    NASA Astrophysics Data System (ADS)

    Koda, Hideharu; Sagawa, Koichi; Kuroshima, Kouta; Tsukamoto, Toshiaki; Urita, Kazutaka; Ishibashi, Yasuyuki

    The aim of this study is to propose a method for measuring the three-dimensional (3D) movement of the forearm and upper arm during the baseball pitching motion using inertial sensors, without requiring precise sensor mounting. Although highly accurate measurement of sports motion is currently achieved with optical motion capture systems, these have disadvantages such as camera calibration and restrictions on the measurement location. In contrast, the proposed method for 3D measurement of the pitching motion using body-mounted sensors provides the trajectory and orientation of the upper arm by integrating the acceleration and angular velocity measured on the upper limb. The trajectory of the forearm is derived so that the elbow joint axis of the forearm corresponds to that of the upper arm. The spatial relation between the upper limb and the sensor system is obtained by performing predetermined movements of the upper limb and utilizing the angular velocity and gravitational acceleration. The integration error is corrected so that the estimated final position, velocity and posture of the upper limb agree with the actual ones. Experimental measurements of the pitching motion show that the trajectories of the shoulder, elbow and wrist estimated by the proposed method are highly correlated with those from a motion capture system, with an estimation error of about 10%.

  17. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation

    PubMed Central

    Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco

    2009-01-01

    3D imaging sensors for the acquisition of three-dimensional (3D) shapes have generated, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in achieving compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a "sensor fusion" approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state of the art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications. PMID:22389618

  18. Multi-sourced, 3D geometric characterization of volcanogenic karst features: Integrating lidar, sonar, and geophysical datasets (Invited)

    NASA Astrophysics Data System (ADS)

    Sharp, J. M.; Gary, M. O.; Reyes, R.; Halihan, T.; Fairfield, N.; Stone, W. C.

    2009-12-01

    Karstic aquifers can form very complex hydrogeological systems whose 3-D mapping has been difficult, but Lidar, phased-array sonar, and improved earth resistivity techniques show promise here and in linking metadata to models. Zacatón, perhaps the Earth's deepest cenote, has a sub-aquatic void space exceeding 7.5 x 10^6 m^3. It is the focus of this study, which has created detailed 3D maps of the system. These maps include data from above and beneath the water table and within the rock matrix, to document the extent of the immense karst features and to interpret the geologic processes that formed them. Phase 1 used high-resolution (20 mm) Lidar scanning of the surficial features of four large cenotes. Scan locations, selected to achieve full feature coverage once registered, were established atop surface benchmarks with UTM coordinates established using GPS and Total Stations. The combined datasets form a geo-registered mesh of surface features down to water level in the cenotes. Phase 2 conducted subsurface imaging using Earth Resistivity Imaging (ERI) geophysics. ERI identified void spaces isolated from open flow conduits. A unique travertine morphology exists in which some cenotes are dry or contain shallow lakes with flat travertine floors; some water-filled cenotes have flat floors without the cone of collapse material; and some have collapse cones. We hypothesize that the floors may have large water-filled voids beneath them. Three separate flat travertine caps were imaged: 1) La Pilita, which is partially open, exposing the cap structure over a deep water-filled shaft; 2) Poza Seca, which is dry and vegetated; and 3) Tule, which contains a shallow (<1 m) lake. A fourth line was run adjacent to cenote Verde. La Pilita ERI, verified by SCUBA, documented the existence of large water-filled void zones. ERI at Poza Seca showed a thin cap overlying a conductive zone extending to at least 25 m depth beneath the cap, with no lower boundary of this zone evident

  19. Constraints on 3D fault and fracture distribution in layered volcanic- volcaniclastic sequences from terrestrial LIDAR datasets: Faroe Islands

    NASA Astrophysics Data System (ADS)

    Raithatha, Bansri; McCaffrey, Kenneth; Walker, Richard; Brown, Richard; Pickering, Giles

    2013-04-01

    Hydrocarbon reservoirs commonly contain an array of fine-scale structures that control fluid flow in the subsurface, such as polyphase fracture networks and small-scale fault zones. These structures are unresolvable by seismic imaging, and therefore outcrop-based studies have been used as analogues to characterize fault and fracture networks and assess their impact on fluid flow in the subsurface. To maximize recovery and enhance production, it is essential to understand the geometry, physical properties, and distribution of these structures in 3D. Here we present field data and terrestrial LIDAR-derived 3D, photo-realistic virtual outcrops of fault zones at a range of displacement scales (0.001-4.5 m) within a volcaniclastic sand and basaltic lava unit sequence in the Faroe Islands. Detailed field observations were used to constrain the virtual outcrop dataset, and a workflow has been developed to build discrete fracture network (DFN) models in GOCAD® from these datasets. Model construction involves three main stages: (1) georeferencing and processing of LIDAR datasets; (2) structural interpretation to discriminate between faults, fractures, veins, and joint planes using CAD software and RiSCAN Pro; and (3) building a 3D DFN in GOCAD®. To test the validity of this workflow, we focus here on a 4.5 m displacement strike-slip fault zone that displays a complex polymodal fracture network in the inter-layered basalt-volcaniclastic sequence, which is well constrained by field study. The DFN models support our initial field-based hypothesis that fault zone geometry varies with increasing displacement through volcaniclastic units. Fracture concentration appears to be greatest in the upper lava unit, decreases into the volcaniclastic sediments, and decreases further into the lower lava unit. This distribution of fractures appears to be related to the width of the fault zone and the amount of fault damage on the outcrop. For instance, the fault zone is thicker in

  20. Package analysis of 3D-printed piezoresistive strain gauge sensors

    NASA Astrophysics Data System (ADS)

    Das, Sumit Kumar; Baptist, Joshua R.; Sahasrabuddhe, Ritvij; Lee, Woo H.; Popa, Dan O.

    2016-05-01

    Poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate), or PEDOT:PSS, is a flexible polymer which exhibits piezoresistive properties when subjected to structural deformation. PEDOT:PSS has a high conductivity and thermal stability, which makes it an ideal candidate for use as a pressure sensor. Applications of this technology include whole-body robot skin that can increase the safety and physical collaboration of robots in close proximity to humans. In this paper, we present a finite element model of strain gauge touch sensors which have been 3D-printed onto Kapton and silicone substrates using Electro-Hydro-Dynamic ink-jetting. Simulations of the piezoresistive and structural model for the entire packaged sensor were carried out using COMSOL®, and compared with experimental results for validation. The model will be useful in designing future robot skin with predictable performance.

  1. Nonthreshold-based event detection for 3d environment monitoring in sensor networks

    SciTech Connect

    Li, M.; Liu, Y.H.; Chen, L.

    2008-12-15

    Event detection is a crucial task for wireless sensor network applications, especially environment monitoring. Existing approaches for event detection are mainly based on predefined threshold values and are thus often inaccurate and incapable of capturing complex events. For example, in coal mine monitoring scenarios, gas leakage or water osmosis can hardly be described by the overrun of specified attribute thresholds, but rather by some complex pattern in a full-scale view of the environmental data. To address this issue, we propose a nonthreshold-based approach for real 3D sensor monitoring environments. We employ energy-efficient methods to collect a time series of data maps from the sensor network and detect complex events by matching the gathered data to spatiotemporal data patterns. Finally, we conduct trace-driven simulations to prove the efficacy and efficiency of this approach in detecting events of complex phenomena from real-life records.
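
    The core idea, matching collected data maps against known spatiotemporal patterns instead of thresholding individual attributes, can be sketched as follows. This is a deliberately simplified stand-in (plain normalized correlation against a pattern library), not the paper's algorithm.

    import numpy as np

    def classify_event(data_maps, pattern_library):
        """data_maps: array (T, H, W) of successive sensor-field maps.
        pattern_library: dict mapping event labels to arrays of the same shape.
        Returns the label of the best-matching spatiotemporal pattern."""
        def normalize(a):
            a = np.asarray(a, dtype=float)
            return (a - a.mean()) / (a.std() + 1e-9)
        x = normalize(data_maps)
        scores = {label: float(np.mean(x * normalize(p)))
                  for label, p in pattern_library.items()}
        return max(scores, key=scores.get), scores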

  2. B4 2 After, 3D Deformation Field From Matching Pre- To Post-Event Aerial LiDAR Point Clouds, The 2010 El Mayor-Cucapah M7.2 Earthquake Case

    NASA Astrophysics Data System (ADS)

    Hinojosa-Corona, A.; Nissen, E.; Limon-Tirado, J. F.; Arrowsmith, R.; Krishnan, A.; Saripalli, S.; Oskin, M. E.; Glennie, C. L.; Arregui, S. M.; Fletcher, J. M.; Teran, O. J.

    2013-05-01

    horizontal having the latter problems in flat areas as expected. Hybrid approaches, such as simple differencing, could be applied in these areas. Outliers were removed from the results. ICP detected material extraction from quarries that took place between the two dates of LiDAR collection, expressed as a negative vertical displacement close to the sites. To improve the accuracy of the 3D displacement field, we intend to reprocess the pre-event source survey data to reduce the systematic error introduced by the sensor. A multidisciplinary approach will be needed to draw tectonic inferences from the 3D displacement field revealed by ICP about the processes at depth expressed at the surface.

  3. Pedestrian Navigation Using Foot-Mounted Inertial Sensor and LIDAR.

    PubMed

    Pham, Duy Duong; Suh, Young Soo

    2016-01-19

    Foot-mounted inertial sensors can be used for indoor pedestrian navigation. In this paper, to improve the accuracy of pedestrian location, we propose a method using a distance sensor (LIDAR) in addition to an inertial measurement unit (IMU). The distance sensor is a time of flight range finder with 30 m measurement range (at 33.33 Hz). Using a distance sensor, walls on corridors are automatically detected. The detected walls are used to correct the heading of the pedestrian path. Through experiments, it is shown that the accuracy of the heading is significantly improved using the proposed algorithm. Furthermore, the system is shown to work robustly in indoor environments with many doors and passing people.

  4. Pedestrian Navigation Using Foot-Mounted Inertial Sensor and LIDAR

    PubMed Central

    Pham, Duy Duong; Suh, Young Soo

    2016-01-01

    Foot-mounted inertial sensors can be used for indoor pedestrian navigation. In this paper, to improve the accuracy of pedestrian location, we propose a method using a distance sensor (LIDAR) in addition to an inertial measurement unit (IMU). The distance sensor is a time of flight range finder with 30 m measurement range (at 33.33 Hz). Using a distance sensor, walls on corridors are automatically detected. The detected walls are used to correct the heading of the pedestrian path. Through experiments, it is shown that the accuracy of the heading is significantly improved using the proposed algorithm. Furthermore, the system is shown to work robustly in indoor environments with many doors and passing people. PMID:26797619
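
    A common way to use detected corridor walls for heading correction is to fit a line to the wall points and snap its direction to the nearest building axis; the residual angle is the heading drift. The sketch below assumes axis-aligned corridors (a standard indoor assumption) and is only an illustration of the idea, not the authors' exact filter.

    import numpy as np

    def heading_drift_from_wall(wall_points):
        """wall_points: (N, 2) points measured on one corridor wall, in the
        current navigation frame. Fits the wall direction by PCA and returns
        the correction to add to the heading estimate, assuming walls run
        along one of the cardinal building axes."""
        pts = wall_points - wall_points.mean(axis=0)
        _, _, vt = np.linalg.svd(pts, full_matrices=False)
        wall_angle = np.arctan2(vt[0, 1], vt[0, 0])          # fitted wall direction
        nearest_axis = np.round(wall_angle / (np.pi / 2)) * (np.pi / 2)
        return nearest_axis - wall_angle                     # heading correction (rad)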

  5. Probabilistic Neighborhood-Based Data Collection Algorithms for 3D Underwater Acoustic Sensor Networks

    PubMed Central

    Han, Guangjie; Li, Shanshan; Zhu, Chunsheng; Jiang, Jinfang; Zhang, Wenbo

    2017-01-01

    Marine environmental monitoring provides crucial information and support for the exploitation, utilization, and protection of marine resources. With the rapid development of information technology, three-dimensional underwater acoustic sensor networks (3D UASNs) provide a novel strategy to acquire marine environment information conveniently, efficiently and accurately. However, the specific propagation effects of the acoustic communication channel lead to a decreased probability of successful information delivery with increased distance. Therefore, we investigate two probabilistic neighborhood-based data collection algorithms for 3D UASNs which are based on a probabilistic acoustic communication model instead of the traditional deterministic acoustic communication model. An autonomous underwater vehicle (AUV) is employed to traverse along a designed path to collect data from neighborhoods. For 3D UASNs without prior deployment knowledge, partitioning the network into grids allows the AUV to visit the central location of each grid for data collection. For 3D UASNs in which the deployment knowledge is known in advance, the AUV only needs to visit several selected locations, determined by constructing a minimum probabilistic neighborhood covering set, to reduce data latency. Furthermore, by increasing the number of transmission rounds, our proposed algorithms can provide a tradeoff between data collection latency and information gain. These algorithms are compared with a basic nearest-neighbor heuristic algorithm via simulations. Simulation analyses show that our proposed algorithms can efficiently reduce the average data collection completion time, corresponding to a decrease in data latency. PMID:28208735

  6. Probabilistic Neighborhood-Based Data Collection Algorithms for 3D Underwater Acoustic Sensor Networks.

    PubMed

    Han, Guangjie; Li, Shanshan; Zhu, Chunsheng; Jiang, Jinfang; Zhang, Wenbo

    2017-02-08

    Marine environmental monitoring provides crucial information and support for the exploitation, utilization, and protection of marine resources. With the rapid development of information technology, three-dimensional underwater acoustic sensor networks (3D UASNs) provide a novel strategy to acquire marine environment information conveniently, efficiently and accurately. However, the specific propagation effects of the acoustic communication channel lead to a decreased probability of successful information delivery with increased distance. Therefore, we investigate two probabilistic neighborhood-based data collection algorithms for 3D UASNs which are based on a probabilistic acoustic communication model instead of the traditional deterministic acoustic communication model. An autonomous underwater vehicle (AUV) is employed to traverse along a designed path to collect data from neighborhoods. For 3D UASNs without prior deployment knowledge, partitioning the network into grids allows the AUV to visit the central location of each grid for data collection. For 3D UASNs in which the deployment knowledge is known in advance, the AUV only needs to visit several selected locations, determined by constructing a minimum probabilistic neighborhood covering set, to reduce data latency. Furthermore, by increasing the number of transmission rounds, our proposed algorithms can provide a tradeoff between data collection latency and information gain. These algorithms are compared with a basic nearest-neighbor heuristic algorithm via simulations. Simulation analyses show that our proposed algorithms can efficiently reduce the average data collection completion time, corresponding to a decrease in data latency.
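
    For the no-prior-knowledge case, the grid-partitioning step reduces to computing cell centers and a traversal order for the AUV. The following minimal sketch (a simple lawnmower-style tour, not the authors' path planner) illustrates the idea.

    import numpy as np
    from itertools import product

    def grid_centers(bounds, cell):
        """Cell centers of a 3D grid covering the deployment volume.
        bounds = ((xmin, xmax), (ymin, ymax), (zmin, zmax)); cell = edge length."""
        axes = [np.arange(lo + cell / 2.0, hi, cell) for lo, hi in bounds]
        return np.array(list(product(*axes)))

    def serpentine_tour(centers):
        """Lawnmower-style visiting order: sweep x along each y row, reversing
        every other row, layer by layer in z, to shorten the AUV travel path."""
        tour = []
        for z in np.unique(centers[:, 2]):
            for k, y in enumerate(np.unique(centers[:, 1])):
                row = centers[(centers[:, 1] == y) & (centers[:, 2] == z)]
                row = row[np.argsort(row[:, 0])]
                tour.append(row if k % 2 == 0 else row[::-1])
        return np.vstack(tour)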

  7. Modelling Sensor and Target effects on LiDAR Waveforms

    NASA Astrophysics Data System (ADS)

    Rosette, J.; North, P. R.; Rubio, J.; Cook, B. D.; Suárez, J.

    2010-12-01

    The aim of this research is to explore the influence of sensor characteristics and interactions with vegetation and terrain properties on the estimation of vegetation parameters from LiDAR waveforms. This is carried out using waveform simulations produced by the FLIGHT radiative transfer model which is based on Monte Carlo simulation of photon transport (North, 1996; North et al., 2010). The opportunities for vegetation analysis that are offered by LiDAR modelling are also demonstrated by other authors e.g. Sun and Ranson, 2000; Ni-Meister et al., 2001. Simulations from the FLIGHT model were driven using reflectance and transmittance properties collected from the Howland Research Forest, Maine, USA in 2003 together with a tree list for a 200m x 150m area. This was generated using field measurements of location, species and diameter at breast height. Tree height and crown dimensions of individual trees were calculated using relationships established with a competition index determined for this site. Waveforms obtained by the Laser Vegetation Imaging Sensor (LVIS) were used as validation of simulations. This provided a base from which factors such as slope, laser incidence angle and pulse width could be varied. This has enabled the effect of instrument design and laser interactions with different surface characteristics to be tested. As such, waveform simulation is relevant for the development of future satellite LiDAR sensors, such as NASA’s forthcoming DESDynI mission (NASA, 2010), which aim to improve capabilities of vegetation parameter estimation. ACKNOWLEDGMENTS We would like to thank scientists at the Biospheric Sciences Branch of NASA Goddard Space Flight Center, in particular to Jon Ranson and Bryan Blair. This work forms part of research funded by the NASA DESDynI project and the UK Natural Environment Research Council (NE/F021437/1). REFERENCES NASA, 2010, DESDynI: Deformation, Ecosystem Structure and Dynamics of Ice. http

  8. Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.

    PubMed

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.

  9. Ag Nanoparticles-Modified 3D Graphene Foam for Binder-Free Electrodes of Electrochemical Sensors

    PubMed Central

    Han, Tao; Jin, Jianli; Wang, Congxu; Sun, Youyi; Zhang, Yinghe; Liu, Yaqing

    2017-01-01

    Ag nanoparticles-modified 3D graphene foam was synthesized through a one-step in-situ approach and then directly applied as the electrode of an electrochemical sensor. The composite foam electrode exhibited electrocatalytic activity towards Hg(II) oxidation, with a low limit of detection of 0.11 µM and a high sensitivity of 8.0 µA/µM. Moreover, the composite foam electrode exhibited high cycling stability, long-term durability and reproducibility. These results were attributed to the unique porous structure of the composite foam electrode, which made the surface of the Ag nanoparticle-modified reduced graphene oxide (Ag NPs modified rGO) foam highly accessible to the metal ion and provided more void volume for the reaction with the metal ion. This work not only proved that the composite foam has great potential for application in heavy metal ion sensors, but also provided a facile method for the gram-scale synthesis of 3D electrode materials based on rGO foam and other electrically active materials for various applications. PMID:28336878

  10. Ag Nanoparticles-Modified 3D Graphene Foam for Binder-Free Electrodes of Electrochemical Sensors.

    PubMed

    Han, Tao; Jin, Jianli; Wang, Congxu; Sun, Youyi; Zhang, Yinghe; Liu, Yaqing

    2017-02-16

    Ag nanoparticles-modified 3D graphene foam was synthesized through a one-step in-situ approach and then directly applied as the electrode of an electrochemical sensor. The composite foam electrode exhibited electrocatalytic activity towards Hg(II) oxidation, with a low limit of detection of 0.11 μM and a high sensitivity of 8.0 μA/μM. Moreover, the composite foam electrode exhibited high cycling stability, long-term durability and reproducibility. These results were attributed to the unique porous structure of the composite foam electrode, which made the surface of the Ag nanoparticle-modified reduced graphene oxide (Ag NPs modified rGO) foam highly accessible to the metal ion and provided more void volume for the reaction with the metal ion. This work not only proved that the composite foam has great potential for application in heavy metal ion sensors, but also provided a facile method for the gram-scale synthesis of 3D electrode materials based on rGO foam and other electrically active materials for various applications.

  11. Research on Joint Parameter Inversion for an Integrated Underground Displacement 3D Measuring Sensor

    PubMed Central

    Shentu, Nanying; Qiu, Guohua; Li, Qing; Tong, Renyuan; Shentu, Nankai; Wang, Yanjie

    2015-01-01

    Underground displacement monitoring is a key means of monitoring and evaluating geological disasters and geotechnical projects. Few practical instruments are able to monitor subsurface horizontal and vertical displacements simultaneously, owing to the invisibility and complexity of subsurface monitoring. A novel underground displacement 3D measuring sensor was proposed in our previous studies, and great efforts have been made in basic theoretical research on its underground displacement sensing and measuring characteristics by means of modeling, simulation and experiments. This paper presents an innovative underground displacement joint inversion method that mixes a specific forward modeling approach with an approximate optimization inversion procedure. It can realize a joint inversion of underground horizontal and vertical displacement for the proposed 3D sensor. Comparative studies have been conducted between the measured and inversed parameters of underground horizontal and vertical displacements under a variety of experimental and inverse conditions. The results showed that when the experimentally measured horizontal and vertical displacements are both varied within 0 ~ 30 mm, the horizontal and vertical displacement inversion discrepancies are generally less than 3 mm and 1 mm, respectively, under three kinds of simulated underground displacement monitoring circumstances. This implies that our proposed underground displacement joint inversion method is robust and efficient in predicting the measured values of underground horizontal and vertical displacements for the proposed sensor. PMID:25871714

  12. Deriving 3d Point Clouds from Terrestrial Photographs - Comparison of Different Sensors and Software

    NASA Astrophysics Data System (ADS)

    Niederheiser, Robert; Mokroš, Martin; Lange, Julia; Petschko, Helene; Prasicek, Günther; Oude Elberink, Sander

    2016-06-01

    Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive and effective approach to 3D modelling. However, the important choice of which sensor and which software to use is not straightforward and needs consideration, as it will affect the resulting 3D point cloud and its derivatives. We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes and price segments of cameras. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of a vegetated rock face from identical positions with all sensors. Then we compared the results of the different software packages regarding the ease of the workflow, visual appeal, similarity and quality of the point cloud. While PhotoScan and Pix4D offer the most user-friendly workflows, they are also "black-box" programmes giving only little insight into their processing. Unsatisfactory results can only be addressed by modifying settings within a module. The combined workflow of Visual SFM, SURE and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software, as it is less user-friendly. However, MicMac offers the most possibilities to influence the processing workflow. The resulting point clouds of PhotoScan and MicMac are the most appealing.

  13. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Astrophysics Data System (ADS)

    Nandhakumar, N.; Smith, Philip W.

    1993-12-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single-beam, scanning, time-of-flight sensors, because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the non-negligible delay between the sampling of successive image pixels. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but this leads to the motion-structure paradox: most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.
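
    The correction described above, removing the apparent distortion once the motion is known, can be illustrated for the simple case of constant linear velocity and yaw rate: each sample is moved back to where the object was at the start of the scan. This is a schematic of the general de-skewing idea under those assumptions, not the paper's iterative estimator.

    import numpy as np

    def deskew_scan(points, timestamps, lin_vel, yaw_rate):
        """Undo scan distortion for a rigid object moving with constant linear
        velocity lin_vel (3-vector) and yaw rate (rad/s) about the z axis:
        x(t) = Rz(w*t) @ x(0) + v*t  =>  x(0) = Rz(w*t).T @ (x(t) - v*t)."""
        t = timestamps - timestamps[0]
        out = np.empty_like(points)
        for i, (p, ti) in enumerate(zip(points, t)):
            c, s = np.cos(-yaw_rate * ti), np.sin(-yaw_rate * ti)
            Rz_inv = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
            out[i] = Rz_inv @ (p - np.asarray(lin_vel) * ti)
        return out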

  14. Design of a 3D-IC multi-resolution digital pixel sensor

    NASA Astrophysics Data System (ADS)

    Brochard, N.; Nebhen, J.; Dubois, J.; Ginhac, D.

    2016-04-01

    This paper presents a digital pixel sensor (DPS) integrating a sigma-delta analog-to-digital converter (ADC) at the pixel level. The digital pixel includes a photodiode, a delta-sigma modulator and a digital decimation filter. It features an adaptive dynamic range and multiple resolutions (up to 10-bit) with high linearity. A specific row decoder and column decoder are also designed to permit reading a chosen pixel in the matrix together with its 4 x 4 neighborhood. Finally, a complete design in the CMOS 130 nm 3D-IC FaStack Tezzaron technology is also described, revealing a high fill factor of about 80%.
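
    The pixel's conversion chain, a delta-sigma modulator followed by a decimation filter, is easy to illustrate numerically: a first-order modulator turns the photodiode level into a one-bit stream whose pulse density encodes the signal, and averaging over longer windows trades output rate for resolution (the multi-resolution behavior mentioned above). A hypothetical first-order model, not the chip's actual circuit:

    import numpy as np

    def sigma_delta_pixel(samples, decimation=64):
        """First-order delta-sigma modulation of a normalized input in [0, 1],
        followed by boxcar decimation. Longer decimation windows give more
        effective bits at a lower output rate (multi-resolution readout)."""
        integrator, feedback, bitstream = 0.0, 0.0, []
        for x in samples:
            integrator += x - feedback           # integrate the error
            bit = 1 if integrator > 0 else 0     # one-bit quantizer
            bitstream.append(bit)
            feedback = float(bit)                # DAC feedback of the last output
        bits = np.array(bitstream, dtype=float)
        n = len(bits) // decimation
        return bits[: n * decimation].reshape(n, decimation).mean(axis=1)

    # e.g. a constant light level of 0.3 yields decimated values close to 0.3.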

  15. 3D imaging of translucent media with a plenoptic sensor based on phase space optics

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzhe; Shu, Bohong; Du, Shaojun

    2015-05-01

    Traditional stereo imaging technology does not work for dynamic translucent media, because such media carry no obvious characteristic patterns and the use of multiple cameras is not permitted in most cases. Phase space optics can solve the problem by extracting depth information directly from the "space-spatial frequency" distribution of the target, obtained by a plenoptic sensor with a single lens. This paper discusses the representation of depth information in phase space data, and the corresponding calculation algorithms for different degrees of transparency. A 3D imaging example of a waterfall is given at the end.

  16. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  17. Flying triangulation--an optical 3D sensor for the motion-robust acquisition of complex objects.

    PubMed

    Ettl, Svenja; Arold, Oliver; Yang, Zheng; Häusler, Gerd

    2012-01-10

    Three-dimensional (3D) shape acquisition is difficult if an all-around measurement of an object is desired or if a relative motion between object and sensor is unavoidable. An optical sensor principle is presented-we call it "flying triangulation"-that enables a motion-robust acquisition of 3D surface topography. It combines a simple handheld sensor with sophisticated registration algorithms. An easy acquisition of complex objects is possible-just by freely hand-guiding the sensor around the object. Real-time feedback of the sequential measurement results enables a comfortable handling for the user. No tracking is necessary. In contrast to most other eligible sensors, the presented sensor generates 3D data from each single camera image.

  18. 3D modeling and characterization of a calorimetric flow rate sensor for sweat rate sensing applications

    NASA Astrophysics Data System (ADS)

    Iftekhar, Ahmed Tashfin; Ho, Jenny Che-Ting; Mellinger, Axel; Kaya, Tolga

    2017-03-01

    Sweat-based physiological monitoring has been intensively explored in the last decade with the hope of developing real-time hydration monitoring devices. Although the content of sweat (electrolytes, lactate, urea, etc.) provides significant information about physiology, it is also very important to know the sweat rate at the time of sweat content measurements, because the sweat rate is known to alter the concentrations of sweat compounds. We developed a calorimetry-based flow rate sensor using polydimethylsiloxane that is suitable for sweat rate applications. Our simple approach of using temperature-based flow rate detection can easily be adapted to multiple sweat collection and analysis devices. Moreover, we have developed a 3D finite element analysis model of the device using COMSOL Multiphysics™ and verified the flow rate measurements. The experiments investigated flow rate values from 0.3 μl/min up to 2.1 ml/min, which covers the human sweat rate range (0.5 μl/min-10 μl/min). The 3D model simulations and analytical model calculations covered an even wider range in order to elucidate the main physical mechanisms of the device. With a verified 3D model, different environmental heat conditions could be further studied to shed light on the physiology of the sweat rate.

  19. 3D imaging for ballistics analysis using chromatic white light sensor

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Hildebrandt, Mario; Dittmann, Jana; Clausing, Eric; Fischer, Robert; Vielhauer, Claus

    2012-03-01

    The novel application of sensing technology based on chromatic white light (CWL) gives new insight into the ballistic analysis of cartridge cases. The CWL sensor uses a beam of white light to acquire highly detailed topography and luminance data simultaneously. The proposed 3D imaging system combines the advantages of 3D and 2D image processing algorithms in order to automate the extraction of firearm-specific toolmarks impressed on fired specimens. The most important characteristics of a fired cartridge case are the type of the breech face marking as well as the size, shape and location of the extractor, ejector and firing pin marks. The feature extraction algorithm normalizes the casing surface and consistently searches for the appropriate distortions on the rim and on the primer. The location of the firing pin mark in relation to the lateral scratches on the rim provides unique rotation-invariant characteristics of the firearm mechanism. Additional characteristics are the volume and shape of the firing pin mark. The experimental evaluation relies on a data set of 15 cartridge cases fired from three 9 mm firearms of different manufacturers. The results show the very high potential of 3D imaging systems for casing-based computer-aided firearm identification, which will prospectively support human expertise.

  20. Fast 3D modeling in complex environments using a single Kinect sensor

    NASA Astrophysics Data System (ADS)

    Yue, Haosong; Chen, Weihai; Wu, Xingming; Liu, Jingmeng

    2014-02-01

    Three-dimensional (3D) modeling technology has been widely used in reverse engineering, urban planning, robot navigation, and many other applications. How to build a dense model of the environment with limited processing resources is still a challenging topic. A fast 3D modeling algorithm that only uses a single Kinect sensor is proposed in this paper. For every color image captured by the Kinect, corner feature extraction is carried out first. Then a spiral search strategy is utilized to select a region of interest (ROI) that contains enough feature corners. Next, the iterative closest point (ICP) method is applied to the points in the ROI to align consecutive data frames. Finally, an analysis of which areas can be walked through by human beings is presented. Comparative experiments with the well-known KinectFusion algorithm have been carried out, and the results demonstrate that the accuracy of the proposed algorithm matches KinectFusion while the computing speed is nearly twice that of KinectFusion. 3D modeling of two public garden scenes and traversable-area analysis in these regions further verified the feasibility of our algorithm.
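
    The frame-to-frame alignment step named above, ICP over the points in the ROI, alternates nearest-neighbor matching with a closed-form rigid-transform solve. A compact, generic point-to-point version (not the authors' implementation) is sketched below.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iters=30):
        """Minimal point-to-point ICP: returns R, t aligning source onto target."""
        R, t = np.eye(3), np.zeros(3)
        src = source.copy()
        tree = cKDTree(target)
        for _ in range(iters):
            _, idx = tree.query(src)           # nearest target point per source point
            matched = target[idx]
            # Best rigid transform for the current correspondences (Kabsch/SVD).
            cs, cm = src.mean(axis=0), matched.mean(axis=0)
            H = (src - cs).T @ (matched - cm)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            Ri = Vt.T @ D @ U.T                # reflection-safe rotation
            ti = cm - Ri @ cs
            src = src @ Ri.T + ti
            R, t = Ri @ R, Ri @ t + ti         # accumulate the total transform
        return R, t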

  1. A Novel 3D Multilateration Sensor Using Distributed Ultrasonic Beacons for Indoor Navigation

    PubMed Central

    Kapoor, Rohan; Ramasamy, Subramanian; Gardi, Alessandro; Bieber, Chad; Silverberg, Larry; Sabatini, Roberto

    2016-01-01

    Navigation and guidance systems are a critical part of any autonomous vehicle. In this paper, a novel sensor grid using 40 kHz ultrasonic transmitters is presented for adoption in indoor 3D positioning applications. In the proposed technique, a vehicle measures the arrival times of incoming ultrasonic signals and calculates its position without broadcasting to the grid. This system allows for conducting silent or covert operations and can also be used for the simultaneous navigation of a large number of vehicles. The transmitters and receivers employed are first described. Transmission lobe patterns and receiver directionality determine the geometry of transmitter clusters. The range and accuracy of measurements dictate the number of sensors required to navigate in a given volume. Laboratory experiments were performed in which a small array of transmitters was set up and the sensor system was tested for position accuracy. The prototype system is shown to have a 1-sigma position error of about 16 cm, with errors between 7 and 11 cm in the local horizontal coordinates. This research work provides foundations for the future development of ultrasonic navigation sensors for a variety of autonomous vehicle applications. PMID:27740604
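
    The passive position solve described above, computing location from the arrival times of beacon signals, is structurally the GPS pseudo-range problem: with at least four beacons one can estimate the three position coordinates plus the receiver clock bias by Gauss-Newton iteration. A hedged, illustrative sketch (not the authors' solver):

    import numpy as np

    def multilaterate(beacons, toa, c=343.0, iters=20):
        """Estimate receiver position and clock bias from times of arrival of
        pings emitted at (grid) time zero by beacons at known positions
        (beacons: (N, 3) array, N >= 4; toa: (N,) seconds; c: speed of sound)."""
        x = np.append(beacons.mean(axis=0), 0.0)      # [px, py, pz, range bias]
        for _ in range(iters):
            p, b = x[:3], x[3]
            d = np.linalg.norm(beacons - p, axis=1)   # ranges at current guess
            resid = c * np.asarray(toa) - (d + b)     # measured minus predicted
            J = np.hstack([(p - beacons) / d[:, None],
                           np.ones((len(beacons), 1))])
            dx, *_ = np.linalg.lstsq(J, resid, rcond=None)
            x += dx
        return x[:3], x[3] / c                        # position, clock offset (s)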

  2. Creation of 3D multi-body orthodontic models by using independent imaging sensors.

    PubMed

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-02-05

    In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured-light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision to identify target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning.

  3. 3D customized and flexible tactile sensor using a piezoelectric nanofiber mat and sandwich-molded elastomer sheets

    NASA Astrophysics Data System (ADS)

    Bit Lee, Han; Kim, Young Won; Yoon, Jonghun; Lee, Nak Kyu; Park, Suk-Hee

    2017-04-01

    We developed a skin-conformal flexible sensor in which three-dimensional (3D) free-form elastomeric sheets were harmoniously integrated with a piezoelectric nanofiber mat. The elastomeric sheets were produced by polydimethylsiloxane (PDMS) molding using a 3D-printed mold assembly, which was adaptively designed from 3D-scanned skin surface geometry. The mold assembly, fabricated using a multi-material 3D printer, was composed of a pair of upper/lower mold parts and an interconnecting hinge, with material properties characterized by different flexibilities. As a result of the appropriate deformabilities of the upper mold part and hinge, the skin-conformal PDMS structures were successfully sandwich-molded and demolded with good repeatability. An electrospun poly(vinylidene fluoride trifluoroethylene) nanofiber mat was prepared as the piezoelectric active layer and integrated with the 3D elastomeric parts. We confirmed that the highly responsive sensing performance of the 3D integrated sensor was identical to that of a flat sensor in terms of sensitivity and the linearity of the input–output relationship. The close 3D conformal skin contact of the flexible sensor enabled discernable perception of physical stimuli at various scales, such as tactile force and even minute skin deformation caused by the tester's pulse. Collectively, from the 3D scanning design to the practical application, our achievements can potentially meet the needs of tailored human interfaces in the field of wearable devices and human-like robots.

  4. Design and Sensitivity Analysis Simulation of a Novel 3D Force Sensor Based on a Parallel Mechanism

    PubMed Central

    Yang, Eileen Chih-Ying

    2016-01-01

    Automated force measurement is one of the most important technologies in realizing intelligent automation systems. However, while many methods are available for micro-force sensing, measuring large three-dimensional (3D) forces and loads remains a significant challenge. Accordingly, the present study proposes a novel 3D force sensor based on a parallel mechanism. The transformation function and sensitivity index of the proposed sensor are analytically derived. The simulation results show that the sensor has a larger effective measuring capability than traditional force sensors. Moreover, the sensor has a greater measurement sensitivity for horizontal forces than for vertical forces over most of the measurable force region. In other words, compared to traditional force sensors, the proposed sensor is more sensitive to shear forces than normal forces. PMID:27999246

  5. A High-Resolution 3D Weather Radar, MSG, and Lightning Sensor Observation Composite

    NASA Astrophysics Data System (ADS)

    Diederich, Malte; Senf, Fabian; Wapler, Kathrin; Simmer, Clemens

    2013-04-01

    Within the research group 'Object-based Analysis and SEamless prediction' (OASE) of the Hans Ertel Centre for Weather Research programme (HerZ), a data composite containing weather radar, lightning sensor, and Meteosat Second Generation observations is being developed for use in object-based weather analysis and nowcasting. At present, a 3D merging scheme combines measurements of the Bonn and Jülich dual-polarimetric weather radar systems (data provided by the TR32 and TERENO projects) into a 3-dimensional polar-stereographic volume grid, with 500 meters horizontal and 250 meters vertical resolution. The merging takes into account and compensates for various observational error sources, such as attenuation through hydrometeors, beam blockage through topography and buildings, minimum detectable signal as a function of noise threshold, non-hydrometeor echoes like insects, and interference from other radar systems. In addition, the effect of convection during the 5-minute radar volume scan pattern is mitigated through the calculation of advection vectors from subsequent scans and their use for advection correction when projecting the measurements into space for any desired timestamp. The Meteosat Second Generation rapid scan service provides a scan in 12 visible and infrared spectral channels every 5 minutes over Germany and Europe. These scans, together with the derived microphysical cloud parameters, are projected onto the same polar-stereographic grid used for the radar data. Lightning counts from the LINET lightning sensor network are also provided for every 2D grid pixel. The combined 3D radar and 2D MSG/LINET data are stored in a fully documented netCDF file for every 5-minute interval, ready for tracking and object-based weather analysis. At the moment, the 3D data only cover the Bonn and Jülich area, but the algorithms are planned to be adapted to the newly conceived DWD polarimetric C-band 5-minute-interval volume scan strategy.

  6. Automated Sensor for 3-D Reconstruction of Optical Emission from RF Plasmas

    NASA Astrophysics Data System (ADS)

    Collard, Corey; Shannon, S.; Brake, M. L.; Holloway, James Paul

    1999-10-01

    Three-dimensional images are obtained by using an automated scanning sensor which collects optical emission from an RF (13.56 MHz) discharge in a capacitively coupled GEC cell. The sensor scans a plane parallel to the electrode surface and transmits the plasma spectral emission through a fiber-optic cable to a monochromator. The fiber optic is mounted on a motorized rotational stage attached to a manual vertical translation stage. Wedges of light (argon at 750.4 nm) are collected as the fiber scans across the plasma. The data are digitized and stored so that they can be input into an algorithm which uses a Tikhonov regularization method to reconstruct the emissivity as a function of radial position. By varying the height of the sensor, a 3-D plot of the plasma emission can be obtained. Three-dimensional plots of plasmas run at 75, 100, 150 and 200 V peak-to-peak at pressures of 100, 250, 500 and 1000 mTorr were obtained. The non-uniformity of the light emission as a function of pressure and power will be discussed.
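
    The reconstruction step, recovering radial emissivity from chord-integrated brightness with Tikhonov regularization, amounts to building a chord-length geometry matrix and solving a regularized least-squares problem. Below is an illustrative sketch under an annular-shell (onion-peeling) discretization, which is one standard way to set this up, not necessarily the authors' exact formulation.

    import numpy as np

    def chord_matrix(shell_radii, impact_params):
        """A[i, j] = path length of the line of sight with impact parameter
        impact_params[i] inside the annulus with outer radius shell_radii[j];
        chord through a disk of radius R at offset y: 2*sqrt(R^2 - y^2)."""
        edges = np.concatenate([[0.0], shell_radii])
        chord = lambda R, y: 2.0 * np.sqrt(max(R * R - y * y, 0.0))
        A = np.zeros((len(impact_params), len(shell_radii)))
        for i, y in enumerate(impact_params):
            for j in range(len(shell_radii)):
                A[i, j] = chord(edges[j + 1], y) - chord(edges[j], y)
        return A

    def tikhonov_invert(A, b, lam=1e-2):
        """Solve min ||A x - b||^2 + lam^2 ||x||^2 for the shell emissivities."""
        return np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)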

  7. 3D silicon sensors: Design, large area production and quality assurance for the ATLAS IBL pixel detector upgrade

    NASA Astrophysics Data System (ADS)

    Da Via, Cinzia; Boscardin, Maurizio; Dalla Betta, Gian-Franco; Darbo, Giovanni; Fleta, Celeste; Gemme, Claudia; Grenier, Philippe; Grinstein, Sebastian; Hansen, Thor-Erik; Hasi, Jasmine; Kenney, Chris; Kok, Angela; Parker, Sherwood; Pellegrini, Giulio; Vianello, Elisa; Zorzi, Nicola

    2012-12-01

    3D silicon sensors, in which electrodes penetrate the silicon substrate fully or partially, have successfully been fabricated in different processing facilities in Europe and the USA. The key to 3D fabrication is the use of plasma micro-machining to etch narrow, deep vertical openings, allowing dopants to be diffused in to form the electrodes of p-i-n junctions. Similar openings can be used at the sensor's edge to reduce the perimeter's dead volume to as low as ~4 μm. Since 2009, four industrial partners of the 3D ATLAS R&D Collaboration have carried out a joint effort aimed at one common design and a compatible processing strategy for the production of 3D sensors for the LHC Upgrade, and in particular for the ATLAS pixel Insertable B-Layer (IBL). In this project, aimed at installation in 2013, a new layer will be inserted as close as 3.4 cm from the proton beams inside the existing pixel layers of the ATLAS experiment. The detector's proximity to the interaction point will therefore require new radiation-hard technologies for both sensors and front-end electronics. The latter, called FE-I4, is processed at IBM and is the biggest front end of this kind ever designed, with a surface of ~4 cm2. The performance of 3D devices from several wafers was evaluated before and after bump-bonding. Key design aspects, device fabrication plans and quality assurance tests during the 3D sensor prototyping phase are discussed in this paper.

  8. Quality Assessment of 3d Reconstruction Using Fisheye and Perspective Sensors

    NASA Astrophysics Data System (ADS)

    Strecha, C.; Zoller, R.; Rutishauser, S.; Brot, B.; Schneider-Zapp, K.; Chovancova, V.; Krull, M.; Glassey, L.

    2015-03-01

    Recent mathematical advances, growing alongside the use of unmanned aerial vehicles, have not only overcome the restriction on roll and pitch angles during flight but also enabled the use of non-metric cameras in photogrammetric methods, providing more flexibility in sensor selection. Fisheye cameras, for example, advantageously provide images with wide coverage; however, these images are extremely distorted and their non-uniform resolution makes them more difficult to use for mapping or terrestrial 3D modelling. In this paper, we compare the usability of different camera-lens combinations, using the complete workflow implemented in Pix4Dmapper to achieve the final terrestrial reconstruction result of a well-known historical site in Switzerland: the Chillon Castle. We assess the accuracy of the outcome acquired by consumer cameras with perspective and fisheye lenses, comparing the results to a laser scanner point cloud.

  9. Prototyping a Sensor Enabled 3d Citymodel on Geospatial Managed Objects

    NASA Astrophysics Data System (ADS)

    Kjems, E.; Kolář, J.

    2013-09-01

    One of the major development efforts within the GI Science domain points at sensor-based information and the usage of real-time information from geographically referenced features in general. At the same time, 3D city models are mostly justified as objects for visualization purposes rather than as the foundation of a geographic data representation of the world. The combination of 3D city models and real-time information-based systems, though, can provide a whole new setup for data fusion within an urban environment and provide time-critical information, preserving our limited resources in the most sustainable way. Using 3D models with consistent object definitions gives us the possibility to avoid troublesome abstractions of reality and to design even complex urban systems fusing information from various sources of data. Such systems are difficult to design with the traditional software development approach based on major software packages and traditional data exchange. The data stream varies from urban domain to urban domain and from system to system, which is why it is almost impossible to design one complete software complex that takes care of all thinkable instances, now and in the future. On several occasions we have advocated a new and advanced formulation of real-world features using the concept of Geospatial Managed Objects (GMO). This paper presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily by the Norwegian Research Council, in which the concept of GMOs has been applied in various situations on various running platforms of an urban system. The paper focuses on user experiences and interfaces rather than on core technical and developmental issues. The project primarily focused on prototyping rather than realistic implementations, although the results concerning applicability are quite clear.

  10. The valuable use of Microsoft Kinect™ sensor 3D kinematic in the rehabilitation process in basketball

    NASA Astrophysics Data System (ADS)

    Braidot, Ariel; Favaretto, Guillermo; Frisoli, Melisa; Gemignani, Diego; Gumpel, Gustavo; Massuh, Roberto; Rayan, Josefina; Turin, Matías

    2016-04-01

    Subjects who practice sports, either as professionals or amateurs, have a high incidence of knee injuries. Few publications study lateral-structure knee injuries, including meniscal tears or chondral injury without anterior cruciate ligament rupture, from a kinematic point of view. The use of standard motion capture systems for measuring outdoor sports is hard to implement for many operational reasons. The recently released Microsoft Kinect™ is a sensor that was developed to track movements for gaming purposes and has seen increasing use in clinical applications. The fact that this device is a simple and portable tool allows the acquisition of data on common sport movements in the field. The development and testing of a set of protocols for 3D kinematic measurement using the Microsoft Kinect™ system is presented in this paper. The 3D kinematic evaluation algorithms were developed from the available information and with the use of Microsoft's Software Development Kit 1.8 (SDK). Along with this, an algorithm for calculating the lower-limb joint angles was implemented. Thirty healthy adult volunteers were measured, using five different recording protocols for sport-characteristic gestures that involve a high knee injury risk in athletes.
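
    The lower-limb joint angle computation reduces to the angle between two segment vectors at each joint. A minimal sketch using three skeleton joint positions follows; the joint names mirror the Kinect SDK skeleton convention, and the exact protocol details remain the authors'.

    import numpy as np

    def joint_angle(a, b, c):
        """Angle at joint b (degrees) formed by 3D points a-b-c, e.g. knee
        flexion from the hip, knee and ankle positions of a Kinect skeleton."""
        u = np.asarray(a, float) - np.asarray(b, float)
        v = np.asarray(c, float) - np.asarray(b, float)
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    # knee = joint_angle(skel["HipRight"], skel["KneeRight"], skel["AnkleRight"])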

  11. In-home hierarchical posture classification with a time-of-flight 3D sensor.

    PubMed

    Diraco, Giovanni; Leone, Alessandro; Siciliano, Pietro

    2014-01-01

    A non-invasive technique for posture classification suitable to be used in several in-home scenarios is proposed and preliminary validation results are presented. 3D point cloud sequences were acquired using a single time-of-flight sensor working in a privacy preserving modality and they were processed with a low power embedded PC. In order to satisfy different application requirements (e.g. covered distance range, processing speed and discrimination capabilities), a twofold discrimination approach was investigated in which features were hierarchically arranged from coarse to fine by exploiting both topological and volumetric representations. The topological representation encoded the intrinsic topology of the body's shape using a skeleton-based structure, thus guaranteeing invariance to scale, rotations and postural changes and achieving a high level of detail with a moderate computational cost. The volumetric representation, on the other hand, described features in terms of 3D cylindrical histograms, working within a wider range of distances in a faster way while also guaranteeing good invariance properties. The discrimination capabilities were evaluated in four different real-home scenarios related to the fields of ambient assisted living and homecare, namely "dangerous event detection", "anomalous behaviour detection", "activities recognition" and "natural human-ambient interaction". For each mentioned scenario, the discrimination capabilities were evaluated in terms of invariance to viewpoint changes, representation capabilities and classification performance, achieving promising results. The two feature representation approaches exhibited complementary characteristics, showing high reliability with classification rates greater than 97%.
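
    The volumetric descriptor is only described at a high level above; a minimal sketch of a 3D cylindrical histogram (the radius/azimuth/height bin layout and normalization are assumptions, not the authors' exact feature) might look as follows:

      import numpy as np

      def cylindrical_histogram(points, center, n_r=4, n_phi=8, n_z=6,
                                r_max=1.0, z_min=0.0, z_max=2.0):
          """Bin a 3D point cloud (N x 3) into radius/azimuth/height cells
          around a vertical axis through `center` (assumed body centroid)."""
          p = np.asarray(points, float) - np.asarray(center, float)
          r = np.hypot(p[:, 0], p[:, 1])
          phi = np.arctan2(p[:, 1], p[:, 0])            # -pi .. pi
          hist, _ = np.histogramdd(
              np.column_stack([r, phi, p[:, 2]]),
              bins=(n_r, n_phi, n_z),
              range=((0, r_max), (-np.pi, np.pi), (z_min, z_max)))
          return hist / max(len(p), 1)                  # normalize by point count

      cloud = np.random.rand(5000, 3) * [1.0, 1.0, 2.0]  # toy stand-in for ToF data
      feature = cylindrical_histogram(cloud, center=cloud.mean(axis=0)).ravel()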

  12. Evaluation of the Kinect™ sensor for 3-D kinematic measurement in the workplace.

    PubMed

    Dutta, Tilak

    2012-07-01

    Recording posture and movement is important for determining risk of musculoskeletal injury in the workplace, but existing motion capture systems are not suited for field work. Estimates of the 3-D relative positions of four 0.10 m cubes from the Kinect were compared to estimates from a Vicon motion capture system to determine whether the hardware sensing components were sensitive enough to be used as a portable 3-D motion capture system for workplace ergonomic assessments. The root-mean-squared errors (SD) were 0.0065 m (0.0048 m), 0.0109 m (0.0059 m), and 0.0057 m (0.0042 m) in the x, y and z directions (with the x axis to the right, the y axis away from the sensor and the z axis upwards). These data were collected over a range of 1.0-3.0 m from the device, covering a field of view of 54.0 degrees horizontally and 39.1 degrees vertically. Requirements for software, hardware and subject preparation were also considered to determine the usability of the Kinect in the field.

  13. Image synchronization for 3D application using the NanEye sensor

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Based on Awaiba's NanEye CMOS image sensor family and an FPGA platform with USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal form factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor, based on the error between the measured line period and the desired line period. To ensure phase synchronization between the frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the Slave cameras to monitor the Master's line and frame period and to adjust their own so as to achieve phase and frequency synchronization. The result of this work will allow the realization of 3D stereo vision equipment smaller than 3 mm in diameter for the medical endoscopic context, such as endoscopic surgical robotics or minimally invasive surgery.
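
    The regulation scheme is described only qualitatively above; a toy proportional controller in that spirit (the gains, limits, units and sign convention are illustrative assumptions, not values from the paper) could look like this:

      def sync_step(v_supply, line_period_meas, line_period_target,
                    k_p=0.002, v_min=1.6, v_max=2.4):
          """One proportional update of a camera's supply voltage.

          Assumed behaviour, per the description above: a self-timed camera
          running too slowly (measured line period above target, in
          microseconds) is sped up by raising its supply voltage.
          """
          error = line_period_meas - line_period_target   # microseconds
          v_new = v_supply + k_p * error                  # slow camera -> raise V
          return min(max(v_new, v_min), v_max)            # respect sensor limits

      # Slave cameras would use the Master's measured line period as their
      # target, locking frequency; frame phase is then aligned by the same loop.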

  14. MBE based HgCdTe APDs and 3D LADAR sensors

    NASA Astrophysics Data System (ADS)

    Jack, Michael; Asbrock, Jim; Bailey, Steven; Baley, Diane; Chapman, George; Crawford, Gina; Drafahl, Betsy; Herrin, Eileen; Kvaas, Robert; McKeag, William; Randall, Valerie; De Lyon, Terry; Hunter, Andy; Jensen, John; Roberts, Tom; Trotta, Patrick; Cook, T. Dean

    2007-04-01

    Raytheon is developing HgCdTe APD arrays and sensor chip assemblies (SCAs) for scanning and staring LADAR systems. The nonlinear characteristics of APDs operating in moderate-gain mode place severe requirements on layer thickness and doping uniformity as well as on defect density. MBE-based HgCdTe APD arrays, engineered for high performance, meet the stringent requirements of low defects, excellent uniformity and reproducibility. In situ controls for alloy composition and substrate temperature have been implemented at HRL, LLC and Raytheon Vision Systems and enable consistent run-to-run results. The novel epitaxial design, using a separate absorption-multiplication (SAM) architecture, enables the realization of the unique advantages of HgCdTe, including tunable wavelength, low noise, high fill factor, low crosstalk, and ambient operation. Focal planes have been built by integrating MBE detector arrays processed in a 2 x 128 format with a 2 x 128 scanning ROIC. The ROIC reports both range and intensity and can detect multiple laser returns, with each pixel autonomously reporting its return. FPAs show exceptionally good bias uniformity, <1% at an average gain of 10. A recent breakthrough in device design has resulted in APDs operating at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW) and GHz bandwidth. 3D LADAR sensors utilizing these FPAs have been integrated and demonstrated both at Raytheon Missile Systems and at the Naval Air Warfare Center Weapons Division at China Lake. Excellent spatial and range resolution has been achieved, with 3D imagery demonstrated at both short and long range. Ongoing development, under an Air Force sponsored MANTECH program, of high-performance HgCdTe MBE APDs grown on large silicon wafers promises significant FPA cost reduction, both by increasing the number of arrays on a given wafer and by enabling automated processing.

  15. a Comparison among Different Optimization Levels in 3d Multi-Sensor Models. a Test Case in Emergency Context: 2016 Italian Earthquake

    NASA Astrophysics Data System (ADS)

    Chiabrando, F.; Sammartano, G.; Spanò, A.

    2017-02-01

    In sudden emergency contexts affecting urban centres and built heritage, the latest Geomatics techniques must meet the demands of damage documentation, risk assessment, management and data sharing as efficiently as possible, in relation to the danger conditions, the accessibility constraints of the areas and the tight deadlines. In recent times, Unmanned Aerial Vehicles (UAV) equipped with cameras have become more and more involved in aerial survey and reconnaissance missions, and they are proving very cost-effective for 3D documentation and preliminary damage assessment. UAV equipment with low-cost sensors must become suitable for every documentation situation, above all in frameworks of damage and uncertainty. Rapid acquisition times and low-cost sensors are the challenging marks, possibly at the price of more time-consuming processing. The paper analyzes and tries to classify the information content of 3D aerial and terrestrial models and the importance of the metric and non-metric information that can be extracted from them, which should be suitable for further uses such as structural analysis. The test area comes from an experience of Team Direct of Politecnico di Torino in central Italy, where a strong earthquake occurred in August 2016. The study is carried out on a stand-alone damaged building in Pescara del Tronto (AP), with a multi-sensor 3D survey. The aim is to evaluate the contribution of quick terrestrial and aerial documentation by a SLAM-based LiDAR and a camera-equipped multirotor UAV, for a first reconnaissance inspection and modelling in terms of level of detail and metric and non-metric information.

  16. Spatio-temporal interpolation of soil moisture in 3D+T using automated sensor network data

    NASA Astrophysics Data System (ADS)

    Gasch, C.; Hengl, T.; Magney, T. S.; Brown, D. J.; Gräler, B.

    2014-12-01

    Soil sensor networks provide frequent in situ measurements of dynamic soil properties at fixed locations, producing data in 2- or 3-dimensions and through time (2D+T and 3D+T). Spatio-temporal interpolation of 3D+T point data produces continuous estimates that can then be used for prediction at unsampled times and locations, as input for process models, and can simply aid in visualization of properties through space and time. Regression-kriging with 3D and 2D+T data has successfully been implemented, but currently the field of geostatistics lacks an analytical framework for modeling 3D+T data. Our objective is to develop robust 3D+T models for mapping dynamic soil data that has been collected with high spatial and temporal resolution. For this analysis, we use data collected from a sensor network installed on the R.J. Cook Agronomy Farm (CAF), a 37-ha Long-Term Agro-Ecosystem Research (LTAR) site in Pullman, WA. For five years, the sensors have collected hourly measurements of soil volumetric water content at 42 locations and five depths. The CAF dataset also includes a digital elevation model and derivatives, a soil unit description map, crop rotations, electromagnetic induction surveys, daily meteorological data, and seasonal satellite imagery. The soil-water sensor data, combined with the spatial and temporal covariates, provide an ideal dataset for developing 3D+T models. The presentation will include preliminary results and address main implementation strategies.
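
    The 3D+T geostatistical model itself is still under development in the work above; as a minimal stand-in, the sketch below interpolates soil moisture at an unsampled space-time point by inverse-distance weighting in a scaled space-time metric (the anisotropy scale mixing metres and hours is an assumption, not from the study):

      import numpy as np

      def idw_3dt(query, obs_xyzt, obs_vals, time_scale=10.0, power=2.0):
          """Inverse-distance-weighted estimate at an (x, y, z, t) query point.

          obs_xyzt   : (N, 4) observation coordinates (m, m, m, hours)
          time_scale : metres treated as equivalent to one hour (assumed)
          """
          d = obs_xyzt - np.asarray(query, float)
          d[:, 3] *= time_scale                       # stretch time into metres
          dist = np.linalg.norm(d, axis=1)
          if np.any(dist < 1e-9):                     # exact hit: return it
              return float(obs_vals[np.argmin(dist)])
          w = dist ** -power
          return float(np.sum(w * obs_vals) / np.sum(w))

      rng = np.random.default_rng(0)
      pts = rng.uniform([0, 0, 0.1, 0], [600, 600, 1.5, 24], size=(200, 4))
      vals = 0.25 + 0.05 * np.sin(pts[:, 3] / 24 * 2 * np.pi)   # toy moisture data
      print(idw_3dt([300, 300, 0.3, 12], pts, vals))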

  17. 3D flash lidar performance in flight testing on the Morpheus autonomous, rocket-propelled lander to a lunar-like hazard field

    NASA Astrophysics Data System (ADS)

    Roback, Vincent E.; Amzajerdian, Farzin; Bulyshev, Alexander E.; Brewster, Paul F.; Barnes, Bruce W.

    2016-05-01

    For the first time, a 3-D imaging Flash Lidar instrument has been used in flight to scan a lunar-like hazard field, build a 3-D Digital Elevation Map (DEM), identify a safe landing site, and, in concert with an experimental Guidance, Navigation, and Control system, help to guide the Morpheus autonomous, rocket-propelled, free-flying lander to that safe site on the hazard field. The flight tests served as the TRL 6 demo of the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) system and included launch from NASA-Kennedy, a lunar-like descent trajectory from an altitude of 250m, and landing on a lunar-like hazard field of rocks, craters, hazardous slopes, and safe sites 400m down-range. The ALHAT project developed a system capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar is a second generation, compact, real-time, air-cooled instrument. Based upon extensive on-ground characterization at flight ranges, the Flash Lidar was shown to be capable of imaging hazards from a slant range of 1 km with an 8 cm range precision and a range accuracy better than 35 cm, both at 1-σ. The Flash Lidar identified landing hazards as small as 30 cm from the maximum slant range which Morpheus could achieve (450 m); however, under certain wind conditions it was susceptible to scintillation arising from air heated by the rocket engine and to pre-triggering on a dust cloud created during launch and transported down-range by wind.

  18. 3-D Flash Lidar Performance in Flight Testing on the Morpheus Autonomous, Rocket-Propelled Lander to a Lunar-Like Hazard Field

    NASA Technical Reports Server (NTRS)

    Roback, Vincent E.; Amzajerdian, Farzin; Bulyshev, Alexander E.; Brewster, Paul F.; Barnes, Bruce W.

    2016-01-01

    For the first time, a 3-D imaging Flash Lidar instrument has been used in flight to scan a lunar-like hazard field, build a 3-D Digital Elevation Map (DEM), identify a safe landing site, and, in concert with an experimental Guidance, Navigation, and Control (GN&C) system, help to guide the Morpheus autonomous, rocket-propelled, free-flying lander to that safe site on the hazard field. The flight tests served as the TRL 6 demo of the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) system and included launch from NASA-Kennedy, a lunar-like descent trajectory from an altitude of 250m, and landing on a lunar-like hazard field of rocks, craters, hazardous slopes, and safe sites 400m down-range. The ALHAT project developed a system capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar is a second generation, compact, real-time, air-cooled instrument. Based upon extensive on-ground characterization at flight ranges, the Flash Lidar was shown to be capable of imaging hazards from a slant range of 1 km with an 8 cm range precision and a range accuracy better than 35 cm, both at 1-σ. The Flash Lidar identified landing hazards as small as 30 cm from the maximum slant range which Morpheus could achieve (450 m); however, under certain wind conditions it was susceptible to scintillation arising from air heated by the rocket engine and to pre-triggering on a dust cloud created during launch and transported down-range by wind.

  19. 3D beam shape estimation based on distributed coaxial cable interferometric sensor

    NASA Astrophysics Data System (ADS)

    Cheng, Baokai; Zhu, Wenge; Liu, Jie; Yuan, Lei; Xiao, Hai

    2017-03-01

    We present a coaxial cable interferometer based distributed sensing system for 3D beam shape estimation. By making a series of reflectors on a coaxial cable, multiple Fabry-Perot cavities are created on it. Two cables are mounted on the beam at proper locations, and a vector network analyzer (VNA) is connected to them to obtain the complex reflection signal, which is used to calculate the strain distribution of the beam in the horizontal and vertical planes. With 6 GHz swept bandwidth on the VNA, the spatial resolution for distributed strain measurement is 0.1 m, and the sensitivity is 3.768 MHz/mε at the interferogram dip near 3.3 GHz. Using a displacement-strain transformation, the shape of the beam is reconstructed. With only two modified cables and a VNA, this system is easy to implement and manage. Compared to optical fiber based sensor systems, the coaxial cable sensors have the advantages of a large strain range and robustness, making this system suitable for structural health monitoring applications.
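
    The displacement-strain transformation is not spelled out in the abstract; a common route (assumed here: Euler-Bernoulli beam theory, cantilever boundary conditions, sensing cable at distance c from the neutral axis) reconstructs deflection by integrating curvature twice. Applying it in both the horizontal and vertical planes, as with the two cables above, yields the 3D beam shape.

      import numpy as np

      def deflection_from_strain(x, strain, c):
          """Reconstruct beam deflection w(x) from surface strain.

          Euler-Bernoulli assumption: curvature kappa = strain / c and
          w'' = kappa; cantilever boundary conditions w(0) = w'(0) = 0.
          x : increasing sensor positions (m); c : cable offset from the
          neutral axis (m).
          """
          kappa = np.asarray(strain, float) / c
          dx = np.diff(x)
          slope = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * dx)))
          w = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * dx)))
          return w

      x = np.linspace(0.0, 2.0, 21)        # matches the 0.1 m spatial resolution
      eps = 500e-6 * (1 - x / 2.0)         # toy linear strain profile (tip load)
      print(deflection_from_strain(x, eps, c=0.01)[-1])   # tip deflection (m)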

  20. An Effective 3D Shape Descriptor for Object Recognition with RGB-D Sensors

    PubMed Central

    Liu, Zhong; Zhao, Changchen; Wu, Xingming; Chen, Weihai

    2017-01-01

    RGB-D sensors have been widely used in various areas of computer vision and graphics. A good descriptor effectively improves the performance of recognition. This article further analyzes the recognition performance of shape features extracted from multi-modality source data using RGB-D sensors. A hybrid shape descriptor is proposed as a representation of objects for recognition. We first extracted five 2D shape features from contour-based images and five 3D shape features over point cloud data to capture the global and local shape characteristics of an object. The recognition performance was tested for category recognition and instance recognition. Experimental results show that the proposed shape descriptor outperforms several common global-to-global shape descriptors and is comparable to some partial-to-global shape descriptors that achieved the best accuracies in category and instance recognition. The contribution of the individual features and the computational complexity were also analyzed. The results indicate that the proposed shape features are strong cues for object recognition and can be combined with other features to boost accuracy. PMID:28245553

  1. A 3D Faraday Shield for Interdigitated Dielectrometry Sensors and Its Effect on Capacitance.

    PubMed

    Risos, Alex; Long, Nicholas; Hunze, Arvid; Gouws, Gideon

    2016-12-31

    Interdigitated dielectrometry sensors (IDS) are capacitive sensors investigated to precisely measure the relative permittivity (ϵr) of insulating liquids. Such liquids used in the power industry exhibit a change in ϵr as they degrade. The IDS ability to measure ϵr in-situ can potentially reduce maintenance, increase grid stability and improve safety. Noise from external electric field sources is a prominent issue with IDS. This paper investigates the novelty of applying a Faraday cage onto an IDS as a 3D shield to reduce this noise. This alters the spatially distributed electric field of an IDS, affecting its sensing properties. Therefore, the dependency of the sensor's signal on the distance to a shield above the IDS electrodes has been investigated experimentally and theoretically via a Green's function calculation and FEM. A criterion for the shield's distance s = s0 has been defined as the distance which gives a capacitance for the IDS equal to 1 − e−2 = 86.5% of its unshielded value. Theoretical calculations using a simplified geometry gave a constant value of s0/λ = 1.65, where λ is the IDS wavelength. In the experiment, values for s0 were found to be lower than predicted by theory, and the ratio s0/λ variable; a detailed analysis showed this to result from the specific spatial structure of the IDS. A subsequent measurement of a common insulating liquid with a nearby noise source demonstrates a considerable reduction in the standard deviation of the relative permittivity, from σunshielded = ±9.5% to σshielded = ±0.6%. The presented findings enhance our understanding of IDS with respect to the influence of a Faraday shield on the capacitance, the parasitic capacitances of the IDS and the impact of external noise on the measurement of ϵr.
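
    As a worked illustration of the s0 criterion (all capacitance values below are made up), s0 can be located by interpolating a measured capacitance-versus-shield-distance curve at 86.5% of the unshielded capacitance:

      import numpy as np

      # hypothetical shield-distance sweep: distance (mm) vs IDS capacitance (pF)
      s = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
      cap = np.array([3.1, 4.6, 6.8, 8.9, 10.4, 11.1, 11.3])
      c_unshielded = 11.3                      # capacitance with the shield far away

      c_target = (1 - np.exp(-2)) * c_unshielded   # the 86.5% criterion
      s0 = np.interp(c_target, cap, s)             # cap increases with s
      print(f"s0 = {s0:.2f} mm; s0/lambda = {s0 / 10.0:.2f} (lambda = 10 mm assumed)")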

  2. Coherent Doppler Wind Lidar Development at NASA Langley Research Center for NASA Space-Based 3-D Winds Mission

    NASA Technical Reports Server (NTRS)

    Singh, Upendra N.; Kavaya, Michael J.; Yu, Jirong; Koch, Grady J.

    2012-01-01

    We review the 20-plus years of pulsed transmit laser development at NASA Langley Research Center (LaRC) aimed at enabling a coherent Doppler wind lidar to measure global winds from earth orbit. We also briefly discuss the many other ingredients needed to prepare for this space mission.

  3. A study of integration methods of aerial imagery and LIDAR data for a high level of automation in 3D building reconstruction

    NASA Astrophysics Data System (ADS)

    Seo, Suyoung; Schenk, Toni F.

    2003-04-01

    This paper describes integration methods to increase the level of automation in building reconstruction. Aerial imagery has long been a major source in mapping and, in recent years, LIDAR data have become popular as another type of mapping resource. In terms of performance, aerial imagery can delineate object boundaries but leaves many parts of boundaries missing during feature extraction, while LIDAR data provide direct information about the heights of object surfaces but are limited in boundary localization. Efficient methods using the complementary characteristics of the two sensors are described to generate hypotheses of building boundaries and to localize the object features. Tree structures for grid contours of LIDAR data are used for the interpretation of contours. Buildings are recognized by analyzing the contour trees and modeled with surface patches from the LIDAR data. Hypotheses of building models are generated as combinations of wing models and verified by assessing the consistency between the corresponding data sets. Experiments using aerial imagery and laser data are presented. Our approach shows that building boundaries are successfully recognized through the contour analysis, and that the inference from contours and our modeling method using wing models increase the level of automation in the hypothesis generation/verification steps.

  4. Using a magnetite/thermoplastic composite in 3D printing of direct replacements for commercially available flow sensors

    NASA Astrophysics Data System (ADS)

    Leigh, S. J.; Purssell, C. P.; Billson, D. R.; Hutchins, D. A.

    2014-09-01

    Flow sensing is an essential technique required for a wide range of application environments, ranging from liquid dispensing to utility monitoring. A number of different methodologies and deployment strategies have been devised to cover the diverse range of potential application areas. The ability to easily create new bespoke sensors for new applications is therefore of natural interest. Fused deposition modelling is a 3D printing technology based upon the fabrication of 3D structures in a layer-by-layer fashion using extruded strands of molten thermoplastic. The technology was developed in the late 1980s but has only recently come to wider-scale attention outside of specialist applications and rapid prototyping, due to the advent of low-cost 3D printing platforms such as the RepRap. Due to the relatively low cost of the printers and feedstock materials, these printers are ideal candidates for wide-scale installation as localized manufacturing platforms to quickly produce replacement parts when components fail. One of the current limitations of the technology is the availability of functional printing materials to facilitate the production of complex functional 3D objects and devices beyond mere concept prototypes. This paper presents the formulation of a simple magnetite nanoparticle-loaded thermoplastic composite and its incorporation into a 3D printed flow sensor in order to mimic the function of a commercially available flow-sensing device. Using the multi-material printing capability of the 3D printer allows a much smaller amount of functional material to be used in comparison to the commercial flow sensor, by only placing the material where it is specifically required. Analysis of the printed sensor also revealed a much more linear response to increasing flow rate of water, showing that 3D printed devices have the potential to perform at least as well as a conventionally produced sensor.

  5. Practical issues in automatic 3D reconstruction and navigation applications using man-portable or vehicle-mounted sensors

    NASA Astrophysics Data System (ADS)

    Harris, Chris; Stennett, Carl

    2012-09-01

    The navigation of an autonomous robot vehicle and person localisation in the absence of GPS both rely on using local sensors to build a model of the 3D environment. Accomplishing such capabilities is not straightforward - there are many choices to be made of sensors and processing algorithms. Roke Manor Research has broad experience in this field, gained from building and characterising real-time systems that operate in the real world. This includes developing localisation for planetary and indoor rovers, model building of indoor and outdoor environments, and most recently, the building of texture-mapped 3D surface models.

  6. A 3D Faraday Shield for Interdigitated Dielectrometry Sensors and Its Effect on Capacitance

    PubMed Central

    Risos, Alex; Long, Nicholas; Hunze, Arvid; Gouws, Gideon

    2016-01-01

    Interdigitated dielectrometry sensors (IDS) are capacitive sensors investigated to precisely measure the relative permittivity (ϵr) of insulating liquids. Such liquids used in the power industry exhibit a change in ϵr as they degrade. The IDS ability to measure ϵr in-situ can potentially reduce maintenance, increase grid stability and improve safety. Noise from external electric field sources is a prominent issue with IDS. This paper investigates the novelty of applying a Faraday cage onto an IDS as a 3D shield to reduce this noise. This alters the spatially distributed electric field of an IDS, affecting its sensing properties. Therefore, the dependency of the sensor's signal on the distance to a shield above the IDS electrodes has been investigated experimentally and theoretically via a Green's function calculation and FEM. A criterion for the shield's distance s = s0 has been defined as the distance which gives a capacitance for the IDS equal to 1 − e−2 = 86.5% of its unshielded value. Theoretical calculations using a simplified geometry gave a constant value of s0/λ = 1.65, where λ is the IDS wavelength. In the experiment, values for s0 were found to be lower than predicted by theory, and the ratio s0/λ variable; a detailed analysis showed this to result from the specific spatial structure of the IDS. A subsequent measurement of a common insulating liquid with a nearby noise source demonstrates a considerable reduction in the standard deviation of the relative permittivity, from σunshielded = ±9.5% to σshielded = ±0.6%. The presented findings enhance our understanding of IDS with respect to the influence of a Faraday shield on the capacitance, the parasitic capacitances of the IDS and the impact of external noise on the measurement of ϵr. PMID:28042868

  7. 3D active edge silicon sensors: Device processing, yield and QA for the ATLAS-IBL production

    SciTech Connect

    Da Vià, Cinzia; Boscardil, Maurizio; Dalla Betta, GianFranco; Darbo, Giovanni; Fleta, Celeste; Gemme, Claudia; Giacomini, Gabriele; Grenier, Philippe; Grinstein, Sebastian; Hansen, Thor-Erik; Hasi, Jasmine; Kenney, Christopher; Kok, Angela; La Rosa, Alessandro; Micelli, Andrea; Parker, Sherwood; Pellegrini, Giulio; Pohl, David-Leon; Povoli, Marco; Vianello, Elisa; Zorzi, Nicola; Watts, S. J.

    2013-01-01

    3D silicon sensors, where plasma micromachining is used to etch deep narrow apertures in the silicon substrate to form the electrodes of PIN junctions, were successfully manufactured in facilities in Europe and the USA. In 2011 the technology underwent a qualification process to establish its maturity for a medium-scale production for the construction of a pixel layer for vertex detection, the Insertable B-Layer (IBL) at the CERN-LHC ATLAS experiment. The IBL collaboration, following the recommendation of the review panel, decided to complete the production of planar and 3D sensors and endorsed the proposal to build enough modules for a mixed IBL sensor scenario, where 25% of 3D modules populate the forward and backward part of each stave. The production of planar sensors will also allow coverage of 100% of the IBL, should that option be required. This paper describes the processing strategy which allowed successful 3D sensor production, some of the Quality Assurance (QA) tests performed during the pre-production phase, and the production yield to date.

  8. Estimability of thrusting trajectories in 3-D from a single passive sensor with unknown launch point

    NASA Astrophysics Data System (ADS)

    Yuan, Ting; Bar-Shalom, Yaakov; Willett, Peter; Ben-Dov, R.; Pollak, S.

    2013-09-01

    The problem of estimating the state of thrusting/ballistic endoatmospheric projectiles moving in 3-dimensional (3-D) space using 2-dimensional (2-D) measurements from a single passive sensor is investigated. The location of the projectile's launch point (LP) is unavailable, and this can significantly affect the performance of the estimation and of the impact point prediction (IPP). The LP altitude is therefore treated as an unknown target parameter. Estimability is analyzed based on the Fisher Information Matrix (FIM) of the target parameter vector, comprising the initial launch (azimuth and elevation) angles, drag coefficient, thrust and the LP altitude, which determine the trajectory according to a nonlinear motion equation. Full rank of the FIM ensures an estimable target parameter vector. The corresponding Cramér-Rao lower bound (CRLB) quantifies the estimation performance of a statistically efficient estimator and can be used for IPP. In view of the inherent nonlinearity of the problem, the maximum likelihood (ML) estimate of the target parameter vector is found using a mixed (partially grid-based) search approach. For a selected grid in the drag-coefficient-thrust-altitude subspace, the proposed parallelizable approach is shown to have reliable estimation performance and leads to a final IPP of high accuracy.
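
    The estimability argument invoked above is the standard one from estimation theory; as a reminder sketch (not a derivation from the paper):

      % Fisher Information Matrix and CRLB (standard result)
      \[
        J(\theta) = \mathbb{E}\!\left[\nabla_{\theta}\ln p(z \mid \theta)\,
                    \nabla_{\theta}\ln p(z \mid \theta)^{\mathsf{T}}\right],
        \qquad
        \operatorname{rank} J(\theta) = \dim\theta
        \;\Longrightarrow\;
        \operatorname{cov}(\hat{\theta}) \succeq J(\theta)^{-1} = \mathrm{CRLB},
      \]
      % with theta = (launch azimuth, launch elevation, drag coefficient,
      % thrust, LP altitude) in the problem above.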

  9. Prediction of L-band signal attenuation in forests using 3D vegetation structure from airborne LiDAR

    NASA Astrophysics Data System (ADS)

    Liu, Pang-Wei; Lee, Heezin; Judge, Jasmeet; Wright, William C.; Clint Slatton, K.

    2011-09-01

    In this study, we propose a novel method to predict microwave attenuation in forested areas by using airborne Light Detection and Ranging (LiDAR). While propagating through a vegetative medium, microwave signals suffer from reflection, absorption, and scattering within the vegetation, which cause signal attenuation and, consequently, deteriorate signal reception and information interpretation. A Fresnel zone enveloping the radio-frequency line-of-sight is applied to segment the vegetation structure occluding signal propagation. Return parameters and the spatial distribution of vegetation from the airborne LiDAR inside the Fresnel zones are used to weight the laser points and estimate directional vegetation structure. A Directional Vegetation Density (DVD) model linking the vegetation structure to L-band signal attenuation is developed through regression, using GPS observations in a mixed forest in North Central Florida. The DVD model achieves an R2 of 0.54, better than the currently used slab-based empirical models. Finally, the model is evaluated by comparison with GPS observations of signal attenuation: an overall root mean square error of 3.51 dB and a maximum absolute error of 9.38 dB are found. Sophisticated classification algorithms and full-waveform LiDAR systems may further improve the estimation of signal attenuation.
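
    A sketch of the kind of regression underlying such a model (a single illustrative predictor and made-up data; the actual DVD predictor weights LiDAR returns inside the Fresnel zone):

      import numpy as np

      rng = np.random.default_rng(1)
      density = rng.uniform(0, 5, 80)                     # directional vegetation density (a.u.)
      atten_db = 2.1 * density + rng.normal(0, 2.5, 80)   # toy L-band attenuation (dB)

      slope, intercept = np.polyfit(density, atten_db, 1) # linear fit
      pred = slope * density + intercept
      ss_res = np.sum((atten_db - pred) ** 2)
      ss_tot = np.sum((atten_db - atten_db.mean()) ** 2)
      print(f"R^2 = {1 - ss_res / ss_tot:.2f}, RMSE = {np.sqrt(ss_res / len(pred)):.2f} dB")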

  10. Correlation between the respiratory waveform measured using a respiratory sensor and 3D tumor motion in gated radiotherapy

    SciTech Connect

    Tsunashima, Yoshikazu . E-mail: tsunashima@pmrc.tsukuba.ac.jp; Sakae, Takeji; Shioyama, Yoshiyuki; Kagei, Kenji; Terunuma, Toshiyuki; Nohtomi, Akihiro; Akine, Yasuyuki

    2004-11-01

    Purpose: The purpose of this study is to investigate the correlation between the respiratory waveform measured using a respiratory sensor and three-dimensional (3D) tumor motion. Methods and materials: A laser displacement sensor (LDS: KEYENCE LB-300) that measures distance using infrared light was used as the respiratory sensor. This was placed such that its focus was in an area around the patient's navel. When the distance from the LDS to the body surface changes as the patient breathes, the displacement is detected as a respiratory waveform. To obtain the 3D tumor motion, a biplane digital radiography unit was used. For tumors in the lung, liver, and esophagus of 26 patients, the waveform was compared with the 3D tumor motion. The relationship between the respiratory waveform and the 3D tumor motion was analyzed by means of the Fourier transform and a cross-correlation function. Results: The respiratory waveform cycle agreed with that of the cranial-caudal and dorsal-ventral tumor motion. A phase shift observed between the respiratory waveform and the 3D tumor motion was principally in the range 0.0 to 0.3 s, regardless of the organ being measured, which means that the respiratory waveform does not always express the 3D tumor motion with fidelity. For this reason, the standard deviation of the tumor position in the expiration phase, as indicated by the respiratory waveform, was derived, which should be helpful in suggesting the internal margin required in the case of respiratory gated radiotherapy. Conclusion: Although obtained from only a few breathing cycles for each patient, the correlation between the respiratory waveform and the 3D tumor motion was evident in this study. If this relationship is analyzed carefully and an internal margin is applied, the accuracy and convenience of respiratory gated radiotherapy could be improved by use of the respiratory sensor. Thus, it is expected that this procedure will come into wider use.
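
    The cross-correlation analysis is standard; a minimal sketch with synthetic waveforms (a 0.2 s lag is injected to mimic the reported 0.0-0.3 s phase shifts; the sampling rate is an assumption):

      import numpy as np

      fs = 30.0                                     # sampling rate (Hz), assumed
      t = np.arange(0, 60, 1 / fs)
      resp = np.sin(2 * np.pi * 0.25 * t)           # respiratory waveform, 15 breaths/min
      tumor = np.sin(2 * np.pi * 0.25 * (t - 0.2))  # tumor motion lagging by 0.2 s

      r = resp - resp.mean()
      m = tumor - tumor.mean()
      xcorr = np.correlate(m, r, mode="full")       # peak index gives the lag
      lag_s = (np.argmax(xcorr) - (len(t) - 1)) / fs
      print(f"estimated phase shift: {lag_s:.2f} s")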

  11. 3D radiative transfer effects in multi-angle/multispectral radio-polarimetric signals from a mixture of clouds and aerosols viewed by a non-imaging sensor

    NASA Astrophysics Data System (ADS)

    Davis, Anthony B.; Garay, Michael J.; Xu, Feng; Qu, Zheng; Emde, Claudia

    2013-09-01

    When observing a spatially complex mix of aerosols and clouds in a single relatively large field-of-view, nature entangles their signals non-linearly through polarized radiation transport processes that unfold in the 3D position and direction spaces. In contrast, any practical forward model in a retrieval algorithm will use only 1D vector radiative transfer (vRT) in a linear mixing technique. We assess the difference between the observed and predicted signals using synthetic data from a high-fidelity 3D vRT model, with clouds generated using a Large Eddy Simulation model and an aerosol climatology. We find that this difference is signal, not noise, for the Aerosol Polarimetry Sensor (APS), an instrument developed by NASA. Moreover, the worst case scenario is also the most interesting case, namely, when the aerosol burden is large and hence has the most impact on the cloud microphysics and dynamics. Based on our findings, we formulate a mitigation strategy for these unresolved cloud adjacency effects, assuming that some spatial information is available about the structure of the clouds at higher resolution from "context" cameras, as was planned for NASA's ill-fated Glory mission that was to carry the APS but failed to reach orbit. Application to POLDER (POLarization and Directionality of Earth Reflectances) data from the period when PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) was in the A-train is briefly discussed.

  12. Assessment of Iterative Closest Point Registration Accuracy for Different Phantom Surfaces Captured by an Optical 3D Sensor in Radiotherapy

    PubMed Central

    Walke, Mathias; Gademann, Günther

    2017-01-01

    An optical 3D sensor provides an additional tool for the verification of correct patient positioning on a Tomotherapy treatment machine. The patient's position in the actual treatment is compared with the intended position defined in treatment planning. A commercially available optical 3D sensor measures parts of the body surface and estimates the deviation from the desired position without markers. The registration precision of the built-in algorithm and of selected ICP (iterative closest point) algorithms is investigated on surface data of specially designed phantoms captured by the optical 3D sensor for predefined shifts of the treatment table. A rigid body transform is compared with the actual displacement to check registration reliability within predefined limits. The curvature type of the investigated phantom bodies has a strong influence on the registration result, which is more critical for surfaces of low curvature. We investigated the registration accuracy of the optical 3D sensor for the chosen phantoms and compared the results with selected unconstrained ICP algorithms. Safe registration within the clinical limits is only possible for uniquely shaped surface regions, but error metrics based on surface normals improve translational registration. Large registration errors clearly hint at setup deviations, whereas small values do not guarantee correct positioning. PMID:28163773
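
    The compared ICP variants are not reproduced here; a minimal, unconstrained point-to-point ICP (brute-force matching, closed-form SVD transform per iteration; not the sensor's in-built algorithm) can be sketched as follows:

      import numpy as np

      def icp_point_to_point(src, dst, iters=30):
          """Align src (N x 3) to dst (M x 3); returns rotation R and translation t.

          Nearest neighbours by brute force; the per-iteration rigid transform
          is the closed-form SVD (Kabsch) solution.
          """
          R, t = np.eye(3), np.zeros(3)
          cur = src.copy()
          for _ in range(iters):
              d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
              match = dst[d2.argmin(axis=1)]            # closest dst point per src point
              mu_s, mu_d = cur.mean(0), match.mean(0)
              H = (cur - mu_s).T @ (match - mu_d)       # cross-covariance
              U, _, Vt = np.linalg.svd(H)
              D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
              R_i = Vt.T @ D @ U.T                      # proper rotation (det = +1)
              t_i = mu_d - R_i @ mu_s
              cur = cur @ R_i.T + t_i
              R, t = R_i @ R, R_i @ t + t_i             # accumulate the transform
          return R, t

    On phantoms of low curvature such a point-to-point scheme can slide along the surface, which is consistent with the finding above that error metrics based on surface normals improve the translational part of the registration.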

  13. Development of 3D carbon nanotube interdigitated finger electrodes on polymer substrate for flexible capacitive sensor application.

    PubMed

    Hu, Chih-Fan; Wang, Jhih-Yu; Liu, Yu-Chia; Tsai, Ming-Han; Fang, Weileun

    2013-11-08

    This study reports a novel approach to the implementation of 3D carbon nanotube (CNT) interdigitated finger electrodes on flexible polymer, and the detection of strain, bending curvature, tactile force and proximity distance are demonstrated. The merits of the presented CNT-based flexible sensor are as follows: (1) the silicon substrate is patterned to enable the formation of 3D vertically aligned CNTs on the substrate surface; (2) polymer molding on the silicon substrate with 3D CNTs is further employed to transfer the 3D CNTs to the flexible polymer substrate; (3) the CNT-polymer composite (~70 μm in height) is employed to form interdigitated finger electrodes to increase the sensing area and initial capacitance; (4) other structures such as electrical routings, resistors and mechanical supporters are also available using the CNT-polymer composite. The preliminary fabrication results demonstrate a flexible capacitive sensor with 50 μm high CNT interdigitated electrodes on a poly-dimethylsiloxane substrate. The tests show that the typical capacitance change is several dozens of fF and the gauge factor is in the range of 3.44-4.88 for strain and bending curvature measurement; the sensitivity of the tactile sensor is 1.11% N-1; a proximity distance near 2 mm away from the sensor can be detected.
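
    For reference, the quoted gauge factor follows the usual definition of relative capacitance change per unit strain (the numbers below are made up, not measurements from the paper):

      def gauge_factor(delta_c_fF, c0_fF, strain):
          """GF = (dC / C0) / strain."""
          return (delta_c_fF / c0_fF) / strain

      # e.g. a 40 fF change on a 2.4 pF base capacitance at 0.5% strain
      print(gauge_factor(40.0, 2400.0, 0.005))   # ~3.3, near the reported 3.44-4.88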

  14. Development of 3D carbon nanotube interdigitated finger electrodes on polymer substrate for flexible capacitive sensor application

    NASA Astrophysics Data System (ADS)

    Hu, Chih-Fan; Wang, Jhih-Yu; Liu, Yu-Chia; Tsai, Ming-Han; Fang, Weileun

    2013-11-01

    This study reports a novel approach to the implementation of 3D carbon nanotube (CNT) interdigitated finger electrodes on flexible polymer, and the detection of strain, bending curvature, tactile force and proximity distance are demonstrated. The merits of the presented CNT-based flexible sensor are as follows: (1) the silicon substrate is patterned to enable the formation of 3D vertically aligned CNTs on the substrate surface; (2) polymer molding on the silicon substrate with 3D CNTs is further employed to transfer the 3D CNTs to the flexible polymer substrate; (3) the CNT-polymer composite (~70 μm in height) is employed to form interdigitated finger electrodes to increase the sensing area and initial capacitance; (4) other structures such as electrical routings, resistors and mechanical supporters are also available using the CNT-polymer composite. The preliminary fabrication results demonstrate a flexible capacitive sensor with 50 μm high CNT interdigitated electrodes on a poly-dimethylsiloxane substrate. The tests show that the typical capacitance change is several dozens of fF and the gauge factor is in the range of 3.44-4.88 for strain and bending curvature measurement; the sensitivity of the tactile sensor is 1.11% N-1; a proximity distance near 2 mm away from the sensor can be detected.

  15. Using LiDAR Data to Measure the 3D Green Biomass of Beijing Urban Forest in China

    PubMed Central

    He, Cheng; Convertino, Matteo; Feng, Zhongke; Zhang, Siyu

    2013-01-01

    The purpose of this paper is to find a new approach to measure the 3D green biomass of urban forest and to verify its precision. In this study, the 3D green biomass was acquired on the basis of a remote sensing inversion model in which each standing tree was first scanned by a Terrestrial Laser Scanner to capture its point cloud data; the point cloud was then opened in a digital mapping data acquisition system to obtain elevations in an independent coordinate system, and finally the captured individual volumes were associated with SPOT5 (Système Probatoire d'Observation de la Terre) remote sensing imagery by means of tools such as SPSS (Statistical Product and Service Solutions), GIS (Geographic Information System), RS (Remote Sensing) and spatial analysis software (FARO SCENE and Geomagic Studio 11). The results showed that the 3D green biomass of the Beijing urban forest was 399.1295 million m3, of which coniferous accounted for 28.7871 million m3 and broad-leaf for 370.3424 million m3. The accuracy of the 3D green biomass was over 85% in comparison with the values from 235 field samples collected in a typical sampling campaign. This suggests that the precision of the 3D green biomass estimate based on SPOT5 imagery meets requirements. It represents an improvement over the conventional method because it not only provides a basis for evaluating the greening indices of Beijing, but also introduces a new technique to assess 3D green biomass in other cities. PMID:24146792

  16. Using LiDAR data to measure the 3D green biomass of Beijing urban forest in China.

    PubMed

    He, Cheng; Convertino, Matteo; Feng, Zhongke; Zhang, Siyu

    2013-01-01

    The purpose of this paper is to find a new approach to measure the 3D green biomass of urban forest and to verify its precision. In this study, the 3D green biomass was acquired on the basis of a remote sensing inversion model in which each standing tree was first scanned by a Terrestrial Laser Scanner to capture its point cloud data; the point cloud was then opened in a digital mapping data acquisition system to obtain elevations in an independent coordinate system, and finally the captured individual volumes were associated with SPOT5 (Système Probatoire d'Observation de la Terre) remote sensing imagery by means of tools such as SPSS (Statistical Product and Service Solutions), GIS (Geographic Information System), RS (Remote Sensing) and spatial analysis software (FARO SCENE and Geomagic Studio 11). The results showed that the 3D green biomass of the Beijing urban forest was 399.1295 million m3, of which coniferous accounted for 28.7871 million m3 and broad-leaf for 370.3424 million m3. The accuracy of the 3D green biomass was over 85% in comparison with the values from 235 field samples collected in a typical sampling campaign. This suggests that the precision of the 3D green biomass estimate based on SPOT5 imagery meets requirements. It represents an improvement over the conventional method because it not only provides a basis for evaluating the greening indices of Beijing, but also introduces a new technique to assess 3D green biomass in other cities.

  17. An Inspire-Konform 3d Building Model of Bavaria Using Cadastre Information, LIDAR and Image Matching

    NASA Astrophysics Data System (ADS)

    Roschlaub, R.; Batscheider, J.

    2016-06-01

    The federal states of Germany endeavour to create a harmonized 3D building data set based on a common application schema (the AdV-CityGML-Profile). The Bavarian Agency for Digitisation, High-Speed Internet and Surveying has launched a statewide 3D Building Model with standardized roof shapes for all 8.1 million buildings in Bavaria. For the acquisition of the 3D Building Model, LiDAR data or data from image matching are used as a basis, in addition to the building ground plans of the official cadastral map. The data management of the 3D Building Model is carried out in a central database using the nationwide standardized CityGML profile of the AdV. The update of the 3D Building Model for new buildings is done by terrestrial building measurements within the maintenance process of the cadastre and from image matching. In a joint research project, the Bavarian State Agency for Surveying and Geoinformation and the TUM Chair of Geoinformatics transformed an AdV-CityGML-Profile-based test data set of Bavarian LoD2 building models into an INSPIRE-compliant schema. For the purpose of such a transformation, the AdV provides a data specification, a test plan for 3D Building Models and a mapping table. The research project examined whether the transformation rules defined in the mapping table were unambiguous and sufficient for implementing a transformation of LoD2 data based on the AdV-CityGML-Profile into the INSPIRE schema. The proof of concept was carried out by transforming production data of the Bavarian 3D Building Model in LoD2 into the INSPIRE BU schema. In order to assure the quality of the data to be transformed, the tests specified in the AdV test plan for 3D Building Models were carried out. The AdV mapping table was checked for completeness and correctness, and amendments were made accordingly.

  18. Parallel robot for micro assembly with integrated innovative optical 3D-sensor

    NASA Astrophysics Data System (ADS)

    Hesselbach, Juergen; Ispas, Diana; Pokar, Gero; Soetebier, Sven; Tutsch, Rainer

    2002-10-01

    Recent advances in the fields of MEMS and MOEMS often require precise assembly of very small parts with an accuracy of a few microns. In order to meet this demand, a new approach using a robot based on parallel mechanisms in combination with a novel 3D vision system has been chosen. The planar parallel robot structure with 2 DOF provides a high resolution in the XY-plane. It carries two additional serial axes for linear and rotational movement in/about the z direction. In order to achieve high precision as well as good dynamic capabilities, the drive concept for the parallel (main) axes incorporates air bearings in combination with linear electric servo motors. High-accuracy position feedback is provided by optical encoders with a resolution of 0.1 μm. To allow for visualization and visual control of assembly processes, a camera module fits into the hollow tool head. It consists of a miniature CCD camera and a light source. In addition, a modular gripper support is integrated into the tool head. To increase the accuracy, a control loop based on an optoelectronic sensor will be implemented. As a result of an in-depth analysis of different approaches, a photogrammetric system using a single camera and special beam-splitting optics was chosen. A pattern of elliptical marks is applied to the surfaces of the workpiece and the gripper. Using a model-based recognition algorithm, the image processing software identifies the gripper and the workpiece and determines their relative position. A deviation vector is calculated and fed into the robot control to guide the gripper.

  19. Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras

    PubMed Central

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin

    2016-01-01

    The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in fast measurement of large-scale objects or high-speed moving objects. The innovative line scan technology opens up new potentialities owing to the ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained. PMID:27869731

  20. Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras.

    PubMed

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin

    2016-11-18

    The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in fast measurement of large-scale objects or high-speed moving objects. The innovative line scan technology opens up new potentialities owing to the ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained.

  1. Concept for an airborne real-time ISR system with multi-sensor 3D data acquisition

    NASA Astrophysics Data System (ADS)

    Haraké, Laura; Schilling, Hendrik; Blohm, Christian; Hillemann, Markus; Lenz, Andreas; Becker, Merlin; Keskin, Göksu; Middelmann, Wolfgang

    2016-10-01

    In modern aerial Intelligence, Surveillance and Reconnaissance (ISR) operations, precise 3D information becomes indispensable for increased situational awareness. In particular, object geometries represented by texturized digital surface models constitute an alternative to a purely radiometric evaluation. Besides the level of detail of the 3D data, its timely availability matters for making quick decisions. Expanding the concept of our preceding remote sensing platform developed together with OHB System AG and Geosystems GmbH, in this paper we present an airborne multi-sensor system based on a motor glider equipped with two wing pods; one carries the sensors, whereas the second pod downlinks sensor data to a connected ground control station using the Aerial Reconnaissance Data System of OHB. An uplink is created to receive remote commands from the manned mobile ground control station, which for its part processes and evaluates incoming sensor data. The system allows the integration of efficient image processing and machine learning algorithms. In this work, we introduce a near real-time approach for the acquisition of a texturized 3D data model with the help of an airborne laser scanner and four high-resolution multi-spectral (RGB, near-infrared) cameras. Image sequences from nadir and off-nadir cameras permit the generation of dense point clouds and the texturing of building facades as well. The ground control station distributes processed 3D data over a linked geoinformation system with web capabilities to off-site decision-makers. As the accurate acquisition of sensor data requires boresight-calibrated sensors, we additionally examine the first steps of a camera calibration workflow.

  2. 3D sensor placement strategy using the full-range pheromone ant colony system

    NASA Astrophysics Data System (ADS)

    Shuo, Feng; Jingqing, Jia

    2016-07-01

    An optimized sensor placement strategy is extremely beneficial for the safety and cost-effectiveness of structural health monitoring (SHM) systems. The sensors must be placed such that important dynamic information is obtained while the number of sensors is minimized. Common practice is to select individual sensor directions by one of several 1D sensor methods and then to place triaxial sensors in these directions for monitoring; however, this may lead to non-optimal placement of many triaxial sensors. In this paper, a new method called FRPACS is proposed, based on the ant colony system (ACS), to solve the optimal placement of triaxial sensors. The triaxial sensors are placed as single units in an optimal fashion. The new method is then compared with other algorithms using the Dalian North Bridge as a test case. The computational precision and iteration efficiency of FRPACS are greatly improved compared with the original ACS and the EFI method.
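
    FRPACS itself is not reproduced here; as a heavily simplified sketch of how an ant colony system can select k sensor locations (a random per-node fitness stands in for the modal-information criterion, and the pheromone update follows the basic ACS recipe):

      import numpy as np

      rng = np.random.default_rng(2)
      n_nodes, k, n_ants, n_iter = 40, 6, 20, 100
      fitness = rng.uniform(0.1, 1.0, n_nodes)     # toy per-node information value
      tau = np.ones(n_nodes)                       # pheromone per candidate node

      best_set, best_score = None, -np.inf
      for _ in range(n_iter):
          for _ant in range(n_ants):
              p = tau * fitness                    # desirability = pheromone x heuristic
              pick = rng.choice(n_nodes, size=k, replace=False, p=p / p.sum())
              score = fitness[pick].sum()          # placeholder placement objective
              if score > best_score:
                  best_set, best_score = pick, score
          tau *= 0.9                               # evaporation
          tau[best_set] += 0.1 * best_score        # reinforce the best placement
      print(sorted(best_set), round(best_score, 3))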

  3. Retrieving Leaf Area Index and Foliage Profiles Through Voxelized 3-D Forest Reconstruction Using Terrestrial Full-Waveform and Dual-Wavelength Echidna Lidars

    NASA Astrophysics Data System (ADS)

    Strahler, A. H.; Yang, X.; Li, Z.; Schaaf, C.; Wang, Z.; Yao, T.; Zhao, F.; Saenz, E.; Paynter, I.; Douglas, E. S.; Chakrabarti, S.; Cook, T.; Martel, J.; Howe, G.; Hewawasam, K.; Jupp, D.; Culvenor, D.; Newnham, G.; Lowell, J.

    2013-12-01

    Measuring and monitoring canopy biophysical parameters provide a baseline for carbon flux studies related to deforestation and disturbance in forest ecosystems. Terrestrial full-waveform lidar systems, such as the Echidna Validation Instrument (EVI) and its successor, the Dual-Wavelength Echidna Lidar (DWEL), offer rapid, accurate, and automated characterization of forest structure. In this study, we apply a methodology based on voxelized 3-D forest reconstructions built from EVI and DWEL scans to directly estimate two important biophysical parameters: Leaf Area Index (LAI) and foliage profile. Gap probability, apparent reflectance, and volume associated with the laser pulse footprint at the observed range are assigned to the foliage scattering events in the reconstructed point cloud. Leaf angle distribution is accommodated with a simple model based on gap probability with zenith angle as observed in individual scans of the stand. The DWEL instrument, which emits simultaneous laser pulses at 1064 nm and 1548 nm wavelengths, provides a better capability to separate trunk and branch hits from foliage hits due to water absorption by leaf cellular contents at the 1548 nm band. We generate voxel datasets of foliage points using a classification methodology based solely on pulse shape for scans collected by EVI, and on pulse shape and band ratio for scans collected by DWEL. We then compare the LAIs and foliage profiles retrieved from the voxel datasets of the two instruments at the same red fir site in Sierra National Forest, CA, with each other and with observations from airborne and field measurements. This study further tests the voxelization methodology in obtaining LAI and foliage profiles that are largely free of clumping effects and returns from woody materials in the canopy. These retrievals can provide a valuable 'ground-truth' validation data source for large-footprint spaceborne or airborne lidar retrievals.
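
    The retrieval rests on the standard gap-probability (Beer-Lambert) relation; a minimal sketch converting a gap-probability profile measured at zenith angle theta into a cumulative LAI and foliage profile (a spherical leaf angle distribution with G = 0.5 is assumed, and the profile below is synthetic):

      import numpy as np

      def cumulative_lai(p_gap, theta_deg, G=0.5):
          """L(z) = -cos(theta) * ln(Pgap(theta, z)) / G."""
          theta = np.radians(theta_deg)
          return -np.cos(theta) * np.log(np.clip(p_gap, 1e-6, 1.0)) / G

      z = np.linspace(0, 30, 31)                 # height bins (m)
      p_gap = np.exp(-0.08 * z)                  # synthetic gap-probability profile
      lai_z = cumulative_lai(p_gap, theta_deg=30)
      foliage_profile = np.gradient(lai_z, z)    # dL/dz: foliage area per metre
      print(f"total LAI ~ {lai_z[-1]:.2f}")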

  4. Increasing the effective aperture of a detector and enlarging the receiving field of view in a 3D imaging lidar system through hexagonal prism beam splitting.

    PubMed

    Lee, Xiaobao; Wang, Xiaoyi; Cui, Tianxiang; Wang, Chunhui; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-07-11

    The detector in a highly accurate, high-definition scanning 3D imaging lidar system requires both high frequency bandwidth and a sufficiently large photosensitive area. To overcome the small photosensitive area of an existing indium gallium arsenide detector of a given frequency bandwidth, this study proposes a method for increasing the receiving field of view (FOV) and enlarging the effective photosensitive aperture of such a detector through hexagonal prism beam splitting. The principle and construction of hexagonal prism beam splitting are also discussed in this research. Accordingly, a receiving optical system with two hexagonal prisms is presented and the beam-splitting effect of a simulation experiment is analyzed. Using this novel method, the receiving optical system's FOV can be effectively improved up to ±5°, and the effective photosensitive aperture of the detector is increased from 0.5 mm to 1.5 mm.

  5. Development of the clone seedlings handling system using 3D-sensor and force control gripper

    NASA Astrophysics Data System (ADS)

    Hojo, Hirotaka; Takarada, Hiroshi; Hiroyasu, Takahisa; Hata, Seiji

    2005-12-01

    Clone seedlings have an unstable form and are hard to handle. In order to transplant clone seedlings automatically, the functions of 3D shape recognition and force control of grippers are indispensable. We have introduced a new handling technology which combines 3D measurement using the relative stereo method with a gripping method based on gripping-stroke control of a high-elasticity forceps structure. In this gripping method, the gripping force is controlled according to the shoot diameter, which is measured by 3D measurement with the relative stereo method. An experimental clone seedling transplant system using the new handling technique has been demonstrated.

  6. Diborane Electrode Response in 3D Silicon Sensors for the CMS and ATLAS Experiments

    SciTech Connect

    Brown, Emily R.; /Reed Coll. /SLAC

    2011-06-22

    Unusually high leakage currents have been measured in test wafers produced by the manufacturer SINTEF containing 3D pixel silicon sensor chips designed for the ATLAS (A Toroidal LHC ApparatuS) and CMS (Compact Muon Solenoid) experiments. Previous data showed the CMS chips as having a lower leakage current after processing than the ATLAS chips. Proposed causes of the leakage currents include the dicing process and the use of copper in bump bonding, with differences in packaging and handling between the ATLAS and CMS chips suggested as the source of the disparity between the two. Data taken at SLAC from a SINTEF wafer with electrodes doped with diborane and filled with polysilicon, measured before dicing and with indium bumps added, contradict this earlier picture: here the ATLAS chips (FEI3s) showed a lower leakage current than the CMS chips. Because this wafer was never diced and no copper was used in its bump bonding, the data also argue against dicing and copper bump bonding as the main causes of leakage current. The wafer thus shows completely different behavior from the others, arguing against differences in packaging and handling, or the intrinsic geometry of the two designs, as the cause of the disparity between the leakage currents of the chips. Even though the leakage current in the FEI3s is lower overall, it is still significant enough to cause problems, and its source remains mostly unknown. To complement this information, more data will be taken on the efficiency of the individual electrodes of the ATLAS and CMS chips on this wafer. The electrodes will be shot perpendicularly with a laser to test the efficiency across the width of the electrode. A mask with pinholes has been made to focus the laser to a beam smaller than the

  7. A nano-microstructured artificial-hair-cell-type sensor based on topologically graded 3D carbon nanotube bundles

    NASA Astrophysics Data System (ADS)

    Yilmazoglu, O.; Yadav, S.; Cicek, D.; Schneider, J. J.

    2016-09-01

    A design for a unique artificial-hair-cell-type sensor (AHCTS) based entirely on 3D-structured, vertically aligned carbon nanotube (CNT) bundles is introduced. Standard microfabrication techniques were used for the straightforward micro-nano integration of vertically aligned carbon nanotube arrays composed of low-layer multi-walled CNTs (two to six layers). The mechanical properties of the carbon nanotube bundles were characterized in depth with regard to various substrates and CNT morphology, e.g. bundle height. The CNT bundles display excellent flexibility and mechanical stability under lateral bending, showing high tear resistance. The integrated 3D CNT sensor detects three-dimensional forces via the deflection or compression of a central CNT bundle, which changes the contact resistance to the shorter neighboring bundles. The complete sensor system can be fabricated in a single chemical vapor deposition (CVD) process step. Moreover, sophisticated external contacts to the surroundings are not necessary for signal detection, and no additional sensors or external bias are required. This simplifies the miniaturization and integration of these nanostructures in future microsystem set-ups. The new nanostructured sensor system exhibits an average sensitivity of 2100 ppm μm-1 (relative resistance change per micron of CNT bundle tip deflection) in the linear regime. Furthermore, experiments have shown highly sensitive piezoresistive behavior, with an electrical resistance decrease of up to ~11% at 50 μm mechanical deflection. The detection threshold is as low as 1 μm of deflection, comparing favorably with the tactile hair sensors of insects, which have typical thresholds on the order of 30-50 μm. The AHCTS can easily be adapted and applied as a flow, tactile or acceleration sensor as well as a vibration sensor. Potential applications of the latter might come up in artificial cochlear systems. In

  8. Moving past normal force: capturing and classifying shear motion using 3D sensors.

    PubMed

    Kwan, Calvin; Salud, Lawrence; Ononye, Chiagozie; Zhao, Shenshen; Pugh, Carla

    2012-01-01

    In our previous research, we used clinical breast examination models instrumented with direct (normal) force sensors for training and assessment. A weakness of the normal force sensors is their inability to delineate, in detail, all of the performance measures we wish to understand. This study incorporated newly developed shear force sensors to extend a framework for quantifying hands-on performance.

  9. Multipath estimation in urban environments from joint GNSS receivers and LiDAR sensors.

    PubMed

    Ali, Khurram; Chen, Xin; Dovis, Fabio; De Castro, David; Fernández, Antonio J

    2012-10-30

    In this paper, multipath error on Global Navigation Satellite System (GNSS) signals in urban environments is characterized with the help of Light Detection and Ranging (LiDAR) measurements. For this purpose, LiDAR equipment and a Global Positioning System (GPS) receiver implementing a multipath-estimating architecture were used to collect data in an urban environment. This paper demonstrates how GPS and LiDAR measurements can be jointly used to model the environment and obtain robust receivers. Multipath amplitude and delay are estimated by means of LiDAR feature extraction and a multipath mitigation architecture. The results show the feasibility of integrating the information provided by LiDAR sensors and GNSS receivers for multipath mitigation.

  10. Multipath Estimation in Urban Environments from Joint GNSS Receivers and LiDAR Sensors

    PubMed Central

    Ali, Khurram; Chen, Xin; Dovis, Fabio; De Castro, David; Fernández, Antonio J.

    2012-01-01

    In this paper, multipath error on Global Navigation Satellite System (GNSS) signals in urban environments is characterized with the help of Light Detection and Ranging (LiDAR) measurements. For this purpose, LiDAR equipment and a Global Positioning System (GPS) receiver implementing a multipath-estimating architecture were used to collect data in an urban environment. This paper demonstrates how GPS and LiDAR measurements can be jointly used to model the environment and obtain robust receivers. Multipath amplitude and delay are estimated by means of LiDAR feature extraction and a multipath mitigation architecture. The results show the feasibility of integrating the information provided by LiDAR sensors and GNSS receivers for multipath mitigation. PMID:23202177

  11. Combination of TLS Point Clouds and 3D Data from Kinect v2 Sensor to Complete Indoor Models

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    The combination of data coming from multiple sensors is increasingly applied in remote sensing (multi-sensor imagery), but also in cultural heritage or robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented. Their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor make it possible to reach a better level of detail in the final model, an adapted acquisition protocol may also provide several benefits, for example time savings. The paper aims at analyzing whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.
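    The paper does not spell out its registration algorithm, so the following is only a minimal point-to-point ICP sketch of how Kinect patches might be aligned into the TLS cloud; all names are hypothetical and the convergence settings are arbitrary.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iters=30, tol=1e-6):
        """Rigidly align `source` (N,3) to `target` (M,3) point-to-point."""
        src = source.copy()
        tree = cKDTree(target)
        R_total, t_total, prev_err = np.eye(3), np.zeros(3), np.inf
        for _ in range(iters):
            dist, idx = tree.query(src)            # nearest TLS point each
            matched = target[idx]
            mu_s, mu_t = src.mean(0), matched.mean(0)
            H = (src - mu_s).T @ (matched - mu_t)  # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                     # Kabsch rotation
            t = mu_t - R @ mu_s
            src = src @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
            err = dist.mean()
            if abs(prev_err - err) < tol:
                break
            prev_err = err
        return R_total, t_total, src
    ```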

  12. Flying triangulation - A motion-robust optical 3D sensor for the real-time shape acquisition of complex objects

    NASA Astrophysics Data System (ADS)

    Willomitzer, Florian; Ettl, Svenja; Arold, Oliver; Häusler, Gerd

    2013-05-01

    The three-dimensional shape acquisition of objects has become more and more important in recent years. There are several well-established methods which already yield impressive results, but under quite common conditions such as object movement or complex shapes, most methods become unsatisfactory; 3D shape acquisition thus remains a difficult and non-trivial task. We present our measurement principle "Flying Triangulation", which enables motion-robust 3D acquisition of complex-shaped object surfaces with a freely movable handheld sensor. Since "Flying Triangulation" is scalable, a whole "zoo" of sensors for different object sizes is presented. In conclusion, an overview of current and future fields of investigation is given.

  13. Development of lidar sensor for cloud-based measurements during convective conditions

    NASA Astrophysics Data System (ADS)

    Vishnu, R.; Bhavani Kumar, Y.; Rao, T. Narayana; Nair, Anish Kumar M.; Jayaraman, A.

    2016-05-01

    Atmospheric convection is a natural phenomenon associated with heat transport. Convection is strong during daylight periods and vigorous in the summer months, when severe ground heating associated with strong winds is experienced. The tropics are considered the source regions for strong convection, and the formation of thunderstorm clouds is common during this period. Locating the cloud base and its associated dynamics is important for understanding the influence of convection on the atmosphere. Lidars are sensitive to Mie scattering and are more suitable instruments for locating clouds in the atmosphere than instruments utilizing the radio-frequency spectrum; thunderstorm clouds are composed of hydrometeors that strongly scatter laser light. Recently, a lidar technique was developed at the National Atmospheric Research Laboratory (NARL), a Department of Space (DOS) unit located at Gadanki near Tirupati. The technique employs slant-path operation and provides high-resolution measurements of the cloud-base location in real time. The laser-based remote sensing technique measures the atmosphere every second at 7.5 m range resolution, and this high-resolution data permits assessment of updrafts at the cloud base. The lidar also provides the real-time convective boundary-layer height, using aerosols as tracers of atmospheric dynamics. The developed lidar sensor is planned to be upgraded with a scanning facility to study cloud dynamics in the spatial direction. In this presentation, we describe the lidar sensor technology and its use for high-resolution cloud-base measurements during convective conditions over the lidar site at Gadanki.
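    As a hedged sketch of how such high-resolution profiles can be turned into a cloud-base estimate, the snippet below flags the first strong positive gradient in a backscatter profile sampled at 7.5 m bins; the threshold, near-field cutoff and synthetic profile are illustrative, not NARL's actual processing.

    ```python
    import numpy as np

    def cloud_base(rng, backscatter, grad_thresh=1.0, min_range=120.0):
        """Range of the first strong positive gradient in the backscatter
        profile, taken here as the cloud-base height."""
        b = np.asarray(backscatter, dtype=float)
        grad = np.gradient(b, rng)
        valid = rng >= min_range              # skip near-field overlap region
        idx = np.where(valid & (grad > grad_thresh))[0]
        return rng[idx[0]] if idx.size else None

    # 7.5 m range bins, synthetic profile with a cloud near 1.5 km
    rng = np.arange(0.0, 5000.0, 7.5)
    profile = np.exp(-rng / 2000.0)
    profile[rng > 1500] += 50.0 * np.exp(-(rng[rng > 1500] - 1500) / 100.0)
    print(cloud_base(rng, profile))
    ```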

  14. 3D Lasers Increase Efficiency, Safety of Moving Machines

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Canadian company Neptec Design Group Ltd. developed its Laser Camera System, used by shuttles to render 3D maps of their hulls for assessing potential damage. Using NASA funding, the firm incorporated LiDAR technology and created the TriDAR 3D sensor. Its commercial arm, Neptec Technologies Corp., has sold the technology to Orbital Sciences, which uses it to guide its Cygnus spacecraft during rendezvous and dock operations at the International Space Station.

  15. High-Precision 3D Geolocation of Persistent Scatterers with one Single-Epoch GCP and Lidar DSM Data

    NASA Astrophysics Data System (ADS)

    Yang, Mengshi; Dheenathayalan, Prabu; Chang, Ling; Wang, Jinhu; Lindenbergh, Roderik R. C.; Liao, Mingsheng; Hanssen, Ramon F.

    2016-08-01

    In persistent scatterer (PS) interferometry, the relatively poor 3D geolocalization precision of the measurement points (the scatterers) is still a major concern. It makes it difficult to attribute the deformation measurements unambiguously to (elements of) physical objects. Ground control points (GCPs), such as corner reflectors or transponders, can be used to improve geolocalization, but only in the range-azimuth domain. Here, we present a method which uses only one GCP, visible in only one single radar acquisition, in combination with digital surface model (DSM) data to improve the geolocation precision, and to achieve an object snap by projecting the scatterer position to its intersection with the DSM, in the metric defined by the covariance matrix (i.e. error ellipsoid) of every scatterer.
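    One plausible rendering of this covariance-metric "object snap" is sketched below: among candidate DSM points, choose the one at minimum Mahalanobis distance under the scatterer's error ellipsoid. The brute-force search and all names are illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def snap_to_dsm(p, cov, dsm_points):
        """Move a scatterer to the DSM point closest in the metric defined
        by its error ellipsoid (inverse covariance)."""
        d = dsm_points - p                        # (N, 3) offsets
        w = np.linalg.inv(cov)
        md2 = np.einsum('ni,ij,nj->n', d, w, d)   # squared Mahalanobis dist.
        return dsm_points[np.argmin(md2)]
    ```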

  16. A novel method for assessing the 3-D orientation accuracy of inertial/magnetic sensors.

    PubMed

    Faber, Gert S; Chang, Chien-Chi; Rizun, Peter; Dennerlein, Jack T

    2013-10-18

    A novel method for assessing the accuracy of inertial/magnetic sensors is presented. The method, referred to as the "residual matrix" method, is advantageous because it decouples the sensor's error with respect to Earth's gravity vector (attitude residual error: pitch and roll) from the sensor's error with respect to magnetic north (heading residual error), while remaining insensitive to singularity problems when the second Euler rotation is close to ±90°. As a demonstration, the accuracy of an inertial/magnetic sensor mounted to a participant's forearm was evaluated during a reaching task in a laboratory. Sensor orientation was measured internally (by the inertial/magnetic sensor) and externally using an optoelectronic measurement system with a marker cluster rigidly attached to the sensor's enclosure. Roll, pitch and heading residuals were calculated using the proposed novel method, as well as using a common orientation assessment method where the residuals are defined as the difference between the Euler angles measured by the inertial sensor and those measured by the optoelectronic system. Using the proposed residual matrix method, the roll and pitch residuals remained less than 1° and, as expected, no statistically significant difference between these two measures of attitude accuracy was found; the heading residuals were significantly larger than the attitude residuals but remained below 2°. Using the direct Euler angle comparison method, the residuals were in general larger due to singularity issues, and the expected significant difference between inertial/magnetic sensor attitude and heading accuracy was not present.
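    A rough sketch of the decoupling idea (not necessarily the authors' exact formulation): form the residual rotation between the optoelectronic reference and the sensor, then read the attitude error from how it tilts the gravity axis and the heading error from how it rotates the horizontal axes.

    ```python
    import numpy as np

    def residual_errors(R_imu, R_ref):
        """Split the orientation residual between an inertial/magnetic
        sensor and an optical reference into attitude and heading parts."""
        R_res = R_ref.T @ R_imu                   # residual rotation matrix
        z = np.array([0.0, 0.0, 1.0])             # Earth gravity axis
        tilt = np.degrees(np.arccos(np.clip(R_res @ z @ z, -1.0, 1.0)))
        # heading: in-plane rotation of the horizontal x-axis
        x_err = R_res @ np.array([1.0, 0.0, 0.0])
        heading = np.degrees(np.arctan2(x_err[1], x_err[0]))
        return tilt, heading
    ```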

  17. Optimized design of a LED-array-based TOF range imaging sensor for fast 3-D shape measurement

    NASA Astrophysics Data System (ADS)

    Wang, Huanqin; Wang, Ying; Xu, Jun; He, Deyong; Zhao, Tianpeng; Ming, Hai; Kong, Deyi

    2011-06-01

    A LED-array-based range imaging sensor using time-of-flight (TOF) distance measurement was developed to capture the depth information of three-dimensional (3-D) objects. By time-division electronic scanning of the LED heterodyne phase-shift TOF range finders in the array, range images are obtained quickly without any mechanical moving parts. The design of the LED-array-based range imaging sensor is described in detail, and a range imaging theoretical model based on photoelectric signal processing is built, which shows a mutual constraint among the measurement time per depth pixel, the bandwidth of the receiver, and the sensor's signal-to-noise ratio (SNR). In order to improve key sensor parameters such as range resolution and measurement speed simultaneously, several optimizations were made to the proposed range imaging sensor, including choosing proper parameters for the filters in the receiver and adopting a special-structure feedback automatic gain control (AGC) circuit with short response time. The final experimental results showed that the optimized sensor could acquire range images at a rate of 10 frames per second with a range resolution as high as ±2 mm in the range of 50-1200 mm. The essential advantages of the proposed range imaging sensor are its simple structure, high range resolution, short measurement time and low cost, which is sufficient for many robotic and industrial automation applications.
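    The heterodyne phase-shift principle underlying each range finder maps a measured phase delay to distance. A minimal rendering, with an assumed (not the paper's) 20 MHz modulation frequency:

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def range_from_phase(delta_phi_rad, f_mod_hz):
        """Distance from the phase shift of an amplitude-modulated signal:
        d = c * dphi / (4 * pi * f_mod), ambiguous modulo c / (2 * f_mod)."""
        return C * delta_phi_rad / (4.0 * np.pi * f_mod_hz)

    f_mod = 20e6                       # hypothetical 20 MHz modulation
    ambiguity = C / (2.0 * f_mod)      # unambiguous range, ~7.5 m here
    print(range_from_phase(np.pi / 2, f_mod), ambiguity)
    ```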

  18. Development of 3D Force Sensors for Nanopositioning and Nanomeasuring Machine

    PubMed Central

    Tibrewala, Arti; Hofmann, Norbert; Phataralaoha, Anurak; Jäger, Gerd; Büttgenbach, Stephanus

    2009-01-01

    In this contribution, we report on different miniaturized, bulk-micromachined three-axis piezoresistive force sensors for a nanopositioning and nanomeasuring machine (NPMM). Various boss membrane structures, such as one-boss full/cross, five-boss full/cross and swastika membranes, were used as the basic structure of the force sensors. All designs have 16 p-type diffused piezoresistors on the surface of the membrane. Sensitivities in the x, y and z directions are measured. The stiffness ratio in the horizontal to vertical direction is simulated and measured for each design, and the effect of the stylus length on this H:V stiffness ratio is studied. Minimum and maximum deflection and resonance frequency are measured for all designs. The sensors were placed in a nanopositioning and nanomeasuring machine and single-point measurements were performed for all designs. Lastly, an application of the sensor is shown, in which the dimensions of a cube are measured. PMID:22412308

  19. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

    PubMed Central

    Park, Jae Byung; Lee, Seung Hun; Lee, Il Jae

    2009-01-01

    In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle for carrying the plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For the lug pose acquisition, four laser lines are projected on both the lug and the plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination changes, the vertical threshold, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out in two stages: top view alignment and side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor. PMID:22400007
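    For the line-detection chain, a minimal OpenCV sketch is given below; the threshold values and Hough parameters are placeholders, and the paper's separated Hough transform stage is not reproduced.

    ```python
    import cv2
    import numpy as np

    def detect_laser_lines(gray, intensity_thresh=200):
        """Extract projected laser lines: threshold the bright stripes,
        thin them via edge detection, then fit segments with the
        probabilistic Hough transform."""
        _, mask = cv2.threshold(gray, intensity_thresh, 255,
                                cv2.THRESH_BINARY)
        edges = cv2.Canny(mask, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                threshold=80, minLineLength=40, maxLineGap=5)
        return [] if lines is None else [l[0] for l in lines]  # (x1,y1,x2,y2)
    ```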

  20. Detecting and Analyzing Corrosion Spots on the Hull of Large Marine Vessels Using Colored 3D LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Aijazi, A. K.; Malaterre, L.; Tazir, M. L.; Trassoudaine, L.; Checchin, P.

    2016-06-01

    This work presents a new method that automatically detects and analyzes surface defects, such as corrosion spots of different shapes and sizes, on large ship hulls. In the proposed method, several scans from different positions and viewing angles around the ship are registered together to form a complete 3D point cloud. The R, G, B values associated with each scan, obtained with the help of an integrated camera, are converted into HSV space to separate the illumination-invariant color component from the intensity. Using this color component, surface defects such as corrosion spots of different shapes and sizes are automatically detected within a selected zone, using two different methods depending upon the level of corrosion/defects: the first relies on a histogram-based distribution, whereas the second uses adaptive thresholds. The detected corrosion spots are then analyzed and quantified to help better plan and estimate the cost of repair and maintenance. Results are evaluated on real data using different standard evaluation metrics to demonstrate the efficacy as well as the technical strength of the proposed method.
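    A toy version of the color-based detection step might look as follows; the hue and saturation thresholds are invented for illustration and stand in for the paper's histogram-based and adaptive-threshold methods.

    ```python
    import cv2
    import numpy as np

    def corrosion_mask(rgb_points):
        """Flag rust-like points in an (N, 3) array of per-point RGB values.
        Thresholds below are illustrative, not the paper's values."""
        px = rgb_points.reshape(-1, 1, 3).astype(np.uint8)
        hsv = cv2.cvtColor(px, cv2.COLOR_RGB2HSV).reshape(-1, 3)
        h, s, v = hsv[:, 0], hsv[:, 1], hsv[:, 2]
        # OpenCV hue spans 0-179; reddish-brown rust sits roughly in 0-20
        return (h <= 20) & (s >= 60) & (v >= 40)
    ```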

  1. A model and simulation to predict the performance of angle-angle-range 3D flash ladar imaging sensor systems

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Odhner, Jefferson E.; Russo, Leonard E.; McDaniel, Robert V.

    2004-11-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR imaging sensor systems. 3D flash LADAR is the latest evolution of laser radar systems and is unique in its ability to provide high-resolution LADAR imagery from a single laser pulse, rather than constructing an image from multiple pulses as with conventional scanning LADAR systems. However, accurate methods to model and simulate the performance of these 3D LADAR systems have been lacking, relying upon either single-pixel LADAR performance or extrapolation from passive-detection FPA performance. The model and simulation developed and reported here is expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment, this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) atmospheric transmission; 4) atmospheric backscatter; 5) atmospheric turbulence; 6) obscurants, and; 7) obscurant path length. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling the non-uniformity of each individual pixel in the array. Here, noise sources are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function, resulting in a different gain for each pixel within the array. In this manner very accurate performance is determined pixel-by-pixel. Model outputs are in the form of 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array. Other outputs include power distribution from a target, signal-to-noise vs. range, probability of
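    The per-pixel gain sampling described above is ordinary inverse-transform sampling of a normal gain distribution; a compact numpy rendering with illustrative (made-up) gain statistics:

    ```python
    import numpy as np
    from scipy.stats import norm

    def sample_pixel_gains(shape, mean_gain=100.0, sigma=5.0, seed=0):
        """Draw a per-pixel gain map: one uniform random number per pixel
        is pushed through the inverse of the normal gain CDF."""
        rng = np.random.default_rng(seed)
        u = rng.uniform(size=shape)            # one random number per pixel
        return norm.ppf(u, loc=mean_gain, scale=sigma)

    gains = sample_pixel_gains((128, 128))
    print(gains.mean(), gains.std())           # ~100 and ~5, as specified
    ```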

  2. A model and simulation to predict the performance of angle-angle-range 3D flash LADAR imaging sensor systems

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Odhner, Jefferson E.; Russo, Leonard E.; McDaniel, Robert V.

    2005-10-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR imaging sensor systems. 3D flash LADAR is the latest evolution of laser radar systems and is unique in its ability to provide high-resolution LADAR imagery from a single laser pulse, rather than constructing an image from multiple pulses as with conventional scanning LADAR systems. However, accurate methods to model and simulate the performance of these 3D LADAR systems have been lacking, relying upon either single-pixel LADAR performance or extrapolation from passive-detection FPA performance. The model and simulation developed and reported here is expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment, this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) atmospheric transmission; 4) atmospheric backscatter; 5) atmospheric turbulence; 6) obscurants, and; 7) obscurant path length. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling the non-uniformity of each individual pixel in the array. Here, noise sources are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function, resulting in a different gain for each pixel within the array. In this manner very accurate performance is determined pixel-by-pixel. Model outputs are in the form of 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array. Other outputs include power distribution from a target, signal-to-noise vs. range, probability of

  3. Intensifying the response of distributed optical fibre sensors using 2D and 3D image restoration

    PubMed Central

    Soto, Marcelo A.; Ramírez, Jaime A.; Thévenaz, Luc

    2016-01-01

    Distributed optical fibre sensors possess the unique capability of measuring the spatial and temporal map of environmental quantities, which can be of great interest for several field applications. Although existing methods for performance enhancement have enabled important progress in the field, they do not take full advantage of all the information present in the measured data, still leaving room for substantial improvement over the state-of-the-art. Here we propose and experimentally demonstrate an approach for performance enhancement that exploits the high level of similarity and redundancy contained in the multidimensional information measured by distributed fibre sensors. Exploiting conventional image and video processing, an unprecedented boost in signal-to-noise ratio and measurement contrast is experimentally demonstrated. The method can be applied to any white-noise-limited distributed fibre sensor and can remarkably provide a 100-fold improvement in sensor performance with no hardware modification. PMID:26927698
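    As a toy stand-in for this idea (the paper evaluates full 2D and 3D image and video restoration, not just this), a simple 2D median filter applied to a position-versus-time data matrix already exploits the redundancy along both axes:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    # rows = fibre position, columns = time; the redundancy across both
    # axes is what 2D image filtering exploits
    rng = np.random.default_rng(1)
    clean = np.sin(np.linspace(0, 6 * np.pi, 500))[:, None] * np.ones((500, 200))
    noisy = clean + rng.normal(0.0, 0.5, clean.shape)

    denoised = median_filter(noisy, size=(5, 5))   # simple 2D filter
    snr_gain = np.std(noisy - clean) / np.std(denoised - clean)
    print(f"SNR improvement ~ {snr_gain:.1f}x")
    ```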

  4. An orientation measurement method based on Hall-effect sensors for permanent magnet spherical actuators with 3D magnet array.

    PubMed

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-10-24

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment present in conventional approaches. A curved-surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space; comparison with the conventional modeling method shows that it helps to improve model accuracy. The Hall-effect sensors are distributed around the rotor with its PM poles to detect the flux density at different points, so that the rotor orientation can be computed from the measured values and the analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the measured magnetic flux density. The experimental results show that the proposed method measures the rotor orientation precisely, and that the measurement accuracy is improved by the novel 3D magnet array. The results could be used for real-time motion control of PM spherical actuators.
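    A hedged sketch of the exponential curve-fitting step: the model form, parameter values and synthetic data below are illustrative, not the paper's parameterization.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def flux_model(theta, a, b, c):
        """Illustrative exponential approximation of flux density versus
        angular distance from a pole axis (rad)."""
        return a * np.exp(-b * theta) + c

    theta = np.linspace(0.0, 1.2, 50)                # rad from pole axis
    b_meas = flux_model(theta, 0.35, 2.0, 0.01)
    b_meas += np.random.default_rng(2).normal(0, 0.003, theta.size)

    params, _ = curve_fit(flux_model, theta, b_meas, p0=(0.3, 1.5, 0.0))
    print(params)   # recovered (a, b, c)
    ```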

  5. 3D-calibration of three- and four-sensor hot-film probes based on collocated sonic using neural networks

    NASA Astrophysics Data System (ADS)

    Kit, Eliezer; Liberzon, Dan

    2016-09-01

    High-resolution measurements of turbulence in the atmospheric boundary layer (ABL) are critical to the understanding of physical processes and the parameterization of important quantities, such as the turbulent kinetic energy dissipation. The low spatio-temporal resolution of standard atmospheric instruments, sonic anemometers and LIDARs, limits their suitability for fine-scale measurements of the ABL. The use of miniature hot-films is an alternative technique, although such probes require frequent calibration, which is logistically untenable in field setups. Accurate and truthful calibration is crucial for multi-hot-film applications in atmospheric studies, because the ability to conduct calibration in situ ultimately determines the quality of the turbulence measurements. Kit et al (2010 J. Atmos. Ocean. Technol. 27 23-41) described a novel methodology for calibration of hot-film probes using a collocated sonic anemometer combined with a neural network (NN) approach. An important step in the algorithm is the generation of a calibration set for NN training by an appropriate low-pass filtering of the high-resolution voltages measured by the hot-film sensors and the low-resolution velocities acquired by the sonic. In Kit et al (2010 J. Atmos. Ocean. Technol. 27 23-41), Kit and Grits (2011 J. Atmos. Ocean. Technol. 28 104-10) and Vitkin et al (2014 Meas. Sci. Technol. 25 75801), the authors reported on the successful use of this approach for in situ calibration, but also on the method's limitations and restricted range of applicability. In their earlier work, a jet facility and a probe comprising two orthogonal x-hot-films were used for calibration and for full dataset generation. In the current work, a comprehensive laboratory study of 3D-calibration of two multi-hot-film probes (triple- and four-sensor) using a grid flow was conducted. The probes were embedded in a collocated sonic, and their relative pitch and yaw orientation to the mean flow was changed by means of motorized
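    A schematic of the calibration-set generation and NN training might look as follows; the filter order, network size and time-alignment step are guesses for illustration, not the authors' settings.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.neural_network import MLPRegressor

    def train_insitu_calibration(voltages_hf, velocities_sonic,
                                 fs_hf, fs_sonic):
        """Low-pass the fast hot-film voltages down to the sonic bandwidth,
        then train a network mapping filtered voltages -> sonic velocities."""
        b, a = butter(4, fs_sonic / (fs_hf / 2.0))   # normalized cutoff
        v_lp = filtfilt(b, a, voltages_hf, axis=0)
        step = int(fs_hf // fs_sonic)
        X = v_lp[::step]                             # crude time alignment
        nn = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000)
        nn.fit(X, velocities_sonic[: len(X)])
        return nn
    ```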

  6. Self-Assembly of 3-D Multifunctional Ceramic Composites for Photonics and Sensors

    DTIC Science & Technology

    2011-05-02

    Specific applications of discoveries supported by our earlier MURI work include new IR photonic-crystal-based filters for multispectral imaging and multidimensional architectures for functional optical devices, including templated growth of single-crystal GaAs 3D photonic crystals.

  7. DVE flight test results of a sensor enhanced 3D conformal pilot support system

    NASA Astrophysics Data System (ADS)

    Münsterer, Thomas; Völschow, Philipp; Singer, Bernhard; Strobel, Michael; Kramper, Patrick

    2015-06-01

    The paper presents results and findings from flight tests of the Airbus Defence and Space DVE system SFERION performed at Yuma Proving Grounds. During the flight tests, ladar information was fused with a priori database knowledge in real time, and 3D conformal symbology was generated for display on a helmet-mounted display (HMD). The test flights included low-level flights as well as numerous brownout landings.

  8. Lidar Sensor Performance in Closed-Loop Flight Testing of the Morpheus Rocket-Propelled Lander to a Lunar-Like Hazard Field

    NASA Technical Reports Server (NTRS)

    Roback, Vincent E.; Pierrottet, Diego F.; Amzajerdian, Farzin; Barnes, Bruce W.; Hines, Glenn D.; Petway, Larry B.; Brewster, Paul F.; Kempton, Kevin S.; Bulyshev, Alexander E.

    2015-01-01

    For the first time, a suite of three lidar sensors has been used in flight to scan a lunar-like hazard field, identify a safe landing site, and, in concert with an experimental Guidance, Navigation, and Control (GN&C) system, guide the Morpheus autonomous, rocket-propelled, free-flying test bed to a safe landing on the hazard field. The lidar sensors and GN&C system are part of the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project, which has been seeking to develop a system capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The 3-D imaging flash lidar is a second-generation, compact, real-time, air-cooled instrument developed from a number of cutting-edge components from industry and NASA, and is used as part of the ALHAT Hazard Detection System (HDS) to scan the hazard field and build a 3-D Digital Elevation Map (DEM) in near-real time for identifying safe sites. The flash lidar is capable of identifying a 30 cm hazard from a slant range of 1 km with its 8 cm range precision at 1 sigma. The flash lidar is also used in Hazard Relative Navigation (HRN) to provide position updates down to a 250 m slant range to the ALHAT navigation filter as it guides Morpheus to the safe site. The Doppler Lidar system has been developed within NASA to provide velocity measurements with an accuracy of 0.2 cm/s and range measurements with an accuracy of 17 cm, both from a maximum range of 2,200 m down to a minimum range of several meters above the ground. The Doppler Lidar's measurements are fed into the ALHAT navigation filter to provide lander guidance to the safe site. The Laser Altimeter, also developed within NASA, provides range measurements with an accuracy of 5 cm from a maximum operational range of 30 km down to 1 m and, being a separate sensor from the flash lidar, can provide range along a separate vector. The Laser Altimeter measurements are also

  9. Separating Leaves from Trunks and Branches with Dual-Wavelength Terrestrial Lidar Scanning: Improving Canopy Structure Characterization in 3-D Space

    NASA Astrophysics Data System (ADS)

    Li, Z.; Strahler, A. H.; Schaaf, C.; Howe, G.; Martel, J.; Hewawasam, K.; Douglas, E. S.; Chakrabarti, S.; Cook, T.; Paynter, I.; Saenz, E.; Wang, Z.; Yang, X.; Yao, T.; Zhao, F.; Woodcock, C.; Jupp, D.; Schaefer, M.; Culvenor, D.; Newnham, G.; Lowell, J.

    2013-12-01

    Leaf area index (LAI) is an important parameter characterizing forest structure, used in models regulating the exchange of carbon, water and energy between the land and the atmosphere. However, optical methods in common use cannot separate leaf area from the area of upper trunks and branches, and thus retrieve only plant area index (PAI), which is adjusted to LAI using an appropriate empirical woody-to-total index. An additional problem is that the angular distributions of leaf normals and normals to woody surfaces are quite different, and thus leafy and woody components project quite different areas with varying zenith angle of view. This effect also causes error in LAI retrieval using optical methods. Full-waveform scans at both the NIR (1064 nm) and SWIR (1548 nm) wavelengths from the new terrestrial lidar, the Dual-Wavelength Echidna Lidar (DWEL), which pulses at both wavelengths simultaneously, easily separate returns of leaves from trunks and branches in 3-D space. In DWEL scans collected at two different forest sites, Sierra National Forest in June 2013 and Brisbane Karawatha Forest Park in July 2013, the power returned from leaves is similar to the power returned from trunks/branches at the NIR wavelength, whereas the power returned from leaves is much lower (only about half as large) at the SWIR wavelength. At the SWIR wavelength, the leaf scattering is strongly attenuated by liquid water absorption. Normalized difference index (NDI) images from the waveform mean intensity at the two wavelengths demonstrate a clear contrast between leaves and trunks/branches. The attached image shows NDI from a part of a scan of an open red fir stand in the Sierra National Forest. Leaves appear light, while other objects are darker. Dual-wavelength point clouds generated from the full waveform data show weaker returns from leaves than from trunks/branches. A simple threshold classification of the NDI value of each scattering point readily separates leaves from trunks and
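    The threshold classification on the normalized difference index (NDI) is straightforward to sketch; the threshold value of 0.2 below is illustrative only.

    ```python
    import numpy as np

    def classify_leaf_points(i_nir, i_swir, ndi_thresh=0.2):
        """Label points as leaf (True) or trunk/branch (False) from
        dual-wavelength intensities via a normalized difference index."""
        i_nir = np.asarray(i_nir, dtype=float)
        i_swir = np.asarray(i_swir, dtype=float)
        ndi = (i_nir - i_swir) / (i_nir + i_swir + 1e-12)
        return ndi > ndi_thresh, ndi

    # leaves return roughly half the SWIR power of wood at similar NIR
    is_leaf, ndi = classify_leaf_points([1.0, 1.0], [0.5, 0.95])
    print(is_leaf)   # [ True False ]
    ```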

  10. Discriminating Crop, Weeds and Soil Surface with a Terrestrial LIDAR Sensor

    PubMed Central

    Andújar, Dionisio; Rueda-Ayala, Victor; Moreno, Hugo; Rosell-Polo, Joan Ramón; Escolà, Alexandre; Valero, Constantino; Gerhards, Roland; Fernández-Quintanilla, César; Dorado, José; Griepentrog, Hans-Werner

    2013-01-01

    In this study, the accuracy and performance of a light detection and ranging (LIDAR) sensor for vegetation, using distance and reflection measurements to detect and discriminate maize plants and weeds from the soil surface, were evaluated. The study continues previous work carried out in a maize field in Spain with a LIDAR sensor using exclusively one index, the height profile; the current system uses a combination of the height and reflection indexes. The experiment was carried out in a maize field at growth stage 12-14, at 16 different locations selected to represent the widest possible density of the weeds Echinochloa crus-galli (L.) P.Beauv., Lamium purpureum L., Galium aparine L. and Veronica persica Poir. A terrestrial LIDAR sensor was mounted on a tripod pointing at the inter-row area, with its horizontal axis and field of view pointing vertically downwards to the ground, scanning a vertical plane with the potential presence of vegetation. Immediately after the LIDAR data acquisition (distance and reflection measurements), actual plant heights were estimated using an appropriate methodology; for that purpose, digital images were taken of each sampled area. The data showed a high correlation between LIDAR-measured heights and actual plant heights (R2 = 0.75). Binary logistic regression between weed presence/absence and the sensor readings (LIDAR height and reflection values) was used to validate the accuracy of the sensor. This permitted the discrimination of vegetation from the ground with an accuracy of up to 95%. In addition, a Canonical Discriminant Analysis (CDA) was able to discriminate mostly between soil and vegetation and, to a far lesser extent, between crop and weeds. The studied methodology arises as a good system for weed detection, which, in combination with other principles such as vision-based technologies, could improve the efficiency and accuracy of herbicide spraying. PMID:24172283
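    An illustrative sketch of the binary logistic regression step, using synthetic, made-up readings rather than the field data:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    # features per LIDAR reading: [height_mm, reflection]; label 1 = vegetation
    soil = np.column_stack([rng.normal(0, 5, 200), rng.normal(30, 5, 200)])
    veg = np.column_stack([rng.normal(120, 40, 200), rng.normal(60, 10, 200)])
    X = np.vstack([soil, veg])
    y = np.r_[np.zeros(200), np.ones(200)]

    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[150.0, 55.0]]), clf.score(X, y))
    ```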

  11. Discriminating crop, weeds and soil surface with a terrestrial LIDAR sensor.

    PubMed

    Andújar, Dionisio; Rueda-Ayala, Victor; Moreno, Hugo; Rosell-Polo, Joan Ramón; Escolà, Alexandre; Valero, Constantino; Gerhards, Roland; Fernández-Quintanilla, César; Dorado, José; Griepentrog, Hans-Werner

    2013-10-29

    In this study, the accuracy and performance of a light detection and ranging (LIDAR) sensor for vegetation, using distance and reflection measurements to detect and discriminate maize plants and weeds from the soil surface, were evaluated. The study continues previous work carried out in a maize field in Spain with a LIDAR sensor using exclusively one index, the height profile; the current system uses a combination of the height and reflection indexes. The experiment was carried out in a maize field at growth stage 12-14, at 16 different locations selected to represent the widest possible density of the weeds Echinochloa crus-galli (L.) P.Beauv., Lamium purpureum L., Galium aparine L. and Veronica persica Poir. A terrestrial LIDAR sensor was mounted on a tripod pointing at the inter-row area, with its horizontal axis and field of view pointing vertically downwards to the ground, scanning a vertical plane with the potential presence of vegetation. Immediately after the LIDAR data acquisition (distance and reflection measurements), actual plant heights were estimated using an appropriate methodology; for that purpose, digital images were taken of each sampled area. The data showed a high correlation between LIDAR-measured heights and actual plant heights (R2 = 0.75). Binary logistic regression between weed presence/absence and the sensor readings (LIDAR height and reflection values) was used to validate the accuracy of the sensor. This permitted the discrimination of vegetation from the ground with an accuracy of up to 95%. In addition, a Canonical Discriminant Analysis (CDA) was able to discriminate mostly between soil and vegetation and, to a far lesser extent, between crop and weeds. The studied methodology arises as a good system for weed detection, which, in combination with other principles such as vision-based technologies, could improve the efficiency and accuracy of herbicide spraying.

  12. A Robust MEMS Based Multi-Component Sensor for 3D Borehole Seismic Arrays

    SciTech Connect

    Paulsson Geophysical Services

    2008-03-31

    The objective of this project was to develop, prototype and test a robust multi-component sensor combining fiber-optic and MEMS technology for use in a borehole seismic array. The use of such FOMEMS-based sensors allows a dramatic increase in the number of sensors that can be deployed simultaneously in a borehole seismic array. Denser sampling of the seismic wave field can therefore be afforded, which in turn allows P-waves as well as S-waves to be sampled efficiently and adequately for high-resolution imaging purposes. Design, packaging and integration of the multi-component sensors and deployment system targeted a maximum operating temperature of 350-400 F and a maximum pressure of 15,000-25,000 psi, allowing operation under conditions encountered in deep gas reservoirs. The project aimed at using existing pieces of deployment technology as well as MEMS and fiber-optic technology. A sensor design and analysis study was carried out, and a laboratory prototype of an interrogator for a robust borehole seismic array system was assembled and validated.

  13. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the measuring systems used are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process becomes possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolution scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolution David objects are automatically assigned to their corresponding Kinect objects by the use of surface feature histograms and SVM classification. The corresponding objects are fitted using an ICP implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255

  14. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    PubMed

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-04-24

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the measuring systems used are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process becomes possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolution scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolution David objects are automatically assigned to their corresponding Kinect objects by the use of surface feature histograms and SVM classification. The corresponding objects are fitted using an ICP implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.

  15. Integrating Dynamic Data and Sensors with Semantic 3D City Models in the Context of Smart Cities

    NASA Astrophysics Data System (ADS)

    Chaturvedi, K.; Kolbe, T. H.

    2016-10-01

    Smart cities provide effective integration of human, physical and digital systems operating in the built environment. Advances in city and landscape models, sensor web technologies and simulation methods play a significant role in city analyses, improving citizens' quality of life and the governance of cities. Semantic 3D city models can provide substantial benefits and can become a central information backbone for smart city infrastructures. However, current-generation semantic 3D city models are static in nature and do not support dynamic properties and sensor observations. In this paper, we propose a new concept called a Dynamizer, which allows highly dynamic data to be represented and provides a method for injecting dynamic variations of city object properties into the static representation. The approach also directly supports modeling complex patterns based on statistics and general rules, as well as real-time sensor observations. The concept is implemented as an Application Domain Extension for the CityGML standard, but it could also be applied to other GML-based application schemas, including the European INSPIRE data themes and national standards for topography and cadasters like the British Ordnance Survey MasterMap or the German cadaster standard ALKIS.

  16. Use of a Terrestrial LIDAR Sensor for Drift Detection in Vineyard Spraying

    PubMed Central

    Gil, Emilio; Llorens, Jordi; Llop, Jordi; Fàbregas, Xavier; Gallart, Montserrat

    2013-01-01

    The use of a scanning Light Detection and Ranging (LIDAR) system to characterize drift during pesticide application is described. The LIDAR system is compared with an ad hoc test bench used to quantify the amount of spray liquid moving beyond the canopy. Two sprayers were used during the field test: a conventional mist blower at two air flow rates (27,507 and 34,959 m3·h−1) equipped with two different nozzle types (conventional and air-injection), and a multi-row sprayer with individually oriented air outlets. A simple model based on a linear function was used to predict spray deposit from LIDAR measurements and to compare with the deposits measured on the test bench. Results showed differences in the effectiveness of the LIDAR sensor depending on the sprayed droplet size (nozzle type) and air intensity. For the conventional mist blower at the low air flow rate, the sensor detects a greater number of drift drops, obtaining a better correlation (r = 0.91; p < 0.01) than in the case of coarse droplets or the high air flow rate. In the case of the multi-row sprayer, drift deposition on the test bench was very poor. In general, the LIDAR sensor presents an interesting and easy technique for establishing the potential drift of a specific spray situation and an adequate alternative for the evaluation of drift potential. PMID:23282583
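    The linear prediction model is simple to sketch; the paired values below are invented for illustration and are not the paper's measurements.

    ```python
    import numpy as np

    # paired observations: integrated LIDAR drift signal vs. bench deposit
    lidar = np.array([0.5, 1.1, 1.9, 2.6, 3.4, 4.2])
    bench = np.array([0.9, 2.2, 3.8, 5.1, 7.0, 8.3])

    slope, intercept = np.polyfit(lidar, bench, 1)   # deposit ~ a*lidar + b
    r = np.corrcoef(lidar, bench)[0, 1]
    print(f"deposit = {slope:.2f}*lidar + {intercept:.2f}, r = {r:.2f}")
    ```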

  17. 3D measurements in conventional X-ray imaging with RGB-D sensors.

    PubMed

    Albiol, Francisco; Corbi, Alberto; Albiol, Alberto

    2017-04-01

    A method for deriving 3D internal information in conventional X-ray settings is presented. It is based on the combination of a pair of radiographs from a patient and avoids the use of X-ray-opaque fiducials and external reference structures. To achieve this goal, we augment an ordinary X-ray device with a consumer RGB-D camera. The patient's rotation around the craniocaudal axis is tracked relative to this camera thanks to the depth information provided and the application of a modern surface-mapping algorithm. The measured spatial information is then translated to the reference frame of the X-ray imaging system. By using the intrinsic parameters of the diagnostic equipment, epipolar geometry, and X-ray images of the patient at different angles, 3D internal positions can be obtained. Both the RGB-D and X-ray instruments are first geometrically calibrated to find their joint spatial transformation. The proposed method is applied to three rotating phantoms. The first two consist of an anthropomorphic head and a torso, which are filled with spherical lead bearings at precise locations. The third one is made of simple foam and has metal needles of several known lengths embedded in it. The results show that it is possible to resolve anatomical positions and lengths with a millimetric level of precision. With the proposed approach, internal 3D reconstructed coordinates and distances can be provided to the physician. It also contributes to reducing the invasiveness of ordinary X-ray environments and can replace other types of clinical explorations that are mainly aimed at measuring or geometrically relating elements that are present inside the patient's body.
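    Once both projection matrices are known from calibration, the 3D positions follow from standard two-view triangulation; a minimal linear (DLT) sketch, which may differ from the authors' exact pipeline:

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one 3D point from two views.
        P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v)."""
        A = np.stack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)       # null vector of A is the point
        X = Vt[-1]
        return X[:3] / X[3]               # dehomogenize
    ```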

  18. 3D-information fusion from very high resolution satellite sensors

    NASA Astrophysics Data System (ADS)

    Krauss, T.; d'Angelo, P.; Kuschk, G.; Tian, J.; Partovi, T.

    2015-04-01

    In this paper we show the pre-processing and the potential for environmental applications of very high resolution (VHR) satellite stereo imagery like that from WorldView-2 or Pléiades, with ground sampling distances (GSD) of half a metre to a metre. To process such data, first a dense digital surface model (DSM) has to be generated. Afterwards, a digital terrain model (DTM) representing the ground and a so-called normalized digital elevation model (nDEM) representing off-ground objects are derived from it. Combining these elevation-based data with a spectral classification allows detection and extraction of objects from the satellite scenes. Besides object extraction, the DSM and DTM can also be used directly for simulation and monitoring of environmental issues. Examples are the simulation of floods, building-volume and population estimation, simulation of noise from roads, wave propagation for cellphones, wind and light for estimating renewable energy sources, 3D change detection, earthquake preparedness and crisis relief, urban development and the sprawl of informal settlements, and much more. Outside urban areas, volume information brings literally a new dimension to Earth observation tasks, such as volume estimation of forests and illegal logging, volume of (illegal) open-pit mining activities, estimation of flooding or tsunami risks, dike planning, etc. In this paper we present the preprocessing from the original level-1 satellite data to digital surface models (DSMs), corresponding VHR ortho images and derived digital terrain models (DTMs). From these components we show how monitoring and decision-fusion-based 3D change detection can be realized using different acquisitions. The results are analyzed and assessed to derive quality parameters for the presented method. Finally, the usability of 3D information fusion from VHR satellite imagery is discussed and evaluated.
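    The nDEM derivation itself is a simple grid difference; a minimal sketch with made-up grid values:

    ```python
    import numpy as np

    def ndem(dsm, dtm, min_height=2.0):
        """Normalized DEM: per-cell off-ground height plus a mask of
        elevated objects (buildings, trees) at least `min_height` m
        above the terrain."""
        heights = dsm - dtm
        return heights, heights >= min_height

    dsm = np.array([[301.0, 305.5], [300.2, 312.0]])   # surface, m
    dtm = np.array([[300.0, 300.5], [300.0, 300.8]])   # terrain, m
    print(ndem(dsm, dtm)[1])   # [[False  True] [False  True]]
    ```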

  19. Relative stereo 3-D vision sensor and its application for nursery plant transplanting

    NASA Astrophysics Data System (ADS)

    Hata, Seiji; Hayashi, Junichiro; Takahashi, Satoru; Hojo, Hirotaka

    2007-10-01

    Clone nursery plant production is one of the important applications of bio-technology. Most bio-production processes are highly automated, but the transplanting of small nursery plants cannot be automated because their shapes are not stable. In this research, a transplanting robot system for clone nursery plant production is under development. A 3-D vision system using the relative stereo method detects the shapes and positions of small nursery plants through transparent vessels, and a force-controlled robot picks up the plants and transplants them into vessels with artificial soil.

  20. Numerical analysis of a 3D optical sensor based on single mode fiber to multimode interference graphene design

    NASA Astrophysics Data System (ADS)

    Mutter, Kussay N.; Jafri, Zubir M.; Tan, Kok Chooi

    2016-04-01

    In this paper, the simulation and design of a waveguide for water turbidity sensing are presented. The proposed sensor uses a 2x2 array of multimode interference (MMI) couplers based on micro graphene waveguides for high sensitivity. The beam propagation method (BPM) is used to design the sensor structure efficiently. The structure consists of a two-by-two array of sensor elements. Each element has three sections: a single-mode input fiber tapered to the MMI section, which serves as the main sensing core without cladding and is based on graphene, followed by a single-mode output fiber. In this configuration, the MMI section responds to any change in the environment. We validate the design by implementing it on a set of sucrose solutions and showing how these samples lead to a sensitivity change in the MMI-based sensor. Overall, the 3D design provides feasible and effective sensing by mapping the topographical distribution of suspended particles in the water.

  1. Amplitude-modulated laser range-finder for 3D imaging with multi-sensor data integration capabilities

    NASA Astrophysics Data System (ADS)

    Bartolini, L.; Ferri de Collibus, M.; Fornetti, G.; Guarneri, M.; Paglia, E.; Poggi, C.; Ricci, R.

    2005-06-01

    A high-performance Amplitude Modulated Laser Rangefinder (AM-LR) is presented, aimed at accurately reconstructing 3D digital models of real targets, either single objects or complex scenes. The scanning system can sweep the sounding beam either linearly across the object or circularly around it, by placing the object on a controlled rotating platform. Both the phase shift and the amplitude of the modulating wave of the back-scattered light are collected and processed, resulting respectively in an accurate range image and a shade-free, high-resolution, photographic-like intensity image. The best range resolution obtained is ~100 μm. The resolution depends mainly on the laser modulation frequency, provided that the power of the backscattered light reaching the detector is at least a few nW. 3D models are reconstructed from the sampled points using specifically developed software tools, optimized to take advantage of the system's peculiarities. Special procedures have also been implemented to precisely match data acquired independently with different sensors (LIF laser sensors, thermographic cameras, etc.) onto the 3D models generated with the AM-LR. The system has been used to scan different types of real surfaces (stone, wood, alloys, bones) and can be applied in various fields, ranging from industrial machining to medical diagnostics, vision in hostile environments, and cultural heritage conservation and restoration. The relevance of this technology to cultural heritage applications is discussed in special detail, with results obtained in different campaigns and an emphasis on the system's multi-sensor data integration capabilities.

  2. A model and simulation to predict 3D imaging LADAR sensor systems performance in real-world type environments

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Dippel, George F.; Russo, Leonard E.

    2006-08-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR imaging sensor systems. Accurate methods to model and simulate the performance of 3D LADAR systems have been lacking, relying upon either single-pixel LADAR performance or extrapolation from passive-detection FPA performance. The model and simulation here is developed expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment, this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) detector noise figure; 4) detector gain; 5) target attributes; 6) atmospheric transmission; 7) atmospheric backscatter; 8) atmospheric turbulence; 9) obscurants; 10) obscurant path length, and; 11) platform motion. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling the non-uniformity of each individual pixel: noise sources and gain are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function, resulting in a different gain for each pixel within the array. In this manner very accurate performance is determined pixel-by-pixel for the entire array. Model outputs are 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array.

  3. Development of Lidar Sensor Systems for Autonomous Safe Landing on Planetary Bodies

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Pierottet, Diego F.; Petway, Larry B.; Vanek, Michael D.

    2010-01-01

    Lidar has been identified by NASA as a key technology for enabling autonomous safe landing of future robotic and crewed lunar landing vehicles. NASA LaRC has been developing three laser/lidar sensor systems under the ALHAT project. The capabilities of these lidar sensor systems were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard helicopters and a fixed-wing aircraft. The airborne tests were performed over Moon-like terrain in the California and Nevada deserts. These tests provided the necessary data for the development of signal processing software and algorithms for hazard detection and navigation. The tests helped identify technology areas needing improvement and will also help guide future technology advancement activities.

  4. Reducing the influence of direct reflection on return signal detection in a 3D imaging lidar system by rotating the polarizing beam splitter.

    PubMed

    Wang, Chunhui; Lee, Xiaobao; Cui, Tianxiang; Qu, Yang; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-03-01

    The direction rule of a laser beam traveling through a deflected polarizing beam splitter (PBS) cube is derived. It reveals that, accounting for end-face reflection of the PBS at the detector side, the emergent beam derived from the incident beam remains parallel to the direction it would have without rotation, with only a very small translation between the two; the formula for this translation interval is also given. Meanwhile, the emergent beam from the return signal at the detector side is deflected by an angle twice the PBS rotation angle. This rule has been verified experimentally. The intensity transmittance of the emergent beam propagating through the PBS changes very little if the rotation angle is less than 35 deg. In a 3D imaging lidar system, rotating the PBS cube by an angle separates the direction of the return-signal optical axis from the original one, which can decrease or eliminate the influence of direct reflection caused by the prism end face on target return-signal detection. This has also been checked by experiment.
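
    The two geometric effects can be sketched numerically. The paper's own translation formula is not reproduced in the record, so the sketch below uses the standard plane-parallel-plate displacement as a stand-in (the PBS is treated as a tilted glass plate of assumed thickness t and index n), together with the law-of-reflection doubling of the rotation angle.

      # Sketch of the two geometric effects, under simplifying assumptions
      # (PBS treated as a plane-parallel glass plate; the paper's exact
      # translation formula is not reproduced here).
      import numpy as np

      def transmitted_beam_offset(theta, t=0.0254, n=1.5):
          """Lateral offset of a beam transmitted through a plate tilted by theta.

          Standard plane-parallel-plate result:
            d = t * sin(theta) * (1 - cos(theta) / (n * cos(theta_r))),
          with sin(theta) = n * sin(theta_r). The beam stays parallel to its
          original direction, as stated in the abstract.
          """
          theta_r = np.arcsin(np.sin(theta) / n)
          return t * np.sin(theta) * (1.0 - np.cos(theta) / (n * np.cos(theta_r)))

      def reflected_beam_deflection(theta):
          """Law of reflection: rotating the reflecting face by theta deflects
          the reflected (return-signal) beam by 2*theta."""
          return 2.0 * theta

      theta = np.deg2rad(5.0)
      print(transmitted_beam_offset(theta))                # ~7.4e-4 m for an assumed 1-inch cube
      print(np.rad2deg(reflected_beam_deflection(theta)))  # 10 degrees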

  5. Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors

    PubMed Central

    Ge, Song; Fan, Guoliang

    2015-01-01

    We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673

  6. Enhancing the sensitivity of magnetic sensors by 3D metamaterial shells

    PubMed Central

    Navau, Carles; Mach-Batlle, Rosa; Parra, Albert; Prat-Camps, Jordi; Laut, Sergi; Del-Valle, Nuria; Sanchez, Alvaro

    2017-01-01

    Magnetic sensors are key elements in our interconnected smart society. Their sensitivity becomes essential for many applications in fields such as biomedicine, computer memories, geophysics, or space exploration. Here we present a universal way of increasing the sensitivity of magnetic sensors by surrounding them with a spherical metamaterial shell with specially designed anisotropic magnetic properties. We analytically demonstrate that the magnetic field in the sensing area is enhanced by our metamaterial shell by a known factor that depends on the shell radii ratio. When the applied field is non-uniform, as for dipolar magnetic field sources, the field gradient is increased as well. A proof-of-concept experimental realization confirms the theoretical predictions. The metamaterial shell is also shown to concentrate time-dependent magnetic fields up to frequencies of 100 kHz. PMID:28303951

  7. A method of improving the dynamic response of 3D force/torque sensors

    NASA Astrophysics Data System (ADS)

    Osypiuk, Rafał; Piskorowski, Jacek; Kubus, Daniel

    2016-02-01

    In this paper, attention is drawn to the adverse dynamic properties of filters implemented in commercial force/torque sensors, which are increasingly used in industrial robotics. To remedy the problem, it is proposed to employ a time-variant filter with appropriately modulated parameters, which makes it possible to suppress the amplitude of the transient response and, at the same time, to increase the pulsation of the damped oscillations; this improves the dynamic properties by reducing the duration of transients. This property plays a key role in force control and in the fundamental problem of a robot establishing contact with a rigid environment. The parametric filters have been verified experimentally and compared with the filters available in force/torque sensors manufactured by JR3. The obtained results clearly indicate the advantages of the proposed solution, which may be an interesting alternative to classical filtering methods.
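
    The idea of parameter modulation can be illustrated with a much simpler filter than the authors' design: a first-order low-pass whose smoothing coefficient starts near 1 (wide bandwidth, fast transient settling) and decays to a small steady-state value (strong noise suppression). All coefficients below are illustrative assumptions.

      # Minimal illustration of a time-variant filter (not the authors' design):
      # alpha[n] decays from alpha_0 to alpha_inf, so a step input (e.g., a
      # contact event) settles quickly, after which noise is filtered as
      # aggressively as by a fixed alpha = alpha_inf filter.
      import numpy as np

      def time_variant_lowpass(x, alpha_0=0.9, alpha_inf=0.05, tau=20.0):
          y = np.empty_like(x, dtype=float)
          y[0] = x[0]
          for n in range(1, len(x)):
              alpha = alpha_inf + (alpha_0 - alpha_inf) * np.exp(-n / tau)
              y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
          return y

      rng = np.random.default_rng(1)
      x = np.ones(400) + 0.1 * rng.standard_normal(400)  # noisy step input
      y = time_variant_lowpass(x)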

  8. Enhancing the sensitivity of magnetic sensors by 3D metamaterial shells.

    PubMed

    Navau, Carles; Mach-Batlle, Rosa; Parra, Albert; Prat-Camps, Jordi; Laut, Sergi; Del-Valle, Nuria; Sanchez, Alvaro

    2017-03-17

    Magnetic sensors are key elements in our interconnected smart society. Their sensitivity becomes essential for many applications in fields such as biomedicine, computer memories, geophysics, or space exploration. Here we present a universal way of increasing the sensitivity of magnetic sensors by surrounding them with a spherical metamaterial shell with specially designed anisotropic magnetic properties. We analytically demonstrate that the magnetic field in the sensing area is enhanced by our metamaterial shell by a known factor that depends on the shell radii ratio. When the applied field is non-uniform, as for dipolar magnetic field sources, the field gradient is increased as well. A proof-of-concept experimental realization confirms the theoretical predictions. The metamaterial shell is also shown to concentrate time-dependent magnetic fields up to frequencies of 100 kHz.

  9. Enhancing the sensitivity of magnetic sensors by 3D metamaterial shells

    NASA Astrophysics Data System (ADS)

    Navau, Carles; Mach-Batlle, Rosa; Parra, Albert; Prat-Camps, Jordi; Laut, Sergi; Del-Valle, Nuria; Sanchez, Alvaro

    2017-03-01

    Magnetic sensors are key elements in our interconnected smart society. Their sensitivity becomes essential for many applications in fields such as biomedicine, computer memories, geophysics, or space exploration. Here we present a universal way of increasing the sensitivity of magnetic sensors by surrounding them with a spherical metamaterial shell with specially designed anisotropic magnetic properties. We analytically demonstrate that the magnetic field in the sensing area is enhanced by our metamaterial shell by a known factor that depends on the shell radii ratio. When the applied field is non-uniform, as for dipolar magnetic field sources, the field gradient is increased as well. A proof-of-concept experimental realization confirms the theoretical predictions. The metamaterial shell is also shown to concentrate time-dependent magnetic fields up to frequencies of 100 kHz.

  10. 3D change detection - Approaches and applications

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Tian, Jiaojiao; Reinartz, Peter

    2016-12-01

    Due to the unprecedented development of sensors, platforms and algorithms for 3D data acquisition and generation, 3D spaceborne, airborne and close-range data, in the form of image-based and Light Detection and Ranging (LiDAR) based point clouds, Digital Elevation Models (DEM) and 3D city models, have become more accessible than ever before. Change detection (CD), or time-series data analysis, in 3D has gained great attention due to its capability of providing volumetric dynamics that facilitate more applications and more accurate results. State-of-the-art CD reviews aim to provide a comprehensive synthesis and to simplify the taxonomy of traditional remote sensing CD techniques, which mainly sit within the boundary of 2D image/spectrum analysis, largely ignoring the particularities of the 3D aspects of the data. The inclusion of 3D data for change detection (termed 3D CD) not only provides a source with a different modality for analysis, but also transcends the border of traditional top-view 2D pixel/object-based analysis toward highly detailed, oblique-view or voxel-based geometric analysis. This paper reviews the recent developments and applications of 3D CD using remote sensing and close-range data, in support of both academic and industry researchers who seek solutions for detecting and analyzing the 3D dynamics of various objects of interest. We first describe the general considerations of 3D CD problems in different processing stages and identify CD types based on the information used, namely geometric comparison and geometric-spectral analysis. We then summarize relevant works and practices in urban, environmental, ecological and civil applications, among others. Given the broad spectrum of applications and the different types of 3D data, we discuss important issues in 3D CD methods. Finally, we present concluding remarks on the algorithmic aspects of 3D CD.
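
    The simplest geometric-comparison approach the review covers is vertical differencing of co-registered DEMs. A minimal sketch, with an assumed noise threshold and synthetic data:

      # Minimal sketch of geometric 3D change detection by DEM differencing:
      # subtract two co-registered DEMs, threshold by a noise level, and label
      # contiguous changed regions. Threshold and data are illustrative.
      import numpy as np
      from scipy import ndimage

      def dem_change(dem_t1, dem_t2, noise_threshold=0.5):
          """Return signed height change and labeled changed regions."""
          dh = dem_t2 - dem_t1                    # volumetric dynamics per cell
          changed = np.abs(dh) > noise_threshold  # suppress vertical noise
          labels, n_regions = ndimage.label(changed)
          return dh, labels, n_regions

      rng = np.random.default_rng(2)
      dem1 = rng.normal(100.0, 0.1, size=(200, 200))  # synthetic terrain
      dem2 = dem1.copy()
      dem2[50:80, 60:90] += 3.0                       # e.g., a new building
      dh, labels, n = dem_change(dem1, dem2)
      print(n, dh[labels == 1].sum())                 # regions; volume if cell area = 1 m^2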

  11. Proposal of a taste evaluating method of the sponge cake by using 3D range sensor

    NASA Astrophysics Data System (ADS)

    Kato, Kunihito; Yamamoto, Kazuhiko; Ogawa, Noriko

    2002-10-01

    Nowadays, image processing techniques are being applied to the food industry in many situations. Most of this research addresses quality control in plants, and there are hardly any cases of measuring 'taste'. We are developing a system for measuring deliciousness using image sensing. In this paper, we propose a method for estimating the deliciousness of a sponge cake. In food science, a sponge cake is considered more delicious when the bubbles on its cut surface are small and numerous. We propose a method for automatically detecting bubbles on the cut surface of a sponge cake using 3-D image processing. The deliciousness is then estimated from statistical information about the detected bubbles, grounded in food science.
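
    A hedged sketch of the bubble-statistics idea follows: threshold the cut-surface image, label dark pores as bubbles, then report count and mean size (smaller and more numerous bubbles imply a better cake under the criterion above). The threshold and the synthetic image are illustrative assumptions, not the paper's 3-D pipeline.

      # Bubble count and mean size from a grayscale cut-surface image.
      import numpy as np
      from scipy import ndimage

      def bubble_statistics(gray, threshold=0.4):
          """gray: 2-D float array in [0, 1]; bubbles assumed darker than crumb."""
          bubbles = gray < threshold
          labels, count = ndimage.label(bubbles)
          sizes = ndimage.sum(bubbles, labels, index=range(1, count + 1))
          return count, float(np.mean(sizes)) if count else 0.0

      rng = np.random.default_rng(3)
      img = rng.uniform(0.5, 1.0, size=(256, 256))
      for r, c in rng.integers(10, 246, size=(120, 2)):
          img[r - 2:r + 2, c - 2:c + 2] = 0.1       # synthetic small bubbles
      count, mean_size = bubble_statistics(img)
      print(count, mean_size)                       # many small bubbles -> "better"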

  12. 3-D sensor using relative stereo method for bio-seedlings transplanting system

    NASA Astrophysics Data System (ADS)

    Hiroyasu, Takehisa; Hayashi, Jun'ichiro; Hojo, Hirotaka; Hata, Seiji

    2005-12-01

    In plant factories producing clone seedlings, most production processes are highly automated, but transplanting small seedlings is hard to automate because the shapes of the seedlings are not uniform, and handling them requires observing each seedling's shape. Here, a 3-D robot vision system for the transplanting process in a plant factory is introduced. The system employs the relative stereo method and a slit-light measuring method; it can detect the shape of small seedlings and decide the cutting point. In this paper, the structure of the vision system and the image processing method for the system are explained.
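
    The depth-recovery principle behind stereo methods can be sketched in one line: for a rectified camera pair, depth is focal length times baseline over disparity. The focal length and baseline below are illustrative assumptions.

      # Minimal sketch of depth recovery by stereo triangulation.
      import numpy as np

      def depth_from_disparity(disparity_px, focal_px, baseline_m):
          """Pinhole rectified stereo: Z = f * B / d."""
          d = np.asarray(disparity_px, dtype=float)
          return np.where(d > 0, focal_px * baseline_m / d, np.inf)

      f_px = 800.0   # assumed focal length in pixels
      B = 0.06       # assumed 6 cm camera baseline
      print(depth_from_disparity([40.0, 8.0], f_px, B))  # [1.2 m, 6.0 m]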

  13. Lidar

    NASA Technical Reports Server (NTRS)

    Collis, R. T. H.

    1969-01-01

    Lidar is an optical radar technique employing laser energy. Variations in signal intensity as a function of range provide information on atmospheric constituents, even when these are too tenuous to be normally visible. The theoretical and technical basis of the technique is described and typical values of the atmospheric optical parameters given. The significance of these parameters to atmospheric and meteorological problems is discussed. While the basic technique can provide valuable information about clouds and other material in the atmosphere, it is not possible to determine particle size and number concentrations precisely. There are also inherent difficulties in evaluating lidar observations. Nevertheless, lidar can provide much useful information as is shown by illustrations. These include lidar observations of: cirrus cloud, showing mountain wave motions; stratification in clear air due to the thermal profile near the ground; determinations of low cloud and visibility along an air-field approach path; and finally the motion and internal structure of clouds of tracer materials (insecticide spray and explosion-caused dust) which demonstrate the use of lidar for studying transport and diffusion processes.
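
    The dependence of signal intensity on range described above is usually written as the single-scattering elastic lidar equation; a standard textbook form (not reproduced from this record) is

      P(R) = P_0 \, \frac{c \tau}{2} \, \frac{A}{R^2} \, \eta \, \beta(R) \, \exp\!\left( -2 \int_0^R \alpha(r) \, \mathrm{d}r \right)

    where P(R) is the received power from range R, P_0 the transmitted power, τ the pulse duration, A the receiver aperture area, η the system efficiency, β the volume backscatter coefficient, and α the extinction coefficient. The backscatter term carries the information on atmospheric constituents, while the exponential term expresses the two-way attenuation that complicates quantitative evaluation.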

  14. The Bubble Box: Towards an Automated Visual Sensor for 3D Analysis and Characterization of Marine Gas Release Sites

    PubMed Central

    Jordt, Anne; Zelenka, Claudius; Schneider von Deimling, Jens; Koch, Reinhard; Köser, Kevin

    2015-01-01

    Several acoustic and optical techniques have been used for characterizing natural and anthropogenic gas leaks (carbon dioxide, methane) from the ocean floor. Here, single-camera based methods for bubble stream observation have become an important tool, as they help estimate flux and bubble sizes under certain assumptions. However, they record only a projection of a bubble into the camera and therefore cannot capture the full 3D shape, which is particularly important for larger, non-spherical bubbles. The unknown distance of the bubble to the camera (making it appear larger or smaller than expected) as well as refraction at the camera interface introduce extra uncertainties. In this article, we introduce our wide-baseline stereo-camera deep-sea sensor, the Bubble Box, which overcomes these limitations, as it observes bubbles from two orthogonal directions using calibrated cameras. Besides the setup and the hardware of the system, we discuss appropriate calibration and the different automated processing steps (deblurring, detection, tracking, and 3D fitting) that are crucial to arrive at a 3D ellipsoidal shape and rise speed for each bubble. The obtained values for single bubbles can be aggregated into statistical bubble size distributions or fluxes for extrapolation based on diffusion and dissolution models and large-scale acoustic surveys. We demonstrate and evaluate the wide-baseline stereo measurement model using a controlled test setup with ground truth information. PMID:26690168

  15. Modular optical topometric sensor for 3D acquisition of human body surfaces and long-term monitoring of variations.

    PubMed

    Bischoff, Guido; Böröcz, Zoltan; Proll, Christian; Kleinheinz, Johannes; von Bally, Gert; Dirksen, Dieter

    2007-08-01

    Optical topometric 3D sensors such as laser scanners and fringe projection systems allow detailed digital acquisition of human body surfaces. For many medical applications, however, not only the current shape is important, but also its changes, e.g., in the course of surgical treatment. In such cases, time delays of several months between subsequent measurements frequently occur. A modular 3D coordinate measuring system based on the fringe projection technique is presented that allows 3D coordinate acquisition including calibrated color information, as well as the detection and visualization of deviations between subsequent measurements. In addition, parameters describing the symmetry of body structures are determined. The quantitative results of the analysis may be used as a basis for objective documentation of surgical therapy. The system is designed in a modular way, and thus, depending on the object of investigation, two or three cameras with different capabilities in terms of resolution and color reproduction can be utilized to optimize the set-up.
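
    Fringe projection systems of this kind commonly recover surface phase by phase shifting; a minimal sketch of the standard four-step variant follows (the system's actual algorithm is not specified in the record, so this is a generic illustration).

      # Four-step phase-shifting fringe analysis (standard technique).
      import numpy as np

      def four_step_phase(i1, i2, i3, i4):
          """Fringe images with phase offsets 0, pi/2, pi, 3*pi/2:
             I_k = A + B*cos(phi + k*pi/2), so phi = atan2(I4 - I2, I1 - I3)."""
          return np.arctan2(i4 - i2, i1 - i3)

      # Synthetic check on one pixel with phi = 1.0 rad:
      A, B, phi = 0.5, 0.4, 1.0
      frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
      print(four_step_phase(*frames))   # ~1.0 (wrapped to (-pi, pi])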

  16. An analogue contact probe using a compact 3D optical sensor for micro/nano coordinate measuring machines

    NASA Astrophysics Data System (ADS)

    Li, Rui-Jun; Fan, Kuang-Chao; Miao, Jin-Wei; Huang, Qiang-Xian; Tao, Sheng; Gong, Er-min

    2014-09-01

    This paper presents a new analogue contact probe based on a compact, high-precision 3D optical sensor. The sensor comprises an autocollimator and a polarizing Michelson interferometer, which can detect two angles and one displacement of a plane mirror at the same time. In this probe system, a tungsten stylus with a ruby tip-ball is attached to a floating plate, which is supported by four V-shaped leaf springs fixed to the outer case. When a contact force is applied to the tip, the leaf springs undergo elastic deformation and the plane mirror mounted on the floating plate is displaced. The force-motion characteristics of this probe were investigated and optimum parameters were obtained within the constraint of the allowable physical size of the probe. Simulation results show that the probe is uniform in 3D and its contact force gradient is within 1 mN/µm. Experimental results indicate that the probe has a resolution of 1 nm, a measuring range of ±10 µm in the X-Y plane and 10 µm in the Z direction, and a measuring standard deviation within 30 nm. The feasibility of the probe has been preliminarily verified by testing the flatness and step height of high-precision gauge blocks.

  17. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion.

    PubMed

    Dou, Qingxu; Wei, Lijun; Magee, Derek R; Atkins, Phil R; Chapman, David N; Curioni, Giulio; Goddard, Kevin F; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R; Rustighi, Emiliano; Swingler, Steven G; Rogers, Christopher D F; Cohn, Anthony G

    2016-11-02

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed "multi-utility multi-sensor" system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation.
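
    The marching step can be sketched with a linear Kalman filter as a simplified stand-in for the paper's EKF: the state is a track's lateral offset and slope in cross-section coordinates, predicted from one scan cross-section to the next and updated with an associated hypothesized detection. All noise levels below are illustrative assumptions.

      # Simplified stand-in for the MCS marching step (the paper uses an EKF).
      import numpy as np

      def march_and_update(x, P, detection, ds, q=1e-3, r=0.05**2):
          F = np.array([[1.0, ds], [0.0, 1.0]])  # offset grows by slope * spacing
          Q = q * np.array([[ds**3 / 3, ds**2 / 2], [ds**2 / 2, ds]])
          H = np.array([[1.0, 0.0]])             # we observe the lateral offset

          x = F @ x                               # predict to next cross-section
          P = F @ P @ F.T + Q
          if detection is not None:               # associate-and-update
              y = detection - (H @ x)[0]
              S = (H @ P @ H.T)[0, 0] + r
              K = (P @ H.T) / S
              x = x + (K * y).ravel()
              P = (np.eye(2) - K @ H) @ P
          return x, P

      x, P = np.array([0.0, 0.1]), np.eye(2) * 0.1
      for z in [0.12, None, 0.35]:                # missed detection on 2nd scs
          x, P = march_and_update(x, P, z, ds=1.0)
      print(x)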

  18. Lidar

    NASA Astrophysics Data System (ADS)

    Sage, J.-P.; Aubry, Y.

    1981-09-01

    It is noted that a photodetector at the telescope focal plane of a lidar produces a signal which is processed, giving information on the concentration of the species being monitored. The delay between the emitted and return signals indicates the distance to the interacting volume. Because of the poor efficiency of the interaction processes, the main difficulty in developing a good lidar has to do with the availability of sufficiently efficient lasers. Certain laser characteristics are discussed, and a CNES program for the development of lasers for lidar techniques is presented, future space applications being considered as mid-term objectives. The various components of the laser system developed by CNES are described. These are a dual frequency tunable oscillator, the amplifier chain, the beam control unit and wavelength servo-system, and the harmonic conversion subsystem.

  19. Retrieval of Vegetation Structure and Carbon Balance Parameters Using Ground-Based Lidar and Scaling to Airborne and Spaceborne Lidar Sensors

    NASA Astrophysics Data System (ADS)

    Strahler, A. H.; Ni-Meister, W.; Woodcock, C. E.; Li, X.; Jupp, D. L.; Culvenor, D.

    2006-12-01

    This research uses a ground-based, upward hemispherical scanning lidar to retrieve forest canopy structural information, including tree height, mean tree diameter, basal area, stem count density, crown diameter, woody biomass, and green biomass. These parameters are then linked to airborne and spaceborne lidars to provide large-area mapping of structural and biomass parameters. The terrestrial lidar instrument, Echidna(TM), developed by CSIRO Australia, allows rapid acquisition of vegetation structure data that can be readily integrated with downward-looking airborne lidar, such as LVIS (Laser Vegetation Imaging Sensor), and spaceborne lidar, such as GLAS (Geoscience Laser Altimeter System) on ICESat. Lidar waveforms and vegetation structure are linked for these three sensors through the hybrid geometric-optical radiative-transfer (GORT) model, which uses basic vegetation structure parameters and principles of geometric optics, coupled with radiative transfer theory, to model scattering and absorption of light by collections of individual plant crowns. Use of a common model for lidar waveforms at ground, airborne, and spaceborne levels facilitates integration and scaling of the data to provide large-area maps and inventories of vegetation structure and carbon stocks. Our research plan includes acquisition of Echidna(TM) under-canopy hemispherical lidar scans at North American test sites where LVIS and GLAS data have been or are being acquired; analysis and modeling of spatially coincident lidar waveforms acquired by the three sensor systems; linking of the three data sources using the GORT model; and mapping of vegetation structure and carbon-balance parameters at LVIS and GLAS resolutions based on Echidna(TM) measurements.

  20. Integrating eye tracking and motion sensor on mobile phone for interactive 3D display

    NASA Astrophysics Data System (ADS)

    Sun, Yu-Wei; Chiang, Chen-Kuo; Lai, Shang-Hong

    2013-09-01

    In this paper, we propose an eye tracking and gaze estimation system for mobile phones. We integrate an eye detector with eye-corner, eye-center and iso-center cues to improve pupil detection, and use optical flow information for eye tracking, yielding a robust system that combines eye detection with optical-flow-based image tracking. In addition, we incorporate the orientation sensor information from the mobile phone to improve the eye tracking for accurate gaze estimation. We demonstrate the accuracy of the proposed eye tracking and gaze estimation system through experiments on public video sequences as well as videos acquired directly from a mobile phone.
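
    A hedged sketch of optical-flow-based point tracking using OpenCV's pyramidal Lucas-Kanade follows; the paper's own tracker may differ, and the window size and feature parameters are assumptions.

      # Track eye feature points between consecutive frames with pyramidal LK.
      import cv2
      import numpy as np

      def track_points(prev_gray, next_gray, points):
          """points: Nx1x2 float32 array of feature locations in prev_gray."""
          nxt, status, _err = cv2.calcOpticalFlowPyrLK(
              prev_gray, next_gray, points, None,
              winSize=(21, 21), maxLevel=3)
          good = status.ravel() == 1
          return nxt[good], points[good]

      # Typical use: seed with corners inside the detected eye region, then track.
      # prev_gray, next_gray = consecutive grayscale frames from the phone camera
      # pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
      #                               qualityLevel=0.01, minDistance=5)
      # new_pts, old_pts = track_points(prev_gray, next_gray,
      #                                 pts.astype(np.float32))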

  1. Microwave and camera sensor fusion for the shape extraction of metallic 3D space objects

    NASA Technical Reports Server (NTRS)

    Shaw, Scott W.; Defigueiredo, Rui J. P.; Krishen, Kumar

    1989-01-01

    The vacuum of space presents special problems for optical image sensors. Metallic objects in this environment can produce intense specular reflections and deep shadows. By combining the polarized RCS with an incomplete camera image, it has become possible to better determine the shape of some simple three-dimensional objects. The radar data are used in an iterative procedure that generates successive approximations to the target shape by minimizing the error between computed scattering cross-sections and the observed radar returns. Favorable results have been obtained for simulations and experiments reconstructing plates, ellipsoids, and arbitrary surfaces.

  2. A Compact 3D Omnidirectional Range Sensor of High Resolution for Robust Reconstruction of Environments

    PubMed Central

    Marani, Roberto; Renò, Vito; Nitti, Massimiliano; D'Orazio, Tiziana; Stella, Ettore

    2015-01-01

    In this paper, an accurate range sensor for the three-dimensional reconstruction of environments is designed and developed. Following the principles of laser profilometry, the device exploits a set of optical transmitters able to project a laser line on the environment. A high-resolution and high-frame-rate camera assisted by a telecentric lens collects the laser light reflected by a parabolic mirror, whose shape is designed ad hoc to achieve a maximum measurement error of 10 mm when the target is placed 3 m away from the laser source. Measurements are derived by means of an analytical model, whose parameters are estimated during a preliminary calibration phase. Geometrical parameters, analytical modeling and image processing steps are validated through several experiments, which indicate the capability of the proposed device to recover the shape of a target with high accuracy. Experimental measurements show Gaussian statistics, having standard deviation of 1.74 mm within the measurable range. Results prove that the presented range sensor is a good candidate for environmental inspections and measurements. PMID:25621605

  3. Integrating Sensors into a Marine Drone for Bathymetric 3D Surveys in Shallow Waters.

    PubMed

    Giordano, Francesco; Mattei, Gaia; Parente, Claudio; Peluso, Francesco; Santamaria, Raffaele

    2015-12-29

    This paper demonstrates that accurate data concerning bathymetry as well as environmental conditions in shallow waters can be acquired using sensors that are integrated into the same marine vehicle. An open prototype of an unmanned surface vessel (USV) named MicroVeGA is described. The focus is on the main instruments installed on-board: a differential Global Positioning System (GPS) and single-beam echo sounder; an inertial platform for attitude control; an ultrasound obstacle-detection system with temperature control; and an emerged and submerged video acquisition system. The results of two case studies are presented, both concerning areas (Sorrento Marina Grande and Marechiaro Harbour, both in the Gulf of Naples) characterized by a coastal physiography that impedes the execution of a bathymetric survey with traditional boats. In addition, those areas are critical because of the presence of submerged archaeological remains that produce rapid changes in depth values. The experiments confirm that the integration of the sensors improves the instruments' performance and survey accuracy.

  4. Integrating Sensors into a Marine Drone for Bathymetric 3D Surveys in Shallow Waters

    PubMed Central

    Giordano, Francesco; Mattei, Gaia; Parente, Claudio; Peluso, Francesco; Santamaria, Raffaele

    2015-01-01

    This paper demonstrates that accurate data concerning bathymetry as well as environmental conditions in shallow waters can be acquired using sensors that are integrated into the same marine vehicle. An open prototype of an unmanned surface vessel (USV) named MicroVeGA is described. The focus is on the main instruments installed on-board: a differential Global Positioning System (GPS) and single-beam echo sounder; an inertial platform for attitude control; an ultrasound obstacle-detection system with temperature control; and an emerged and submerged video acquisition system. The results of two case studies are presented, both concerning areas (Sorrento Marina Grande and Marechiaro Harbour, both in the Gulf of Naples) characterized by a coastal physiography that impedes the execution of a bathymetric survey with traditional boats. In addition, those areas are critical because of the presence of submerged archaeological remains that produce rapid changes in depth values. The experiments confirm that the integration of the sensors improves the instruments' performance and survey accuracy. PMID:26729117

  5. Qualification of a 3D structured light sensor for a reverse engineering application

    NASA Astrophysics Data System (ADS)

    Guarato, Alexandre Z.; Loja, Alexandre C.; Pereira, Leonardo P.; Braga, Sergio L.; Trevilato, Thales R. B.

    2016-11-01

    This paper deals with the qualification of a 3D structured-light scanning system for a reverse engineering application on a mechanical part. As this white-light scanner is an electro-optical device based on the principle of optical triangulation, its measurement accuracy is affected by the geometry of the measured part and its position within the scanning window. The effects of scan depth and projection angle (characterizing the orientation of the measured surface normal relative to the scanning viewpoint) on measurement accuracy are not considered in manufacturers' standard calibration processes and have been identified by experiments in the present work. The digitization errors are analyzed and characterized by means of a measurement protocol based on quality indicators, which are evaluated using simple calibrated artifacts. The aim of this work is to redefine the ideal relative distance and relative angle that minimize the digitizing errors, relative to those stated by the manufacturer, for a reverse engineering application.

  6. Efficient Data Gathering in 3D Linear Underwater Wireless Sensor Networks Using Sink Mobility

    PubMed Central

    Akbar, Mariam; Javaid, Nadeem; Khan, Ayesha Hussain; Imran, Muhammad; Shoaib, Muhammad; Vasilakos, Athanasios

    2016-01-01

    Due to the harsh and unpredictable underwater environment, designing an energy-efficient routing protocol for underwater wireless sensor networks (UWSNs) demands more accuracy and extra computation. In the proposed scheme, we introduce a mobile sink (MS), i.e., an autonomous underwater vehicle (AUV), as well as courier nodes (CNs), to minimize the energy consumption of nodes. The MS and CNs stop at specific points for data gathering; the CNs then forward the received data to the MS for further transmission. Through the mobility of the CNs and MS, the overall energy consumption of nodes is minimized. We perform simulations to investigate the performance of the proposed scheme and compare it to preexisting techniques. Simulation results are compared in terms of network lifetime, throughput, path loss, transmission loss and packet drop ratio. The results show that the proposed technique performs better in terms of network lifetime, throughput, path loss and scalability. PMID:27007373

  7. A Robust Method to Detect Zero Velocity for Improved 3D Personal Navigation Using Inertial Sensors

    PubMed Central

    Xu, Zhengyi; Wei, Jianming; Zhang, Bo; Yang, Weijun

    2015-01-01

    This paper proposes a robust zero-velocity (ZV) detector algorithm to accurately identify stationary periods in a gait cycle. The proposed algorithm adopts an effective gait cycle segmentation method and introduces a Bayesian network (BN) model, based on the measurements of inertial sensors and kinesiology knowledge, to infer the ZV period. During the detected ZV period, an Extended Kalman Filter (EKF) is used to estimate the error states and calibrate the position error. The experiments reveal that the removal rate of ZV false detections by the proposed method increases by 80% compared with the traditional method at high walking speed. Furthermore, based on the detected ZV, the Personal Inertial Navigation System (PINS) algorithm aided by the EKF performs better, especially in the altitude dimension. PMID:25831086
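
    For contrast with the BN-based detector above, the traditional baseline can be sketched in a few lines: a sample counts as stationary when the acceleration magnitude stays near gravity and the angular rate near zero over a whole window. Window length and thresholds below are illustrative assumptions.

      # Basic windowed zero-velocity detector (the traditional baseline, not
      # the paper's BN model).
      import numpy as np

      G = 9.81

      def zero_velocity_mask(acc, gyr, win=15, acc_tol=0.3, gyr_tol=0.2):
          """acc, gyr: Nx3 arrays (m/s^2, rad/s). Returns boolean N-vector."""
          a_dev = np.abs(np.linalg.norm(acc, axis=1) - G)
          w_mag = np.linalg.norm(gyr, axis=1)
          still = (a_dev < acc_tol) & (w_mag < gyr_tol)
          # require the whole window to be still to suppress false detections
          counts = np.convolve(still.astype(float), np.ones(win), mode='same')
          return counts >= win

      # During detected ZV periods an EKF can reset the velocity error states.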

  8. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion

    PubMed Central

    Dou, Qingxu; Wei, Lijun; Magee, Derek R.; Atkins, Phil R.; Chapman, David N.; Curioni, Giulio; Goddard, Kevin F.; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R.; Rustighi, Emiliano; Swingler, Steven G.; Rogers, Christopher D. F.; Cohn, Anthony G.

    2016-01-01

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed “multi-utility multi-sensor” system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation.

  9. a New Automatic System Calibration of Multi-Cameras and LIDAR Sensors

    NASA Astrophysics Data System (ADS)

    Hassanein, M.; Moussa, A.; El-Sheimy, N.

    2016-06-01

    In the last few years, multi-camera and LIDAR systems have drawn the attention of the mapping community. They have been deployed on different mobile mapping platforms. The different uses of these platforms, especially UAVs, have offered new applications and developments which require fast and accurate results. The successful calibration of such systems is a key factor in achieving accurate results and in the successful processing of the system measurements, especially with the different types of measurements provided by the LIDAR and the cameras. System calibration aims to estimate the geometric relationships between the different system components. A number of applications require the systems to be ready for operation in a short time, especially for disaster-monitoring applications. Also, many of the present system calibration techniques are constrained by the need for special lab arrangements for the calibration procedures. In this paper, a new technique for the calibration of integrated LIDAR and multi-camera systems is presented. The proposed technique offers a calibration solution that overcomes the need for special labs for standard calibration procedures. In the proposed technique, 3D reconstruction of automatically detected and matched image points is used to generate a sparse images-driven point cloud; then, a registration between the LIDAR-generated 3D point cloud and the images-driven 3D point cloud takes place to estimate the geometric relationships between the cameras and the LIDAR. In the presented technique a simple 3D artificial target, composed of three intersected plates, is used to simplify the lab requirements for the calibration procedure. This target geometry was chosen to ensure enough conditions for the convergence of registration between the 3D point clouds constructed from the two systems. The achieved results of the proposed approach prove its ability to provide an adequate and fully automated calibration without the need for special calibration labs.
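
    The core of such a point-cloud registration can be sketched as the closed-form SVD (Kabsch) solution for the best-fit rotation and translation given matched points; a full pipeline would iterate this with nearest-neighbor matching (ICP). The synthetic data below are an illustrative assumption, not the paper's setup.

      # Closed-form rigid alignment of matched 3D point sets.
      import numpy as np

      def rigid_align(P, Q):
          """Find R, t minimizing ||R @ P_i + t - Q_i||^2 over matched Nx3 arrays."""
          cp, cq = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
          U, _S, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
          t = cq - R @ cp
          return R, t

      rng = np.random.default_rng(4)
      P = rng.normal(size=(100, 3))                 # images-driven points
      R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
      Q = P @ R_true.T + np.array([0.2, -0.1, 0.5]) # LIDAR points (known pose)
      R, t = rigid_align(P, Q)
      print(np.allclose(R, R_true), np.round(t, 3))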

  10. Underwater monitoring experiment using hyperspectral sensor, LiDAR and high resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Yang, Chan-Su; Kim, Sun-Hwa

    2014-10-01

    In general, underwater monitoring with hyperspectral sensors, LiDAR and high-spatial-resolution satellite imagery depends on water clarity or water transparency, which can be measured using a Secchi disk or satellite ocean color data. Optical properties in the sea waters of South Korea are influenced mainly by strong tides and oceanic currents, with diurnal, daily and seasonal variations in water transparency. A satellite-based Secchi depth (ZSD) analysis showed the applicability of hyperspectral sensors, LiDAR and optical satellites, determined by location in relation to the local distribution of Case 1 and Case 2 waters. The southeast coastal areas of Jeju Island were selected as test sites for a combined underwater experiment, because those areas represent Case 1 water. The study area is a small port (<15 m) in the southeast of the island in which a linear underwater target, a sewage pipe, is located. Our experiments are as follows: 1. atmospheric and sun-glint correction methods to improve underwater monitoring ability; 2. intercomparison of water depths obtained from three different sensors. The three sensors used here are the CASI-1500 Wide-Array Airborne Hyperspectral VNIR Imager (0.38-1.05 microns), the Coastal Zone Mapping and Imaging Lidar (CZMIL) and the Korean Multi-purpose Satellite-3 (KOMPSAT-3) with 2.8 m multi-spectral resolution. The experimental results were affected by water clarity and surface conditions, and the bathymetric results of the three sensors show some differences caused by the sensors themselves, the bathymetric algorithms and the tide level. It is shown that the CASI-1500 was applicable for bathymetry and underwater target detection in this area, but KOMPSAT-3 should be improved for Case 1 water. Although this experiment was designed to compare the underwater monitoring ability of LIDAR, CASI-1500 and KOMPSAT-3 data, this paper is based on initial results and reports only bathymetry and underwater target detection.

  11. Multi-sensor super-resolution for hybrid range imaging with application to 3-D endoscopy and open surgery.

    PubMed

    Köhler, Thomas; Haase, Sven; Bauer, Sebastian; Wasza, Jakob; Kilgus, Thomas; Maier-Hein, Lena; Stock, Christian; Hornegger, Joachim; Feußner, Hubertus

    2015-08-01

    In this paper, we propose a multi-sensor super-resolution framework for hybrid imaging that super-resolves data from one modality by taking advantage of additional guidance images of a complementary modality. This concept is applied to hybrid 3-D range imaging in image-guided surgery, where high-quality photometric data are exploited to enhance range images of low spatial resolution. We formulate super-resolution based on the maximum a-posteriori (MAP) principle and reconstruct high-resolution range data from multiple low-resolution frames and complementary photometric information. Robust motion estimation, as required for super-resolution, is performed on photometric data to derive displacement fields of subpixel accuracy for the associated range images. For improved reconstruction of depth discontinuities, a novel adaptive regularizer exploiting correlations between both modalities is embedded in the MAP estimation. We evaluated our method on synthetic data as well as ex-vivo images in open surgery and endoscopy. The proposed multi-sensor framework improves the peak signal-to-noise ratio by 2 dB and structural similarity by 0.03 on average compared to conventional single-sensor approaches. In ex-vivo experiments on porcine organs, our method achieves substantial improvements in terms of depth discontinuity reconstruction.
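
    The MAP data term can be illustrated in its simplest single-frame form: with D an s-fold average-downsampling operator, gradient descent on 0.5*||D x - y||^2 + 0.5*lam*||x||^2 pulls a high-resolution range image x toward the observed low-resolution frame y. The paper's full model (multiple motion-compensated frames, adaptive cross-modality regularizer) is far richer than this sketch; the operator and step size are illustrative assumptions.

      # Single-frame MAP super-resolution toy example with a Tikhonov prior.
      import numpy as np

      def down(x, s):                       # average pooling, the operator D
          h, w = x.shape
          return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

      def up(y, s):                         # adjoint of average pooling, D^T
          return np.kron(y, np.ones((s, s))) / (s * s)

      def map_sr(y, s, lam=1e-3, eta=1.0, iters=200):
          x = np.kron(y, np.ones((s, s)))   # initialize by replication
          for _ in range(iters):
              grad = up(down(x, s) - y, s) + lam * x
              x -= eta * grad
          return x

      y = np.add.outer(np.linspace(0, 1, 16), np.linspace(0, 1, 16))  # toy range map
      x_hr = map_sr(y, s=2)
      print(x_hr.shape)                     # (32, 32)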

  12. FLASH LIDAR Based Relative Navigation

    NASA Technical Reports Server (NTRS)

    Brazzel, Jack; Clark, Fred; Milenkovic, Zoran

    2014-01-01

    Relative navigation remains the most challenging part of spacecraft rendezvous and docking. In recent years, flash LIDARs have been increasingly selected as the go-to sensors for proximity operations and docking. Flash LIDARs are generally lighter and require less power than scanning lidars. Flash LIDARs have no moving parts, and they are capable of tracking multiple targets as well as generating a 3D map of a given target. However, some significant drawbacks of flash LIDARs must be resolved if their use is to be of long-term significance. Overcoming the challenges of flash LIDARs for navigation (namely, low technology readiness level, lack of historical performance data, target identification, existence of false positives, and the performance of vision processing algorithms as intermediaries between the raw sensor data and the Kalman filter) requires a world-class testing facility, such as the Lockheed Martin Space Operations Simulation Center (SOSC). Ground-based testing is a critical step for maturing the next-generation flash-LIDAR-based spacecraft relative navigation. This paper focuses on the tests of an integrated relative navigation system conducted at the SOSC in January 2014. The intent of the tests was to characterize and then improve the performance of relative navigation, while addressing many of the flash LIDAR challenges mentioned above. A section on navigation performance and future recommendations completes the discussion.

  13. On-machine measurement of the grinding wheels' 3D surface topography using a laser displacement sensor

    NASA Astrophysics Data System (ADS)

    Pan, Yongcheng; Zhao, Qingliang; Guo, Bing

    2014-08-01

    A method for non-contact, on-machine measurement of the three-dimensional surface topography of a grinding wheel's whole surface was developed in this paper, focusing on an electroplated coarse-grained diamond grinding wheel. The measuring system consists of a Keyence laser displacement sensor, a Keyence controller and an NI PCI-6132 data acquisition card. A resolution of 0.1 μm in the vertical direction and 8 μm in the horizontal direction was achieved. After processing the data with LabVIEW and MATLAB, the 3D topography of the grinding wheel's whole surface could be reconstructed. The reconstructed 3D topography of a marked area on the grinding wheel was very similar to the real topography captured by a high-depth-field optical digital microscope (HDF-ODM) and a scanning electron microscope (SEM), showing that the method is accurate and effective. Through subsequent data processing, the topography of every grain could be extracted and the active grain number, the active grain volume and the active grains' bearing ratio calculated. These three parameters can serve as criteria to evaluate the grinding performance of coarse-grained diamond grinding wheels, so that the performance of the grinding wheel can be evaluated on-machine, accurately and quantitatively.

  14. Digital holographic interferometer using simultaneously three lasers and a single monochrome sensor for 3D displacement measurements.

    PubMed

    Saucedo-A, Tonatiuh; De la Torre-Ibarra, M H; Santoyo, F Mendoza; Moreno, Ivan

    2010-09-13

    The use of digital holographic interferometry for 3D measurements with three simultaneous illumination directions was demonstrated by Saucedo et al. (Optics Express 14(4), 2006). The technique records two consecutive images, each containing three holograms, e.g., one before and one after the deformation. A short-coherence-length laser must be used to obtain the simultaneous 3D information from the same laser source. In this manuscript we present an extension of this technique, now illuminating simultaneously with three different lasers at 458, 532 and 633 nm and using only one high-resolution monochrome CMOS sensor. This new configuration makes it possible to use long-coherence-length lasers, allowing the measurement of large object areas. A series of digital holographic interferograms is recorded, and the information corresponding to each laser is isolated in the Fourier spectral domain, where the corresponding phase difference is calculated. Experimental results render the orthogonal displacement components u, v and w during a simple load deformation.
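
    The Fourier-domain isolation step can be sketched generically: in off-axis digital holography each laser's interference term occupies its own region of the spectrum, so a band-pass mask around that carrier followed by an inverse FFT recovers the complex field, and the phase difference between two states gives the deformation-induced phase. The circular mask geometry below is an illustrative choice, not the authors' exact filter.

      # Isolate one spectral order of a hologram and compute a phase difference.
      import numpy as np

      def extract_field(hologram, center, radius):
          """Band-pass one spectral order of a real-valued hologram image."""
          F = np.fft.fftshift(np.fft.fft2(hologram))
          ky, kx = np.indices(F.shape)
          mask = (ky - center[0])**2 + (kx - center[1])**2 <= radius**2
          return np.fft.ifft2(np.fft.ifftshift(F * mask))

      def phase_difference(holo_before, holo_after, center, radius):
          u1 = extract_field(holo_before, center, radius)
          u2 = extract_field(holo_after, center, radius)
          return np.angle(u2 * np.conj(u1))   # wrapped phase difference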

  15. Fully back-end TSV process by Cu electro-less plating for 3D smart sensor systems

    NASA Astrophysics Data System (ADS)

    Santagata, F.; Farriciello, C.; Fiorentino, G.; van Zeijl, H. W.; Silvestri, C.; Zhang, G. Q.; Sarro, P. M.

    2013-05-01

    A fully back-end process for high-aspect-ratio through-silicon vias (TSVs) for 3D smart sensor systems is developed. Atomic layer deposition of TiN provides a highly conformal barrier as well as a seed layer for metal plating. Cu electro-less plating on the chemically activated TiN surfaces is applied to uniformly fill the TSVs in a significantly shorter time (2 h for 300 μm deep and 20 μm wide TSVs) than with Cu bottom-up electroplating (>20 h). The process is CMOS compatible and can be performed after the last metallization step, making it a fully back-end process (VIA-last approach). Wafers containing metal interconnections on both sides are used as a demonstrator. Four-terminal 3D Kelvin structures are fabricated and characterized. An average resistance of 650 mΩ is measured for 300 μm deep TSVs with an aspect ratio of 15. The crosstalk between adjacent TSVs is also measured by means of S-parameter characterization on dedicated RF test structures. The closest TSVs (75 μm apart) show a reciprocal crosstalk of less than -20 dB at 30 GHz.

  16. 3D geometrical inspection of complex geometry parts using a novel laser triangulation sensor and a robot.

    PubMed

    Brosed, Francisco Javier; Aguilar, Juan José; Guillomía, David; Santolaria, Jorge

    2011-01-01

    This article discusses different non-contact 3D measuring strategies and presents a model for measuring complex-geometry parts, manipulated by a robot arm, using a novel vision system consisting of a laser triangulation sensor and a motorized linear stage. First, the geometric model, which incorporates a simple automatic module for long-term stability improvement, is outlined. The new method used in the automatic module allows the sensor set-up, including the motorized linear stage, to be prepared for scanning without external measurement devices. In the measurement model, the robot serves only to position parts with high repeatability. Its position and orientation data are not used for the measurement, and it is therefore not directly "coupled" as an active component in the model. The function of the robot is to present the various surfaces of the workpiece within the measurement range of the vision system, which is responsible for the measurement. Thus, the whole system is not affected by the robot's own trajectory-following errors, except those due to a lack of static repeatability. For the indirect link between the vision system and the robot, the model developed needs only one first piece, measured as a "zero" or master piece, known through its accurate measurement using, for example, a Coordinate Measuring Machine. The proposed strategy presents a different approach from traditional laser triangulation systems mounted on the robot in order to improve measurement accuracy, and several important cues for self-recalibration are explored using only a master piece. Experimental results are also presented to demonstrate the technique and the final 3D measurement accuracy.

  17. Coupling high resolution 3D point clouds from terrestrial LiDAR with high precision displacement time series from GB-InSAR to understand landslide kinematic: example of the La Perraire instability, Swiss Alps.

    NASA Astrophysics Data System (ADS)

    Michoud, Clément; Baillifard, François; Harald Blikra, Lars; Derron, Marc-Henri; Jaboyedoff, Michel; Kristensen, Lene; Leva, Davide; Metzger, Richard; Rivolta, Carlo

    2014-05-01

    Terrestrial Laser Scanning and Ground-Based Radar Interferometry have changed our perception and interpretation of slope activity over the last 20 years and are now routinely used for monitoring and even early warning purposes. Terrestrial LiDAR indeed makes it possible to model topography with very high point density, even on steep slopes, and to extract 3D displacements of rock masses by comparing successive datasets. GB-InSAR techniques are able to detect mm-scale displacements over large areas. Nevertheless, both techniques suffer from limitations: the precision of LiDAR devices limits their ability to monitor very slow-moving landslides, while the resolution and the particular (azimuth/range) geometry of GB-InSAR data may complicate interpretation. To overcome these limitations, tools were produced to truly combine the strong advantages of both techniques, coupling high-resolution geometric data from terrestrial LiDAR or photogrammetry with high-precision displacement time series from GB-InSAR. We developed a new export module for the processing chain of LiSAmobile (GB-InSAR) devices in order to wrap radar results from their particular geometry onto high-resolution 3D point clouds with cm mean point spacing. Furthermore, we also added new import and visualization functionalities to Coltop3D (software for geological interpretation of laser scanning data) to display those results in 3D and even analyze displacement time series. This new method has also been optimized for processing time and to create files that are as few and as small as possible. The advantages of coupling terrestrial LiDAR and GB-InSAR data are illustrated on the La Perraire instability, a large active rockslide involving frequent rockfalls and threatening inhabitants of the Val de Bagnes in the Swiss Alps. This rock mass, monitored by LiDAR and GPS since 2006, is large enough, and its long-term movements big (up to 1.6 m in 6 years) and complex enough, to make the combined analysis particularly valuable.

  18. Triboelectric nanogenerator built on suspended 3D spiral structure as vibration and positioning sensor and wave energy harvester.

    PubMed

    Hu, Youfan; Yang, Jin; Jing, Qingshen; Niu, Simiao; Wu, Wenzhuo; Wang, Zhong Lin

    2013-11-26

    An unstable mechanical structure that can self-balance when perturbed is a superior choice for vibration energy harvesting and vibration detection. In this work, a suspended 3D spiral structure is integrated with a triboelectric nanogenerator (TENG) for energy harvesting and sensor applications. The newly designed vertical contact-separation mode TENG has a wide working bandwidth of 30 Hz in the low-frequency range, with a maximum output power density of 2.76 W/m(2) on a load of 6 MΩ. The position of an in-plane vibration source was identified by placing TENGs at multiple positions as multichannel, self-powered active sensors, and the location of the vibration source was determined with an error of less than 6%. The magnitude of the vibration is also measured through the output voltage and current signals of the TENG. By integrating the TENG inside a buoy ball, wave energy harvesting at the water surface has been demonstrated and used to power an illumination light, which shows great potential for applications in marine science and environmental/infrastructure monitoring.

  19. Computed Tomography Image Origin Identification based on Original Sensor Pattern Noise and 3D Image Reconstruction Algorithm Footprints.

    PubMed

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2016-06-08

    In this paper, we focus on the "blind" identification of the Computed Tomography (CT) scanner that produced a given CT image. To do so, we propose a set of noise features derived from the image acquisition chain which can be used as a CT-scanner footprint. We propose two approaches. The first aims at identifying a CT scanner based on an Original Sensor Pattern Noise (OSPN) that is intrinsic to the X-ray detectors. The second identifies an acquisition system based on the way this noise is modified by its 3D image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train an SVM-based classifier to discriminate acquisition systems. Experiments conducted on images from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of a CT image with a detection rate of at least 94% and that it achieves better performance than the Sensor Pattern Noise (SPN) based strategy proposed for consumer camera devices.
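
    The classification stage can be sketched with standard tooling; the noise-feature extraction is the paper's contribution and is mocked below with synthetic vectors, so everything except the SVM pattern itself is an assumption.

      # Train an SVM to discriminate acquisition systems from feature vectors.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(5)
      X = rng.normal(size=(600, 24))        # mock noise-feature vectors
      y = rng.integers(0, 15, size=600)     # 15 scanner models as labels
      X += y[:, None] * 0.3                 # give classes some separation

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                random_state=0)
      clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0))
      clf.fit(X_tr, y_tr)
      print(clf.score(X_te, y_te))          # identification accuracy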

  20. Capturing 3D resistivity of semi-arid karstic subsurface in varying moisture conditions using a wireless sensor network

    NASA Astrophysics Data System (ADS)

    Barnhart, K.; Oden, C. P.

    2012-12-01

    The dissolution of soluble bedrock results in surface and subterranean karst channels, which comprise 7-10% of the Earth's land surface. Karst serves as a preferential conduit focusing surface and subsurface water, but it is difficult to exploit as a water resource or to protect from pollution because of its irregular structure and nonlinear hydrodynamic behavior. Geophysical characterization of karst commonly employs resistivity and seismic methods, but difficulties arise due to the low resistivity contrast in arid environments and insufficient resolution of complex heterogeneous structures. To help reduce these difficulties, we employ a state-of-the-art wireless geophysical sensor array, which combines low-power radio telemetry and solar energy harvesting to enable long-term in-situ monitoring. The wireless design removes the topological constraints common with standard wired resistivity equipment, which facilitates better coverage and/or sensor density to improve aspect ratio and resolution. Continuous in-situ deployment allows data to be recorded on nature's time scale; measurements are made during infrequent precipitation events, which can increase resistivity contrast. The array is coordinated by a smart wireless bridge that continuously monitors local soil moisture content to detect when precipitation occurs, schedules resistivity surveys, and periodically relays data to the cloud via 3G cellular service. Traditional 2/3D gravity and seismic reflection surveys have also been conducted to clarify and corroborate the results.

  1. Configuration of a sparse network of LIDAR sensors to identify security-relevant behavior of people

    NASA Astrophysics Data System (ADS)

    Wenzl, Konrad; Ruser, Heinrich; Kargel, Christian

    2009-09-01

    Surveillance is an important application of sensor networks. In this paper it is demonstrated how a sparse network of stationary infrared (IR) sensors with highly directional, stationary beam patterns based on the LIDAR principle can be used to reliably track persons. Due to the small number of sensors and their narrow beam patterns, a significant portion of the area to be surveilled is not directly covered by the sensors. To nonetheless achieve reliable tracking of moving targets across the entire monitored area, we employ the most appropriate sensor network configuration and propose a probabilistic tracking approach. The behavior of a person moving through the area of observation is classified as "normal" or "abnormal" depending upon the trajectory and motion dynamics. The classification is based on linear Kalman prediction.

  2. Relative Navigation Light Detection and Ranging (LIDAR) Sensor Development Test Objective (DTO) Performance Verification

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.

    2013-01-01

    The NASA Engineering and Safety Center (NESC) received a request from the NASA Associate Administrator (AA) for the Human Exploration and Operations Mission Directorate (HEOMD) to quantitatively evaluate the individual performance of three light detection and ranging (LIDAR) rendezvous sensors flown as an orbiter development test objective on Space Transportation System (STS) flights STS-127, STS-133, STS-134, and STS-135. This document contains the outcome of the NESC assessment.

  3. Doppler lidar sensor for precision navigation in GPS-deprived environment

    NASA Astrophysics Data System (ADS)

    Amzajerdian, F.; Pierrottet, D. F.; Hines, G. D.; Petway, L. B.; Barnes, B. W.

    2013-05-01

    Landing mission concepts that are being developed for exploration of solar system bodies are increasingly ambitious in their implementations and objectives. Most of these missions require accurate position and velocity data during their descent phase in order to ensure safe, soft landing at the pre-designated sites. Data from the vehicle's Inertial Measurement Unit will not be sufficient due to significant drift error after extended travel time in space. Therefore, an onboard sensor is required to provide the necessary data for landing in the GPS-deprived environment of space. For this reason, NASA Langley Research Center has been developing an advanced Doppler lidar sensor capable of providing accurate and reliable data suitable for operation in the highly constrained environment of space. The Doppler lidar transmits three laser beams in different directions toward the ground. The signal from each beam provides the platform velocity and range to the ground along the laser line-of-sight (LOS). The six LOS measurements are then combined in order to determine the three components of the vehicle velocity vector, and to accurately measure altitude and attitude angles relative to the local ground. These measurements are used by an autonomous Guidance, Navigation, and Control system to accurately navigate the vehicle from a few kilometers above the ground to the designated location and to execute a gentle touchdown. A prototype version of our lidar sensor has been completed for a closed-loop demonstration onboard a rocket-powered terrestrial free-flyer vehicle.
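
    The geometry of the velocity solution can be illustrated briefly: each beam's Doppler return gives the projection of the platform velocity onto that beam's unit pointing vector, so three well-separated beams yield a solvable 3x3 linear system. The sketch below uses assumed pointing angles and example numbers, not the flight sensor's actual geometry:

        import numpy as np

        def beam_unit_vector(azimuth_deg, elevation_deg):
            """Unit pointing vector for a beam looking down toward the ground."""
            az, el = np.radians([azimuth_deg, elevation_deg])
            return np.array([np.cos(el) * np.cos(az),
                             np.cos(el) * np.sin(az),
                             -np.sin(el)])

        # Three beams 120 degrees apart in azimuth, 70 degrees below horizontal
        U = np.vstack([beam_unit_vector(az, 70.0) for az in (0.0, 120.0, 240.0)])
        v_los = np.array([-12.3, -10.8, -11.5])   # example LOS velocities [m/s]
        velocity = np.linalg.solve(U, v_los)      # platform velocity [vx, vy, vz]

    The three range measurements complete the six LOS observables and, with the same beam geometry, give altitude and attitude relative to the local ground.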

  4. Doppler Lidar Sensor for Precision Navigation in GPS-Deprived Environment

    NASA Technical Reports Server (NTRS)

    Amzajerdian, F.; Pierrottet, D. F.; Hines, G. D.; Petway, L. B.; Barnes, B. W.

    2013-01-01

    Landing mission concepts that are being developed for exploration of solar system bodies are increasingly ambitious in their implementations and objectives. Most of these missions require accurate position and velocity data during their descent phase in order to ensure safe, soft landing at the pre-designated sites. Data from the vehicle's Inertial Measurement Unit will not be sufficient due to significant drift error after extended travel time in space. Therefore, an onboard sensor is required to provide the necessary data for landing in the GPS-deprived environment of space. For this reason, NASA Langley Research Center has been developing an advanced Doppler lidar sensor capable of providing accurate and reliable data suitable for operation in the highly constrained environment of space. The Doppler lidar transmits three laser beams in different directions toward the ground. The signal from each beam provides the platform velocity and range to the ground along the laser line-of-sight (LOS). The six LOS measurements are then combined in order to determine the three components of the vehicle velocity vector, and to accurately measure altitude and attitude angles relative to the local ground. These measurements are used by an autonomous Guidance, Navigation, and Control system to accurately navigate the vehicle from a few kilometers above the ground to the designated location and to execute a gentle touchdown. A prototype version of our lidar sensor has been completed for a closed-loop demonstration onboard a rocket-powered terrestrial free-flyer vehicle.

  5. Navigation Doppler Lidar Sensor for Precision Altitude and Vector Velocity Measurements Flight Test Results

    NASA Technical Reports Server (NTRS)

    Pierrottet, Diego F.; Lockhard, George; Amzajerdian, Farzin; Petway, Larry B.; Barnes, Bruce; Hines, Glenn D.

    2011-01-01

    An all fiber Navigation Doppler Lidar (NDL) system is under development at NASA Langley Research Center (LaRC) for precision descent and landing applications on planetary bodies. The sensor produces high resolution line of sight range, altitude above ground, ground relative attitude, and high precision velocity vector measurements. Previous helicopter flight test results demonstrated the NDL measurement concepts, including measurement precision, accuracies, and operational range. This paper discusses the results obtained from a recent campaign to test the improved sensor hardware, and various signal processing algorithms applicable to real-time processing. The NDL was mounted in an instrumentation pod aboard an Erickson Air-Crane helicopter and flown over vegetation free terrain. The sensor was one of several sensors tested in this field test by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project.

  6. Using Arduinos and 3D-printers to Build Research-grade Weather Stations and Environmental Sensors

    NASA Astrophysics Data System (ADS)

    Ham, J. M.

    2013-12-01

    Many plant, soil, and surface-boundary-layer processes in the geosphere are governed by the microclimate at the land-air interface. Environmental monitoring is needed at smaller scales and higher frequencies than provided by existing weather monitoring networks. The objective of this project was to design, prototype, and test a research-grade weather station that is based on open-source hardware/software and off-the-shelf components. The idea is that anyone could make these systems with only elementary skills in fabrication and electronics. The first prototypes included measurements of air temperature, humidity, pressure, global irradiance, wind speed, and wind direction. The best approach for measuring precipitation is still being investigated. The data acquisition system was designed around the Arduino microcontroller and included an LCD-based user interface, SD card data storage, and solar power. Sensors were sampled at 5 s intervals and means, standard deviations, and maximum/minimums were stored at user-defined intervals (5, 30, or 60 min). Several of the sensor components were printed in plastic using a hobby-grade 3D printer (e.g., RepRap Project). Both passive and aspirated radiation shields for measuring air temperature were printed in white Acrylonitrile Butadiene Styrene (ABS). A housing for measuring solar irradiance using a photodiode-based pyranometer was printed in opaque ABS. The prototype weather station was co-deployed with commercial research-grade instruments at an agriculture research unit near Fort Collins, Colorado, USA. Excellent agreement was found between the Arduino-based system and commercial weather instruments. The technology was also used to support air quality research and automated air sampling. The next step is to incorporate remote access and station-to-station networking using Wi-Fi, cellular phone, and radio communications (e.g., Xbee).
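
    The interval statistics the logger stores (means, standard deviations, maxima/minima over 5-60 min windows of 5 s samples) can be accumulated online without buffering every sample. The sketch below shows the idea in Python for readability; the actual firmware is Arduino C/C++ and is not reproduced here:

        import math

        class IntervalStats:
            """Online mean/std/min/max via Welford's algorithm."""
            def __init__(self):
                self.n, self.mean, self.m2 = 0, 0.0, 0.0
                self.lo, self.hi = math.inf, -math.inf

            def add(self, x):            # called once per 5 s sample
                self.n += 1
                delta = x - self.mean
                self.mean += delta / self.n
                self.m2 += delta * (x - self.mean)
                self.lo, self.hi = min(self.lo, x), max(self.hi, x)

            def summary(self):           # called at each storage interval
                std = math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0
                return self.mean, std, self.lo, self.hi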

  7. 3D measurements of alpine skiing with an inertial sensor motion capture suit and GNSS RTK system.

    PubMed

    Supej, Matej

    2010-05-01

    To date, camcorders have been the device of choice for 3D kinematic measurement in human locomotion, in spite of their limitations. This study examines a novel system involving a GNSS RTK that returns a reference trajectory through the use of a suit, embedded with inertial sensors, to reveal subject segment motion. The aims were: (1) to validate the system's precision and (2) to measure an entire alpine ski race and retrieve the results shortly after measuring. For that purpose, four separate experiments were performed: (1) forced pendulum, (2) walking, (3) gate positions, and (4) skiing experiments. Segment movement validity was found to be dependent on the frequency of motion, with high accuracy (0.8 degrees, s = 0.6 degrees) for 10 s, which equals approximately 10 slalom turns, while accuracy decreased slightly (2.1 degrees, 3.3 degrees, and 4.2 degrees for 0.5, 1, and 2 Hz oscillations, respectively) during 35 s of data collection. The motion capture suit's orientation inaccuracy was mostly due to geomagnetic secular variation. The system exhibited high validity regarding the reference trajectory (0.008 m, s = 0.0044 m) throughout an entire ski race. The system is capable of measuring an entire ski course with less manpower and therefore lower cost compared with camcorder-based techniques.

  8. Sensor fusion of 2D and 3D data for the processing of images of dental imprints

    NASA Astrophysics Data System (ADS)

    Methot, Jean-Francois; Mokhtari, Marielle; Laurendeau, Denis; Poussart, Denis

    1993-08-01

    This paper presents a computer vision system for the acquisition and processing of 3-D images of wax dental imprints. The ultimate goal of the system is to measure a set of 10 orthodontic parameters that will be fed to an expert system for automatic diagnosis of occlusion problems. An approach for the acquisition of range images of both sides of the imprint is presented. Range is obtained from a shape-from-absorption technique applied to a pair of grey-level images obtained at two different wavelengths. The accuracy of the range values is improved using sensor fusion between the initial range image and a reflectance image from the pair of grey-level images. The improved range image is segmented in order to find the interstices between teeth and, following further processing, the type of each tooth on the profile. Once each tooth has been identified, its accurate location on the imprint is found using a region-growing approach and its shape is reconstructed with third degree polynomial functions. The reconstructed shape will be later used by the system to find specific features that are needed to estimate the orthodontic parameters.

  9. 3D Radiative Transfer Effects in Multi-Angle/Multi-Spectral Radio-Polarimetric Signals from a Mixture of Clouds and Aerosols Viewed by a Non-Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Davis, Anthony B.; Garay, Michael J.; Xu, Feng; Qu, Zheng; Emde, Claudia

    2013-01-01

    When observing a spatially complex mix of aerosols and clouds in a single relatively large field-of-view, nature entangles their signals non-linearly through polarized radiation transport processes that unfold in the 3D position and direction spaces. In contrast, any practical forward model in a retrieval algorithm will use only 1D vector radiative transfer (vRT) in a linear mixing technique. We assess the difference between the observed and predicted signals using synthetic data from a high-fidelity 3D vRT model with clouds generated using a Large Eddy Simulation model and an aerosol climatology. We find that this difference is signal, not noise, for the Aerosol Polarimetry Sensor (APS), an instrument developed by NASA. Moreover, the worst case scenario is also the most interesting case, namely, when the aerosol burden is large and hence has the most impact on the cloud microphysics and dynamics. Based on our findings, we formulate a mitigation strategy for these unresolved cloud adjacency effects assuming that some spatial information is available about the structure of the clouds at higher resolution from "context" cameras, as was planned for NASA's ill-fated Glory mission that was to carry the APS but failed to reach orbit. Application to POLDER (POLarization and Directionality of Earth Reflectances) data from the period when PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) was in the A-train is briefly discussed.

  10. A fluorescence LIDAR sensor for hyper-spectral time-resolved remote sensing and mapping.

    PubMed

    Palombi, Lorenzo; Alderighi, Daniele; Cecchi, Giovanna; Raimondi, Valentina; Toci, Guido; Lognoli, David

    2013-06-17

    In this work we present a LIDAR sensor devised for the acquisition of time resolved laser induced fluorescence spectra. The gating time for the acquisition of the fluorescence spectra can be sequentially delayed in order to achieve fluorescence data that are resolved both in the spectral and temporal domains. The sensor can provide sub-nanometric spectral resolution and nanosecond time resolution. The sensor also has imaging capabilities by means of a computer-controlled motorized steering mirror featuring biaxial angular scanning with 200 μrad angular resolution. The measurement can be repeated for each point of a geometric grid in order to collect a hyper-spectral time-resolved map of an extended target.

  11. On non-invasive 2D and 3D Chromatic White Light image sensors for age determination of latent fingerprints.

    PubMed

    Merkel, Ronny; Gruhn, Stefan; Dittmann, Jana; Vielhauer, Claus; Bräutigam, Anja

    2012-10-10

    The feasibility of 2D-intensity and 3D-topography images from a non-invasive Chromatic White Light (CWL) sensor for the age determination of latent fingerprints is investigated. The proposed method might provide the means to solve the so far unresolved issue of determining a fingerprint's age in forensics. Conducting numerous experiments for an indoor crime scene using selected surfaces, different influences on the aging of fingerprints are investigated and the resulting aging variability is determined in terms of inter-person, intra-person, inter-finger and intra-finger variation. Main influence factors are shown to be the sweat composition, temperature, humidity, wind, UV-radiation, surface type, contamination of the finger with water-containing substances, resolution and measured area size, whereas contact time, contact pressure and smearing of the print seem to be of minor importance. Such influences lead to a certain experimental variability in inter-person and intra-person variation, which is higher than the inter-finger and intra-finger variation. Comparing the aging behavior of 17 different features using 1490 time series with a total of 41,520 fingerprint images, the great potential of the CWL technique in combination with the binary pixel feature from prior work is shown. Performing three different experiments for the classification of fingerprints into the two time classes [0, 5 h] and [5, 24 h], a maximum classification performance of 79.29% (kappa=0.46) is achieved for a general case, which is further improved for special cases. The statistical significance of the two best-performing features (both binary pixel versions based on 2D-intensity images) is manually shown and a feature fusion is performed, highlighting the strong dependency of the features on each other. It is concluded that such a method might be combined with additional capturing devices, such as microscopes or spectroscopes, into a very promising age estimation scheme.

  12. Doppler Lidar Sensor for Precision Landing on the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Petway, Larry; Hines, Glenn; Barnes, Bruce; Pierrottet, Diego; Lockhard, George

    2012-01-01

    Landing mission concepts that are being developed for exploration of planetary bodies are increasingly ambitious in their implementations and objectives. Most of these missions require accurate position and velocity data during their descent phase in order to ensure safe soft landing at the pre-designated sites. To address this need, a Doppler lidar is being developed by NASA under the Autonomous Landing and Hazard Avoidance (ALHAT) project. This lidar sensor is a versatile instrument capable of providing precision velocity vectors, vehicle ground relative altitude, and attitude. The capabilities of this advanced technology have been demonstrated through two helicopter flight test campaigns conducted over a vegetation-free terrain in 2008 and 2010. Presently, a prototype version of this sensor is being assembled for integration into a rocket-powered terrestrial free-flyer vehicle. Operating in a closed loop with the vehicle's guidance and navigation system, the viability of this advanced sensor for future landing missions will be demonstrated through a series of flight tests in 2012.

  13. NASA/LMSC coherent LIDAR airborne shear sensor: System capabilities and flight test plans

    NASA Technical Reports Server (NTRS)

    Robinson, Paul

    1992-01-01

    The primary objective of the NASA/LMSC Coherent Lidar Airborne Shear Sensor (CLASS) system flight tests is to evaluate the capability of an airborne coherent lidar system to detect, measure, and predict hazardous wind shear ahead of the aircraft with a view to warning flight crew of any impending dangers. On NASA's Boeing 737 Transport Systems Research Vehicle, the CLASS system will be used to measure wind velocity fields and, by incorporating such measurements with real-time aircraft state parameters, identify regions of wind shear that may be detrimental to the aircraft's performance. Assessment is to be made through actual wind shear encounters in flight. Wind shear measurements made by the CLASS system will be compared to those made by the aircraft's in situ wind shear detection system as well as by ground-based Terminal Doppler Weather Radar (TDWR) and airborne Doppler radar. By comparing the aircraft performance loss (or gain) due to wind shear that the lidar predicts with that actually experienced by the aircraft, the performance of the CLASS system as a predictive wind shear detector will be assessed.

  14. Analytical and simulation results of a triple micro whispering gallery mode probe system for a 3D blood flow rate sensor.

    PubMed

    Phatharacorn, Prateep; Chiangga, Surasak; Yupapin, Preecha

    2016-11-20

    The whispering gallery mode (WGM) is generated by light propagating within a nonlinear micro-ring resonator, which is modeled and fabricated using an InGaAsP/InP material, and called a Panda ring resonator. An imaging probe can also be formed by the micro-conjugate mirror function under the appropriate Panda ring parameter control. The 3D WGM probe can be generated and used as a 3D sensor head and imaging probe. The analytical details and simulation results are given, in which the simulation results are obtained using the MATLAB and Optiwave programs. From the obtained results, such a design can be configured as a thin-film sensor system that contacts the sample surface for the required measurements. The outputs of the system are in the form of a WGM beam, in which the 3D WGM probe is also available with the micro-conjugate mirror function. Such a 3D probe can penetrate into the blood vessel and its contents, from which the time delay among the probes can be detected and measured; finally, the blood flow rate can be calculated and a 3D image of the blood content can be seen and used for medical diagnosis. The test results have shown that a blood flow rate of 0.72-1.11 μs-1, with a blood density of 1060 kg m-3, can be obtained.

  15. Overview of the first Multicenter Airborne Coherent Atmospheric Wind Sensor (MACAWS) experiment: conversion of a ground-based lidar for airborne applications

    NASA Astrophysics Data System (ADS)

    Howell, James N.; Hardesty, R. Michael; Rothermel, Jeffrey; Menzies, Robert T.

    1996-11-01

    The first Multicenter Airborne Coherent Atmospheric Wind Sensor (MACAWS) field experiment demonstrated an airborne high energy TEA CO2 Doppler lidar system for measurement of atmospheric wind fields and aerosol structure. The system was deployed on the NASA DC-8 during September 1995 in a series of checkout flights to observe several important atmospheric phenomena, including upper level winds in a Pacific hurricane, marine boundary layer winds, cirrus cloud properties, and land-sea breeze structure. The instrument, with its capability to measure 3D winds and backscatter fields, promises to be a valuable tool for climate and global change, severe weather, and air quality research. In this paper, we describe the airborne instrument, assess its performance, discuss future improvements, and show some preliminary results from the September experiments.

  16. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information on the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor-advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm, at a nadir scan orientation, to 8 cm at scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of the error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals falling below error predictions. Future
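
    As a toy illustration of why modeled errors grow toward the swath edges, consider first-order propagation of ranging and scan-angle uncertainty into vertical error over flat terrain, where the vertical coordinate is z = R cos(theta) for slant range R and scan angle theta. This simplified model omits the GPS/IMU terms of the paper's full sub-system budget, and all numbers below are assumptions:

        import numpy as np

        def vertical_sigma(altitude_m, scan_angle_deg,
                           sigma_range_m=0.03, sigma_angle_rad=1e-4):
            theta = np.radians(scan_angle_deg)
            slant_range = altitude_m / np.cos(theta)     # flat-terrain geometry
            dz_dR = np.cos(theta)                        # range-error sensitivity
            dz_dtheta = slant_range * np.sin(theta)      # angle-error sensitivity
            return np.hypot(dz_dR * sigma_range_m, dz_dtheta * sigma_angle_rad)

        print(vertical_sigma(1200.0, 0.0))    # nadir
        print(vertical_sigma(1200.0, 15.0))   # swath edge: larger error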

  17. Temporal-spatial reach parameters derived from inertial sensors: Comparison to 3D marker-based motion capture.

    PubMed

    Cahill-Rowley, Katelyn; Rose, Jessica

    2017-02-08

    Reaching is a well-practiced functional task crucial to daily living activities, and temporal-spatial measures of reaching reflect function for both adult and pediatric populations with upper-extremity motor impairments. Inertial sensors offer a mobile and inexpensive tool for clinical assessment of movement. This research outlines a method for measuring temporal-spatial reach parameters using inertial sensors, and validates these measures with traditional marker-based motion capture. 140 reaches from 10 adults, and 30 reaches from nine children aged 18-20 months, were recorded and analyzed using both inertial-sensor and motion-capture methods. Inertial sensors contained three-axis accelerometers, gyroscopes, and magnetometers. Gravitational offset of the accelerometer data was measured when the sensor was at rest, and removed using the sensor orientation measured at rest and throughout the reach. Velocity was calculated by numeric integration of acceleration, using a null-velocity assumption at reach start. Sensor drift was neglected given the 1-2 s required for a reach. Temporal-spatial reach parameters were calculated independently for each data acquisition method. Reach path length and distance, peak velocity magnitude and timing, and acceleration at contact demonstrated consistent agreement between sensor- and motion-capture-based methods, for both adult and toddler reaches, as evaluated by intraclass correlation coefficients from 0.61 to 1.00. Taken together with the actual differences between the methods' measures, the results indicate that these functional reach parameters may be reliably measured with inertial sensors.
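
    The core of the sensor-based pipeline (orientation-based gravity removal followed by numeric integration with a null initial velocity) can be sketched as follows; array shapes and names are illustrative, not the authors' code:

        import numpy as np

        GRAVITY = np.array([0.0, 0.0, 9.81])     # world frame [m/s^2]

        def reach_velocity(accel_body, rotations, dt):
            """accel_body: (n, 3) accelerometer samples [m/s^2];
            rotations: (n, 3, 3) body-to-world matrices from the
            orientation estimate; dt: sample period [s]."""
            # Rotate each sample into the world frame, then remove gravity
            accel_world = np.einsum('nij,nj->ni', rotations, accel_body) - GRAVITY
            # Integrate with velocity assumed zero at reach start; drift is
            # negligible over the 1-2 s of a reach, as noted above
            return np.cumsum(accel_world, axis=0) * dt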

  18. A heterogeneous sensor network simulation system with integrated terrain data for real-time target detection in 3D space

    NASA Astrophysics Data System (ADS)

    Lin, Hong; Tanner, Steve; Rushing, John; Graves, Sara; Criswell, Evans

    2008-03-01

    Large scale sensor networks composed of many low-cost small sensors networked together with a small number of high fidelity position sensors can provide a robust, fast and accurate air defense and warning system. The team has been developing simulations of such large networks, and is now adding terrain data in an effort to provide more realistic analysis of the approach. In this work, a heterogeneous sensor network simulation system with integrated terrain data for real-time target detection in a three-dimensional environment is presented. The sensor network can be composed of large numbers of low fidelity binary and bearing-only sensors, and small numbers of high fidelity position sensors, such as radars. The binary and bearing-only sensors are randomly distributed over a large geographic region, while the position sensors are distributed evenly. The elevations of the sensors are determined through the use of the DTED Level 0 dataset. The targets are located by fusing measurement information from all types of sensors modeled by the simulation. The network simulation utilizes the same search-based optimization algorithm as our previous two-dimensional sensor network simulation, with some significant modifications. The fusion algorithm is parallelized using a spatial decomposition approach: the entire surveillance area is divided into small regions and each region is assigned to one compute node. Each node processes sensor measurements and terrain data only for the assigned sub-region. A master process combines the information from all the compute nodes to get the overall network state. The simulation results have indicated that the distributed fusion algorithm is efficient enough that an optimal solution can be reached before the arrival of the next sensor data within a reasonable time interval, and real-time target detection can be achieved. The simulation was performed on a Linux cluster with communication between nodes facilitated by the Message Passing Interface
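
    A minimal sketch of the spatial-decomposition pattern (the area split into strips, each rank fusing only its sub-region, a master combining the partial states) is shown below with mpi4py and synthetic data; the stand-in centroid "fusion" is a placeholder for the paper's search-based optimization step:

        from mpi4py import MPI
        import numpy as np

        AREA_WIDTH = 10000.0                     # assumed area extent [m]
        comm = MPI.COMM_WORLD
        rank, n_ranks = comm.Get_rank(), comm.Get_size()

        rng = np.random.default_rng(rank)
        measurements = rng.uniform(0, AREA_WIDTH, size=(100, 2))  # synthetic (x, y)

        strip = AREA_WIDTH / n_ranks             # one x-strip per compute node
        mine = measurements[measurements[:, 0] // strip == rank]
        local_state = mine.mean(axis=0) if len(mine) else None    # stand-in fusion

        states = comm.gather(local_state, root=0)  # master combines sub-regions
        if rank == 0:
            print([s for s in states if s is not None])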

  19. A Distributed Fiber Optic Sensor Network for Online 3-D Temperature and Neutron Fluence Mapping in a VHTR Environment

    SciTech Connect

    Tsvetkov, Pavel; Dickerson, Bryan; French, Joseph; McEachern, Donald; Ougouag, Abderrafi

    2014-04-30

    Robust sensing technologies allowing for 3D in-core performance monitoring in real time are of paramount importance for established LWRs, enhancing their reliability and yearly availability and thereby further facilitating their economic competitiveness via predictive assessment of in-core conditions.

  20. Optimizing embedded sensor network design for catchment-scale snow-depth estimation using LiDAR and machine learning

    NASA Astrophysics Data System (ADS)

    Oroza, Carlos A.; Zheng, Zeshi; Glaser, Steven D.; Tuia, Devis; Bales, Roger C.

    2016-10-01

    We evaluate the accuracy of a machine-learning algorithm that uses LiDAR data to optimize ground-based sensor placements for catchment-scale snow measurements. Sampling locations that best represent catchment physiographic variables are identified with the Expectation Maximization algorithm for a Gaussian mixture model. A Gaussian process is then used to model the snow depth in a 1 km2 area surrounding the network, and additional sensors are placed to minimize the model uncertainty. The aim of the study is to determine the distribution of sensors that minimizes the bias and RMSE of the model. We compare the accuracy of the snow-depth model using the proposed placements to an existing sensor network at the Southern Sierra Critical Zone Observatory. Each model is validated with a 1 m2 LiDAR-derived snow-depth raster from 14 March 2010. The proposed algorithm exhibits higher accuracy with fewer sensors (8 sensors, RMSE 38.3 cm, bias = 3.49 cm) than the existing network (23 sensors, RMSE 53.0 cm, bias = 15.5 cm) and randomized placements (8 sensors, RMSE 63.7 cm, bias = 24.7 cm). We then evaluate the spatial and temporal transferability of the method using 14 LiDAR scenes from two catchments within the JPL Airborne Snow Observatory. In each region, the optimized sensor placements are determined using the first available snow raster for the year. The accuracy in the remaining LiDAR surveys is then compared to 100 configurations of sensors selected at random. We find the error statistics (bias and RMSE) to be more consistent across the additional surveys than the average random configuration.
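
    The two-stage idea (representative placement via a Gaussian mixture fit by Expectation Maximization, then variance-driven augmentation via a Gaussian process) can be sketched on synthetic physiographic features; the kernel, feature set and component count below are assumptions, not the authors' configuration:

        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(2000, 3))    # candidate sites x physiographic features
        snow = X @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.standard_normal(2000)

        # Stage 1: EM for a Gaussian mixture; sensors go to the sites
        # closest to the component means
        gmm = GaussianMixture(n_components=5, random_state=0).fit(X)
        sensor_idx = [int(np.argmin(np.linalg.norm(X - m, axis=1)))
                      for m in gmm.means_]

        # Stage 2: fit a GP to the sensed sites and place the next sensor
        # where the predictive uncertainty is largest
        gp = GaussianProcessRegressor(kernel=RBF(0.5)).fit(X[sensor_idx],
                                                           snow[sensor_idx])
        _, std = gp.predict(X, return_std=True)
        next_sensor = int(np.argmax(std))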

  1. Tropospheric Airborne Meteorological Data Reporting (TAMDAR) Sensor Validation and Verification on National Oceanographic and Atmospheric Administration (NOAA) Lockheed WP-3D Aircraft

    NASA Technical Reports Server (NTRS)

    Tsoucalas, George; Daniels, Taumi S.; Zysko, Jan; Anderson, Mark V.; Mulally, Daniel J.

    2010-01-01

    As part of the National Aeronautics and Space Administration's Aviation Safety and Security Program, the Tropospheric Airborne Meteorological Data Reporting project (TAMDAR) developed a low-cost sensor for aircraft flying in the lower troposphere. This activity was a joint effort with support from Federal Aviation Administration, National Oceanic and Atmospheric Administration, and industry. This paper reports the TAMDAR sensor performance validation and verification, as flown on board NOAA Lockheed WP-3D aircraft. These flight tests were conducted to assess the performance of the TAMDAR sensor for measurements of temperature, relative humidity, and wind parameters. The ultimate goal was to develop a small low-cost sensor, collect useful meteorological data, downlink the data in near real time, and use the data to improve weather forecasts. The envisioned system will initially be used on regional and package carrier aircraft. The ultimate users of the data are National Centers for Environmental Prediction forecast modelers. Other users include air traffic controllers, flight service stations, and airline weather centers. NASA worked with an industry partner to develop the sensor. Prototype sensors were subjected to numerous tests in ground and flight facilities. As a result of these earlier tests, many design improvements were made to the sensor. The results of tests on a final version of the sensor are the subject of this report. The sensor is capable of measuring temperature, relative humidity, pressure, and icing. It can compute pressure altitude, indicated air speed, true air speed, ice presence, wind speed and direction, and eddy dissipation rate. Summary results from the flight test are presented along with corroborative data from aircraft instruments.

  2. Automatic construction of 3D basic-semantic models of inhabited interiors using laser scanners and RFID sensors.

    PubMed

    Valero, Enrique; Adan, Antonio; Cerrada, Carlos

    2012-01-01

    This paper is focused on the automatic construction of 3D basic-semantic models of inhabited interiors using laser scanners with the help of RFID technologies. This is an innovative approach in a field where few publications exist. The general strategy consists of carrying out a selective and sequential segmentation of the point cloud by means of different algorithms which depend on the information that the RFID tags provide. The identification of basic elements of the scene, such as walls, floor, ceiling, windows, doors, tables, chairs and cabinets, and the positioning of their corresponding models can then be calculated. The fusion of both technologies thus allows a simplified 3D semantic indoor model to be obtained. This method has been tested in real scenes under difficult clutter and occlusion conditions, and has yielded promising results.

  3. Automatic Construction of 3D Basic-Semantic Models of Inhabited Interiors Using Laser Scanners and RFID Sensors

    PubMed Central

    Valero, Enrique; Adan, Antonio; Cerrada, Carlos

    2012-01-01

    This paper is focused on the automatic construction of 3D basic-semantic models of inhabited interiors using laser scanners with the help of RFID technologies. This is an innovative approach in a field where few publications exist. The general strategy consists of carrying out a selective and sequential segmentation of the point cloud by means of different algorithms which depend on the information that the RFID tags provide. The identification of basic elements of the scene, such as walls, floor, ceiling, windows, doors, tables, chairs and cabinets, and the positioning of their corresponding models can then be calculated. The fusion of both technologies thus allows a simplified 3D semantic indoor model to be obtained. This method has been tested in real scenes under difficult clutter and occlusion conditions, and has yielded promising results. PMID:22778609

  4. A Microfluidic DNA Sensor Based on Three-Dimensional (3D) Hierarchical MoS2/Carbon Nanotube Nanocomposites

    PubMed Central

    Yang, Dahou; Tayebi, Mahnoush; Huang, Yinxi; Yang, Hui Ying; Ai, Ye

    2016-01-01

    In this work, we present a novel microfluidic biosensor for sensitive fluorescence detection of DNA based on 3D architectural MoS2/multi-walled carbon nanotube (MWCNT) nanocomposites. The proposed platform exhibits high sensitivity, selectivity, and stability, together with visible readout and operational simplicity. The excellent fluorescence quenching stability of a MoS2/MWCNT aqueous solution coupled with microfluidics will greatly simplify experimental steps and reduce time for large-scale DNA detection. PMID:27854247

  5. Lidar Sensor Performance in Closed-Loop Flight Testing of the Morpheus Rocket-Propelled Lander to a Lunar-Like Hazard Field

    NASA Technical Reports Server (NTRS)

    Roback, V. Eric; Pierrottet, Diego F.; Amzajerdian, Farzin; Barnes, Bruce W.; Bulyshev, Alexander E.; Hines, Glenn D.; Petway, Larry B.; Brewster, Paul F.; Kempton, Kevin S.

    2015-01-01

    For the first time, a suite of three lidar sensors has been used in flight to scan a lunar-like hazard field, identify a safe landing site, and, in concert with an experimental Guidance, Navigation, and Control (GN&C) system, help to guide the Morpheus autonomous, rocket-propelled, free-flying lander to that safe site on the hazard field. The lidar sensors and GN&C system are part of the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project, which has been seeking to develop a system capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The 3-D imaging Flash Lidar is a second generation, compact, real-time, air-cooled instrument developed from a number of components from industry and NASA, and is used as part of the ALHAT Hazard Detection System (HDS) to scan the hazard field and build a 3-D Digital Elevation Map (DEM) in near-real time for identifying safe sites. The Flash Lidar is capable of identifying a 30 cm hazard from a slant range of 1 km with its 8 cm range precision (1-σ). The Flash Lidar is also used in Hazard Relative Navigation (HRN) to provide position updates down to a 250 m slant range to the ALHAT navigation filter as it guides Morpheus to the safe site. The Navigation Doppler Lidar (NDL) system has been developed within NASA to provide velocity measurements with an accuracy of 0.2 cm/sec and range measurements with an accuracy of 17 cm, both from a maximum range of 2,200 m to a minimum range of several meters above the ground. The NDL's measurements are fed into the ALHAT navigation filter to provide lander guidance to the safe site. The Laser Altimeter (LA), also developed within NASA, provides range measurements with an accuracy of 5 cm from a maximum operational range of 30 km down to 1 m and, being a separate sensor from the Flash Lidar, can provide range along a separate vector. The LA measurements are also fed

  6. Effect of degree of crosslinking and polymerization of 3D printable polymer/ionic liquid composites on performance of stretchable piezoresistive sensors

    NASA Astrophysics Data System (ADS)

    Lee, Jeongwoo; Faruk Emon, Md Omar; Vatani, Morteza; Choi, Jae-Won

    2017-03-01

    Ionic liquid (IL)/polymer composites (1-ethyl-3-methyl-imidazolium tetrafluoroborate (EMIMBF4)/2-[[(butylamino)carbonyl]oxy]ethyl acrylate (BACOEA)) were fabricated for use as sensing materials for stretchable piezoresistive tactile sensors. The detectability of the IL/polymer composites was enhanced because the ionic transport properties of EMIMBF4 in the composites were improved by the synergistic action of the coordination sites generated by the local motion of BACOEA chain segments under sufficient activation energy. The performance of the piezoresistive sensors was investigated as a function of the degree of crosslinking and polymerization of the IL/polymer composites. As the compressive strain was increased, the distance between the two electrodes decreased and the motion of polymer chains and IL occurred, resulting in a decrease in the electrical resistance of the sensors. We have confirmed that the sensitivity of the sensors is affected by the degree of crosslinking and polymerization of the IL/polymer composites. In addition, all of the materials (skins, sensing material, and electrode) used in this study are photo-curable, and thus the stretchable piezoresistive tactile sensors can be successfully fabricated by 3D printing.

  7. A 3D CFD Simulation and Analysis of Flow-Induced Forces on Polymer Piezoelectric Sensors in a Chinese Liquors Identification E-Nose

    PubMed Central

    Gu, Yu; Wang, Yang-Fu; Li, Qiang; Liu, Zu-Wu

    2016-01-01

    Chinese liquors can be classified according to their flavor types. Accurate identification of Chinese liquor flavors is not always possible through professional sommeliers’ subjective assessment. A novel polymer piezoelectric sensor electronic nose (e-nose) can be applied to distinguish Chinese liquors because of its excellent ability to imitate human senses by using sensor arrays and pattern recognition systems. The sensor, based on the quartz crystal microbalance (QCM) principle, comprises a quartz piezoelectric crystal plate sandwiched between two specific gas-sensitive polymer coatings. Chinese liquors are identified by obtaining the resonance frequency value changes of each sensor using the e-nose. However, the QCM principle failed to completely account for a particular phenomenon: we found that the resonance frequency values fluctuated in the stable state. To better understand the phenomenon, a 3D Computational Fluid Dynamics (CFD) simulation using the finite volume method is employed to study the influence of the flow-induced forces on the resonance frequency fluctuation of each sensor in the sensor box. A dedicated procedure was developed for modeling the flow of volatile gas from Chinese liquors in a realistic scenario to give reasonably good results with fair accuracy. The flow-induced forces on the sensors are displayed from the perspective of their spatial-temporal and probability density distributions. To evaluate the influence of the fluctuation of the flow-induced forces on each sensor and ensure the serviceability of the e-nose, the standard deviation of the resonance frequency value (SDF) and the standard deviation of the resultant forces (SDFy) in the y-direction (Fy) are compared. Results show that the fluctuations of Fy are bound up with the resonance frequency value fluctuations. To ensure that the sensor's resonance frequency values are steady and only fluctuate slightly, in order to improve the identification accuracy of Chinese liquors using

  8. A 3D CFD Simulation and Analysis of Flow-Induced Forces on Polymer Piezoelectric Sensors in a Chinese Liquors Identification E-Nose.

    PubMed

    Gu, Yu; Wang, Yang-Fu; Li, Qiang; Liu, Zu-Wu

    2016-10-20

    Chinese liquors can be classified according to their flavor types. Accurate identification of Chinese liquor flavors is not always possible through professional sommeliers' subjective assessment. A novel polymer piezoelectric sensor electronic nose (e-nose) can be applied to distinguish Chinese liquors because of its excellent ability to imitate human senses by using sensor arrays and pattern recognition systems. The sensor, based on the quartz crystal microbalance (QCM) principle, comprises a quartz piezoelectric crystal plate sandwiched between two specific gas-sensitive polymer coatings. Chinese liquors are identified by obtaining the resonance frequency value changes of each sensor using the e-nose. However, the QCM principle failed to completely account for a particular phenomenon: we found that the resonance frequency values fluctuated in the stable state. To better understand the phenomenon, a 3D Computational Fluid Dynamics (CFD) simulation using the finite volume method is employed to study the influence of the flow-induced forces on the resonance frequency fluctuation of each sensor in the sensor box. A dedicated procedure was developed for modeling the flow of volatile gas from Chinese liquors in a realistic scenario to give reasonably good results with fair accuracy. The flow-induced forces on the sensors are displayed from the perspective of their spatial-temporal and probability density distributions. To evaluate the influence of the fluctuation of the flow-induced forces on each sensor and ensure the serviceability of the e-nose, the standard deviation of the resonance frequency value (SDF) and the standard deviation of the resultant forces (SDFy) in the y-direction (Fy) are compared. Results show that the fluctuations of Fy are bound up with the resonance frequency value fluctuations. To ensure that the sensor's resonance frequency values are steady and only fluctuate slightly, in order to improve the identification accuracy of Chinese liquors using

  9. Filter algorithm for airborne LIDAR data

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ma, Hongchao; Wu, Jianwei; Tian, Liqiao; Qiu, Feng

    2007-11-01

    Airborne laser scanning data has become an accepted data source for highly automated acquisition of digital surface models (DSM) as well as for the generation of digital terrain models (DTM). To generate a high quality DTM using LIDAR data, 3D off-terrain points have to be separated from terrain points. Even though most LIDAR systems can measure "last-return" data points, these "last-return" points often measure ground clutter like shrubbery, cars, buildings, and the canopy of dense foliage. Consequently, raw LIDAR points must be post-processed to remove these undesirable returns. The degree to which this post-processing is successful is critical in determining whether LIDAR is cost effective for large-scale mapping applications. Various techniques have been proposed to extract the ground surface from airborne LIDAR data. The basic problem is the separation of terrain points from off-terrain points, which are both recorded by the LIDAR sensor. In this paper a new method, a combination of morphological filtering and TIN densification, is proposed to separate 3D off-terrain points.
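
    A minimal morphological ground filter in this spirit rasterizes the lowest return per cell, applies a grey-scale opening to suppress off-terrain blobs, and flags points rising well above the opened surface. The window size and threshold below are illustrative assumptions, and the TIN densification stage is not reproduced:

        import numpy as np
        from scipy import ndimage

        def classify_ground(z_raster, window=15, threshold=0.5):
            """z_raster: 2-D grid of lowest-return elevations per cell [m].
            Returns a boolean mask, True where the cell is likely terrain."""
            opened = ndimage.grey_opening(z_raster, size=(window, window))
            return (z_raster - opened) < threshold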

  10. Identifying High-Traffic Patterns in the Workplace with Radio Tomographic Imaging in 3D Wireless Sensor Networks

    DTIC Science & Technology

    2014-03-27

    The sensor network used in this research employs a token ring protocol, where each receiver reports respective RSS values to a base station.

  11. Neutron measurements with ultra-thin 3D silicon sensors in a radiotherapy treatment room using a Siemens PRIMUS linac.

    PubMed

    Guardiola, C; Gómez, F; Fleta, C; Rodríguez, J; Quirion, D; Pellegrini, G; Lousa, A; Martínez-de-Olcoz, L; Pombar, M; Lozano, M

    2013-05-21

    The accurate detection and dosimetry of neutrons in mixed and pulsed radiation fields is a demanding instrumental issue of great interest to both the industrial and medical communities. In recent studies of neutron contamination around medical linacs, there is a growing concern about the secondary cancer risk for radiotherapy patients undergoing treatment in photon modalities at energies greater than 6 MV. In this work we present a promising alternative to standard detectors with an active method to measure neutrons around a medical linac using a novel ultra-thin silicon detector with 3D electrodes adapted for neutron detection. The active volume of this planar device is only 10 µm thick, allowing a high gamma rejection, which is necessary to discriminate the neutron signal in the radiotherapy peripheral radiation field with its high gamma background. Different tests have been performed in a clinical facility using a Siemens PRIMUS linac at 6 and 15 MV. The results show a good thermal neutron detection efficiency of around 2% and a high gamma rejection factor.

  12. Neutron measurements with ultra-thin 3D silicon sensors in a radiotherapy treatment room using a Siemens PRIMUS linac

    NASA Astrophysics Data System (ADS)

    Guardiola, C.; Gómez, F.; Fleta, C.; Rodríguez, J.; Quirion, D.; Pellegrini, G.; Lousa, A.; Martínez-de-Olcoz, L.; Pombar, M.; Lozano, M.

    2013-05-01

    The accurate detection and dosimetry of neutrons in mixed and pulsed radiation fields is a demanding instrumental issue of great interest to both the industrial and medical communities. In recent studies of neutron contamination around medical linacs, there is a growing concern about the secondary cancer risk for radiotherapy patients undergoing treatment in photon modalities at energies greater than 6 MV. In this work we present a promising alternative to standard detectors with an active method to measure neutrons around a medical linac using a novel ultra-thin silicon detector with 3D electrodes adapted for neutron detection. The active volume of this planar device is only 10 µm thick, allowing a high gamma rejection, which is necessary to discriminate the neutron signal in the radiotherapy peripheral radiation field with its high gamma background. Different tests have been performed in a clinical facility using a Siemens PRIMUS linac at 6 and 15 MV. The results show a good thermal neutron detection efficiency of around 2% and a high gamma rejection factor.

  13. Facile synthesis of novel 3D nanoflower-like CuxO/multilayer graphene composites for room temperature NOx gas sensor application

    NASA Astrophysics Data System (ADS)

    Yang, Ying; Tian, Chungui; Wang, Jingchao; Sun, Li; Shi, Keying; Zhou, Wei; Fu, Honggang

    2014-06-01

    3D nanoflower-like CuxO/multilayer graphene composites (CuMGCs) have been successfully synthesized as a new type of room temperature NOx gas sensor. Firstly, the expanded graphite (EG) was activated by KOH and many moderate functional groups were generated; secondly, Cu(CH3COO)2 and CTAB underwent full infusion into the interlayers of activated EG (aEG) by means of a vacuum-assisted technique and then reacted with the functional groups of aEG, accompanied by the exfoliation of aEG via reflux. Eventually, 3D nanoflowers consisting of 5-9 nm CuxO nanoparticles grow homogeneously in situ on the aEG. The KOH activation of EG plays a key role in the uniform formation of the CuMGCs. When used as gas sensors for the detection of NOx, the CuMGCs achieved a higher response at room temperature than the corresponding CuxO. In detail, the CuMGCs show a high NOx gas sensing performance with a low detection limit of 97 ppb, a high gas response of 95.1% and a short response time of 9.6 s to 97.0 ppm NOx at room temperature. Meanwhile, the CuMGC sensor presents favorable linearity, good selectivity and stability. The enhancement of the sensing response is mainly attributed to the improved conductivity of the CuMGCs. A series of Mott-Schottky and EIS measurements demonstrated that the CuMGCs have much higher donor densities than CuxO and can easily capture and migrate electrons from the conduction band, resulting in the enhancement of electrical conductivity.

  14. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral Lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through a receiver of the sensor, and the return signal, together with the position and orientation information of the sensor, is recorded. These recorded data are combined with GNSS/IMU data in post-processing, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne Lidar sensor, the Optech Titan system is capable of collecting point cloud data in all three channels: 532 nm visible (green), 1064 nm near infrared (NIR) and 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral Lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories by using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types; over 90% overall accuracy is achieved using multispectral Lidar point clouds for 3D land cover classification.

  15. A comprehensive method for magnetic sensor calibration: a precise system for 3-D tracking of the tongue movements.

    PubMed

    Farajidavar, Aydin; Block, Jacob M; Ghovanloo, Maysam

    2012-01-01

    Magnetic localization has been used in a variety of applications, including the medical field. Small magnetic tracers are often modeled as dipoles and localization has been achieved by solving well-defined dipole equations. However, in practice, the precise calculation of the tracer location not only depends on solving the highly nonlinear dipole equations through numerical algorithms but also on the precision of the magnetic sensor, the accuracy of the tracer magnetization, and the earth's magnetic field (EMF) measurements. We have developed and implemented a comprehensive calibration method that addresses all of the aforementioned factors. We evaluated this method in a bench-top setting by moving the tracer along controlled trajectories. We also conducted several experiments to track the tongue movement in a human subject.
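
    The dipole model underlying this kind of localization is B(r) = (mu0/4pi) * (3 r_hat (m . r_hat) - m) / |r|^3, and once the sensors are calibrated and the earth field subtracted, the tracer position and moment can be recovered by nonlinear least squares. The sketch below is illustrative only; the sensor layout and noise handling are assumptions:

        import numpy as np
        from scipy.optimize import least_squares

        MU0_4PI = 1e-7   # mu_0 / (4*pi) in T*m/A

        def dipole_field(p, m, sensor_pos):
            """Dipole field at each sensor; p: tracer position (3,),
            m: moment (3,), sensor_pos: (n, 3) known sensor positions."""
            r = sensor_pos - p
            d = np.linalg.norm(r, axis=1, keepdims=True)
            r_hat = r / d
            m_dot_r = np.sum(m * r_hat, axis=1, keepdims=True)
            return MU0_4PI * (3 * r_hat * m_dot_r - m) / d**3

        def localize(b_measured, sensor_pos, x0):
            """x0 = [px, py, pz, mx, my, mz] initial guess; b_measured:
            (n, 3) calibrated readings with the earth field removed."""
            def residual(x):
                return (dipole_field(x[:3], x[3:], sensor_pos) - b_measured).ravel()
            return least_squares(residual, x0).x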

  16. Tooteko: a Case Study of Augmented Reality for AN Accessible Cultural Heritage. Digitization, 3d Printing and Sensors for AN Audio-Tactile Experience

    NASA Astrophysics Data System (ADS)

    D'Agnano, F.; Balletti, C.; Guerra, F.; Vernier, P.

    2015-02-01

    Tooteko is a smart ring that allows the user to navigate any 3D surface with their fingertips and get in return an audio content that is relevant to the part of the surface being touched at that moment. Tooteko can be applied to any tactile surface, object or sheet. However, in a more specific domain, it aims to make traditional art venues accessible to the blind, while providing support to the reading of the work for all through the recovery of the tactile dimension, in order to facilitate the experience of contact with art that is not only "under glass." The system is made of three elements: a high-tech ring, a tactile surface tagged with NFC sensors, and an app for tablet or smartphone. The ring detects and reads the NFC tags and, thanks to the Tooteko app, communicates in wireless mode with the smart device. During tactile navigation of the surface, when the finger reaches a hotspot, the ring identifies the NFC tag and activates, through the app, the audio track that is related to that specific hotspot. Thus, a relevant audio content is associated with each hotspot. The production process of the tactile surfaces involves scanning, digitization of data and 3D printing. The first experiment was modelled on the facade of the church of San Michele in Isola, made by Mauro Codussi in the late fifteenth century, which marks the beginning of the Renaissance in Venice. Due to the absence of recent documentation on the church, the Correr Museum asked the Laboratorio di Fotogrammetria to provide it, with the aim of setting up an exhibition about the order of the Camaldolesi, owners of the San Michele island and church. The Laboratorio carried out the survey of the facade through laser scanning and UAV photogrammetry. The point clouds were the starting point for prototyping and 3D printing on different supports. The idea of the integration between a 3D printed tactile surface and sensors was born as a final thesis project at the Postgraduate Mastercourse in Digital

  17. Turbulent CO2 Flux Measurements by Lidar: Length Scales, Results and Comparison with In-Situ Sensors

    NASA Technical Reports Server (NTRS)

    Gilbert, Fabien; Koch, Grady J.; Beyon, Jeffrey Y.; Hilton, Timothy W.; Davis, Kenneth J.; Andrews, Arlyn; Ismail, Syed; Singh, Upendra N.

    2009-01-01

    The vertical CO2 flux in the atmospheric boundary layer (ABL) is investigated with a Doppler differential absorption lidar (DIAL). The instrument was operated next to the WLEF instrumented tall tower in Park Falls, Wisconsin during three days and nights in June 2007. Profiles of turbulent CO2 mixing ratio and vertical velocity fluctuations are measured by in-situ sensors and Doppler DIAL. Time and space scales of turbulence are precisely defined in the ABL. The eddy-covariance method is applied to calculate turbulent CO2 flux both by lidar and in-situ sensors. We show preliminary mean lidar CO2 flux measurements in the ABL with a time and space resolution of 6 h and 1500 m respectively. The flux instrumental errors decrease linearly with the standard deviation of the CO2 data, as expected. Although turbulent fluctuations of CO2 are negligible with respect to the mean (0.1 %), we show that the eddy-covariance method can provide 2-h, 150-m range resolved CO2 flux estimates as long as the CO2 mixing ratio instrumental error is no greater than 10 ppm and the vertical velocity error is lower than the natural fluctuations over a time resolution of 10 s.
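
    The flux computation itself is the standard eddy-covariance estimate: the covariance of the fluctuations of vertical velocity and CO2 mixing ratio about their means over the averaging interval. A minimal version, with synthetic arrays standing in for the lidar or in-situ time series:

        import numpy as np

        def eddy_covariance_flux(w, c):
            """w: vertical velocity [m/s]; c: CO2 mixing ratio [ppm].
            Returns the kinematic flux <w'c'> over the averaging interval."""
            w_prime = w - np.mean(w)
            c_prime = c - np.mean(c)
            return np.mean(w_prime * c_prime)   # [ppm m/s]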

  18. Lidar-equipped UAV for building information modelling

    NASA Astrophysics Data System (ADS)

    Roca, D.; Armesto, J.; Lagüela, S.; Díaz-Vilariño, L.

    2014-06-01

    The trend toward miniaturizing electronic devices in recent decades applies to Unmanned Airborne Vehicles (UAVs) as well as to sensor technologies and imaging devices, and has driven a strong revolution in the surveying and mapping industries. However, only within the last few years has LIDAR sensor technology achieved a sufficient reduction in size and weight to be considered for UAV platforms. This paper presents an innovative solution to capture point cloud data from a Lidar-equipped UAV and further perform the 3D modelling of the whole envelope of buildings in BIM format. A mini-UAV platform is used (weighing less than 5 kg, with up to 1.5 kg of sensor payload), and data from two different acquisition methodologies are processed and compared with the aim of finding the optimal configuration for the generation of 3D models of buildings for energy studies.

  19. Rapid 2-axis scanning lidar prototype

    NASA Astrophysics Data System (ADS)

    Hartsell, Daryl; LaRocque, Paul E.; Tripp, Jeffrey

    2016-10-01

The rapid 2-axis scanning lidar prototype was developed to demonstrate high-precision single-pixel linear-mode lidar performance. The lidar system is a combined integration of components from various commercial products, allowing for future customization and performance enhancements. The intent of the prototype scanner is to demonstrate current state-of-the-art high-speed linear scanning technologies. The system consists of two pieces: the sensor head and the control unit. The sensor head can be installed up to 4 m from the control unit and houses the lidar scanning components and a small RGB camera. The control unit houses the power supplies and ranging electronics necessary to operate the electronics housed inside the sensor head. This paper will discuss the benefits of a 2-axis scanning linear-mode lidar system, such as range performance and a user-selectable FOV. Other features include real-time processing of 3D image frames consisting of up to 200,000 points per frame.

  20. High-Fidelity Flash Lidar Model Development

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Pierrottet, Diego F.; Amzajerdian, Farzin

    2014-01-01

NASA's Autonomous Landing and Hazard Avoidance Technologies (ALHAT) project is currently developing the critical technologies to safely and precisely navigate and land crew, cargo and robotic spacecraft vehicles on and around planetary bodies. One key element of this project is a high-fidelity Flash Lidar sensor that can generate three-dimensional (3-D) images of the planetary surface. These images are processed with hazard detection and avoidance and hazard-relative navigation algorithms, and are subsequently used by the Guidance, Navigation and Control subsystem to generate an optimal navigation solution. A complex, high-fidelity model of the Flash Lidar was developed in order to evaluate the performance of the sensor and its interaction with the interfacing ALHAT components on vehicles with different configurations and under different flight trajectories. The model contains a parameterized, general approach to Flash Lidar detection and reflects physical attributes such as range and electronic noise sources and laser pulse temporal and spatial profiles. It also provides the realistic interaction of the laser pulse with terrain features that include varying albedo, boulders, craters, slopes and shadows. This paper gives a description of the Flash Lidar model and presents results from the Lidar operating under different scenarios.
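
    To make the ingredients of such a model concrete, the sketch below simulates a single detector pixel: a Gaussian temporal pulse delayed by the round-trip time, an amplitude that falls off with range squared and scales with surface albedo, additive receiver noise, and range recovery from the peak sample. All parameter values are hypothetical stand-ins, not values from the ALHAT model:

```python
import numpy as np

C = 2.998e8   # speed of light, m/s
DT = 1e-9     # receiver sample interval, s (hypothetical)

def simulate_pixel(range_m, albedo, fwhm=4e-9, noise=0.02, rng=None):
    """Toy single-pixel flash-lidar return: a Gaussian pulse delayed by the
    round-trip time, amplitude ~ albedo / range^2, plus Gaussian noise."""
    rng = rng or np.random.default_rng()
    t = np.arange(16384) * DT                 # ~2.4 km unambiguous range
    t0 = 2.0 * range_m / C                    # round-trip delay
    sigma = fwhm / 2.355                      # FWHM -> standard deviation
    amp = albedo / range_m**2 * 1e6           # arbitrary link-budget scale
    pulse = amp * np.exp(-0.5 * ((t - t0) / sigma) ** 2)
    return t, pulse + rng.normal(0.0, noise, t.size)

def estimate_range(t, waveform):
    """Range from the peak sample of the digitized waveform."""
    return 0.5 * C * t[np.argmax(waveform)]

t, wf = simulate_pixel(range_m=750.0, albedo=0.3, rng=np.random.default_rng(1))
print(f"estimated range: {estimate_range(t, wf):.2f} m")
```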

  1. Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan

    2015-03-01

Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, such as that obtained from low-cost global positioning system and inertial measurement unit sensors.

  2. Lidar Systems for Precision Navigation and Safe Landing on Planetary Bodies

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Pierrottet, Diego F.; Petway, Larry B.; Hines, Glenn D.; Roback, Vincent E.

    2011-01-01

The ability of lidar technology to provide three-dimensional elevation maps of the terrain, high-precision distance to the ground, and approach velocity can enable safe landing of robotic and manned vehicles with a high degree of precision. Currently, NASA is developing novel lidar sensors aimed at the needs of future planetary landing missions. These lidar sensors are a 3-Dimensional Imaging Flash Lidar, a Doppler Lidar, and a Laser Altimeter. The Flash Lidar is capable of generating elevation maps of the terrain that indicate hazardous features such as rocks, craters, and steep slopes. The elevation maps collected during the approach phase of a landing vehicle, at about 1 km above the ground, can be used to determine the most suitable safe landing site. The Doppler Lidar provides highly accurate ground-relative velocity and distance data, allowing for precision navigation to the landing site. Our Doppler lidar utilizes three laser beams pointed in different directions to measure line-of-sight velocities and ranges to the ground from altitudes of over 2 km. Throughout the landing trajectory, starting at altitudes of about 20 km, the Laser Altimeter can provide very accurate ground-relative altitude measurements that are used to improve the vehicle position knowledge obtained from the vehicle navigation system. At altitudes from approximately 15 km to 10 km, either the Laser Altimeter or the Flash Lidar can be used to generate contour maps of the terrain, identifying known surface features such as craters, to perform Terrain Relative Navigation, thus further reducing the vehicle's relative position error. This paper describes the operational capabilities of each lidar sensor and provides a status of their development. Keywords: Laser Remote Sensing, Laser Radar, Doppler Lidar, Flash Lidar, 3-D Imaging, Laser Altimeter, Precision Landing, Hazard Detection
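
    The three-beam velocity measurement reduces to linear algebra: each beam observes the projection of the vehicle velocity onto its pointing direction, so three non-coplanar beams determine the full vector. A minimal sketch follows, with a purely hypothetical beam geometry (the actual cant angles of the Doppler lidar are not given here):

```python
import numpy as np

# Hypothetical beam geometry: three beams canted 22.5 deg off nadir,
# 120 deg apart in azimuth (unit vectors in the vehicle frame, z up).
cant = np.deg2rad(22.5)
az = np.deg2rad([0.0, 120.0, 240.0])
beams = np.column_stack([np.sin(cant) * np.cos(az),
                         np.sin(cant) * np.sin(az),
                         -np.cos(cant) * np.ones(3)])   # pointing downward

def velocity_from_los(v_los):
    """Solve beams @ v = v_los for the vehicle velocity vector, given the
    three measured line-of-sight velocities (m/s)."""
    return np.linalg.solve(beams, v_los)

v_true = np.array([1.5, -0.3, -2.0])   # forward, lateral, descent (m/s)
v_los = beams @ v_true                 # what each beam would measure
print(velocity_from_los(v_los))        # recovers v_true
```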

  3. A simulation of air pollution model parameter estimation using data from a ground-based LIDAR remote sensor

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.; Suttles, J. T.

    1977-01-01

One way to obtain estimates of the unknown parameters in a pollution dispersion model is to compare the model predictions with remotely sensed air quality data. A ground-based LIDAR sensor provides relative pollution concentration measurements as a function of space and time. The measured sensor data are compared with the dispersion model output through a numerical estimation procedure to yield the parameter estimates that best fit the data. This overall process is tested in a computer simulation to study the effects of various measurement strategies. Such a simulation is useful prior to a field measurement exercise to maximize the information content of the collected data. Parametric studies of simulated data matched to a Gaussian plume dispersion model indicate the trade-offs available between estimation accuracy and data acquisition strategy.
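
    As an illustration of the estimation step, the sketch below fits two Gaussian-plume parameters (a source-strength scale and the crosswind dispersion width) to a simulated noisy crosswind concentration scan by nonlinear least squares. The profile form, parameter values and noise level are hypothetical, chosen only to show the procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def plume_crosswind(y, q, sigma_y):
    """Gaussian-plume crosswind profile at fixed downwind distance and
    height: c(y) = q * exp(-y^2 / (2 * sigma_y^2))."""
    return q * np.exp(-y**2 / (2.0 * sigma_y**2))

# Simulated relative-concentration scan across the plume, with 5% noise.
rng = np.random.default_rng(2)
y = np.linspace(-400.0, 400.0, 81)               # crosswind positions, m
data = plume_crosswind(y, 1.0, 120.0) * (1.0 + rng.normal(0.0, 0.05, y.size))

(q_hat, sy_hat), cov = curve_fit(plume_crosswind, y, data, p0=[0.5, 50.0])
err = np.sqrt(np.diag(cov))
print(f"q = {q_hat:.3f} +/- {err[0]:.3f}, "
      f"sigma_y = {sy_hat:.1f} +/- {err[1]:.1f} m")
```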

  4. Forest Biomass retrieval strategies from Lidar and Radar modeling

    NASA Astrophysics Data System (ADS)

    Sun, G.; Ranson, J.

    2008-12-01

Estimates of regional and global forest biomass and forest structure are essential for understanding and monitoring ecosystem responses to human activities and climate change. Lidars capable of recording the time-varying return signal provide vegetation height, ground surface height, and the vertical distribution of vegetated surfaces intercepted by the laser pulses. Large-footprint lidar has been shown to be an effective technique for measuring forest canopy height and biomass from space. Radar, in essence, responds to the amount of water in a forest canopy as well as to its spatial structure. Data from these sensors contain information relevant to different aspects of the biophysical properties of the vegetation canopy, including above-ground biomass. NASA's planned DESDynI mission will provide global systematic lidar sampling data and complete global coverage of L-band high-resolution SAR and InSAR data for vegetation 3D structure mapping. By combining lidar and high-resolution SAR data, our quantitative knowledge of global carbon dynamics and ecosystem structure and function can be improved. This requires new data processing and fusion technologies. Open questions include the proper lidar sampling design and how to extrapolate the vegetation spatial structural parameters estimated at lidar footprints to global, high-resolution spatial coverage. The current DESDynI configuration may also require lidar observations at variable look angles, which creates a new challenge in lidar data processing. Models designed to simulate the lidar and radar response from a variety of forest canopies can help answer these questions. In this paper we present an overview of our spatially explicit lidar and radar models and their use for examining the questions above. Specifically, we will discuss sensitivities of large-footprint lidar and L-band polarimetric and interferometric radar to forest

  5. A comparison of Doppler lidar wind sensors for Earth-orbit global measurement applications

    NASA Technical Reports Server (NTRS)

    Menzies, Robert T.

    1985-01-01

At present, four Doppler lidar configurations are being promoted for the measurement of tropospheric winds: (1) the coherent CO2 lidar, operating in the 9 micrometer region, using a pulsed, atmospheric-pressure CO2 gas discharge laser transmitter and heterodyne detection; (2) the coherent neodymium-doped YAG or glass lidar, operating at 1.06 micrometers, using flashlamp or diode laser optical pumping of the solid-state laser medium and heterodyne detection; (3) the neodymium-doped YAG/glass lidar, operating at the doubled frequency (530 nm wavelength), again using flashlamp or diode laser pumping of the laser transmitter, and using a high-resolution tandem Fabry-Perot filter and direct detection; and (4) the Raman-shifted xenon chloride lidar, operating at 350 nm wavelength, using a pulsed, atmospheric-pressure XeCl gas discharge laser transmitter at 308 nm, Raman shifted in a high-pressure hydrogen cell to 350 nm in order to avoid strong stratospheric ozone absorption, also using a high-resolution tandem Fabry-Perot filter and direct detection. Comparisons of these four systems can include many factors and trade-offs. The major portion of this comparison is devoted to efficiency. Efficiency comparisons are made by estimating the number of transmitted photons required for a single-pulse wind velocity estimate of ±1 m/s accuracy in the middle troposphere, from an altitude of 800 km, which is assumed to be reasonable for a polar orbiting platform.

  6. 3D Feature Point Extraction from LIDAR Data Using a Neural Network

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Schlichting, A.; Brenner, C.

    2016-06-01

Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data, which were then matched to improve the positioning accuracy of the vehicles. However, there are environments which contain only a limited number of poles. 3D feature points are a suitable alternative for use as landmarks. They can be assumed to be present in the environment, independent of certain object classes. To match the online LiDAR data to another LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is first detected by the Shi-Tomasi corner detector on the range images of the LiDAR point cloud. Using a back-propagation algorithm for the training, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on the 2D range images, but also 3D features such as the curvature value and the z component of the surface normal, which are calculated directly from the LiDAR point cloud. Subsequently, the feature points extracted on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in 3D space. Our test shows that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for better localization of vehicles.
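
    The candidate-detection stage maps directly onto a standard library call. A minimal sketch of that step with OpenCV's Shi-Tomasi detector on a range image is given below; the normalization choices and thresholds are hypothetical, and the neural-network classification stage that follows in the paper is omitted:

```python
import numpy as np
import cv2

def corner_candidates(range_image, max_corners=500):
    """Shi-Tomasi corner candidates on a LiDAR range image.
    range_image: 2D float array of ranges in metres (NaN = no return)."""
    img = np.nan_to_num(range_image, nan=0.0)
    # goodFeaturesToTrack expects a single-channel 8-bit (or float32) image.
    img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    pts = cv2.goodFeaturesToTrack(img8, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=5)
    return pts.reshape(-1, 2) if pts is not None else np.empty((0, 2))

# Each (col, row) candidate indexes back into the scan, so the matching 3D
# point and its local features (curvature, normal z component) can be looked
# up for the neural-network classification stage described above.
```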

  7. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

Three-dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. Current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data-adaptive solutions for 3D autofocus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation, which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes, allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-band radar data.

  8. Lidar base specification

    USGS Publications Warehouse

    Heidemann, Hans Karl.

    2012-01-01

    In late 2009, a $14.3 million allocation from the “American Recovery and Reinvestment Act” for new light detection and ranging (lidar) elevation data prompted the U.S. Geological Survey (USGS) National Geospatial Program (NGP) to develop a common base specification for all lidar data acquired for The National Map. Released as a draft in 2010 and formally published in 2012, the USGS–NGP “Lidar Base Specification Version 1.0” (now Lidar Base Specification) was quickly embraced as the foundation for numerous state, county, and foreign country lidar specifications. Prompted by a growing appreciation for the wide applicability and inherent value of lidar, a USGS-led consortium of Federal agencies commissioned a National Enhanced Elevation Assessment (NEEA) study in 2010 to quantify the costs and benefits of a national lidar program. A 2012 NEEA report documented a substantial return on such an investment, defined five Quality Levels (QL) for elevation data, and recommended an 8-year collection cycle of Quality Level 2 (QL2) lidar data as the optimum balance of benefit and affordability. In response to the study, the USGS–NGP established the 3D Elevation Program (3DEP) in 2013 as the interagency vehicle through which the NEEA recommendations could be realized. Lidar is a fast evolving technology, and much has changed in the industry since the final draft of the “Lidar Base Specification Version 1.0” was written. Lidar data have improved in accuracy and spatial resolution, geospatial accuracy standards have been revised by the American Society for Photogrammetry and Remote Sensing (ASPRS), industry standard file formats have been expanded, additional applications for lidar have become accepted, and the need for interoperable data across collections has been realized. This revision to the “Lidar Base Specification Version 1.0” publication addresses those changes and provides continued guidance towards a nationally consistent lidar dataset.

  9. Space-Based Erbium-Doped Fiber Amplifier Transmitters for Coherent, Ranging, 3D-Imaging, Altimetry, Topology, and Carbon Dioxide Lidar and Earth and Planetary Optical Laser Communications

    NASA Astrophysics Data System (ADS)

    Storm, Mark; Engin, Doruk; Mathason, Brian; Utano, Rich; Gupta, Shantanu

    2016-06-01

    This paper describes Fibertek, Inc.'s progress in developing space-qualified Erbium-doped fiber amplifier (EDFA) transmitters for laser communications and ranging/topology, and CO2 integrated path differential absorption (IPDA) lidar. High peak power (1 kW) and 6 W of average power supporting multiple communications formats has been demonstrated with 17% efficiency in a compact 3 kg package. The unit has been tested to Technology Readiness Level (TRL) 6 standards. A 20 W EDFA suitable for CO2 lidar has been demonstrated with ~14% efficiency (electrical to optical [e-o]) and its performance optimized for 1571 nm operation.

  10. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D representations of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge of how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects into physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  11. Optimization of Sensor Placements Using Machine Learning and LIDAR data: a Case Study for a Snow Monitoring Network in the Sierra Nevada.

    NASA Astrophysics Data System (ADS)

    Oroza, C.; Zheng, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.

    2014-12-01

We present a methodology for the identification of optimal sensor placements and wireless network structure for remote wireless sensor networks. When applied to an existing snow observation network, our results suggest that greater spatial variability and more optimal network structures could be achieved compared to the existing placements. For sensor networks designed to measure spatially distributed phenomena, it is best to choose sites that capture the full range of variables explaining the underlying spatial distribution. In the context of snow depth estimation, topographical variables affecting the spatial distribution include elevation, slope, aspect, vegetation, and concavity. To extract this set of feature vectors, data are obtained from the NSF Open Topography platform, which uses LIDAR flights with 11.65 points per square meter to produce one-meter rasters for the DEM and surface models. Slope and aspect are calculated by convolution of the elevation matrix with the Sobel operator, and the vegetation layer is estimated from a two-meter height filter on the canopy height model. Two types of terrain concavity are calculated from the DEM raster: profile (parallel to the direction of maximum slope) and planform (perpendicular to the direction of maximum slope). Once this feature space is extracted from the LIDAR data, sensor placements can be found using K-means clustering. We use a normalized feature space (in which all feature vectors are scaled from zero to one, thereby evenly weighting each variable). The number of sensors, K, is taken as an input to the algorithm, which partitions the data into K Voronoi cells, thereby spreading the sensor locations evenly through the space of observed variables. For regions that do not have LIDAR data, we present a methodology that uses a support vector machine algorithm with user-generated training and cross-validation points to classify vegetation from satellite imagery, and compare its accuracy
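
    A minimal sketch of the placement step: terrain features are derived from the DEM with Sobel convolutions, scaled to [0, 1] so each variable is evenly weighted, and clustered with K-means; each sensor goes to the grid cell nearest a cluster centroid. The toy terrain, the reduced feature set (concavity is omitted for brevity) and all parameter choices are hypothetical:

```python
import numpy as np
from scipy.ndimage import sobel
from sklearn.cluster import KMeans

def place_sensors(dem, veg, k=20):
    """Choose k sensor sites by K-means in a normalized feature space
    (elevation, slope, aspect, vegetation) on a common 1-m grid."""
    dzdx = sobel(dem, axis=1)                 # Sobel-convolution gradients
    dzdy = sobel(dem, axis=0)
    slope = np.hypot(dzdx, dzdy)
    aspect = np.arctan2(dzdy, dzdx)
    feats = np.column_stack([a.ravel() for a in (dem, slope, aspect, veg)])
    # Scale every feature to [0, 1] so each variable is weighted evenly.
    feats = (feats - feats.min(axis=0)) / (np.ptp(feats, axis=0) + 1e-12)
    km = KMeans(n_clusters=k, n_init=10).fit(feats)
    # A sensor goes at the grid cell nearest each cluster centroid.
    idx = [np.argmin(((feats - c) ** 2).sum(axis=1))
           for c in km.cluster_centers_]
    return np.column_stack(np.unravel_index(np.array(idx), dem.shape))

rng = np.random.default_rng(3)
dem = np.cumsum(np.cumsum(rng.normal(size=(100, 100)), axis=0), axis=1)
veg = (rng.random((100, 100)) > 0.7).astype(float)   # toy canopy mask
print(place_sensors(dem, veg, k=5))                  # (row, col) sites
```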

  12. Real-time monitoring of 3D cell culture using a 3D capacitance biosensor.

    PubMed

    Lee, Sun-Mi; Han, Nalae; Lee, Rimi; Choi, In-Hong; Park, Yong-Beom; Shin, Jeon-Soo; Yoo, Kyung-Hwa

    2016-03-15

    Three-dimensional (3D) cell cultures have recently received attention because they represent a more physiologically relevant environment compared to conventional two-dimensional (2D) cell cultures. However, 2D-based imaging techniques or cell sensors are insufficient for real-time monitoring of cellular behavior in 3D cell culture. Here, we report investigations conducted with a 3D capacitance cell sensor consisting of vertically aligned pairs of electrodes. When GFP-expressing human breast cancer cells (GFP-MCF-7) encapsulated in alginate hydrogel were cultured in a 3D cell culture system, cellular activities, such as cell proliferation and apoptosis at different heights, could be monitored non-invasively and in real-time by measuring the change in capacitance with the 3D capacitance sensor. Moreover, we were able to monitor cell migration of human mesenchymal stem cells (hMSCs) with our 3D capacitance sensor.

  13. 3D indoor modeling using a hand-held embedded system with multiple laser range scanners

    NASA Astrophysics Data System (ADS)

    Hu, Shaoxing; Wang, Duhu; Xu, Shike

    2016-10-01

Accurate three-dimensional perception is a key technology for many engineering applications, including mobile mapping, obstacle detection and virtual reality. In this article, we present a hand-held embedded system designed for constructing 3D representations of structured indoor environments. Different from traditional vehicle-borne mobile mapping methods, the system presented here is capable of efficiently acquiring 3D data while an operator carrying the device traverses the site. It consists of a simultaneous localization and mapping (SLAM) module, a 3D attitude estimation module and a point cloud processing module. The SLAM is based on a scan-matching approach using a modern LIDAR system, and the 3D attitude estimate is generated by a navigation filter using inertial sensors. The hardware comprises three 2D time-of-flight laser range finders and an inertial measurement unit (IMU). All the sensors are rigidly mounted on a body frame. The algorithms are developed within the robot operating system (ROS) framework. The 3D model is constructed using the point cloud library (PCL). Multiple datasets have shown the robust performance of the presented system in indoor scenarios.

  14. A Fuzzy-Based Fusion Method of Multimodal Sensor-Based Measurements for the Quantitative Evaluation of Eye Fatigue on 3D Displays

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Heo, Hwan; Park, Kang Ryoung

    2015-01-01

    With the rapid increase of 3-dimensional (3D) content, considerable research related to the 3D human factor has been undertaken for quantitatively evaluating visual discomfort, including eye fatigue and dizziness, caused by viewing 3D content. Various modalities such as electroencephalograms (EEGs), biomedical signals, and eye responses have been investigated. However, the majority of the previous research has analyzed each modality separately to measure user eye fatigue. This cannot guarantee the credibility of the resulting eye fatigue evaluations. Therefore, we propose a new method for quantitatively evaluating eye fatigue related to 3D content by combining multimodal measurements. This research is novel for the following four reasons: first, for the evaluation of eye fatigue with high credibility on 3D displays, a fuzzy-based fusion method (FBFM) is proposed based on the multimodalities of EEG signals, eye blinking rate (BR), facial temperature (FT), and subjective evaluation (SE); second, to measure a more accurate variation of eye fatigue (before and after watching a 3D display), we obtain the quality scores of EEG signals, eye BR, FT and SE; third, for combining the values of the four modalities we obtain the optimal weights of the EEG signals BR, FT and SE using a fuzzy system based on quality scores; fourth, the quantitative level of the variation of eye fatigue is finally obtained using the weighted sum of the values measured by the four modalities. Experimental results confirm that the effectiveness of the proposed FBFM is greater than other conventional multimodal measurements. Moreover, the credibility of the variations of the eye fatigue using the FBFM before and after watching the 3D display is proven using a t-test and descriptive statistical analysis using effect size. PMID:25961382

  15. A Fuzzy-Based Fusion Method of Multimodal Sensor-Based Measurements for the Quantitative Evaluation of Eye Fatigue on 3D Displays.

    PubMed

    Bang, Jae Won; Choi, Jong-Suk; Heo, Hwan; Park, Kang Ryoung

    2015-05-07

    With the rapid increase of 3-dimensional (3D) content, considerable research related to the 3D human factor has been undertaken for quantitatively evaluating visual discomfort, including eye fatigue and dizziness, caused by viewing 3D content. Various modalities such as electroencephalograms (EEGs), biomedical signals, and eye responses have been investigated. However, the majority of the previous research has analyzed each modality separately to measure user eye fatigue. This cannot guarantee the credibility of the resulting eye fatigue evaluations. Therefore, we propose a new method for quantitatively evaluating eye fatigue related to 3D content by combining multimodal measurements. This research is novel for the following four reasons: first, for the evaluation of eye fatigue with high credibility on 3D displays, a fuzzy-based fusion method (FBFM) is proposed based on the multimodalities of EEG signals, eye blinking rate (BR), facial temperature (FT), and subjective evaluation (SE); second, to measure a more accurate variation of eye fatigue (before and after watching a 3D display), we obtain the quality scores of EEG signals, eye BR, FT and SE; third, for combining the values of the four modalities we obtain the optimal weights of the EEG signals BR, FT and SE using a fuzzy system based on quality scores; fourth, the quantitative level of the variation of eye fatigue is finally obtained using the weighted sum of the values measured by the four modalities. Experimental results confirm that the effectiveness of the proposed FBFM is greater than other conventional multimodal measurements. Moreover, the credibility of the variations of the eye fatigue using the FBFM before and after watching the 3D display is proven using a t-test and descriptive statistical analysis using effect size.
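
    Mechanically, the final fusion stage is a normalized weighted sum of the per-modality changes. The sketch below shows only that arithmetic; in the paper the weights come from a fuzzy system driven by per-modality quality scores, for which the fixed hypothetical weights here merely stand in:

```python
import numpy as np

def fused_fatigue_score(deltas, weights):
    """Weighted-sum fusion of per-modality eye-fatigue changes
    (EEG, blink rate, facial temperature, subjective score), measured
    before vs. after viewing; weights are normalized to sum to one."""
    deltas = np.asarray(deltas, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(w @ deltas)

# Hypothetical normalized changes and weights for the four modalities.
print(fused_fatigue_score([0.6, 0.4, 0.2, 0.7], [0.35, 0.25, 0.15, 0.25]))
```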

  16. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; ...

    2016-04-21

In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  17. An underwater chaotic lidar sensor based on synchronized blue laser diodes

    NASA Astrophysics Data System (ADS)

    Rumbaugh, Luke K.; Dunn, Kaitlin J.; Bollt, Erik M.; Cochenour, Brandon; Jemison, William D.

    2016-05-01

    We present a novel chaotic lidar system designed for underwater impulse response measurements. The system uses two recently introduced, low-cost, commercially available 462 nm multimode InGaN laser diodes, which are synchronized by a bi-directional optical link. This synchronization results in a noise-like chaotic intensity modulation with over 1 GHz bandwidth and strong modulation depth. An advantage of this approach is its simple transmitter architecture, which uses no electrical signal generator, electro-optic modulator, or optical frequency doubler.

  18. Laser sources for lidar applications

    NASA Astrophysics Data System (ADS)

    Kilmer, J.; Iadevaia, A.; Yin, Y.

    2012-06-01

Advanced LIDAR applications, such as next-generation micro pulse, time-of-flight (e.g., satellite laser ranging), coherent and incoherent Doppler (e.g., wind LIDAR), high spectral resolution, differential absorption (DIAL), and photon-counting (e.g., 3D LIDAR) systems, are placing more demanding requirements on conventional lasers (e.g., increased repetition rates) and have inspired the development of new types of laser sources. Today, solid-state lasers are used for wind sensing, 2D laser radar, 3D scanning and flash LIDAR. In this paper, we report on the development of compact, highly efficient, high-power all-solid-state diode-pumped pulsed ns lasers, as well as high average power/high pulse energy sub-nanosecond (<1 ns) and picosecond (<100 ps) lasers for these next-generation LIDAR applications.

  19. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  20. Validate and update of 3D urban features using multi-source fusion

    NASA Astrophysics Data System (ADS)

    Arrington, Marcus; Edwards, Dan; Sengers, Arjan

    2012-06-01

As forecast by the United Nations in May 2007, the population of the world transitioned from a rural to an urban demographic majority, with more than half living in urban areas.1 Modern urban environments are complex 3-dimensional (3D) landscapes with 4-dimensional patterns of activity that challenge various traditional 1-dimensional and 2-dimensional sensors to accurately sample these man-made terrains. Depending on geographic location, data resulting from LIDAR, multi-spectral, electro-optical, thermal, and ground-based static and mobile sensors may be available from multiple collects, along with more traditional 2D GIS features. Reconciling differing data sources over time to correctly portray the dynamic urban landscape raises significant fusion and representational challenges, particularly as higher levels of spatial resolution become available and expected by users. This paper presents a framework for integrating the imperfect answers of our differing sensors and data sources into a powerful representation of the complex urban environment. A case study is presented involving the integration of temporally diverse 2D, 2.5D and 3D spatial data sources over Kandahar, Afghanistan. In this case study we present a methodology for validating and augmenting 2D/2.5D urban feature and attribute data with LIDAR to produce validated 3D objects. We demonstrate that nearly 15% of buildings in Kandahar require understanding of nearby vegetation before 3D validation can be successful. We also address urban temporal change detection at the object level. Finally, we address issues involved with increased sampling resolution, since urban features are rarely simple cubes but, in the case of Kandahar, involve balconies, TV dishes, rooftop walls, small rooms, and domes, among other things.

  1. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to that of the 1990s, the holographic concept is spreading into all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How should the non-visible relations of moving objects with subjects be handled? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? For what purpose? At what level? In which subject matter? For whom?

  2. The Windvan pulsed CO2 Doppler lidar wide-area wind sensor

    NASA Technical Reports Server (NTRS)

    Lawrence, Rhidian

    1990-01-01

Wind sensing using a Doppler lidar is achieved by sensing the Doppler content of narrow-frequency laser light backscattered by the ambient atmospheric aerosols. The derived radial wind components along several directions are used to generate wind vectors, typically using the Velocity Azimuth Display (VAD) method described below. Range-resolved information is obtained by range gating the continuous scattered return. For a CO2 laser (10.6 μm) the Doppler velocity scaling factor is 188 kHz per m/s. In the VAD scan method the zenith angle of the pointing direction is fixed and its azimuth is continuously varied through 2π. A spatially uniform wind field at a particular altitude yields a sinusoidal variation of the radial component vs. azimuth. The amplitude, phase and dc component of this sinusoid yield the horizontal wind speed, direction and vertical component of the wind, respectively.
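
    The quoted scaling factor follows from the Doppler relation Δf = 2v/λ: at λ = 10.6 μm, 2/λ ≈ 189 kHz per m/s, consistent with the value above. The VAD retrieval itself is a three-coefficient sinusoid fit. A minimal sketch with a synthetic scan follows; the zenith angle, wind values and noise level are hypothetical:

```python
import numpy as np

def vad_fit(azimuth, v_radial, zenith):
    """Velocity-Azimuth Display retrieval: least-squares fit of
    v_r(az) = a0 + a1*cos(az) + b1*sin(az); amplitude, phase and dc term
    give horizontal speed, direction and vertical wind (angles in rad)."""
    A = np.column_stack([np.ones_like(azimuth),
                         np.cos(azimuth), np.sin(azimuth)])
    (a0, a1, b1), *_ = np.linalg.lstsq(A, v_radial, rcond=None)
    speed = np.hypot(a1, b1) / np.sin(zenith)   # horizontal wind speed
    direction = np.arctan2(b1, a1)              # phase -> wind azimuth
    w = a0 / np.cos(zenith)                     # vertical component
    return speed, direction, w

# Synthetic VAD scan: 10 m/s wind from azimuth 60 deg, zenith angle 30 deg.
rng = np.random.default_rng(4)
zen = np.deg2rad(30.0)
az = np.deg2rad(np.arange(0.0, 360.0, 5.0))
vr = (10.0 * np.sin(zen) * np.cos(az - np.deg2rad(60.0))
      + 0.1 * np.cos(zen) + rng.normal(0.0, 0.2, az.size))
s, d, w = vad_fit(az, vr, zen)
print(f"speed {s:.2f} m/s, azimuth {np.rad2deg(d):.1f} deg, w {w:.3f} m/s")
```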

  3. Joint Temperature-Lasing Mode Compensation for Time-of-Flight LiDAR Sensors

    PubMed Central

    Alhashimi, Anas; Varagnolo, Damiano; Gustafsson, Thomas

    2015-01-01

We propose an expectation maximization (EM) strategy for improving the precision of time-of-flight (ToF) light detection and ranging (LiDAR) scanners. The novel algorithm statistically accounts not only for the bias induced by temperature changes in the laser diode, but also for the multi-modality of the measurement noise induced by mode-hopping effects. Instrumental to the proposed EM algorithm, we also describe a general thermal dynamics model that can be learned either from just input-output data or from a combination of simple temperature experiments and information from the laser’s datasheet. We test the strategy on a SICK LMS 200 device and improve its average absolute error by a factor of three. PMID:26690445
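
    The multi-modal noise part of such a model can be illustrated with a plain two-component Gaussian-mixture EM over range residuals; the temperature-bias dynamics of the paper's full model are omitted, and all numbers below are hypothetical:

```python
import numpy as np

def em_two_mode(r, iters=50):
    """EM for a two-component Gaussian mixture over range residuals, a toy
    version of the multi-modal (mode-hopping) noise model."""
    mu = np.array([r.min(), r.max()])        # crude initialization
    var = np.array([r.var(), r.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each mode for each sample
        lik = pi / np.sqrt(2 * np.pi * var) * \
              np.exp(-0.5 * (r[:, None] - mu) ** 2 / var)
        g = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update weights, means and variances
        n = g.sum(axis=0)
        pi = n / r.size
        mu = (g * r[:, None]).sum(axis=0) / n
        var = (g * (r[:, None] - mu) ** 2).sum(axis=0) / n
    return pi, mu, np.sqrt(var)

rng = np.random.default_rng(5)
r = np.concatenate([rng.normal(10.00, 0.01, 700),   # dominant lasing mode
                    rng.normal(10.04, 0.01, 300)])  # hopped mode, +4 cm
print(em_two_mode(r))
```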

  4. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  5. Counter-sniper 3D laser radar

    NASA Astrophysics Data System (ADS)

    Shepherd, Orr; LePage, Andrew J.; Wijntjes, Geert J.; Zehnpfennig, Theodore F.; Sackos, John T.; Nellums, Robert O.

    1999-01-01

Visidyne, Inc., teaming with Sandia National Laboratories, has developed the preliminary design for an innovative scannerless 3-D laser radar capable of acquiring, tracking, and determining the coordinates of small caliber projectiles in flight with sufficient precision that their origin can be established by back-projecting their tracks to their source. The design takes advantage of the relatively large effective cross-section of a bullet at optical wavelengths. Key to its implementation is the use of efficient, high-power laser diode arrays for illuminators and an imaging laser receiver using a unique CCD imager design that acquires the information to establish x, y (angle-angle) and range coordinates for each bullet at very high frame rates. The detection process achieves a high degree of discrimination by using the optical signature of the bullet, solar background mitigation, and track detection. Field measurements and computer simulations have been used to provide the basis for a preliminary design of a robust bullet tracker, the Counter Sniper 3-D Laser Radar. Experimental data showing 3-D test imagery acquired by a lidar with an architecture similar to that of the proposed Counter Sniper 3-D Lidar are presented. A proposed Phase II development would yield an innovative, compact, and highly efficient bullet-tracking laser radar. Such a device would meet the needs of not only the military, but also federal, state, and local law enforcement organizations.

  6. Lidar-based Evaluation of Sub-pixel Forest Structural Characteristics and Sun-sensor Geometries that Influence MODIS Leaf Area Index Product Accuracy and Retrieval Quality

    NASA Astrophysics Data System (ADS)

    Jensen, J.; Humes, K. S.

    2010-12-01

Leaf Area Index (LAI) is an important structural component of vegetation because the foliar surface of plants largely controls the exchange of water, nutrients, and energy within terrestrial ecosystems. Because LAI is a key variable used to model water, energy, and biogeochemical cycles, Moderate Resolution Imaging Spectroradiometer (MODIS) LAI products are widely used in many studies to better understand and quantify exchanges between the terrestrial surface and the atmosphere. Within the last decade, significant resources and effort have been invested in MODIS LAI validation for a variety of biome types, and a suite of published work has provided valuable feedback on the agreement between MODIS-derived LAI via radiative transfer (RT) inversion and multispectral-based empirical estimates of LAI. Our study provides an alternative assessment of the MODIS LAI product for a 58,000 ha evergreen needleleaf forest located in the western Rocky Mountain range in northern Idaho by using lidar data to model (R2=0.86, RMSE=0.76) and map fine-scale estimates of vegetation structure over a region for which multispectral LAI estimates were unacceptable. In an effort to provide feedback on algorithm performance, we evaluated the agreement between lidar-modeled and MODIS-retrieved LAI by specific MODIS LAI retrieval algorithm and product quality definitions. We also examined the sub-pixel vegetation structural conditions and satellite-sensor geometries that tend to influence MODIS LAI retrieval algorithm and product quality over our study area. Our results demonstrate close agreement between lidar LAI and MODIS LAI retrieved using the main RT algorithm, and consistently large MODIS LAI overestimates for pixels retrieved from a saturated set of RT solutions. Our evaluation also highlighted some conditions for which sub-pixel structural characteristics and sun-sensor geometries influenced retrieval quality and product agreement. These conditions include: 1) the

  7. On detailed 3D reconstruction of large indoor environments

    NASA Astrophysics Data System (ADS)

    Bondarev, Egor

    2015-03-01

In this paper we present techniques for highly detailed 3D reconstruction of extra-large indoor environments. We discuss the benefits and drawbacks of low-range, far-range and hybrid sensing and reconstruction approaches. The proposed techniques for low-range and hybrid reconstruction, enabling a reconstruction density of 125 points/cm3 on large 100,000 m3 models, are presented in detail. The techniques tackle the core challenges for the above requirements, such as multi-modal data fusion (fusion of LIDAR data with Kinect data), accurate sensor pose estimation, high-density scanning and depth data noise filtering. Other important aspects for extra-large 3D indoor reconstruction are point cloud decimation and real-time rendering. In this paper, we present a method for planar-based point cloud decimation, allowing for reduction of a point cloud size by 80-95%. Besides this, we introduce a method for online rendering of extra-large point clouds, enabling real-time visualization of huge cloud spaces in conventional web browsers.
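
    A minimal sketch of the planar-decimation idea: detect a dominant plane by RANSAC, keep only a fraction of its points, and keep everything off-plane untouched. The real pipeline iterates this over many planes; the thresholds, fractions and toy scene below are hypothetical:

```python
import numpy as np

def decimate_planar(pts, dist_tol=0.02, keep_frac=0.1, trials=200, rng=None):
    """Plane-aware decimation sketch: find one dominant plane by RANSAC,
    keep only keep_frac of its points, keep all off-plane points."""
    rng = rng or np.random.default_rng()
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(trials):
        p = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                          # degenerate sample
        inliers = np.abs((pts - p[0]) @ (n / norm)) < dist_tol
        if inliers.sum() > best.sum():
            best = inliers
    keep = ~best                              # off-plane points survive intact
    idx = np.flatnonzero(best)
    if idx.size:                              # thin the planar points
        keep[rng.choice(idx, max(1, int(keep_frac * idx.size)),
                        replace=False)] = True
    return pts[keep]

rng = np.random.default_rng(6)
wall = np.column_stack([rng.random(5000), rng.random(5000),
                        rng.normal(0.0, 0.005, 5000)])   # a noisy wall plane
clutter = rng.random((200, 3))
pts = np.vstack([wall, clutter])
print(len(pts), "->", len(decimate_planar(pts, rng=rng)))
```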

  8. 3D goes digital: from stereoscopy to modern 3D imaging techniques

    NASA Astrophysics Data System (ADS)

    Kerwien, N.

    2014-11-01

    In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

  9. Automatic registration of UAV-borne sequent images and LiDAR data

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Chen, Chi

    2015-03-01

The use of direct geo-referencing data leads to registration failures between sequent images and LiDAR data captured from mini-UAV platforms because of the limited accuracy of their low-cost sensors. This paper therefore proposes a novel automatic registration method for sequent images and LiDAR data captured by mini-UAVs. First, the proposed method extracts building outlines from the LiDAR data and the images, and estimates the exterior orientation parameters (EoPs) of the images containing building objects in the LiDAR data coordinate framework, based on corresponding corner points derived indirectly from linear features. Second, the EoPs of the sequent images in the image coordinate framework are recovered using a structure-from-motion (SfM) technique, and the transformation matrices between the LiDAR coordinate and image coordinate frameworks are calculated from corresponding EoPs, resulting in a coarse registration between the images and the LiDAR data. Finally, 3D points are generated from the sequent images by multi-view stereo (MVS) algorithms. The EoPs of the sequent images are then further refined by registering the LiDAR data and the 3D points using an iterative closest-point (ICP) algorithm initialized with the coarse registration, resulting in a fine registration between the sequent images and the LiDAR data. Experiments were performed to check the validity and effectiveness of the proposed method. The results show that the proposed method achieves high-precision, robust co-registration of sequent images and LiDAR data captured by mini-UAVs.
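
    The fine-registration stage is a standard ICP loop. A minimal point-to-point sketch is given below (nearest neighbours via a k-d tree, closed-form rigid fit via SVD); the paper's actual implementation details, coarse initialization and convergence tests are not shown:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP: iterate nearest-neighbour matching and a
    closed-form (SVD/Kabsch) rigid fit; returns R, t with dst ~ src @ R.T + t."""
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)              # closest reference point each
        matched = dst[idx]
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T               # proper rotation (det = +1)
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t

# Toy check on a hypothetical misalignment: rotate/translate a cloud, recover.
rng = np.random.default_rng(7)
dst = rng.random((500, 3))
th = np.deg2rad(5.0)
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.05, -0.02, 0.01])
R, t = icp(src, dst)
print(np.abs(src @ R.T + t - dst).max())      # should be near zero
```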

  10. a Multi-Data Source and Multi-Sensor Approach for the 3D Reconstruction and Visualization of a Complex Archaeological Site: the Case Study of Tolmo de Minateda

    NASA Astrophysics Data System (ADS)

    Torres-Martínez, J. A.; Seddaiu, M.; Rodríguez-Gonzálvez, P.; Hernández-López, D.; González-Aguilera, D.

    2015-02-01

The complexity of archaeological sites hinders integral modelling with current geomatic techniques (i.e. aerial and close-range photogrammetry and terrestrial laser scanning) applied individually, so a multi-sensor approach is proposed as the best solution to provide 3D reconstruction and visualization of these complex sites. Sensor registration represents a key milestone when automation is required and when aerial and terrestrial datasets must be integrated. To this end, several problems must be solved: coordinate system definition, geo-referencing, co-registration of point clouds, geometric and radiometric homogeneity, etc. Last but not least, safeguarding tangible archaeological heritage and its associated intangible expressions entails a multi-source data approach, in which heterogeneous material (historical documents, drawings, archaeological techniques, habits of living, etc.) should be collected and combined with the resulting hybrid 3D models. The proposed multi-data source and multi-sensor approach is applied to the case study of the "Tolmo de Minateda" archaeological site. A total extent of 9 ha is reconstructed, at an adapted level of detail, with an ultralight aerial platform (paratrike), an unmanned aerial vehicle, a terrestrial laser scanner and terrestrial photogrammetry. In addition, the defensive nature of the site (i.e. the presence of three different defensive walls), together with the considerable stratification of the archaeological site (i.e. different archaeological surfaces and constructive typologies), requires that tangible and intangible archaeological heritage expressions be integrated with the hybrid 3D models obtained, so that different experts and heritage stakeholders can analyse, understand and exploit the archaeological site.

  11. Highly-Sensitive Surface-Enhanced Raman Spectroscopy (SERS)-based Chemical Sensor using 3D Graphene Foam Decorated with Silver Nanoparticles as SERS substrate

    PubMed Central

    Srichan, Chavis; Ekpanyapong, Mongkol; Horprathum, Mati; Eiamchai, Pitak; Nuntawong, Noppadon; Phokharatkul, Ditsayut; Danvirutai, Pobporn; Bohez, Erik; Wisitsoraat, Anurat; Tuantranont, Adisorn

    2016-01-01

In this work, a novel platform for surface-enhanced Raman spectroscopy (SERS)-based chemical sensors utilizing three-dimensional microporous graphene foam (GF) decorated with silver nanoparticles (AgNPs) is developed and applied for methylene blue (MB) detection. The results demonstrate that silver nanoparticles significantly enhance cascaded amplification of the SERS effect on multilayer graphene foam (GF). The enhancement factor of the AgNPs/GF sensor is found to be four orders of magnitude larger than that of an AgNPs/Si substrate. In addition, the sensitivity of the sensor could be tuned by controlling the size of the silver nanoparticles. The highest SERS enhancement factor of ~5 × 10^4 is achieved at the optimal nanoparticle size of 50 nm. Moreover, the sensor is capable of detecting MB over broad concentration ranges from 1 nM to 100 μM. Therefore, AgNPs/GF is a highly promising SERS substrate for detection of chemical substances with ultra-low concentrations. PMID:27020705

  12. LiDAR: Providing structure

    USGS Publications Warehouse

    Vierling, Lee A.; Martinuzzi, Sebastián; Asner, Gregory P.; Stoker, Jason M.; Johnson, Brian R.

    2011-01-01

Since the days of MacArthur, three-dimensional (3-D) structural information on the environment has fundamentally transformed scientific understanding of ecological phenomena (MacArthur and MacArthur 1961). Early data on ecosystem structure were painstakingly laborious to collect. However, as reviewed and reported in recent volumes of Frontiers (eg Vierling et al. 2008; Asner et al. 2011), advances in light detection and ranging (LiDAR) remote-sensing technology provide quantitative and repeatable measurements of 3-D ecosystem structure that enable novel ecological insights at scales ranging from the plot, to the landscape, to the globe. Indeed, annual publication of studies using LiDAR to interpret ecological phenomena increased 17-fold during the past decade, with over 180 new studies appearing in 2010 (ISI Web of Science search conducted on 23 Mar 2011: [{lidar AND ecol*} OR {lidar AND fores*} OR {lidar AND plant*}]).

  13. AE3D

    SciTech Connect

    Spong, Donald A

    2016-06-20

AE3D solves for the shear Alfven eigenmodes and eigenfrequencies in a toroidal magnetic fusion confinement device. The configuration can be either 2D (e.g. tokamak, reversed field pinch) or 3D (e.g. stellarator, helical reversed field pinch, tokamak with ripple). The equations solved are based on a reduced MHD model, and sound wave coupling effects are not currently included.

  14. Imaging Flash Lidar for Autonomous Safe Landing and Spacecraft Proximity Operation

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Roback, Vincent E.; Brewster, Paul F.; Hines, Glenn D.; Bulyshev, Alexander E.

    2016-01-01

3-D imaging flash lidar is recognized as a primary candidate sensor for safe precision landing on solar system bodies (Moon, Mars, moons of Jupiter and Saturn, etc.) and for the autonomous rendezvous, proximity operations, and docking/capture necessary for asteroid sample return and redirect missions, spacecraft docking, satellite servicing, and space debris removal. During the final stages of landing, from about 1 km to 500 m above the ground, the flash lidar can generate 3-dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes. The onboard flight computer can then use the 3-D map of the terrain to guide the vehicle to a safe location. As an automated rendezvous and docking sensor, the flash lidar can provide relative range, velocity, and bearing from an approaching spacecraft to another spacecraft or a space station from a distance of several kilometers. NASA Langley Research Center has developed and demonstrated a flash lidar sensor system capable of generating 16k-pixel range images with 7 cm precision, at a 20 Hz frame rate, from a maximum slant range of 1800 m from the target area. This paper describes the lidar instrument design and capabilities as demonstrated by the closed-loop flight tests onboard a rocket-propelled free-flyer vehicle (Morpheus). A plan for continued advancement of the flash lidar technology is then explained. This proposed plan is aimed at the development of a common sensor that, with a modest design adjustment, can meet the needs of both landing and proximity operation and docking applications.

  15. High-performance, mechanically flexible, and vertically integrated 3D carbon nanotube and InGaZnO complementary circuits with a temperature sensor.

    PubMed

    Honda, Wataru; Harada, Shingo; Ishida, Shohei; Arie, Takayuki; Akita, Seiji; Takei, Kuniharu

    2015-08-26

A vertically integrated inorganic-based flexible complementary metal-oxide-semiconductor (CMOS) inverter with a temperature sensor, exhibiting a high inverter gain of ≈50 and a low power consumption of <7 nW mm^-1, is demonstrated using a layer-by-layer assembly process. In addition, the negligible influence of mechanical flexibility on the performance of the CMOS inverter and the temperature dependence of the CMOS inverter characteristics are discussed.

  16. Imaging Flash Lidar for Safe Landing on Solar System Bodies and Spacecraft Rendezvous and Docking

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Roback, Vincent E.; Bulyshev, Alexander E.; Brewster, Paul F.; Carrion, William A; Pierrottet, Diego F.; Hines, Glenn D.; Petway, Larry B.; Barnes, Bruce W.; Noe, Anna M.

    2015-01-01

NASA has been pursuing flash lidar technology for autonomous, safe landing on solar system bodies and for automated rendezvous and docking. During the final stages of landing, from about 1 kilometer to 500 meters above the ground, the flash lidar can generate 3-dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes. The onboard flight computer can then use the 3-D map of the terrain to guide the vehicle to a safe location. As an automated rendezvous and docking sensor, the flash lidar can provide relative range, velocity, and bearing from an approaching spacecraft to another spacecraft or a space station. NASA Langley Research Center has developed and demonstrated a flash lidar sensor system capable of generating 16,000-pixel range images with 7-centimeter precision at a 20-hertz frame rate, from a maximum slant range of 1800 m from the target area. This paper describes the lidar instrument and presents the results of recent flight tests onboard a rocket-propelled free-flyer vehicle (Morpheus) built by NASA Johnson Space Center. The flights were conducted at a simulated lunar terrain site, consisting of realistic hazard features and designated landing areas, built at NASA Kennedy Space Center specifically for this demonstration test. This paper also provides an overview of the plan for continued advancement of the flash lidar technology aimed at enhancing its performance to meet both landing and automated rendezvous and docking applications.

  17. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics is presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  18. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  19. Optical 3D surface digitizing in forensic medicine: 3D documentation of skin and bone injuries.

    PubMed

    Thali, Michael J; Braun, Marcel; Dirnhofer, Richard

    2003-11-26

    The photographic process reduces a three-dimensional (3D) wound to two dimensions. If a high-resolution 3D dataset of an object is needed, the object must be scanned three-dimensionally. Non-contact optical 3D surface scanners can be used as a powerful tool for analysing wounds and injury-causing instruments in trauma cases. The 3D documentation of skin wounds and a bone injury using the optical scanner Advanced TOpometric Sensor (ATOS II, GOM International, Switzerland) is demonstrated using two illustrative cases. With this optical 3D digitizing method, the wounds (the virtual 3D computer models of the skin and bone injuries) and the virtual 3D model of the injury-causing tool are documented graphically in 3D, in real-life size and shape, and can be rotated on the computer screen in a CAD program. In addition, the virtual 3D models of the bone injuries and the tool can be compared against one another in virtual space within a 3D CAD program, to see whether there are matching areas. Further steps in forensic medicine will be full 3D surface documentation of the human body and all forensically relevant injuries using optical 3D scanners.

  20. Wind Field Measurements With Airborne Doppler Lidar

    NASA Technical Reports Server (NTRS)

    Menzies, Robert T.

    1999-01-01

    In collaboration with lidar atmospheric remote sensing groups at NASA Marshall Space Flight Center and the National Oceanic and Atmospheric Administration (NOAA) Environmental Technology Laboratory, we have developed and flown the Multi-center Airborne Coherent Atmospheric Wind Sensor (MACAWS) lidar on the NASA DC-8 research aircraft. The scientific motivations for this effort are: to obtain measurements of subgrid-scale (i.e., 2-200 km) processes and features which may be used to improve parameterizations in global/regional-scale models; to improve understanding and predictive capabilities on the mesoscale; and to assess the performance of Earth-orbiting Doppler lidar for global tropospheric wind measurements. MACAWS is a scanning Doppler lidar using a pulsed transmitter and coherent detection; the use of the scanner allows 3-D wind fields to be produced from the data. The instrument can also be radiometrically calibrated and used to study aerosol, cloud, and surface scattering characteristics at the lidar wavelength in the thermal infrared. MACAWS was used to study surface winds off the California coast near Point Arena, where the northerly flow is due to the Pacific subtropical high. The coastal topography interacts with the northerly flow in the marine inversion layer, and when the flow passes a cape or point that juts into the winds, structures called "hydraulic expansion fans" are observed; these are marked by strong variation along the vertical and cross-shore directions. In the original figures, three horizontal slices at different heights above sea level (ASL) show this structure, with terrain contours in 200-m increments and elevations above 600 m highlighted. Additional information is contained in the original.
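
    A scanning Doppler lidar measures only the radial (line-of-sight) wind component, so wind vectors are retrieved by fitting the azimuthal variation of the radial velocity. The velocity-azimuth display (VAD) fit sketched below is the standard textbook approach, shown here only as a hedged illustration; it is not the MACAWS processing chain.

    ```python
    import numpy as np

    def vad_fit(azimuths_deg, elevation_deg, v_radial):
        """Least-squares fit of (u, v, w) to radial velocities from a conical
        scan. Model: v_r = u*sin(az)*cos(el) + v*cos(az)*cos(el) + w*sin(el)."""
        az = np.radians(azimuths_deg)
        el = np.radians(elevation_deg)
        A = np.c_[np.sin(az) * np.cos(el),
                  np.cos(az) * np.cos(el),
                  np.full_like(az, np.sin(el))]
        (u, v, w), *_ = np.linalg.lstsq(A, v_radial, rcond=None)
        return u, v, w

    # Synthetic scan: true wind (u, v, w) = (5, -3, 0.2) m/s, 10 deg elevation.
    az = np.arange(0, 360, 10.0)
    el = 10.0
    true = np.array([5.0, -3.0, 0.2])
    vr = (true[0] * np.sin(np.radians(az)) * np.cos(np.radians(el))
          + true[1] * np.cos(np.radians(az)) * np.cos(np.radians(el))
          + true[2] * np.sin(np.radians(el)))
    print(vad_fit(az, el, vr))  # recovers approximately (5.0, -3.0, 0.2)
    ```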

  1. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; ...

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  2. Bootstrapping 3D fermions

    SciTech Connect

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  3. How We 3D-Print Aerogel

    SciTech Connect

    2015-04-23

    A new type of graphene aerogel will make for better energy storage, sensors, nanoelectronics, catalysis and separations. Lawrence Livermore National Laboratory researchers have made graphene aerogel microlattices with an engineered architecture via a 3D printing technique known as direct ink writing. The research appears in the April 22 edition of the journal Nature Communications. The 3D-printed graphene aerogels have high surface area and excellent electrical conductivity, are lightweight and mechanically stiff, and exhibit supercompressibility (up to 90 percent compressive strain). In addition, the 3D-printed graphene aerogel microlattices show an order-of-magnitude improvement over bulk graphene materials and much better mass transport.

  4. Venus in 3D

    NASA Technical Reports Server (NTRS)

    Plaut, Jeffrey J.

    1993-01-01

    Stereographic images of the surface of Venus which enable geologists to reconstruct the details of the planet's evolution are discussed. The 120-meter resolution of these 3D images makes it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  5. Real-time full-motion color Flash lidar for target detection and identification

    NASA Astrophysics Data System (ADS)

    Nelson, Roy; Coppock, Eric; Craig, Rex; Craner, Jeremy; Nicks, Dennis; von Niederhausern, Kurt

    2015-05-01

    Greatly improved understanding of areas and objects of interest can be gained when real-time, full-motion Flash LiDAR is fused with inertial navigation data and multi-spectral context imagery. On its own, full-motion Flash LiDAR provides the opportunity to exploit the z dimension for improved intelligence vs. 2-D full-motion video (FMV). The intelligence value of this data is enhanced when it is combined with inertial navigation data to produce an extended, georegistered data set suitable for a variety of analyses. Further, when fused with multispectral context imagery, the typical point cloud becomes a rich 3-D scene which is intuitively obvious to the user and allows rapid cognitive analysis with little or no training. Ball Aerospace has developed and demonstrated a real-time, full-motion LiDAR system that fuses context imagery (VIS to MWIR demonstrated) and inertial navigation data in real time, and can stream these information-rich, geolocated/fused 3-D scenes from an airborne platform. In addition, since the higher-resolution context camera is boresighted and frame-synchronized to the LiDAR camera, and the LiDAR camera is an array sensor, techniques have been developed to rapidly interpolate the LiDAR pixel values, creating a point cloud that has the same resolution as the context camera, effectively creating a high-definition (HD) LiDAR image. This paper presents a design overview of the Ball TotalSight™ LiDAR system along with typical results over urban and rural areas collected from both rotary and fixed-wing aircraft. We conclude with a discussion of future work.
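
    Upsampling the lower-resolution LiDAR range image to the resolution of the boresighted context camera can be illustrated with plain bilinear interpolation; the actual TotalSight interpolation scheme is not disclosed here, so the SciPy call and image sizes below are generic stand-ins.

    ```python
    import numpy as np
    from scipy.ndimage import zoom

    # Hypothetical 128x128 flash-LiDAR range image, 1024x1024 context camera.
    lidar_range = np.random.default_rng(1).uniform(50.0, 60.0, size=(128, 128))
    scale = 1024 / 128

    # Bilinear (order=1) upsampling of range to the context-camera grid, so
    # each colour pixel can be paired with an interpolated range value.
    hd_range = zoom(lidar_range, scale, order=1)
    print(lidar_range.shape, "->", hd_range.shape)  # (128, 128) -> (1024, 1024)
    ```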

  6. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of the imaging system, and (ii) that reconstruction algorithms preferring sparseness are advantageous for 3D photoacoustic imaging of sparse objects.
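
    The singular-value analysis mentioned above can be reproduced in miniature: build a stand-in imaging operator H mapping object voxels to detector measurements, inspect its singular-value spectrum, and count the measurable singular vectors above an assumed noise floor. The operator below is synthetic; it illustrates the analysis only, not the authors' 15-element system model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic imaging operator: 15 detectors x 64 time samples
    # (960 measurements) of a 20x20 object grid (400 unknowns), with an
    # imposed decaying spectrum. A real operator would come from the
    # photoacoustic forward model.
    H = rng.standard_normal((960, 400)) @ np.diag(np.exp(-np.arange(400) / 80.0))

    s = np.linalg.svd(H, compute_uv=False)
    noise_floor = 1e-3 * s[0]          # assumed measurement noise level
    print("measurable singular vectors:", int(np.sum(s > noise_floor)))
    ```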

  7. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as the deformable part model (DPM), is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluates the proposals, and the best-performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  8. Reconstruction of 3D tree stem models from low-cost terrestrial laser scanner data

    NASA Astrophysics Data System (ADS)

    Kelbe, Dave; Romanczyk, Paul; van Aardt, Jan; Cawse-Nicholson, Kerry

    2013-05-01

    With the development of increasingly advanced airborne sensing systems, there is a growing need to support sensor system design, modeling, and product-algorithm development with explicit 3D structural ground truth commensurate to the scale of acquisition. Terrestrial laser scanning is one such technique which could provide this structural information. Commercial instrumentation to suit this purpose has existed for some time now, but cost can be a prohibitive barrier for some applications. As such, we recently developed a unique laser scanning system from readily available components, supporting low-cost, highly portable, and rapid measurement of below-canopy 3D forest structure. Tools were developed to automatically reconstruct tree stem models as an initial step towards virtual forest scene generation. The objective of this paper is to assess the potential of this hardware/algorithm suite to reconstruct 3D stem information from a single scan of a New England hardwood forest site. Detailed tree stem structure (e.g., taper, sweep, and lean) is recovered for trees of varying diameter, species, and range from the sensor. Absolute stem diameter retrieval accuracy is 12.5%, with a 4.5% overestimation bias likely due to the LiDAR beam divergence.
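
    Stem diameter retrieval from a single terrestrial-lidar scan is commonly posed as fitting a circle to a thin horizontal slice of stem points. The algebraic (Kåsa) least-squares fit below is one generic way to do this, shown on synthetic data; it is not necessarily the authors' algorithm.

    ```python
    import numpy as np

    def fit_circle(xy):
        """Algebraic (Kasa) least-squares circle fit to Nx2 points.
        Solves x^2 + y^2 = 2*a*x + 2*b*y + c for centre (a, b) and radius."""
        A = np.c_[2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))]
        b = (xy ** 2).sum(axis=1)
        (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        r = np.sqrt(c + cx ** 2 + cy ** 2)
        return (cx, cy), r

    # Simulated breast-height slice of a 30 cm diameter stem, seen from one
    # side (a single scan only illuminates part of the circumference).
    rng = np.random.default_rng(3)
    theta = rng.uniform(np.pi / 4, 3 * np.pi / 4, 200)
    pts = (0.15 * np.c_[np.cos(theta), np.sin(theta)]
           + 0.003 * rng.standard_normal((200, 2)))

    centre, radius = fit_circle(pts)
    print(f"estimated diameter: {2 * radius * 100:.1f} cm")
    ```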

  9. HATS (High Altitude Thermal Sounder): a passive sensor solution to 3D high-resolution mapping of upper atmosphere dynamics (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gordley, Larry; Marshall, Benjamin T.; Lachance, Richard L.

    2016-10-01

    This presentation introduces a High Altitude Thermal Sounder (HATS) that has the potential to resolve the thermal structure of the upper atmosphere (cloud top to 100 km) with both horizontal and vertical resolution of 5-7 km or better. This would allow the complete characterization of the wave structures that carry weather signatures from the underlying atmosphere. Using a novel gas correlation technique, an extremely high-resolution spectral scan is accomplished by measuring a Doppler-modulated signal as the atmospheric thermal scene passes through the HATS 2D FOV. This high spectral resolution, difficult or impossible to achieve with any other passive technique, enables the separation of radiation emanating at high altitudes from that emanating at low altitudes. A principal component analysis of these modulation signals then exposes the complete thermal structure of the upper atmosphere. We show that nadir sounding from low Earth orbit, using various branches of CO2 emission in the 15 to 17 micron region, with sufficient spectral resolution and spectral measurement range, can distinguish thermal energy that peaks at various altitudes. By observing the up-welling atmospheric emission through a low-pressure (Doppler-broadened) gas cell as the scene passes through the FOV, a modulation signal is created as the atmospheric emission lines are shifted through the spectral position of the gas cell absorption lines. The modulation signal is shown to be highly correlated to the emission coming from the spectral location of the gas cell lines relative to the atmospheric emission lines. This effectively produces a scan of the atmospheric emission with Doppler line resolution. Similar to thermal sounding of the troposphere, a principal component analysis of the modulation signal can be used to produce an altitude-resolved profile, given a reasonable a priori temperature profile. It is then shown that with the addition of a limb observation with one CO2 broadband channel

  10. 3D Elevation Program: summary for Vermont

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  11. 3D Elevation Program: summary for Nebraska

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  12. The Use of a Lidar Forward-Looking Turbulence Sensor for Mixed-Compression Inlet Unstart Avoidance and Gross Weight Reduction on a High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Soreide, David; Bogue, Rodney K.; Ehernberger, L. J.; Seidel, Jonathan

    1997-01-01

    Inlet unstart causes a disturbance akin to severe turbulence for a supersonic commercial airplane. Consequently, the current goal for the frequency of unstarts is a few times per fleet lifetime. For a mixed-compression inlet, there is a tradeoff between propulsion system efficiency and unstart margin. As the unstart margin decreases, propulsion system efficiency increases, but so does the unstart rate. This paper intends, first, to quantify that tradeoff for the High Speed Civil Transport (HSCT) and, second, to examine the benefits of using a sensor to detect turbulence ahead of the airplane. When the presence of turbulence is known with sufficient lead time to allow the propulsion system to adjust the unstart margin, inlet unstarts can be minimized while overall efficiency is maximized. The NASA Airborne Coherent Lidar for Advanced In-Flight Measurements program is developing a lidar system to serve as a prototype of the forward-looking sensor. This paper reports on the progress of this development program and its application to the prevention of inlet unstart in a mixed-compression supersonic inlet. Quantified benefits include significantly reduced takeoff gross weight (TOGW), which could increase payload, reduce direct operating costs, or increase range for the HSCT.

  13. Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping

    PubMed Central

    Cui, Tingting; Ji, Shunping; Shan, Jie; Gong, Jianya; Liu, Kejian

    2016-01-01

    For multi-sensor integrated systems, such as the mobile mapping system (MMS), data fusion at sensor-level, i.e., the 2D-3D registration between an optical camera and LiDAR, is a prerequisite for higher level fusion and further applications. This paper proposes a line-based registration method for panoramic images and a LiDAR point cloud collected by a MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic camera. Finally, the proposed registration method is evaluated for two types of camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and the LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, with the latter being more reliable. PMID:28042855

  14. Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping.

    PubMed

    Cui, Tingting; Ji, Shunping; Shan, Jie; Gong, Jianya; Liu, Kejian

    2016-12-31

    For multi-sensor integrated systems, such as the mobile mapping system (MMS), data fusion at sensor-level, i.e., the 2D-3D registration between an optical camera and LiDAR, is a prerequisite for higher level fusion and further applications. This paper proposes a line-based registration method for panoramic images and a LiDAR point cloud collected by a MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic camera. Finally, the proposed registration method is evaluated for two types of camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and the LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, with the latter being more reliable.
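
    Under the ideal spherical camera model mentioned above, a LiDAR point maps to panoramic pixel coordinates through its azimuth and elevation in the camera frame. The sketch below shows this forward projection for an assumed image size and an assumed rigid LiDAR-to-camera transform; the rigorous panoramic model and the line-based estimation of the transform itself are beyond this snippet.

    ```python
    import numpy as np

    def project_spherical(points_cam, width=4096, height=2048):
        """Project 3D points (camera frame, Nx3) onto an equirectangular
        panorama: column from azimuth, row from elevation."""
        x, y, z = points_cam.T
        az = np.arctan2(y, x)                                   # [-pi, pi]
        el = np.arcsin(z / np.linalg.norm(points_cam, axis=1))  # [-pi/2, pi/2]
        u = (az + np.pi) / (2 * np.pi) * width
        v = (np.pi / 2 - el) / np.pi * height
        return np.c_[u, v]

    # Assumed extrinsics: LiDAR-to-camera rotation R and translation t.
    R = np.eye(3)
    t = np.array([0.0, 0.0, -0.5])
    lidar_pts = np.array([[10.0, 2.0, 1.0], [5.0, -1.0, 0.5]])
    print(project_spherical(lidar_pts @ R.T + t))
    ```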

  15. Alignment of continuous video onto 3D point clouds.

    PubMed

    Zhao, Wenyi; Nister, David; Hsu, Steve

    2005-08-01

    We propose a general framework for aligning continuous (oblique) video onto 3D sensor data. We align a point cloud computed from the video onto the point cloud directly obtained from a 3D sensor. This is in contrast to existing techniques where the 2D images are aligned to a 3D model derived from the 3D sensor data. Using point clouds enables the alignment for scenes full of objects that are difficult to model; for example, trees. To compute 3D point clouds from video, motion stereo is used along with a state-of-the-art algorithm for camera pose estimation. Our experiments with real data demonstrate the advantages of the proposed registration algorithm for texturing models in large-scale semiurban environments. The capability to align video before a 3D model is built from the 3D sensor data offers new practical opportunities for 3D modeling. We introduce a novel modeling-through-registration approach that fuses 3D information from both the 3D sensor and the video. Initial experiments with real data illustrate the potential of the proposed approach.
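
    Once both point clouds exist, the final alignment is a rigid registration problem. A bare-bones iterative closest point (ICP) loop using SciPy's KD-tree is sketched below as a generic stand-in; the paper's algorithm additionally copes with gross initial misalignment, scale and outliers.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iters=20):
        """Minimal point-to-point ICP: returns source aligned to target."""
        src = source.copy()
        tree = cKDTree(target)
        for _ in range(iters):
            _, idx = tree.query(src)        # nearest target point per source point
            corr = target[idx]
            mu_s, mu_t = src.mean(0), corr.mean(0)
            # Best rotation from the cross-covariance via SVD (Kabsch).
            U, _, Vt = np.linalg.svd((src - mu_s).T @ (corr - mu_t))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:        # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            src = (src - mu_s) @ R.T + mu_t
        return src

    rng = np.random.default_rng(4)
    target = rng.uniform(-1, 1, (500, 3))
    angle = np.radians(10)
    Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
    source = target @ Rz.T + np.array([0.2, -0.1, 0.05])
    aligned = icp(source, target)
    print("RMS error:", np.sqrt(((aligned - target) ** 2).sum(1)).mean())
    ```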

  16. A 3D Cloud-Construction Algorithm for the EarthCARE Satellite Mission

    NASA Technical Reports Server (NTRS)

    Barker, H. W.; Jerg, M. P.; Wehr, T.; Kato, S.; Donovan, D. P.; Hogan, R. J.

    2011-01-01

    This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data.

  17. Twin Peaks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The two hills in the distance, approximately one to two kilometers away, have been dubbed the 'Twin Peaks' and are of great interest to Pathfinder scientists as objects of future study. 3D glasses are necessary to identify surface detail. The white areas on the left hill, called the 'Ski Run' by scientists, may have been formed by hydrologic processes.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye'.


  18. 3D and beyond

    NASA Astrophysics Data System (ADS)

    Fung, Y. C.

    1995-05-01

    This conference on physiology and function covers a wide range of subjects, including the vasculature and blood flow; the flow of gas, water, and blood in the lung; neurological structure and function; modeling; and the motion and mechanics of organs. Many technologies are discussed. I believe that the list would include a robotic photographer, to hold the optical equipment in a precisely controlled way to obtain the images for the user. Why are 3D images needed? They are needed to achieve certain objectives through measurements of objects. For example, in order to improve performance in sports or the beauty of a person, we measure the form, dimensions, appearance, and movements.

  19. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.
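
    The core signal path of such a system (filter each source with a pair of head-related impulse responses, then sum into two ear channels) can be shown in a few lines of NumPy. The Convolvotron uses four sources and large time-varying filters in hardware; the sketch below applies one fixed, made-up filter pair to a single source, so it is only a conceptual illustration.

    ```python
    import numpy as np

    fs = 44_100
    t = np.arange(fs) / fs
    source = np.sin(2 * np.pi * 440 * t)   # one mono source, 1 s of 440 Hz

    # Made-up 'HRTF' pair: the right ear gets a delayed, attenuated copy,
    # crudely simulating a source located to the listener's left.
    hrir_left = np.zeros(64)
    hrir_left[0] = 1.0
    hrir_right = np.zeros(64)
    hrir_right[30] = 0.5                   # roughly 0.7 ms interaural delay

    left = np.convolve(source, hrir_left)[:len(t)]
    right = np.convolve(source, hrir_right)[:len(t)]
    binaural = np.stack([left, right], axis=1)  # 2-channel headphone output
    print(binaural.shape)
    ```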

  20. 3-D capaciflector

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1998-01-01

    A capacitive-type proximity sensor having improved range and sensitivity between a surface of arbitrary shape and an intruding object in the vicinity of the surface. The sensor has one or more outer conductors on the surface which serve as capacitive sensing elements shaped to conform to the underlying surface of a machine. Each sensing element is backed by a reflector driven at the same voltage and in phase with the corresponding capacitive sensing element. Each reflector, in turn, serves to reflect the electric field lines of the capacitive sensing element away from the surface of the machine on which the sensor is mounted, so as to enhance the component constituted by the capacitance between the sensing element and an intruding object as a fraction of the total capacitance between the sensing element and ground. Each sensing element and its corresponding reflecting element are electrically driven in phase, and the capacitance between the individual sensing elements and the sensed object is determined using circuitry known to the art. The reflector may be shaped to shield the sensor and to shape its field of view, in effect providing an electrostatic lensing effect. Sensors and reflectors may be fabricated using a variety of known techniques, such as vapor deposition, sputtering, painting, plating, or deformation of flexible films, to provide conformal coverage of surfaces of arbitrary shape.

  1. 3-D MAPPING TECHNOLOGIES FOR HIGH LEVEL WASTE TANKS

    SciTech Connect

    Marzolf, A.; Folsom, M.

    2010-08-31

    This research investigated four techniques that could be applicable for mapping the solids remaining in radioactive waste tanks at the Savannah River Site: stereo vision, LIDAR, flash LIDAR, and Structure from Motion (SfM). Stereo vision is the least appropriate technique for the solids-mapping application. Although the equipment cost is low and repackaging would be fairly simple, the algorithms to create a 3D image from stereo vision would require significant further development and may not even be applicable, since stereo vision works by finding disparity in feature point locations between the images taken by the cameras. When minimal variation in visual texture exists for an area of interest, it becomes difficult for the software to detect correspondences for that object. SfM appears to be appropriate for solids mapping in waste tanks. However, equipment development would be required for positioning and moving the camera in the tank space to capture a sequence of images of the scene. Since SfM requires the identification of distinctive features and associates those features with their corresponding instantiations in the other image frames, mockup testing would be required to determine the applicability of SfM technology for mapping of waste in tanks. There may be too few features to track between image frames to employ SfM, since the remaining solids in the interior of the waste tanks may have a uniform appearance. Although scanning LIDAR appears to be an adequate solution, the expense of the equipment ($80,000-$120,000) and the need for further development to allow tank deployment may prohibit utilizing this technology. The development would include repackaging of equipment to permit deployment through the 4-inch access ports and to keep the equipment relatively uncontaminated to allow use in additional tanks. 3D flash LIDAR has a number of advantages over stereo vision, scanning LIDAR, and SfM, including full frame
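
    The stereo-vision limitation noted above (correspondence fails on texture-poor surfaces) is easy to demonstrate with a standard block matcher. The snippet below is a generic OpenCV example that assumes a rectified grayscale image pair on disk; it is not software from the cited study.

    ```python
    import cv2

    # Assumed input: a rectified grayscale stereo pair (left.png / right.png).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    if left is None or right is None:
        raise SystemExit("provide a rectified stereo pair as left.png / right.png")

    # Block matching finds, for each left-image patch, the best-correlating
    # patch along the same row in the right image; on uniform (texture-poor)
    # regions the match is ambiguous and the matcher returns invalid values.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right)

    valid = (disparity > 0).mean()
    print(f"fraction of pixels with a valid disparity: {valid:.2f}")
    ```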

  2. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real time during the surgical procedure. The CAS system aids in dealing with complex cases, with benefits for the patient, for surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessment of the difficulties of the surgical procedures prior to surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727 and DE018962. PMID:20816308

  3. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    An area of rocky terrain near the landing site of the Sagan Memorial Station can be seen in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  4. Diode laser lidar wind velocity sensor using a liquid-crystal retarder for non-mechanical beam-steering.

    PubMed

    Rodrigo, Peter John; Iversen, Theis F Q; Hu, Qi; Pedersen, Christian

    2014-11-03

    We extend the functionality of a low-cost CW diode laser coherent lidar from radial wind speed (scalar) sensing to wind velocity (vector) measurements. Both the speed and the horizontal direction of the wind at ~80 m remote distance are derived from two successive radial speed estimates by alternately steering the lidar probe beam along two different lines-of-sight (LOS) with a 60° angular separation. Dual-LOS beam-steering is implemented optically with no moving parts by means of a controllable liquid-crystal retarder (LCR). The LCR switches the polarization of the lidar beam between two orthogonal linear states so that the beam either transmits through or reflects off a polarization splitter. The room-temperature switching time between the two LOS is measured to be on the order of 100 μs in one switch direction but 16 ms in the opposite transition. Radial wind speed measurement (at 33 Hz rate) while the lidar beam is repeatedly steered from one LOS to the other every half second is experimentally demonstrated, resulting in 1 Hz estimates of wind velocity magnitude and direction at better than 0.1 m/s and 1° resolution, respectively.
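
    Recovering the horizontal wind vector from the two radial speeds is a 2-by-2 linear inversion: for a near-horizontal beam, each line of sight measures v_r = u sin(az) + v cos(az). The sketch below assumes LOS azimuths of 0° and 60°; the instrument's actual geometry and signal processing are richer than this.

    ```python
    import numpy as np

    def wind_from_two_los(vr1, vr2, az1_deg=0.0, az2_deg=60.0):
        """Solve v_r_i = u*sin(az_i) + v*cos(az_i) for the horizontal wind."""
        a1, a2 = np.radians([az1_deg, az2_deg])
        A = np.array([[np.sin(a1), np.cos(a1)],
                      [np.sin(a2), np.cos(a2)]])
        u, v = np.linalg.solve(A, [vr1, vr2])
        speed = np.hypot(u, v)
        direction = np.degrees(np.arctan2(u, v)) % 360  # azimuth blown toward
        return speed, direction

    # Example: radial speeds of 4.0 and 6.5 m/s on the two lines of sight.
    print(wind_from_two_los(4.0, 6.5))
    ```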

  5. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r, θ, z) inside the magnet bore. The same conductor geometry that is used to simulate line currents is also used in CAD, making modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.
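
    For the transverse (2-D) part of such a representation, the field inside the bore follows the familiar multipole series B_y + iB_x = Σ_n (B_n + iA_n) (z/r_ref)^(n-1) with z = r e^(iθ). The snippet below evaluates that series from given coefficients; it is a textbook 2-D illustration, not the full 3-D harmonic formalism of the report.

    ```python
    import numpy as np

    def multipole_field(r, theta, b, a, r_ref=0.01):
        """Evaluate B_y + i*B_x at (r, theta) from normal (b) and skew (a)
        multipole coefficients, indexed from n = 1 (dipole)."""
        n = np.arange(1, len(b) + 1)
        z = (r / r_ref) * np.exp(1j * theta)
        field = np.sum((np.asarray(b) + 1j * np.asarray(a)) * z ** (n - 1))
        return field.imag, field.real  # (B_x, B_y) in tesla

    # Pure quadrupole (b2 = 1 T at the 10 mm reference radius) plus a small
    # sextupole, evaluated at r = 5 mm, theta = 30 degrees.
    Bx, By = multipole_field(0.005, np.radians(30), b=[0, 1.0, 0.02], a=[0, 0, 0])
    print(Bx, By)
    ```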

  6. In situ correlative measurements for the ultraviolet differential absorption lidar and the high spectral resolution lidar air quality remote sensors: 1980 PEPE/NEROS program

    NASA Technical Reports Server (NTRS)

    Gregory, G. L.; Beck, S. M.; Mathis, J. J., Jr.

    1981-01-01

    In situ correlative measurements were obtained with a NASA aircraft in support of two NASA airborne remote sensors participating in the Environmental Protection Agency's 1980 persistent elevated pollution episode (PEPE) and Northeast regional oxidant study (NEROS) field program. The measurements provide data for evaluating the capability of the two remote sensors to measure mixing layer height and ozone and aerosol concentrations in the troposphere during the 1980 PEPE/NEROS program. The in situ aircraft was instrumented to measure temperature, dewpoint temperature, ozone concentration, and light scattering coefficient. In situ measurements for ten correlative missions are given and discussed. Each data set is presented in graphical and tabular format; aircraft flight plans are included.

  7. Oceanic Lidar

    NASA Technical Reports Server (NTRS)

    Carder, K. L. (Editor)

    1981-01-01

    Instrument concepts which measure ocean temperature, chlorophyll, sediment and Gelbstoffe concentrations in three dimensions on a quantitative, quasi-synoptic basis were considered. Coastal zone color scanner chlorophyll imagery, laser-stimulated Raman temperature and fluorescence spectroscopy, existing airborne Lidar and laser fluorosensing instruments, and their accuracies in quantifying concentrations of chlorophyll, suspended sediments and Gelbstoffe are presented. Lidar applications to phytoplankton dynamics and photochemistry, Lidar radiative transfer and signal interpretation, and Lidar technology are discussed.

  8. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    PubMed Central

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204
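
    The voxel-plus-heightmap idea can be sketched compactly: quantize points into horizontal grid cells (the 'lowermost heightmap' keeps each cell's minimum height) and label as ground the points lying close to that minimum. This is a simplified reading of the framework, with an arbitrary cell size and height tolerance, not the authors' real-time implementation.

    ```python
    import numpy as np

    def segment_ground(points, cell=0.5, tol=0.2):
        """Label points as ground if they lie within `tol` metres of the
        lowest point in their (cell x cell) horizontal grid cell."""
        ij = np.floor(points[:, :2] / cell).astype(np.int64)
        keys = ij[:, 0] * 100003 + ij[:, 1]          # hash of 2-D cell index
        order = np.argsort(keys)
        ground = np.zeros(len(points), dtype=bool)
        for grp in np.split(order, np.flatnonzero(np.diff(keys[order])) + 1):
            zmin = points[grp, 2].min()              # lowermost height in cell
            ground[grp] = points[grp, 2] < zmin + tol
        return ground

    rng = np.random.default_rng(5)
    pts = np.c_[rng.uniform(0, 20, (2000, 2)), 0.05 * rng.standard_normal(2000)]
    pts[:100, 2] += 1.5                              # raised (non-ground) points
    print("ground fraction:", segment_ground(pts).mean())
    ```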

  9. Point Cloud Refinement with a Target-Free Intrinsic Calibration of a Mobile Multi-Beam LIDAR System

    NASA Astrophysics Data System (ADS)

    Nouiraa, H.; Deschaud, J. E.; Goulettea, F.

    2016-06-01

    LIDAR sensors are widely used in mobile mapping systems. Mobile mapping platforms allow fast acquisition in cities, for example, which would take much longer with static mapping systems. LIDAR sensors provide reliable and precise 3D information, which can be used in various applications: mapping of the environment, localization of objects, and detection of changes. With recent developments, multi-beam LIDAR sensors have appeared and are able to provide a high amount of data with a high level of detail. A mono-beam LIDAR sensor mounted on a mobile platform requires an extrinsic calibration, so that the data acquired and registered in the sensor reference frame can be represented in the body reference frame modeling the mobile system. For a multi-beam LIDAR sensor, the calibration can be separated into two distinct parts: on one hand, an extrinsic calibration, in common with mono-beam LIDAR sensors, which gives the transformation between the sensor Cartesian reference frame and the body reference frame; on the other hand, an intrinsic calibration, which gives the relations between the beams of the multi-beam sensor. This calibration depends on a model given by the manufacturer, but the model can be non-optimal, which would introduce errors and noise into the acquired point clouds. In the literature, some optimizations of the calibration parameters have been proposed, but they need a specific routine or environment, which can be constraining and time-consuming. In this article, we present an automatic method for improving the intrinsic calibration of a multi-beam LIDAR sensor, the Velodyne HDL-32E. The proposed approach does not need any calibration target and only uses information from the acquired point clouds, which makes it simple and fast to use. A corrected model for the Velodyne sensor is also proposed. An energy function which penalizes points far from local planar surfaces is used to optimize the different proposed parameters.
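
    The energy function described above can be paraphrased as a sum of squared point-to-plane residuals, where each plane is fit to a point's nearest neighbours; a well-calibrated sensor yields locally planar walls and hence low energy. The sketch below evaluates such a cost with SciPy's KD-tree; the actual parameterization of the Velodyne model and the optimization loop are not reproduced.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def planarity_energy(points, k=10):
        """Sum of squared distances from each point's neighbourhood to its
        best-fit plane; small values mean locally planar data."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        energy = 0.0
        for nbrs in idx:
            nb = points[nbrs]
            centred = nb - nb.mean(0)
            # Plane normal = eigenvector of the smallest covariance eigenvalue.
            w, v = np.linalg.eigh(centred.T @ centred)
            energy += float(np.sum((centred @ v[:, 0]) ** 2))
        return energy

    rng = np.random.default_rng(6)
    wall = np.c_[rng.uniform(0, 5, (300, 2)), np.zeros(300)]  # flat wall, z = 0
    print("clean wall:", planarity_energy(wall))
    print("miscalibrated wall:",
          planarity_energy(wall + 0.02 * rng.standard_normal((300, 3))))
    ```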

  10. Studying the Impact of the Three Dimensional Canopy Structure on LIDAR Waveforms Evaluated with Field Measurements

    NASA Astrophysics Data System (ADS)

    Xu, L.; Knyazikhin, Y.; Myneni, R. B.; Strahler, A. H.; Schaaf, C.; Antonarakis, A. S.; Moorcroft, P. R.

    2011-12-01

    The three-dimensional structure of a forest (its composition, density, height, crown geometry, within-crown foliage distribution and properties of individual leaves) has a direct impact on the lidar waveform. The pair-correlation function, defined as the probability of simultaneously finding phytoelements at two points, is the most natural and physically meaningful descriptor of canopy structure over a wide range of scales. The stochastic radiative transfer equations naturally admit this measure and thus provide a powerful means to investigate 3D canopy structure from space. NASA's airborne Laser Vegetation Imaging Sensor (LVIS) and ground-based data on canopy structure acquired over 5 sites in New England, California and the La Selva (Costa Rica) tropical forest were analyzed to assess the impact of 3D canopy structure on the lidar waveform and the ability of the stochastic radiative transfer equations to simulate the 3D effects. Our results suggest the pair-correlation function is sensitive to horizontal and vertical clumping, crown geometry and the spatial distribution of trees. Its use in the stochastic radiative transfer equation allows us to accurately simulate the effects of 3D canopy structure on the lidar waveform. Specifically, we found that (1) attenuation of the waveform occurs at a slower rate than 1D models predict, which may result in an underestimation of the foliage profile if 3D effects are ignored; (2) a 1D model is unable to simultaneously match the simulated waveform and the measured surface reflectance, i.e., an unrealistically high value of surface reflectance must be used to simulate the ground return of sparse vegetation; and (3) the spatial distribution of trees has a strong impact on the lidar waveform. Simple analytical models of the pair-correlation function will also be discussed.

  11. Intraoral 3D scanner

    NASA Astrophysics Data System (ADS)

    Kühmstedt, Peter; Bräuer-Burchardt, Christian; Munkelt, Christoph; Heinze, Matthias; Palme, Martin; Schmidt, Ingo; Hintersehr, Josef; Notni, Gunther

    2007-09-01

    Here a new set-up of a 3D scanning system for CAD/CAM in the dental industry is proposed. The system is designed for direct scanning of dental preparations within the mouth. The measuring process is based on a phase correlation technique in combination with fast fringe projection in a stereo arrangement. The novelty of the approach is characterized by the following features: a phase correlation between the phase values of the images of two cameras is used for the coordinate calculation. This works contrary to the usage of only phase values (phasogrammetry) or classical triangulation (phase values and camera image coordinate values) for the determination of the coordinates. The main advantage of the method is that the absolute value of the phase at each point does not directly determine the coordinate, so errors in the determination of the coordinates are prevented. Furthermore, using the epipolar geometry of the stereo-like arrangement, the phase unwrapping problem of fringe analysis can be solved. The endoscope-like measurement system contains one projection channel and two camera channels for illumination and observation of the object, respectively. The new system has a measurement field of nearly 25 mm × 15 mm. The user can measure two or three teeth at one time, so the system can be used for scanning anything from a single tooth up to bridge preparations. In the paper the first realization of the intraoral scanner is described.

  12. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  13. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  14. Overview of the first Multi-center Airborne Coherent Atmospheric Wind Sensor (MACAWS) experiment: Conversion of a ground-based lidar for airborne applications

    SciTech Connect

    Howell, J.N.; Hardesty, R.M.; Rothermel, J.; Menzies, R.T.

    1996-12-31

    The first Multi-center Airborne Coherent Atmospheric Wind Sensor (MACAWS) field experiment demonstrated an airborne high energy TEA CO{sub 2} Doppler lidar system for measurement of atmospheric wind fields and aerosol structure. The system was deployed on the NASA DC-8 during September 1995 in a series of checkout flights to observe several important atmospheric phenomena, including upper level winds in a Pacific hurricane, marine boundary layer winds, cirrus cloud properties, and land-sea breeze structure. The instrument, with its capability to measure three-dimensional winds and backscatter fields, promises to be a valuable tool for climate and global change, severe weather, and air quality research. In this paper, the authors describe the airborne instrument, assess its performance, discuss future improvements, and show some preliminary results from September experiments.

  15. Lidar Report

    SciTech Connect

    Woolpert

    2009-04-01

    This report provides an overview of the LiDAR acquisition methodology employed by Woolpert on the 2009 USDA - Savannah River LiDAR Site Project. LiDAR system parameters and flight and equipment information are also included. The LiDAR data acquisition was executed in ten sessions from February 21 through final reflights on March 2, 2009, using two Leica ALS50-II 150 kHz multi-pulse-enabled LiDAR systems. Specific details about the ALS50-II systems are included in Section 4 of this report.

  16. Physical sensor difference-based method and virtual sensor difference-based method for visual and quantitative estimation of lower limb 3D gait posture using accelerometers and magnetometers.

    PubMed

    Liu, Kun; Inoue, Yoshio; Shibata, Kyoko

    2012-01-01

    An approach using a physical sensor difference-based algorithm and a virtual sensor difference-based algorithm to visually and quantitatively confirm lower limb posture was proposed. Three accelerometers and two MAG(3)s (inertial sensor modules) were used to measure the accelerations and magnetic field data for the calculation of the flexion/extension (FE) and abduction/adduction (AA) angles of the hip joint and the FE, AA and internal/external rotation (IE) angles of the knee joint; the trajectories of the knee and ankle joints were then obtained from the joint angles and segment lengths. There was no integration of acceleration or angular velocity for the joint rotations and positions, which is an improvement on previous methods in the recent literature. Compared with a camera motion capture system, the correlation coefficients in five trials were above 0.91 and 0.92 for the hip FE and AA, respectively, and higher than 0.94, 0.93 and 0.93 for the knee joint FE, AA and IE, respectively.
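
    The trajectory step (joint angles plus segment lengths give the knee and ankle positions) is ordinary forward kinematics. A sagittal-plane, FE-only sketch is given below with assumed segment lengths and sign conventions; the paper's full 3D chain with AA and IE rotations, and the sensor-difference algorithms themselves, are not reproduced.

    ```python
    import numpy as np

    def leg_positions(hip_fe_deg, knee_fe_deg, thigh=0.45, shank=0.43):
        """Sagittal-plane forward kinematics from a fixed hip at the origin.
        Angles measured from straight down; hip flexion swings the thigh
        forward, knee flexion rotates the shank backward relative to it."""
        h = np.radians(hip_fe_deg)
        k = np.radians(hip_fe_deg - knee_fe_deg)   # cumulative shank angle
        knee = np.array([thigh * np.sin(h), -thigh * np.cos(h)])
        ankle = knee + np.array([shank * np.sin(k), -shank * np.cos(k)])
        return knee, ankle

    # Mid-swing example: 30 deg hip flexion, 40 deg knee flexion.
    knee, ankle = leg_positions(30, 40)
    print("knee:", knee.round(3), "ankle:", ankle.round(3))
    ```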

  17. Performance testing of 3D point cloud software

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-10-01

    LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for managing data are available in the market, including open-source suites; however, users often lack methodologies to properly verify their performance. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VRMesh, AutoCAD Civil 3D and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in the loading time of the point clouds and in CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.
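
    Tests of this kind (loading time, CPU usage, working set) can be scripted for any point-cloud tool that exposes a Python entry point. The sketch below, which is not the paper's methodology, times a load call and samples process memory with psutil; the loader and its input are placeholders.

    ```python
    import time
    import numpy as np
    import psutil

    def benchmark(load_fn, *args):
        """Time a point-cloud loading call and report the change in
        resident memory for this process."""
        proc = psutil.Process()
        rss_before = proc.memory_info().rss
        t0 = time.perf_counter()
        cloud = load_fn(*args)
        elapsed = time.perf_counter() - t0
        rss_delta = proc.memory_info().rss - rss_before
        return elapsed, rss_delta / 2 ** 20, cloud

    # Placeholder loader: a real test would call the suite's import routine
    # on a large mobile-LiDAR file instead of generating random points.
    elapsed, mem_mb, _ = benchmark(lambda n: np.random.rand(n, 3), 5_000_000)
    print(f"load: {elapsed:.2f} s, +{mem_mb:.0f} MiB resident")
    ```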

  18. 3D IC for Future HEP Detectors

    SciTech Connect

    Thom, J.; Lipton, R.; Heintz, U.; Johnson, M.; Narain, M.; Badman, R.; Spiegel, L.; Triphati, M.; Deptuch, G.; Kenney, C.; Parker, S.; Ye, Z.; Siddons, D.

    2014-11-07

    Three dimensional integrated circuit technologies offer the possibility of fabricating large area arrays of sensors integrated with complex electronics with minimal dead area, which makes them ideally suited for applications at the LHC upgraded detectors and other future detectors. Here we describe ongoing R&D efforts to demonstrate functionality of components of such detectors. This also includes the study of integrated 3D electronics with active edge sensors to produce "active tiles" which can be tested and assembled into arrays of arbitrary size with high yield.

  19. A Conceptual Design For A Spaceborne 3D Imaging Lidar

    NASA Technical Reports Server (NTRS)

    Degnan, John J.; Smith, David E. (Technical Monitor)

    2002-01-01

    First-generation spaceborne altimetric approaches are not well suited to generating the few-meter-level horizontal resolution and decimeter-accuracy vertical (range) resolution on the global scale desired by many in the Earth and planetary science communities. The present paper discusses the major technological impediments to achieving few-meter transverse resolution globally using conventional approaches and offers a feasible conceptual design which utilizes modest-power kHz-rate lasers, array detectors, photon-counting multi-channel timing receivers, and dual-wedge optical scanners with transmitter point-ahead correction.

  20. Analysis of Doppler Lidar Data Acquired During the Pentagon Shield Field Campaign

    DTIC Science & Technology

    2011-04-01

    Observations from two coherent Doppler lidars deployed during the Pentagon Shield field campaign are analyzed in conjunction with other sensors.

  1. Lidar configurations for wind turbine control

    NASA Astrophysics Data System (ADS)

    Mirzaei, Mahmood; Mann, Jakob

    2016-09-01

    Lidar sensors have proved to be very beneficial in the wind energy industry. They can be used for yaw correction, feed-forward pitch control and load verification. However, current lidars are expensive. One way to reduce the price is to use lidars with few measurement points. Finding the best configuration of an inexpensive lidar in terms of the number of measurement points, the measurement distance and the opening angle is the subject of this study. In order to solve the problem, a lidar model is developed and used to measure wind speed in a turbulence box. The effective wind speed measured by the lidar is compared against the effective wind speed on a wind turbine rotor, both theoretically and through simulations. The study provides results for choosing the best configuration of a lidar with few measurement points.
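
    The comparison at the heart of the study (lidar-estimated versus rotor-effective wind speed) can be mocked up with a frozen wind field: average the longitudinal wind over the rotor disc, and separately reconstruct it from the radial projections seen at a few lidar measurement points. The field, geometry and beam count below are illustrative assumptions, not the paper's turbulence-box setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def wind_u(y, z):
        """Toy longitudinal wind over the rotor plane: 8 m/s mean plus shear
        and a smooth perturbation (stand-in for a turbulence box)."""
        return 8.0 + 0.05 * z + 0.5 * np.sin(0.1 * y) * np.cos(0.08 * z)

    # Rotor-effective speed: average of u over a 40 m radius disc.
    theta = rng.uniform(0, 2 * np.pi, 4000)
    r = 40 * np.sqrt(rng.uniform(0, 1, 4000))
    v_rotor = wind_u(r * np.cos(theta), r * np.sin(theta)).mean()

    # Lidar estimate: n beams at a 15 deg opening half-angle measure the
    # radial component u*cos(angle); deproject each, then average.
    n_beams, half_angle = 4, np.radians(15)
    beam_az = np.linspace(0, 2 * np.pi, n_beams, endpoint=False)
    y, z = 30 * np.cos(beam_az), 30 * np.sin(beam_az)
    vr = wind_u(y, z) * np.cos(half_angle)       # measured radial speeds
    v_lidar = (vr / np.cos(half_angle)).mean()   # deprojected and averaged

    print(f"rotor-effective {v_rotor:.2f} m/s, lidar estimate {v_lidar:.2f} m/s")
    ```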

  2. The use of lidar as optical remote sensors in the assessment of air quality near oil refineries and petrochemical sites

    NASA Astrophysics Data System (ADS)

    Steffens, Juliana; Landulfo, Eduardo; Guardani, Roberto; Oller do Nascimento, Cláudio A.; Moreira, Andréia

    2008-10-01

    Petrochemical and oil refining facilities play an increasingly important role in the industrial context. The corresponding need for monitoring emissions from these facilities, as well as in their neighborhood, has risen in importance, leading to the present tendency of creating real-time data acquisition and analysis systems. The use of LIDAR-based techniques, both for air quality and for emissions monitoring purposes, is currently being developed for the area of Cubatão, São Paulo, one of the largest petrochemical and industrial sites in Brazil. In a partnership with the University of São Paulo (USP), the Brazilian oil company PETROBRAS has implemented an Environmental Research Center (CEPEMA) located in the industrial site, in which the fieldwork will be carried out. The current joint R&D project focuses on the development of a real-time acquisition system, together with automated multicomponent chemical analysis. Additionally, fugitive emissions from oil processing and storage sites will be measured, together with the main greenhouse gases (CO2, CH4) and aerosols. Our first effort is to assess the potential chemical species coming out of an oil refinery site and to verify which LIDAR technique (DIAL, Raman, or fluorescence) would be most efficient in detecting and quantifying the specific atmospheric emissions.

  3. LIDAR Surveys for Road Design in Thailand

    DTIC Science & Technology

    2004-11-01

    Keywords: LiDAR, DEM, road design, pilot project, Thailand, NBIA. Concerned with environmental and drainage problems associated with road... as hilly, unstable terrain. LiDAR technology is of great interest to DOH as its use can save enormous amounts of time and money by providing

  4. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begoña García-Lorenzo and Arlette Pécontal-Rousset.

  5. Spherical 3D isotropic wavelets

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2012-04-01

    Context. Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. Aims: The aim of this paper is to present a new formalism for a spherical 3D isotropic wavelet, i.e. one based on the SFB decomposition of a 3D field and accompany the formalism with a public code to perform wavelet transforms. Methods: We describe a new 3D isotropic spherical wavelet decomposition based on the undecimated wavelet transform (UWT) described in Starck et al. (2006). We also present a new fast discrete spherical Fourier-Bessel transform (DSFBT) based on both a discrete Bessel transform and the HEALPIX angular pixelisation scheme. We test the 3D wavelet transform and as a toy-application, apply a denoising algorithm in wavelet space to the Virgo large box cosmological simulations and find we can successfully remove noise without much loss to the large scale structure. Results: We have described a new spherical 3D isotropic wavelet transform, ideally suited to analyse and denoise future 3D spherical cosmological surveys, which uses a novel DSFBT. We illustrate its potential use for denoising using a toy model. All the algorithms presented in this paper are available for download as a public code called MRS3D at http://jstarck.free.fr/mrs3d.html
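
    The undecimated wavelet transform (UWT) underlying this construction is easy to sketch in one dimension with the à trous algorithm: each scale smooths with a progressively dilated B3-spline kernel and stores the difference as a wavelet plane, and the signal is recovered exactly as the sum of all planes. The NumPy sketch below is a 1-D stand-in only; the spherical Fourier-Bessel machinery of MRS3D is not reproduced.

    ```python
    import numpy as np

    def starlet_1d(signal, scales=4):
        """Undecimated (a trous) starlet transform: returns wavelet planes
        plus the final smooth approximation."""
        h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0  # B3-spline kernel
        c = signal.astype(float)
        planes = []
        for j in range(scales):
            # Dilate the kernel by inserting 2**j - 1 zeros between taps.
            step = 2 ** j
            kernel = np.zeros(4 * step + 1)
            kernel[::step] = h
            smooth = np.convolve(c, kernel, mode="same")
            planes.append(c - smooth)                      # plane at scale j
            c = smooth
        planes.append(c)                                   # coarse residual
        return planes

    x = (np.sin(np.linspace(0, 4 * np.pi, 256))
         + 0.1 * np.random.default_rng(8).standard_normal(256))
    planes = starlet_1d(x)
    # Reconstruction is simply the sum of all planes (exact by construction).
    print(len(planes), np.allclose(sum(planes), x))
    ```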

  6. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a highly accurate virtual reality map of the Nation. 3D maps have many uses, with new uses being discovered all the time.

  7. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color-encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or fine details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.
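
    In the rectified case, range computation for a camera-projector pair reduces to the familiar stereo triangulation relation, with the decoded projector column standing in for the second camera. A minimal sketch under that assumption (the 3D FaceCam's actual geometry, calibration, and color decoding are not described in the abstract):

    ```python
    def triangulate(u, v, u_proj, f, baseline):
        """Depth from a structured-light correspondence, assuming a
        rectified camera-projector pair sharing focal length f (pixels),
        separated by `baseline` metres; u, v are camera pixel coordinates
        relative to the principal point, u_proj the decoded projector
        column. Then Z = f * b / disparity, as in two-view stereo."""
        disparity = u - u_proj
        Z = f * baseline / disparity
        return (u * Z / f, v * Z / f, Z)   # metric (X, Y, Z)
    ```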

  8. SAR and LIDAR fusion: experiments and applications

    NASA Astrophysics Data System (ADS)

    Edwards, Matthew C.; Zaugg, Evan C.; Bradley, Joshua P.; Bowden, Ryan D.

    2013-05-01

    In recent years ARTEMIS, Inc. has developed a series of compact, versatile Synthetic Aperture Radar (SAR) systems which have been operated on a variety of small manned and unmanned aircraft. The multi-frequency-band SlimSAR has demonstrated a variety of capabilities including maritime and littoral target detection, ground moving target indication, polarimetry, interferometry, change detection, and foliage penetration. ARTEMIS also continues to build upon the radar's capabilities through fusion with other sensors, such as electro-optical and infrared camera gimbals and light detection and ranging (LIDAR) devices. In this paper we focus on experiments and applications employing SAR and LIDAR fusion. LIDAR is similar to radar in that it transmits a signal which, after being reflected or scattered by a target area, is recorded by the sensor. The differences are that a LIDAR uses a laser as a transmitter and optical sensors as a receiver, and the wavelengths used exhibit a very different scattering phenomenology than the microwaves used in radar, making SAR and LIDAR good complementary technologies. LIDAR is used in many applications including agriculture, archeology, geoscience, and surveying. Some typical data products include digital elevation maps of a target area and features and shapes extracted from the data. The experiments conducted to demonstrate the fusion of SAR and LIDAR data include using a LIDAR DEM to accurately process SAR data of a high-relief (mountainous, urban) area. Feature extraction is also used to improve the geolocation accuracy of the SAR and LIDAR data.

  9. Study of Droplet Activation in Thin Clouds Using Ground-based Raman Lidar and Ancillary Remote Sensors

    NASA Astrophysics Data System (ADS)

    Rosoldi, Marco; Madonna, Fabio; Gumà Claramunt, Pilar; Pappalardo, Gelsomina

    2015-04-01

    Studies on global climate change show that the effects of aerosol-cloud interactions (ACI) on the Earth's radiation balance and climate, also known as indirect aerosol effects, are the most uncertain among all the effects involving atmospheric constituents and processes (Stocker et al., IPCC, 2013). Droplet activation is the most important and challenging process in the understanding of ACI. It represents the direct microphysical link between aerosols and clouds and is probably the largest source of uncertainty in estimating indirect aerosol effects. An accurate estimation of aerosol and cloud microphysical and optical properties near and within the cloud boundaries provides a good framework for the study of droplet activation. This can be obtained by using ground-based profiling remote sensing techniques. In this work, a methodology for the experimental investigation of droplet activation, based on ground-based multi-wavelength Raman lidar and Doppler radar techniques, is presented. The study is focused on the observation of thin liquid water clouds, which are low- or mid-level supercooled clouds characterized by a liquid water path (LWP) lower than about 100 g m-2 (Turner et al., 2007). These clouds are often optically thin, which means that ground-based Raman lidar allows the detection of the cloud top and of the cloud structure above. Broken clouds are primarily inspected to take advantage of their discontinuous structure using ground-based remote sensing. Observations are performed simultaneously with multi-wavelength Raman lidars, a cloud Doppler radar and a microwave radiometer at CIAO (CNR-IMAA Atmospheric Observatory: www.ciao.imaa.cnr.it), in Potenza, Southern Italy (40.60N, 15.72E, 760 m a.s.l.). A statistical study of the variability of optical properties and humidity in the transition from cloudy regions to cloud-free regions surrounding the clouds leads to the identification of threshold values for the optical properties, enabling the

  10. Point Cloud Visualization in AN Open Source 3d Globe

    NASA Astrophysics Data System (ADS)

    De La Calle, M.; Gómez-Deck, D.; Koehler, O.; Pulido, F.

    2011-09-01

    During the last years the usage of 3D applications in GIS has become more popular. Since the appearance of Google Earth, users are familiar with 3D environments. On the other hand, nowadays computers with 3D acceleration are common, broadband access is widespread, and the public information that can be used in GIS clients able to use data from the Internet is constantly increasing. There are currently several libraries suitable for this kind of application. Based on these facts, and connecting libraries that are already developed with our own developments, we are working on the implementation of a real 3D GIS with analysis capabilities. Since a 3D GIS such as this can be very interesting for tasks like rendering and analysing LiDAR or laser scanner point clouds, special attention is given to optimal handling of very large data sets. Glob3 will be a multidimensional GIS in which 3D point clouds can be explored and analysed, even if they consist of several million points. The latest addition to our visualization libraries is a point cloud server that works regardless of the cloud's size. The server receives and processes requests from a 3D client (for example Glob3, but it could be any other, such as one based on WebGL) and delivers the data in the form of pre-processed tiles, depending on the required level of detail.
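
    A common way to realize "pre-processed tiles, depending on the required level of detail" is to let the client-reported view pick the octree level whose projected point spacing stays within a screen-space error budget. A minimal sketch of such a heuristic (illustrative only; the abstract does not specify the actual Glob3 server protocol):

    ```python
    import math

    def pick_level(root_spacing, distance, fov_rad, screen_px, err_px=1.5,
                   max_level=16):
        """Choose the octree depth for a tile: each level halves the point
        spacing, so descend until the spacing projects to < err_px pixels."""
        # Pixels covered by one metre of scene at this viewing distance.
        px_per_m = screen_px / (2.0 * distance * math.tan(fov_rad / 2.0))
        level, spacing = 0, root_spacing
        while spacing * px_per_m > err_px and level < max_level:
            spacing /= 2.0
            level += 1
        return level
    ```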

  11. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  12. 3D Buckligami: Digital Matter

    NASA Astrophysics Data System (ADS)

    van Hecke, Martin; de Reus, Koen; Florijn, Bastiaan; Coulais, Corentin

    2014-03-01

    We present a class of elastic structures which exhibit collective buckling in 3D, and create these by a 3D printing/moulding technique. Our structures consist of a cubic lattice of anisotropic unit cells, and we show that their mechanical properties are programmable via the orientation of these unit cells.

  13. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  14. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  15. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  16. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  17. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  18. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.

  19. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce a variety of model renderings, such as wireframe or flat-shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  20. The 3D Elevation Program and America's infrastructure

    USGS Publications Warehouse

    Lukas, Vicki; Carswell, Jr., William J.

    2016-11-07

    Infrastructure—the physical framework of transportation, energy, communications, water supply, and other systems—and construction management—the overall planning, coordination, and control of a project from beginning to end—are critical to the Nation’s prosperity. The American Society of Civil Engineers has warned that, despite the importance of the Nation’s infrastructure, it is in fair to poor condition and needs sizable and urgent investments to maintain and modernize it, and to ensure that it is sustainable and resilient. Three-dimensional (3D) light detection and ranging (lidar) elevation data provide valuable productivity, safety, and cost-saving benefits to infrastructure improvement projects and associated construction management. By providing data to users, the 3D Elevation Program (3DEP) of the U.S. Geological Survey reduces users’ costs and risks and allows them to concentrate on their mission objectives. 3DEP includes (1) data acquisition partnerships that leverage funding, (2) contracts with experienced private mapping firms, (3) technical expertise, lidar data standards, and specifications, and (4) most important, public access to high-quality 3D elevation data. The size and breadth of improvements for the Nation’s infrastructure and construction management needs call for an efficient, systematic approach to acquiring foundational 3D elevation data. The 3DEP approach to national data coverage will yield large cost savings over individual project-by-project acquisitions and will ensure that data are accessible for other critical applications.

  1. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  2. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  3. 3D plasma camera for planetary missions

    NASA Astrophysics Data System (ADS)

    Berthomier, Matthieu; Morel, Xavier; Techer, Jean-Denis

    2014-05-01

    A new 3D field-of-view toroidal space plasma analyzer based on an innovative optical concept allows the coverage of a 4π sr solid angle with only two sensor heads. It fits the need for all-sky thermal plasma measurements on three-axis stabilized spacecraft, which are the most commonly used platforms for planetary missions. The 3D plasma analyzer also takes advantage of the new possibilities offered by the development of an ultra-low-power multi-channel charge sensitive amplifier used for the imaging detector of the instrument. We present the design and measured performances of a prototype model that will fly on a test rocket in 2014.

  4. 3D Tracking via Shoe Sensing

    PubMed Central

    Li, Fangmin; Liu, Guo; Liu, Jian; Chen, Xiaochuang; Ma, Xiaolin

    2016-01-01

    Most location-based services are based on a global positioning system (GPS), which only works well in outdoor environments. Compared to outdoor environments, indoor localization has created more buzz in recent years as people spend most of their time indoors, working at offices and shopping at malls, etc. Existing solutions mainly rely on inertial sensors (i.e., accelerometer and gyroscope) embedded in mobile devices, which are usually not accurate enough to be useful due to the mobile devices' random movements while people are walking. In this paper, we propose the use of shoe sensing (i.e., sensors attached to shoes) to achieve 3D indoor positioning. Specifically, a short-time energy-based approach is used to extract the gait pattern. Moreover, in order to improve the accuracy of vertical distance estimation while the person is climbing upstairs, a state classification is designed to distinguish the walking status, including plane motion (i.e., normal walking and jogging horizontally), walking upstairs, and walking downstairs. Furthermore, we also provide a mechanism to reduce the vertical distance accumulation error. Experimental results show that we can achieve nearly 100% accuracy when extracting gait patterns from walking/jogging with a low-cost shoe sensor, and can also achieve 3D indoor real-time positioning with high accuracy. PMID:27801839
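
    A minimal sketch of the short-time energy idea for gait extraction, assuming a three-axis accelerometer stream sampled at a fixed rate (the window length and threshold here are illustrative, not the paper's values):

    ```python
    import numpy as np

    def short_time_energy(acc, win=25):
        """Sliding-window energy of the zero-mean acceleration magnitude;
        acc is an (N, 3) array, win ~ 0.25 s of samples at 100 Hz."""
        mag = np.linalg.norm(acc, axis=1)
        mag -= mag.mean()
        return np.convolve(mag ** 2, np.ones(win) / win, mode="same")

    def detect_steps(energy, threshold):
        """Flag one step per rising edge of the thresholded energy curve."""
        active = energy > threshold
        return np.flatnonzero(active[1:] & ~active[:-1]) + 1
    ```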

  5. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
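
    A minimal sketch of such a keypoint pipeline using OpenCV's ORB detector and a 4-DOF similarity fit (the paper's own detector, matcher, and rejection thresholds are not specified in the abstract):

    ```python
    import cv2
    import numpy as np

    def stereo_misalignment(left, right, min_matches=30):
        """Estimate roll/scale and residual vertical disparity between a
        left/right frame pair from matched keypoints; returns None when
        the keypoint constellation is too poor, so the frame is discarded."""
        orb = cv2.ORB_create(2000)
        kpl, dl = orb.detectAndCompute(left, None)
        kpr, dr = orb.detectAndCompute(right, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(dl, dr)
        if len(matches) < min_matches:
            return None
        pl = np.float32([kpl[m.queryIdx].pt for m in matches])
        pr = np.float32([kpr[m.trainIdx].pt for m in matches])
        M, inl = cv2.estimateAffinePartial2D(pl, pr)  # rotation+scale+shift
        if M is None:
            return None
        roll = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
        scale = float(np.hypot(M[0, 0], M[1, 0]))
        keep = inl.ravel() == 1
        vert_disp = float(np.median(np.abs(pl[keep, 1] - pr[keep, 1])))
        return roll, scale, vert_disp
    ```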

  6. 3D Scan Systems Integration

    DTIC Science & Technology

    2007-11-02

    Report date: 5 February 1998. Final report for the US Defense Logistics Agency on DDFG-T2/P3: 3D Scan Systems Integration. Contract number: SPO100-95-D-1014. Contractor: Ohio University. Delivery order #0001: 3D Scan Systems Integration.

  7. Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling

    NASA Astrophysics Data System (ADS)

    Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.

    2016-04-01

    Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded onto the mobile device, allowing rendering in a 3D environment. Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured

  8. A 3D diamond detector for particle tracking

    NASA Astrophysics Data System (ADS)

    Artuso, M.; Bachmair, F.; Bäni, L.; Bartosik, M.; Beacham, J.; Bellini, V.; Belyaev, V.; Bentele, B.; Berdermann, E.; Bergonzo, P.; Bes, A.; Brom, J.-M.; Bruzzi, M.; Cerv, M.; Chau, C.; Chiodini, G.; Chren, D.; Cindro, V.; Claus, G.; Collot, J.; Costa, S.; Cumalat, J.; Dabrowski, A.; D`Alessandro, R.; de Boer, W.; Dehning, B.; Dobos, D.; Dünser, M.; Eremin, V.; Eusebi, R.; Forcolin, G.; Forneris, J.; Frais-Kölbl, H.; Gan, K. K.; Gastal, M.; Goffe, M.; Goldstein, J.; Golubev, A.; Gonella, L.; Gorišek, A.; Graber, L.; Grigoriev, E.; Grosse-Knetter, J.; Gui, B.; Guthoff, M.; Haughton, I.; Hidas, D.; Hits, D.; Hoeferkamp, M.; Hofmann, T.; Hosslet, J.; Hostachy, J.-Y.; Hügging, F.; Jansen, H.; Janssen, J.; Kagan, H.; Kanxheri, K.; Kasieczka, G.; Kass, R.; Kassel, F.; Kis, M.; Kramberger, G.; Kuleshov, S.; Lacoste, A.; Lagomarsino, S.; Lo Giudice, A.; Maazouzi, C.; Mandic, I.; Mathieu, C.; McFadden, N.; McGoldrick, G.; Menichelli, M.; Mikuž, M.; Morozzi, A.; Moss, J.; Mountain, R.; Murphy, S.; Oh, A.; Olivero, P.; Parrini, G.; Passeri, D.; Pauluzzi, M.; Pernegger, H.; Perrino, R.; Picollo, F.; Pomorski, M.; Potenza, R.; Quadt, A.; Re, A.; Riley, G.; Roe, S.; Sapinski, M.; Scaringella, M.; Schnetzer, S.; Schreiner, T.; Sciortino, S.; Scorzoni, A.; Seidel, S.; Servoli, L.; Sfyrla, A.; Shimchuk, G.; Smith, D. S.; Sopko, B.; Sopko, V.; Spagnolo, S.; Spanier, S.; Stenson, K.; Stone, R.; Sutera, C.; Taylor, A.; Traeger, M.; Tromson, D.; Trischuk, W.; Tuve, C.; Uplegger, L.; Velthuis, J.; Venturi, N.; Vittone, E.; Wagner, S.; Wallny, R.; Wang, J. C.; Weilhammer, P.; Weingarten, J.; Weiss, C.; Wengler, T.; Wermes, N.; Yamouni, M.; Zavrtanik, M.

    2016-07-01

    In the present study, results towards the development of a 3D diamond sensor are presented. Conductive channels are produced inside the sensor bulk using a femtosecond laser. This electrode geometry allows full charge collection even for low-quality diamond sensors. Results from test beams show that charge is collected by these electrodes. In order to understand the channel growth parameters, with the goal of producing low-resistivity channels, the conductive channels produced with a different laser setup are evaluated by Raman spectroscopy.

  9. 3D fascicle orientations in triceps surae.

    PubMed

    Rana, Manku; Hamarneh, Ghassan; Wakeling, James M

    2013-07-01

    The aim of this study was to determine the three-dimensional (3D) muscle fascicle architecture in human triceps surae muscles at different contraction levels and muscle lengths. Six male subjects were tested for three contraction levels (0, 30, and 60% of maximal voluntary contraction) and four ankle angles (-15, 0, 15, and 30° of plantar flexion), and the muscles were imaged with B-mode ultrasound coupled to 3D position sensors. 3D fascicle orientations were represented in terms of pennation angle relative to the major axis of the muscle and azimuthal angle (a new architectural parameter introduced in this study representing the radial angle around the major axis). 3D orientations of the fascicles, and the sheets along which they lie, were regionalized in all the three muscles (medial and lateral gastrocnemius and the soleus) and changed significantly with contraction level and ankle angle. Changes in the azimuthal angle were of similar magnitude to the changes in pennation angle. The 3D information was used for an error analysis to determine the errors in predictions of pennation that would occur in purely two-dimensional studies. A comparison was made for assessing pennation in the same plane for different contraction levels, or for adjusting the scanning plane orientation for different contractions: there was no significant difference between the two simulated scanning conditions for the gastrocnemii; however, a significant difference of 4.5° was obtained for the soleus. Correct probe orientation is thus more critical during estimations of pennation for the soleus than the gastrocnemii due to its more complex fascicle arrangement.
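
    The two architectural angles can be computed directly from a fascicle direction vector once the muscle's major axis is known. A minimal sketch, with an assumed reference frame for the azimuth (the paper's exact convention is not given in the abstract):

    ```python
    import numpy as np

    def fascicle_angles(fascicle, major_axis):
        """Pennation: angle between the fascicle and the muscle's major
        axis. Azimuth: radial angle of the fascicle around that axis,
        measured in the plane perpendicular to it."""
        a = major_axis / np.linalg.norm(major_axis)
        f = fascicle / np.linalg.norm(fascicle)
        pennation = np.degrees(np.arccos(np.clip(f @ a, -1.0, 1.0)))
        # Orthonormal basis (e1, e2) spanning the plane normal to the axis.
        helper = np.array([1.0, 0.0, 0.0])
        if abs(helper @ a) > 0.9:
            helper = np.array([0.0, 1.0, 0.0])
        e1 = np.cross(a, helper); e1 /= np.linalg.norm(e1)
        e2 = np.cross(a, e1)
        radial = f - (f @ a) * a           # component normal to the axis
        azimuth = np.degrees(np.arctan2(radial @ e2, radial @ e1))
        return pennation, azimuth
    ```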

  10. The 3D Elevation Program: summary for Michigan

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation's natural and constructed features. The Michigan Statewide Authoritative Imagery and Lidar (MiSAIL) program provides statewide lidar coordination with local, State, and national groups in support of 3DEP for Michigan.

  11. 3D polymer scaffold arrays.

    PubMed

    Simon, Carl G; Yang, Yanyin; Dorsey, Shauna M; Ramalingam, Murugan; Chatterjee, Kaushik

    2011-01-01

    We have developed a combinatorial platform for fabricating tissue scaffold arrays that can be used for screening cell-material interactions. Traditional research involves preparing samples one at a time for characterization and testing. Combinatorial and high-throughput (CHT) methods lower the cost of research by reducing the amount of time and material required for experiments by combining many samples into miniaturized specimens. In order to help accelerate biomaterials research, many new CHT methods have been developed for screening cell-material interactions where materials are presented to cells as a 2D film or surface. However, biomaterials are frequently used to fabricate 3D scaffolds, cells exist in vivo in a 3D environment and cells cultured in a 3D environment in vitro typically behave more physiologically than those cultured on a 2D surface. Thus, we have developed a platform for fabricating tissue scaffold libraries where biomaterials can be presented to cells in a 3D format.

  12. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  13. 3-D Imaging Systems for Agricultural Applications—A Review

    PubMed Central

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  14. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of directly geo-referenced image-based 3D point clouds generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone sensors suffer from poor GPS accuracy, accumulated drift, and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.
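
    A minimal sketch of the SGM dense-matching step, using OpenCV's StereoSGBM implementation on an already rectified pair (parameter values are illustrative; the paper's matching settings are not given in the abstract):

    ```python
    import cv2
    import numpy as np

    def dense_point_cloud(rect_left, rect_right, Q):
        """Semi-global matching on a rectified stereo pair, then
        reprojection to 3D; Q is the 4x4 disparity-to-depth matrix
        produced by stereo rectification."""
        sgm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5, P1=8 * 5 * 5, P2=32 * 5 * 5)
        disp = sgm.compute(rect_left, rect_right).astype(np.float32) / 16.0
        points = cv2.reprojectImageTo3D(disp, Q)   # H x W x 3, metric units
        return points[disp > 0]                    # keep valid disparities
    ```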

  15. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-04-29

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  16. Eyjafjallajökull ash concentrations derived from both lidar and modeling

    NASA Astrophysics Data System (ADS)

    Chazette, Patrick; Bocquet, Marc; Royer, Philippe; Winiarek, Victor; Raut, Jean-Christophe; Labazuy, Philippe; Gouhier, Mathieu; Lardier, Mélody; Cariou, Jean-Pierre

    2012-10-01

    Following the eruption of the Icelandic volcano Eyjafjallajökull on 14 April 2010, ground-based N2-Raman lidar (GBL) measurements were used to trace the temporal evolution of the ash plume from 16 to 20 April 2010 above the southwestern suburb of Paris. The nighttime overpass of the Cloud-Aerosol LIdar with Orthogonal Polarization onboard the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation satellite (CALIPSO/CALIOP) on 17 April 2010 was an opportunity to complement the GBL observations. The plume shape retrieved from GBL has been used to assess the size range of the particles. The lidar-derived aerosol mass concentrations (PM) have been compared with PM concentrations computed with the Eulerian transport model Polair3D, driven by a source term inferred from the SEVIRI sensor onboard the Meteosat satellite. The consistency between the model, the ground-based wind lidar, and the CALIOP observations has been checked. The spatial and temporal structures of the ash plume as estimated by each instrument and by the Polair3D simulations are in agreement. The ash plume was associated with a mean aerosol optical thickness of 0.1 ± 0.06 and 0.055 ± 0.053 for GBL (355 nm) and CALIOP (532 nm), respectively. Such values correspond to ash mass concentrations of ˜400 ± 160 and ˜720 ± 670 μg m-3, respectively, within the ash plume, which was less than 0.5 km in width. The relative uncertainty is ˜75% and is mainly due to the assessment of the specific cross-section, assuming an aerosol density of 2.6 g cm-3. The simulated ash plume is smoother, leading to integrated masses of the same order of magnitude (between 50 and 250 mg m-2).
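
    The quoted mass concentrations follow from the optical thickness by dividing the mean extinction coefficient by a mass extinction efficiency (the specific cross-section per unit mass, the main source of the quoted ~75% uncertainty). A back-of-the-envelope check with assumed, illustrative values:

    ```python
    # Only the AOT and the <0.5 km plume width come from the abstract;
    # k = 1.0 m^2/g is an assumed, plausible mass extinction efficiency
    # for 2.6 g/cm^3 ash particles, not the paper's retrieved value.
    aot = 0.1            # GBL aerosol optical thickness at 355 nm
    depth = 250.0        # m, assumed plume thickness (below 0.5 km)
    k = 1.0              # m^2/g, assumed mass extinction efficiency

    extinction = aot / depth                  # mean extinction, m^-1
    mass_conc = extinction / k                # g/m^3
    print(f"{mass_conc * 1e6:.0f} ug/m^3")    # ~400 ug/m^3, as reported
    ```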

  17. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has been getting more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of bi-dimensional printing which makes it possible to obtain a solid object from a 3D model, realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes which would be quite difficult to produce with dedicated conventional facilities. Because the 3D print is obtained by superposing one layer on the others, it doesn't need any particular workflow: it is sufficient to simply draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A commonly used material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, like pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it has been necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  18. Status of the 3D Elevation Program, 2015

    USGS Publications Warehouse

    Sugarbaker, Larry J.; Eldridge, Diane F.; Jason, Allyson L.; Lukas, Vicki; Saghy, David L.; Stoker, Jason M.; Thunen, Diana R.

    2017-01-18

    The 3D Elevation Program (3DEP) is a cooperative activity to collect light detection and ranging (lidar) data for the conterminous United States, Hawaii, and U.S. territories, and interferometric synthetic aperture radar (IfSAR) elevation data for Alaska, during an 8-year period. The U.S. Geological Survey (USGS) and partner organizations acquire high-quality three-dimensional elevation data for the United States and its territories that support requirements beyond what could be realized if agencies independently pursued lidar and IfSAR data collection activities. Data collection rates have been increasing as a growing number of State and Federal agencies participate in cooperative data acquisition projects. During the final year (2015) of program preparation, USGS and partner agencies expanded data collection, completed the initial product delivery systems, and implemented changes to program governance, including restructuring the 3DEP working group and formalizing the relationship with the Federal Geographic Data Committee.

  19. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic