Sample records for point clouds generated

  1. Accuracy assessment of building point clouds automatically generated from iPhone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as an input. We register such an automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of iPhone-generated point clouds. For the chosen showcase, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate the possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion, detail enhancement, and quick, real-time change detection. However, further insight is first needed into the circumstances required to guarantee successful point cloud generation from smartphone images.
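
    As a rough illustration of the kind of comparison reported above (not the authors' implementation), the sketch below computes nearest-neighbour point-to-point distances between a smartphone cloud and a TLS reference, plus a simple local roughness value; the array names, neighbourhood radius and random placeholder data are assumptions.

```python
# Sketch: point-to-point distances and local roughness for two point clouds.
# Assumes `phone_xyz` and `tls_xyz` are (N, 3) NumPy arrays; the radius is illustrative.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(source_xyz, target_xyz):
    """Distance from every source point to its nearest target point."""
    tree = cKDTree(target_xyz)
    dists, _ = tree.query(source_xyz, k=1)
    return dists

def local_roughness(xyz, radius=0.10):
    """Std. deviation of distances to a local best-fit plane around each point."""
    tree = cKDTree(xyz)
    rough = np.full(len(xyz), np.nan)
    for i, p in enumerate(xyz):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 4:
            continue
        nbrs = xyz[idx] - xyz[idx].mean(axis=0)
        # Plane normal = eigenvector of the smallest covariance eigenvalue.
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        rough[i] = np.std(nbrs @ vt[-1])
    return rough

if __name__ == "__main__":
    phone_xyz = np.random.rand(1000, 3)   # placeholder for the iPhone cloud
    tls_xyz = np.random.rand(5000, 3)     # placeholder for the TLS cloud
    d = cloud_to_cloud_distances(phone_xyz, tls_xyz)
    print("mean point-to-point distance:", d.mean())
    print("mean local roughness:", np.nanmean(local_roughness(phone_xyz)))
```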

  2. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera

    PubMed Central

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching, using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one method is based on BAE Systems’ SOCET SET classical commercial photogrammetric software and the other is built on Microsoft®’s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, although some artifacts were also detected. The point clouds from the Photosynth processing were sparser and noisier, largely because the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and show that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for imaging sensor properties and for the collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479

  3. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera.

    PubMed

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching, using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one method is based on BAE Systems' SOCET SET classical commercial photogrammetric software and the other is built on Microsoft®'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, although some artifacts were also detected. The point clouds from the Photosynth processing were sparser and noisier, largely because the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and show that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for imaging sensor properties and for the collection and processing of UAV image data to ensure accurate point cloud generation.

  4. Generation of Ground Truth Datasets for the Analysis of 3d Point Clouds in Urban Scenes Acquired via Different Sensors

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.

    2018-04-01

    In this work, we report a novel way of generating ground truth datasets for analyzing point clouds from different sensors and for validating algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid of the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, so that all the points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within voxels of multiple resolutions, in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained from various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of the same scene from different sensors simply by looking up the label of the 3D space in which each point is located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
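
    The voting step described above can be sketched as follows; the voxel size, the simple majority-vote rule and the array names are illustrative assumptions rather than the authors' octree implementation.

```python
# Sketch: assign semantic labels to a test cloud by majority vote inside voxels
# of a labelled reference cloud. Voxel size and input arrays are assumptions.
import numpy as np
from collections import Counter, defaultdict

def voxel_key(points, voxel_size):
    return [tuple(k) for k in np.floor(points / voxel_size).astype(int)]

def label_by_voxel_vote(ref_xyz, ref_labels, test_xyz, voxel_size=0.5, default=-1):
    votes = defaultdict(Counter)
    for key, label in zip(voxel_key(ref_xyz, voxel_size), ref_labels):
        votes[key][label] += 1
    # Each voxel takes the label that received the most reference points.
    voxel_label = {k: c.most_common(1)[0][0] for k, c in votes.items()}
    return np.array([voxel_label.get(k, default)
                     for k in voxel_key(test_xyz, voxel_size)])

if __name__ == "__main__":
    ref = np.random.rand(2000, 3) * 10
    labels = np.random.randint(0, 3, len(ref))
    test = np.random.rand(500, 3) * 10
    print(label_by_voxel_vote(ref, labels, test)[:20])
```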

  5. Comparison of Uas-Based Photogrammetry Software for 3d Point Cloud Generation: a Survey Over a Historical Site

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2017-11-01

    Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects, for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined for generating high-density point clouds as well as Digital Surface Models (DSMs) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and then processed with the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.

  6. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera, and the point cloud model is reconstructed virtually. Because each point of the point cloud lies precisely at the coordinates of a particular depth layer, the points can be classified into grids according to their depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain the CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
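
    A minimal sketch of the gridding-plus-FFT idea, assuming an angular-spectrum diffraction kernel and illustrative wavelength, pixel pitch and layer spacing; the paper's exact CGH formulation may differ.

```python
# Sketch: group a depth-camera point cloud into depth layers ("grids") and sum
# FFT-based angular-spectrum propagations of each layer to form a hologram.
# Wavelength, pixel pitch, grid size and layer spacing are illustrative values.
import numpy as np

WAVELENGTH = 532e-9      # m
PITCH = 8e-6             # hologram pixel pitch, m
N = 512                  # hologram resolution (N x N)

def angular_spectrum(field, distance):
    """Propagate a complex field by `distance` using the angular spectrum method."""
    fx = np.fft.fftfreq(N, d=PITCH)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / WAVELENGTH**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent part suppressed
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

def hologram_from_points(points_xyz, amplitudes, layer_step=1e-3, z0=0.1):
    """Quantize points into depth layers, rasterize each layer, propagate, and sum."""
    holo = np.zeros((N, N), dtype=complex)
    layer_idx = np.round(points_xyz[:, 2] / layer_step).astype(int)
    for layer in np.unique(layer_idx):
        sel = layer_idx == layer
        field = np.zeros((N, N), dtype=complex)
        # Map x, y (assumed metres, centred on the optical axis) to pixel indices.
        cols = np.clip((points_xyz[sel, 0] / PITCH + N // 2).astype(int), 0, N - 1)
        rows = np.clip((points_xyz[sel, 1] / PITCH + N // 2).astype(int), 0, N - 1)
        field[rows, cols] = amplitudes[sel]
        holo += angular_spectrum(field, z0 + layer * layer_step)
    return holo

if __name__ == "__main__":
    pts = np.random.rand(1000, 3) * [N * PITCH / 2, N * PITCH / 2, 5e-3]
    amp = np.ones(len(pts))
    print(np.abs(hologram_from_points(pts, amp)).mean())
```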

  7. Temporally consistent segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Owens, Jason L.; Osteen, Philip R.; Daniilidis, Kostas

    2014-06-01

    We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly-generated RGB-D datasets.

  8. The Segmentation of Point Clouds with K-Means and ANN (Artificial Neural Network)

    NASA Astrophysics Data System (ADS)

    Kuçak, R. A.; Özdemir, E.; Erol, S.

    2017-05-01

    Segmentation of point clouds has recently been used in many Geomatics Engineering applications, such as building extraction in urban areas, Digital Terrain Model (DTM) generation, and road or urban furniture extraction. Segmentation is a process of dividing point clouds according to their special characteristic layers. The present paper discusses K-means and the self-organizing map (SOM), a type of ANN (Artificial Neural Network) algorithm, for the segmentation of point clouds. Point clouds generated with a photogrammetric method and with a Terrestrial Lidar System (TLS) were segmented according to surface normal, intensity and curvature, and the results were evaluated. LIDAR (Light Detection and Ranging) and photogrammetry are commonly used to obtain point clouds in many remote sensing and geodesy applications; with either method, point clouds can be obtained from terrestrial or airborne systems. In this study, the LIDAR measurements were made with a Leica C10 laser scanner, while for the photogrammetric method the point cloud was obtained from photographs taken from the ground with a 13 MP non-metric camera.
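
    A minimal K-means sketch over the per-point attributes named above (surface normal, intensity, curvature); the feature scaling, cluster count and synthetic data are assumptions, and the SOM variant is not shown.

```python
# Sketch: segment a point cloud with K-means on per-point features
# (surface normal, intensity, curvature). Inputs and cluster count are assumed.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def segment_point_cloud(normals, intensity, curvature, n_segments=5):
    # Stack the per-point attributes into one feature matrix and normalise them
    # so that no single attribute dominates the Euclidean distance.
    features = np.column_stack([normals, intensity, curvature])
    features = StandardScaler().fit_transform(features)
    return KMeans(n_clusters=n_segments, n_init=10, random_state=0).fit_predict(features)

if __name__ == "__main__":
    n = 10000
    normals = np.random.randn(n, 3)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    intensity = np.random.rand(n)
    curvature = np.random.rand(n)
    labels = segment_point_cloud(normals, intensity, curvature)
    print(np.bincount(labels))
```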

  9. Investigating the Accuracy of Point Clouds Generated for Rock Surfaces

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.

    2016-12-01

    Point clouds produced by means of different techniques are widely used to model rocks and to obtain properties of rock surfaces such as roughness, volume and area. These point clouds can be generated by laser scanning and close-range photogrammetry. Laser scanning is the most common method to produce point clouds: the laser scanner device produces a 3D point cloud at regular intervals. In close-range photogrammetry, a point cloud can be produced with the help of photographs taken in appropriate conditions, depending on developing hardware and software technology. Many photogrammetric software packages, both open source and commercial, currently support point cloud generation. Both methods are close to each other in terms of accuracy; sufficient accuracy in the mm and cm range can be obtained with a qualified digital camera and laser scanner. In both methods, field work is completed in less time than with conventional techniques. In close-range photogrammetry, any part of a rock surface can be completely represented owing to overlapping oblique photographs. In contrast to the proximity of the data, the two methods are quite different in terms of cost. In this study, whether point clouds produced from photographs can be used instead of point clouds produced by a laser scanner device is investigated. For this purpose, rock surfaces with a complex and irregular shape, located on the İstanbul Technical University Ayazağa Campus, were selected as the study object. The selected object is a mixture of different rock types and consists of both partly weathered and fresh parts. The study was performed on a 30 m x 10 m part of the rock surface. 2D and 3D analyses were performed for several regions selected from the point clouds of the surface models; the 2D analysis is area-based and the 3D analysis is volume-based. The analysis showed that the point clouds from both methods are similar and can be used as alternatives to each other. This proves that a point cloud produced from photographs, which is both economical and faster to acquire, can be used in several studies instead of a point cloud produced by a laser scanner.

  10. Quality Assessment and Comparison of Smartphone and Leica C10 Laser Scanner Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, Beril; Lindenbergh, Roderik; Wang, Jinhu

    2016-06-01

    3D urban models are valuable for urban map generation, environment monitoring, safety planning and educational purposes. For 3D measurement of urban structures, airborne laser scanning sensors or multi-view satellite images are generally used as a data source. However, close-range sensors (such as terrestrial laser scanners) and low-cost cameras (which can generate point clouds based on photogrammetry) can provide denser sampling of 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive and trained persons are needed to use them for point cloud acquisition. A potentially effective 3D model can instead be generated based on a low-cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone-based 3D model of an example structure with a terrestrial laser scanning point cloud of the structure. This comparison gives us the opportunity to discuss the differences in terms of geometrical correctness, as well as the advantages, disadvantages and limitations in data acquisition and processing. We also discuss how smartphone-based point clouds can help to solve further problems with 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which do not have color information can be colored using smartphones. The experiments, discussions and scientific findings might be insightful for future studies in the field of fast, easy and low-cost 3D urban model generation.

  11. Design of relative motion and attitude profiles for three-dimensional resident space object imaging with a laser rangefinder

    NASA Astrophysics Data System (ADS)

    Nayak, M.; Beck, J.; Udrea, B.

    This paper focuses on the aerospace application of a single beam laser rangefinder (LRF) for 3D imaging, shape detection, and reconstruction in the context of a space-based space situational awareness (SSA) mission scenario. The primary limitation to 3D imaging from LRF point clouds is the one-dimensional nature of the single beam measurements. A method that combines relative orbital motion and scanning attitude motion to generate point clouds has been developed, and the design and characterization of multiple relative motion and attitude maneuver profiles are presented. The target resident space object (RSO) has the shape of a generic telecommunications satellite. The shape and attitude of the RSO are unknown to the chaser satellite; however, it is assumed that the RSO is un-cooperative and has fixed inertial pointing. All sensors in the metrology chain are assumed ideal. A previous study by the authors used pure Keplerian motion to perform a similar 3D imaging mission at an asteroid. A new baseline for proximity operations maneuvers for LRF scanning, based on a waypoint adaptation of the Hill-Clohessy-Wiltshire (HCW) equations, is examined. Propellant expenditure for each waypoint profile is discussed, and combinations of relative motion and attitude maneuvers that minimize the propellant used to achieve a minimum required point cloud density are studied. Both LRF strike-point coverage and point cloud density are maximized; the capability for 3D shape registration and reconstruction from point clouds generated with a single beam LRF, without catalog comparison, is proven. Next, a method of using edge detection algorithms to process a point cloud into a 3D modeled image containing reconstructed shapes is presented. The weighted accuracy of edge reconstruction with respect to the true model is used to calculate a qualitative “metric” that evaluates effectiveness of coverage. Both the edge recognition algorithms and the metric are independent of point cloud density; therefore, they are utilized to compare the quality of point clouds generated by various attitude and waypoint command profiles. The RSO model incorporates diverse irregular protruding shapes, such as open sensor covers, instrument pods and solar arrays, to test the limits of the algorithms. This analysis is used to show mathematically that point clouds generated by a single-beam LRF can achieve sufficient edge recognition accuracy for SSA applications, with meaningful shape information extractable even from sparse point clouds. For all command profiles, reconstructions of RSO shapes from the point clouds generated with the proposed method are compared to the truth model and conclusions are drawn regarding their fidelity.
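
    The waypoint profiles above build on the Hill-Clohessy-Wiltshire relative-motion model; the sketch below propagates the standard closed-form HCW solution for a chaser relative to the RSO. The coordinate convention, mean motion and initial state are illustrative, not values from the paper.

```python
# Sketch: closed-form Hill-Clohessy-Wiltshire (HCW) relative-motion propagation.
# x = radial, y = along-track, z = cross-track; n is the target's mean motion.
import numpy as np

def hcw_state(r0, v0, n, t):
    """Relative position at time t given initial relative position r0 and velocity v0."""
    s, c = np.sin(n * t), np.cos(n * t)
    x0, y0, z0 = r0
    vx, vy, vz = v0
    x = (4 - 3 * c) * x0 + (s / n) * vx + (2 / n) * (1 - c) * vy
    y = 6 * (s - n * t) * x0 + y0 - (2 / n) * (1 - c) * vx + (1 / n) * (4 * s - 3 * n * t) * vy
    z = c * z0 + (s / n) * vz
    return np.array([x, y, z])

if __name__ == "__main__":
    n = 2 * np.pi / 5400.0             # mean motion for a ~90-minute orbit [rad/s]
    r0 = np.array([0.0, -100.0, 0.0])  # start 100 m behind the RSO
    v0 = np.array([0.0, 0.05, 0.02])   # small drift to sweep the LRF over the target
    for t in range(0, 5401, 1350):
        print(t, hcw_state(r0, v0, n, t))
```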

  12. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of directly geo-referenced image-based 3D point clouds generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching (SGM) algorithm is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.

  13. Filtering Photogrammetric Point Clouds Using Standard LIDAR Filters Towards DTM Generation

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Gerke, M.; Vosselman, G.; Yang, M. Y.

    2018-05-01

    Digital Terrain Models (DTMs) can be generated from point clouds acquired by laser scanning or photogrammetric dense matching. During the last two decades, much effort has been devoted to developing robust filtering algorithms for airborne laser scanning (ALS) data. With the quality of point clouds from dense image matching (DIM) getting better and better, the research question that arises is whether standard Lidar filters can be used to filter photogrammetric point clouds as well. Experiments are carried out to filter two dense matching point clouds with different noise levels. Results show that the standard Lidar filter is robust to random noise. However, artefacts and blunders in the DIM points often appear due to low contrast or poor texture in the images, and filtering is erroneous in these locations. Filtering DIM points pre-processed by a ranking filter brings a higher Type II error (i.e. non-ground points actually labelled as ground points) but a much lower Type I error (i.e. bare ground points labelled as non-ground points). Finally, the potential DTM accuracy that can be achieved by DIM points is evaluated by comparing two DIM point clouds derived by Pix4Dmapper and SURE. On grassland, dense matching generates points above the true terrain surface, which results in incorrectly elevated DTMs. The application of the ranking filter leads to a reduced bias in the DTM height, but a slightly increased noise level.
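
    The ranking filter mentioned above can be approximated by keeping only the lowest points per grid cell before ground filtering; the cell size and percentile in this sketch are assumed values, not those used in the experiments.

```python
# Sketch: a simple ranking filter that keeps only the lowest-ranked points per
# grid cell, as a pre-processing step for dense image matching (DIM) clouds.
import numpy as np

def ranking_filter(xyz, cell=1.0, keep_percentile=20.0):
    """Keep points whose height is below the per-cell height percentile."""
    keys = np.floor(xyz[:, :2] / cell).astype(int)
    keep = np.zeros(len(xyz), dtype=bool)
    order = np.lexsort((keys[:, 1], keys[:, 0]))   # group identical cells together
    sorted_keys = keys[order]
    breaks = np.any(np.diff(sorted_keys, axis=0) != 0, axis=1)
    starts = np.concatenate(([0], np.where(breaks)[0] + 1, [len(order)]))
    for a, b in zip(starts[:-1], starts[1:]):
        idx = order[a:b]
        thresh = np.percentile(xyz[idx, 2], keep_percentile)
        keep[idx] = xyz[idx, 2] <= thresh
    return keep

if __name__ == "__main__":
    pts = np.random.rand(100000, 3) * [100, 100, 5]
    mask = ranking_filter(pts)
    print("kept", mask.sum(), "of", len(pts), "points")
```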

  14. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction

    PubMed Central

    Berveglieri, Adilson; Liang, Xinlian; Honkavaara, Eija

    2017-01-01

    This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras. PMID:29207468

  15. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction.

    PubMed

    Berveglieri, Adilson; Tommaselli, Antonio M G; Liang, Xinlian; Honkavaara, Eija

    2017-12-02

    This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras.
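
    As a rough illustration of the trunk-diameter comparison, the sketch below fits a circle to a horizontal slice of trunk points with an algebraic least-squares (Kasa) fit; a full cylinder fit, as used in the study, would also estimate the axis direction, and the slice limits here are assumptions.

```python
# Sketch: estimate a trunk diameter from a horizontal slice of a point cloud
# using an algebraic least-squares (Kasa) circle fit. Slice limits are assumed.
import numpy as np

def fit_circle(xy):
    """Return (centre_x, centre_y, radius) of the least-squares circle through xy."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def trunk_diameter(cloud_xyz, z_low, z_high):
    slab = cloud_xyz[(cloud_xyz[:, 2] >= z_low) & (cloud_xyz[:, 2] < z_high)]
    _, _, r = fit_circle(slab[:, :2])
    return 2.0 * r

if __name__ == "__main__":
    # Synthetic trunk of radius 0.15 m sampled between 1.2 m and 1.4 m height.
    theta = np.random.rand(500) * 2 * np.pi
    pts = np.column_stack([0.15 * np.cos(theta) + 3.0,
                           0.15 * np.sin(theta) - 1.0,
                           1.2 + 0.2 * np.random.rand(500)])
    print("diameter ≈", trunk_diameter(pts, 1.2, 1.4))
```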

  16. LIDAR Point Cloud Data Extraction and Establishment of 3D Modeling of Buildings

    NASA Astrophysics Data System (ADS)

    Zhang, Yujuan; Li, Xiuhai; Wang, Qiang; Liu, Jiang; Liang, Xin; Li, Dan; Ni, Chundi; Liu, Yan

    2018-01-01

    This paper applies Shepard's method to the original LIDAR point cloud data to generate a regular-grid DSM, filters the ground and non-ground point clouds with a double least squares method, and obtains a regularized DSM. A region-growing method is then used to segment the regularized DSM and remove the non-building point cloud, yielding the building point cloud information. The Canny operator is used to extract the edges of the buildings from the segmented result, and Hough-transform line detection is used to regularize the extracted building edges so that they are smooth and uniform. Finally, the E3De3 software is used to establish the 3D models of the buildings.
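
    A minimal sketch of Shepard-style (inverse-distance-weighted) gridding of raw points onto a regular DSM, as in the first step described above; the grid spacing, search radius and distance power are illustrative parameters.

```python
# Sketch: Shepard-style inverse-distance-weighted (IDW) interpolation of raw
# LIDAR points onto a regular DSM grid. Parameters are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def idw_dsm(xyz, cell=1.0, radius=3.0, power=2.0):
    xmin, ymin = xyz[:, 0].min(), xyz[:, 1].min()
    nx = int(np.ceil((xyz[:, 0].max() - xmin) / cell)) + 1
    ny = int(np.ceil((xyz[:, 1].max() - ymin) / cell)) + 1
    gx, gy = np.meshgrid(xmin + np.arange(nx) * cell, ymin + np.arange(ny) * cell)
    grid_pts = np.column_stack([gx.ravel(), gy.ravel()])
    tree = cKDTree(xyz[:, :2])
    dsm = np.full(len(grid_pts), np.nan)
    for i, neighbours in enumerate(tree.query_ball_point(grid_pts, radius)):
        if not neighbours:
            continue                         # cell stays NaN if no points nearby
        d = np.linalg.norm(xyz[neighbours, :2] - grid_pts[i], axis=1)
        w = 1.0 / np.maximum(d, 1e-6) ** power
        dsm[i] = np.sum(w * xyz[neighbours, 2]) / np.sum(w)
    return dsm.reshape(ny, nx)

if __name__ == "__main__":
    pts = np.random.rand(20000, 3) * [100, 100, 30]
    print(idw_dsm(pts).shape)
```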

  17. Automatic Classification of Trees from Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2015-08-01

    The development of laser scanning technologies has promoted tree monitoring studies to a new level, as laser scanning point clouds enable accurate 3D measurements in a fast and environmentally friendly manner. In this paper, we introduce a probability matrix computation based algorithm for automatically classifying laser scanning point clouds into 'tree' and 'non-tree' classes. Our method uses the 3D coordinates of the laser scanning points as input and generates a new point cloud which holds a label for each point indicating whether it belongs to the 'tree' or 'non-tree' class. To do so, a grid surface is assigned to the lowest height level of the point cloud. The grid cells are filled with probability values calculated by checking the point density above each cell. Since the tree trunk locations appear with very high values in the probability matrix, selecting the local maxima of the grid surface helps to detect the tree trunks. Further points are assigned to tree trunks if they appear in close proximity to the trunks. Since heavy mathematical computations (such as point cloud organization, detailed 3D shape detection methods, or graph network generation) are not required, the proposed algorithm works very fast compared to existing methods. The tree classification results are found to be reliable even on point clouds of cities containing many different objects. The most significant weakness is that false detection of light poles, traffic signs and other objects close to trees cannot be prevented. Nevertheless, the experimental results on mobile and airborne laser scanning point clouds indicate the possible usage of the algorithm as an important step for tree growth observation, tree counting and similar applications. While the laser scanning point clouds give the opportunity to classify even very small trees, the accuracy of the results is reduced in low point density areas further away from the scanning location. These advantages and disadvantages of the two laser scanning point cloud sources are discussed in detail.
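
    The probability-matrix idea can be sketched as a normalised point-density grid whose local maxima mark candidate trunks; the cell size, density threshold and neighbourhood window below are assumptions, not the authors' settings.

```python
# Sketch: build a ground-level probability grid from point density and pick
# local maxima as candidate tree trunks.
import numpy as np
from scipy.ndimage import maximum_filter

def density_grid(xyz, cell=0.5):
    ij = np.floor((xyz[:, :2] - xyz[:, :2].min(axis=0)) / cell).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1)
    np.add.at(grid, (ij[:, 0], ij[:, 1]), 1.0)
    return grid / grid.max()          # normalised "probability matrix"

def trunk_candidates(prob, min_prob=0.3, window=5):
    # A cell is a trunk candidate if it is the maximum of its neighbourhood
    # and its density value is high enough.
    local_max = prob == maximum_filter(prob, size=window)
    return np.argwhere(local_max & (prob >= min_prob))

if __name__ == "__main__":
    pts = np.random.rand(50000, 3) * [50, 50, 15]
    prob = density_grid(pts)
    print("candidate trunk cells:", len(trunk_candidates(prob)))
```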

  18. Georeferencing UAS Derivatives Through Point Cloud Registration with Archived Lidar Datasets

    NASA Astrophysics Data System (ADS)

    Magtalas, M. S. L. Y.; Aves, J. C. L.; Blanco, A. C.

    2016-10-01

    Georeferencing gathered images is a common step before performing spatial analysis and other processes on datasets acquired using unmanned aerial systems (UAS). Spatial information is applied to aerial images or their derivatives through onboard GPS (Global Positioning System) geotagging, or by tying the models to GCPs (Ground Control Points) acquired in the field. Currently, UAS derivatives are limited to meter-level accuracy when their generation is unaided by points of known position on the ground. The use of ground control points established using survey-grade GPS or GNSS receivers can greatly reduce model errors to centimeter levels. However, this comes with additional costs, not only for instrument acquisition and survey operations, but also in actual time spent in the field. This study uses a workflow for cloud-based post-processing of UAS data in combination with already existing LiDAR data. The georeferencing of the UAV point cloud is executed using the Iterative Closest Point (ICP) algorithm, applied through the open-source CloudCompare software (Girardeau-Montaut, 2006) on a 'skeleton point cloud'. This skeleton point cloud consists of manually extracted features consistent in both the LiDAR and UAV data; roads and buildings with minimal deviations, given the differing dates of acquisition, are considered consistent. Transformation parameters are computed for the skeleton cloud and can then be applied to the whole UAS dataset. In addition, a separate cloud consisting of non-vegetation features automatically derived using the CANUPO classification algorithm (Brodu and Lague, 2012) was used to generate a separate set of parameters. A ground survey was done to validate the transformed cloud. An RMSE value of around 16 centimeters was found when comparing validation data to the models georeferenced using the CANUPO cloud and the manual skeleton cloud. Cloud-to-cloud distance computations between the CANUPO and manual skeleton clouds yielded values of around 0.67 meters for both, at a standard deviation of 1.73.
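
    A hedged sketch of the ICP step on a skeleton cloud, here using Open3D instead of CloudCompare; the point arrays, correspondence distance and synthetic offset are placeholders.

```python
# Sketch of the ICP step on a manually extracted "skeleton" cloud, using Open3D
# rather than CloudCompare. Inputs and the correspondence distance are assumed.
import numpy as np
import open3d as o3d

def register_skeleton(skeleton_uav_pts, skeleton_lidar_pts, max_dist=5.0):
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(skeleton_uav_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(skeleton_lidar_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation      # 4x4 matrix to apply to the full UAS cloud

if __name__ == "__main__":
    uav_skeleton = np.random.rand(1000, 3) * 100
    lidar_skeleton = uav_skeleton + np.array([2.0, -1.5, 0.3])  # synthetic offset
    T = register_skeleton(uav_skeleton, lidar_skeleton)
    # Apply the transformation to the whole UAS point cloud (homogeneous coords).
    full_uav = np.random.rand(100000, 3) * 100
    full_h = np.hstack([full_uav, np.ones((len(full_uav), 1))])
    georeferenced = (T @ full_h.T).T[:, :3]
    print(T)
```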

  19. The Use of Uas for Rapid 3d Mapping in Geomatics Education

    NASA Astrophysics Data System (ADS)

    Teo, Tee-Ann; Tian-Yuan Shih, Peter; Yu, Sz-Cheng; Tsai, Fuan

    2016-06-01

    With the development of technology, UAS is an advanced technology to support rapid mapping for disaster response. The aim of this study is to develop educational modules for UAS data processing in rapid 3D mapping. The modules designed for this study focus on UAV data processing with available freeware or trial software for educational purposes. The key modules include orientation modelling, 3D point cloud generation, image georeferencing and visualization. The orientation modelling module adopts VisualSFM to determine the projection matrix for each image station; in addition, approximate ground control points measured from OpenStreetMap are used for absolute orientation. The second module uses SURE and the orientation files from the previous module for 3D point cloud generation. Then, ground point selection and digital terrain model generation can be achieved with LAStools. The third module stitches individual rectified images into a mosaic image using Microsoft ICE (Image Composite Editor). The last module visualizes and measures the generated dense point clouds in CloudCompare. These comprehensive UAS processing modules allow the students to gain the skills to process and deliver UAS photogrammetric products in rapid 3D mapping. Moreover, they can also apply the photogrammetric products for analysis in practice.

  20. Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.

    PubMed

    Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.

  21. Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface

    PubMed Central

    Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots. PMID:25101321
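
    The voxel-based flag map can be sketched as a set of occupied voxel indices that filters redundant points from each incoming frame; the voxel size and the plain Python set (instead of a GPU-friendly structure) are simplifying assumptions.

```python
# Sketch: a voxel-based "flag map" that drops redundant points while streaming
# frames into a terrain model. Voxel size is an assumed value.
import numpy as np

class VoxelFlagMap:
    def __init__(self, voxel_size=0.1):
        self.voxel_size = voxel_size
        self.flags = set()            # occupied voxel indices seen so far

    def add_frame(self, xyz):
        """Return only the points that fall into voxels not yet registered."""
        keys = np.floor(xyz / self.voxel_size).astype(int)
        fresh = []
        for p, k in zip(xyz, map(tuple, keys)):
            if k not in self.flags:
                self.flags.add(k)
                fresh.append(p)
        return np.array(fresh)

if __name__ == "__main__":
    fmap = VoxelFlagMap(voxel_size=0.2)
    frame1 = np.random.rand(5000, 3) * 20
    frame2 = np.vstack([frame1[:2500], np.random.rand(2500, 3) * 20])  # ~50% overlap
    print(len(fmap.add_frame(frame1)), "new points from frame 1")
    print(len(fmap.add_frame(frame2)), "new points from frame 2")
```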

  22. Feasibility of Smartphone Based Photogrammetric Point Clouds for the Generation of Accessibility Maps

    NASA Astrophysics Data System (ADS)

    Angelats, E.; Parés, M. E.; Kumar, P.

    2018-05-01

    Accessible cities with accessible services are an old claim of people with reduced mobility, but this demand is still far from becoming a reality, as a lot of work remains to be done. The first step towards accessible cities is to know the real situation of the cities and their pavement infrastructure. Detailed maps or databases on street slopes, access to sidewalks, mobility in public parks and gardens, etc. are required. In this paper, we propose to use smartphone-based photogrammetric point clouds as a starting point to create accessibility maps or databases. The paper analyses the performance of these point clouds and the complexity of the image acquisition procedure required to obtain them. It proves, through two test cases, that smartphone technology is an economical and feasible solution for obtaining the required information, which is quite often sought by city planners to generate accessibility maps. The proposed approach paves the way to generating, in the near term, accessibility maps from point clouds derived from crowdsourced smartphone imagery.

  23. Characterizing Sorghum Panicles using 3D Point Clouds

    NASA Astrophysics Data System (ADS)

    Lonesome, M.; Popescu, S. C.; Horne, D. W.; Pugh, N. A.; Rooney, W.

    2017-12-01

    To address the demands of population growth and the impacts of global climate change, plant breeders must increase crop yield through genetic improvement. However, plant phenotyping, the characterization of a plant's physical attributes, remains a primary bottleneck in modern crop improvement programs. 3D point clouds generated from terrestrial laser scanning (TLS) and unmanned aerial system (UAS) based structure from motion (SfM) are a promising data source for increasing the efficiency of screening plant material in breeding programs. This study develops and evaluates methods for characterizing sorghum (Sorghum bicolor) panicles (heads) in field plots from both TLS and UAS-based SfM point clouds. The TLS point cloud over an experimental sorghum field at the Texas A&M farm in Burleson County, TX, was collected using a FARO Focus X330 3D laser scanner. The SfM point cloud was generated from UAS imagery captured using a Phantom 3 Professional UAS at 10 m altitude and 85% image overlap. The panicle detection method applies point cloud reflectance, height and point density attributes characteristic of sorghum panicles to detect them and estimate their dimensions (panicle length and width) through image classification and clustering procedures. We compare the derived panicle counts and panicle sizes with field-based and manually digitized measurements in selected plots and study the strengths and limitations of each data source for sorghum panicle characterization.
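
    A hedged sketch of the panicle detection idea: threshold height and reflectance, cluster the remaining points, and report cluster dimensions. The thresholds, DBSCAN parameters and synthetic data are illustrative, not calibrated values from the study.

```python
# Sketch: detect panicle-like clusters in a plot-level point cloud by
# thresholding height and reflectance, then clustering the remaining points.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_panicles(xyz, reflectance, min_height=0.8, min_reflectance=0.5,
                    eps=0.10, min_samples=30):
    candidate = (xyz[:, 2] > min_height) & (reflectance > min_reflectance)
    pts = xyz[candidate]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    panicles = []
    for lab in set(labels) - {-1}:                 # -1 is DBSCAN noise
        cluster = pts[labels == lab]
        extent = cluster.max(axis=0) - cluster.min(axis=0)
        panicles.append({"n_points": len(cluster),
                         "length": extent[2],      # vertical extent
                         "width": max(extent[0], extent[1])})
    return panicles

if __name__ == "__main__":
    xyz = np.random.rand(20000, 3) * [10, 10, 1.5]
    refl = np.random.rand(20000)
    print(len(detect_panicles(xyz, refl)), "candidate panicles")
```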

  24. Point Cloud and Digital Surface Model Generation from High Resolution Multiple View Stereo Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Gong, K.; Fritsch, D.

    2018-05-01

    Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized, public multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore methods that can generate accurate digital surface models from a large number of high-resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a pre-processing step, we filter all possible image pairs according to incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model, and we apply a median filter to generate the fused point cloud and DSM. By comparing with the reference LiDAR DSM, the accuracy, completeness and robustness are evaluated. The results show that the point cloud reconstructs the surface, including small structures, and that the fused DSM generated by our pipeline is accurate and robust.
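
    The median-based fusion step can be sketched as a per-cell median over a stack of co-registered DSMs; the synthetic DSM stack and noise model below are assumptions used only to make the example runnable.

```python
# Sketch: fuse several per-pair DSMs (already aligned to the same grid) into a
# single DSM with a per-cell median, which is robust to matching outliers.
import numpy as np

def fuse_dsms(dsm_stack):
    """dsm_stack: (n_pairs, rows, cols) array with NaN for empty cells."""
    return np.nanmedian(dsm_stack, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.random((100, 100)) * 50
    stack = truth + rng.normal(0, 0.5, size=(8, 100, 100))    # 8 noisy DSMs
    stack[rng.random(stack.shape) < 0.1] = np.nan              # simulate gaps
    fused = fuse_dsms(stack)
    print("median absolute error:", np.nanmedian(np.abs(fused - truth)))
```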

  25. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

    A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in a 360-degree viewing zone. In order to display the real object in the 360-degree viewing zone, multiple depth cameras are utilized to acquire depth information around the object. Then, 3D point cloud representations of the real object are reconstructed according to the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model according to the angular step of the given anamorphic optic system. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display can be an excellent way to display a real object in the 360-degree viewing zone.

  26. Model for Semantically Rich Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Hallot, P.; Billen, R.

    2017-10-01

    This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing the 3D state of the recorded environment; as such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge to reason from information extraction rather than interpretation. The enhanced Smart Point Cloud model developed here brings intelligence to point clouds via three connected meta-models, linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype is implemented in Python and a PostgreSQL database, and allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.

  27. Motion Estimation System Utilizing Point Cloud Registration

    NASA Technical Reports Server (NTRS)

    Chen, Qi (Inventor)

    2016-01-01

    A system and method for estimating the motion of a machine is disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in a vicinity of the machine. The method may further include generating a first extended Gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two-dimensional distribution for points in the first EGI segment and a second two-dimensional distribution for points in the second EGI segment. The method may further include estimating motion of the machine based on the first and second two-dimensional distributions.
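
    A coarse extended Gaussian image can be sketched as a histogram of unit surface normals over spherical bins; the bin resolution and random normals below are assumptions, and the segmentation and 2D-distribution steps of the patented method are not shown.

```python
# Sketch: build a coarse extended Gaussian image (EGI) by histogramming unit
# surface normals over spherical (polar, azimuth) bins.
import numpy as np

def extended_gaussian_image(normals, n_theta=18, n_phi=36):
    """Histogram of unit normals over (polar, azimuth) bins."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    theta = np.arccos(np.clip(n[:, 2], -1.0, 1.0))          # polar angle [0, pi]
    phi = np.mod(np.arctan2(n[:, 1], n[:, 0]), 2 * np.pi)   # azimuth [0, 2pi)
    egi, _, _ = np.histogram2d(theta, phi,
                               bins=[n_theta, n_phi],
                               range=[[0, np.pi], [0, 2 * np.pi]])
    return egi

if __name__ == "__main__":
    normals = np.random.randn(10000, 3)   # placeholder per-point normals
    egi = extended_gaussian_image(normals)
    print(egi.shape, egi.sum())
```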

  28. Comparative Analysis of Data Structures for Storing Massive Tins in a Dbms

    NASA Astrophysics Data System (ADS)

    Kumar, K.; Ledoux, H.; Stoter, J.

    2016-06-01

    Point cloud data are an important source of 3D geoinformation. Modern-day 3D data acquisition and processing techniques such as airborne laser scanning and multi-beam echosounding generate billions of 3D points for an area of just a few square kilometers. With the size of point clouds exceeding the billion mark for even a small area, there is a need for their efficient storage and management; these point clouds are sometimes associated with attributes and constraints as well. Storing billions of 3D points is currently possible, which is confirmed by the initial implementations in Oracle Spatial SDO PC and the PostgreSQL Point Cloud extension. But to be able to analyse and extract useful information from point clouds, we need more than just points, i.e. we require the surface defined by these points in space. There are different ways to represent surfaces in GIS, including grids, TINs, boundary representations, etc. In this study, we investigate database solutions for the storage and management of massive TINs. The classical (face- and edge-based) and compact (star-based) data structures are discussed at length with reference to their structure, advantages and limitations in handling massive triangulations, and are compared with the current solution of PostGIS Simple Feature. The main test dataset is the TIN generated from the third national elevation model of the Netherlands (AHN3), with a point density of over 10 points/m². PostgreSQL/PostGIS DBMS is used for storing the generated TIN. The data structures are tested with the generated TIN models to account for their geometry, topology, storage, indexing, and loading time in a database. Our study is useful in identifying the limitations of the existing data structures for storing massive TINs and what is required to optimise these structures for managing massive triangulations in a database.
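
    A minimal sketch of building a TIN and deriving the vertex "stars" (the per-vertex neighbour lists at the heart of the star-based structure), using SciPy's Delaunay triangulation on synthetic 2.5D points; the database loading itself is not shown.

```python
# Sketch: build a TIN from 2.5D points with a Delaunay triangulation and derive
# each vertex's "star" (its set of neighbouring vertices).
import numpy as np
from scipy.spatial import Delaunay

def build_tin(xyz):
    # Triangulate in the XY plane; Z stays attached to the vertices.
    return Delaunay(xyz[:, :2])

def vertex_stars(tri):
    """Return a list mapping each vertex to the indices of its neighbours."""
    indptr, indices = tri.vertex_neighbor_vertices
    return [indices[indptr[v]:indptr[v + 1]] for v in range(len(tri.points))]

if __name__ == "__main__":
    pts = np.random.rand(10000, 3) * [1000, 1000, 50]
    tin = build_tin(pts)
    stars = vertex_stars(tin)
    print("triangles:", len(tin.simplices),
          "avg star size:", np.mean([len(s) for s in stars]))
```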

  29. Geospatial Field Methods: An Undergraduate Course Built Around Point Cloud Construction and Analysis to Promote Spatial Learning and Use of Emerging Technology in Geoscience

    NASA Astrophysics Data System (ADS)

    Bunds, M. P.

    2017-12-01

    Point clouds are a powerful data source in the geosciences, and the emergence of structure-from-motion (SfM) photogrammetric techniques has allowed them to be generated quickly and inexpensively. Consequently, applications of them as well as methods to generate, manipulate, and analyze them warrant inclusion in undergraduate curriculum. In a new course called Geospatial Field Methods at Utah Valley University, students in small groups use SfM to generate a point cloud from imagery collected with a small unmanned aerial system (sUAS) and use it as a primary data source for a research project. Before creating their point clouds, students develop needed technical skills in laboratory and class activities. The students then apply the skills to construct the point clouds, and the research projects and point cloud construction serve as a central theme for the class. Intended student outcomes for the class include: technical skills related to acquiring, processing, and analyzing geospatial data; improved ability to carry out a research project; and increased knowledge related to their specific project. To construct the point clouds, students first plan their field work by outlining the field site, identifying locations for ground control points (GCPs), and loading them onto a handheld GPS for use in the field. They also estimate sUAS flight elevation, speed, and the flight path grid spacing required to produce a point cloud with the resolution required for their project goals. In the field, the students place the GCPs using handheld GPS, and survey the GCP locations using post-processed-kinematic (PPK) or real-time-kinematic (RTK) methods. The students pilot the sUAS and operate its camera according to the parameters that they estimated in planning their field work. Data processing includes obtaining accurate locations for the PPK/RTK base station and GCPs, and SfM processing with Agisoft Photoscan. The resulting point clouds are rasterized into digital surface models, assessed for accuracy, and analyzed in Geographic Information System software. Student projects have included mapping and analyzing landslide morphology, fault scarps, and earthquake ground surface rupture. Students have praised the geospatial skills they learn, whereas helping them stay on schedule to finish their projects is a challenge.

  30. Comparison of roadway roughness derived from LIDAR and SFM 3D point clouds.

    DOT National Transportation Integrated Search

    2015-10-01

    This report describes a short-term study undertaken to investigate the potential for using dense three-dimensional (3D) point clouds generated from light detection and ranging (LIDAR) and photogrammetry to assess roadway roughness. Spatially cont...

  31. Cross Validation on the Equality of Uav-Based and Contour-Based Dems

    NASA Astrophysics Data System (ADS)

    Ma, R.; Xu, Z.; Wu, L.; Liu, S.

    2018-04-01

    Unmanned Aerial Vehicles (UAV) have been widely used for Digital Elevation Model (DEM) generation in geographic applications. This paper proposes a novel framework for generating DEMs from UAV images. It starts with the generation of point clouds by image matching, where the flight control data are used as a reference for searching for corresponding images, leading to a significant time saving. In addition, a set of ground control points (GCPs) obtained from field surveying is used to transform the point clouds to the user's coordinate system. Following that, we use a multi-feature based supervised classification method for discriminating non-ground points from ground ones. In the end, we generate the DEM by constructing triangular irregular networks and rasterization. The experiments are conducted in the east of Jilin province in China, which has suffered from soil erosion for several years. The quality of the UAV-based DEM (UAV-DEM) is compared with that generated from contour interpolation (Contour-DEM). The comparison shows the higher resolution, as well as higher accuracy, of the UAV-DEMs, which contain more geographic information. In addition, the RMSE errors of the UAV-DEMs generated from point clouds with and without GCPs are ±0.5 m and ±20 m, respectively.

  32. Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Chang, H.; Lin, Y.-W.

    2017-08-01

    This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, the 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region; a "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries, integrating the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and to point clouds of a local historical structure. Preliminary results indicated that hybrid models reconstructed using the proposed method retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets at different levels of detail according to user and system requirements in different applications.

  33. Comparing and characterizing three-dimensional point clouds derived by structure from motion photogrammetry

    NASA Astrophysics Data System (ADS)

    Schwind, Michael

    Structure from Motion (SfM) is a photogrammetric technique whereby three-dimensional (3D) structures are estimated from overlapping two-dimensional (2D) image sequences. It is studied in the field of computer vision and utilized in fields such as archeology, engineering, and the geosciences. Currently, many SfM software packages exist that allow for the generation of 3D point clouds, but little work has been done to show how topographic data generated by these software packages differ over varying terrain types and why they might produce different results. This work aims to compare and characterize the differences between point clouds generated by three different SfM software packages: two well-known proprietary solutions (Pix4D, Agisoft PhotoScan) and one open-source solution (OpenDroneMap). Five terrain types were imaged utilizing a DJI Phantom 3 Professional small unmanned aircraft system (sUAS): a marsh environment, a gently sloped sandy beach and jetties, a forested peninsula, a house, and a flat parking lot. Each set of imagery was processed with each software package and the results were directly compared to each other. Before processing the sets of imagery, the software settings were analyzed and chosen to be as similar as possible across the three software types, in an attempt to minimize point cloud differences caused by dissimilar settings. The characteristics of the resultant point clouds were then compared with each other. Furthermore, a terrestrial light detection and ranging (LiDAR) survey was conducted over the flat parking lot using a Riegl VZ-400 scanner; these data served as ground truth for an accuracy assessment of the sUAS-SfM point clouds. Differences were found between the results, apparent not only in the characteristics of the clouds but also in their accuracy. This study allows users of SfM photogrammetry to better understand how different processing software packages compare and the inherent sensitivity of SfM automation in 3D reconstruction. Because this study used mostly default settings within the software, it would be beneficial for further research to investigate the effects that changing parameters have on the fidelity of point cloud datasets generated from different SfM software packages.

  34. Nanosatellite Maneuver Planning for Point Cloud Generation With a Rangefinder

    DTIC Science & Technology

    2015-06-05

    aided active vision systems [11], dense stereo [12], and TriDAR [13]. However, these systems are unsuitable for a nanosatellite system from power, size... command profiles as well as improving the fidelity of gap detection with better filtering methods for background objects. For example, attitude... application of a single beam laser rangefinder (LRF) to point cloud generation, shape detection, and shape reconstruction for a space-based space

  35. Semantic Segmentation of Indoor Point Clouds Using Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Babacan, K.; Chen, L.; Sohn, G.

    2017-11-01

    As Building Information Modelling (BIM) thrives, geometry is no longer sufficient; an ever-increasing variety of semantic information is needed to express an indoor model adequately. On the other hand, for existing buildings, automatically generating semantically enriched BIM from point cloud data is in its infancy. Previous research to enhance semantic content relies on frameworks in which specific rules and/or features are hand-coded by specialists. These methods inherently lack generalization and easily break in different circumstances. On this account, a generalized framework is urgently needed to automatically and accurately generate semantic information. We therefore propose to employ deep learning techniques for the semantic segmentation of point clouds into meaningful parts. More specifically, we build a volumetric data representation in order to efficiently generate the high number of training samples needed to initiate a convolutional neural network architecture. Feedforward propagation is used to perform the classification at the voxel level, achieving semantic segmentation. The method is tested both on a mobile laser scanner point cloud and on larger-scale synthetically generated data. We also demonstrate a case study, in which our method can be effectively used to leverage the extraction of planar surfaces in challenging, cluttered indoor environments.
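
    The volumetric representation can be sketched as a per-patch voxel occupancy grid used as a CNN training sample; the grid resolution, patch size and synthetic points are assumptions, and the network itself is omitted.

```python
# Sketch: convert a point-cloud patch into a voxel occupancy grid, the kind of
# volumetric training sample described above. Resolution and patch size assumed.
import numpy as np

def voxelize_patch(xyz, grid=32, patch_size=2.0):
    """Occupancy grid (grid x grid x grid) centred on the patch centroid."""
    centred = xyz - xyz.mean(axis=0)
    ijk = np.floor((centred + patch_size / 2) / (patch_size / grid)).astype(int)
    inside = np.all((ijk >= 0) & (ijk < grid), axis=1)   # drop points outside the patch
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    vox[tuple(ijk[inside].T)] = 1.0
    return vox

if __name__ == "__main__":
    patch = np.random.randn(5000, 3) * 0.5       # synthetic indoor patch
    sample = voxelize_patch(patch)
    print(sample.shape, "occupied voxels:", int(sample.sum()))
```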

  16. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Lohani, B.

    2014-05-01

    Mobile augmented reality is a next generation technology for visualising the 3D real world intelligently. The technology is expanding at a fast pace to upgrade the status of a smart phone to an intelligent device. The research problem identified and presented in the current work is to view the actual dimensions of various objects that are captured by a smart phone in real time. The proposed methodology first establishes correspondence between the LiDAR point cloud, which is stored on a server, and the image that is captured by a mobile phone. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR data points which lie in the viewshed of the mobile camera. A pseudo intensity image is generated using the LiDAR points and their intensity. The mobile image and the pseudo intensity image are then registered using the SIFT image registration method, thereby generating a pipeline to locate the point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information corresponding to the pairs of points selected on the mobile image and overlays the dimensions on top of the image. This paper describes all steps of the proposed method. The paper uses an experimental setup to mimic the mobile phone and server system and presents some initial but encouraging results.

  17. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    NASA Astrophysics Data System (ADS)

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research, we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, and these panoramas can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure from motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects using a plane finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, it is currently a manual and labor-intensive process. Research is being carried out to increase the degree of automation of these procedures.

  18. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range

    PubMed Central

    Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming

    2016-01-01

    Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of a suitable sensor on board the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model by using point cloud data generated by a flash LIDAR sensor. A novel model based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching that processes the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point algorithm. Also, a simulation system is presented in this paper in order to evaluate the performance of the sensor and generate simulated sensor point cloud data. It also provides the ground truth pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments are performed; the results demonstrate the algorithm's capability of operating directly on point clouds and handling large pose variations. Also, a field testing experiment is conducted and the results show that the proposed method is effective. PMID:27271633
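
    The pose tracking stage relies on the Iterative Closest Point algorithm; the sketch below is a bare-bones ICP (nearest-neighbour association plus an SVD-based rigid fit), offered only as an illustration of that step and not as the authors' implementation, and it omits the global initial acquisition.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            H = (src - cs).T @ (dst - cd)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(source, target, iterations=30):
            """Align 'source' (sensor cloud) to 'target' (known model) by iterating NN matching and rigid fitting."""
            tree = cKDTree(target)
            src = source.copy()
            R_total, t_total = np.eye(3), np.zeros(3)
            for _ in range(iterations):
                _, idx = tree.query(src)                       # closest model point for each sensor point
                R, t = best_rigid_transform(src, target[idx])
                src = src @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
            return R_total, t_total                            # transform aligning the sensor cloud with the model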

  19. Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans

    NASA Astrophysics Data System (ADS)

    Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj

    2016-06-01

    This work deals with the development of an algorithm for the physical replication of patient specific human bone and the construction of corresponding implant/insert RP models using a Reverse Engineering approach from non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e. voxel and triangular facet based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other hand, recent advances in Computer Aided Design (CAD) technology provide additional facilities/functions for the design, prototyping and manufacturing of any object having freeform surfaces based on boundary representation techniques. This work presents a process for the physical replication of human bone as 3D rapid prototyping (RP) models using various CAD modeling techniques based on 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. This point cloud data is used for the construction of a 3D CAD model by fitting B-spline curves through the points and then fitting surfaces between these curve networks using swept blend techniques. This can also be achieved by generating a triangular mesh directly from the 3D point cloud data, without developing any surface model in commercial CAD software. The STL file generated from the 3D point cloud data is used as the basic input for the RP process. The Delaunay tetrahedralization approach is used to process the 3D point cloud data to obtain the STL file. CT scan data of a metacarpus (human bone) is used as the case study for the generation of the 3D RP model. A 3D physical model of the human bone is generated on a rapid prototyping machine and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability. The results of this research work are assessed for clinical reliability in the replication of human bone in the medical field.

  20. Colour computer-generated holography for point clouds utilizing the Phong illumination model.

    PubMed

    Symeonidou, Athanasia; Blinder, David; Schelkens, Peter

    2018-04-16

    A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm provides realistic looking objects without any noteworthy increase in the computational cost.
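
    The Phong illumination model referred to above combines ambient, diffuse and specular terms per point; the snippet below is a minimal per-point shading sketch, with the light position, viewer position and material coefficients chosen purely for illustration.

        import numpy as np

        def phong_shade(points, normals, light_pos, view_pos,
                        ka=0.1, kd=0.7, ks=0.2, shininess=16.0):
            """Per-point Phong intensity I = ka + kd*(N.L) + ks*(R.V)^n for a white light source."""
            L = light_pos - points
            L /= np.linalg.norm(L, axis=1, keepdims=True)
            V = view_pos - points
            V /= np.linalg.norm(V, axis=1, keepdims=True)
            N = normals / np.linalg.norm(normals, axis=1, keepdims=True)
            diff = np.clip(np.sum(N * L, axis=1), 0.0, None)   # Lambertian term N.L
            R = 2.0 * diff[:, None] * N - L                    # reflection of L about N
            spec = np.clip(np.sum(R * V, axis=1), 0.0, None) ** shininess
            return ka + kd * diff + ks * spec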

  1. Automatic Generation of Indoor Navigable Space Using a Point Cloud and its Scanner Trajectory

    NASA Astrophysics Data System (ADS)

    Staats, B. R.; Diakité, A. A.; Voûte, R. L.; Zlatanova, S.

    2017-09-01

    Automatic generation of indoor navigable models is mostly based on 2D floor plans. However, in many cases the floor plans are out of date. Buildings are not always built according to their blueprints, interiors might change after a few years because of modified walls and doors, and furniture may be repositioned to the user's preferences. Therefore, new approaches for the quick recording of indoor environments should be investigated. This paper concentrates on laser scanning with a Mobile Laser Scanner (MLS) device. The MLS device stores a point cloud and its trajectory. If the MLS device is operated by a human, the trajectory contains information which can be used to distinguish different surfaces. In this paper a method is presented for the identification of walkable surfaces based on the analysis of the point cloud and the trajectory of the MLS scanner. This method consists of several steps. First, the point cloud is voxelized. Second, the trajectory is analysed and projected to acquire seed voxels. Third, these seed voxels are grown into floor regions by means of a region growing process. By identifying dynamic objects, doors and furniture, these floor regions can be modified so that each region represents a specific navigable space inside a building as a free navigable voxel space. By combining the point cloud and its corresponding trajectory, the walkable space can be identified for any type of building even if the interior is scanned during business hours.
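
    The region growing step over voxels can be pictured as a flood fill from the trajectory-derived seed voxels; the snippet below is an illustrative simplification (26-connectivity, a boolean occupancy grid and a list of seed indices are assumed inputs, not the paper's exact formulation).

        import numpy as np
        from collections import deque
        from itertools import product

        def grow_floor_region(occupied, seeds):
            """Grow walkable floor voxels outward from seed voxels through occupied neighbours."""
            region = np.zeros_like(occupied, dtype=bool)
            queue = deque(seeds)                               # seeds: list of (i, j, k) tuples
            offsets = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]
            while queue:
                v = queue.popleft()
                if region[v] or not occupied[v]:
                    continue
                region[v] = True
                for o in offsets:                              # 26-connected neighbourhood
                    n = tuple(np.add(v, o))
                    inside = all(0 <= n[d] < occupied.shape[d] for d in range(3))
                    if inside and occupied[n] and not region[n]:
                        queue.append(n)
            return region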

  2. 3D Building Reconstruction by Multiview Images and the Integrated Application with Augmented Reality

    NASA Astrophysics Data System (ADS)

    Hwang, Jin-Tsong; Chu, Ting-Chen

    2016-10-01

    This study presents an approach wherein photographs with a high degree of overlap are captured using a digital camera and used to generate three-dimensional (3D) point clouds via feature point extraction and matching. To reconstruct a building model, an unmanned aerial vehicle (UAV) is used to capture photographs from vertical shooting angles above the building. Multiview images are taken from the ground to eliminate the shielding effect on UAV images caused by trees. Point clouds from the UAV and multiview images are generated via Pix4Dmapper. By merging the two sets of point clouds via tie points, the complete building model is reconstructed. The 3D models are reconstructed using AutoCAD 2016 to generate vectors from the point clouds; SketchUp Make 2016 is used to rebuild a complete building model with textures. To apply 3D building models in urban planning and design, a modern approach is to rebuild the digital models; however, replacing the landscape design and building distribution in real time is difficult as the frequency of building replacement increases. One potential solution to these problems is augmented reality (AR). Using Unity3D and Vuforia to design and implement the smartphone application service, a markerless AR of the building model can be built. This study is aimed at providing technical and design skills related to urban planning, urban designing, and building information retrieval using AR.

  3. Point Cloud Based Change Detection - an Automated Approach for Cloud-based Services

    NASA Astrophysics Data System (ADS)

    Collins, Patrick; Bahr, Thomas

    2016-04-01

    The fusion of stereo photogrammetric point clouds with LiDAR data or terrain information derived from SAR interferometry has a significant potential for 3D topographic change detection. In the present case study latest point cloud generation and analysis capabilities are used to examine a landslide that occurred in the village of Malin in Maharashtra, India, on 30 July 2014, and affected an area of ca. 44,000 m². It focuses on Pléiades high resolution satellite imagery and the Airbus DS WorldDEM™ as a product of the TanDEM-X mission. This case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language). Thus, ENVI analytics runs via the object-oriented and IDL-based ENVITask API. The pre-event topography is represented by the WorldDEM™ product, delivered with a raster of 12 m x 12 m and based on the EGM2008 geoid (called pre-DEM). For the post-event situation a Pléiades 1B stereo image pair of the affected AOI was obtained. The ENVITask "GeneratePointCloudsByDenseImageMatching" was implemented to extract passive point clouds in LAS format from the panchromatic stereo datasets: • A dense image-matching algorithm is used to identify corresponding points in the two images. • A block adjustment is applied to refine the 3D coordinates that describe the scene geometry. • Additionally, the WorldDEM™ was input to constrain the range of heights in the matching area, and subsequently the length of the epipolar line. The "PointCloudFeatureExtraction" task was executed to generate the post-event digital surface model from the photogrammetric point clouds (called post-DEM). Post-processing consisted of the following steps: • Adding the geoid component (EGM 2008) to the post-DEM. • Pre-DEM reprojection to the UTM Zone 43N (WGS-84) coordinate system and resizing. • Subtraction of the pre-DEM from the post-DEM. • Filtering and threshold based classification of the DEM difference to analyze the surface changes in 3D. The automated point cloud generation and analysis introduced here can be embedded in virtually any existing geospatial workflow for operational applications. Three integration options were implemented in this case study: • Integration within any ArcGIS environment whether deployed on the desktop, in the cloud, or online. Execution uses a customized ArcGIS script tool. A Python script file retrieves the parameters from the user interface and runs the precompiled IDL code. That IDL code is used to interface between the Python script and the relevant ENVITasks. • Publishing the point cloud processing tasks as services via the ENVI Services Engine (ESE). ESE is a cloud-based image analysis solution to publish and deploy advanced ENVI image and data analytics to existing enterprise infrastructures. For this purpose the entire IDL code can be encapsulated in a single ENVITask. • Integration in an existing geospatial workflow using the Python-to-IDL Bridge. This mechanism allows calling IDL code within Python on a user-defined platform. The results of this case study allow a 3D estimation of the topographic changes within the tectonically active and anthropogenically invaded Malin area after the landslide event. Accordingly, the point cloud analysis was correlated successfully with modelled displacement contours of the slope.
Based on optical satellite imagery, such point clouds of high precision and density distribution can be obtained in a few minutes to support the operational monitoring of landslide processes.
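
    The differencing and thresholding at the end of this workflow can be sketched in a few lines; the example below assumes the pre- and post-event DEMs are already co-registered NumPy arrays on a common grid, and the 2 m change threshold is a placeholder rather than a value from the study.

        import numpy as np

        def dem_change(pre_dem, post_dem, threshold=2.0):
            """Subtract the pre-event DEM from the post-event DEM and classify significant change."""
            diff = post_dem - pre_dem
            valid = ~np.isnan(diff)
            change = np.zeros_like(diff, dtype=np.int8)        # 0 = no significant change
            change[valid & (diff >  threshold)] = 1            # accumulation / deposition
            change[valid & (diff < -threshold)] = -1           # loss / erosion
            return diff, change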

  4. Joint classification and contour extraction of large 3D point clouds

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows us both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. The methods are tailored to have fast run time and small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with more than 10^9 points.
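
    Multi-scale per-point neighbourhoods of the kind described are commonly summarized with covariance eigenvalue features (linearity, planarity, scattering); the sketch below illustrates the idea, with the radii chosen arbitrarily rather than taken from the paper, and without any claim of matching its feature set.

        import numpy as np
        from scipy.spatial import cKDTree

        def covariance_features(points, radii=(0.25, 0.5, 1.0)):
            """Per-point eigenvalue features at several neighbourhood scales."""
            tree = cKDTree(points)
            feats = []
            for r in radii:
                scale_feats = np.zeros((len(points), 3))
                for i, nbr in enumerate(tree.query_ball_point(points, r)):
                    if len(nbr) < 3:
                        continue
                    l3, l2, l1 = np.sort(np.linalg.eigvalsh(np.cov(points[nbr].T)))  # l1 >= l2 >= l3
                    if l1 <= 1e-12:
                        continue
                    scale_feats[i] = [(l1 - l2) / l1,      # linearity
                                      (l2 - l3) / l1,      # planarity
                                      l3 / l1]             # scattering
                feats.append(scale_feats)
            return np.hstack(feats)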

  5. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the resulting digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility not only by the scientific community but also by the original authors themselves.
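
    Decimation of a dense UAV point cloud can, for example, be expressed as a PDAL pipeline; the JSON below is a generic illustration assuming the PDAL Python bindings are installed, with placeholder file names and an arbitrary step value, and it is not taken from the toolset described above.

        import json
        import pdal  # PDAL Python bindings

        # Hypothetical decimation pipeline: keep every 10th point of a UAV-SfM cloud.
        pipeline_def = {
            "pipeline": [
                "uav_sfm_points.las",                      # placeholder input file
                {"type": "filters.decimation", "step": 10},
                "uav_sfm_points_decimated.las"             # placeholder output file
            ]
        }

        pipeline = pdal.Pipeline(json.dumps(pipeline_def))
        n_points = pipeline.execute()                      # number of points processed
        print(n_points, "points written after decimation")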

  6. Direct local building inundation depth determination in 3-D point clouds generated from user-generated flood images

    NASA Astrophysics Data System (ADS)

    Griesbaum, Luisa; Marx, Sabrina; Höfle, Bernhard

    2017-07-01

    In recent years, the number of people affected by flooding caused by extreme weather events has increased considerably. In order to provide support in disaster recovery or to develop mitigation plans, accurate flood information is necessary. Particularly pluvial urban floods, characterized by high temporal and spatial variations, are not well documented. This study proposes a new, low-cost approach to determining local flood elevation and inundation depth of buildings based on user-generated flood images. It first applies close-range digital photogrammetry to generate a geo-referenced 3-D point cloud. Second, based on estimated camera orientation parameters, the flood level captured in a single flood image is mapped to the previously derived point cloud. The local flood elevation and the building inundation depth can then be derived automatically from the point cloud. The proposed method is carried out once for each of 66 different flood images showing the same building façade. An overall accuracy of 0.05 m with an uncertainty of ±0.13 m for the derived flood elevation within the area of interest as well as an accuracy of 0.13 m ± 0.10 m for the determined building inundation depth is achieved. Our results demonstrate that the proposed method can provide reliable flood information on a local scale using user-generated flood images as input. The approach can thus allow inundation depth maps to be derived even in complex urban environments with relatively high accuracies.

  7. Self-Similar Spin Images for Point Cloud Matching

    NASA Astrophysics Data System (ADS)

    Pulido, Daniel

    The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds has allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) has provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research will address the problem of fusing two point clouds from potentially different sources. Specifically, we will consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds. The specific focus of this research will be in developing a novel local descriptor based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two point clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and a stereo-image derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, and thereby define the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case, being just the maximum order statistic. Therefore, studying the entire histogram of these nearest-neighbor distances is expected to yield a more robust method to detect points that are present in one cloud but not the other. This approach is applied at multiple resolutions. 
Therefore, changes detected at the coarsest level will yield large missing targets and at finer levels will yield smaller targets.
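
    The "Nearest Neighbor Order Statistic" idea can be sketched as follows: compute every point's nearest-neighbour distance to the other cloud and look at the whole distribution rather than only its maximum (the Hausdorff case). The snippet is an illustrative simplification, not the dissertation's implementation; the quantile levels and the 0.99 change cut-off are arbitrary.

        import numpy as np
        from scipy.spatial import cKDTree

        def nn_order_statistics(cloud_a, cloud_b, quantiles=(0.5, 0.9, 0.99, 1.0)):
            """Order statistics of nearest-neighbour distances from cloud_a to cloud_b."""
            d = cKDTree(cloud_b).query(cloud_a)[0]
            stats = np.quantile(d, quantiles)          # the 1.0 quantile is the Hausdorff-style maximum
            changed = d > np.quantile(d, 0.99)         # flag points likely absent from cloud_b
            return stats, changed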

  8. Point Cloud Classification of Tesserae from Terrestrial Laser Data Combined with Dense Image Matching for Archaeological Information Extraction

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Billen, R.

    2017-08-01

    Reasoning from information extraction given by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features including the sensor's bias data, each tessera in the high-density point cloud from the 3D captured complex mosaics of Germigny-des-prés (France) is segmented via a colour multi-scale abstraction-based feature extracting connectivity. A 2D surface and outline polygon of each tessera are generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify each tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
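
    The per-tessera plane extraction and outline fitting can be sketched with a small RANSAC loop followed by a 2D convex hull; the snippet below is a generic stand-in for that step, not the knowledge-based pipeline of the paper, and the iteration count and distance tolerance are arbitrary.

        import numpy as np
        from scipy.spatial import ConvexHull

        def ransac_plane(points, iters=200, tol=0.002, seed=0):
            """Fit a plane n.x + d = 0 to a tessera by RANSAC; returns (n, d) and an inlier mask."""
            rng = np.random.default_rng(seed)
            best_inliers = np.zeros(len(points), dtype=bool)
            best = (np.array([0.0, 0.0, 1.0]), 0.0)
            for _ in range(iters):
                p = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p[1] - p[0], p[2] - p[0])
                if np.linalg.norm(n) < 1e-12:
                    continue                                   # degenerate sample
                n = n / np.linalg.norm(n)
                d = -n @ p[0]
                inliers = np.abs(points @ n + d) < tol
                if inliers.sum() > best_inliers.sum():
                    best_inliers, best = inliers, (n, d)
            return best, best_inliers

        def outline_polygon(points, normal):
            """Project inliers onto the plane and return the convex hull outline in 2D plane coordinates."""
            u = np.cross(normal, [0.0, 0.0, 1.0])
            if np.linalg.norm(u) < 1e-6:                       # normal parallel to z: pick another axis
                u = np.cross(normal, [0.0, 1.0, 0.0])
            u = u / np.linalg.norm(u)
            v = np.cross(normal, u)
            uv = np.c_[points @ u, points @ v]
            return uv[ConvexHull(uv).vertices]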

  9. Automatic Recognition of Indoor Navigation Elements from Kinect Point Clouds

    NASA Astrophysics Data System (ADS)

    Zeng, L.; Kang, Z.

    2017-09-01

    This paper automatically recognizes the navigation elements defined by the IndoorGML data standard: doors, stairways and walls. The data used are indoor 3D point clouds collected by a Kinect v2 by means of ORB-SLAM. By contrast, it is cheaper and more convenient than lidar, but the point clouds also suffer from noise, registration error and large data volume. Hence, we adopt a shape descriptor proposed by Osada - the histogram of distances between two randomly chosen points - merged with other descriptors and used in conjunction with a random forest classifier to recognize the navigation elements (doors, stairways and walls) from Kinect point clouds. This research acquires navigation elements and their 3D location information from each single data frame through segmentation of point clouds, boundary extraction, feature calculation and classification. Finally, this paper utilizes the acquired navigation elements and their information to automatically generate the state data of the indoor navigation module. The experimental results demonstrate a high recognition accuracy of the proposed method.
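
    Osada's shape-distribution descriptor mentioned above is essentially a histogram of distances between randomly sampled point pairs (the D2 distribution); a minimal sketch follows, with the pair count and bin count chosen arbitrarily.

        import numpy as np

        def d2_descriptor(points, n_pairs=10000, bins=64, seed=0):
            """D2 shape distribution: normalized histogram of random point-pair distances."""
            rng = np.random.default_rng(seed)
            i = rng.integers(0, len(points), n_pairs)
            j = rng.integers(0, len(points), n_pairs)
            d = np.linalg.norm(points[i] - points[j], axis=1)
            hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max()), density=True)
            return hist  # this feature vector could, e.g., feed a random forest classifier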

  10. Large-Scale Point-Cloud Visualization through Localized Textured Surface Reconstruction.

    PubMed

    Arikan, Murat; Preiner, Reinhold; Scheiblauer, Claus; Jeschke, Stefan; Wimmer, Michael

    2014-09-01

    In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.

  11. A building extraction approach for Airborne Laser Scanner data utilizing the Object Based Image Analysis paradigm

    NASA Astrophysics Data System (ADS)

    Tomljenovic, Ivan; Tiede, Dirk; Blaschke, Thomas

    2016-10-01

    In the past two decades Object-Based Image Analysis (OBIA) has established itself as an efficient approach for the classification and extraction of information from remote sensing imagery and, increasingly, from non-image based sources such as Airborne Laser Scanner (ALS) point clouds. ALS data is represented in the form of a point cloud with recorded multiple returns and intensities. In our work, we combined OBIA with ALS point cloud data in order to identify and extract buildings as 2D polygons representing roof outlines in a top down mapping approach. We rasterized the ALS data into a height raster for the purpose of generating a Digital Surface Model (DSM) and a derived Digital Elevation Model (DEM). Further objects were generated in conjunction with point statistics from the linked point cloud. With the use of class modelling methods, we generated the final target class of objects representing buildings. The approach was developed for a test area in Biberach an der Riß (Germany). In order to point out the possibilities of adaptation-free transferability to another data set, the algorithm has been applied "as is" to the ISPRS Benchmarking data set of Toronto (Canada). The obtained results show high accuracies for the initial study area (thematic accuracies of around 98%, geometric accuracy of above 80%). The very high performance within the ISPRS Benchmark without any modification of the algorithm and without any adaptation of parameters is particularly noteworthy.

  12. Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor

    NASA Astrophysics Data System (ADS)

    Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul

    2017-05-01

    Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation makes it possible to reduce the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex since propagation of elemental polygons between non-parallel planes has to be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth layer approach can also be adopted. This technique is appropriate for fast computation since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth layer approach is convenient for real time applications, but the point cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
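
    FFT-based plane-to-plane propagation of the kind used in the depth layer approach can be sketched with the angular spectrum method, one common choice; the wavelength, pixel pitch and layer depths below are placeholder values and this is not the authors' code.

        import numpy as np

        def angular_spectrum(field, wavelength, pitch, z):
            """Propagate a complex optical field over distance z between parallel planes."""
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=pitch)
            fy = np.fft.fftfreq(ny, d=pitch)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 / wavelength**2 - FX**2 - FY**2
            phase = 2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))   # evanescent components dropped
            return np.fft.ifft2(np.fft.fft2(field) * np.exp(phase))

        # Hypothetical depth-layer hologram: sum the propagated contribution of each layer.
        # hologram = sum(angular_spectrum(layer, 633e-9, 8e-6, z) for layer, z in layers)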

  13. Photogrammetric DSM denoising

    NASA Astrophysics Data System (ADS)

    Nex, F.; Gerke, M.

    2014-08-01

    Image matching techniques can nowadays provide very dense point clouds and they are often considered a valid alternative to LiDAR point clouds. However, photogrammetric point clouds are often characterized by a higher level of random noise compared to LiDAR data and by the presence of large outliers. These problems constitute a limitation in the practical use of photogrammetric data for many applications, but an effective way to enhance the generated point cloud has yet to be found. In this paper we concentrate on the restoration of Digital Surface Models (DSM), computed from dense image matching point clouds. A photogrammetric DSM, i.e. a 2.5D representation of the surface, is still one of the major products derived from point clouds. Four different algorithms devoted to DSM denoising are presented: a standard median filter approach, a bilateral filter, a variational approach (TGV: Total Generalized Variation), as well as a newly developed algorithm, which is embedded into a Markov Random Field (MRF) framework and optimized through graph-cuts. The ability of each algorithm to recover the original DSM has been quantitatively evaluated. To do that, a synthetic DSM has been generated and different typologies of noise have been added to mimic the typical errors of photogrammetric DSMs. The evaluation reveals that standard filters like the median and edge preserving smoothing through a bilateral filter approach cannot sufficiently remove the typical errors occurring in a photogrammetric DSM. The TGV-based approach removes random noise much better, but large areas with outliers still remain. Our own method, which explicitly models the degradation properties of those DSMs, outperforms the others in all aspects.
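
    The simplest of the evaluated baselines, a median filter on the DSM raster, can be applied in one call; the snippet below is only an illustrative baseline with an arbitrary window size, not the MRF/graph-cut method proposed in the paper.

        import numpy as np
        from scipy.ndimage import median_filter

        def denoise_dsm(dsm, window=5):
            """Baseline DSM denoising: square median filter, with NaN cells filled beforehand."""
            filled = np.where(np.isnan(dsm), np.nanmedian(dsm), dsm)
            return median_filter(filled, size=window)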

  14. Automatic registration of Iphone images to LASER point clouds of the urban structures using shape features

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.

    2013-10-01

    Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two types of data from different sensor sources. We use iPhone camera images taken by the application user in front of the urban structure of interest, and high resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capturing position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image which has only grayscale intensity levels according to the distance from the image acquisition position. We benefit from local features for registering the iPhone image to the generated range image. In this article, we have applied the registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh which is generated from the LIDAR point cloud. Our experimental results indicate possible usage of the proposed algorithm framework for 3D urban map updating and enhancing purposes.

  15. Classification by Using Multispectral Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Liao, C. T.; Huang, H. H.

    2012-07-01

    Remote sensing images are generally recorded in two-dimensional format containing multispectral information. Also, the semantic information is clearly visualized, so that ground features can easily be recognized and classified via supervised or unsupervised classification methods. Nevertheless, the shortcomings of multispectral images are their strong dependence on light conditions and the lack of three-dimensional semantic information in the classification results. On the other hand, LiDAR has become a main technology for acquiring high accuracy point cloud data. The advantages of LiDAR are its high data acquisition rate, independence from light conditions and ability to directly produce three-dimensional coordinates. However, compared with multispectral images, the disadvantage is the shortage of multispectral information, which remains a challenge for ground feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible light and near infrared images via close range photogrammetry, matching the images automatically through a free online service to generate a multispectral point cloud. Then, a three-dimensional affine coordinate transformation is used to compare the data increment. Finally, thresholds on height and color information are applied for classification.
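
    The final thresholding step can be sketched directly on the per-point attributes; the example below assumes per-point height, red and near-infrared arrays and uses illustrative NDVI and height cut-offs rather than the thresholds of the study.

        import numpy as np

        def classify_points(height, red, nir, h_thresh=2.0, ndvi_thresh=0.3):
            """Toy threshold classification: 0 = ground, 1 = low vegetation, 2 = high vegetation."""
            ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
            labels = np.zeros(height.shape, dtype=np.int8)
            veg = ndvi > ndvi_thresh
            labels[veg & (height <= h_thresh)] = 1
            labels[veg & (height > h_thresh)] = 2
            return labels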

  16. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser scanning (TLS) is becoming a common tool in Geosciences, with clear applications ranging from the generation of high resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, different critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied to point cloud data treatment, from alignment to monitoring. To this end, we built, in the MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single point data accuracy. In a second step, we simulated the scanning part of the system in order to analyze the shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and varying point of view on Iterative Closest Point (ICP) alignment, and also on a deformation tracking algorithm with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes in different environments (erosion, landslide monitoring, etc.) and then tested the use of filtering techniques using 3D moving windows in space and time, which considerably reduce data scattering thanks to data redundancy. In conclusion, the simulator allowed us to improve our different algorithms and to understand how instrumental error affects the final results. It also allowed us to improve the scan acquisition methodology, finding the best compromise between point density, positioning and acquisition time with the best possible accuracy to characterize topographic change.

  17. Feature-based three-dimensional registration for repetitive geometry in machine vision

    PubMed Central

    Gong, Yuanzheng; Seibel, Eric J.

    2016-01-01

    As an important step in three-dimensional (3D) machine vision, 3D registration is the process of aligning two or more 3D point clouds, collected from different perspectives, into a complete one. The most popular approach to registering point clouds is to minimize the difference between the point clouds iteratively with the Iterative Closest Point (ICP) algorithm. However, ICP does not work well for repetitive geometries. To solve this problem, a feature-based 3D registration algorithm is proposed to align point clouds that are generated by vision-based 3D reconstruction. By utilizing texture information of the object and the robustness of image features, 3D correspondences can be retrieved so that the 3D registration of two point clouds reduces to solving a rigid transformation. The comparison of our method and different ICP algorithms demonstrates that our proposed algorithm is more accurate, efficient and robust for repetitive geometry registration. Moreover, this method can also be used to address the high depth uncertainty caused by a small camera baseline in vision-based 3D reconstruction. PMID:28286703

  18. Classification of Aerial Photogrammetric 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Becker, C.; Häni, N.; Rosinskaya, E.; d'Angelo, E.; Strecha, C.

    2017-05-01

    We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labelling this kind of data is important for tasks such as environmental modelling, object classification and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on three real-world photogrammetry datasets that were generated with Pix4Dmapper Pro, and with varying point densities. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than 3 minutes on a desktop computer.

  19. Clustering, randomness and regularity in cloud fields. I - Theoretical considerations. II - Cumulus cloud fields

    NASA Technical Reports Server (NTRS)

    Weger, R. C.; Lee, J.; Zhu, Tianri; Welch, R. M.

    1992-01-01

    The current controversy regarding regularity vs. clustering in cloud fields is examined by means of analysis and simulation studies based upon nearest-neighbor cumulative distribution statistics. It is shown that the Poisson representation of random point processes is superior to pseudorandom-number-generated models and that pseudorandom-number-generated models bias the observed nearest-neighbor statistics towards regularity. The interpretation of these nearest-neighbor statistics is discussed for many cases of superpositions of clustering, randomness, and regularity. A detailed analysis is carried out of cumulus cloud field spatial distributions based upon Landsat, AVHRR, and Skylab data, showing that, when both large and small clouds are included in the cloud field distributions, the cloud field always has a strong clustering signal.

  20. a Fast and Flexible Method for Meta-Map Building for Icp Based Slam

    NASA Astrophysics Data System (ADS)

    Kurian, A.; Morin, K. W.

    2016-06-01

    Recent developments in LiDAR sensors make mobile mapping fast and cost effective. These sensors generate a large amount of data, which in turn improves the coverage and detail of the map. Due to the limited range of the sensor, one has to collect a series of scans to build the entire map of the environment. If we have good GNSS coverage, building a map is a well addressed problem. But in an indoor environment, we have limited GNSS reception and an inertial solution, if available, can quickly diverge. In such situations, simultaneous localization and mapping (SLAM) is used to generate a navigation solution and map concurrently. SLAM using point clouds poses a number of computational challenges even with modern hardware due to the sheer amount of data. In this paper, we propose two strategies for minimizing the cost of computation and storage when a 3D point cloud is used for navigation and real-time map building. We have used the 3D point cloud generated by the Leica Geosystems Pegasus Backpack, which is equipped with Velodyne VLP-16 LiDAR scanners. To improve the speed of the conventional iterative closest point (ICP) algorithm, we propose a point cloud sub-sampling strategy which does not throw away any key features and yet significantly reduces the number of points that need to be processed and stored. In order to speed up the correspondence finding step, a dual kd-tree and circular buffer architecture is proposed. We have shown that the proposed method can run in real time and has excellent navigation accuracy characteristics.
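
    As a generic illustration of point cloud reduction (not the feature-preserving strategy or the dual kd-tree buffer proposed in the paper), the snippet below performs plain voxel-grid sub-sampling, keeping one centroid per occupied voxel; the voxel size is arbitrary.

        import numpy as np

        def voxel_downsample(points, voxel_size=0.05):
            """Reduce a point cloud by keeping the centroid of each occupied voxel."""
            keys = np.floor(points / voxel_size).astype(np.int64)
            _, inverse, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
            centroids = np.zeros((counts.size, 3))
            np.add.at(centroids, inverse, points)          # sum the points falling in each voxel
            return centroids / counts[:, None]             # centroid per voxel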

  1. What's the Point of a Raster? Advantages of 3D Point Cloud Processing over Raster Based Methods for Accurate Geomorphic Analysis of High Resolution Topography.

    NASA Astrophysics Data System (ADS)

    Lague, D.

    2014-12-01

    High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy and spatial resolution, and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare point cloud based and raster based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates directly from point clouds) and the interaction of vegetation, hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.

  2. Automatic 3d Building Model Generations with Airborne LiDAR Data

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2017-11-01

    LiDAR systems are becoming more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems have been frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, three-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems, and it is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication and mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, an approach for automatic 3D building model generation is needed, in a simple and quick way, for many studies which include building modelling. In this study, automatic 3D building model generation from airborne LiDAR data is targeted. An approach is proposed for automatic 3D building model generation, including the automatic point based classification of the raw LiDAR point cloud. The proposed point based classification includes hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules have been performed to improve classification results using different test areas identified in the study area. The proposed approach has been tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point based classification. The results obtained on the study area verified that automatic 3D building models can be generated successfully using raw LiDAR point cloud data.

  3. Automatic registration of terrestrial point clouds based on panoramic reflectance images and efficient BaySAC

    NASA Astrophysics Data System (ADS)

    Kang, Zhizhong

    2013-10-01

    This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from the 3D point clouds, and then the SIFT algorithm is used to extract keypoints and identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probability of each data point using a simplified Bayes' rule, in order to improve computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and lower computation cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm can achieve high registration accuracy on all experimental datasets.

  4. A New Approach for Inspection of Selected Geometric Parameters of a Railway Track Using Image-Based Point Clouds

    PubMed Central

    Sawicki, Piotr

    2018-01-01

    The paper presents the results of testing a proposed image-based point cloud measuring method for the determination of geometric parameters of a railway track. The study was performed based on a configuration of digital images and a reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which have implemented different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of the 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with the results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition for measurements and inspection of rail tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011. PMID:29509679

  5. A New Approach for Inspection of Selected Geometric Parameters of a Railway Track Using Image-Based Point Clouds.

    PubMed

    Gabara, Grzegorz; Sawicki, Piotr

    2018-03-06

    The paper presents the results of testing a proposed image-based point cloud measuring method for the determination of geometric parameters of a railway track. The study was performed based on a configuration of digital images and a reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which have implemented different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of the 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with the results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition for measurements and inspection of rail tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011.

  6. a Hadoop-Based Algorithm of Generating dem Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information about terrain and surface objects within a short time, and from it a Digital Elevation Model of high quality can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms, so as to distinguish the terrain points from other points, followed by a procedure of interpolating the selected points to turn them into DEM data. The whole procedure takes a long time and huge computing resources because of the high density, which has been the focus of a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for DEM generation algorithms to improve efficiency. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner was utilized as the original data to generate a DEM by a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid, and that the non-Hadoop implementation can achieve high performance when memory is big enough, while the multi-node Hadoop implementation achieves a higher performance-cost ratio when the point set is of vast quantity.
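
    The Map/Reduce formulation of DEM gridding can be pictured in plain Python: the map step emits (cell, elevation) pairs and the reduce step aggregates the elevations of each cell. This is a conceptual sketch only, not the Hadoop implementation used in the study, and the cell size and averaging rule are arbitrary.

        import numpy as np
        from collections import defaultdict

        def map_points(points, cell_size):
            """Map step: emit ((row, col), z) for every terrain point."""
            for x, y, z in points:
                yield (int(y // cell_size), int(x // cell_size)), z

        def reduce_cells(pairs):
            """Reduce step: average the elevations collected in each grid cell."""
            cells = defaultdict(list)
            for key, z in pairs:
                cells[key].append(z)
            return {key: float(np.mean(zs)) for key, zs in cells.items()}

        # dem = reduce_cells(map_points(terrain_points, cell_size=1.0))   # hypothetical call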

  7. Multiview 3D sensing and analysis for high quality point cloud reconstruction

    NASA Astrophysics Data System (ADS)

    Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard

    2018-04-01

    Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial and the acquisition of high quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high fidelity 3D point cloud generation using low cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real-time exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
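
    Radius Outlier Removal (ROR), one of the filters mentioned above, simply discards points that have too few neighbours within a given radius; the snippet below is a minimal sketch with illustrative radius and neighbour-count values, not the parameters of the proposed system.

        import numpy as np
        from scipy.spatial import cKDTree

        def radius_outlier_removal(points, radius=0.05, min_neighbors=8):
            """Keep only points that have at least min_neighbors other points within 'radius'."""
            tree = cKDTree(points)
            neighbors = tree.query_ball_point(points, radius)
            counts = np.array([len(n) - 1 for n in neighbors])   # exclude the point itself
            return points[counts >= min_neighbors]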

  8. A Lidar Point Cloud Based Procedure for Vertical Canopy Structure Analysis And 3D Single Tree Modelling in Forest

    PubMed Central

    Wang, Yunsheng; Weinacker, Holger; Koch, Barbara

    2008-01-01

    A procedure for both vertical canopy structure analysis and 3D single tree modelling based on a Lidar point cloud is presented in this paper. The whole research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud, whose point heights represent the heights of ground objects above the terrain, is generated from the original Lidar raw point cloud. The main tree canopy layers and their height ranges are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For 3D modelling, individual trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space. A series of horizontal 2D projection images at the different height levels is then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal process through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analyses of the 3D models of individual tree crowns, important parameters such as crown height range, crown volume and crown contours at the different height levels can be derived. PMID:27879916
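
    The normalization step amounts to subtracting an interpolated ground surface from each point's elevation so that heights become heights above ground. A simplified sketch of this step is given below, assuming ground returns have already been classified; the interpolation choice and the fallback for points outside the hull are assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy.interpolate import griddata

def normalize_heights(points, ground_points):
    """Convert absolute Lidar elevations into heights above ground by
    subtracting a ground surface interpolated from classified ground
    returns. A simplified sketch; the paper's own procedure may differ."""
    linear = griddata(ground_points[:, :2], ground_points[:, 2],
                      points[:, :2], method="linear")
    nearest = griddata(ground_points[:, :2], ground_points[:, 2],
                       points[:, :2], method="nearest")
    ground_z = np.where(np.isnan(linear), nearest, linear)  # fill outside the hull
    normalized = points.copy()
    normalized[:, 2] = points[:, 2] - ground_z
    return normalized
```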

  9. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performances of the proposed method are evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)

  10. Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs

    NASA Astrophysics Data System (ADS)

    Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.

    2016-06-01

    Recent advances in automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single media cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated by using images acquired from a system camera mounted in an underwater housing and the popular GoPro cameras respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in the air and underwater and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.

  11. Comparison of computation time and image quality between full-parallax 4G-pixels CGHs calculated by the point cloud and polygon-based method

    NASA Astrophysics Data System (ADS)

    Nakatsuji, Noriaki; Matsushima, Kyoji

    2017-03-01

    Full-parallax high-definition CGHs composed of more than a billion pixels have so far been created only by the polygon-based method because of its high performance. However, GPUs have recently made it possible to generate CGHs much faster with the point-cloud method. In this paper, we measure the computation time of object fields for full-parallax high-definition CGHs, which are composed of 4 billion pixels and reconstruct the same scene, using the point-cloud method on a GPU and the polygon-based method on a CPU. In addition, we compare the optical and simulated reconstructions of the CGHs created by these techniques to verify the image quality.

  12. Automatic Building Abstraction from Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Ley, A.; Hänsch, R.; Hellwich, O.

    2017-09-01

    Multi-view stereo has been shown to be a viable tool for the creation of realistic 3D city models. Nevertheless, it still poses significant challenges, since it results in dense but noisy and incomplete point clouds when applied to aerial images. 3D city modelling usually requires a different representation of the 3D scene than these point clouds. This paper applies a fully automatic pipeline to generate a simplified mesh from a given dense point cloud. The mesh provides a certain level of abstraction as it consists only of relatively large planar and textured surfaces. Thus, it is possible to remove noise, outliers and clutter, while maintaining a high level of accuracy.

  13. Automated Detection and Closing of Holes in Aerial Point Clouds Using AN Uas

    NASA Astrophysics Data System (ADS)

    Fiolka, T.; Rouatbi, F.; Bender, D.

    2017-08-01

    3D terrain models are an important instrument in areas like geology, agriculture and reconnaissance. Using an automated UAS with a line-based LiDAR, terrain models can be created quickly and easily, even for large areas. But the resulting point cloud may contain holes and therefore be incomplete. This might happen due to occlusions, a missed flight route due to wind, or simply as a result of changes in the ground height which alter the swath of the LiDAR system. This paper proposes a method to detect holes in 3D point clouds generated during the flight and to adjust the course in order to close them. First, a grid-based search for holes in the horizontal ground plane is performed. Then a check for vertical holes, mainly created by building walls, is done. Due to occlusions and steep LiDAR angles, closing the vertical gaps may be difficult or even impossible. Therefore, the current approach deals with holes in the ground plane and only marks the vertical holes in such a way that the operator can decide on further actions regarding them. The aim is to efficiently create point clouds which can be used for the generation of complete 3D terrain models.
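
    The grid-based search can be sketched as rasterizing the collected points onto a horizontal grid and flagging cells that remain empty or nearly empty. The snippet below illustrates this idea with an assumed cell size and point-count threshold; it is not the authors' onboard implementation.

```python
import numpy as np

def find_ground_holes(points, cell=1.0, min_points=3):
    """Rasterize the point cloud onto a horizontal grid and report empty
    (or sparsely covered) cells as candidate holes. Cell size and the
    minimum-point threshold are illustrative assumptions."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    counts = np.zeros(shape, dtype=int)
    np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)
    hole_cells = np.argwhere(counts < min_points)
    # return hole centres in world coordinates, e.g. for re-planning the flight path
    return mins + (hole_cells + 0.5) * cell

cloud = np.random.rand(5000, 3) * [100, 100, 10]
print(len(find_ground_holes(cloud)), "candidate hole cells")
```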

  14. Evaluation of Methods for Coregistration and Fusion of Rpas-Based 3d Point Clouds and Thermal Infrared Images

    NASA Astrophysics Data System (ADS)

    Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.

    2016-06-01

    This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from an RPAS platform with a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, to within the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method implies a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.

  15. Mapping Urban Tree Canopy Cover Using Fused Airborne LIDAR and Satellite Imagery Data

    NASA Astrophysics Data System (ADS)

    Parmehr, Ebadat G.; Amati, Marco; Fraser, Clive S.

    2016-06-01

    Urban green spaces, particularly urban trees, play a key role in enhancing the liveability of cities. The availability of accurate and up-to-date maps of tree canopy cover is important for sustainable development of urban green spaces. LiDAR point clouds are widely used for the mapping of buildings and trees, and several LiDAR point cloud classification techniques have been proposed for automatic mapping. However, the effectiveness of point cloud classification techniques for automated tree extraction from LiDAR data can be impacted to the point of failure by the complexity of tree canopy shapes in urban areas. Multispectral imagery, which provides complementary information to LiDAR data, can improve point cloud classification quality. This paper proposes a reliable method for the extraction of tree canopy cover from fused LiDAR point cloud and multispectral satellite imagery data. The proposed method initially associates each LiDAR point with spectral information from the co-registered satellite imagery data. It calculates the normalised difference vegetation index (NDVI) value for each LiDAR point and corrects tree points which have been misclassified as buildings. Then, region growing of tree points, taking the NDVI value into account, is applied. Finally, the LiDAR points classified as tree points are utilised to generate a canopy cover map. The performance of the proposed tree canopy cover mapping method is experimentally evaluated on a data set of airborne LiDAR and WorldView 2 imagery covering a suburb in Melbourne, Australia.
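
    The core fusion step assigns each LiDAR point the spectral values of the co-registered pixel, computes NDVI, and relabels building points that are actually vegetation. A minimal sketch of that step follows; the class codes, band values and NDVI threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ndvi(red, nir):
    """Normalised difference vegetation index per LiDAR point."""
    return (nir - red) / np.maximum(nir + red, 1e-6)

def correct_building_points(labels, red, nir, ndvi_threshold=0.4):
    """Relabel points classified as 'building' (code 6 here) as 'tree'
    (code 5 here) when their NDVI indicates vegetation. The class codes,
    band scaling and the 0.4 threshold are illustrative assumptions."""
    v = ndvi(red.astype(float), nir.astype(float))
    corrected = labels.copy()
    corrected[(labels == 6) & (v > ndvi_threshold)] = 5
    return corrected
```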

  16. Classification of LIDAR Data for Generating a High-Precision Roadway Map

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Lee, I.

    2016-06-01

    The generation of highly precise maps is growing in importance with the development of autonomous driving vehicles. A highly precise map has centimetre-level precision, unlike an existing commercial map with metre-level precision. It is important to understand road environments and make decisions for autonomous driving, since robust localization is one of the critical challenges for an autonomous driving car. One important data source is a Lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a Lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose combining a feature descriptor with a machine-learning classification algorithm. Objects can be distinguished by geometric features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and the approach will be utilized to generate a highly precise road map.
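
    As a concrete illustration of the descriptor-plus-classifier combination, the sketch below trains a Support Vector Machine on a few per-point geometric features derived from surface normals. The feature set, class labels and SVM parameters are assumptions for illustration, not those used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Illustrative sketch only: per-point geometric features (e.g. normal z
# component, planarity, height above ground) and class labels are assumed
# to have been computed already; they are not the paper's exact choices.
X_train = np.random.rand(200, 3)           # [normal_z, planarity, height]
y_train = np.random.randint(0, 3, 200)     # e.g. 0=road, 1=curb, 2=other

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)

X_new = np.random.rand(5, 3)               # features of unlabelled points
print(clf.predict(X_new))
```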

  17. FUNCTION GENERATOR FOR ANALOGUE COMPUTERS

    DOEpatents

    Skramstad, H.K.; Wright, J.H.; Taback, L.

    1961-12-12

    An improved analogue computer is designed which can be used to determine the final ground position of radioactive fallout particles in an atomic cloud. The computer determines the fallout pattern on the basis of known wind velocity and direction at various altitudes, and intensity of radioactivity in the mushroom cloud as a function of particle size and initial height in the cloud. The output is then displayed on a cathode-ray tube so that the average or total luminance of the tube screen at any point represents the intensity of radioactive fallout at the geographical location represented by that point. (AEC)

  18. Mapping with Small UAS: A Point Cloud Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Toth, Charles; Jozkow, Grzegorz; Grejner-Brzezinska, Dorota

    2015-12-01

    Interest in using inexpensive Unmanned Aerial System (UAS) technology for topographic mapping has recently significantly increased. Small UAS platforms equipped with consumer grade cameras can easily acquire high-resolution aerial imagery allowing for dense point cloud generation, followed by surface model creation and orthophoto production. In contrast to conventional airborne mapping systems, UAS has limited ground coverage due to low flying height and limited flying time, yet it offers an attractive alternative to high performance airborne systems, as the cost of the sensors and platform, and the flight logistics, is relatively low. In addition, UAS is better suited for small area data acquisitions and to acquire data in difficult to access areas, such as urban canyons or densely built-up environments. The main question with respect to the use of UAS is whether the inexpensive consumer sensors installed in UAS platforms can provide the geospatial data quality comparable to that provided by conventional systems. This study aims at the performance evaluation of the current practice of UAS-based topographic mapping by reviewing the practical aspects of sensor configuration, georeferencing and point cloud generation, including comparisons between sensor types and processing tools. The main objective is to provide accuracy characterization and practical information for selecting and using UAS solutions in general mapping applications. The analysis is based on statistical evaluation as well as visual examination of experimental data acquired by a Bergen octocopter with three different image sensor configurations, including a GoPro HERO3+ Black Edition, a Nikon D800 DSLR and a Velodyne HDL-32. In addition, georeferencing data of varying quality were acquired and evaluated. The optical imagery was processed by using three commercial point cloud generation tools. Comparing point clouds created by active and passive sensors by using different quality sensors, and finally, by different commercial software tools, provides essential information for the performance validation of UAS technology.

  19. Sparsity-based fast CGH generation using layer-based approach for 3D point cloud model

    NASA Astrophysics Data System (ADS)

    Kim, Hak Gu; Jeong, Hyunwook; Ro, Yong Man

    2017-03-01

    Computer generated holograms (CGH) are becoming increasingly important for 3-D displays in various applications including virtual reality. In CGH, holographic fringe patterns are generated by numerically calculating them on computer simulation systems. However, a heavy computational cost is required to calculate the complex amplitude on the CGH plane for all points of 3D objects. This paper proposes a new fast CGH generation method based on the sparsity of the CGH for a 3D point cloud model. The aim of the proposed method is to significantly reduce computational complexity while maintaining the quality of the holographic fringe patterns. To that end, we present a new layer-based approach for calculating the complex amplitude distribution on the CGH plane by using a sparse FFT (sFFT). We observe that the CGH of a layer of a 3D object is sparse, so that the dominant CGH can be rapidly generated from a small set of signals by sFFT. Experimental results have shown that the proposed method is one order of magnitude faster than recently reported fast CGH generation methods.

  20. Cirrus cloud model parameterizations: Incorporating realistic ice particle generation

    NASA Technical Reports Server (NTRS)

    Sassen, Kenneth; Dodd, G. C.; Starr, David OC.

    1990-01-01

    Recent cirrus cloud modeling studies have involved the application of a time-dependent, two-dimensional Eulerian model, with generalized cloud microphysical parameterizations drawn from experimental findings. For computing the ice versus vapor phase changes, the ice mass content is linked to the maintenance of a relative humidity with respect to ice (RHI) of 105 percent; ice growth occurs both through the introduction of new particles and through the growth of existing particles. In a simplified cloud model designed to investigate the basic role of various physical processes in the growth and maintenance of cirrus clouds, these parametric relations are justifiable. In comparison, the one-dimensional cloud microphysical model recently applied to evaluating the nucleation and growth of ice crystals in cirrus clouds explicitly treated populations of haze and cloud droplets, and ice crystals. Although these two modeling approaches are clearly incompatible, the goal of the present numerical study is to develop a parametric treatment of new ice particle generation, on the basis of detailed microphysical model findings, for incorporation into improved cirrus growth models. One example is the relation between temperature and the relative humidity required to generate ice crystals from ammonium sulfate haze droplets, whose probability of freezing through the homogeneous nucleation mode is a combined function of time and droplet molality, volume, and temperature. As an illustration of this approach, the results of cloud microphysical simulations are presented showing the rather narrow domain in the temperature/humidity field where new ice crystals can be generated. The microphysical simulations point out the need for detailed CCN studies at cirrus altitudes and haze droplet measurements within cirrus clouds, but also suggest that a relatively simple treatment of ice particle generation, which includes cloud chemistry, can be incorporated into cirrus cloud growth models.

  1. Achievable Rate Estimation of IEEE 802.11ad Visual Big-Data Uplink Access in Cloud-Enabled Surveillance Applications.

    PubMed

    Kim, Joongheon; Kim, Jong-Kook

    2016-01-01

    This paper addresses the computation procedures for estimating the impact of interference in 60 GHz IEEE 802.11ad uplink access in order to construct visual big-data database from randomly deployed surveillance camera sensing devices. The acquired large-scale massive visual information from surveillance camera devices will be used for organizing big-data database, i.e., this estimation is essential for constructing centralized cloud-enabled surveillance database. This performance estimation study captures interference impacts on the target cloud access points from multiple interference components generated by the 60 GHz wireless transmissions from nearby surveillance camera devices to their associated cloud access points. With this uplink interference scenario, the interference impacts on the main wireless transmission from a target surveillance camera device to its associated target cloud access point with a number of settings are measured and estimated under the consideration of 60 GHz radiation characteristics and antenna radiation pattern models.

  2. First Prismatic Building Model Reconstruction from Tomosar Point Clouds

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Shahzad, M.; Zhu, X.

    2016-06-01

    This paper demonstrates for the first time the potential of explicitly modelling the individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting off the ground terrain. The DSM is smoothed using the BM3D denoising method proposed in (Dabov et al., 2007) and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height- and polygon-complexity-constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. The coarse outline of each roof segment is then reconstructed and later refined using quadtree-based regularization plus a zig-zag line simplification scheme. Finally, a height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images using the Tomo-GENESIS software developed at DLR.

  3. Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan

    2015-03-01

    Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, as obtained from low-cost global positioning system and inertial measurement unit sensors.

  4. Comparison of the filtering models for airborne LiDAR data by three classifiers with exploration on model transfer

    NASA Astrophysics Data System (ADS)

    Ma, Hongchao; Cai, Zhan; Zhang, Liang

    2018-01-01

    This paper discusses airborne light detection and ranging (LiDAR) point cloud filtering (a binary classification problem) from the machine learning point of view. We compared three supervised classifiers for point cloud filtering, namely Adaptive Boosting, support vector machine, and random forest (RF). Nineteen features were generated from the raw LiDAR point cloud based on height and other geometric information within a given neighborhood. The test datasets issued by the International Society for Photogrammetry and Remote Sensing (ISPRS) were used to evaluate the performance of the three filtering algorithms; RF showed the best results with an average total error of 5.50%. The paper also makes a tentative exploration of the application of transfer learning theory to point cloud filtering, which has not been introduced into the LiDAR field to the authors' knowledge. We performed filtering of three datasets from real projects carried out in China with RF models constructed by learning from the 15 ISPRS datasets and then transferred with little to no change of the parameters. Reliable results were achieved, especially in rural areas (overall accuracy reached 95.64%), indicating the feasibility of model transfer in the context of point cloud filtering for both easy automation and acceptable accuracy.
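
    The model-transfer experiment boils down to training a random forest on features from the benchmark tiles and applying the unchanged model to new project data. The sketch below illustrates that workflow with placeholder feature arrays; the 19 features themselves and the forest parameters are assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch of RF-based ground filtering with model transfer: train on feature
# vectors derived from the ISPRS benchmark tiles, then apply the unchanged
# model to an unseen project area. Values here are placeholders.
X_isprs = np.random.rand(1000, 19)          # 19 per-point features (assumed precomputed)
y_isprs = np.random.randint(0, 2, 1000)     # 1 = ground, 0 = non-ground

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_isprs, y_isprs)

X_new_project = np.random.rand(300, 19)     # features from a new dataset
ground_mask = rf.predict(X_new_project) == 1
print("ground points:", int(ground_mask.sum()))
```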

  5. Person detection and tracking with a 360° lidar system

    NASA Astrophysics Data System (ADS)

    Hammer, Marcus; Hebel, Marcus; Arens, Michael

    2017-10-01

    Today it is easily possible to generate dense point clouds of the sensor environment using 360° LiDAR (Light Detection and Ranging) sensors, which have been available for a number of years. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects like persons or vehicles are of particular interest, for instance in automatic collision avoidance, for mobile sensor platforms or for surveillance tasks. In the literature there are several approaches for automated person detection in point clouds. While most techniques show acceptable results in object detection, the computation time is often crucial. The runtime can be problematic, especially due to the amount of data in the panoramic 360° point clouds. On the other hand, for most applications object detection and classification in real time is needed. The paper presents a proposal for a fast, real-time capable algorithm for person detection, classification and tracking in panoramic point clouds.

  6. Linking Advanced Visualization and MATLAB for the Analysis of 3D Gene Expression Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruebel, Oliver; Keranen, Soile V.E.; Biggin, Mark

    Three-dimensional gene expression PointCloud data generated by the Berkeley Drosophila Transcription Network Project (BDTNP) provides quantitative information about the spatial and temporal expression of genes in early Drosophila embryos at cellular resolution. The BDTNP team visualizes and analyzes PointCloud data using the software application PointCloudXplore (PCX). To maximize the impact of novel, complex data sets, such as PointClouds, the data needs to be accessible to biologists and comprehensible to developers of analysis functions. We address this challenge by linking PCX and Matlab via a dedicated interface, thereby providing biologists seamless access to advanced data analysis functions and giving bioinformatics researchers the opportunity to integrate their analysis directly into the visualization application. To demonstrate the usefulness of this approach, we computationally model parts of the expression pattern of the gene even skipped using a genetic algorithm implemented in Matlab and integrated into PCX via our Matlab interface.

  7. Building Facade Modeling Under Line Feature Constraint Based on Close-Range Images

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Sheng, Y. H.

    2018-04-01

    To solve existing problems in modeling building facades merely with point features based on close-range images, a new method for modeling building facades under a line feature constraint is proposed in this paper. Firstly, camera parameters and a sparse spatial point cloud were restored using SfM, and dense 3D point clouds were generated with MVS. Secondly, line features were detected based on the gradient direction; the detected line features were fitted considering directions and lengths, then matched under multiple types of constraints and extracted from the multi-image sequence. Finally, the facade mesh of a building was triangulated from the point cloud and line features. The experiment shows that this method can effectively reconstruct the geometric facade of buildings by combining the point and line features of the close-range image sequence, and is especially effective in restoring the contour information of building facades.

  8. Comparision of photogrammetric point clouds with BIM building elements for construction progress monitoring

    NASA Astrophysics Data System (ADS)

    Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U.

    2014-08-01

    For construction progress monitoring, a planned state of the construction at a certain time (as-planned) has to be compared to the actual state (as-built). The as-planned state is derived from a building information model (BIM), which contains the geometry of the building and the construction schedule. In this paper we introduce an approach for the generation of an as-built point cloud by photogrammetry. Since images on a construction site cannot be taken from every position where they would seem necessary, we use a combination of a structure-from-motion process and control points to create a scaled point cloud in a consistent coordinate system. Subsequently this point cloud is used for an as-built versus as-planned comparison. For that purpose, voxels of an octree are marked as occupied, free or unknown by raycasting based on the triangulated points and the camera positions. This allows the identification of building parts that do not yet exist. To verify the existence of building parts, a second test based on the points in front of and behind the as-planned model planes is performed. The proposed procedure is tested on an inner-city construction site under real conditions.

  9. Comparison of DSMs acquired by terrestrial laser scanning, UAV-based aerial images and ground-based optical images at the Super-Sauze landslide

    NASA Astrophysics Data System (ADS)

    Rothmund, Sabrina; Niethammer, Uwe; Walter, Marco; Joswig, Manfred

    2013-04-01

    In recent years, the high-resolution and multi-temporal 3D mapping of the Earth's surface using terrestrial laser scanning (TLS), ground-based optical images and especially low-cost UAV-based aerial images (Unmanned Aerial Vehicle) has grown in importance. This development resulted from the progressive technical improvement of the imaging systems and the freely available multi-view stereo (MVS) software packages. These different methods of data acquisition for the generation of accurate, high-resolution digital surface models (DSMs) were applied as part of an eight-week field campaign at the Super-Sauze landslide (South French Alps). An area of approximately 10,000 m² with long-term average displacement rates greater than 0.01 m/day has been investigated. The TLS-based point clouds were acquired at different viewpoints with an average point spacing between 10 and 40 mm and at different dates. On these days, more than 50 optical images were taken at points along a predefined line on the side part of the landslide with a low-cost digital compact camera. Additionally, aerial images were taken by a radio-controlled mini quad-rotor UAV equipped with another low-cost digital compact camera. The flight altitude ranged between 20 m and 250 m and produced a corresponding ground resolution between 0.6 cm and 7 cm. DGPS measurements were carried out as well in order to geo-reference and validate the point cloud data. To generate unscaled photogrammetric 3D point clouds from a disordered and tilted image set, we use the widespread open-source software packages Bundler and PMVS2 (University of Washington). These multi-temporal DSMs are required on the one hand to determine the three-dimensional surface deformations and on the other hand for differential correction in orthophoto production. Drawing on the example of the data acquired at the Super-Sauze landslide, we demonstrate the potential but also the limitations of the photogrammetric point clouds. To determine the quality of the photogrammetric point clouds, they are compared with the TLS-based DSMs. The comparison shows that photogrammetric point accuracies are in the range of centimetres to decimetres and therefore do not reach the quality of the high-resolution TLS-based DSMs. Further, the validation of the photogrammetric point clouds reveals that some of them have internal curvature effects. The advantage of photogrammetric 3D data acquisition is the use of low-cost equipment and less time-consuming data collection in the field. While the accuracy of the photogrammetric point clouds is not as high as that of TLS-based DSMs, the advantages of the former method are seen when applied in areas where decimetre accuracy is sufficient.

  10. Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs

    NASA Technical Reports Server (NTRS)

    Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen

    2015-01-01

    An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
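
    The correspondence step pairs keypoints between two pseudo-images and then prunes the matches with RANSAC. The sketch below uses ORB features and a homography model as a stand-in for the feature pipeline described above; the detector choice, matcher and thresholds are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def match_pseudo_images(img_a, img_b, min_matches=10):
    """Detect and match keypoints between two sonar 'pseudo-images' and
    reject outliers with RANSAC. ORB plus a homography model is a stand-in
    for the pipeline in the paper; thresholds are illustrative."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if len(matches) < min_matches:
        return None
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    n_inliers = int(inlier_mask.sum()) if inlier_mask is not None else 0
    return H, n_inliers  # accept the correspondence event if n_inliers is high enough
```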

  11. Low cost digital photogrammetry: From the extraction of point clouds by SFM technique to 3D mathematical modeling

    NASA Astrophysics Data System (ADS)

    Michele, Mangiameli; Giuseppe, Mussumeci; Salvatore, Zito

    2017-07-01

    Structure From Motion (SFM) is a technique applied to a series of photographs of an object that returns a 3D reconstruction made up of points in space (point clouds). This research aims at comparing the results of the SFM approach with the results of 3D laser scanning in terms of density and accuracy of the model. The experiment was conducted by surveying several architectural elements (walls and portals of historical buildings) both with a 3D laser scanner of the latest generation and with an amateur photographic camera. The point clouds acquired by the laser scanner and those acquired by the photo camera have been systematically compared. In particular, we present the experience carried out on the "Don Diego Pappalardo Palace" site in Pedara (Catania, Sicily).

  12. Combining structure-from-motion derived point clouds from satellites and unmanned aircraft systems images with ground-truth data to create high-resolution digital elevation models

    NASA Astrophysics Data System (ADS)

    Palaseanu, M.; Thatcher, C.; Danielson, J.; Gesch, D. B.; Poppenga, S.; Kottermair, M.; Jalandoni, A.; Carlson, E.

    2016-12-01

    Coastal topographic and bathymetric (topobathymetric) data with high spatial resolution (1-meter or better) and high vertical accuracy are needed to assess the vulnerability of Pacific Islands to climate change impacts, including sea level rise. According to the Intergovernmental Panel on Climate Change reports, low-lying atolls in the Pacific Ocean are extremely vulnerable to king tide events, storm surge, tsunamis, and sea-level rise. The lack of coastal topobathymetric data has been identified as a critical data gap for climate vulnerability and adaptation efforts in the Republic of the Marshall Islands (RMI). For Majuro Atoll, home to the largest city of RMI, the only elevation dataset currently available is the Shuttle Radar Topography Mission data which has a 30-meter spatial resolution and 16-meter vertical accuracy (expressed as linear error at 90%). To generate high-resolution digital elevation models (DEMs) in the RMI, elevation information and photographic imagery have been collected from field surveys using GNSS/total station and unmanned aerial vehicles for Structure-from-Motion (SfM) point cloud generation. Digital Globe WorldView II imagery was processed to create SfM point clouds to fill in gaps in the point cloud derived from the higher resolution UAS photos. The combined point cloud data is filtered and classified to bare-earth and georeferenced using the GNSS data acquired on roads and along survey transects perpendicular to the coast. A total station was used to collect elevation data under tree canopies where heavy vegetation cover blocked the view of GNSS satellites. A subset of the GPS / total station data was set aside for error assessment of the resulting DEM.

  13. Generating DEM from LIDAR data - comparison of available software tools

    NASA Astrophysics Data System (ADS)

    Korzeniowska, K.; Lacka, M.

    2011-12-01

    In recent years many software tools and applications have appeared that offer procedures, scripts and algorithms to process and visualize ALS data. This variety of software tools and of "point cloud" processing methods contributed to the aim of this study: to assess the algorithms available in various software tools that are used to classify LIDAR "point cloud" data, through a careful examination of Digital Elevation Models (DEMs) generated from LIDAR data on the basis of these algorithms. The work focused on the most important available software tools, both commercial and open source. Two sites in a mountain area were selected for the study. The area of each site is 0.645 sq km. DEMs generated with the analysed software tools were compared with a reference dataset, generated using manual methods to eliminate non-ground points. Surfaces were analysed using raster analysis. Minimum, maximum and mean differences between the reference DEM and the DEMs generated with the analysed software tools were calculated, together with the Root Mean Square Error. Differences between DEMs were also examined visually using transects along the grid axes in the test sites.
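
    The raster comparison reduces to differencing two aligned elevation grids and summarizing the residuals. A minimal NumPy sketch of that summary is given below, assuming both DEMs have already been resampled to the same grid with NaN as the no-data value; function and variable names are hypothetical.

```python
import numpy as np

def dem_difference_stats(dem_test, dem_reference):
    """Minimum, maximum and mean difference plus RMSE between a generated DEM
    and the manually filtered reference DEM (both aligned 2-D arrays with NaN
    for no-data). A plain NumPy sketch of the raster comparison."""
    diff = dem_test - dem_reference
    valid = diff[~np.isnan(diff)]
    return {
        "min":  float(valid.min()),
        "max":  float(valid.max()),
        "mean": float(valid.mean()),
        "rmse": float(np.sqrt(np.mean(valid ** 2))),
    }
```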

  14. Comparison of 3D point clouds obtained by photogrammetric UAVs and TLS to determine the attitude of dolerite outcrops discontinuities.

    NASA Astrophysics Data System (ADS)

    Duarte, João; Gonçalves, Gil; Duarte, Diogo; Figueiredo, Fernando; Mira, Maria

    2015-04-01

    Photogrammetric Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) are two emerging technologies that allow the production of dense 3D point clouds of the sensed topographic surfaces. Although image-based stereo-photogrammetric point clouds cannot, in general, compete with TLS point clouds in geometric quality, fully automated mapping solutions based on ultra-light UAVs (or drones) have recently become commercially available at very reasonable accuracy and cost for engineering and geological applications. The purpose of this paper is to compare the two point clouds generated by these two technologies, in order to automate the manual tasks commonly used to detect and represent the attitude of discontinuities (stereographic projection: Schmidt net, equal area). To avoid the difficulties of access and to guarantee safe data survey conditions, this fundamental step in all geological/geotechnical studies applied to the extractive industry and engineering works has to be replaced by a more expeditious and reliable methodology. Such a methodology will allow the evaluation of rock masses to be addressed in a clearer way, by mapping the structures present, which will considerably reduce the associated risks (investment, structure dimensioning, security, etc.). A case study of a dolerite outcrop located in the centre of Portugal (the dolerite outcrop is situated in the volcanic complex of Serra de Todo-o-Mundo, Casais Gaiola, intruded in Jurassic sandstones) will be used to assess this methodology. The results obtained show that the 3D point cloud produced by the photogrammetric UAV platform has the appropriate geometric quality for extracting the parameters that define the discontinuities of the dolerite outcrops. Although these are comparable to the manually extracted parameters, their quality is inferior to that of the parameters extracted from the TLS point cloud.

  15. Effect of controlled offset of focal position in cavitation-enhanced high-intensity focused ultrasound treatment

    NASA Astrophysics Data System (ADS)

    Goto, Kota; Takagi, Ryo; Miyashita, Takuya; Jimbo, Hayato; Yoshizawa, Shin; Umemura, Shin-ichiro

    2015-07-01

    High-intensity focused ultrasound (HIFU) is a noninvasive treatment for tumors such as cancer. In this method, ultrasound is generated outside the body and focused to the target tissue. Therefore, physical and mental stresses on the patient are minimal. A drawback of the HIFU treatment is a long treatment time for a large tumor due to the small therapeutic volume by a single exposure. Enhancing the heating effect of ultrasound by cavitation bubbles may solve this problem. However, this is rather difficult because cavitation clouds tend to be formed backward from the focal point while ultrasonic intensity for heating is centered at the focal point. In this study, the focal points of the trigger pulses to generate cavitation were offset forward from those of the heating ultrasound to match the cavitation clouds with the heating patterns. Results suggest that the controlled offset of focal points makes the thermal coagulation more predictable.

  16. Method for cold stable biojet fuel

    DOEpatents

    Seames, Wayne S.; Aulich, Ted

    2015-12-08

    Plant or animal oils are processed to produce a fuel that operates at very cold temperatures and is suitable as an aviation turbine fuel, a diesel fuel, a fuel blendstock, or any fuel having a low cloud point, pour point or freeze point. The process is based on the cracking of plant or animal oils or their associated esters, known as biodiesel, to generate lighter chemical compounds that have substantially lower cloud, pour, and/or freeze points than the original oil or biodiesel. Cracked oil is processed using separation steps together with analysis to collect fractions with desired low temperature properties by removing undesirable compounds that do not possess the desired temperature properties.

  17. 3D Modeling of Components of a Garden by Using Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Kumazakia, R.; Kunii, Y.

    2016-06-01

    Laser measurement is currently applied to several tasks such as plumbing management, road investigation through mobile mapping systems, and elevation model utilization through airborne LiDAR. Effective laser measurement methods have been well-documented in civil engineering, but few attempts have been made to establish equally effective methods in landscape engineering. By using point cloud data acquired through laser measurement, the aesthetic landscaping of Japanese gardens can be enhanced. This study focuses on simple landscape simulations for pruning and rearranging trees as well as rearranging rocks, lanterns, and other garden features by using point cloud data. However, such simulations lack concreteness. Therefore, this study considers the construction of a library of garden features extracted from point cloud data. The library would serve as a resource for creating new gardens and simulating gardens prior to conducting repairs. Extracted garden features are imported as 3ds Max objects, and realistic 3D models are generated by using a material editor system. As further work toward the publication of a 3D model library, file formats for tree crowns and trunks should be adjusted. Moreover, reducing the size of created models is necessary. Models created using point cloud data are informative because simply shaped garden features such as trees are often seen in the 3D industry.

  18. 2.5D multi-view gait recognition based on point cloud registration.

    PubMed

    Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan

    2014-03-28

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principle component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.

  19. Lidars for smoke and dust cloud diagnostics

    NASA Astrophysics Data System (ADS)

    Fujimura, S. F.; Warren, R. E.; Lutomirski, R. F.

    1980-11-01

    An algorithm that integrates a time-resolved lidar signature for use in estimating transmittance, extinction coefficient, mass concentration, and CL values generated under battlefield conditions is applied to lidar signatures measured during the DIRT-I tests. Estimates are given for the dependence of the inferred transmittance and extinction coefficient on uncertainties in parameters such as the obscurant backscatter-to-extinction ratio. The enhanced reliability in estimating transmittance through use of a target behind the obscurant cloud is discussed. It is found that the inversion algorithm can produce reliable estimates of smoke or dust transmittance and extinction from all points within the cloud for which a resolvable signal can be detected, and that a single point calibration measurement can convert the extinction values to mass concentration for each resolvable signal point.

  20. Automatic Generation of Building Models with Levels of Detail 1-3

    NASA Astrophysics Data System (ADS)

    Nguatem, W.; Drauschke, M.; Mayer, H.

    2016-06-01

    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start with orienting unsorted image sets employing (Mayer et al., 2012), we compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  1. a Method for the Registration of Hemispherical Photographs and Tls Intensity Images

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Schilling, A.; Maas, H.-G.

    2012-07-01

    Terrestrial laser scanners generate dense and accurate 3D point clouds with minimal effort, which represent the geometry of real objects, while image data contains texture information of object surfaces. Based on the complementary characteristics of both data sets, a combination is very appealing for many applications, including forest-related tasks. In the scope of our research project, independent data sets of a plain birch stand have been taken by a full-spherical laser scanner and a hemispherical digital camera. Previously, both kinds of data sets have been considered separately: Individual trees were successfully extracted from large 3D point clouds, and so-called forest inventory parameters could be determined. Additionally, a simplified tree topology representation was retrieved. From hemispherical images, leaf area index (LAI) values, as a very relevant parameter for describing a stand, have been computed. The objective of our approach is to merge a 3D point cloud with image data in a way that RGB values are assigned to each 3D point. So far, segmentation and classification of TLS point clouds in forestry applications was mainly based on geometrical aspects of the data set. However, a 3D point cloud with colour information provides valuable cues exceeding simple statistical evaluation of geometrical object features and thus may facilitate the analysis of the scan data significantly.

  2. Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data

    NASA Astrophysics Data System (ADS)

    Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.

    2018-04-01

    With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. Firstly, the road surface is segmented through edge detection on the scan lines. Then an intensity image is generated by inverse distance weighted (IDW) interpolation and the road markings are extracted using adaptive threshold segmentation based on an integral image, without intensity calibration. Moreover, the noise is reduced by removing small patches of pixels from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms including template matching and feature attribute filtering is used for the classification of linear markings, arrow markings and guidelines. Through processing the point cloud data collected by a RIEGL VUX-1 in the case area, the results show that the F-score of marking extraction is 0.83, and the average classification rate is 0.9.
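
    The integral-image thresholding step compares each pixel of the interpolated intensity image with the mean of its local window, which a summed-area table makes cheap to compute. Below is a compact sketch of that idea; the window size and offset are illustrative assumptions rather than the parameters reported above.

```python
import numpy as np

def adaptive_threshold(intensity_img, window=31, offset=0.15):
    """Integral-image (summed-area table) mean thresholding of the interpolated
    intensity image: a pixel is marked as road marking when it exceeds the
    local mean by `offset`. Window size and offset are illustrative."""
    img = intensity_img.astype(float)
    integral = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape
    r = window // 2
    ys, xs = np.mgrid[0:h, 0:w]
    y0, y1 = np.clip(ys - r, 0, h), np.clip(ys + r + 1, 0, h)
    x0, x1 = np.clip(xs - r, 0, w), np.clip(xs + r + 1, 0, w)
    area = (y1 - y0) * (x1 - x0)
    local_sum = integral[y1, x1] - integral[y0, x1] - integral[y1, x0] + integral[y0, x0]
    local_mean = local_sum / area
    return img > local_mean + offset   # binary marking mask
```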

  3. A Voxel-Based Approach for Imaging Voids in Three-Dimensional Point Clouds

    NASA Astrophysics Data System (ADS)

    Salvaggio, Katie N.

    Geographically accurate scene models have enormous potential beyond that of just simple visualizations in regard to automated scene generation. In recent years, thanks to ever increasing computational efficiencies, there has been significant growth in both the computer vision and photogrammetry communities pertaining to automatic scene reconstruction from multiple-view imagery. The result of these algorithms is a three-dimensional (3D) point cloud which can be used to derive a final model using surface reconstruction techniques. However, the fidelity of these point clouds has not been well studied, and voids often exist within the point cloud. Voids exist in texturally difficult areas, as well as areas where multiple views were not obtained during collection, constant occlusion existed due to collection angles or overlapping scene geometry, or in regions that failed to triangulate accurately. It may be possible to fill in small voids in the scene using surface reconstruction or hole-filling techniques, but this is not the case with larger more complex voids, and attempting to reconstruct them using only the knowledge of the incomplete point cloud is neither accurate nor aesthetically pleasing. A method is presented for identifying voids in point clouds by using a voxel-based approach to partition the 3D space. By using collection geometry and information derived from the point cloud, it is possible to detect unsampled voxels such that voids can be identified. This analysis takes into account the location of the camera and the 3D points themselves to capitalize on the idea of free space, such that voxels that lie on the ray between the camera and point are devoid of obstruction, as a clear line of sight is a necessary requirement for reconstruction. Using this approach, voxels are classified into three categories: occupied (contains points from the point cloud), free (rays from the camera to the point passed through the voxel), and unsampled (does not contain points and no rays passed through the area). Voids in the voxel space are manifested as unsampled voxels. A similar line-of-sight analysis can then be used to pinpoint locations at aircraft altitude at which the voids in the point clouds could theoretically be imaged. This work is based on the assumption that inclusion of more images of the void areas in the 3D reconstruction process will reduce the number of voids in the point cloud that were a result of lack of coverage. Voids resulting from texturally difficult areas will not benefit from more imagery in the reconstruction process, and thus are identified and removed prior to the determination of future potential imaging locations.
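
    The occupied/free/unsampled labelling can be approximated by stepping along each camera-to-point ray and marking the voxels it crosses. The sketch below uses a simple sampling-based traversal rather than an exact voxel walk; grid extents, voxel size and label codes are assumptions for illustration, not the work's actual implementation.

```python
import numpy as np

def classify_voxels(points, cameras, voxel=1.0, grid_shape=(100, 100, 50), origin=(0, 0, 0)):
    """Label voxels as occupied (2), free (1) or unsampled (0) by stepping
    along each camera-to-point ray. A simplified sampling-based traversal;
    grid extents and voxel size are assumptions."""
    labels = np.zeros(grid_shape, dtype=np.uint8)   # 0 = unsampled
    origin = np.asarray(origin, dtype=float)

    def to_index(p):
        idx = np.floor((p - origin) / voxel).astype(int)
        return tuple(idx) if np.all((idx >= 0) & (idx < grid_shape)) else None

    for pt, cam in zip(points, cameras):
        n_steps = max(2, int(np.linalg.norm(pt - cam) / (0.5 * voxel)))
        for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
            idx = to_index(cam + t * (pt - cam))
            if idx is not None and labels[idx] != 2:
                labels[idx] = 1                      # a ray passed through: free
        idx = to_index(pt)
        if idx is not None:
            labels[idx] = 2                          # contains a reconstructed point
    return labels                                    # remaining zeros are void candidates
```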

  4. Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area

    NASA Astrophysics Data System (ADS)

    Min, Li; Xin, Yang; Liyang, Xiong

    2016-06-01

    The shoulder line is a significant terrain line in the hilly area of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains are different, there is an imperative need for shoulder line extraction. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most of the noisy points. (ii) Based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method. (iii) The common boundary between the two slope classes is extracted as the shoulder line candidate. (iv) The filter grid size is adjusted and steps i-iii are repeated until the shoulder line candidate matches its real location. (v) The shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County of Shaanxi Province, China. A total of 600 million points were acquired in the test area of 0.23 km2, using a Riegl VZ400 3D laser scanner in August 2014. Due to limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for optimizing the filter grid size. The experimental results show that the optimal filter grid size varies in different sample areas, and that a power function relation exists between filter grid size and point density. The optimal grid size was determined by the above relation and the shoulder lines of the 60 blocks were then extracted. Compared with manual interpretation results, the accuracy of the whole result reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.

  5. 2.5D Multi-View Gait Recognition Based on Point Cloud Registration

    PubMed Central

    Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan

    2014-01-01

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principle component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM. PMID:24686727

  6. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, supporting applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in the database are generally encoded compactly using a shape descriptor. However, most geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in the database. The main goal of the data encoding is that the models in the database and the input point clouds are consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges, and planes of the building. Finally, descriptors are derived from spatial histograms and used in the 3D model retrieval system. For data retrieval, models are retrieved by matching the encoding coefficients of point clouds and building models. In the experiments, a database of about 900,000 3D models collected from the Internet is used for the evaluation of data retrieval. The results of the proposed method show a clear superiority over related methods.
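
    A minimal sketch of the top-view encoding idea: rasterise roof points into a depth image by keeping the highest point per pixel, then derive a simple height histogram as a descriptor. The edge and plane features and spatial histograms described above are not reproduced; names and parameters are illustrative.

```python
import numpy as np

def topview_depth_image(points, resolution, size):
    """Rasterise a (roof) point cloud into a top-view depth image by
    keeping the highest Z value falling into each pixel."""
    xy = points[:, :2] - points[:, :2].min(axis=0)
    cols = np.minimum((xy[:, 0] / resolution).astype(int), size - 1)
    rows = np.minimum((xy[:, 1] / resolution).astype(int), size - 1)
    depth = np.full((size, size), np.nan)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if np.isnan(depth[r, c]) or z > depth[r, c]:
            depth[r, c] = z
    return depth

def height_histogram_descriptor(depth, bins=16):
    """A simple height histogram as a stand-in descriptor; normalised so
    that descriptors from clouds and models are comparable."""
    valid = depth[~np.isnan(depth)]
    hist, _ = np.histogram(valid, bins=bins)
    return hist / max(hist.sum(), 1)
```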

  7. Accuracy Assessment of a Canal-Tunnel 3d Model by Comparing Photogrammetry and Laserscanning Recording Techniques

    NASA Astrophysics Data System (ADS)

    Charbonnier, P.; Chavant, P.; Foucher, P.; Muzet, V.; Prybyla, D.; Perrin, T.; Grussenmeyer, P.; Guillemin, S.

    2013-07-01

    With recent developments in technology and computer science, conventional surveying methods are being supplanted by laser scanning and digital photogrammetry. These two different surveying techniques generate 3D models of real-world objects or structures. In this paper, we consider the application of terrestrial laser scanning (TLS) and photogrammetry to the surveying of canal tunnels. The inspection of such structures requires time, safe access, specific processing and professional operators. Therefore, a French partnership proposes to develop dedicated equipment based on image processing for the visual inspection of canal tunnels. A 3D model of the vault and side walls of the tunnel is constructed from images recorded onboard a boat moving inside the tunnel. To assess the accuracy of this photogrammetric model (PM), a reference model is built using static TLS. We here address the problem of comparing the resulting point clouds. Difficulties arise because of the highly differentiated acquisition processes, which result in very different point densities. We propose a new tool designed to compare differences between pairs of point clouds or surfaces (triangulated meshes). Moreover, dealing with huge datasets requires the implementation of appropriate structures and algorithms. Several techniques are presented: point-to-point, cloud-to-cloud and cloud-to-mesh. In addition, farthest point resampling, an octree structure and the Hausdorff distance are adopted and described. Experimental results are shown for a 475 m long canal tunnel located in France.
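
    For the point-to-point comparison mentioned above, a compact sketch using a k-d tree is shown below (assuming SciPy is available); the octree structure, farthest point resampling and cloud-to-mesh distances are not covered, and the file names in the usage comment are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(source, reference):
    """Nearest-neighbour (point-to-point) distance from every source
    point to the reference cloud."""
    tree = cKDTree(reference)
    d, _ = tree.query(source, k=1)
    return d

def hausdorff(source, reference):
    """Symmetric Hausdorff distance between two clouds."""
    return max(cloud_to_cloud_distances(source, reference).max(),
               cloud_to_cloud_distances(reference, source).max())

# Example: compare a photogrammetric cloud against a TLS reference.
# pm_cloud, tls_cloud = np.load("pm.npy"), np.load("tls.npy")
# print(cloud_to_cloud_distances(pm_cloud, tls_cloud).mean())
# print(hausdorff(pm_cloud, tls_cloud))
```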

  8. Analysis of Uncertainty in a Middle-Cost Device for 3D Measurements in BIM Perspective

    PubMed Central

    Sánchez, Alonso; Naranjo, José-Manuel; Jiménez, Antonio; González, Alfonso

    2016-01-01

    Medium-cost devices equipped with sensors are being developed to obtain 3D measurements. Some allow for generating geometric models and point clouds. Nevertheless, the accuracy of these measurements should be evaluated, taking into account the requirements of the Building Information Model (BIM). This paper analyzes the uncertainty in outdoor/indoor three-dimensional coordinate measurements and point clouds (using Spherical Accuracy Standard (SAS) methods) for Eyes Map, a medium-cost tablet manufactured by e-Capture Research & Development Company, Mérida, Spain. To achieve this, in outdoor tests, the coordinates of targets were measured with this device at distances from 1 to 6 m and point clouds were obtained. Subsequently, these were compared to the coordinates of the same targets measured by a Total Station. The Euclidean average distance error was 0.005–0.027 m for measurements by Photogrammetry and 0.013–0.021 m for the point clouds. All of them satisfy the tolerance for point cloud acquisition (0.051 m) according to the BIM Guide for 3D Imaging (General Services Administration); similar results are obtained in the indoor tests, with values of 0.022 m. In this paper, we establish the optimal distances for observations in both Photogrammetry and 3D Photomodeling modes (outdoor) and point out some working conditions to avoid in indoor environments. Finally, the authors discuss some recommendations for improving the performance and working methods of the device. PMID:27669245

  9. Parallel Processing of Big Point Clouds Using Z-Order Partitioning

    NASA Astrophysics Data System (ADS)

    Alis, C.; Boehm, J.; Liu, K.

    2016-06-01

    As laser scanning technology improves and costs come down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not limited to point cloud data. Voluminous amounts of high-dimensional and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity-grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit in the memory and hard drives of a single node; hence, replicating the entire dataset to each worker node is impractical. The data must therefore be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there are many possible ways to partition the data spatially, and a poor choice may require substantial data transfer. We propose a partitioning based on the Z-order, which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid and then interleaving the binary representations of the dimensions. For example, the Z-order code for the grid square with coordinates (x = 1 = 01 in binary, y = 3 = 11 in binary) is 1011 in binary, i.e. 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail, with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest neighbour algorithm for a hemispherical and a triangular wave point cloud.
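
    The bit-interleaving step can be illustrated directly; the sketch below reproduces the 2D example from the abstract (the Spark-side partitioning itself is not shown, and the function names are illustrative).

```python
def interleave_bits(value, bits):
    """Spread the lowest `bits` bits of value so that they occupy every
    second bit position (2D case)."""
    out = 0
    for i in range(bits):
        out |= ((value >> i) & 1) << (2 * i)
    return out

def morton_2d(x, y, bits):
    """Z-order (Morton) code for a 2D grid cell: interleave the binary
    representations of x and y."""
    return interleave_bits(x, bits) | (interleave_bits(y, bits) << 1)

# Reproduces the example from the abstract: x = 1 (01), y = 3 (11) -> 1011 = 11
assert morton_2d(1, 3, bits=2) == 0b1011 == 11
```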

  10. Hierarchical extraction of urban objects from mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia

    2015-01-01

    Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.

  11. 3D Model of Al Zubarah Fortress in Qatar - Terrestrial Laser Scanning vs. Dense Image Matching

    NASA Astrophysics Data System (ADS)

    Kersten, T.; Mechelke, K.; Maziull, L.

    2015-02-01

    In September 2011 the fortress Al Zubarah, built in 1938 as a typical Arabic fortress and restored in 1987 as a museum, was recorded by the HafenCity University Hamburg using terrestrial laser scanning with the IMAGER 5006h and digital photogrammetry for the Qatar Museum Authority within the framework of the Qatar Islamic Archaeology and Heritage Project. One goal of the object recording was to provide detailed 2D/3D documentation of the fortress. This was used to complete specific detailed restoration work in the recent years. From the registered laser scanning point clouds several cuttings and 2D plans were generated as well as a 3D surface model by triangle meshing. Additionally, point clouds and surface models were automatically generated from digital imagery from a Nikon D70 using the open-source software Bundler/PMVS2, free software VisualSFM, Autodesk Web Service 123D Catch beta, and low-cost software Agisoft PhotoScan. These outputs were compared with the results from terrestrial laser scanning. The point clouds and surface models derived from imagery could not achieve the same quality of geometrical accuracy as laser scanning (i.e. 1-2 cm).

  12. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    PubMed Central

    Pereira, N F; Sitek, A

    2011-01-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496

  13. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    NASA Astrophysics Data System (ADS)

    Pereira, N. F.; Sitek, A.

    2010-09-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  14. Use of parallel computing in mass processing of laser data

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.

    2015-12-01

    The first part of the paper includes a description of the rules used to generate the algorithm needed for the purpose of parallel computing and also discusses the origins of the idea of research on the use of graphics processors in large scale processing of laser scanning data. The next part of the paper includes the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options were divided into the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch in the context of the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process employed for a typical quantity of data processed, which helped confirm the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.

  15. A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling.

    PubMed

    Li, Jilong; Cheng, Jianlin

    2016-05-10

    Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of template structures, and the Cα atoms of the superposed templates form a point cloud for each position of a target protein, which are represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions for Cα atoms of the residues whose positions are uncertain from the distribution, and accepts or rejects new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets, respectively. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96-6.37% and 2.42-5.19% on the three datasets over using single templates. MTMG's performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by a new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html.

  16. A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling

    PubMed Central

    Li, Jilong; Cheng, Jianlin

    2016-01-01

    Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of template structures, and the Cα atoms of the superposed templates form a point cloud for each position of a target protein, which are represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions for Cα atoms of the residues whose positions are uncertain from the distribution, and accepts or rejects new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets, respectively. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96–6.37% and 2.42–5.19% on the three datasets over using single templates. MTMG’s performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by a new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html. PMID:27161489

  17. H-Ransac a Hybrid Point Cloud Segmentation Combining 2d and 3d Data

    NASA Astrophysics Data System (ADS)

    Adam, A.; Chatzilari, E.; Nikolopoulos, S.; Kompatsiaris, I.

    2018-05-01

    In this paper, we present a novel 3D segmentation approach operating on point clouds generated from overlapping images. The aim of the proposed hybrid approach is to effectively segment co-planar objects by leveraging the structural information originating from the 3D point cloud and the visual information from the 2D images, without resorting to learning-based procedures. More specifically, the proposed hybrid approach, H-RANSAC, is an extension of the well-known RANSAC plane-fitting algorithm, incorporating an additional consistency criterion based on the results of 2D segmentation. Our expectation that the integration of 2D data into 3D segmentation will achieve more accurate results is validated experimentally in the domain of 3D city models. Results show that H-RANSAC can successfully delineate building components like main facades and windows, and provide more accurate segmentation results compared to the typical RANSAC plane-fitting algorithm.
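
    A minimal sketch of the underlying idea: standard RANSAC plane fitting extended with a crude 2D-consistency check that rejects hypotheses whose 3D inliers fall into many different 2D segment labels. This is only a stand-in for the paper's actual criterion; the labels_2d array (one precomputed 2D segment label per 3D point) and the purity threshold are assumptions.

```python
import numpy as np

def ransac_plane(points, labels_2d=None, n_iter=500, tol=0.05, purity=0.8,
                 rng=np.random.default_rng(0)):
    """Fit one plane by RANSAC; optionally reject hypotheses whose inliers
    mix several 2D segmentation labels."""
    best_inliers = np.array([], dtype=int)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)
        inliers = np.flatnonzero(dist < tol)
        if labels_2d is not None and len(inliers) > 0:
            counts = np.bincount(labels_2d[inliers])
            if counts.max() / len(inliers) < purity:
                continue          # inliers span several 2D segments -> reject
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```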

  18. Estimation of cylinder orientation in three-dimensional point cloud using angular distance-based optimization

    NASA Astrophysics Data System (ADS)

    Su, Yun-Ting; Hu, Shuowen; Bethel, James S.

    2017-05-01

    Light detection and ranging (LIDAR) has become a widely used tool in remote sensing for mapping, surveying, modeling, and a host of other applications. The motivation behind this work is the modeling of piping systems in industrial sites, where cylinders are the most common primitive or shape. We focus on cylinder parameter estimation in three-dimensional point clouds, proposing a mathematical formulation based on angular distance to determine the cylinder orientation. We demonstrate the accuracy and robustness of the technique on synthetically generated cylinder point clouds (where the true axis orientation is known) as well as on real LIDAR data of piping systems. The proposed algorithm is compared with a discrete space Hough transform-based approach as well as a continuous space inlier approach, which iteratively discards outlier points to refine the cylinder parameter estimates. Results show that the proposed method is more computationally efficient than the Hough transform approach and is more accurate than both the Hough transform approach and the inlier method.

  19. Research of MPPT for photovoltaic generation based on two-dimensional cloud model

    NASA Astrophysics Data System (ADS)

    Liu, Shuping; Fan, Wei

    2013-03-01

    The cloud model is a mathematical representation of fuzziness and randomness in linguistic concepts. It represents a qualitative concept with an expected value Ex, entropy En and hyper-entropy He, and integrates the fuzziness and randomness of a linguistic concept in a unified way. The model provides a new method for transformation between qualitative knowledge and quantitative representation. This paper introduces a maximum power point tracking (MPPT) controller based on a two-dimensional cloud model, derived from an analysis of auto-optimizing MPPT control of photovoltaic power systems combined with cloud model theory. Simulation results show that the cloud controller is simple, intuitive, and exhibits strong robustness and good control performance.
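
    The forward normal cloud generator that underlies (Ex, En, He) representations can be sketched in a few lines; this only shows how cloud drops and their certainty degrees are produced, not the two-dimensional MPPT controller itself, and the names are illustrative.

```python
import numpy as np

def normal_cloud_drops(ex, en, he, n, rng=np.random.default_rng(0)):
    """Forward normal cloud generator: produce n cloud drops (x, mu) from
    expected value Ex, entropy En and hyper-entropy He."""
    en_prime = rng.normal(en, he, n)                   # randomised entropy
    x = rng.normal(ex, np.abs(en_prime))               # drop positions
    mu = np.exp(-(x - ex) ** 2 / (2 * en_prime ** 2 + 1e-12))  # certainty degree
    return x, mu

# Example: drops for the concept "voltage near 30 V" with Ex=30, En=2, He=0.2
# x, mu = normal_cloud_drops(30.0, 2.0, 0.2, 1000)
```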

  20. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    NASA Astrophysics Data System (ADS)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We more particularly focus our work on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we have to search for the 3D model which maximizes some probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting some results on industrial data sets.

  1. Pairwise registration of TLS point clouds using covariance descriptors and a non-cooperative game

    NASA Astrophysics Data System (ADS)

    Zai, Dawei; Li, Jonathan; Guo, Yulan; Cheng, Ming; Huang, Pengdi; Cao, Xiaofei; Wang, Cheng

    2017-12-01

    It is challenging to automatically register TLS point clouds with noise, outliers and varying overlap. In this paper, we propose a new method for pairwise registration of TLS point clouds. We first generate covariance matrix descriptors with an adaptive neighborhood size from the point clouds to find candidate correspondences; we then construct a non-cooperative game to isolate mutually compatible correspondences, which are considered as true positives. The method was tested on three models acquired by two different TLS systems. Experimental results demonstrate that our proposed adaptive covariance (ACOV) descriptor is invariant to rigid transformation and robust to noise and varying resolutions. The average registration errors achieved on the three models are 0.46 cm, 0.32 cm and 1.73 cm, respectively. The computational costs on these models are about 288 s, 184 s and 903 s, respectively. Moreover, our registration framework using ACOV descriptors and a game-theoretic method is superior to the state-of-the-art methods in terms of both registration error and computational time. The experiment on a large outdoor scene further demonstrates the feasibility and effectiveness of the proposed pairwise registration framework.
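
    As a rough illustration of a covariance-based descriptor, the sketch below computes the covariance matrix of the point coordinates inside a spherical neighbourhood; the paper's adaptive neighbourhood size, feature channels and game-theoretic correspondence pruning are not reproduced, and the fixed radius is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def covariance_descriptor(points, center, radius):
    """Covariance matrix of the coordinates within a spherical
    neighbourhood around `center` (None if the neighbourhood is too small)."""
    tree = cKDTree(points)
    idx = tree.query_ball_point(center, r=radius)
    neigh = points[idx]
    if len(neigh) < 4:
        return None
    centered = neigh - neigh.mean(axis=0)
    return centered.T @ centered / (len(neigh) - 1)
```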

  2. a Low-Cost and Portable System for 3d Reconstruction of Texture-Less Objects

    NASA Astrophysics Data System (ADS)

    Hosseininaveh, A.; Yazdan, R.; Karami, A.; Moradi, M.; Ghorbani, F.

    2015-12-01

    Optical methods for 3D modelling of objects can be classified into two categories: image-based and range-based methods. Structure from Motion is one of the image-based methods implemented in commercial software. In this paper, a low-cost and portable system for 3D modelling of texture-less objects is proposed. This system includes a rotating table designed and developed using a stepper motor and a very light rotation plate. The system also has eight laser light sources with very dense and strong beams, which provide a relatively appropriate pattern on texture-less objects. In this system, images are taken semi-automatically by a camera according to the steps of the stepper motor. The images can be used in Structure from Motion procedures implemented in Agisoft software. To evaluate the performance of the system, two dark objects were used. Reference point clouds of these objects were obtained by spraying a light powder on the objects and using a GOM laser scanner. Then these objects were placed on the proposed turntable. Several convergent images were taken from each object while the laser light sources projected the pattern onto the objects. Afterwards, the images were imported into VisualSFM as a fully automatic software package for generating an accurate and complete point cloud. Finally, the obtained point clouds were compared to the point clouds generated by the GOM laser scanner. The results showed the ability of the proposed system to produce a complete 3D model of texture-less objects.

  3. Mobile 3d Mapping with a Low-Cost Uav System

    NASA Astrophysics Data System (ADS)

    Neitzel, F.; Klonowski, J.

    2011-09-01

    In this contribution it is shown how a UAV system can be built at low cost. The components of the system, the equipment as well as the control software are presented. Furthermore, an implemented programme for photogrammetric flight planning and its execution are described. The main focus of this contribution is on the generation of 3D point clouds from digital imagery. For this purpose, web services and free software solutions are presented which automatically generate 3D point clouds from arbitrary image configurations. Possibilities of georeferencing are described and the achieved accuracy has been determined. The presented workflow is finally used for the acquisition of 3D geodata. Using the example of a landfill survey, it is shown that marketable products can be derived using a low-cost UAV.

  4. Robust point cloud classification based on multi-level semantic relationships for urban scenes

    NASA Astrophysics Data System (ADS)

    Zhu, Qing; Li, Yuan; Hu, Han; Wu, Bo

    2017-07-01

    The semantic classification of point clouds is a fundamental part of three-dimensional urban reconstruction. For datasets with high spatial resolution but significant noise, a general trend is to exploit more contextual information to counteract the reduced discriminative power of individual features for classification. However, previous works adopting contextual information are either too restrictive or operate only over small regions. In this paper, we propose a point cloud classification method based on multi-level semantic relationships, including point-homogeneity, supervoxel-adjacency and class-knowledge constraints, which is more versatile and incrementally propagates classification cues from individual points to the object level, formulating them as a graphical model. The point-homogeneity constraint clusters points with similar geometric and radiometric properties into regular-shaped supervoxels that correspond to the vertices in the graphical model. The supervoxel-adjacency constraint contributes to the pairwise interactions by providing explicit adjacency relationships between supervoxels. The class-knowledge constraint operates at the object level based on semantic rules, guaranteeing the classification correctness of supervoxel clusters at that level. International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark tests have shown that the proposed method achieves state-of-the-art performance, with an average per-area completeness and correctness of 93.88% and 95.78%, respectively. The evaluation on the classification of photogrammetric point clouds and DSMs generated from aerial imagery confirms the method's reliability in several challenging urban scenes.

  5. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure.

    PubMed

    Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-07-28

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a fine discretized geometry with a reduced amount of time and ready to be used with structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made by voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.

  6. Forest Biomass Mapping from Stereo Imagery and Radar Data

    NASA Astrophysics Data System (ADS)

    Sun, G.; Ni, W.; Zhang, Z.

    2013-12-01

    Both InSAR and lidar data provide critical information on forest vertical structure, which is critical for regional mapping of biomass. However, the regional application of these data is limited by their availability and acquisition costs. Some researchers have demonstrated the potential of stereo imagery for the estimation of forest height. Most of these studies were conducted on aerial images or spaceborne images with very high resolutions (~0.5 m). Spaceborne stereo imagers with global coverage, such as ALOS/PRISM, have coarser spatial resolutions (2-3 m) to achieve a wider swath. The features of stereo images are directly affected by resolution, and the approaches used by most researchers need to be adjusted for stereo imagery with lower resolutions. This study concentrated on analyzing the features of point clouds synthesized from multi-view stereo imagery over forested areas. Small-footprint lidar and lidar waveform data were used as references. The triplets of ALOS/PRISM data form three pairs (forward/nadir, backward/nadir and forward/backward) of stereo images. Each pair of stereo images can be used to generate points (pixels) with 3D coordinates. By carefully co-registering the points from the three pairs of stereo images, a point cloud was generated. The height of each point above the ground surface was then calculated using a DEM from the USGS National Elevation Dataset as the ground surface elevation. The height data were gridded into pixels of different sizes and the histograms of the points within a pixel were analyzed. The average height of the points within a pixel was used as the height of the pixel to generate a canopy height map. The results showed that the synergy of point clouds from different views was necessary, as it increased the point density so that the point cloud could detect the vertical structure of sparse and unclosed forests. The top layer of multi-layered forest could be captured, but dense forest prevented the stereo imagery from seeing through. The canopy height map exhibited spatial patterns of roads, forest edges and patches. Linear regression showed that the canopy height map had a good correlation with RH50 of LVIS data at 30 m pixel size, with a gain of 1.04, a bias of 4.3 m and R2 of 0.74 (Fig. 1). The canopy height map from PRISM and dual-pol PALSAR data were used together to map biomass in our study area near Howland, Maine, and the results were evaluated using a biomass map generated independently from LVIS waveform data. The results showed that adding the CHM from PRISM significantly improved biomass accuracy and raised the biomass saturation level of L-band SAR data in forest biomass mapping.

  7. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan

    2015-11-01

    To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT), the authors have developed a level-set based surface reconstruction method. The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, the continuous representation is particularly advantageous for subsequent surface registration and motion tracking because it eliminates the need to maintain explicit point correspondences as in discrete models. The authors solve the proposed formulation with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied the method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. On phantom point clouds, the method achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were obtained from both the reconstructed surface and the CT surface, with mean and standard deviation of (μrecon = -2.7 × 10^-3 mm^-1, σrecon = 7.0 × 10^-3 mm^-1) and (μCT = -2.5 × 10^-3 mm^-1, σCT = 5.3 × 10^-3 mm^-1), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method to faithfully represent the underlying patient surface. The authors have developed and integrated an accurate level-set based continuous surface reconstruction method for point clouds acquired by a 3D surface photogrammetry system. The proposed method generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.

  8. Multitemporal field-based plant height estimation using 3D point clouds generated from small unmanned aerial systems high-resolution imagery

    NASA Astrophysics Data System (ADS)

    Malambo, L.; Popescu, S. C.; Murray, S. C.; Putman, E.; Pugh, N. A.; Horne, D. W.; Richardson, G.; Sheridan, R.; Rooney, W. L.; Avant, R.; Vidrine, M.; McCutchen, B.; Baltensperger, D.; Bishop, M.

    2018-02-01

    Plant breeders and agronomists are increasingly interested in repeated plant height measurements over large experimental fields to study critical aspects of plant physiology, genetics and environmental conditions during plant growth. However, collecting such measurements using commonly used manual field measurements is inefficient. 3D point clouds generated from unmanned aerial systems (UAS) images using Structure from Motion (SfM) techniques offer a new option for efficiently deriving in-field crop height data. This study evaluated UAS/SfM for multitemporal 3D crop modelling and developed and assessed a methodology for estimating plant height data from point clouds generated using SfM. High-resolution images in visible spectrum were collected weekly across 12 dates from April (planting) to July (harvest) 2016 over 288 maize (Zea mays L.) and 460 sorghum (Sorghum bicolor L.) plots using a DJI Phantom 3 Professional UAS. The study compared SfM point clouds with terrestrial lidar (TLS) at two dates to evaluate the ability of SfM point clouds to accurately capture ground surfaces and crop canopies, both of which are critical for plant height estimation. Extended plant height comparisons were carried out between SfM plant height (the 90th, 95th, 99th percentiles and maximum height) per plot and field plant height measurements at six dates throughout the growing season to test the repeatability and consistency of SfM estimates. High correlations were observed between SfM and TLS data (R2 = 0.88-0.97, RMSE = 0.01-0.02 m and R2 = 0.60-0.77 RMSE = 0.12-0.16 m for the ground surface and canopy comparison, respectively). Extended height comparisons also showed strong correlations (R2 = 0.42-0.91, RMSE = 0.11-0.19 m for maize and R2 = 0.61-0.85, RMSE = 0.12-0.24 m for sorghum). In general, the 90th, 95th and 99th percentile height metrics had higher correlations to field measurements than the maximum metric though differences among them were not statistically significant. The accuracy of SfM plant height estimates fluctuated over the growing period, likely impacted by the changing reflectance regime due to plant development. Overall, these results show a potential path to reducing laborious manual height measurement and enhancing plant research programs through UAS and SfM.
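
    The percentile metrics compared above are straightforward to compute once per-plot heights above ground are available; a minimal sketch (with a synthetic usage example, not the study's data) follows.

```python
import numpy as np

def plot_height_metrics(heights, percentiles=(90, 95, 99)):
    """Per-plot canopy height metrics from normalised point heights
    (heights above ground), mirroring percentile-style plant height
    statistics compared against manual field measurements."""
    stats = {f"p{p}": float(np.percentile(heights, p)) for p in percentiles}
    stats["max"] = float(heights.max())
    return stats

# Example with synthetic heights for a single plot (metres):
# heights = np.random.default_rng(1).gamma(2.0, 0.5, 5000)
# print(plot_height_metrics(heights))
```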

  9. Virtual reality based adaptive dose assessment method for arbitrary geometries in nuclear facility decommissioning.

    PubMed

    Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun

    2018-05-17

    This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to the cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with the three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point-cloud is generated automatically according to the virtual model of the object. In order to improve the efficiency of dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression formula in the fitting function. The geometric modeling capability of VRBAM was verified by simulating basic geometries, which included a convex surface, a concave surface, a flat surface and their combination. The simulation results show that the VRBAM is more flexible and superior to other approaches in modeling complex geometries. In this paper, the computation time and dose rate results obtained from the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors. © 2018 IOP Publishing Ltd.

  10. Structure Line Detection from LIDAR Point Clouds Using Topological Elevation Analysis

    NASA Astrophysics Data System (ADS)

    Lo, C. Y.; Chen, L. C.

    2012-07-01

    Airborne LIDAR point clouds, which provide considerable numbers of points on object surfaces, are essential to building modeling. In the last two decades, studies have developed different ways to identify structure lines using two main approaches: data-driven and model-driven. These studies have shown that automatic modeling processes depend on certain considerations, such as the thresholds used, initial values, designed formulas, and predefined cues. Following the development of laser scanning systems, scanning rates have increased and can provide point clouds with higher point density. Therefore, this study proposes using topological elevation analysis (TEA) to detect structure lines instead of threshold-dependent concepts and predefined constraints. This analysis contains two parts: data pre-processing and structure line detection. To preserve the original elevation information, a pseudo-grid for generating digital surface models is produced during the first part. The highest point in each grid cell is set as the elevation value, and its original three-dimensional position is preserved. In the second part, using TEA, the structure lines are identified based on the topology of local elevation changes in two directions. Because structure lines have certain geometric properties, their locations show small relief in the radial direction and steep elevation changes in the circular direction. Following the proposed approach, TEA can be used to determine 3D line information without selecting thresholds. For validation, the TEA results are compared with those of a region growing approach. The results indicate that the proposed method can produce structure lines using dense point clouds.

  11. A CERES-like Cloud Property Climatology Using AVHRR Data

    NASA Astrophysics Data System (ADS)

    Minnis, P.; Bedka, K. M.; Yost, C. R.; Trepte, Q.; Bedka, S. T.; Sun-Mack, S.; Doelling, D.

    2015-12-01

    Clouds affect the climate system by modulating the radiation budget and distributing precipitation. Variations in cloud patterns and properties are expected to accompany changes in climate. The NASA Clouds and the Earth's Radiant Energy System (CERES) Project developed an end-to-end analysis system to measure broadband radiances from a radiometer and retrieve cloud properties from collocated high-resolution MODerate-resolution Imaging Spectroradiometer (MODIS) data to generate a long-term climate data record of clouds and clear-sky properties and top-of-atmosphere radiation budget. The first MODIS was not launched until 2000, so the current CERES record is only 15 years long at this point. The core of the algorithms used to retrieve the cloud properties from MODIS is based on the spectral complement of the Advanced Very High Resolution Radiometer (AVHRR), which has been aboard a string of satellites since 1978. The CERES cloud algorithms were adapted for application to AVHRR data and have been used to produce an ongoing CERES-like cloud property and surface temperature product that includes an initial narrowband-based radiation budget. This presentation will summarize this new product, which covers nearly 37 years, and its comparability with cloud parameters from CERES, CALIPSO, and other satellites. Examples of some applications of this dataset are given and the potential for generating a long-term radiation budget CDR is also discussed.

  12. New from the Old - Measuring Coastal Cliff Change with Historical Oblique Aerial Photos

    NASA Astrophysics Data System (ADS)

    Warrick, J. A.; Ritchie, A.

    2016-12-01

    Oblique aerial photographs are commonly collected to document coastal landscapes. Here we show that these historical photographs can be used to develop topographic models with Structure-from-Motion (SfM) photogrammetric techniques if adequate photo-to-photo overlaps exist. Focusing on the 60-m high cliffs of Fort Funston, California, photographs from the California Coastal Records Project were combined with ground control points to develop topographic point clouds of the study area for five years between 2002 and 2010. Uncertainties in the results were assessed by comparing SfM-derived point clouds with airborne lidar data, and the differences between these data were related to the number and spatial distribution of ground control points used in the SfM analyses. With six or more ground control points the root mean squared error between the SfM and lidar data was less than 0.3 m (minimum = 0.18 m) and the mean systematic error was consistently less than 0.10 m. Because of the oblique orientation of the imagery, the SfM-derived point clouds provided coverage on vertical to overhanging portions of the cliff, and point densities from the SfM techniques averaged between 17 and 161 points/m2 on the cliff face. The time-series of topographic point clouds revealed many topographic changes, including landslides, rockfalls and the erosion of landslide talus along the Fort Funston beach. Thus, we concluded that historical oblique photographs, such as those generated by the California Coastal Records Project, can provide useful tools for mapping coastal topography and measuring coastal change.

  13. Evaluating the effectiveness of low cost UAV generated topography for geomorphic change detection

    NASA Astrophysics Data System (ADS)

    Cook, K. L.

    2014-12-01

    With the recent explosion in the use and availability of unmanned aerial vehicle platforms and development of easy to use structure from motion software, UAV based photogrammetry is increasingly being adopted to produce high resolution topography for the study of surface processes. UAV systems can vary substantially in price and complexity, but the tradeoffs between these and the quality of the resulting data are not well constrained. We look at one end of this spectrum and evaluate the effectiveness of a simple low cost UAV setup for obtaining high resolution topography in a challenging field setting. Our study site is the Daan River gorge in western Taiwan, a rapidly eroding bedrock gorge that we have monitored with terrestrial Lidar since 2009. The site presents challenges for the generation and analysis of high resolution topography, including vertical gorge walls, vegetation, wide variation in surface roughness, and a complicated 3D morphology. In order to evaluate the accuracy of the UAV-derived topography, we compare it with terrestrial Lidar data collected during the same survey period. Our UAV setup combines a DJI Phantom 2 quadcopter with a 16 megapixel Canon Powershot camera for a total platform cost of less than $850. The quadcopter is flown manually, and the camera is programmed to take a photograph every 5 seconds, yielding 200-250 pictures per flight. We measured ground control points and targets for both the Lidar scans and the aerial surveys using a Leica RTK GPS with 1-2 cm accuracy. UAV derived point clouds were obtained using Agisoft Photoscan software. We conducted both Lidar and UAV surveys before and after a summer typhoon season, allowing us to evaluate the reliability of the UAV survey to detect geomorphic changes in the range of one to several meters. We find that this simple UAV setup can yield point clouds with an average accuracy on the order of 10 cm compared to the Lidar point clouds. Well-distributed and accurately located ground control points are critical, but we achieve good accuracy even with relatively few ground control points (25) over a 150,000 sq m area. The large number of photographs taken during each flight also allows us to explore the reproducibility of the UAV-derived topography by generating point clouds from different subsets of photographs taken of the same area during a single survey.

  14. Automatic Extraction of Road Markings from Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.

    2017-09-01

    Road markings, as critical features in high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving vehicles. Mobile laser scanning (MLS) systems are an effective way to obtain 3D information about the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth: points with small elevation differences from their neighborhood are considered to be ground points. The ground points are then partitioned into a set of profiles according to the trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps above a certain threshold, which varies inversely with laser distance. The separated points are used as seed points for intensity-based region growing so as to obtain complete road markings. We use a point cloud template-matching method to refine the road marking candidates by removing noise clusters with low correlation coefficients. In an experiment with an MLS point set of about 2 kilometres in a city center, our method provides a promising solution for road marking extraction from MLS data.
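
    A simplified sketch of the first two steps: the neighbourhood elevation-consistency filter and intensity-based seed selection. The range-dependent threshold, profile partitioning by trajectory, region growing and template matching are omitted; radii, thresholds and function names are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def ground_by_elevation_consistency(points, radius=0.5, dz_max=0.05):
    """Keep points whose elevation differs little from their neighbourhood
    (the smooth-road assumption used to filter ground points)."""
    tree = cKDTree(points[:, :2])
    keep = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p[:2], r=radius)
        keep[i] = abs(p[2] - np.median(points[idx, 2])) < dz_max
    return points[keep], keep

def marking_seeds(intensity, quantile=0.95):
    """Indices of high-intensity ground points used as seeds for region
    growing into complete markings."""
    return np.flatnonzero(intensity >= np.quantile(intensity, quantile))
```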

  15. Physical modeling of 3D and 4D laser imaging

    NASA Astrophysics Data System (ADS)

    Anna, Guillaume; Hamoir, Dominique; Hespel, Laurent; Lafay, Fabien; Rivière, Nicolas; Tanguy, Bernard

    2010-04-01

    Laser imaging offers potential for observation, 3D terrain mapping and classification as well as target identification, including behind vegetation, camouflage or glass windows, by day and night, and under all-weather conditions. First-generation systems deliver 3D point clouds. Their threshold detection is largely affected by the local opto-geometric characteristics of the objects, leading to inaccuracies in the distances measured, and by partial occultation, leading to multiple echoes. Second-generation systems circumvent these limitations by recording the temporal waveforms received by the system, so that data processing can improve the telemetry and the point cloud can better match reality. Future algorithms may exploit the full potential of the 4D full-waveform data. Hence, being able to simulate point-cloud (3D) and full-waveform (4D) laser imaging is key. We have developed a numerical model for predicting the output data of 3D or 4D laser imagers. The model accounts for the temporal and transverse characteristics of the laser pulse (i.e. of the "laser bullet") emitted by the system, its propagation through a turbulent and scattering atmosphere, its interaction with the objects present in the field of view, and the characteristics of the optoelectronic reception path of the system.

  16. Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics

    NASA Astrophysics Data System (ADS)

    Kohira, K.; Masuda, H.

    2017-09-01

    A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormous. A large storage capacity is required to store such point-clouds, and heavy loads are placed on the network if point-clouds are transferred across it. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
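
    To illustrate the image-based encoding idea, the sketch below quantises one coordinate channel of an already gridded scan to 16 bits and stores it as a lossless PNG (using Pillow). The mapping of points onto the pixel grid via GPS time and scanner parameters is assumed to have been done beforehand, and the 16-bit quantisation is an illustrative simplification rather than the authors' scheme.

```python
import numpy as np
from PIL import Image

def encode_coordinate_image(values, path):
    """Quantise one coordinate channel (e.g. x, y or z) of a point grid to
    16-bit and save it as a PNG; PNG's lossless compression then shrinks
    the file. Returns (vmin, vmax), which are needed for decoding."""
    vmin, vmax = float(values.min()), float(values.max())
    scale = 65535.0 / max(vmax - vmin, 1e-9)
    img = np.round((values - vmin) * scale).astype(np.uint16)
    Image.fromarray(img).save(path, format="PNG")
    return vmin, vmax

# Assuming the scan has already been arranged on a (scan line x firing) grid:
# x_grid = ...  # 2D numpy array of x coordinates
# meta = encode_coordinate_image(x_grid, "x_channel.png")
```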

  17. Analysis of 3d Building Models Accuracy Based on the Airborne Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Ostrowski, W.; Pilarska, M.; Charyton, J.; Bakuła, K.

    2018-05-01

    Creating 3D building models at large scale is becoming more popular and finds many applications. Nowadays, the broad term "3D building models" can be applied to several types of products: the well-known CityGML solid models (available at several Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining such 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper, a methodology for the inspection of datasets containing 3D models is presented. The proposed approach checks every building in the dataset against ALS point clouds, testing both accuracy and level of detail. Analysis of statistical parameters of the normal heights for the reference point cloud and the tested planes, combined with segmentation of the point cloud, provides a tool that can indicate which buildings and which roof planes do not fulfil the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.

  18. The registration of non-cooperative moving targets laser point cloud in different view point

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Sun, Huayan; Guo, Huichao

    2018-01-01

    Multi-view point cloud registration of non-cooperative moving targets is the key technology for 3D reconstruction in laser three-dimensional imaging. The main problem is that the point density changes greatly and noise is present under different acquisition conditions. In this paper, a feature descriptor is first used to find the most similar point cloud; then, in a registration algorithm based on region segmentation, the geometric structure of the points is extracted from the geometric similarity between points. The point cloud is divided into regions based on spectral clustering, a feature descriptor is created for each region, the most similar regions are searched for in the most similar viewpoint's point cloud, and the pair of point clouds is then aligned by aligning their minimum bounding boxes. These steps are repeated until the registration of all point clouds is completed. Experiments show that this method is insensitive to the density of the point clouds and is robust to the noise of laser three-dimensional imaging.

  19. Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications

    NASA Astrophysics Data System (ADS)

    Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.

    2018-05-01

    We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are the varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfactory results: although most outliers can be identified correctly (high recall), many inliers are erroneously removed (low precision), too. This hampers 3D object reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roofs, ground, vegetation, fields, and water). The performance of learned filtering is evaluated on several large SfM point clouds of cities. The results confirm our underlying assumption that discriminatively learning inlier-outlier distributions does improve precision over global heuristics, by up to ≈ 12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈ 10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
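
    A minimal sketch of the general approach, per-point inlier/outlier classification with a Random Forest, is given below; the eigenvalue-based features and the hand-labelled toy data are placeholders for whatever descriptors and training sets the original pipeline uses (scikit-learn and SciPy assumed):

      # Hedged sketch of supervised outlier detection on MVS points: train a Random
      # Forest on per-point features and predict an inlier/outlier label for every
      # 3D point. The covariance-based features below are stand-ins for the
      # descriptors the original pipeline extracts.
      import numpy as np
      from scipy.spatial import cKDTree
      from sklearn.ensemble import RandomForestClassifier

      def local_covariance_features(points, k=20):
          """Eigenvalue-based shape features from each point's k-nearest neighbours."""
          tree = cKDTree(points)
          _, idx = tree.query(points, k=k)
          feats = []
          for nbrs in idx:
              w = np.linalg.eigvalsh(np.cov(points[nbrs].T))   # ascending eigenvalues
              w = np.maximum(w, 1e-12) / w.sum()
              l1, l2, l3 = w[2], w[1], w[0]
              feats.append([l1, l2, l3,
                            (l1 - l2) / l1,                    # linearity
                            (l2 - l3) / l1,                    # planarity
                            l3 / l1])                          # sphericity
          return np.asarray(feats)

      # toy data: a planar "facade" patch plus scattered outliers, labelled by hand
      rng = np.random.default_rng(1)
      plane = np.c_[rng.uniform(0, 10, (500, 2)), rng.normal(0, 0.02, 500)]
      noise = rng.uniform(-2, 12, (100, 3))
      pts = np.vstack([plane, noise])
      labels = np.r_[np.ones(500, int), np.zeros(100, int)]    # 1 = inlier, 0 = outlier

      X = local_covariance_features(pts)
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
      pred = clf.predict(X)                                    # per-point inlier/outlier
      print("training accuracy:", (pred == labels).mean())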

  20. Conceptual design of the CZMIL data processing system (DPS): algorithms and software for fusing lidar, hyperspectral data, and digital images

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Tuell, Grady

    2010-04-01

    The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning-globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion-based capability to produce images and classifications of the shallow-water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing-to-acquisition ratio.

  1. An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features.

    PubMed

    He, Ying; Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin

    2017-08-11

    The Iterative Closest Points (ICP) algorithm is the mainstream algorithm for the accurate registration of 3D point cloud data. The algorithm requires a proper initial value and an approximate pre-alignment of the two point clouds to prevent it from falling into local extrema, but in practical point cloud matching it is difficult to guarantee this requirement. In this paper, we propose an ICP algorithm based on point cloud features (GF-ICP). This method uses geometric features of the point clouds to be registered, such as curvature, surface normal and point density, to search for correspondences between the two point clouds, and introduces the geometric features into the error function to realize accurate registration. The experimental results show that the algorithm can improve the convergence speed and widen the interval of convergence without requiring a proper initial value.
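
    A hedged sketch of the correspondence step as described, matching points in a space that combines position with a geometric feature (here a PCA-based curvature estimate, weighted by an assumed factor alpha), is shown below; it conveys the general flavour of a feature-augmented ICP search, not the published GF-ICP formulation:

      # Correspondence search in an augmented [x, y, z, alpha*curvature] space, so
      # that points are matched to geometrically similar neighbours rather than
      # merely the spatially closest ones. The weighting scheme is an assumption.
      import numpy as np
      from scipy.spatial import cKDTree

      def curvature(points, k=15):
          """Surface variation lambda_min / (lambda_1+lambda_2+lambda_3) per point."""
          idx = cKDTree(points).query(points, k=k)[1]
          c = np.empty(len(points))
          for i, nbrs in enumerate(idx):
              w = np.linalg.eigvalsh(np.cov(points[nbrs].T))
              c[i] = w[0] / max(w.sum(), 1e-12)
          return c

      def feature_correspondences(source, target, alpha=5.0):
          """Match each source point to the target point nearest in [xyz, alpha*curvature]."""
          src_aug = np.c_[source, alpha * curvature(source)]
          tgt_aug = np.c_[target, alpha * curvature(target)]
          _, j = cKDTree(tgt_aug).query(src_aug, k=1)
          return j        # j[i] is the index of the target point matched to source[i]

    These correspondences could then feed the usual least-squares rigid update inside each ICP iteration; a stand-alone SVD-based update is sketched later in this listing.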

  2. An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features

    PubMed Central

    Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin

    2017-01-01

    The Iterative Closest Points (ICP) algorithm is the mainstream algorithm for the accurate registration of 3D point cloud data. The algorithm requires a proper initial value and an approximate pre-alignment of the two point clouds to prevent it from falling into local extrema, but in practical point cloud matching it is difficult to guarantee this requirement. In this paper, we propose an ICP algorithm based on point cloud features (GF-ICP). This method uses geometric features of the point clouds to be registered, such as curvature, surface normal and point density, to search for correspondences between the two point clouds, and introduces the geometric features into the error function to realize accurate registration. The experimental results show that the algorithm can improve the convergence speed and widen the interval of convergence without requiring a proper initial value. PMID:28800096

  3. a Voxel-Based Metadata Structure for Change Detection in Point Clouds of Large-Scale Urban Areas

    NASA Astrophysics Data System (ADS)

    Gehrung, J.; Hebel, M.; Arens, M.; Stilla, U.

    2018-05-01

    Mobile laser scanning has the potential not only to create detailed representations of urban environments, but also to determine changes down to a very detailed level. An environment representation for change detection in large-scale urban environments based on point clouds has drawbacks in terms of memory scalability. Volumes, however, are a promising building block for memory-efficient change detection methods. The challenge of working with 3D occupancy grids is that the usual raycasting-based methods applied for their generation lead to artifacts caused by the traversal of unfavorably discretized space. These artifacts can distort the state of voxels in close proximity to planar structures. In this work we propose a raycasting approach that utilizes knowledge about planar surfaces to completely prevent this kind of artifact. To demonstrate the capabilities of our approach, a method for the iterative volumetric approximation of point clouds that speeds up the raycasting by 36 percent is proposed.

  4. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya

    2015-11-15

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method for point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking because it eliminates the need to maintain explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulties detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically the mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method achieved submillimeter reconstruction RMSE under the different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were obtained from both the reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10⁻³ mm⁻¹, σ_recon = 7.0 × 10⁻³ mm⁻¹) and (μ_CT = −2.5 × 10⁻³ mm⁻¹, σ_CT = 5.3 × 10⁻³ mm⁻¹), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method to faithfully represent the underlying patient surface. Conclusions: The authors have developed and integrated an accurate level-set based continuous surface reconstruction method for point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.

  5. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan

    2015-01-01

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method for point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking because it eliminates the need to maintain explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulties detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically the mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method achieved submillimeter reconstruction RMSE under the different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were obtained from both the reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10⁻³ mm⁻¹, σ_recon = 7.0 × 10⁻³ mm⁻¹) and (μ_CT = −2.5 × 10⁻³ mm⁻¹, σ_CT = 5.3 × 10⁻³ mm⁻¹), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method to faithfully represent the underlying patient surface. Conclusions: The authors have developed and integrated an accurate level-set based continuous surface reconstruction method for point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy. PMID:26520747

  6. Valorisation of urban elements through 3D models generated from image matching point clouds and augmented reality visualization based in mobile platforms

    NASA Astrophysics Data System (ADS)

    Marques, Luís.; Roca Cladera, Josep; Tenedório, José António

    2017-10-01

    The use of multiple sets of images with a high level of overlap to extract 3D point clouds has increased progressively in recent years. Two main factors are at the origin of this progress. First, image matching algorithms have been optimised and the software supporting these techniques has been constantly developed. Second, the emerging paradigm of smart cities has been promoting the virtualization of urban spaces and their elements. The creation of 3D models of urban elements is extremely relevant for urbanists: it allows digital archives of urban elements to be constituted and is especially useful for enriching maps and databases or reconstructing and analysing objects and areas through time, building and recreating scenarios, and implementing intuitive methods of interaction. These characteristics support, for example, greater public participation, creating a fully collaborative system for envisioning processes, simulations and results. This paper is organized around two main topics. The first deals with the technical data modelling obtained from terrestrial photographs: planning criteria for obtaining photographs, approving or rejecting photos based on their quality, editing photos, creating masks, aligning photos, generating tie points, extracting point clouds, generating meshes, building textures and exporting results. The application of these procedures results in 3D models for the visualization of urban elements of the city of Barcelona. The second concerns the use of Augmented Reality through mobile platforms, allowing the city's origins and their relation to the present urban morphology to be understood, (en)visioning solutions, processes and simulations, and making it possible for agents in several domains to support their decisions (and understand them), achieving a faster and wider consensus.

  7. Evaluating the accuracy of low cost UAV generated topography and its effectiveness for geomorphic change detection

    NASA Astrophysics Data System (ADS)

    Cook, Kristen

    2015-04-01

    With the recent explosion in the use and availability of unmanned aerial vehicle platforms and the development of easy-to-use structure from motion (SfM) software, UAV-based photogrammetry is increasingly being adopted to produce high resolution topography for the study of surface processes. UAV systems vary substantially in price and complexity, but the tradeoffs between these and the quality of the resulting data are not well constrained. We look at one end of this spectrum and evaluate the effectiveness of a simple low cost UAV setup for obtaining high resolution topography in a challenging field setting. Our study site is the Daan River gorge in western Taiwan, a rapidly eroding bedrock gorge that we have monitored with terrestrial Lidar since 2009. The site presents challenges for the generation and analysis of high resolution topography, including vertical gorge walls, vegetation, wide variation in surface roughness, and a complicated 3D morphology. In order to evaluate the accuracy of the UAV-derived topography, we compare it with terrestrial Lidar data collected during the same survey period. Our UAV setup combines a DJI Phantom 2 quadcopter with a 16 megapixel Canon Powershot camera for a total platform cost of less than 850. The quadcopter is flown manually, and the camera is programmed to take a photograph every 4 seconds, yielding 200-250 pictures per flight. We measured ground control points and targets for both the Lidar scans and the aerial surveys using a Leica RTK GPS with 1-2 cm accuracy. UAV-derived point clouds were obtained using Agisoft Photoscan software. We conducted both Lidar and UAV surveys before and after the 2014 typhoon season, allowing us to evaluate the reliability of the UAV survey for detecting geomorphic changes in the range of one to several meters. The accuracy of the SfM point clouds depends strongly on the characteristics of the surface being considered, with vegetation and small-scale texture causing inaccuracies. However, we find that this simple UAV setup can yield point clouds with 78% of points within 20 cm and 60% within 10 cm of the Lidar point clouds, with the higher errors dominated by vegetation effects. Well-distributed and accurately located ground control points are critical, but we achieve good accuracy even with relatively few ground control points (25) over a 150,000 sq m area. The large number of photographs taken during each flight also allows us to explore the reproducibility of the UAV-derived topography by generating point clouds from different subsets of photographs taken of the same area during a single survey. These results show the same pattern of higher errors due to vegetation, but bedrock surfaces generally have errors of less than 4 cm. These results suggest that even very basic UAV surveys can yield data suitable for measuring geomorphic change on the scale of a channel reach.
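
    The reported point-to-cloud comparisons can be reproduced in spirit with a plain nearest-neighbour check like the sketch below (a SciPy KD-tree query; this is not the M3C2 algorithm used in the related study later in this listing, and the tolerances are illustrative):

      # Minimal cloud-to-cloud check: for every point of the test (SfM) cloud, find
      # the nearest reference (lidar) point and report the fraction within given
      # tolerances plus the RMS distance.
      import numpy as np
      from scipy.spatial import cKDTree

      def cloud_to_cloud_stats(test_cloud, reference_cloud, tolerances=(0.10, 0.20)):
          d, _ = cKDTree(reference_cloud).query(test_cloud, k=1)
          stats = {f"within {t:.2f} m": float((d <= t).mean()) for t in tolerances}
          stats["rms [m]"] = float(np.sqrt((d ** 2).mean()))
          return stats

      # usage with synthetic stand-ins for the SfM and lidar clouds
      rng = np.random.default_rng(2)
      lidar = rng.uniform(0, 50, (20000, 3))
      sfm = lidar[:5000] + rng.normal(0, 0.05, (5000, 3))   # noisy resampling
      print(cloud_to_cloud_stats(sfm, lidar))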

  8. Modeling the topography of shallow braided rivers using Structure-from-Motion photogrammetry

    NASA Astrophysics Data System (ADS)

    Javernick, L.; Brasington, J.; Caruso, B.

    2014-05-01

    Recent advances in computer vision and image analysis have led to the development of a novel, fully automated photogrammetric method to generate dense 3D point cloud data. This approach, termed Structure-from-Motion or SfM, requires only limited ground control and is ideally suited to imagery obtained from low-cost, non-metric cameras acquired either at close range or using aerial platforms. Terrain models generated using SfM have begun to emerge recently and, with a growing spectrum of software now available, there is an urgent need to provide a robust quality assessment of the data products generated using standard field and computational workflows. To address this demand, we present a detailed error analysis of sub-meter resolution terrain models of two contiguous reaches (1.6 and 1.7 km long) of the braided Ahuriri River, New Zealand, generated using SfM. A six-stage methodology is described, involving: i) hand-held image acquisition from an aerial platform; ii) 3D point cloud extraction using Agisoft PhotoScan; iii) georeferencing on a redundant network of GPS-surveyed ground-control points; iv) point cloud filtering to reduce computational demand as well as vegetation noise; v) optical bathymetric modeling of inundated areas; and vi) data fusion and surface modeling to generate sub-meter raster terrain models. Bootstrapped geo-registration as well as extensive distributed GPS and sonar-based bathymetric check data were used to quantify the quality of the models generated after each processing step. The results obtained provide the first quantified analysis of SfM applied to model the complex terrain of a braided river. Results indicate that geo-registration errors of 0.04 m (planar) and 0.10 m (elevation) and vertical surface errors of 0.10 m in non-vegetated areas can be achieved from a dataset of photographs taken at 600 m and 800 m above ground level. These encouraging results suggest that this low-cost, logistically simple method can deliver high quality terrain datasets competitive with those obtained with significantly more expensive laser scanning, and suitable for geomorphic change detection and hydrodynamic modeling.

  9. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure

    PubMed Central

    Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-01-01

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected with the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used for structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978

  10. Application for 3d Scene Understanding in Detecting Discharge of Domestic Waste Along Complex Urban Rivers

    NASA Astrophysics Data System (ADS)

    Ninsalam, Y.; Qin, R.; Rekittke, J.

    2016-06-01

    In our study we use 3D scene understanding to detect the discharge of domestic solid waste along an urban river. Solid waste found along the Ciliwung River in the neighbourhoods of Bukit Duri and Kampung Melayu may be attributed to households. This is in part due to inadequate municipal waste infrastructure and services, which has caused those living along the river to rely upon it for waste disposal. However, there has been little research to understand the prevalence of household waste along the river. Our aim is to develop a methodology that deploys a low cost sensor to identify point-source discharge of solid waste using image classification methods. To demonstrate this we describe the following five-step method: 1) a strip of GoPro images is captured photogrammetrically and processed for dense point cloud generation; 2) depth for each image is generated through a backward projection of the point clouds; 3) a supervised image classification method based on a Random Forest classifier is applied to the view-dependent red, green, blue and depth (RGB-D) data; 4) point discharge locations of solid waste are then mapped by projecting the classified images onto the 3D point clouds; 5) the landscape elements are classified into five types: vegetation, human settlement, soil, water and solid waste. While this work is still ongoing, the initial results have demonstrated that it is possible to perform quantitative studies that may help reveal and estimate the amount of waste present along the river bank.

  11. Generating Accurate 3d Models of Architectural Heritage Structures Using Low-Cost Camera and Open Source Algorithms

    NASA Astrophysics Data System (ADS)

    Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.

    2017-05-01

    These studies were conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were used: OSM Bundler, the VisualSFM software, and the web application ARC3D. Images obtained for each of the investigated objects were processed using these applications, and dense point clouds and textured 3D models were then created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.

  12. Chatter detection in turning using persistent homology

    NASA Astrophysics Data System (ADS)

    Khasawneh, Firas A.; Munch, Elizabeth

    2016-03-01

    This paper describes a new approach for ascertaining the stability of stochastic dynamical systems in their parameter space by examining their time series using topological data analysis (TDA). We illustrate the approach using a nonlinear delayed model that describes the tool oscillations due to self-excited vibrations in turning. Each time series is generated using the Euler-Maruyama method and a corresponding point cloud is obtained using the Takens embedding. The point cloud can then be analyzed using a tool from TDA known as persistent homology. The results of this study show that the described approach can be used for analyzing datasets of delay dynamical systems generated both from numerical simulation and experimental data. The contributions of this paper include presenting for the first time a topological approach for investigating the stability of a class of nonlinear stochastic delay equations, and introducing a new application of TDA to machining processes.
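
    Only the first step described above, turning a scalar time series into a point cloud by Takens delay embedding, is sketched here; the persistent homology of the resulting cloud would then be computed with a TDA package such as ripser, and the delay and dimension values below are illustrative, not those of the paper:

      # Takens delay embedding: a 1D series x becomes a point cloud whose i-th point
      # is (x[i], x[i+delay], ..., x[i+(dim-1)*delay]).
      import numpy as np

      def takens_embedding(x, dim=3, delay=10):
          """Return the delay-embedded point cloud of a 1D series x."""
          n = len(x) - (dim - 1) * delay
          return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

      # usage: a noisy oscillation as a stand-in for the simulated tool vibration
      t = np.linspace(0, 20 * np.pi, 4000)
      x = np.sin(t) + 0.1 * np.random.default_rng(3).normal(size=t.size)
      cloud = takens_embedding(x, dim=3, delay=25)
      print(cloud.shape)        # (n_points, 3) point cloud ready for persistence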

  13. Registration algorithm of point clouds based on multiscale normal features

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua

    2015-01-01

    The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method consists of three main parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes in the magnitude of multiscale curvatures obtained using principal components analysis. Then a feature descriptor is proposed for each key point, consisting of 21 elements based on multiscale normal vectors and curvatures. The correspondences between a pair of point clouds are determined according to the similarity of the descriptors of key points in the source and target point clouds. Correspondences are optimized using a random sample consensus algorithm and clustering techniques. Finally, singular value decomposition is applied to the optimized correspondences to obtain the rigid transformation matrix between the two point clouds. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better noise robustness.
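
    The final step of the pipeline above, recovering the rigid transformation from optimized correspondences via singular value decomposition, is the standard Kabsch/Umeyama solution; a minimal sketch (not the authors' full descriptor-and-RANSAC pipeline) follows:

      # Estimate R, t such that R @ source_i + t ~ target_i for matched point pairs.
      import numpy as np

      def rigid_transform_svd(source, target):
          """Return R (3x3) and t (3,) that best align source onto target."""
          cs, ct = source.mean(axis=0), target.mean(axis=0)
          H = (source - cs).T @ (target - ct)        # cross-covariance
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:                   # guard against reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          t = ct - R @ cs
          return R, t

      # usage: recover a known rotation/translation from matched points
      rng = np.random.default_rng(4)
      src = rng.uniform(-1, 1, (200, 3))
      theta = 0.3
      R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                         [np.sin(theta),  np.cos(theta), 0],
                         [0, 0, 1]])
      tgt = src @ R_true.T + np.array([0.5, -0.2, 1.0])
      R, t = rigid_transform_svd(src, tgt)
      print(np.allclose(R, R_true, atol=1e-6), np.round(t, 3))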

  14. 3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor

    PubMed Central

    Zhang, Haopeng; Wei, Quanmao; Jiang, Zhiguo

    2017-01-01

    In this paper, a novel 3D reconstruction framework is proposed to recover the 3D structural model of a space object from its multi-view images captured by a visible sensor. Given an image sequence, this framework first estimates the relative camera poses and recovers the depths of the surface points by the structure from motion (SFM) method; the patch-based multi-view stereo (PMVS) algorithm is then utilized to generate a dense 3D point cloud. To resolve the wrong matches arising from the symmetric structure and repeated textures of space objects, a new strategy is introduced in which images are added to SFM in imaging order. Meanwhile, a refining process exploiting the structural prior knowledge that most sub-components of artificial space objects are composed of basic geometric shapes is proposed and applied to the recovered point cloud. The proposed reconstruction framework is tested on both simulated and real image datasets. Experimental results illustrate that the recovered point cloud models of space objects are accurate and have a complete coverage of the surface. Moreover, outliers and points with severe noise are effectively filtered out by the refinement, resulting in a distinct improvement in the structure and visualization of the recovered points. PMID:28737675

  15. An evaluation of the effectiveness of low-cost UAVs and structure from motion for geomorphic change detection

    NASA Astrophysics Data System (ADS)

    Cook, Kristen L.

    2017-02-01

    The measurement of topography and of topographic change is essential for the study of many geomorphic processes. In recent years, structure from motion (SfM) techniques applied to photographs taken by camera-equipped unmanned aerial vehicles (UAVs) have become a powerful new tool for the generation of high resolution topography. The variety of available UAV systems continues to increase rapidly, but it is not clear whether increased UAV sophistication translates into improved quality of the calculated topography. To evaluate the lower end of the UAV spectrum, a simple low cost UAV was deployed to generate high resolution topography in the Daan River gorge in western Taiwan, a site with a complicated 3D morphology and a wide range of surface types, making it a challenging site for topographic measurement. Terrestrial lidar surveys were conducted in parallel with UAV surveys in both June and November 2014, enabling an assessment of the reliability of the UAV survey for detecting geomorphic changes in the range of 30 cm to several meters. A further UAV survey was conducted in June 2015 in order to quantify changes resulting from the 2015 spring monsoon. To evaluate the accuracy of the UAV-derived topography, it was compared to terrestrial lidar data collected during the same survey period using the cloud-to-cloud comparison algorithm M3C2. The UAV-generated point clouds match the lidar point clouds well, with RMS errors of 30-40 cm; however, the accuracy of the SfM point clouds depends strongly on the characteristics of the surface being considered, with vegetation, water, and small-scale texture causing inaccuracies. The lidar and SfM data yield similar maps of change from June to November 2014, with the same areas of geomorphic change detected by both methods. The SfM-generated change map for November 2014 to June 2015 indicates that the 2015 spring monsoon caused erosion throughout the gorge and highlights the importance of event-driven erosion in the Daan River. The results suggest that even very basic UAVs can yield data suitable for measuring geomorphic change on the scale of a channel reach.

  16. Pillars of Creation among Destruction: Star Formation in Molecular Clouds near R136 in 30 Doradus

    NASA Astrophysics Data System (ADS)

    Kalari, Venu M.; Rubio, Mónica; Elmegreen, Bruce G.; Guzmán, Viviana V.; Zinnecker, Hans; Herrera, Cinthya N.

    2018-01-01

    We present new sensitive CO(2–1) observations of the 30 Doradus region in the Large Magellanic Cloud. We identify a chain of three newly discovered molecular clouds that we name KN1, KN2, and KN3 lying within 2–14 pc in projection from the young massive cluster R136 in 30 Doradus. Excited H2 2.12 μm emission is spatially coincident with the molecular clouds, but ionized Brγ emission is not. We interpret these observations as the tails of pillar-like structures whose ionized heads are pointing toward R136. Based on infrared photometry, we identify a new generation of stars forming within this structure.

  17. Analysis, Thematic Maps and Data Mining from Point Cloud to Ontology for Software Development

    NASA Astrophysics Data System (ADS)

    Nespeca, R.; De Luca, L.

    2016-06-01

    The primary purpose of a survey for the restoration of Cultural Heritage is the interpretation of the state of preservation of the building. The advantages of remote sensing systems that generate dense point clouds (range-based or image-based) are therefore not limited to the acquired data alone. This paper shows that it is possible to extract very useful diagnostic information using spatial annotation and algorithms already implemented in open-source software. Generally, the drawing of degradation maps is the result of manual work, and thus depends on the subjectivity of the operator. This paper describes a method for extracting and visualizing information obtained by mathematical procedures that are quantitative, repeatable and verifiable. The case study is a part of the east facade of the Eglise collégiale Saint-Maurice, also called Notre Dame des Grâces, in Caromb, southern France. The work was conducted on the matrix of information contained in the point cloud ASCII file. The first result is the extraction of new geometric descriptors. First, we create digital maps of the calculated quantities. Subsequently, we move to semi-quantitative analyses that transform the new data into useful information. We have written algorithms for accurate selection, for the segmentation of the point cloud, and for the automatic calculation of the real surface area and volume. Furthermore, we have created graphs of the spatial distribution of the descriptors. This work shows that by working directly on the data during processing we can transform the point cloud into an enriched database: its use, management and mining become easy, fast and effective for everyone involved in the restoration process.

  18. Outcrop-scale fracture trace identification using surface roughness derived from a high-density point cloud

    NASA Astrophysics Data System (ADS)

    Okyay, U.; Glennie, C. L.; Khan, S.

    2017-12-01

    Owing to the advent of terrestrial laser scanners (TLS), high-density point cloud data have become increasingly available to the geoscience research community. Research groups have started producing their own point clouds for various applications, gradually shifting their emphasis from obtaining the data towards extracting more meaningful information from the point clouds. Extracting fracture properties from three-dimensional data in a (semi-)automated manner has been an active area of research in the geosciences. Several studies have developed processing algorithms for extracting only planar surfaces. In comparison, the (semi-)automated identification of fracture traces at the outcrop scale, which could be used for mapping fracture distribution, has been investigated less frequently. Understanding the spatial distribution and configuration of natural fractures is of particular importance, as they directly influence fluid flow through the host rock. Surface roughness, typically defined as the deviation of a natural surface from a reference datum, has become an important metric in geoscience research, especially with the increasing density and accuracy of point clouds. In the study presented herein, a surface roughness model was employed to identify fracture traces and their distribution on an ophiolite outcrop in Oman. Surface roughness calculations were performed using orthogonal distance regression over various grid intervals. The results demonstrate that surface roughness can identify outcrop-scale fracture traces from which fracture distribution and density maps can be generated. However, considering outcrop conditions and properties and the purpose of the application, the definition of an adequate grid interval for the surface roughness model and the selection of threshold values for the distribution maps are not straightforward and require user intervention and interpretation.
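
    A minimal sketch of a gridded roughness map in the spirit of the description above is given below: points are binned into XY cells, a best-fit plane is estimated per cell by total least squares (equivalent to orthogonal distance regression for a plane), and roughness is taken as the RMS of the orthogonal residuals; the cell size and minimum point count are user choices, as the abstract stresses:

      # Per-cell surface roughness: fit a plane by total least squares (SVD of the
      # centred points) and report the RMS orthogonal distance to that plane.
      import numpy as np

      def roughness_grid(points, cell=1.0, min_pts=10):
          xy_min = points[:, :2].min(axis=0)
          keys = np.floor((points[:, :2] - xy_min) / cell).astype(int)
          rough = {}
          for key in {tuple(k) for k in keys}:
              cell_pts = points[(keys == key).all(axis=1)]
              if len(cell_pts) < min_pts:
                  continue
              centred = cell_pts - cell_pts.mean(axis=0)
              normal = np.linalg.svd(centred)[2][-1]       # smallest singular vector
              rough[key] = float(np.sqrt(((centred @ normal) ** 2).mean()))
          return rough                                     # {(ix, iy): roughness [m]}

      # usage: a tilted synthetic surface with a rougher strip on one side
      rng = np.random.default_rng(5)
      pts = np.c_[rng.uniform(0, 10, (5000, 2)), np.zeros(5000)]
      pts[:, 2] = 0.2 * pts[:, 0] + rng.normal(0, 0.01, 5000)
      strip = pts[:, 0] > 8
      pts[strip, 2] += rng.normal(0, 0.1, strip.sum())
      print(sorted(roughness_grid(pts, cell=1.0).items())[:3])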

  19. Performance Evaluation of sUAS Equipped with Velodyne HDL-32E LiDAR Sensor

    NASA Astrophysics Data System (ADS)

    Jozkow, G.; Wieczorek, P.; Karpina, M.; Walicka, A.; Borkowski, A.

    2017-08-01

    The Velodyne HDL-32E laser scanner is used more and more frequently as the main mapping sensor in small commercial UASs. However, there is still little information about the actual accuracy of point clouds collected with such UASs. This work empirically evaluates the accuracy of the point cloud collected with such a UAS. The accuracy assessment was conducted in four aspects: the impact of the sensors on theoretical point cloud accuracy, the quality of trajectory reconstruction, and the internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by calculating the 3D position error given the errors of the sensors used. The quality of trajectory reconstruction was assessed by comparing position and attitude differences between the forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted for planar surfaces. In addition, the absolute accuracy was also determined by calculating 3D point distances between the LiDAR UAS and reference TLS point clouds. The test data consisted of point clouds collected in two separate flights performed over the same area. The experiments showed that in the tested UAS the trajectory reconstruction, especially the attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of the point clouds collected during both test flights was better than 10 cm; thus the investigated UAS fits the mapping-grade category.

  20. Evaluating Continuous-Time Slam Using a Predefined Trajectory Provided by a Robotic Arm

    NASA Astrophysics Data System (ADS)

    Koch, B.; Leblebici, R.; Martell, A.; Jörissen, S.; Schilling, K.; Nüchter, A.

    2017-09-01

    Recently published approaches to SLAM algorithms process laser sensor measurements and output a map as a point cloud of the environment. Often the actual precision of the map remains unclear, since SLAM algorithms apply local improvements to the resulting map. Unfortunately, it is not trivial to compare the performance of SLAM algorithms objectively, especially without an accurate ground truth. This paper presents a novel benchmarking technique that makes it possible to compare a precise map generated with an accurate ground truth trajectory to a map built from a manipulated trajectory that was distorted by different forms of noise. The accurate ground truth is acquired by mounting a laser scanner on an industrial robotic arm. The robotic arm is moved along a predefined path while the position and orientation of the end-effector tool are monitored. During this process the 2D profile measurements of the laser scanner are recorded in six degrees of freedom and afterwards used to generate a precise point cloud of the test environment. For benchmarking, an offline continuous-time SLAM algorithm is subsequently applied to remove the inserted distortions. Finally, it is shown that the manipulated point cloud can be restored to its previous state and is even slightly improved compared to the original version, since small errors introduced by imprecise assumptions, sensor noise and calibration errors are removed as well.

  1. A shape-based segmentation method for mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen

    2013-07-01

    Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to these geometric features using support vector machines (SVMs). Second, a set of rules is defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and low computational cost, and that it segments pole-like objects particularly well.
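
    The first step, selecting an optimal neighborhood size per point, is sketched below using one common criterion, minimizing the eigenentropy of the local covariance; whether this is the exact criterion used by the authors is an assumption, and the candidate neighborhood sizes are illustrative:

      # For each point, try several neighbourhood sizes k and keep the one whose
      # local covariance eigenvalues have the lowest Shannon (eigen)entropy.
      # Candidate k values must not exceed the number of points in the cloud.
      import numpy as np
      from scipy.spatial import cKDTree

      def optimal_neighbourhood(points, candidate_ks=(10, 20, 40, 80)):
          tree = cKDTree(points)
          best_k = np.empty(len(points), dtype=int)
          for i, p in enumerate(points):
              best, best_entropy = candidate_ks[0], np.inf
              for k in candidate_ks:
                  nbrs = points[tree.query(p, k=k)[1]]
                  w = np.linalg.eigvalsh(np.cov(nbrs.T))
                  w = np.maximum(w, 1e-12) / w.sum()
                  entropy = -np.sum(w * np.log(w))         # eigenentropy
                  if entropy < best_entropy:
                      best, best_entropy = k, entropy
              best_k[i] = best
          return best_k                                    # per-point neighbourhood size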

  2. Raster Vs. Point Cloud LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    El-Ashmawy, N.; Shaker, A.

    2014-09-01

    Airborne Laser Scanning systems with light detection and ranging (LiDAR) technology are among the fastest and most accurate 3D point data acquisition techniques. Generating accurate digital terrain and/or surface models (DTM/DSM) is the main application of collecting LiDAR range data. Recently, LiDAR range and intensity data have also been used for land cover classification. Range and intensity (the strength of the backscattered signals measured by the LiDAR system) are affected by the flying height, the ground elevation, the scanning angle and the physical characteristics of the object surfaces. These effects may lead to an uneven distribution of the point cloud or to gaps that can affect the classification process. Researchers have investigated the conversion of LiDAR range point data to raster images for terrain modelling. Interpolation techniques have been used to achieve the best representation of surfaces and to fill the gaps between the LiDAR footprints. Interpolation methods have also been investigated for generating LiDAR range and intensity image data for land cover classification. In this paper, a different approach is followed to classify the LiDAR data (range and intensity) for land cover mapping. The methodology relies on classifying the point cloud data based on their range and intensity and then converting the classified points into a raster image. The gaps in the data are filled based on the classes of the nearest neighbours. Land cover maps are produced using two approaches: (a) the conventional raster image data based on point interpolation; and (b) the proposed point data classification. A study area covering an urban district in Burnaby, British Columbia, Canada, is selected to compare the results of the two approaches. Five different land cover classes can be distinguished in that area: buildings, roads and parking areas, trees, low vegetation (grass), and bare soil. The results show that an improvement of around 10% in the classification results can be achieved by using the proposed approach.
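
    The rasterization step of the proposed workflow can be sketched roughly as below: classified points are voted into grid cells and empty cells are filled with the class of the nearest non-empty cell; the cell size, the majority vote and the integer class labels are illustrative assumptions, not details taken from the paper:

      # Rasterise classified points: majority class per cell, then fill empty cells
      # with the class of the nearest filled cell. Labels are assumed to be
      # non-negative integers (e.g. 0..4 for the five land cover classes).
      import numpy as np
      from scipy.spatial import cKDTree

      def classified_points_to_raster(xy, labels, cell=1.0):
          xy_min = xy.min(axis=0)
          ij = np.floor((xy - xy_min) / cell).astype(int)
          shape = ij.max(axis=0) + 1
          raster = np.full(shape, -1, dtype=int)           # -1 marks empty cells
          for cell_idx in {tuple(c) for c in ij}:
              m = (ij == cell_idx).all(axis=1)
              raster[cell_idx] = np.bincount(labels[m]).argmax()   # majority vote
          filled = np.argwhere(raster >= 0)
          empty = np.argwhere(raster < 0)
          if len(empty):                                   # nearest-neighbour gap fill
              _, nearest = cKDTree(filled).query(empty, k=1)
              raster[tuple(empty.T)] = raster[tuple(filled[nearest].T)]
          return raster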

  3. Impact-generated magnetic fields on the Moon : a magnetohydrodynamic numerical investigation

    NASA Astrophysics Data System (ADS)

    Oran, Rona; Shprits, Yuri; Weiss, Benjamin; Gombosi, Tamas

    2015-04-01

    Natural remanent magnetization has been identified in lunar rocks, the lunar crust, and a diversity of meteorites. Much of this magnetization is thought to have been produced by cooling in a core dynamo magnetic field. However, the identification of lunar crustal magnetic anomalies at the antipodes of four of the five youngest large (>600 km diameter) impact basins has motivated the alternative hypothesis that the lunar crust could have been magnetized by the impacts. In particular, it has been proposed that highly conducting ionized vapor produced by a basin-forming impact interacts with the ambient solar wind plasma surrounding the Moon to amplify the ambient solar wind magnetic field or any core dynamo field. In this picture, as the ionized vapor cloud expands around the Moon, it pushes and compresses the solar wind plasma into a small region at the antipodal point. The conservation of magnetic flux then leads to an enhanced magnetic field in the compressed plasma. This field can then be recorded as shock remanent magnetization by crustal materials at the antipodal point following the impact of converging basin ejecta. A key requirement for the impact-generated fields hypothesis is that the compressed field be sufficiently strong to explain the lunar paleointensities (at least tens of μT) and maintained at the antipodal point for a sufficiently long time (several hours) for the ejecta to arrive and impact the surface. Previous simulations of the expansion of the vapor cloud found that the enhanced field would be strong enough (perhaps reaching hundreds of μT) and would remain at the antipodal site for a sufficiently long time (>1 day) for the arrival of incoming ejecta. However, these studies did not include an explicit calculation of the interaction of the magnetized solar wind plasma with the vapor cloud. Rather, the cloud evolution under the lunar gravity was simulated in the purely hydrodynamic regime. The vapor cloud structure at certain times was used to derive a simplified picture of the effects on an ambient magnetized plasma using general magnetohydrodynamic (MHD) arguments. The solar wind drag acting on the cloud, as well as MHD effects such as field-line stretching and magnetic reconnection, were not taken into account. With the advances made in computational MHD models in recent years, we can now revisit these earlier important models. Our goal is to perform the first MHD simulations of an impact-generated vapor cloud expanding in the solar wind around the Moon, using BATSRUS, a 3D highly parallelized versatile MHD code developed at the University of Michigan, in order to self-consistently test the previous estimates of the strength and duration of the magnetic field enhancement at the antipodal points. We will consider different MHD processes, such as: 1) the finite resistivity of the lunar mantle, 2) magnetic diffusion between the solar wind and the initially non-magnetized cloud, 3) magnetic reconnection at the antipode, 4) viscous drag and the transport of magnetic flux due to solar wind motion, and 5) MHD instabilities. This will allow us to systematically examine whether impact-generated fields can indeed be responsible for the formation of crustal field enhancements on the Moon.

  4. Development of modulated optical transmission system to determine the cloud and freezing points in biofuels.

    PubMed

    Jaramillo-Ochoa, Liliana; Ramirez-Gutierrez, Cristian F; Sánchez-Moguel, Alonso; Acosta-Osorio, Andrés; Rodriguez-Garcia, Mario E

    2015-01-01

    This work is focused on the development of a modulated optical transmission system with temperature control to determine thermal properties of biodiesels, such as the cloud and freezing points. The system is able to determine these properties in real time without relying on the operator skill required by the American Society for Testing and Materials (ASTM) standard methods. Thanks to the modulation of the incident laser, the noise of the signal is reduced and two information channels are generated: amplitude and phase. Lasers with different wavelengths can be used in this system, but the sample under study must have optical absorption at the wavelength of the laser.

  5. Instruments and Methodologies for the Underwater Tridimensional Digitization and Data Musealization

    NASA Astrophysics Data System (ADS)

    Repola, L.; Memmolo, R.; Signoretti, D.

    2015-04-01

    In the research started within the SINAPSIS project of the Università degli Studi Suor Orsola Benincasa, an underwater stereoscopic scanning system aimed at surveying submerged archaeological sites, and integrable with standard systems for geomorphological detection of the coast, has been developed. The project involves the construction of hardware consisting of an aluminum frame supporting a pair of GoPro Hero Black Edition cameras, and of software for the production of point clouds and the initial processing of the data. The software has features for calibrating the stereoscopic vision system, reducing the noise and distortion of underwater captured images, searching for corresponding points in stereoscopic images using stereo-matching algorithms (dense and sparse), and generating and filtering point clouds. Mastery of the methods for efficient data acquisition was achieved only after various calibration and survey tests carried out during the excavations envisaged in the project. The current development of the system has allowed the generation of portions of digital models of real submerged scenes. A semi-automatic procedure for the global registration of the partial models is under development as a useful aid for the study and musealization of the sites.

  6. LSAH: a fast and efficient local surface feature for point cloud registration

    NASA Astrophysics Data System (ADS)

    Lu, Rongrong; Zhu, Feng; Wu, Qingxiao; Kong, Yanzi

    2018-04-01

    Point cloud registration is a fundamental task in high-level three-dimensional applications. Noise, uneven point density and varying point cloud resolutions are the three main challenges for point cloud registration. In this paper, we design a robust and compact local surface descriptor called the Local Surface Angles Histogram (LSAH) and propose an effective coarse-to-fine algorithm for point cloud registration. The LSAH descriptor is formed by concatenating five normalized sub-histograms into one histogram. The five sub-histograms are each created by accumulating a different type of angle from a local surface patch. The experimental results show that our LSAH is more robust to uneven point density and varying point cloud resolutions than four state-of-the-art local descriptors in terms of feature matching. Moreover, we tested our LSAH-based coarse-to-fine algorithm for point cloud registration. The experimental results demonstrate that our algorithm is robust and efficient as well.
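
    An LSAH-style descriptor can be sketched as a concatenation of normalized angle histograms computed in a point's neighborhood; since the abstract does not spell out the five angle types, the two used below (neighbour normal vs. centre normal, and centre normal vs. offset direction) are placeholders rather than the published definition:

      # Concatenate normalised histograms of angles measured inside a local
      # neighbourhood to form a compact per-point descriptor.
      import numpy as np
      from scipy.spatial import cKDTree

      def local_normals(points, k=15):
          idx = cKDTree(points).query(points, k=k)[1]
          return np.array([np.linalg.svd(points[n] - points[n].mean(0))[2][-1]
                           for n in idx])

      def lsah_like(points, normals, index, radius=0.5, bins=10):
          nbrs = cKDTree(points).query_ball_point(points[index], r=radius)
          nbrs = [j for j in nbrs if j != index]
          n0, p0 = normals[index], points[index]
          offsets = points[nbrs] - p0
          offsets /= np.linalg.norm(offsets, axis=1, keepdims=True)
          a1 = np.arccos(np.clip(np.abs(normals[nbrs] @ n0), 0, 1))   # normal vs. normal
          a2 = np.arccos(np.clip(np.abs(offsets @ n0), 0, 1))         # normal vs. offset
          hists = [np.histogram(a, bins=bins, range=(0, np.pi / 2))[0] for a in (a1, a2)]
          return np.concatenate([h / max(h.sum(), 1) for h in hists]) # 2*bins vector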

  7. a Super Voxel-Based Riemannian Graph for Multi Scale Segmentation of LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Li, Minglei

    2018-04-01

    Automatically segmenting LiDAR points into independent partitions has become a topic of great importance in photogrammetry, remote sensing and computer vision. In this paper, we cast the problem of point cloud segmentation as a graph optimization problem by constructing a Riemannian graph. The scale space of the observed scene is explored by an octree-based over-segmentation with different depths. The over-segmentation produces many super voxels which restrict the structure of the scene and are used as nodes of the graph. The Kruskal coordinates are used to compute edge weights that are proportional to the geodesic distance between nodes. We then compute the edge-weight matrix, in which the elements reflect the sectional curvatures associated with the geodesic paths between super voxel nodes on the scene surface. The final segmentation results are generated by clustering similar super voxels and cutting off the weak edges in the graph. The performance of this method was evaluated on LiDAR point clouds of both indoor and outdoor scenes. Additionally, extensive comparisons to state-of-the-art techniques show that our algorithm outperforms them on many metrics.

  8. 3-D Deformation Field Of The 2010 El Mayor-Cucapah (Mexico) Earthquake From Matching Before To After Aerial Lidar Point Clouds

    NASA Astrophysics Data System (ADS)

    Hinojosa-Corona, A.; Nissen, E.; Arrowsmith, R.; Krishnan, A. K.; Saripalli, S.; Oskin, M. E.; Arregui, S. M.; Limon, J. F.

    2012-12-01

    The Mw 7.2 El Mayor-Cucapah earthquake (EMCE) of 4 April 2010 generated a ~110 km long, NW-SE trending rupture with normal and right-lateral slip on the order of 2-3 m in the Sierra Cucapah, the northern half of the rupture, where the surface rupture has its most outstanding expression. Vertical and horizontal surface displacements produced by the EMCE have been addressed separately by other authors with a variety of aerial and satellite remote sensing techniques. Slip variation along the fault and post-seismic scarp erosion and diffusion have been estimated in other studies using terrestrial LiDAR (TLS) on segments of the rupture. To complement these studies, we computed the 3D deformation field by comparing pre- to post-event point clouds from aerial LiDAR surveys. The pre-event LiDAR, with its lower point density (0.013-0.033 pts/m²), required filtering and post-processing before comparison with the denser (9-18 pts/m²), more accurate post-event dataset. The 3-dimensional surface displacement field was determined using an adaptation of the Iterative Closest Point (ICP) algorithm implemented in the open-source Point Cloud Library (PCL). The LiDAR datasets are first split into a grid of windows, and for each one ICP iteratively converges on the rigid-body transformation (comprising a translation and a rotation) that best aligns the pre- to post-event points. Testing on synthetic datasets perturbed with displacements of known magnitude showed that windows with dimensions of 100-200 m gave the best results for datasets with these densities. Here we present the deformation field, with detailed displacements in segments of the surface rupture where its expression was recognized by ICP from the point cloud matching, mainly in the scarcely vegetated Sierra Cucapah, with the Borrego and Paso Superior fault segments the most prominent; there we are able to compare our results with values measured in the field and with TLS results reported in other works. (Figure: simulated displacement field for a 2 m right-lateral, normal (east block down) slip along the Borrego fault in the Sierra Cucapah, applied to the pre-event point cloud; shaded DEM from the post-event point cloud as backdrop.)

  9. Space Weather Connections to Clouds and Climate

    NASA Astrophysics Data System (ADS)

    Tinsley, B. A.

    2004-12-01

    There is now a considerable amount of observational data and theoretical work pointing to a link between space weather and atmospheric electricity, and then between atmospheric electricity and cloud cover and precipitation, which ultimately affect climate and the biosphere. Studies so far have been largely confined to the Earth, but may be applicable to all planets with clouds in their atmospheres. The current density Jz, that is, the return current flowing downward through clouds in the global circuit, is modulated by the galactic cosmic ray flux; by solar energetic particles; by the dawn-dusk polar cap potential difference; and by the precipitation of relativistic electrons from the radiation belts. The flow of Jz through clouds generates unipolar space charge, which is positive at cloud tops and negative at cloud base. This charge attaches to aerosol particles and affects their interaction with other particles and droplets. Ultrafine aerosol particles are formed around ions and, through the space charge at the bases and tops of layer clouds, are preserved from scavenging on background aerosols and preserved for growth by vapor deposition. This electro-preservation of both ultrafines and existing CCN leads to increases in CCN concentration, increases in cloud cover, and reductions in both droplet size and precipitation via the `indirect aerosol effect'. For cold clouds and larger aerosol particles that act as ice-forming nuclei, the rate of scavenging of the IFN by large supercooled droplets varies with space charge. Changes in space weather affect both ion production and Jz in planetary atmospheres. In addition, changes in cosmic ray flux affect conductivity within thunderclouds and may affect the output of the thundercloud generators in the global circuit. Thus all four processes, (a) ion-induced nucleation, (b) electro-preservation of aerosols leading to increases in CCN concentration and the indirect aerosol effect, (c) contact ice nucleation affecting the production of ice, (d) cosmic ray effects on the generators of the global circuit, are potential links between space weather and life on planets.

  10. Why is the Magellanic Stream so Turbulent? - A Simulational Study

    NASA Astrophysics Data System (ADS)

    Williams, Elliott; Shelton, Robin L.

    2018-06-01

    As the Large and Small Magellanic Clouds travel through the Milky Way (MW) halo, gas is tidally and ram-pressure stripped from them, forming the Leading Arm (LA) and Magellanic Stream (MS). The evolution of the LA and MS is of interest to astronomers because there is evidence that the diffuse gas that has been stripped off is able to fall onto the galactic disk and cool enough to fuel star formation in the MW. For et al. (2014) published a catalog of 251 high velocity clouds (HVCs) in the MS, many of which have head-tail morphologies, suggesting interaction with the Milky Way’s halo or other gas in the MS. For et al. noticed that the pointing directions of the HVCs are random, which they interpreted as an indication of strong turbulence. They suggested the shock cascade scenario as a contributing process, in which ablated cloud material generates turbulence (and H-alpha emission). We take a closer look at this process via simulations. We ran numerical simulations of clouds in the MS using the University of Chicago’s FLASH software. We simulated cases that had two clouds, where one trailed behind the other, and cases that had one cloud, in order to examine the effects of drafting on cloud dynamics and velocity dispersion. Initial cloud temperatures ranged from 100 K to 20,000 K. We have created velocity dispersion maps from the FLASH simulation data to visualize turbulence. We compare these generated maps with 21 cm observations (most recently Westmeier 2017) in order to search for signatures similar to the small-scale turbulence seen in the simulations. We find that if the clouds are initially near each other, then drafting allows the trailing cloud to catch the leading cloud and mix with it. For greater separations, Kelvin-Helmholtz instabilities disrupt the clouds enough before impact that drafting has a minimal role. Our velocity dispersion maps of the warmer clouds closely match the values published in For et al. (2014), although thermal broadening accounts for a large fraction of the velocity dispersion found in the generated maps.

  11. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.

    PubMed

    Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun

    2016-06-17

    Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology for generating them. Road markings and road edges are necessary information for creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, the isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted using the Height Difference (HD) between the trajectory data and the road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from the MLS data and road markings are well extracted from the road points by the proposed method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road marking extraction method proposed in this paper provides a promising alternative for offline road marking extraction from MLS data.
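    One step of the pipeline above, smoothing scan-line intensities with a moving median and flagging bright road-marking candidates, can be sketched as follows. This is a hedged illustration only: the window size, the robust threshold and the synthetic data are assumptions, not the paper's EDEC parameters.

    ```python
    # Sketch: median-smooth intensities along one scan line, then flag bright candidates.
    import numpy as np

    def moving_median(values, window=9):
        """Median filter along a scan line (window must be odd)."""
        half = window // 2
        padded = np.pad(values, half, mode="edge")
        return np.array([np.median(padded[i:i + window]) for i in range(len(values))])

    def marking_candidates(intensity, k=2.0):
        """Points whose smoothed intensity rises well above the local road background."""
        smooth = moving_median(intensity)
        background = np.median(smooth)
        spread = np.median(np.abs(smooth - background)) + 1e-6   # robust scale (MAD)
        return smooth > background + k * spread

    # Example on synthetic data: a bright lane marking in the middle of a scan line.
    line = np.full(200, 10.0)
    line[90:110] = 40.0
    print(np.where(marking_candidates(line))[0][:5])   # indices near 90
    ```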

  12. Automatic Registration of Terrestrial Laser Scanner Point Clouds Using Natural Planar Surfaces

    NASA Astrophysics Data System (ADS)

    Theiler, P. W.; Schindler, K.

    2012-07-01

    Terrestrial laser scanners have become a standard piece of surveying equipment, used in diverse fields like geomatics, manufacturing and medicine. However, the processing of today's large point clouds is time-consuming, cumbersome and not automated enough. A basic step of post-processing is the registration of scans from different viewpoints. At present this is still done using artificial targets or tie points, mostly by manual clicking. The aim of this registration step is a coarse alignment, which can then be improved with existing algorithms for fine registration. The focus of this paper is to provide such a coarse registration in a fully automatic fashion, without placing any target objects in the scene. The basic idea is to use virtual tie points generated by intersecting planar surfaces in the scene. Such planes are detected in the data with RANSAC and optimally fitted using least squares estimation. Due to the huge number of recorded points, planes can be determined very accurately, resulting in well-defined tie points. Given two sets of potential tie points recovered in two different scans, registration is performed by searching for the assignment which preserves the geometric configuration of the largest possible subset of all tie points. Since exhaustive search over all possible assignments is intractable even for moderate numbers of points, the search is guided by matching individual pairs of tie points with the help of a novel descriptor based on the properties of a point's parent planes. Experiments show that the proposed method is able to successfully coarse-register TLS point clouds without the need for artificial targets.
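    The "virtual tie point" construction reduces to elementary linear algebra: three planes n·x = d with linearly independent normals meet in a single point. A minimal sketch is given below; the plane parameters are assumed to come from the RANSAC-plus-least-squares fitting described in the record, and the example geometry is purely illustrative.

    ```python
    # Sketch: intersect three fitted planes n·x = d to obtain one virtual tie point.
    import numpy as np

    def plane_intersection(normals, offsets):
        """Solve [n1; n2; n3] x = [d1; d2; d3] for the common point x."""
        N = np.asarray(normals, dtype=float)      # shape (3, 3)
        d = np.asarray(offsets, dtype=float)      # shape (3,)
        if abs(np.linalg.det(N)) < 1e-9:
            raise ValueError("planes are (nearly) parallel; no unique tie point")
        return np.linalg.solve(N, d)

    # Example: floor z = 0 and two walls x = 2, y = 5 meet at the corner (2, 5, 0).
    print(plane_intersection([[0, 0, 1], [1, 0, 0], [0, 1, 0]], [0, 2, 5]))
    ```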

  13. Developing Present-day Proxy Cases Based on NARVAL Data for Investigating Low Level Cloud Responses to Future Climate Change.

    NASA Astrophysics Data System (ADS)

    Reilly, Stephanie

    2017-04-01

    The energy budget of the entire global climate is significantly influenced by the presence of boundary layer clouds. The main aim of the High Definition Clouds and Precipitation for Advancing Climate Prediction (HD(CP)2) project is to improve climate model predictions by means of process studies of clouds and precipitation. This study makes use of observed elevated moisture layers as a proxy of future changes in tropospheric humidity. The associated impact on radiative transfer triggers fast responses in boundary layer clouds, providing a framework for investigating this phenomenon. The investigation will be carried out using data gathered during the Next-generation Aircraft Remote-sensing for VALidation (NARVAL) South campaigns. Observational data will be combined with ECMWF reanalysis data to derive the large scale forcings for the Large Eddy Simulations (LES). Simulations will be generated for a range of elevated moisture layers, spanning a multi-dimensional phase space in depth, amplitude, elevation, and cloudiness. The NARVAL locations will function as anchor-points. The results of the large eddy simulations and the observations will be studied and compared in an attempt to determine how simulated boundary layer clouds react to changes in radiative transfer from the free troposphere. Preliminary LES results will be presented and discussed.

  14. 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach

    PubMed Central

    Vlaminck, Michiel; Luong, Hiep; Goeman, Werner; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel approach to obtain accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner, as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind, and hence it does not make any assumption about the scene or about the sensor set-up. The main novelty of this work is that the proposed LiDAR mapping approach deals explicitly with the inhomogeneous density of point clouds produced by LiDAR scanners. To this end, we keep track of a global 3D map of the environment, which is continuously improved and refined by means of a surface reconstruction technique. Moreover, we perform surface analysis on consecutively generated point clouds in order to ensure a precise alignment with the global 3D map. In order to cope with drift, the system incorporates loop closure by determining the pose error and propagating it back in the pose graph. Our algorithm was exhaustively tested on data captured at a conference building, a university campus and an industrial site of a chemical company. Experiments demonstrate that it is capable of generating highly accurate 3D maps in very challenging environments. We can state that the average distance of corresponding point pairs between the ground truth and the estimated point cloud approximates one centimeter for an area covering approximately 4000 m2. To prove the genericity of the system, it was tested on the well-known KITTI vision benchmark. The results show that our approach competes with state-of-the-art methods without making any additional assumptions. PMID:27854315
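    The accuracy figure quoted above (average distance of corresponding point pairs) can be approximated, under the simplifying assumption that correspondences are nearest neighbours, with the short sketch below. This is a generic cloud-to-cloud metric, not the authors' exact evaluation protocol.

    ```python
    # Sketch: mean nearest-neighbour distance between an estimated cloud and ground truth.
    import numpy as np
    from scipy.spatial import cKDTree

    def mean_cloud_to_cloud_distance(estimated, ground_truth):
        dists, _ = cKDTree(ground_truth).query(estimated)
        return float(dists.mean())

    rng = np.random.default_rng(1)
    gt = rng.random((5000, 3))
    est = gt + rng.normal(scale=0.01, size=gt.shape)   # roughly 1 cm noise
    print(round(mean_cloud_to_cloud_distance(est, gt), 4))
    ```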

  15. TLS for generating multi-LOD of 3D building model

    NASA Astrophysics Data System (ADS)

    Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.

    2014-02-01

    The popularity of Terrestrial Laser Scanners (TLS) for capturing three-dimensional (3D) objects has led to their wide use in various applications. Developments in 3D modelling have also led people to visualize the environment in 3D. Visualization of objects in a city environment in 3D can be useful for many applications. However, different applications require different kinds of 3D models. Since a building is an important object, CityGML has defined a standard for 3D building models at four different levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and the modelling process of the resulting point cloud are explored. TLS will be used to capture all the building details needed to generate multiple LODs. In previous works, this task usually involves the integration of several sensors. In this research, however, the point cloud from TLS will be processed to generate the LOD3 model. LOD2 and LOD1 will then be generalized from the resulting LOD3 model. The result of this research is a guided workflow for generating multi-LOD 3D building models starting from LOD3 using TLS. Lastly, the visualization of the multi-LOD model will also be shown.

  16. PROCAMS - A second generation multispectral-multitemporal data processing system for agricultural mensuration

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Nalepka, R. F.

    1976-01-01

    PROCAMS (Prototype Classification and Mensuration System) has been designed for the classification and mensuration of agricultural crops (specifically small grains including wheat, rye, oats, and barley) through the use of data provided by Landsat. The system includes signature extension as a major feature and incorporates multitemporal as well as early season unitemporal approaches for using multiple training sites. Also addressed are partial cloud cover and cloud shadows, bad data points and lines, as well as changing sun angle and atmospheric state variations.

  17. Foliage penetration by using 4-D point cloud data

    NASA Astrophysics Data System (ADS)

    Méndez Rodríguez, Javier; Sánchez-Reyes, Pedro J.; Cruz-Rivera, Sol M.

    2012-06-01

    Real-time awareness and rapid target detection are critical for the success of military missions. New technologies capable of detecting targets concealed in forest areas are needed in order to track and identify possible threats. Currently, LAser Detection And Ranging (LADAR) systems are capable of detecting obscured targets; however, tracking capabilities are severely limited. Now, a new LADAR-derived technology is under development to generate 4-D datasets (3-D video in a point cloud format). As such, there is a new need for algorithms that are able to process data in real time. We propose an algorithm capable of removing vegetation and other objects that may obfuscate concealed targets in a real 3-D environment. The algorithm is based on wavelets and can be used as a pre-processing step in a target recognition algorithm. Applications of the algorithm in a real-time 3-D system could help make pilots aware of high-risk hidden targets such as tanks and weapons, among others. We use simulated 4-D point cloud data to demonstrate the capabilities of our algorithm.

  18. Land Survey from Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Peterman, V.; Mesarič, M.

    2012-07-01

    In this paper we present how we use a quadrocopter unmanned aerial vehicle with a camera attached to it to perform low-altitude photogrammetric land surveys. We use the quadrocopter to take highly overlapping photos of the area of interest. A "structure from motion" algorithm is applied to recover the camera orientation parameters and to generate a sparse point cloud representation of the objects in the photos. Then a patch-based multi-view stereo algorithm is applied to generate a dense point cloud. Ground control points are used to georeference the data. Further processing is applied to generate digital orthophoto maps, digital surface models and digital terrain models, and to assess volumes of various types of material. Practical examples of land survey from a UAV are presented in the paper. We explain how we used our system to monitor the reconstruction of a commercial building, and how our UAV was used to assess the volume of the coal supply for the Ljubljana heating plant. A further example shows the usefulness of low-altitude photogrammetry for the documentation of archaeological excavations. In the final example we present how we used our UAV to prepare an underlay map for natural gas pipeline route planning. In the final analysis we conclude that low-altitude photogrammetry can help bridge the gap between laser scanning and classic tachymetric survey, since it offers advantages of both techniques.
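    Volume assessment of the kind mentioned above (e.g. the coal stockpile) usually reduces to summing gridded surface heights above a reference base. The sketch below illustrates that idea under simple assumptions (a regular-grid DSM and a flat base level); it is not the authors' workflow.

    ```python
    # Sketch: stockpile volume from a gridded digital surface model (DSM).
    import numpy as np

    def stockpile_volume(dsm, base_level, cell_size):
        """Volume (m^3) of material above `base_level` on a regular-grid DSM (heights in m)."""
        heights = np.clip(dsm - base_level, 0.0, None)   # ignore cells below the base
        return float(heights.sum() * cell_size ** 2)

    # Example: a 100 x 100 grid of 0.5 m cells with a 3 m high mound in the middle.
    dsm = np.zeros((100, 100))
    dsm[40:60, 40:60] = 3.0
    print(stockpile_volume(dsm, base_level=0.0, cell_size=0.5))  # 20*20*0.25*3 = 300 m^3
    ```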

  19. 3DNOW: Image-Based 3d Reconstruction and Modeling via Web

    NASA Astrophysics Data System (ADS)

    Tefera, Y.; Poiesi, F.; Morabito, D.; Remondino, F.; Nocerino, E.; Chippendale, P.

    2018-05-01

    This paper presents a web-based 3D imaging pipeline, namely 3Dnow, that can be used by anyone without the need of installing additional software other than a browser. By uploading a set of images through the web interface, 3Dnow can generate sparse and dense point clouds as well as mesh models. 3D reconstructed models can be downloaded with standard formats or previewed directly on the web browser through an embedded visualisation interface. In addition to reconstructing objects, 3Dnow offers the possibility to evaluate and georeference point clouds. Reconstruction statistics, such as minimum, maximum and average intersection angles, point redundancy and density can also be accessed. The paper describes all features available in the web service and provides an analysis of the computational performance using servers with different GPU configurations.

  20. Novel Methods for Measuring LiDAR

    NASA Astrophysics Data System (ADS)

    Ayrey, E.; Hayes, D. J.; Fraver, S.; Weiskittel, A.; Cook, B.; Kershaw, J.

    2017-12-01

    The estimation of forest biometrics from airborne LiDAR data has become invaluable for quantifying forest carbon stocks, forest and wildlife ecology research, and sustainable forest management. The area-based approach is arguably the most common method for developing enhanced forest inventories from LiDAR. It involves taking a series of vertical height measurements of the point cloud, then using those measurements together with field-measured data to develop predictive models. Unfortunately, there is considerable variation in methodology for collecting point cloud data, which can vary in pulse density, seasonality, canopy penetrability, and instrument specifications. Today there exists a wealth of public LiDAR data; however, the variation in acquisition parameters makes forest inventory prediction by traditional means unreliable across the different datasets. The goal of this project is to test a series of novel point cloud measurements developed along a conceptual spectrum of human interpretability, and then to use the best measurements to develop regional enhanced forest inventories on Northern New England's and Atlantic Canada's public LiDAR. As in a field-based inventory, individual tree crowns are being segmented, and summary statistics are being used as covariates. Established competition and structural indices are being generated using each tree's relationship to its neighbours, whilst existing allometric equations are being used to estimate the diameter and biomass of each tree measured in the LiDAR. Novel metrics measuring light interception, clusteredness, and rugosity are also being measured as predictors. On the other end of the human interpretability spectrum, convolutional neural networks are being employed to directly measure both the canopy height model and the point clouds by scanning each using two- and three-dimensional kernels trained to identify features useful for predicting biological attributes such as biomass. Predictive models will be trained and tested against one another using 28 different sites and over 42 different LiDAR acquisitions. The optimal model will then be used to generate regional wall-to-wall forest inventories at a 10 m resolution.
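    The area-based approach mentioned above boils down to summarising the vertical distribution of returns over a plot into predictor variables. The following sketch computes a few common height metrics; the metric names and the 2 m canopy height break are widespread conventions, not necessarily this study's choices.

    ```python
    # Sketch: area-based height metrics for one plot from normalised return heights (m).
    import numpy as np

    def area_based_metrics(heights):
        h = np.asarray(heights, dtype=float)
        canopy = h[h > 2.0]                       # returns above a 2 m height break
        return {
            "h_mean": float(h.mean()),
            "h_p95": float(np.percentile(h, 95)),
            "h_std": float(h.std()),
            "canopy_cover": float(canopy.size / h.size),
        }

    rng = np.random.default_rng(2)
    plot_heights = np.concatenate([rng.uniform(0, 1, 300), rng.normal(18, 3, 700)])
    print(area_based_metrics(plot_heights))
    ```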

  1. Assessment of Photogrammetry Structure-from-Motion Compared to Terrestrial LiDAR Scanning for Generating Digital Elevation Models. Application to the Austre Lovéenbreen Polar Glacier Basin, Spitsbergen 79°N

    NASA Astrophysics Data System (ADS)

    Tolle, F.; Friedt, J. M.; Bernard, É.; Prokop, A.; Griselin, M.

    2014-12-01

    Digital Elevation Models (DEMs) are a key tool for analyzing spatially dependent processes, including snow accumulation on slopes or glacier mass balance. Acquiring DEMs within short time intervals provides new opportunities to evaluate such phenomena at daily to seasonal rates. DEMs are usually generated from satellite imagery, aerial photography, airborne and ground-based LiDAR, and GPS surveys. In addition to these classical methods, we consider another alternative for periodic DEM acquisition with lower logistics requirements: digital processing of ground-based, oblique-view digital photography. Such a dataset, acquired using commercial off-the-shelf cameras, provides the source for generating elevation models using Structure from Motion (SfM) algorithms. Sets of pictures of the same structure, taken from various points of view, are acquired. Selected features are identified on the images and allow for the reconstruction of the three-dimensional (3D) point cloud after computing the camera positions and optical properties. This point cloud, generated in an arbitrary coordinate system, is converted to an absolute coordinate system either by adding constraints from Ground Control Points (GCPs) or by including the GPS positions of the cameras in the processing chain. We selected the open-source digital signal processing library provided by the French Geographic Institute (IGN), called Micmac, for its fine processing granularity and the ability to assess the quality of each processing step. Although operating in snow-covered environments appears challenging due to the lack of relevant features, we observed that enough reference points could be identified for 3D reconstruction. While the harsh climate of the Arctic region considered (Ny Alesund area, 79°N) is not a problem for SfM, the low-lying spring sun and the cast shadows are a limitation because of the lack of color dynamics in the digital cameras we used. A detailed understanding of the processing steps is mandatory during the image acquisition phase: compliance with acquisition rules that reduce digital processing errors helps minimize the uncertainty on the absolute position of the point cloud in its coordinate system. 3D models from SfM are compared with terrestrial LiDAR acquisitions for resolution assessment.

  2. Fast rockfall hazard assessment along a road section using the new LYNX Mobile Mapper Lidar

    NASA Astrophysics Data System (ADS)

    Dario, Carrea; Celine, Longchamp; Michel, Jaboyedoff; Marc, Choffet; Marc-Henri, Derron; Clement, Michoud; Andrea, Pedrazzini; Dario, Conforti; Michael, Leslar; William, Tompkinson

    2010-05-01

    Terrestrial laser scanning (TLS) is an active remote sensing technique providing high-resolution point clouds of the topography. The high-resolution digital elevation models (HRDEM) derived from these point clouds are an important tool for the stability analysis of slopes. The LYNX Mobile Mapper is a new-generation TLS developed by Optech. Its particularity is to be mounted on a vehicle, providing a 360° high-density point cloud at a 200 kHz measurement rate in a very short acquisition time. It is composed of two sensors, improving the resolution and reducing laser shadowing. The spatial resolution is better than 10 cm at 10 m range at a velocity of 50 km/h, and the reflectivity of the signal is around 20% at a distance of 200 m. The LiDAR is also equipped with a DGPS and an inertial measurement unit (IMU), which give the real-time position and directly georeference the point cloud. Thanks to its ability to provide a continuous dataset over an extended area along a road, this TLS system is useful for rockfall hazard assessment. In addition, this new scanner considerably decreases the time spent in the field, and post-processing is reduced thanks to the directly georeferenced data. Nevertheless, its application is limited to areas close to the road. The LYNX has been tested near Pontarlier (France) along road sections affected by rockfall. Regarding the tectonic context, the studied area is located in the Folded Jura, mainly composed of limestone. The result is a very detailed point cloud with a point spacing of 4 cm. The LYNX provides detailed topography on which a structural analysis has been carried out using COLTOP-3D, allowing a full structural description along the road to be obtained. In addition, kinematic tests coupled with probabilistic analysis give a susceptibility map of the road cuts and natural cliffs above the road. Comparisons with field surveys confirm the LiDAR approach.

  3. Looking for Off-Fault Deformation and Measuring Strain Accumulation During the Past 70 years on a Portion of the Locked San Andreas Fault

    NASA Astrophysics Data System (ADS)

    Vadman, M.; Bemis, S. P.

    2017-12-01

    Even at high tectonic rates, detection of possible off-fault plastic/aseismic deformation and variability in far-field strain accumulation requires high spatial resolution data and likely decades of measurements. Due to the influence that variability in interseismic deformation could have on the timing, size, and location of future earthquakes and the calculation of modern geodetic estimates of strain, we attempt to use historical aerial photographs to constrain deformation through time across a locked fault. Modern photo-based 3D reconstruction techniques facilitate the creation of dense point clouds from historical aerial photograph collections. We use these tools to generate a time series of high-resolution point clouds that span 10-20 km across the Carrizo Plain segment of the San Andreas fault. We chose this location due to the high tectonic rates along the San Andreas fault and lack of vegetation, which may obscure tectonic signals. We use ground control points collected with differential GPS to establish scale and georeference the aerial photograph-derived point clouds. With a locked fault assumption, point clouds can be co-registered (to one another and/or the 1.7 km wide B4 airborne lidar dataset) along the fault trace to calculate relative displacements away from the fault. We use CloudCompare to compute 3D surface displacements, which reflect the interseismic strain accumulation that occurred in the time interval between photo collections. As expected, we do not observe clear surface displacements along the primary fault trace in our comparisons of the B4 lidar data against the aerial photograph-derived point clouds. However, there may be small scale variations within the lidar swath area that represent near-fault plastic deformation. With large-scale historical photographs available for the Carrizo Plain extending back to at least the 1940s, we can potentially sample nearly half the interseismic period since the last major earthquake on this portion of this fault (1857). Where sufficient aerial photograph coverage is available, this approach has the potential to illuminate complex fault zone processes for this and other major strike-slip faults.

  4. Applicability Analysis of Cloth Simulation Filtering Algorithm for Mobile LIDAR Point Cloud

    NASA Astrophysics Data System (ADS)

    Cai, S.; Zhang, W.; Qi, J.; Wan, P.; Shao, J.; Shen, A.

    2018-04-01

    Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated to be an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has been gradually applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds have some different characteristics (such as point density, spatial distribution and scene complexity). Some filtering algorithms for airborne LiDAR data have been used directly on mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm to handle mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of this algorithm, which respectively yields total errors of 0.44 %, 0.77 % and 1.20 %. Additionally, a large-area dataset is also tested to further validate the effectiveness of this algorithm, and the results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point clouds.
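    A filtering "total error" like the percentages quoted above can be computed from reference labels as the fraction of all points assigned to the wrong class (ground vs. non-ground). The sketch below illustrates that definition; the label conventions and toy data are assumptions, not the paper's evaluation code.

    ```python
    # Sketch: total error of a ground/non-ground filter against reference labels.
    import numpy as np

    def total_error(predicted_is_ground, reference_is_ground):
        pred = np.asarray(predicted_is_ground, dtype=bool)
        ref = np.asarray(reference_is_ground, dtype=bool)
        type_1 = np.sum(ref & ~pred)      # ground points rejected by the filter
        type_2 = np.sum(~ref & pred)      # non-ground points accepted as ground
        return 100.0 * (type_1 + type_2) / ref.size

    print(total_error([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))   # 20 % in this toy case
    ```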

  5. Erosion and Channel Incision Analysis with High-Resolution Lidar

    NASA Astrophysics Data System (ADS)

    Potapenko, J.; Bookhagen, B.

    2013-12-01

    High-resolution LiDAR (LIght Detection And Ranging) provides a new generation of sub-meter topographic data that has yet to be fully exploited by the Earth science communities. We make use of multi-temporal airborne and terrestrial lidar scans in south-central California and the Santa Barbara area. Specifically, we have investigated the Mission Canyon and Channel Islands regions from 2009-2011 to study changes in erosion and channel incision on the landscape. In addition to gridding the lidar data into digital elevation models (DEMs), we also make use of raw lidar point clouds and triangulated irregular networks (TINs) for detailed analysis of heterogeneously spaced topographic data. Using recent advancements in lidar point cloud processing from information technology disciplines, we have employed novel point cloud processing and feature detection algorithms to automate the detection of deeply incised channels, gullies, and vegetation, and to compute derived metrics (e.g. estimates of eroded volume). Our analysis compares topographically derived erosion volumes to field-derived cosmogenic radionuclide ages and in-situ sediment-flux measurements. First results indicate that gully erosion accounts for up to 60% of the sediment volume removed from the Mission Canyon region. Furthermore, we observe that gully erosion and upstream arroyo propagation accelerated after fires, especially in regions where vegetation was heavily burned. The use of high-resolution lidar point cloud data for topographic analysis is still a novel method that needs more precedent, and we hope to provide a cogent example of this approach with our research.

  6. Investigating Surface and Near-Surface Bushfire Fuel Attributes: A Comparison between Visual Assessments and Image-Based Point Clouds.

    PubMed

    Spits, Christine; Wallace, Luke; Reinke, Karin

    2017-04-20

    Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach for assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques such as image-based point clouds can offer land managers potentially more repeatable descriptions of fuel structure. This study compared the variability of estimates of surface and near-surface fuel attributes generated by eight assessment teams using the OFHAG and Fuels3D, a smartphone method utilising image-based point clouds, within three assessment plots in an Australian lowland forest. Surface fuel hazard scores derived from underpinning attributes were also assessed. Overall, this study found considerable variability between teams on most visually assessed variables, resulting in inconsistent hazard scores. Variability was observed within point cloud estimates but was, however, on average two to eight times less than that seen in visual estimates, indicating greater consistency and repeatability of this method. It is proposed that while variability within the Fuels3D method may be overcome through improved methods and equipment, inconsistencies in the OFHAG are likely due to the inherent subjectivity between assessors, which may be more difficult to overcome. This study demonstrates the capability of the Fuels3D method to efficiently and consistently collect data on fuel hazard and structure, and, as such, this method shows potential for use in fire management practices where accurate and reliable data is essential.

  7. Measurement of optical blurring in a turbulent cloud chamber

    NASA Astrophysics Data System (ADS)

    Packard, Corey D.; Ciochetto, David S.; Cantrell, Will H.; Roggemann, Michael C.; Shaw, Raymond A.

    2016-10-01

    Earth's atmosphere can significantly impact the propagation of electromagnetic radiation, degrading the performance of imaging systems. Deleterious effects of the atmosphere include turbulence, absorption and scattering by particulates. Turbulence leads to blurring, while absorption attenuates the energy that reaches imaging sensors. The optical properties of aerosols and clouds also impact radiation propagation via scattering, resulting in decorrelation from unscattered light. Models have been proposed for calculating a point spread function (PSF) for aerosol scattering, providing a method for simulating the contrast and spatial detail expected when imaging through atmospheres with significant aerosol optical depth. However, these synthetic images and their predicating theory would benefit from comparison with measurements in a controlled environment. Recently, Michigan Technological University (MTU) has designed a novel laboratory cloud chamber. This multiphase, turbulent "Pi Chamber" is capable of pressures down to 100 hPa and temperatures from -55 to +55°C. Additionally, humidity and aerosol concentrations are controllable. These boundary conditions can be combined to form and sustain clouds in an instrumented laboratory setting for measuring the impact of clouds on radiation propagation. This paper describes an experiment to generate mixing and expansion clouds in supersaturated conditions with salt aerosols, and an example of measured imagery viewed through the generated cloud is shown. Aerosol and cloud droplet distributions measured during the experiment are used to predict scattering PSF and MTF curves, and a methodology for validating existing theory is detailed. Measured atmospheric inputs will be used to simulate aerosol-induced image degradation for comparison with measured imagery taken through actual cloud conditions. The aerosol MTF will be experimentally calculated and compared to theoretical expressions. The key result of this study is the proposal of a closure experiment for verification of theoretical aerosol effects using actual clouds in a controlled laboratory setting.

  8. Integration of Point Clouds from Terrestrial Laser Scanning and Image-Based Matching for Generating High-Resolution Orthoimages

    NASA Astrophysics Data System (ADS)

    Salach, A.; Markiewicza, J. S.; Zawieska, D.

    2016-06-01

    An orthoimage is one of the basic photogrammetric products used for architectural documentation of historical objects; recently, it has become a standard in such work. Considering the increasing popularity of photogrammetric techniques applied in the cultural heritage domain, this research examines the two most popular measuring technologies: terrestrial laser scanning and automatic processing of digital photographs. The basic objective of the work presented in this paper was to optimize the quality of generated high-resolution orthoimages using the integration of data acquired by a Z+F 5006 terrestrial laser scanner and a Canon EOS 5D Mark II digital camera. The subject was one of the walls of the "Blue Chamber" of the Museum of King Jan III's Palace at Wilanów (Warsaw, Poland). The high-resolution images resulting from the integration of the point clouds acquired by the different methods were analysed in detail with respect to geometric and radiometric correctness.

  9. a Framework for Voxel-Based Global Scale Modeling of Urban Environments

    NASA Astrophysics Data System (ADS)

    Gehrung, Joachim; Hebel, Marcus; Arens, Michael; Stilla, Uwe

    2016-10-01

    The generation of 3D city models is a very active field of research. Modeling environments as point clouds may be fast but has disadvantages, which are readily addressed by using volumetric representations, especially when considering selective data acquisition, change detection and fast-changing environments. Therefore, this paper proposes a framework for the volumetric modeling and visualization of large-scale urban environments. Besides an architecture and the right mix of algorithms for the task, two compression strategies for volumetric models as well as a data-quality-based approach for the import of range measurements are proposed. The capabilities of the framework are shown on a mobile laser scanning dataset of the Technical University of Munich. Furthermore, the loss introduced by the compression techniques is evaluated and their memory consumption is compared to that of raw point clouds. The presented results show that the generation, storage and real-time rendering of even large urban models are feasible, even with off-the-shelf hardware.
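    The basic move from a point cloud to a volumetric representation is to collapse points into occupied cells. The sketch below uses a flat, fixed-size voxel grid purely for illustration; a framework like the one described above would instead use an octree with per-voxel occupancy probabilities.

    ```python
    # Sketch: occupancy voxelization of a point cloud on a flat grid.
    import numpy as np

    def voxelize(points, voxel_size=0.5):
        """Return the set of occupied voxel indices and the grid origin."""
        pts = np.asarray(points, dtype=float)
        origin = pts.min(axis=0)
        indices = np.floor((pts - origin) / voxel_size).astype(int)
        occupied = np.unique(indices, axis=0)
        return occupied, origin

    rng = np.random.default_rng(3)
    cloud = rng.random((10000, 3)) * 10.0          # 10 m cube of random points
    occ, _ = voxelize(cloud, voxel_size=1.0)
    print(len(occ), "occupied voxels out of", 10 ** 3)
    ```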

  10. Imaging Systems for Size Measurements of Debrisat Fragments

    NASA Technical Reports Server (NTRS)

    Shiotani, B.; Scruggs, T.; Toledo, R.; Fitz-Coy, N.; Liou, J. C.; Sorge, M.; Huynh, T.; Opiela, J.; Krisko, P.; Cowardin, H.

    2017-01-01

    The overall objective of the DebriSat project is to provide data to update existing standard spacecraft breakup models. One of the key sets of parameters used in these models is the physical dimensions of the fragments (i.e., length, average cross-sectional area, and volume). For the DebriSat project, only fragments with at least one dimension greater than 2 mm are collected and processed. Additionally, a significant portion of the fragments recovered from the impact test are needle-like and/or flat plate-like fragments whose heights are almost negligible in comparison to their other dimensions. As a result, two fragment size categories were defined: 2D objects and 3D objects. While measurement systems are commercially available, factors such as measurement rates, system adaptability, size characterization limitations and equipment costs presented significant challenges to the project, and a decision was made to develop our own size characterization systems. The size characterization systems consist of two automated imaging systems, one referred to as the 3D imaging system and the other as the 2D imaging system. Which imaging system to use depends on the classification of the fragment being measured. Both imaging systems utilize point-and-shoot cameras for object image acquisition and create representative point clouds of the fragments. The 3D imaging system utilizes a space-carving algorithm to generate a 3D point cloud, while the 2D imaging system utilizes an edge detection algorithm to generate a 2D point cloud. From the point clouds, the three largest orthogonal dimensions are determined using a convex hull algorithm. For 3D objects, in addition to the three largest orthogonal dimensions, the volume is computed via an alpha-shape algorithm applied to the point clouds. The average cross-sectional area is also computed for 3D objects. Both imaging systems have automated size measurements (image acquisition and image processing), driven by the need to quickly and accurately measure tens of thousands of debris fragments. Moreover, the automated size measurement reduces potential fragment damage/mishandling and improves accuracy and repeatability. As the fragment characterization progressed, it became evident that the imaging systems had to be revised. For example, an additional view was added to the 2D imaging system to capture the height of the 2D object. This paper presents the DebriSat project's imaging systems and calculation techniques in detail, from design and development to maturation. The experiences and challenges are also shared.
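    One common way to define "three largest orthogonal dimensions" from a point cloud is: the longest chord of the convex hull, the longest extent perpendicular to it, and the extent along the remaining orthogonal axis. The sketch below implements that generic construction; it is not necessarily the DebriSat project's exact definition or code.

    ```python
    # Sketch: three largest mutually orthogonal dimensions of a fragment point cloud.
    import numpy as np
    from scipy.spatial import ConvexHull

    def largest_orthogonal_dimensions(points):
        hull_pts = np.asarray(points)[ConvexHull(points).vertices]
        # Dimension 1: longest pairwise distance between hull vertices.
        diffs = hull_pts[:, None, :] - hull_pts[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        i, j = np.unravel_index(np.argmax(dists), dists.shape)
        u1 = (hull_pts[j] - hull_pts[i]) / dists[i, j]
        # Dimension 2: largest extent after removing the u1 component.
        proj = hull_pts - np.outer(hull_pts @ u1, u1)
        d2 = np.linalg.norm(proj[:, None, :] - proj[None, :, :], axis=-1)
        k, l = np.unravel_index(np.argmax(d2), d2.shape)
        u2 = (proj[l] - proj[k]) / d2[k, l]
        # Dimension 3: extent along the axis orthogonal to both u1 and u2.
        u3 = np.cross(u1, u2)
        ext3 = (hull_pts @ u3).max() - (hull_pts @ u3).min()
        return dists[i, j], d2[k, l], ext3

    box = np.array([[x, y, z] for x in (0, 4.0) for y in (0, 2.0) for z in (0, 1.0)])
    print(largest_orthogonal_dimensions(box))   # longest chord is the box diagonal
    ```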

  11. LiDAR Point Cloud and Stereo Image Point Cloud Fusion

    DTIC Science & Technology

    2013-09-01

    LiDAR point cloud (right) highlighting linear edge features ideal for automatic registration... Areas where topography is being derived, unfortunately, do...with the least amount of automatic correlation errors was used. The following graphic (Figure 12) shows the coverage of the WV1 stereo triplet as

  12. Fusing Satellite-Derived Irradiance and Point Measurements through Optimal Interpolation

    NASA Astrophysics Data System (ADS)

    Lorenzo, A.; Morzfeld, M.; Holmgren, W.; Cronin, A.

    2016-12-01

    Satellite-derived irradiance is widely used throughout the design and operation of a solar power plant. While satellite-derived estimates cover a large area, they also have large errors compared to point measurements from sensors on the ground. We describe an optimal interpolation routine that fuses the broad spatial coverage of satellite-derived irradiance with the high accuracy of point measurements. The routine can be applied to any satellite-derived irradiance and point measurement datasets. Unique aspects of this work include the fact that information is spread using cloud location and thickness and that a number of point measurements are collected from rooftop PV systems. The routine is sensitive to errors in the satellite image geolocation, so care must be taken to adjust the cloud locations based on the solar and satellite geometries. Analysis of the optimal interpolation routine over Tucson, AZ, with 20 point measurements shows a significant improvement in the irradiance estimate for two distinct satellite image to irradiance algorithms. Improved irradiance estimates can be used for resource assessment, distributed generation production estimates, and irradiance forecasts.
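    Optimal interpolation of the kind described above corrects a gridded background field with point measurements through a covariance-weighted analysis update. The sketch below shows the generic textbook update equation; the covariance model, observation operator and all numbers are illustrative assumptions, not the authors' operational configuration.

    ```python
    # Sketch of a generic optimal-interpolation (OI) analysis step.
    import numpy as np

    def optimal_interpolation(background, obs, H, B, R):
        """x_a = x_b + B H^T (H B H^T + R)^-1 (y - H x_b)."""
        innovation = obs - H @ background
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        return background + K @ innovation

    # Toy example: 5 grid cells of satellite-derived irradiance, 2 ground sensors.
    x_b = np.array([600.0, 620.0, 580.0, 640.0, 610.0])        # W m^-2 background
    H = np.array([[1, 0, 0, 0, 0],
                  [0, 0, 0, 1, 0]], dtype=float)               # sensors sit in cells 0 and 3
    B = 100.0 * np.exp(-np.abs(np.subtract.outer(np.arange(5), np.arange(5))) / 2.0)
    R = 4.0 * np.eye(2)                                        # sensor error covariance
    y = np.array([575.0, 660.0])                               # ground measurements
    print(optimal_interpolation(x_b, y, H, B, R).round(1))
    ```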

  13. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.
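    The sparse-regression idea above, approximating a target (vectorised, correspondence-aligned) point cloud as a sparse linear combination of training clouds, can be sketched with a plain l1-regularised least-squares solver. The code below uses generic ISTA (iterative soft-thresholding) and synthetic data; it is not the authors' SR/MSR formulation or implementation.

    ```python
    # Sketch: sparse coding of a target vector against a dictionary of training vectors.
    import numpy as np

    def ista_sparse_coding(D, y, lam=0.1, iterations=500):
        """Minimise 0.5*||D w - y||^2 + lam*||w||_1 by iterative soft-thresholding."""
        step = 1.0 / np.linalg.norm(D, 2) ** 2          # 1 / Lipschitz constant of the gradient
        w = np.zeros(D.shape[1])
        for _ in range(iterations):
            grad = D.T @ (D @ w - y)
            z = w - step * grad
            w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
        return w

    # Toy example: the target is built from 2 of 10 training "clouds" plus noise.
    rng = np.random.default_rng(4)
    D = rng.normal(size=(300, 10))                      # columns = vectorised training clouds
    w_true = np.zeros(10)
    w_true[[2, 7]] = [0.8, 0.5]
    y = D @ w_true + 0.01 * rng.normal(size=300)
    print(ista_sparse_coding(D, y).round(2))            # recovers weights near indices 2 and 7
    ```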

  14. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.

  15. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications. PMID:27147347

  16. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm

    PubMed Central

    Yan, Li; Xie, Hong; Chen, Changjun

    2017-01-01

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on a genetic algorithm (GA) for the automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and for alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor during data acquisition are used as constraints to narrow the search space in the GA. A new fitness function to evaluate the solutions for the GA, named the Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of the population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. A registration scheme integrating the well-known ICP algorithm with the GA is further proposed to accelerate the optimization; its optimization time decreases by about 50%. PMID:28850100

  17. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm.

    PubMed

    Yan, Li; Tan, Junxiang; Liu, Hua; Xie, Hong; Chen, Changjun

    2017-08-29

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on a genetic algorithm (GA) for the automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and for alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor during data acquisition are used as constraints to narrow the search space in the GA. A new fitness function to evaluate the solutions for the GA, named the Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of the population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. A registration scheme integrating the well-known ICP algorithm with the GA is further proposed to accelerate the optimization; its optimization time decreases by about 50%.

  18. Effectiveness and limitations of parameter tuning in reducing biases of top-of-atmosphere radiation and clouds in MIROC version 5

    NASA Astrophysics Data System (ADS)

    Ogura, Tomoo; Shiogama, Hideo; Watanabe, Masahiro; Yoshimori, Masakazu; Yokohata, Tokuta; Annan, James D.; Hargreaves, Julia C.; Ushigami, Naoto; Hirota, Kazuya; Someya, Yu; Kamae, Youichi; Tatebe, Hiroaki; Kimoto, Masahide

    2017-12-01

    This study discusses how much of the biases in top-of-atmosphere (TOA) radiation and clouds can be removed by parameter tuning in the present-day simulation of a climate model in the Coupled Model Inter-comparison Project phase 5 (CMIP5) generation. We used output of a perturbed parameter ensemble (PPE) experiment conducted with an atmosphere-ocean general circulation model (AOGCM) without flux adjustment. The Model for Interdisciplinary Research on Climate version 5 (MIROC5) was used for the PPE experiment. Output of the PPE was compared with satellite observation data to evaluate the model biases and the parametric uncertainty of the biases with respect to TOA radiation and clouds. The results indicate that removing or changing the sign of the biases by parameter tuning alone is difficult. In particular, the cooling bias of the shortwave cloud radiative effect at low latitudes could not be removed, neither in the zonal mean nor at each latitude-longitude grid point. The bias was related to the overestimation of both cloud amount and cloud optical thickness, which could not be removed by the parameter tuning either. However, they could be alleviated by tuning parameters such as the maximum cumulus updraft velocity at the cloud base. On the other hand, the bias of the shortwave cloud radiative effect in the Arctic was sensitive to parameter tuning. It could be removed by tuning such parameters as albedo of ice and snow both in the zonal mean and at each grid point. The obtained results illustrate the benefit of PPE experiments which provide useful information regarding effectiveness and limitations of parameter tuning. Implementing a shallow convection parameterization is suggested as a potential measure to alleviate the biases in radiation and clouds.

  19. Constraining the models' response of tropical low clouds to SST forcings using CALIPSO observations

    NASA Astrophysics Data System (ADS)

    Cesana, G.; Del Genio, A. D.; Ackerman, A. S.; Brient, F.; Fridlind, A. M.; Kelley, M.; Elsaesser, G.

    2017-12-01

    Low-cloud response to a warmer climate is still identified as the largest source of uncertainty in the latest generation of climate models. To date there is no consensus among the models on whether the tropical low cloudiness would increase or decrease in a warmer climate. In addition, it has been shown that, depending on their climate sensitivity, the models predict either deeper or shallower low clouds. Recently, several relationships between inter-model characteristics of the present-day climate and future climate changes have been highlighted. These so-called emergent constraints aim to target relevant model improvements and to constrain models' projections based on current climate observations. Here we propose to use, for the first time, 10 years of CALIPSO cloud statistics to assess the ability of the models to represent the vertical structure of tropical low clouds for abnormally warm SSTs. We use a simulator approach to compare observations and simulations, and focus on the low-layered clouds (i.e. z < 3.2 km) as well as on the more detailed level-by-level perspective of clouds (40 levels from 0 to 19 km). Results show that in most models an increase of the SST leads to a decrease of the low-layer cloud fraction. Vertically, the clouds deepen, namely by decreasing the cloud fraction in the lowest levels and increasing it around the top of the boundary layer. This feature coincides with an increase of the high-level cloud fraction (z > 6.5 km). Although the models' spread is large, the multi-model mean captures the observed variations but with a smaller amplitude. We then employ the GISS model to investigate how changes in cloud parameterizations affect the response of low clouds to warmer SSTs on the one hand, and how they affect the variations of the model's cloud profiles with respect to environmental parameters on the other hand. Finally, we use CALIPSO observations to constrain the model by determining i) what set of parameters allows the observed relationships to be reproduced and ii) what the consequences are for the cloud feedbacks. These results point toward process-oriented constraints of low-cloud responses to surface warming and environmental parameters.

  20. Geovisualisation of relief in a virtual reality system on the basis of low-level aerial imagery

    NASA Astrophysics Data System (ADS)

    Halik, Łukasz; Smaczyński, Maciej

    2017-12-01

    The aim of the following paper was to present the geomatic process of transforming low-level aerial imagery obtained with unmanned aerial vehicles (UAV) into a digital terrain model (DTM) and implementing the model in a virtual reality (VR) system. The object of the study was a natural aggregate heap of irregular shape with height differences of up to 11 m. Based on the obtained photos, three point clouds (varying in level of detail) were generated for the 20,000 m2 area. For further analyses, the researchers selected the point cloud with the best ratio of accuracy to output file size. This choice was made based on seven control points of the heap surveyed in the field and the corresponding points in the generated 3D model. The differences of a few centimetres obtained between the control points in the field and those from the model might testify to the usefulness of the described workflow for creating large-scale DTMs for engineering purposes. Finally, the chosen model was implemented in the VR system, which enables lifelike exploration of the 3D terrain relief in real time thanks to the first-person view (FPV) mode. In this mode, the user observes the object with the aid of a head-mounted display (HMD), experiencing the geovisualisation from the inside and virtually analysing the terrain as a direct animator of the observations.

  1. Validation Ice Crystal Icing Engine Test in the Propulsion Systems Laboratory at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Oliver, Michael J.

    2014-01-01

    The Propulsion Systems Laboratory (PSL) is an existing altitude-simulation jet engine test facility located at NASA Glenn Research Center in Cleveland, OH. It was modified in 2012 with the integration of an ice crystal cloud generation system. This paper documents the inaugural ice crystal cloud test in PSL--the first ever full-scale, high-altitude ice crystal cloud turbofan engine test to be conducted in a ground-based facility. The test article was a Lycoming ALF502-R5 high-bypass turbofan engine, serial number LF01. The objectives of the test were to validate the PSL ice crystal cloud calibration and engine testing methodologies by demonstrating the capability to calibrate and duplicate known flight test events that occurred on the same LF01 engine, and to generate engine data to support fundamental and computational research into the physics of ice crystal icing in a turbofan engine environment, while duplicating known revenue service events and conducting test points with varying facility and engine parameters. During PSL calibration testing it was discovered that heated probes installed through tunnel sidewalls experienced ice buildup aft of their location due to ice crystals impinging upon them, melting and running back. Filtered city water was used in the cloud generation nozzle system to provide ice crystal nucleation sites. This resulted in mineralization forming on flow path hardware, which led to a chronic degradation of performance during the month-long test. Lacking internal flow path cameras, the response of thermocouples along the flow path was interpreted as ice building up. Using this interpretation, a strong correlation between total water content (TWC) and a weaker correlation between median volumetric diameter (MVD) of the ice crystal cloud and the rate of ice buildup along the instrumented flow path was identified. For this test article the engine anti-ice system was required to be turned on before ice crystal icing would occur. The ice crystal icing event, an uncommanded reduction in thrust, could be turned on and off by manipulating cloud TWC. A flight test point where no ice crystal icing event occurred was also duplicated in PSL. Physics-based computational tools were successfully used to predict tunnel settings to induce ice buildup along the low-pressure compression system flow path for several test points at incrementally lower altitudes, demonstrating that development of ice crystal icing scaling laws is potentially feasible. Analysis of PSL test data showed that the uncommanded reduction in thrust occurs during ice-crystal-cloud-on operation prior to fan speed reduction. This supports previous findings that the reduction of thrust for this test article is due to ice buildup leading to restricted airflow from either physical or aerodynamic blockage in the engine core flow path.

  3. Spectral pattern classification in lidar data for rock identification in outcrops.

    PubMed

    Campos Inocencio, Leonardo; Veronez, Mauricio Roberto; Wohnrath Tognoli, Francisco Manoel; de Souza, Marcelo Kehl; da Silva, Reginaldo Macedônio; Gonzaga, Luiz; Blum Silveira, César Leonardo

    2014-01-01

    The present study aimed to develop and implement a method for the detection and classification of spectral signatures in point clouds obtained from a terrestrial laser scanner, in order to identify the presence of different rocks in outcrops and to generate a digital outcrop model. To achieve this objective, a software tool based on cluster analysis was created, named K-Clouds. This software was developed through a partnership between UNISINOS and the company V3D. The tool begins with the analysis and interpretation of a histogram of the point cloud of the outcrop; the user then indicates a number of classes, which are used to process the return-intensity values. This classified information can then be interpreted by geologists to provide a better understanding and identification of the rocks present in the outcrop. Beyond the detection of different rocks, this work was able to detect small changes in the physical-chemical characteristics of the rocks, whether caused by weathering or by compositional changes.
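    The K-Clouds tool itself is not publicly documented in detail, so the snippet below is only a minimal sketch of the general idea: clustering return-intensity values into a user-chosen number of classes. It assumes a plain NumPy array of intensities and uses a simple 1-D k-means in place of whatever cluster analysis the authors implemented; all names are illustrative.

```python
import numpy as np

def classify_intensity(intensity, n_classes, n_iter=50, seed=0):
    """Cluster 1-D return-intensity values into n_classes with a basic k-means.

    A minimal stand-in for histogram-based spectral classification:
    each point is labelled by its nearest intensity centroid.
    """
    rng = np.random.default_rng(seed)
    # initialise centroids by picking distinct intensity values
    centroids = rng.choice(intensity, size=n_classes, replace=False).astype(float)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(intensity[:, None] - centroids[None, :]), axis=1)
        for k in range(n_classes):
            if np.any(labels == k):
                centroids[k] = intensity[labels == k].mean()
    return labels, centroids

# example: synthetic intensities from two hypothetical 'rock types'
intensity = np.concatenate([np.random.normal(0.3, 0.05, 1000),
                            np.random.normal(0.7, 0.05, 1000)])
labels, centroids = classify_intensity(intensity, n_classes=2)
```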

  4. A scalable and multi-purpose point cloud server (PCS) for easier and faster point cloud data management and processing

    NASA Astrophysics Data System (ADS)

    Cura, Rémi; Perret, Julien; Paparoditis, Nicolas

    2017-05-01

    In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can conjointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ˜50 million pts/h per process range, compression ratios greater than 2:1 and up to 4:1 that are transparent for the user, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods such as object detection.

  5. A new mosaic method for three-dimensional surface

    NASA Astrophysics Data System (ADS)

    Yuan, Yun; Zhu, Zhaokun; Ding, Yongjun

    2011-08-01

    Three-dimensional (3-D) data mosaicking is an indispensable step in surface measurement and digital terrain map generation. To address the mosaic problem of locally unorganized point clouds with only coarse registration and many mismatched points, a new RANSAC-based mosaic method for 3-D surfaces is proposed. Each iteration of the method proceeds through random sampling with an additional shape constraint, data normalization of the point clouds, absolute orientation, data denormalization, inlier counting, etc. After N random sample trials the largest consensus set is selected, and the model is finally re-estimated using all the points in the selected subset. The minimal subset is composed of three non-collinear points which form a triangle; the shape of the triangle is considered during random sample selection in order to keep the samples well conditioned. A new coordinate system transformation algorithm presented in this paper is used to avoid singularities: the whole rotation between the two coordinate systems is decomposed into two rotations expressed by Euler angle vectors, each with an explicit physical meaning. Both simulated and real data are used to demonstrate the correctness and validity of this mosaic method. The method has good noise immunity owing to its robust estimation property, and high accuracy, as the shape constraint is added to the random sampling and data normalization is applied in the absolute orientation. It is applicable to high-precision measurement of three-dimensional surfaces as well as to 3-D terrain mosaicking.
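    The abstract describes a RANSAC loop with three-point minimal samples, a shape constraint on the sample triangle, and a final re-estimation on the largest consensus set. The sketch below illustrates that general scheme under the assumption that src and dst are already coarsely matched corresponding points; it uses an SVD (Kabsch) absolute orientation instead of the paper's Euler-angle formulation and omits the data normalization step, so it approximates the idea rather than reproducing the authors' algorithm.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t (Kabsch/SVD)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_align(src, dst, n_trials=500, thresh=0.05, min_area=1e-4, seed=0):
    """RANSAC over 3-point minimal samples of matched points src -> dst;
    near-collinear sample triangles are rejected (shape constraint)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    R_best, t_best = np.eye(3), np.zeros(3)
    for _ in range(n_trials):
        idx = rng.choice(len(src), size=3, replace=False)
        a, b, c = src[idx]
        if 0.5 * np.linalg.norm(np.cross(b - a, c - a)) < min_area:
            continue                                   # degenerate triangle, skip
        R, t = rigid_transform(src[idx], dst[idx])
        inliers = np.linalg.norm(src @ R.T + t - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, R_best, t_best = inliers, R, t
    if best_inliers.sum() >= 3:                        # re-estimate on the consensus set
        R_best, t_best = rigid_transform(src[best_inliers], dst[best_inliers])
    return R_best, t_best, best_inliers
```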

  6. A Concise Guide to Feature Histograms with Applications to LIDAR-Based Spacecraft Relative Navigation

    NASA Astrophysics Data System (ADS)

    Rhodes, Andrew P.; Christian, John A.; Evans, Thomas

    2017-12-01

    With the availability and popularity of 3D sensors, it is advantageous to re-examine the use of point cloud descriptors for the purpose of pose estimation and spacecraft relative navigation. One popular descriptor is the oriented unique repeatable clustered viewpoint feature histogram (OUR-CVFH), which is most often utilized in personal and industrial robotics to simultaneously recognize and navigate relative to an object. Recent research into using the OUR-CVFH descriptor for spacecraft navigation has produced favorable results. Since OUR-CVFH is the most recent innovation in a large family of feature histogram point cloud descriptors, discussions of parameter settings and insights into its functionality are spread among various publications and online resources. This paper organizes the history of feature histogram point cloud descriptors into a straightforward explanation of their evolution. The article compiles all the information needed to implement OUR-CVFH in one location, and provides useful suggestions on how to tune the generation parameters. This work is beneficial for anyone interested in using this histogram descriptor for object recognition or navigation, be it in personal robotics or spacecraft navigation.

  7. Dem Generation from Close-Range Photogrammetry Using Extended Python Photogrammetry Toolbox

    NASA Astrophysics Data System (ADS)

    Belmonte, A. A.; Biong, M. M. P.; Macatulad, E. G.

    2017-10-01

    Digital elevation models (DEMs) are widely used raster data for different applications concerning terrain, such as flood modelling, viewshed analysis, mining, land development and engineering design projects, to name a few. DEMs can be obtained through various methods, including topographic survey, LiDAR or photogrammetry, and internet sources. Terrestrial close-range photogrammetry is one of the alternative methods for producing DEMs through the processing of images with photogrammetry software. Powerful commercially available photogrammetry software can already produce high-accuracy DEMs, but this entails a corresponding cost. Although some of these packages offer free or demo trials, the trials limit the usable features and usage time. One alternative is the use of free and open-source software (FOSS), such as the Python Photogrammetry Toolbox (PPT), which provides an interface for performing photogrammetric processes implemented through Python scripts. For relatively small areas, such as in mining or construction excavation, a relatively inexpensive, fast and accurate method would be advantageous. In this study, PPT was used to generate 3D point cloud data from images of an open-pit excavation. The PPT was extended with an algorithm converting the generated point cloud data into a usable DEM.
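    The abstract mentions an added algorithm that converts the generated point cloud into a usable DEM. The PPT extension itself is not reproduced here; the following is a minimal, hypothetical sketch of such a conversion by gridding, in which each raster cell receives the mean elevation of the points falling inside it.

```python
import numpy as np

def point_cloud_to_dem(points, cell_size):
    """Rasterise an (N, 3) point cloud (x, y, z) into a regular DEM grid.

    Cells with no points are returned as NaN; the lower-left corner and
    cell size are returned as simple georeferencing information.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    x0, y0 = x.min(), y.min()
    cols = ((x - x0) / cell_size).astype(int)
    rows = ((y - y0) / cell_size).astype(int)
    dem = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    sums = np.zeros_like(dem)
    counts = np.zeros_like(dem)
    np.add.at(sums, (rows, cols), z)      # accumulate elevations per cell
    np.add.at(counts, (rows, cols), 1)
    filled = counts > 0
    dem[filled] = sums[filled] / counts[filled]
    return dem, (x0, y0, cell_size)
```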

  8. Hierarchical Higher Order Crf for the Classification of Airborne LIDAR Point Clouds in Urban Areas

    NASA Astrophysics Data System (ADS)

    Niemeyer, J.; Rottensteiner, F.; Soergel, U.; Heipke, C.

    2016-06-01

    We propose a novel hierarchical approach for the classification of airborne 3D lidar points. Spatial and semantic context is incorporated via a two-layer Conditional Random Field (CRF). The first layer operates on a point level and utilises higher order cliques. Segments are generated from the labelling obtained in this way. They are the entities of the second layer, which incorporates larger-scale context. The classification result of the segments is introduced as an energy term for the next iteration of the point-based layer. This framework iterates and mutually propagates context to improve the classification results. Potentially wrong decisions can be revised at later stages. The output is a labelled point cloud as well as segments roughly corresponding to object instances. Moreover, we present two new contextual features for the segment classification: the distance and the orientation of a segment with respect to the closest road. It is shown that the classification benefits from these features. In our experiments the hierarchical framework improves the overall accuracies by 2.3% on a point-based level and by 3.0% on a segment-based level, respectively, compared to a purely point-based classification.

  9. a Global Registration Algorithm of the Single-Closed Ring Multi-Stations Point Cloud

    NASA Astrophysics Data System (ADS)

    Yang, R.; Pan, L.; Xiang, Z.; Zeng, H.

    2018-04-01

    To address the global registration problem of a single closed ring of multi-station point clouds, a formula for the error of the rotation matrix was constructed according to the definition of the error. A global registration algorithm for multi-station point clouds was then derived by minimizing this rotation-matrix error, and fast formulas for computing the transformation matrices were given, together with the implementation steps and a simulation experiment scheme. Comparing three different processing schemes for multi-station point clouds, the experimental results verified the effectiveness of the new global registration method, which can effectively complete the global registration of the point clouds.

  10. Investigating Surface and Near-Surface Bushfire Fuel Attributes: A Comparison between Visual Assessments and Image-Based Point Clouds

    PubMed Central

    Spits, Christine; Wallace, Luke; Reinke, Karin

    2017-01-01

    Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach for assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques such as image-based point clouds can offer land managers potentially more repeatable descriptions of fuel structure. This study compared the variability of estimates of surface and near-surface fuel attributes generated by eight assessment teams using the OFHAG and Fuels3D, a smartphone method utilising image-based point clouds, within three assessment plots in an Australian lowland forest. Surface fuel hazard scores derived from underpinning attributes were also assessed. Overall, this study found considerable variability between teams on most visually assessed variables, resulting in inconsistent hazard scores. Variability was observed within point cloud estimates but was, however, on average two to eight times less than that seen in visual estimates, indicating greater consistency and repeatability of this method. It is proposed that while variability within the Fuels3D method may be overcome through improved methods and equipment, inconsistencies in the OFHAG are likely due to the inherent subjectivity between assessors, which may be more difficult to overcome. This study demonstrates the capability of the Fuels3D method to efficiently and consistently collect data on fuel hazard and structure, and, as such, this method shows potential for use in fire management practices where accurate and reliable data is essential. PMID:28425957

  11. The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Hu, C.; Xia, G.; Xue, H.

    2018-04-01

    The registration of terrestrial laser point clouds and close-range images is a key step in high-precision 3D reconstruction of cultural relics. Given the current requirement for high texture resolution in this field, registering the point cloud with the image data during object reconstruction leads to a one-to-many problem: one point cloud must be registered to multiple images. In current commercial software, this registration is achieved by manually partitioning the point cloud data, manually matching point cloud and image data, and manually selecting corresponding two-dimensional points of the same name in the image and the point cloud; this process not only greatly reduces working efficiency, but also affects the registration accuracy and causes texture seams in the coloured point cloud. To solve these problems, this paper takes the whole-object image as intermediate data and uses matching techniques to establish an automatic one-to-one correspondence between the point cloud and multiple images. Matching between the central-projection reflectance-intensity image of the point cloud and the optical image is applied to automatically match feature points of the same name, and a Rodrigues-matrix spatial similarity transformation model with iterative weight selection is used to register the two kinds of data automatically and with high accuracy. This method is expected to serve high-precision, high-efficiency automatic 3D reconstruction of cultural relic objects, and has both scientific research value and practical significance.
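    The paper's registration relies on a Rodrigues-matrix spatial similarity transformation with iterative weight selection. As a rough illustration of the underlying transformation model only, the sketch below estimates a 7-parameter similarity transform (scale, rotation, translation) from already-matched 3D points using the closed-form Umeyama/SVD solution; the weight-selection iteration and the intensity-image matching are not reproduced.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R and translation t with dst ~ s * R @ src + t.

    Closed-form (Umeyama/SVD) solution for matched 3-D point sets; a stand-in
    for the Rodrigues-matrix spatial similarity model referred to above.
    """
    cs, cd = src.mean(0), dst.mean(0)
    src_c, dst_c = src - cs, dst - cd
    H = src_c.T @ dst_c / len(src)            # cross-covariance of the two point sets
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_src
    t = cd - s * R @ cs
    return s, R, t
```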

  12. Assessment of the Quality of Digital Terrain Model Produced from Unmanned Aerial System Imagery

    NASA Astrophysics Data System (ADS)

    Kosmatin Fras, M.; Kerin, A.; Mesarič, M.; Peterman, V.; Grigillo, D.

    2016-06-01

    Production of a digital terrain model (DTM) is one of the most usual tasks when processing a photogrammetric point cloud generated from Unmanned Aerial System (UAS) imagery. The quality of the DTM produced in this way depends on different factors: the quality of the imagery, image orientation and camera calibration, point cloud filtering, interpolation methods, etc. However, the assessment of the real quality of the DTM is very important for its further use and applications. In this paper we first describe the main steps of UAS imagery acquisition and processing based on a practical test field survey and data. The main focus of this paper is to present the approach to DTM quality assessment and to give a practical example on the test field data. For the data processing and DTM quality assessment presented in this paper, mainly in-house developed computer programs have been used. The quality of a DTM comprises its accuracy, density and completeness. Different accuracy measures, such as RMSE, median, normalized median absolute deviation with their confidence intervals, and quantiles, are computed. The completeness of the DTM is a very often overlooked quality parameter, but when the DTM is produced from a point cloud it should not be neglected, as some areas might be very sparsely covered by points. The original density is presented with a density plot or map. The completeness is presented by the map of point density and the map of distances between grid points and terrain points. The results in the test area show the great potential of DTMs produced from UAS imagery, in the sense of a detailed representation of the terrain as well as good height accuracy.
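    The accuracy measures listed in the abstract (RMSE, median, normalized median absolute deviation, quantiles) can be computed directly from the height differences at check points. The snippet below is a small illustrative helper, not the authors' in-house program; confidence intervals (e.g. by bootstrapping) are omitted.

```python
import numpy as np

def dtm_accuracy(dz):
    """Classical and robust accuracy measures for DTM height differences dz
    (DTM height minus reference height at check points)."""
    dz = np.asarray(dz, dtype=float)
    rmse = np.sqrt(np.mean(dz ** 2))
    med = np.median(dz)
    nmad = 1.4826 * np.median(np.abs(dz - med))   # normalized median absolute deviation
    q68, q95 = np.percentile(np.abs(dz), [68.3, 95.0])
    return {"rmse": rmse, "median": med, "nmad": nmad, "q68.3": q68, "q95": q95}
```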

  13. Investigation of unsteadiness in Shock-particle cloud interaction: Fully resolved two-dimensional simulation and one-dimensional modeling

    NASA Astrophysics Data System (ADS)

    Hosseinzadeh-Nik, Zahra; Regele, Jonathan D.

    2015-11-01

    Dense compressible particle-laden flow, which has a complex nature, exists in various engineering applications. A shock wave impacting a particle cloud is a canonical problem for investigating this type of flow. It has been demonstrated that large flow unsteadiness is generated inside the particle cloud by the flow induced by the shock passage. It is desirable to develop models for the Reynolds stress that capture the energy contained in vortical structures, so that volume-averaged models with point particles can be simulated accurately. However, previous work used the Euler equations, which makes the prediction of vorticity generation and propagation inaccurate. In this work, a fully resolved two-dimensional (2D) simulation using the compressible Navier-Stokes equations, with a volume penalization method to model the particles, has been performed with the parallel adaptive wavelet-collocation method. The results still show large unsteadiness inside and downstream of the particle cloud. A 1D model for the unclosed terms is created based upon these 2D results. The 1D model uses a two-phase simple low-dissipation AUSM scheme (TSLAU) coupled with the compressible two-phase kinetic energy equation.

  14. Recording Approach of Heritage Sites Based on Merging Point Clouds from High Resolution Photogrammetry and Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, P.; Alby, E.; Landes, T.; Koehl, M.; Guillemin, S.; Hullo, J. F.; Assali, P.; Smigiel, E.

    2012-07-01

    Different approaches and tools are required in Cultural Heritage Documentation to deal with the complexity of monuments and sites. The documentation process has changed strongly in the last few years, always driven by technology. Accurate documentation relies closely on advances in technology (imaging sensors, high-speed scanning, automation in recording and processing data) for the purposes of conservation works, management, appraisal, assessment of the structural condition, archiving, publication and research (Patias et al., 2008). In this paper we focus on the recording aspects of cultural heritage documentation, especially the generation of geometric and photorealistic 3D models for accurate reconstruction and visualization purposes. The selected approaches are based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and recent advances have changed the recording approach. The choice of the best workflow relies on the site configuration, the performance of the sensors, and criteria such as geometry, accuracy, resolution, georeferencing, texture, and of course processing time. TLS techniques (time-of-flight or phase-shift systems) are widely used for recording large and complex objects and sites. Point cloud generation from images by dense stereo or multi-view matching can be used as an alternative or complementary method to TLS. Compared to TLS, the photogrammetric solution is a low-cost one, as the acquisition system is limited to a high-performance digital camera and a few accessories only. Indeed, the stereo or multi-view matching process offers a cheap, flexible and accurate solution to obtain 3D point clouds. Moreover, the captured images can also be used for texturing the models. Several software packages are available, whether web-based, open source or commercial. The main advantage of this photogrammetric or computer-vision based technology is to obtain at the same time a point cloud (whose resolution depends on the size of the pixel on the object), and therefore an accurately meshed object with its texture. After the matching and processing steps, the resulting data can be used in much the same way as a TLS point cloud, but with additional radiometric information for the textures. The discussion in this paper reviews the recording and the important processing steps, such as geo-referencing and data merging, the essential assessment of the results, and examples of deliverables from projects of the Photogrammetry and Geomatics Group (INSA Strasbourg, France).

  15. Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds

    NASA Astrophysics Data System (ADS)

    Boerner, R.; Kröhnert, M.

    2016-06-01

    3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data have no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images and point cloud data, either by mounting optical camera systems on top of laser scanners or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of the free movement benefits augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data back-projected to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal distortion-free camera.
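    A synthetic image of the kind described here can be sketched as a simple forward projection of the coloured point cloud through an ideal pinhole camera with a z-buffer. The function below is an illustrative approximation (hypothetical parameter names, no hole filling or point splatting), not the authors' implementation.

```python
import numpy as np

def synthetic_image(points, colors, R, t, f, cx, cy, width, height):
    """Project an (N, 3) point cloud with per-point colors into a synthetic view.

    R, t: exterior orientation (world -> camera), f: focal length in pixels,
    (cx, cy): principal point. A z-buffer keeps the closest point per pixel.
    """
    pc = points @ R.T + t                   # camera coordinates
    in_front = pc[:, 2] > 0
    pc, col = pc[in_front], colors[in_front]
    u = (f * pc[:, 0] / pc[:, 2] + cx).astype(int)
    v = (f * pc[:, 1] / pc[:, 2] + cy).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z, col = u[ok], v[ok], pc[ok, 2], col[ok]
    img = np.zeros((height, width, 3), dtype=col.dtype)
    zbuf = np.full((height, width), np.inf)
    for i in np.argsort(-z):                # draw far points first, near points overwrite
        if z[i] < zbuf[v[i], u[i]]:
            zbuf[v[i], u[i]] = z[i]
            img[v[i], u[i]] = col[i]
    return img
```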

  16. An Approach of Web-based Point Cloud Visualization without Plug-in

    NASA Astrophysics Data System (ADS)

    Ye, Mengxuan; Wei, Shuangfeng; Zhang, Dongmei

    2016-11-01

    With the advances in three-dimensional laser scanning technology, the demand for visualization of massive point clouds is increasingly urgent. Until the introduction of WebGL, point cloud visualization was limited to desktop-based solutions; now several web renderers are available. This paper addresses the current issues in web-based point cloud visualization and proposes a method for web-based point cloud visualization without plug-ins. The method combines ASP.NET and WebGL technologies, using the spatial database PostgreSQL to store the data and the open web technologies HTML5 and CSS3 to implement the user interface; an online visualization system for 3D point clouds is developed in JavaScript with web interactions. Finally, the method is applied to a real case. The experiments show that the new approach is of great practical value and avoids the shortcomings of existing WebGIS solutions.

  17. Patient-specific atrium models for training and pre-procedure surgical planning

    NASA Astrophysics Data System (ADS)

    Laing, Justin; Moore, John; Bainbridge, Daniel; Drangova, Maria; Peters, Terry

    2017-03-01

    Minimally invasive cardiac procedures requiring a trans-septal puncture, such as atrial ablation and MitraClip® mitral valve repair, are becoming increasingly common. These procedures are performed on the beating heart and require clinicians to rely on image-guided techniques. For cases of complex or diseased anatomy, in which fluoroscopic and echocardiography images can be difficult to interpret, clinicians may benefit from patient-specific atrial models that can be used for training, surgical planning, and the validation of new devices and guidance techniques. Computed tomography (CT) images of a patient's heart were segmented and used to generate geometric models to create a patient-specific atrial phantom. Using rapid prototyping, the geometric models were converted into physical representations and used to build a mold. The atria were then molded using tissue-mimicking materials and imaged using CT. The resulting images were segmented and used to generate a point cloud data set that could be registered to the original patient data. The absolute distances between the two point clouds were compared and evaluated to determine the model's accuracy. Comparing the molded-model point cloud to the original data set resulted in a maximum Euclidean distance error of 4.5 mm, an average error of 0.5 mm and a standard deviation of 0.6 mm. Using our workflow for creating atrial models, potential complications, particularly for complex repairs, may be accounted for in pre-operative planning. The information gained by clinicians involved in planning and performing the procedure should lead to shorter procedural times and better outcomes for patients.
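    The reported accuracy figures are absolute point-to-point distances between the registered clouds. A minimal way to reproduce such statistics, assuming both clouds are already registered and available as NumPy arrays, is a nearest-neighbour query:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(model_points, reference_points):
    """Nearest-neighbour absolute distances from a moulded-model scan to the
    original patient point cloud (both assumed to be registered already)."""
    dist, _ = cKDTree(reference_points).query(model_points, k=1)
    return {"max": dist.max(), "mean": dist.mean(), "std": dist.std()}
```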

  18. Simultaneous colour visualizations of multiple ALS point cloud attributes for land cover and vegetation analysis

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert

    2014-05-01

    LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered best results: Echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width usually have colours contrasting from terrain or vegetation. This allows efficient visual interpretation of the point cloud in planar, profile and 3D views since it reduces crowding of the scene and delivers intuitive contextual information. The resulting visualization has proved useful for vegetation analysis for habitat mapping, and can also be applied as a first step for point cloud level classification. An interactive demonstration of the visualization script is shown during poster attendance, including the opportunity to view your own point cloud sample files.
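    The colour mapping described above can be sketched as a simple per-attribute rescaling into the 8-bit RGB fields of the point records. The scaling ranges below are placeholders and must be tuned per dataset, as the abstract emphasises; the actual OPALS batch script is not reproduced here.

```python
import numpy as np

def attributes_to_rgb(amplitude, echo_width, height_above_dtm,
                      amp_range=(0.0, 200.0), width_range=(1.0, 6.0), h_range=(0.0, 30.0)):
    """Map three per-point attributes to 8-bit RGB:
    echo amplitude -> Red, echo width -> Green, normalised height -> Blue."""
    def scale(values, lo, hi):
        return np.clip((values - lo) / (hi - lo), 0.0, 1.0) * 255.0
    r = scale(np.asarray(amplitude, float), *amp_range)
    g = scale(np.asarray(echo_width, float), *width_range)
    b = scale(np.asarray(height_above_dtm, float), *h_range)
    return np.stack([r, g, b], axis=1).astype(np.uint8)
```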

  19. a Fast Method for Measuring the Similarity Between 3d Model and 3d Point Cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications, such as 3D object retrieval and inverse procedural modelling. In the proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between the points in the point cloud and the model. In order to avoid the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial-similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and on laser scanning data.
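    As a toy illustration of the SimMC idea only (the paper's distance weighting is replaced by a plain mean), the ratio of model surface area to model-to-cloud distance can be sketched as follows; model_samples, model_area and cloud are assumed inputs.

```python
import numpy as np
from scipy.spatial import cKDTree

def sim_mc(model_samples, model_area, cloud, eps=1e-9):
    """Simplified SimMC-style score: model surface area divided by the mean
    distance from points sampled on the model surface to the scanned cloud.
    Larger values indicate a closer (more similar) model."""
    dist, _ = cKDTree(cloud).query(model_samples, k=1)
    dist_mc = dist.mean()          # unweighted stand-in for the weighted DistMC
    return model_area / (dist_mc + eps)
```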

  20. Integration of MODIS Snow, Cloud and Land Area Coverage Data with SNOTEL to Generate Inter-Annual and Within-Season Snow Depletion Curves and Maps

    NASA Astrophysics Data System (ADS)

    Qualls, R. J.; Woodruff, C.

    2017-12-01

    The behavior of inter-annual trends in mountain snow cover would represent extremely useful information for drought and climate change assessment; however, individual data sources exhibit specific limitations for characterizing this behavior. For example, SNOTEL data provide time series point values of Snow Water Equivalent (SWE), but lack spatial content apart from that contained in a sparse network of point values. Satellite observations in the visible spectrum can provide snow covered area, but not SWE at present, and are limited by cloud cover which often obscures visibility of the ground, especially during the winter and spring in mountainous areas. Cloud cover, therefore, often limits both temporal and spatial coverage of satellite remote sensing of snow. Among the platforms providing the best combination of temporal and spatial coverage to overcome the cloud obscuration problem by providing frequent overflights, the Aqua and Terra satellites carrying the MODIS instrument package provide 500 m, daily resolution observations of snow cover. These were only launched in 1999 and the early 2000's, thus limiting the historical period over which these data are available. A hybrid method incorporating SNOTEL and MODIS data has been developed which accomplishes cloud removal, and enables determination of the time series of watershed spatial snow cover when either SNOTEL or MODIS data are available. This allows one to generate spatial snow cover information for watersheds with SNOTEL stations for periods both before and after the launch of the Aqua and Terra satellites, extending the spatial information about snow cover over the period of record of the SNOTEL stations present in a watershed. This method is used to quantify the spatial time series of snow over the 9000 km2 Upper Snake River watershed and to evaluate inter-annual trends in the timing, rate, and duration of melt over the nearly 40 year period from the early 1980's to the present, and shows promise for generating snow cover depletion maps for drought and climate change scenarios.

  1. Accurate documentation in cultural heritage by merging TLS and high-resolution photogrammetric data

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, Pierre; Alby, Emmanuel; Assali, Pierre; Poitevin, Valentin; Hullo, Jean-François; Smigiel, Eddie

    2011-07-01

    Several recording techniques are used together in Cultural Heritage Documentation projects. The main purpose of the documentation and conservation works is usually to generate geometric and photorealistic 3D models for both accurate reconstruction and visualization purposes. The recording approach discussed in this paper is based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and criteria as geometry, texture, accuracy, resolution, recording and processing time are often compared. TLS techniques (time of flight or phase shift systems) are often used for the recording of large and complex objects or sites. Point cloud generation from images by dense stereo or multi-image matching can be used as an alternative or a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low cost one as the acquisition system is limited to a digital camera and a few accessories only. Indeed, the stereo matching process offers a cheap, flexible and accurate solution to get 3D point clouds and textured models. The calibration of the camera allows the processing of distortion free images, accurate orientation of the images, and matching at the subpixel level. The main advantage of this photogrammetric methodology is to get at the same time a point cloud (the resolution depends on the size of the pixel on the object), and therefore an accurate meshed object with its texture. After the matching and processing steps, we can use the resulting data in much the same way as a TLS point cloud, but with really better raster information for textures. The paper will address the automation of recording and processing steps, the assessment of the results, and the deliverables (e.g. PDF-3D files). Visualization aspects of the final 3D models are presented. Two case studies with merged photogrammetric and TLS data are finally presented: - The Gallo-roman Theatre of Mandeure, France); - The Medieval Fortress of Châtel-sur-Moselle, France), where a network of underground galleries and vaults has been recorded.

  2. From the air to digital landscapes: generating reach-scale topographic models from aerial photography in gravel-bed rivers

    NASA Astrophysics Data System (ADS)

    Vericat, Damià; Narciso, Efrén; Béjar, Maria; Tena, Álvaro; Brasington, James; Gibbins, Chris; Batalla, Ramon J.

    2014-05-01

    Digital Terrain Models are fundamental to characterise landscapes, to support numerical modelling and to monitor topographic changes. Recent advances in topography, remote sensing and geomatics are providing new opportunities to obtain high density/quality and rapid topographic data. In this paper we present an integrated methodology to rapidly obtain reach scale topographic models of fluvial systems. This methodology has been tested and is being applied to develop event-scale terrain models of a 11-km river reach in the highly dynamic Upper Cinca (NE Iberian Peninsula). This research is conducted in the background of the project MorphSed. The methodology integrates (a) the acquisition of dense point clouds of the exposed floodplain (aerial photography and digital photogrammetry); (b) the registration of all observations to the same coordinate system (using RTK-GPS surveyed GCPs); (c) the acquisition of bathymetric data (using aDcp measurements integrated with RTK-GPS); (d) the intelligent decimation of survey observations (using the open source TopCat toolkit) and, finally, (e) data fusion (elaborating Digital Elevation Models). In this paper special emphasis is given to the acquisition and registration of point clouds. 3D point clouds are obtained from aerial photography and by means of automated digital photogrammetry. Aerial photographs are taken at 275 meters above the ground by means of a SLR digital camera manually operated from an autogyro. Four flight paths are defined in order to cover the 11 km long and 500 meters wide river reach. A total of 45 minutes are required to fly along these paths. Camera has been previously calibrated with the objective to ensure image resolution at around 5 cm. A total of 220 GCPs are deployed and RTK-GPS surveyed before the flight is conducted. Two people and one full workday are necessary to deploy and survey the full set of GCPs. Field data acquisition may be finalised in less than 2 days. Structure-from-Motion is subsequently applied in the lab using Agisoft PhotoScan, photographs are aligned and a 3d point cloud is generated. GCPs are used to geo-register all point clouds. This task may be time consuming since GCPs need to be identified in at least two of the pictures. A first automatic identification of GCPs positions is performed in the rest of the photos, although user supervision is necessary. Preliminary results show as geo-registration errors between 0.08 and and 0.10 meters can be obtained. The number of GCPs is being degraded and the quality of the point cloud assessed based on check points (the extracted GCPs). A critical analysis of GCPs density and scene locations is being performed (results in preparation). The results show that automated digital photogrammetry may provide new opportunities in the acquisition of topographic data at multiple temporal and spatial scales, being competitive with other more expensive techniques that, in turn, may require much more time to acquire field observations. SfM offers new opportunities to develop event-scale terrain models of fluvial systems suitable for hydraulic modelling and to study topographic change in highly dynamic environments.

  3. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds†

    PubMed Central

    Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun

    2016-01-01

    Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. State-of-the-art mobile mapping systems equipped with laser scanners, named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology for generating such maps. Road markings and road edges are necessary information for creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road point extraction; (3) road marking extraction and refinement. In the preprocessing step, isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road point extraction step, seed road points are first extracted by the Height Difference (HD) between the trajectory data and the road surface, then the full set of road points is extracted from the point clouds by moving least squares line fitting. In the road marking extraction and refinement step, the intensity values of the road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by a segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from the MLS data and road markings are well extracted from the road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road marking extraction method proposed in this paper provides a promising alternative for offline road marking extraction from MLS data. PMID:27322279
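    The per-scan-line part of the road marking extraction (intensity smoothing followed by edge detection) can be sketched as below. This is a simplified stand-in: a fixed-size median filter replaces the paper's dynamic window filter, the edge constraint and FRMP refinement are omitted, and the threshold values are placeholders.

```python
import numpy as np
from scipy.ndimage import median_filter

def marking_candidates_on_scan_line(intensity, window=7, grad_thresh=20.0):
    """Flag road-marking candidates on one scan line of road points:
    smooth intensities with a sliding median filter, then mark the points
    lying between a rising and the next falling intensity edge."""
    smooth = median_filter(np.asarray(intensity, float), size=window)
    grad = np.diff(smooth)
    rising = np.where(grad > grad_thresh)[0]
    falling = np.where(grad < -grad_thresh)[0]
    mask = np.zeros(len(intensity), dtype=bool)
    for r in rising:
        after = falling[falling > r]
        if len(after):
            mask[r + 1:after[0] + 1] = True   # points between an up- and a down-edge
    return mask
```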

  4. Pointo - a Low Cost Solution to Point Cloud Processing

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Winkler, S.

    2017-11-01

    With advances in technology, access to data, especially 3D point cloud data, becomes more and more of an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners or with very time-consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and usually comes as very large packages containing a variety of methods and tools. This results in software that is expensive to acquire and also difficult to use; the difficulty is caused by the complicated user interfaces required to accommodate a long list of features. The aim of these complex packages is to provide a powerful tool for a specific group of specialists, yet most of these features are not required by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, these packages generally rely on expensive, modern hardware and are only compatible with one specific operating system. Many point cloud customers are not point cloud processing experts and are not willing to pay the high acquisition costs of such software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce the cost and complexity of the software, our approach focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at the same time. Our simple, user-oriented design improves the user experience and allows us to optimise our methods for the creation of efficient software. In this paper we introduce the Pointo family as a series of connected programs that provide easy-to-use tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family, providing fast and efficient visualization with the ability to add annotation and documentation to the point clouds.

  5. Multiseasonal Tree Crown Structure Mapping with Point Clouds from OTS Quadrocopter Systems

    NASA Astrophysics Data System (ADS)

    Hese, S.; Behrendt, F.

    2017-08-01

    OTS (off-the-shelf) quadrocopter systems provide a cost-effective (below 2000 Euro), flexible and mobile platform for high-resolution point cloud mapping. Various studies have shown the full potential of these small and flexible platforms. Especially in very tight and complex 3D environments, the automatic obstacle avoidance, low copter weight, long flight times and precise manoeuvring are important advantages of these small OTS systems in comparison with larger octocopter systems. This study examines the potential of the DJI Phantom 4 Pro series and the Phantom 3A series for within-stand and forest tree crown 3D point cloud mapping, using both within-stand oblique imaging at different altitude levels and data captured from a nadir perspective. On a test site in Brandenburg/Germany a beech crown was selected and measured at 3 different altitude levels in Point Of Interest (POI) mode with oblique data capturing, and one nadir mosaic was derived with 85/85 % overlap using the Drone Deploy automatic mapping software. Three different flight campaigns were performed, one in September 2016 (leaf-on), one in March 2017 (leaf-off) and one in May 2017 (leaf-on), to derive point clouds from different crown structure and phenological situations - covering the leaf-on and leaf-off status of the tree crown. After height correction, the point clouds were used with GPS geo-referencing to calculate voxel-based densities on 50 × 10 × 10 cm voxel definitions, using a topological network of chessboard image objects in 0.5 m height steps in an object-based image processing environment. The comparison between leaf-off and leaf-on status was done on the voxel definitions, comparing the attributed point densities per volume and plotting the resulting values as a function of distance to the crown centre. In the leaf-off status, SFM (structure from motion) algorithms clearly identified the central stem and also the secondary branch systems. While the penetration into the crown structure is limited in the leaf-on status (the point cloud is mainly a description of the interpolated crown surface), the visibility of the internal crown structure in the leaf-off status also allows mapping of the internal tree structure up to, and stopping at, the secondary branch system. When combined, the leaf-on and leaf-off point clouds generate a comprehensive tree crown structure description that allows low-cost and detailed 3D crown structure mapping and potentially precise biomass mapping and/or internal structural differentiation of deciduous tree species types. Compared to TLS (Terrestrial Laser Scanning) based measurements, the costs are negligible and in the range of 1500-2500 €. This suggests the approach for low-cost but fine-scale in-situ applications and/or projects where TLS measurements cannot be derived and for less dense forest stands where POI flights can be performed. This study used the in-copter GPS measurements for geo-referencing; better absolute geo-referencing results will be obtained with DGPS reference points. The study however clearly demonstrates the potential of OTS very low-cost copter systems and the image-attributed GPS measurements of the copter for the automatic calculation of complex 3D point clouds in a multi-temporal tree crown mapping context.
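    The voxel-based density attribution can be approximated with a simple regular-grid binning of the height-corrected points. The sketch below is not the object-based chessboard workflow used in the study; voxel size and grid origin are assumed inputs.

```python
import numpy as np

def voxel_point_density(points, origin, voxel_size=(0.10, 0.10, 0.50)):
    """Count points per voxel on a regular grid (here 10 x 10 x 50 cm in x, y, z),
    as a simple stand-in for the voxel-based density attribution described above."""
    idx = np.floor((points - np.asarray(origin)) / np.asarray(voxel_size)).astype(int)
    keys, counts = np.unique(idx, axis=0, return_counts=True)
    return {tuple(k): int(c) for k, c in zip(keys, counts)}

# Density as a function of horizontal distance to the crown centre could then be
# obtained by grouping the voxel keys by their (x, y) distance from the stem axis.
```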

  6. Congruence analysis of point clouds from unstable stereo image sequences

    NASA Astrophysics Data System (ADS)

    Jepping, C.; Bethmann, F.; Luhmann, T.

    2014-06-01

    This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focusses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for a reliable handling of occlusions and other disturbances that may occur. The second objective of this research is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another by using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.

  7. Close Range Photogrammetry Applied to the Documentation of AN Archaeological Site in Gaza Strip, Palestine

    NASA Astrophysics Data System (ADS)

    Alby, E.; Elter, R.; Ripoche, C.; Quere, N.; de Strasbourg, INSA

    2013-07-01

    In a geopolitically very complex context such as the Gaza Strip, the enhancement of an archaeological site has to be dealt with. This site is the monastery of St. Hilarion. Enabling the cultural appropriation of a place with several identified phases of occupation requires extensive archaeological excavation. Excavating in this geographical area means carrying out emergency excavations, so the aim of such a project can be questioned for each mission. Real-estate pressure also motivates the documentation, because the large population density does not allow systematic studies of the underground before construction projects; it was indeed during the construction of a road that the site was discovered. The site measures 150 m by 80 m and is located on a sand dune, 300 m from the sea. To implement the survey, four different levels of detail were defined for terrestrial photogrammetry. The first level concerns elements similar to objects: capitals, fragments of columns or tiles, for example. Modelling small objects requires the acquisition of very dense point clouds (density: 1 point per 1 mm on average). The object must then fill as much of the camera sensor as possible, while retaining in the field of view a reference pattern for scaling the generated point cloud. The pictures are taken at a short distance from the object, using the images at full resolution. The main obstacle to the modelling of objects is the presence of noise, partly due to the studied materials (sand, smooth rock), which do not favour the detection of good-quality points of interest. Pre-processing of the cloud must be done meticulously, since removing points on the surface of a small object creates a hole, i.e. a lack of information useful to the resulting mesh. Level 2 focuses on stratigraphic units such as mosaics. The monastery of St. Hilarion contains thirteen floors, which were documented years ago by silver-halide photographs that were scanned later. The modelling of the pavements aims to obtain a three-dimensional model of each mosaic, in particular to analyse the subsidence to which it may be subjected. The dense point cloud can go further by including the geometric shapes of the pavement. The mesh computed from the high-density, colourised point cloud is sufficient for the final rendering. Levels 3 and 4 cover the survey and representation of loci and sectors. Their modelling can be done by coloured or textured meshes using a generic pattern, but also by geometric primitives. This method requires segmenting simple geometrical elements and creating a surface geometry by analysis of the sampled points. Statistical tools allow the extraction of planes meeting the requirements of the operator, who can monitor the quality of the final rendering quantitatively. Each level has constraints on the accuracy of the survey and the types of representation, especially from the point clouds, which are detailed in the complete article.

  8. Study of Huizhou architecture component point cloud in surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Runmei; Wang, Guangyin; Ma, Jixiang; Wu, Yulu; Zhang, Guangbin

    2017-06-01

    Surface reconstruction software packages suffer from problems such as complicated operation on point cloud data, too many interaction definitions, and overly stringent requirements on the input data; as a result, they have not been widely adopted so far. This paper selects the unique Huizhou Architecture chuandou wooden beam framework as the research object, and presents a complete implementation chain covering point cloud data acquisition, point cloud preprocessing, and finally surface reconstruction. Firstly, the acquired point cloud data are preprocessed, including segmentation and filtering. Secondly, the surface normals are estimated directly from the point cloud dataset. Finally, surface reconstruction is studied by using the Greedy Projection Triangulation algorithm. Comparing the reconstructed model with those produced by existing three-dimensional surface reconstruction software, the results show that the proposed scheme is smoother, more time efficient and more portable.
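
    A rough sketch of such a preprocessing-plus-reconstruction pipeline is given below. It is not the authors' code: it uses the Open3D library and substitutes ball-pivoting triangulation for the Greedy Projection Triangulation of PCL named in the record, and the file name and all parameters are assumptions.

    ```python
    import open3d as o3d

    # Load the raw scan of the wooden beam framework (hypothetical file name).
    pcd = o3d.io.read_point_cloud("beam_framework.ply")

    # Preprocessing: downsample and remove statistical outliers (the filtering step).
    pcd = pcd.voxel_down_sample(voxel_size=0.005)
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Estimate surface normals directly from the point cloud.
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
    pcd.orient_normals_consistent_tangent_plane(k=15)

    # Triangulate the surface; ball pivoting stands in for greedy projection here.
    radii = o3d.utility.DoubleVector([0.01, 0.02, 0.04])
    mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)
    o3d.io.write_triangle_mesh("beam_framework_mesh.ply", mesh)
    ```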

  9. A voting-based statistical cylinder detection framework applied to fallen tree mapping in terrestrial laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2017-07-01

    This paper introduces a statistical framework for detecting cylindrical shapes in dense point clouds. We target the application of mapping fallen trees in datasets obtained through terrestrial laser scanning. This is a challenging task due to the presence of ground vegetation, standing trees, DTM artifacts, as well as the fragmentation of dead trees into non-collinear segments. Our method shares the concept of voting in parameter space with the generalized Hough transform, however two of its significant drawbacks are improved upon. First, the need to generate samples on the shape's surface is eliminated. Instead, pairs of nearby input points lying on the surface cast a vote for the cylinder's parameters based on the intrinsic geometric properties of cylindrical shapes. Second, no discretization of the parameter space is required: the voting is carried out in continuous space by means of constructing a kernel density estimator and obtaining its local maxima, using automatic, data-driven kernel bandwidth selection. Furthermore, we show how the detected cylindrical primitives can be efficiently merged to obtain object-level (entire tree) semantic information using graph-cut segmentation and a tailored dynamic algorithm for eliminating cylinder redundancy. Experiments were performed on 3 plots from the Bavarian Forest National Park, with ground truth obtained through visual inspection of the point clouds. It was found that relative to sample consensus (SAC) cylinder fitting, the proposed voting framework can improve the detection completeness by up to 10 percentage points while maintaining the correctness rate.
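
    The continuous-space voting step can be pictured as follows: assuming that an array of parameter votes has already been produced by the point pairs (the vote-generation geometry is omitted), a kernel density estimator is built over the votes and its local maxima are taken as cylinder candidates. This is a generic sketch rather than the authors' framework; in particular, the bandwidth here is simply SciPy's default rule instead of the data-driven selection described in the paper.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde
    from scipy.signal import argrelextrema

    # Hypothetical 1-D votes (e.g. for the cylinder radius) cast by point pairs.
    rng = np.random.default_rng(0)
    votes = np.concatenate([rng.normal(0.15, 0.01, 400),   # a thinner stem
                            rng.normal(0.40, 0.02, 250)])  # a thicker fallen trunk

    kde = gaussian_kde(votes)                      # continuous density over parameter space
    grid = np.linspace(votes.min(), votes.max(), 1000)
    density = kde(grid)

    # Local maxima of the density are the detected parameter candidates.
    peaks = argrelextrema(density, np.greater)[0]
    for i in peaks:
        print(f"candidate radius ~ {grid[i]:.3f} m (density {density[i]:.1f})")
    ```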

  10. Study of texture stitching in 3D modeling of lidar point cloud based on per-pixel linear interpolation along loop line buffer

    NASA Astrophysics Data System (ADS)

    Xu, Jianxin; Liang, Hong

    2013-07-01

    Terrestrial laser scanning creates a point cloud composed of thousands or millions of 3D points. Through pre-processing, generating TINs and mapping texture, a 3D model of a real object is obtained. When the object is too large, it is separated into several parts. This paper mainly focuses on the problem of uneven gray levels at the intersection of two adjacent textures. A new algorithm, per-pixel linear interpolation along a loop line buffer, is presented in the paper. The experimental data derive from a point cloud of the stone lion situated in front of the west gate of Henan Polytechnic University. The modeling workflow is composed of three steps: first, the large object is separated into two parts; then each part is modeled; finally, the whole 3D model of the stone lion is assembled from the two part models. When the two part models are combined, there is an obvious fissure line in the overlapping section of the two adjacent textures. Some researchers decrease the brightness values of all pixels of the two adjacent textures with various algorithms; however, these algorithms are not fully effective and the fissure line still exists. The gray unevenness of the two adjacent textures is handled by the algorithm in this paper: the fissure line in the overlapping textures is eliminated and the gray transition across the overlap becomes smoother.
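
    The essential idea of removing the fissure line, blending the gray values of the two textures with per-pixel linear weights across their overlap, can be sketched as plain linear feathering (a generic simplification, not the loop-line-buffer implementation of the paper; the array shapes and the overlap width are assumptions):

    ```python
    import numpy as np

    def blend_overlap(tex_left, tex_right, overlap):
        """Linearly blend two grayscale textures over an `overlap`-pixel-wide strip."""
        h, w = tex_left.shape
        out = np.zeros((h, 2 * w - overlap), dtype=np.float32)
        out[:, :w - overlap] = tex_left[:, :w - overlap]
        out[:, w:] = tex_right[:, overlap:]
        # Per-pixel weights ramp from 1 (left texture) to 0 (right texture) across the overlap.
        alpha = np.linspace(1.0, 0.0, overlap)[None, :]
        out[:, w - overlap:w] = alpha * tex_left[:, w - overlap:] + (1 - alpha) * tex_right[:, :overlap]
        return out

    # Example: two 256x256 texture strips sharing a 64-pixel-wide overlap.
    left = np.full((256, 256), 120, dtype=np.float32)
    right = np.full((256, 256), 150, dtype=np.float32)
    mosaic = blend_overlap(left, right, overlap=64)
    ```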

  11. A graph signal filtering-based approach for detection of different edge types on airborne lidar data

    NASA Astrophysics Data System (ADS)

    Bayram, Eda; Vural, Elif; Alatan, Aydin

    2017-10-01

    Airborne Laser Scanning is a well-known remote sensing technology, which provides a dense and highly accurate, yet unorganized point cloud of earth surface. During the last decade, extracting information from the data generated by airborne LiDAR systems has been addressed by many studies in geo-spatial analysis and urban monitoring applications. However, the processing of LiDAR point clouds is challenging due to their irregular structure and 3D geometry. In this study, we propose a novel framework for the detection of the boundaries of an object or scene captured by LiDAR. Our approach is motivated by edge detection techniques in vision research and it is established on graph signal filtering which is an exciting and promising field of signal processing for irregular data types. Due to the convenient applicability of graph signal processing tools on unstructured point clouds, we achieve the detection of the edge points directly on 3D data by using a graph representation that is constructed exclusively to answer the requirements of the application. Moreover, considering the elevation data as the (graph) signal, we leverage aerial characteristic of the airborne LiDAR data. The proposed method can be employed both for discovering the jump edges on a segmentation problem and for exploring the crease edges on a LiDAR object on a reconstruction/modeling problem, by only adjusting the filter characteristics.
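
    To make the graph-signal view concrete, the sketch below builds a k-nearest-neighbour graph over the LiDAR points, treats elevation as the graph signal and applies the combinatorial graph Laplacian as a simple high-pass filter; points with a large response are flagged as edge candidates. This is a generic illustration with assumed parameters, not the filter design proposed in the paper.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from sklearn.neighbors import kneighbors_graph

    def edge_candidates(points, k=8, quantile=0.95):
        """points: (N, 3) LiDAR coordinates; returns a boolean mask of edge candidates."""
        xy, z = points[:, :2], points[:, 2]                   # elevation is the graph signal
        W = kneighbors_graph(xy, n_neighbors=k, mode='connectivity')
        W = 0.5 * (W + W.T)                                   # symmetrize the k-NN graph
        L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W   # combinatorial Laplacian
        response = np.abs(L @ z)                              # high-pass response on elevation
        return response > np.quantile(response, quantile)

    # Usage with a random toy cloud (replace with real airborne LiDAR points).
    pts = np.random.rand(1000, 3)
    mask = edge_candidates(pts)
    ```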

  12. Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.

    PubMed

    De Queiroz, Ricardo; Chou, Philip A

    2016-06-01

    In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time and, with the recent possibility of real-time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state-of-the-art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.
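
    The flavour of such a weight-adaptive hierarchical sub-band transform can be conveyed by a single Haar-like merging step applied to the colors of two voxels, where the weights count the points behind each color. The sketch below is a simplified illustration of this building block, not the published codec.

    ```python
    import numpy as np

    def haar_merge(c1, w1, c2, w2):
        """Weight-adaptive Haar step: merge the colors of two voxels into a low-pass
        coefficient (carried to the next hierarchy level) and a high-pass coefficient
        (entropy coded, e.g. under a Laplace model). w1 and w2 count the points behind
        each color, so heavier voxels contribute more to the low-pass average."""
        a, b = np.sqrt(w1), np.sqrt(w2)
        n = np.sqrt(w1 + w2)
        low = (a * c1 + b * c2) / n
        high = (b * c1 - a * c2) / n
        return low, high, w1 + w2

    # Example: merge two luminance values with unequal occupancy weights.
    low, high, w = haar_merge(np.array([100.0]), 3, np.array([108.0]), 1)
    print(low, high, w)
    ```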

  13. Rapid mapping of ultrafine fault zone topography with structure from motion

    USGS Publications Warehouse

    Johnson, Kendra; Nissen, Edwin; Saripalli, Srikanth; Arrowsmith, J. Ramón; McGarey, Patrick; Scharer, Katherine M.; Williams, Patrick; Blisniuk, Kimberly

    2014-01-01

    Structure from Motion (SfM) generates high-resolution topography and coregistered texture (color) from an unstructured set of overlapping photographs taken from varying viewpoints, overcoming many of the cost, time, and logistical limitations of Light Detection and Ranging (LiDAR) and other topographic surveying methods. This paper provides the first investigation of SfM as a tool for mapping fault zone topography in areas of sparse or low-lying vegetation. First, we present a simple, affordable SfM workflow, based on an unmanned helium balloon or motorized glider, an inexpensive camera, and semiautomated software. Second, we illustrate the system at two sites on southern California faults covered by existing airborne or terrestrial LiDAR, enabling a comparative assessment of SfM topography resolution and precision. At the first site, an ∼0.1 km² alluvial fan on the San Andreas fault, a colored point cloud of density mostly >700 points/m² and a 3 cm digital elevation model (DEM) and orthophoto were produced from 233 photos collected ∼50 m above ground level. When a few global positioning system ground control points are incorporated, closest point vertical distances to the much sparser (∼4 points/m²) airborne LiDAR point cloud are mostly <3 cm. At the second site, a colored point cloud of density mostly >530 points/m² and a 2 cm DEM and orthophoto were produced from 450 photos taken from ∼60 m above ground level. Closest point vertical distances to existing terrestrial LiDAR data of comparable density are mostly <6 cm. Each SfM survey took ∼2 h to complete and several hours to generate the scene topography and texture. SfM greatly facilitates the imaging of subtle geomorphic offsets related to past earthquakes as well as rapid response mapping or long-term monitoring of faulted landscapes.

  14. Measurement and reconstruction of the leaflet geometry for a pericardial artificial heart valve.

    PubMed

    Jiang, Hongjun; Campbell, Gord; Xi, Fengfeng

    2005-03-01

    This paper describes the measurement and reconstruction of the leaflet geometry for a pericardial heart valve. Tasks involved include mapping the leaflet geometries by laser digitizing and reconstructing the 3D freeform leaflet surface based on a laser-scanned profile. The challenge is to design a prosthetic valve that maximizes the benefits offered to the recipient as compared to the normally operating naturally-occurring valve. This research was prompted by the fact that artificial heart valve bioprostheses do not provide long-life durability comparable to the natural heart valve, together with the anticipated benefits associated with defining the valve geometries, especially the leaflet geometries for the bioprosthetic and human valves, in order to create a replicate valve fabricated from synthetic materials. Our method applies the concept of reverse engineering in order to reconstruct the freeform surface geometry. A Brown & Sharpe coordinate measuring machine (CMM) equipped with a HyMARC laser-digitizing system was used to measure the leaflet profiles of a Baxter Carpentier-Edwards pericardial heart valve. The computer software Polyworks was used to pre-process the raw data obtained from the scanning, which included merging images, eliminating duplicate points, and adding interpolated points. Three methods are presented in this paper to reconstruct the freeform leaflet surface: creating a mesh model from the cloud points, creating a freeform surface from the cloud points, and generating a freeform surface by B-splines. The mesh model created using Polyworks can be used for rapid prototyping and visualization. Fitting a freeform surface to the cloud points is straightforward, but the rendering of a smooth surface is usually unpredictable. A surface fitted by a group of B-splines fitted to the cloud points was found to be much smoother. This method offers the possibility of manually adjusting the surface curvature locally. However, the process is complex and requires additional manipulation. Finally, this paper presents a reverse-engineered design for the pericardial heart valve which contains three identical leaflets with reconstructed geometry.

  15. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    2016-06-15

    Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurement for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinically acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared-error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real-time and robust surface reconstruction method on point clouds acquired by photogrammetry systems. It serves as an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
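
    The sparse-regression idea, approximating a newly acquired point cloud as a sparse linear combination of training clouds that are already in point correspondence, can be sketched with an off-the-shelf L1-regularized solver. This is a generic stand-in under assumed shapes and penalty, not the SR/MSR implementation evaluated in the abstract:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_reconstruct(training_clouds, target_cloud, alpha=0.01):
        """training_clouds: (n_train, N, 3) corresponded point clouds;
        target_cloud: (N, 3) new acquisition. Returns reconstruction and weights."""
        n_train = training_clouds.shape[0]
        A = training_clouds.reshape(n_train, -1).T        # columns = flattened training clouds
        y = target_cloud.reshape(-1)
        model = Lasso(alpha=alpha, fit_intercept=False)   # sparse combination weights
        model.fit(A, y)
        recon = A @ model.coef_                           # propagate combination to the surface
        return recon.reshape(-1, 3), model.coef_

    # Toy usage with synthetic clouds.
    train = np.random.rand(20, 500, 3)
    target = train[0] + 0.01 * np.random.randn(500, 3)
    surface, weights = sparse_reconstruct(train, target)
    ```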

  16. FPFH-based graph matching for 3D point cloud registration

    NASA Astrophysics Data System (ADS)

    Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua

    2018-04-01

    Correspondence detection is a vital step in point cloud registration and it can help to obtain a reliable initial alignment. In this paper, we put forward an advanced point feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial possible correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by a simulated annealing algorithm to obtain the final group of correct correspondences. Finally, we present a novel set partitioning method which can transform the NP-hard optimization problem into an O(n³)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method can obtain better results in terms of both accuracy and time cost compared with other point cloud registration methods.
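
    A simplified view of the FPFH-based correspondence step (leaving out the graph matching and simulated annealing that are the actual contribution of the paper) can be written with the FPFH features of recent Open3D versions and a mutual nearest-neighbour check in feature space; the voxel size and search radii are assumptions.

    ```python
    import numpy as np
    import open3d as o3d
    from scipy.spatial import cKDTree

    def fpfh_correspondences(source, target, voxel=0.05):
        """Return index pairs (i, j) of mutually nearest FPFH descriptors of the
        downsampled source and target clouds."""
        feats = []
        for pcd in (source, target):
            down = pcd.voxel_down_sample(voxel)
            down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
            f = o3d.pipelines.registration.compute_fpfh_feature(
                down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
            feats.append(np.asarray(f.data).T)          # (n_points, 33) descriptors
        src_f, tgt_f = feats
        fwd = cKDTree(tgt_f).query(src_f)[1]            # nearest target feature for each source point
        bwd = cKDTree(src_f).query(tgt_f)[1]            # nearest source feature for each target point
        return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]   # keep mutual matches only
    ```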

  17. Multi-Depth-Map Raytracing for Efficient Large-Scene Reconstruction.

    PubMed

    Arikan, Murat; Preiner, Reinhold; Wimmer, Michael

    2016-02-01

    With the enormous advances of the acquisition technology over the last years, fast processing and high-quality visualization of large point clouds have gained increasing attention. Commonly, a mesh surface is reconstructed from the point cloud and a high-resolution texture is generated over the mesh from the images taken at the site to represent surface materials. However, this global reconstruction and texturing approach becomes impractical with increasing data sizes. Recently, due to its potential for scalability and extensibility, a method for texturing a set of depth maps in a preprocessing and stitching them at runtime has been proposed to represent large scenes. However, the rendering performance of this method is strongly dependent on the number of depth maps and their resolution. Moreover, for the proposed scene representation, every single depth map has to be textured by the images, which in practice heavily increases processing costs. In this paper, we present a novel method to break these dependencies by introducing an efficient raytracing of multiple depth maps. In a preprocessing phase, we first generate high-resolution textured depth maps by rendering the input points from image cameras and then perform a graph-cut based optimization to assign a small subset of these points to the images. At runtime, we use the resulting point-to-image assignments (1) to identify for each view ray which depth map contains the closest ray-surface intersection and (2) to efficiently compute this intersection point. The resulting algorithm accelerates both the texturing and the rendering of the depth maps by an order of magnitude.

  18. Controlled generation of large volumes of atmospheric clouds in a ground-based environmental chamber

    NASA Technical Reports Server (NTRS)

    Hettel, H. J.; Depena, R. G.; Pena, J. A.

    1975-01-01

    Atmospheric clouds were generated in a 23,000 cubic meter environmental chamber as the first step in a two part study on the effects of contaminants on cloud formation. The generation procedure was modeled on the terrestrial generation mechanism so that naturally occurring microphysics mechanisms were operative in the cloud generation process. Temperature, altitude, liquid water content, and convective updraft velocity could be selected independently over the range of terrestrially realizable clouds. To provide cloud stability, a cotton muslin cylinder 29.3 meters in diameter and 24.2 meters high was erected within the chamber and continuously wetted with water at precisely the same temperature as the cloud. The improved instrumentation which permitted fast, precise, and continual measurements of cloud temperature and liquid water content is described.

  19. Smart Point Cloud: Definition and Remaining Challenges

    NASA Astrophysics Data System (ADS)

    Poux, F.; Hallot, P.; Neuville, R.; Billen, R.

    2016-10-01

    Dealing with coloured point clouds acquired from terrestrial laser scanners, this paper identifies remaining challenges for a new data structure: the smart point cloud. This concept arises from the statement that massive and discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data associated with the heterogeneity and temporality of such datasets is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. A review of feature detection, machine learning frameworks and database systems indexed both for mining queries and data visualisation is presented. Based on existing approaches, we propose a new 3-block flexible framework around device expertise, analytic expertise and domain-based reflection. This contribution serves as the first step for the realisation of a comprehensive smart point cloud data structure.

  20. Motion-Compensated Compression of Dynamic Voxelized Point Clouds.

    PubMed

    De Queiroz, Ricardo L; Chou, Philip A

    2017-05-24

    Dynamic point clouds are a potential new frontier in visual communication systems. A few articles have addressed the compression of point clouds, but very few references exist on exploring temporal redundancies. This paper presents a novel motion-compensated approach to encoding dynamic voxelized point clouds at low bit rates. A simple coder breaks the voxelized point cloud at each frame into blocks of voxels. Each block is either encoded in intra-frame mode or is replaced by a motion-compensated version of a block in the previous frame. The decision is optimized in a rate-distortion sense. In this way, both the geometry and the color are encoded with distortion, allowing for reduced bit-rates. In-loop filtering is employed to minimize compression artifacts caused by distortion in the geometry information. Simulations reveal that this simple motion compensated coder can efficiently extend the compression range of dynamic voxelized point clouds to rates below what intra-frame coding alone can accommodate, trading rate for geometry accuracy.
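
    The per-block decision can be summarized as choosing, for each block of voxels, whichever of intra coding or motion compensation yields the lower Lagrangian cost. The toy sketch below shows only that decision rule, with constant rates and a squared-error distortion standing in for the actual coder:

    ```python
    import numpy as np

    def choose_mode(block, mc_block, lam=0.05, rate_intra=256.0, rate_inter=16.0):
        """Pick the coding mode of one voxel block in a rate-distortion sense.
        The rates are hypothetical constants and the distortion is a plain squared
        color error; the real coder measures both from its actual bitstream."""
        d_intra = 0.0                                       # intra mode re-encodes the block itself
        d_inter = float(np.sum((block - mc_block) ** 2))    # residual left by motion compensation
        cost_intra = d_intra + lam * rate_intra
        cost_inter = d_inter + lam * rate_inter
        return "intra" if cost_intra <= cost_inter else "inter"

    # Toy usage: a block of eight voxel colors versus its motion-compensated prediction.
    block = np.random.rand(8, 3)
    pred = block + 0.02 * np.random.randn(8, 3)
    print(choose_mode(block, pred))
    ```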

  1. Solubilization of phenanthrene above cloud point of Brij 30: a new application in biodegradation.

    PubMed

    Pantsyrnaya, T; Delaunay, S; Goergen, J L; Guseva, E; Boudrant, J

    2013-06-01

    In the present study, a new application of the solubilization of phenanthrene above the cloud point of Brij 30 to biodegradation was developed. It was shown that a temporary solubilization of phenanthrene above the cloud point of Brij 30 (5 wt%) made it possible to obtain a stable increase in the solubility of phenanthrene even when the temperature was decreased to the culture conditions of the microorganism used, Pseudomonas putida (28°C). A higher initial concentration of soluble phenanthrene was obtained after the cloud point treatment: 200 against 120 μM without treatment. All soluble phenanthrene was metabolized, and a higher final concentration of its major metabolite, 1-hydroxy-2-naphthoic acid (160 against 85 μM), was measured in the culture medium in the case of a preliminary cloud point treatment. Therefore a temporary solubilization at the cloud point might have a prospective application in the enhancement of the biodegradation of polycyclic aromatic hydrocarbons. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. A portable low-cost 3D point cloud acquiring method based on structure light

    NASA Astrophysics Data System (ADS)

    Gui, Li; Zheng, Shunyi; Huang, Xia; Zhao, Like; Ma, Hao; Ge, Chao; Tang, Qiuxia

    2018-03-01

    A fast and low-cost method of acquiring 3D point cloud data is proposed in this paper, which can solve the problems of the lack of texture information and the low efficiency of acquiring point cloud data with only one pair of cheap cameras and a projector. Firstly, we put forward a scene-adaptive design method for a random encoding pattern, that is, a coding pattern is projected onto the target surface in order to form texture information that is favorable for image matching. Subsequently, we design an efficient dense matching algorithm that fits the projected texture. After global algorithm optimization and multi-kernel parallel development combining hardware and software, a fast point cloud acquisition system is accomplished. The evaluation of point cloud accuracy shows that the point cloud acquired by the method proposed in this paper has higher precision. What's more, the scanning speed meets the demands of dynamic scenes and has good practical application value.

  3. Comparing Networks from a Data Analysis Perspective

    NASA Astrophysics Data System (ADS)

    Li, Wei; Yang, Jing-Yu

    To probe network characteristics, the two predominant ways of comparing networks are global property statistics and subgraph enumeration. However, they suffer from limited information and exhaustive computing. Here, we present an approach to compare networks from the perspective of data analysis. Initially, the approach projects each node of the original network as a high-dimensional data point, so that the network is seen as a cloud of data points. Then the dispersion information of the principal component analysis (PCA) projection of the generated data clouds can be used to distinguish networks. We applied this node projection method to the yeast protein-protein interaction networks and the Internet Autonomous System networks, two types of networks with several similar higher-order properties. The method can efficiently distinguish one from the other. The identical results obtained on different datasets from independent sources also indicate that the method is a robust and universal framework.
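
    The node-projection idea can be sketched as follows; since the record does not specify the projection, each node is represented here simply by its row of the adjacency matrix (an assumption), the resulting data cloud is run through PCA, and the explained-variance profile is used as a dispersion signature of the network.

    ```python
    import numpy as np
    import networkx as nx
    from sklearn.decomposition import PCA

    def dispersion_signature(G, n_components=5):
        """Project every node as a high-dimensional point (its adjacency-matrix row)
        and return the PCA explained-variance profile of the resulting data cloud."""
        X = nx.to_numpy_array(G)
        pca = PCA(n_components=n_components)
        pca.fit(X)
        return pca.explained_variance_ratio_

    # Compare two toy networks of equal size.
    sig_a = dispersion_signature(nx.barabasi_albert_graph(200, 3, seed=1))
    sig_b = dispersion_signature(nx.erdos_renyi_graph(200, 0.03, seed=1))
    print(np.round(sig_a, 3), np.round(sig_b, 3))
    ```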

  4. Pose estimation and tracking of non-cooperative rocket bodies using Time-of-Flight cameras

    NASA Astrophysics Data System (ADS)

    Gómez Martínez, Harvey; Giorgi, Gabriele; Eissfeller, Bernd

    2017-10-01

    This paper presents a methodology for estimating the position and orientation of a rocket body in orbit - the target - undergoing a roto-translational motion, with respect to a chaser spacecraft, whose task is to match the target dynamics for a safe rendezvous. During the rendezvous maneuver the chaser employs a Time-of-Flight camera that acquires a point cloud of 3D coordinates mapping the sensed target surface. Once the system identifies the target, it initializes the chaser-to-target relative position and orientation. After initialization, a tracking procedure enables the system to sense the evolution of the target's pose between frames. The proposed algorithm is evaluated using simulated point clouds, generated with a CAD model of the Cosmos-3M upper stage and the PMD CamCube 3.0 camera specifications.

  5. Automatic Method for Building Indoor Boundary Models from Dense Point Clouds Collected by Laser Scanners

    PubMed Central

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos

    2012-01-01

    In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners yielding promising results. We have deeply evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled. PMID:23443369

  6. Point clouds segmentation as base for as-built BIM creation

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2015-08-01

    In this paper, a three-step segmentation approach is proposed in order to create 3D models from point clouds acquired by TLS inside buildings. The three scales of segmentation are floors, rooms and the planes composing the rooms. First, floor segmentation is performed based on an analysis of the point distribution along the Z axis. Then, for each floor, room segmentation is achieved considering a slice of the point cloud at ceiling level. Finally, planes are segmented for each room, and the planes corresponding to ceilings and floors are identified. The results of each step are analysed and potential improvements are proposed. Based on the segmented point clouds, the creation of as-built BIM is considered in a future work section. Not only is the classification of planes into several categories proposed, but the potential use of point clouds acquired outside buildings is also considered.
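
    The first of the three segmentation scales, splitting the cloud into floors from the distribution of points along the Z axis, can be illustrated with a simple histogram-peak search; the bin size and peak parameters are assumptions, and the room and plane steps are omitted.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def floor_levels(points, bin_size=0.05, min_fraction=0.02):
        """Return candidate floor/ceiling heights as peaks of the Z histogram."""
        z = points[:, 2]
        bins = np.arange(z.min(), z.max() + bin_size, bin_size)
        hist, edges = np.histogram(z, bins=bins)
        # Horizontal slabs (floors, ceilings) show up as strong peaks along Z.
        peaks, _ = find_peaks(hist, height=min_fraction * len(z), distance=int(1.0 / bin_size))
        return 0.5 * (edges[peaks] + edges[peaks + 1])

    # Toy usage: two storeys at z = 0 m and z = 3 m plus scattered wall points.
    pts = np.vstack([
        np.c_[np.random.rand(5000, 2) * 10, np.random.normal(0.0, 0.01, 5000)],
        np.c_[np.random.rand(5000, 2) * 10, np.random.normal(3.0, 0.01, 5000)],
        np.c_[np.random.rand(2000, 2) * 10, np.random.rand(2000) * 3.0],
    ])
    print(floor_levels(pts))
    ```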

  7. High-Precision Registration of Point Clouds Based on Sphere Feature Constraints.

    PubMed

    Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter

    2016-12-30

    Point cloud registration is a key process in multi-view 3D measurements, and its precision affects the measurement precision directly. However, in the case of point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. In this paper, a high-precision registration method based on sphere feature constraints is presented to overcome this difficulty. Some known sphere features with constraints are used to construct virtual overlapping areas. The virtual overlapping areas provide more accurate corresponding point pairs and reduce the influence of noise. Then the transformation parameters between the registered point clouds are solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds can be reduced and high-precision registration is achieved. Simulation and experiments validate the proposed method.
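
    The sphere features themselves can be recovered with a linear least-squares fit, whose centers can then serve as corresponding points between the clouds. The short sketch below shows only the sphere fit, using a standard algebraic formulation that is not necessarily the estimator used in the paper:

    ```python
    import numpy as np

    def fit_sphere(points):
        """Algebraic least-squares sphere fit from ||p||^2 = 2 p.c + (r^2 - ||c||^2)."""
        A = np.c_[2.0 * points, np.ones(len(points))]
        b = np.sum(points ** 2, axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center = sol[:3]
        radius = np.sqrt(sol[3] + center @ center)
        return center, radius

    # Toy usage: noisy samples of a 0.1 m sphere centered at (1, 2, 0.5).
    u = np.random.randn(2000, 3)
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    pts = np.array([1.0, 2.0, 0.5]) + 0.1 * u + 0.001 * np.random.randn(2000, 3)
    print(fit_sphere(pts))
    ```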

  8. High-Precision Registration of Point Clouds Based on Sphere Feature Constraints

    PubMed Central

    Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter

    2016-01-01

    Point cloud registration is a key process in multi-view 3D measurements, and its precision affects the measurement precision directly. However, in the case of point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. In this paper, a high-precision registration method based on sphere feature constraints is presented to overcome this difficulty. Some known sphere features with constraints are used to construct virtual overlapping areas. The virtual overlapping areas provide more accurate corresponding point pairs and reduce the influence of noise. Then the transformation parameters between the registered point clouds are solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds can be reduced and high-precision registration is achieved. Simulation and experiments validate the proposed method. PMID:28042846

  9. Biotoxicity and bioavailability of hydrophobic organic compounds solubilized in nonionic surfactant micelle phase and cloud point system.

    PubMed

    Pan, Tao; Liu, Chunyan; Zeng, Xinying; Xin, Qiao; Xu, Meiying; Deng, Yangwu; Dong, Wei

    2017-06-01

    A recent work has shown that hydrophobic organic compounds solubilized in the micelle phase of some nonionic surfactants present substrate toxicity to microorganisms with increasing bioavailability. However, in cloud point systems, biotoxicity is prevented, because the compounds are solubilized into a coacervate phase, thereby leaving a fraction of compounds with cells in a dilute phase. This study extends the understanding of the relationship between substrate toxicity and bioavailability of hydrophobic organic compounds solubilized in nonionic surfactant micelle phase and cloud point system. Biotoxicity experiments were conducted with naphthalene and phenanthrene in the presence of mixed nonionic surfactants Brij30 and TMN-3, which formed a micelle phase or cloud point system at different concentrations. Saccharomyces cerevisiae, unable to degrade these compounds, was used for the biotoxicity experiments. Glucose in the cloud point system was consumed faster than in the nonionic surfactant micelle phase, indicating that the solubilized compounds had increased toxicity to cells in the nonionic surfactant micelle phase. The results were verified by subsequent biodegradation experiments. The compounds were degraded faster by PAH-degrading bacterium in the cloud point system than in the micelle phase. All these results showed that biotoxicity of the hydrophobic organic compounds increases with bioavailability in the surfactant micelle phase but remains at a low level in the cloud point system. These results provide a guideline for the application of cloud point systems as novel media for microbial transformation or biodegradation.

  10. Cloud Size Distributions from Multi-sensor Observations of Shallow Cumulus Clouds

    NASA Astrophysics Data System (ADS)

    Kleiss, J.; Riley, E.; Kassianov, E.; Long, C. N.; Riihimaki, L.; Berg, L. K.

    2017-12-01

    Combined radar-lidar observations have been used for almost two decades to document temporal changes of shallow cumulus clouds at the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Facility's Southern Great Plains (SGP) site in Oklahoma, USA. Since the ARM zenith-pointed radars and lidars have a narrow field-of-view (FOV), the documented cloud statistics, such as distributions of cloud chord length (or horizontal length scale), represent only a slice along the wind direction of a region surrounding the SGP site, and thus may not be representative for this region. To investigate this impact, we compare cloud statistics obtained from wide-FOV sky images collected by ground-based observations at the SGP site to those from the narrow FOV active sensors. The main wide-FOV cloud statistics considered are cloud area distributions of shallow cumulus clouds, which are frequently required to evaluate model performance, such as routine large eddy simulation (LES) currently being conducted by the ARM LASSO (LES ARM Symbiotic Simulation and Observation) project. We obtain complementary macrophysical properties of shallow cumulus clouds, such as cloud chord length, base height and thickness, from the combined radar-lidar observations. To better understand the broader observational context where these narrow FOV cloud statistics occur, we compare them to collocated and coincident cloud area distributions from wide-FOV sky images and high-resolution satellite images. We discuss the comparison results and illustrate the possibility to generate a long-term climatology of cloud size distributions from multi-sensor observations at the SGP site.

  11. Cliff Collapse Hazard from Repeated Multicopter Uav Acquisitions: Return on Experience

    NASA Astrophysics Data System (ADS)

    Dewez, T. J. B.; Leroux, J.; Morelli, S.

    2016-06-01

    Cliff collapse poses a serious hazard to infrastructure and passers-by. Obtaining information such as the magnitude-frequency relationship for a specific site is of great help to adopt appropriate mitigation measures. While it is possible to monitor hundreds-of-meter-long cliff sites with ground-based techniques (e.g. lidar or photogrammetry), it is both time consuming and scientifically limiting to focus on short cliff sections. In the project SUAVE, we sought to investigate whether an octocopter UAV photogrammetric survey would perform sufficiently well to repeatedly survey cliff face geometry and derive rock fall inventories amenable to probabilistic rock fall hazard computation. An experiment was therefore run on a well-studied site of the chalk coast of Normandy, in Mesnil Val, along the English Channel (Northern France). Two campaigns were organized in January and June 2015 which surveyed about 60 ha of coastline, including the 80-m-high cliff face, the chalk platform at its foot, and the hinterland, in a matter of 4 hours from start to finish. To conform with UAV regulations, the flight was flown in 3 legs for a total of about 30 minutes in the air. A total of 868 and 1106 photos, respectively, were shot with a Sony NEX 7 with a fixed 16 mm focal length. Three lines of sight were combined: horizontal shots for cliff face imaging, 45°-oblique views to tie plateau/platform photos with cliff face images, and regular vertical shots. Photogrammetrically derived dense point clouds were produced with Agisoft Photoscan at ultra-high density (the median density is 1 point every 1.7 cm). Point cloud density proved a critical parameter to reproduce the chalk face's geometry faithfully. Tuning down the density parameter to "high" or "medium", though efficient from a computational point of view, generated artefacts along chalk bed edges (i.e. smoothing the sharp gradient) and ultimately created ghost volumes when computing cloud-to-cloud differences. Yet, from a hazard point of view, this is where small rock falls will most likely occur. Absolute orientation of both point clouds proved insufficient despite the 30 DGPS-surveyed black-and-white quadrant ground control points. Additional ICP was necessary to reach centimeter-level accuracy and segment rock fall scars corresponding to the expected average daily rock fall volume (ca. 0.013 m³).

  12. Stereovision-based integrated system for point cloud reconstruction and simulated brain shift validation.

    PubMed

    Yang, Xiaochen; Clements, Logan W; Luo, Ma; Narasimhan, Saramati; Thompson, Reid C; Dawant, Benoit M; Miga, Michael I

    2017-07-01

    Intraoperative soft tissue deformation, referred to as brain shift, compromises the application of current image-guided surgery navigation systems in neurosurgery. A computational model driven by sparse data has been proposed as a cost-effective method to compensate for cortical surface and volumetric displacements. We present a mock environment developed to acquire stereoimages from a tracked operating microscope and to reconstruct three-dimensional point clouds from these images. A reconstruction error of 1 mm is estimated by using a phantom with a known geometry and independently measured deformation extent. The microscope is tracked via an attached tracking rigid body that facilitates the recording of the position of the microscope via a commercial optical tracking system as it moves during the procedure. Point clouds, reconstructed under different microscope positions, are registered into the same space to compute the feature displacements. Using our mock craniotomy device, realistic cortical deformations are generated. When comparing our tracked microscope stereo-pair measure of mock vessel displacements to that of the measurement determined by the independent optically tracked stylus marking, the displacement error was [Formula: see text] on average. These results demonstrate the practicality of using tracked stereoscopic microscope as an alternative to laser range scanners to collect sufficient intraoperative information for brain shift correction.

  13. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    PubMed Central

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computer abilities along with decreasing costs has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most of the current optical approaches are eligible for exterior instead of internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated by a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection. PMID:28286351

  14. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computer abilities along with decreasing costs has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most of the current optical approaches are eligible for exterior instead of internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated by a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection.

  15. Above-bottom biomass retrieval of aquatic plants with regression models and SfM data acquired by a UAV platform - A case study in Wild Duck Lake Wetland, Beijing, China

    NASA Astrophysics Data System (ADS)

    Jing, Ran; Gong, Zhaoning; Zhao, Wenji; Pu, Ruiliang; Deng, Lei

    2017-12-01

    Above-bottom biomass (ABB) is considered as an important parameter for measuring the growth status of aquatic plants, and is of great significance for assessing health status of wetland ecosystems. In this study, Structure from Motion (SfM) technique was used to rebuild the study area with high overlapped images acquired by an unmanned aerial vehicle (UAV). We generated orthoimages and SfM dense point cloud data, from which vegetation indices (VIs) and SfM point cloud variables including average height (HAVG), standard deviation of height (HSD) and coefficient of variation of height (HCV) were extracted. These VIs and SfM point cloud variables could effectively characterize the growth status of aquatic plants, and thus they could be used to develop a simple linear regression model (SLR) and a stepwise linear regression model (SWL) with field measured ABB samples of aquatic plants. We also utilized a decision tree method to discriminate different types of aquatic plants. The experimental results indicated that (1) the SfM technique could effectively process high overlapped UAV images and thus be suitable for the reconstruction of fine texture feature of aquatic plant canopy structure; and (2) an SWL model based on point cloud variables: HAVG, HSD, HCV and two VIs: NGRDI, ExGR as independent variables has produced the best predictive result of ABB of aquatic plants in the study area, with a coefficient of determination of 0.84 and a relative root mean square error of 7.13%. In this analysis, a novel method for the quantitative inversion of a growth parameter (i.e., ABB) of aquatic plants in wetlands was demonstrated.
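
    The regression step, relating field-measured ABB to the SfM point cloud height variables and vegetation indices, can be sketched with an ordinary least-squares fit. The feature names follow the record (HAVG, HSD, HCV, NGRDI, ExGR), but the data are placeholders and no stepwise selection is performed here:

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    # Placeholder samples; in practice each row is one field plot.
    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.random((40, 5)), columns=["HAVG", "HSD", "HCV", "NGRDI", "ExGR"])
    abb = 2.0 * df["HAVG"] + 0.5 * df["NGRDI"] + 0.1 * rng.standard_normal(40)

    model = LinearRegression().fit(df.values, abb.values)
    pred = model.predict(df.values)
    rrmse = np.sqrt(np.mean((pred - abb) ** 2)) / abb.mean() * 100
    print(f"R2 = {r2_score(abb, pred):.2f}, rRMSE = {rrmse:.1f}%")
    ```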

  16. An evaluation of the bioaccessibility of arsenic in corn and rice samples based on cloud point extraction and hydride generation coupled to atomic fluorescence spectrometry.

    PubMed

    Castor, José Martín Rosas; Portugal, Lindomar; Ferrer, Laura; Hinojosa-Reyes, Laura; Guzmán-Mar, Jorge Luis; Hernández-Ramírez, Aracely; Cerdà, Víctor

    2016-08-01

    A simple, inexpensive and rapid method was proposed for the determination of bioaccessible arsenic in corn and rice samples using an in vitro bioaccessibility assay. The method was based on the preconcentration of arsenic by cloud point extraction (CPE) using o,o-diethyldithiophosphate (DDTP) complex, which was generated from an in vitro extract using polyethylene glycol tert-octylphenyl ether (Triton X-114) as a surfactant prior to its detection by atomic fluorescence spectrometry with a hydride generation system (HG-AFS). The CPE method was optimized by a multivariate approach (two-level full factorial and Doehlert designs). A photo-oxidation step of the organic species prior to HG-AFS detection was included for the accurate quantification of the total As. The limit of detection was 1.34μgkg(-1) and 1.90μgkg(-1) for rice and corn samples, respectively. The accuracy of the method was confirmed by analyzing certified reference material ERM BC-211 (rice powder). The corn and rice samples that were analyzed showed a high bioaccessible arsenic content (72-88% and 54-96%, respectively), indicating a potential human health risk. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Impact of Surface Active Ionic Liquids on the Cloud Points of Nonionic Surfactants and the Formation of Aqueous Micellar Two-Phase Systems.

    PubMed

    Vicente, Filipa A; Cardoso, Inês S; Sintra, Tânia E; Lemus, Jesus; Marques, Eduardo F; Ventura, Sónia P M; Coutinho, João A P

    2017-09-21

    Aqueous micellar two-phase systems (AMTPS) hold a large potential for cloud point extraction of biomolecules but are yet poorly studied and characterized, with few phase diagrams reported for these systems, hence limiting their use in extraction processes. This work reports a systematic investigation of the effect of different surface-active ionic liquids (SAILs)-covering a wide range of molecular properties-upon the clouding behavior of three nonionic Tergitol surfactants. Two different effects of the SAILs on the cloud points and mixed micelle size have been observed: ILs with a more hydrophilic character and lower critical packing parameter (CPP < 1/2) lead to the formation of smaller micelles and concomitantly increase the cloud points; in contrast, ILs with a more hydrophobic character and higher CPP (CPP ≥ 1) induce significant micellar growth and a decrease in the cloud points. The latter effect is particularly interesting and unusual for it was accepted that cloud point reduction is only induced by inorganic salts. The effects of nonionic surfactant concentration, SAIL concentration, pH, and micelle ζ potential are also studied and rationalized.

  18. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    NASA Astrophysics Data System (ADS)

    Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.

    2014-06-01

    This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain comparable results to the ones of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from a distance of between approximately 5 to 10 m, depending on field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and to generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS. The latter is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.
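
    The accuracy assessment described above, computing for every point of a camera-derived cloud the distance to its closest TLS point, can be reproduced with a k-d tree; the sketch below uses synthetic clouds in place of the real data sets:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def cloud_to_cloud_distances(test_points, reference_points):
        """Nearest-neighbour (C2C) distance of every test point to the reference cloud."""
        tree = cKDTree(reference_points)
        d, _ = tree.query(test_points)
        return d

    # Toy usage: compare a noisy copy of a reference cloud against the original.
    ref = np.random.rand(20000, 3)
    cam = ref + 0.005 * np.random.randn(20000, 3)
    d = cloud_to_cloud_distances(cam, ref)
    print(f"mean {d.mean()*100:.2f} cm, 95th percentile {np.percentile(d, 95)*100:.2f} cm")
    ```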

  19. Point Cloud Management Through the Realization of the Intelligent Cloud Viewer Software

    NASA Astrophysics Data System (ADS)

    Costantino, D.; Angelini, M. G.; Settembrini, F.

    2017-05-01

    The paper presents a software package dedicated to the elaboration of point clouds, called Intelligent Cloud Viewer (ICV), made in-house by AESEI software (a spin-off of Politecnico di Bari), allowing point clouds of several tens of millions of points to be viewed, even on systems that are not of very high performance. The elaborations are carried out on the whole point cloud and managed by displaying only part of it in order to speed up rendering. It is designed for 64-bit Windows, is fully written in C++, and integrates different specialized modules for computer graphics (Open Inventor by SGI, Silicon Graphics Inc), maths (BLAS, EIGEN), computational geometry (CGAL, Computational Geometry Algorithms Library), registration and advanced algorithms for point clouds (PCL, Point Cloud Library), advanced data structures (BOOST, Basic Object Oriented Supporting Tools), etc. ICV incorporates a number of features such as, for example, cropping, transformation and georeferencing, matching, registration, decimation, sections, distance calculation between clouds, etc. It has been tested on photographic and TLS (Terrestrial Laser Scanner) data, obtaining satisfactory results. The potentialities of the software have been tested by carrying out the photogrammetric survey of Castel del Monte, which was already available from a previous terrestrial laser scanner survey made by the same authors. For the aerial photogrammetric survey, a flight height of approximately 1000 ft AGL (Above Ground Level) was adopted and, overall, over 800 photos were acquired in just over 15 minutes, with an overlap of not less than 80% at a planned speed of about 90 knots.

  20. Automatic digital surface model (DSM) generation from aerial imagery data

    NASA Astrophysics Data System (ADS)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have a large overlapped region, which provides a lot of redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used in order to reduce the effects of the inherent radiometric problems and optimize the images. The presented approach essentially consists of 3 components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify some inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales over different landcover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and those derived from the POS.

  1. Section Curve Reconstruction and Mean-Camber Curve Extraction of a Point-Sampled Blade Surface

    PubMed Central

    Li, Wen-long; Xie, He; Li, Qi-dong; Zhou, Li-ping; Yin, Zhou-ping

    2014-01-01

    The blade is one of the most critical parts of an aviation engine, and a small change in the blade geometry may significantly affect the dynamic performance of the aviation engine. Rapid advancements in 3D scanning techniques have enabled the inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method for achieving two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction with the representation of a point cloud. The mathematical morphology is expanded and applied to restrain the effect of the measuring defects and generate an ordered sequence of 2D measured points in the section plane. Then, the energy and distance are minimized to iteratively smoothen the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the effectiveness of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and visual-guided robot grinding localization. PMID:25551467

  2. Section curve reconstruction and mean-camber curve extraction of a point-sampled blade surface.

    PubMed

    Li, Wen-long; Xie, He; Li, Qi-dong; Zhou, Li-ping; Yin, Zhou-ping

    2014-01-01

    The blade is one of the most critical parts of an aviation engine, and a small change in the blade geometry may significantly affect the dynamic performance of the aviation engine. Rapid advancements in 3D scanning techniques have enabled the inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method for achieving two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction with the representation of a point cloud. The mathematical morphology is expanded and applied to restrain the effect of the measuring defects and generate an ordered sequence of 2D measured points in the section plane. Then, the energy and distance are minimized to iteratively smoothen the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the effectiveness of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and visual-guided robot grinding localization.

  3. Grammar-Supported 3d Indoor Reconstruction from Point Clouds for As-Built Bim

    NASA Astrophysics Data System (ADS)

    Becker, S.; Peter, M.; Fritsch, D.

    2015-03-01

    The paper presents a grammar-based approach for the robust automatic reconstruction of 3D interiors from raw point clouds. The core of the approach is a 3D indoor grammar which is an extension of our previously published grammar concept for the modeling of 2D floor plans. The grammar allows for the modeling of buildings whose horizontal, continuous floors are traversed by hallways providing access to the rooms as it is the case for most office buildings or public buildings like schools, hospitals or hotels. The grammar is designed in such way that it can be embedded in an iterative automatic learning process providing a seamless transition from LOD3 to LOD4 building models. Starting from an initial low-level grammar, automatically derived from the window representations of an available LOD3 building model, hypotheses about indoor geometries can be generated. The hypothesized indoor geometries are checked against observation data - here 3D point clouds - collected in the interior of the building. The verified and accepted geometries form the basis for an automatic update of the initial grammar. By this, the knowledge content of the initial grammar is enriched, leading to a grammar with increased quality. This higher-level grammar can then be applied to predict realistic geometries to building parts where only sparse observation data are available. Thus, our approach allows for the robust generation of complete 3D indoor models whose quality can be improved continuously as soon as new observation data are fed into the grammar-based reconstruction process. The feasibility of our approach is demonstrated based on a real-world example.

  4. An Efficient Method to Create Digital Terrain Models from Point Clouds Collected by Mobile LiDAR Systems

    NASA Astrophysics Data System (ADS)

    Gézero, L.; Antunes, C.

    2017-05-01

    Digital terrain models (DTM) play an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is most acute. In recent years, the use of Mobile LiDAR Systems (MLS) proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can be a solution for obtaining the data needed to produce DTM in remote areas, due mainly to the safety, precision, speed of acquisition and the detail of the information gathered. However, filtering the point clouds and separating "terrain points" from "non-terrain points" quickly and consistently remains a challenge that has caught the interest of researchers. This work presents a method to create a DTM from point clouds collected by MLS. The method is based on two steps. The first step reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
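
    A minimal sketch of the second step, assuming the terrain points have already been extracted: the points are triangulated in plan view with a Delaunay triangulation and the resulting TIN is resampled onto a regular grid. The grid resolution and the synthetic input are placeholders, not values from the paper.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def build_dtm(terrain_points, grid_resolution=1.0):
    """Triangulate terrain points (N, 3) in plan view and resample the
    resulting TIN onto a regular grid to obtain a raster DTM."""
    xy, z = terrain_points[:, :2], terrain_points[:, 2]
    tin = Delaunay(xy)                              # 2.5D triangulation
    interp = LinearNDInterpolator(tin, z)           # linear interpolation on the TIN
    xs = np.arange(xy[:, 0].min(), xy[:, 0].max(), grid_resolution)
    ys = np.arange(xy[:, 1].min(), xy[:, 1].max(), grid_resolution)
    gx, gy = np.meshgrid(xs, ys)
    return gx, gy, interp(gx, gy)

if __name__ == "__main__":
    pts = np.random.rand(1000, 3) * [100, 100, 5]   # synthetic terrain points
    gx, gy, dtm = build_dtm(pts, grid_resolution=2.0)
    print(dtm.shape)
```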

  5. Evaluating cloudiness in an AGCM with Cloud Vertical Structure classes and their radiative effects

    NASA Astrophysics Data System (ADS)

    Lee, D.; Cho, N.; Oreopoulos, L.; Barahona, D.

    2017-12-01

    Clouds are recognized not only as the main modulator of Earth's Radiation Budget but also as the atmospheric constituent carrying the largest uncertainty in future climate projections. The presentation will showcase a new framework for evaluating clouds and their radiative effects in Atmospheric Global Climate Models (AGCMs) using Cloud Vertical Structure (CVS) classes. We take advantage of a new CVS reference dataset recently created from CloudSat's 2B-CLDCLASS-LIDAR product, which assigns observed cloud vertical configurations to nine simplified CVS classes based on cloud co-occurrence in three standard atmospheric layers. These CVS classes can also be emulated in GEOS-5 using the subcolumn cloud generator currently paired with the RRTMG radiation package as an implementation of the McICA scheme. Comparisons between the observed and modeled climatologies of the frequency of occurrence of the various CVS classes provide a new vantage point for assessing the realism of GEOS-5 clouds. Furthermore, a comparison between observed and modeled cloud radiative effects according to their CVS is also possible thanks to the availability of CloudSat's 2B-FLXHR-LIDAR product and our ability to composite radiative fluxes by CVS class in both the observed and modeled realms. This latter effort enables an investigation of whether the contribution of the various CVS classes to the Earth's radiation budget is represented realistically in GEOS-5. Making this new pathway of cloud evaluation available to the community is a major step towards the improved representation of clouds in climate models.

  6. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    NASA Astrophysics Data System (ADS)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.

  7. Quality assessment and improvement of the EUMETSAT Meteosat Surface Albedo Climate Data Record

    NASA Astrophysics Data System (ADS)

    Lattanzio, A.; Fell, F.; Bennartz, R.; Trigo, I. F.; Schulz, J.

    2015-10-01

    Surface albedo has been identified as an important parameter for understanding and quantifying the Earth's radiation budget. EUMETSAT generated the Meteosat Surface Albedo (MSA) Climate Data Record (CDR), currently comprising up to 24 years (1982-2006) of continuous surface albedo coverage for large areas of the Earth. This CDR has been created within the Sustained, Coordinated Processing of Environmental Satellite Data for Climate Monitoring (SCOPE-CM) framework. The long-term consistency of the MSA CDR is high and meets the Global Climate Observing System (GCOS) stability requirements for desert reference sites. The most relevant weakness in the retrieval process is the degradation in quality caused by clouds that the embedded cloud-screening procedure fails to remove. A twofold strategy is applied to improve cloud detection and removal efficiently. The first step consists of the application of a robust and reliable cloud mask, taking advantage of the information contained in the measurements of the infrared and visible bands. Due to the limited information available from old radiometers, some clouds can still remain undetected. A second step relies on a post-processing analysis of the albedo seasonal variation, together with the use of a background albedo map, in order to detect and screen out such outliers. The use of a reliable cloud mask has a double effect. It enhances the number of high-quality retrievals for tropical forest areas sensed under low view angles and removes the most frequently unrealistic retrievals on similar surfaces sensed under high view angles. As expected, the use of a cloud mask has a negligible impact on desert areas where clear conditions dominate. The exploitation of the albedo seasonal variation for cloud removal shows good potential but needs to be handled carefully. Nevertheless, it is shown that the inclusion of a cloud masking and removal strategy is a key point for the generation of the next MSA CDR release.

  8. Cloud-generated radiative heating and its generation of available potential energy

    NASA Technical Reports Server (NTRS)

    Stuhlmann, R.; Smith, G. L.

    1989-01-01

    The generation of zonal available potential energy (APE) by cloud radiative heating is discussed. The APE concept was mathematically formulated by Lorenz (1955) as a measure of the maximum amount of total potential energy that is available for conversion by adiabatic processes to kinetic energy. The rate of change of APE is the rate of the generation of APE minus the rate of conversion between potential and kinetic energy. By radiative transfer calculations, a mean cloud-generated radiative heating for a well defined set of cloud classes is derived as a function of cloud optical thickness. The formulation is suitable for using a general cloud parameter data set and has the advantage of taking into account nonlinearities between the microphysical and macrophysical cloud properties and the related radiation field.

  9. Spectral signatures of polar stratospheric clouds and sulfate aerosol

    NASA Technical Reports Server (NTRS)

    Massie, S. T.; Bailey, P. L.; Gille, J. C.; Lee, E. C.; Mergenthaler, J. L.; Roche, A. E.; Kumer, J. B.; Fishbein, E. F.; Waters, J. W.; Lahoz, W. A.

    1994-01-01

    Multiwavelength observations of Antarctic and midlatitude aerosol by the Cryogenic Limb Array Etalon Spectrometer (CLAES) experiment on the Upper Atmosphere Research Satellite (UARS) are used to demonstrate a technique that identifies the location of polar stratospheric clouds. The technique discussed uses the normalized area of the triangle formed by the aerosol extinctions at 925, 1257, and 1605/cm (10.8, 8.0, and 6.2 micrometers) to derive a spectral aerosol measure M of the aerosol spectrum. Mie calculations for spherical particles and T-matrix calculations for spheroidal particles are used to generate theoretical spectral extinction curves for sulfate and polar stratospheric cloud particles. The values of the spectral aerosol measure M for the sulfate and polar stratospheric cloud particles are shown to be different. Aerosol extinction data, corresponding to temperatures between 180 and 220 K at a pressure of 46 hPa (near 21-km altitude) for 18 August 1992, are used to demonstrate the technique. Thermodynamic calculations, based upon frost-point calculations and laboratory phase-equilibrium studies of nitric acid trihydrate, are used to predict the location of nitric acid trihydrate cloud particles.

  10. Semantic Labelling of Ultra-Dense MLS Point Clouds in Urban Road Corridors Based on Fusing CRF with Shape Priors

    NASA Astrophysics Data System (ADS)

    Yao, W.; Polewski, P.; Krzystek, P.

    2017-09-01

    In this paper, a labelling method for the semantic analysis of ultra-high-density MLS data (up to 4000 points/m2) in urban road corridors is developed, based on combining a conditional random field (CRF) for the context-based classification of 3D point clouds with shape priors. The CRF uses a Random Forest (RF) for generating the unary potentials of nodes and a variant of the contrast-sensitive Potts model for the pair-wise potentials of node edges. The foundations of the classification are various geometric features derived by means of covariance matrices and a local accumulation map of spatial coordinates based on local neighbourhoods. Meanwhile, in order to cope with the ultra-high point density, a plane-based region growing method combined with a rule-based classifier is applied to first fix semantic labels for man-made objects. Once such points, which usually account for the majority of the data, are pre-labelled, the CRF classifier can be solved by optimizing the discriminative probability for nodes within a subgraph structure that excludes the pre-labelled nodes. The process can be viewed as an evidence fusion step inferring a degree of belief for point labelling from different sources. The MLS data used for this study were acquired by a vehicle-borne Z+F phase-based laser scanner, which permits the generation of a point cloud with an ultra-high sampling rate and accuracy. The test sites are parts of Munich City and are assumed to consist of seven object classes including impervious surfaces, trees, building roofs/facades, low vegetation, vehicles and poles. The competitive classification performance can be explained by diverse factors: e.g., the above-ground height highlights the vertical dimension of houses, trees and even cars, but it is also attributed to the decision-level fusion of the graph-based contextual classification approach with shape priors. The use of context-based classification mainly contributed to smoothing the labelling by removing outliers and to improving underrepresented object classes. In addition, the routine operation of context-based classification for such high-density MLS data becomes much more efficient, comparable to non-contextual classification schemes.

  11. Effect of target color and scanning geometry on terrestrial LiDAR point-cloud noise and plane fitting

    NASA Astrophysics Data System (ADS)

    Bolkas, Dimitrios; Martinez, Aaron

    2018-01-01

    Point-cloud coordinate information derived from terrestrial Light Detection And Ranging (LiDAR) is important for several applications in surveying and civil engineering. Plane fitting and segmentation of target-surfaces is an important step in several applications such as in the monitoring of structures. Reliable parametric modeling and segmentation relies on the underlying quality of the point-cloud. Therefore, understanding how point-cloud errors affect fitting of planes and segmentation is important. Point-cloud intensity, which accompanies the point-cloud data, often goes hand-in-hand with point-cloud noise. This study uses industrial particle boards painted with eight different colors (black, white, grey, red, green, blue, brown, and yellow) and two different sheens (flat and semi-gloss) to explore how noise and plane residuals vary with scanning geometry (i.e., distance and incidence angle) and target-color. Results show that darker colors, such as black and brown, can produce point clouds that are several times noisier than bright targets, such as white. In addition, semi-gloss targets manage to reduce noise in dark targets by about 2-3 times. The study of plane residuals with scanning geometry reveals that, in many of the cases tested, residuals decrease with increasing incidence angles, which can assist in understanding the distribution of plane residuals in a dataset. Finally, a scheme is developed to derive survey guidelines based on the data collected in this experiment. Three examples demonstrate that users should consider instrument specification, required precision of plane residuals, required point-spacing, target-color, and target-sheen, when selecting scanning locations. Outcomes of this study can aid users to select appropriate instrumentation and improve planning of terrestrial LiDAR data-acquisition.
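
    The plane-fitting step this study relies on can be sketched as a least-squares fit via SVD, with the residuals taken as signed point-to-plane distances. This is a generic formulation under stated assumptions, not the authors' exact processing chain; the noise levels in the example are invented.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud patch via SVD.
    Returns the centroid, the unit normal, and the signed residuals."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                         # direction of smallest variance
    residuals = (points - centroid) @ normal
    return centroid, normal, residuals

if __name__ == "__main__":
    # Synthetic target board: a nearly flat plane plus Gaussian range noise.
    xy = np.random.rand(5000, 2)
    pts = np.c_[xy, 0.02 * xy[:, 0] + np.random.normal(0, 0.002, 5000)]
    _, n, res = fit_plane(pts)
    print("normal:", np.round(n, 3), " RMS residual [m]:", res.std().round(4))
```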

  12. Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning

    NASA Astrophysics Data System (ADS)

    Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Nex, Francesco; Vosselman, George

    2018-06-01

    Oblique aerial images offer views of both building roofs and façades, and thus have been recognized as a potential source to detect severe building damages caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features being used for training the classifier. Recently features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, often oblique images are captured with high block overlap, facilitating the generation of dense 3D point clouds - an ideal source to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification. The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional improvement of about 3% (i.e. an average classification accuracy of 94%). The significance of 3D point cloud features becomes more evident in the model transferability scenario (i.e., training and testing samples from different sites that vary slightly in the aforementioned characteristics), where the integration of CNN and 3D point cloud features significantly improved the model transferability accuracy up to a maximum of 7% compared with the accuracy achieved by CNN features alone. Overall, an average accuracy of 85% was achieved for the model transferability scenario across all experiments. Our main conclusion is that such an approach qualifies for practical use.

  13. Traffic sign detection in MLS acquired point clouds for geometric and image-based semantic inventory

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro

    2016-04-01

    Nowadays, mobile laser scanning has become a valid technology for infrastructure inspection. This technology permits collecting accurate 3D point clouds of urban and road environments, and the geometric and semantic analysis of these data has become an active research topic in recent years. This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system, comprised of laser scanning and RGB cameras. Each traffic sign is automatically detected in the LiDAR point cloud, and its main geometric parameters can be automatically extracted, therefore aiding the inventory process. Furthermore, the 3D positions of traffic signs are reprojected onto the 2D images, which are spatially and temporally synced with the point cloud. Image analysis allows for recognizing the traffic sign semantics using machine learning approaches. The presented method was tested in road and urban scenarios in Galicia (Spain). The recall results for traffic sign detection are close to 98%, and existing false positives can be easily filtered after point cloud projection. Finally, the lack of a large, publicly available Spanish traffic sign database is pointed out.

  14. A Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    NASA Astrophysics Data System (ADS)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps in point cloud pre-processing focus on gross error elimination and quality control. Owing to the sheer volume of point cloud data, existing gross error elimination methods consume large amounts of memory and computation time. This paper employs a method that constructs a Kd-tree index, searches it with a k-nearest neighbor algorithm, and applies an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps remove gross errors from point cloud data while decreasing memory consumption and improving efficiency.
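
    A minimal sketch of the kind of Kd-tree / k-nearest-neighbour filter the abstract describes, using SciPy's cKDTree: points whose mean k-NN distance exceeds a global threshold are flagged as gross errors. The neighbour count and the n-sigma threshold are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_gross_errors(points, k=8, n_sigma=3.0):
    """Flag points whose mean distance to their k nearest neighbours is
    far above the global average (a common statistical outlier filter)."""
    tree = cKDTree(points)                          # Kd-tree construction
    dists, _ = tree.query(points, k=k + 1)          # first neighbour is the point itself
    mean_d = dists[:, 1:].mean(axis=1)              # mean k-NN distance per point
    threshold = mean_d.mean() + n_sigma * mean_d.std()
    keep = mean_d <= threshold
    return points[keep], np.where(~keep)[0]

if __name__ == "__main__":
    cloud = np.random.rand(10000, 3)
    cloud[:20] += 5.0                               # inject gross errors
    cleaned, outliers = remove_gross_errors(cloud)
    print(f"removed {len(outliers)} suspected gross errors")
```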

  15. Reconstruction of forest geometries from terrestrial laser scanning point clouds for canopy radiative transfer modelling

    NASA Astrophysics Data System (ADS)

    Bremer, Magnus; Schmidtner, Korbinian; Rutzinger, Martin

    2015-04-01

    The architecture of forest canopies is a key parameter for forest ecological issues, helping to model the variability of wood biomass and foliage in space and time. In order to understand the nature of subpixel effects of optical space-borne sensors with coarse spatial resolution, hypothetical 3D canopy models are widely used for the simulation of radiative transfer in forests. Thereby, radiation is traced through the atmosphere and canopy geometries until it reaches the optical sensor. For a realistic simulation scene we decompose terrestrial laser scanning point cloud data of leaf-off larch forest plots in the Austrian Alps and reconstruct detailed model-ready input data for radiative transfer simulations. The point clouds are pre-classified into primitive classes by means of Principal Component Analysis (PCA) with scale-adapted radius neighbourhoods. Elongated point structures are extracted as tree trunks. The tree trunks are used as seeds for a Dijkstra-growing procedure, in order to obtain single tree segmentation in the interlinked canopies. For the optimized reconstruction of branching architectures as vector models, point cloud skeletonisation is used in combination with an iterative Dijkstra-growing and by applying distance constraints. This allows a hierarchical reconstruction that prefers the tree trunk and higher-order branches and avoids over-skeletonization effects. Based on the reconstructed branching architectures, larch needles are modelled according to the hierarchical level of the branches and the geometrical openness of the canopy. For radiative transfer simulations, branch architectures are used as mesh geometries representing branches as cylindrical pipes. Needles are either used as meshes or as voxel-turbids. The presented workflow allows an automatic classification and single tree segmentation in interlinked canopies. The iterative Dijkstra-growing using distance constraints generated realistic reconstruction results. As the mesh representation of branches proved to be sufficient for the simulation approach, the modelling of huge amounts of needles is much more efficient in voxel-turbid representation.

  16. Object Detection using the Kinect

    DTIC Science & Technology

    2012-03-01

    Kinect camera and point cloud data from the Kinect’s structured light stereo system (figure 1). We obtain reasonable results using a single prototype...same manner we present in this report. For example, at Willow Garage , Steder uses a 3-D feature he developed to classify objects directly from point...detecting backpacks using the data available from the Kinect sensor. 4 3.1 Point Cloud Filtering Dense point clouds derived from stereo are notoriously

  17. Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model

    NASA Astrophysics Data System (ADS)

    Zhu, Ningning; Jia, Yonghong; Luo, Lun

    2016-06-01

    The large number of bolts and screws attached to the subway shield ring plates, along with the many metal supports and electrical equipment accessories mounted on the tunnel walls, cause the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), thereby affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data is first projected onto a horizontal plane, and a search algorithm is given to extract the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and then fitted to a smooth elliptic cylindrical surface by means of iteration. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic cylindrical model based method can effectively filter out the non-points and meets the accuracy requirements for subway deformation monitoring. The method provides a new approach for the periodic monitoring of all-around deformation of tunnel sections during routine subway operation and maintenance.
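
    The cross-section fitting step can be illustrated with an algebraic least-squares conic fit to the points of one projected section, followed by a crude residual-based filter. This is a simplified stand-in for the paper's iterative elliptic-cylinder fitting; the threshold and synthetic data are assumptions.

```python
import numpy as np

def fit_conic(xy):
    """Algebraic least-squares fit of a conic a x^2 + b xy + c y^2 + d x + e y + f = 0
    to 2D cross-section points; returns the unit-norm coefficient vector."""
    x, y = xy[:, 0], xy[:, 1]
    design = np.c_[x * x, x * y, y * y, x, y, np.ones_like(x)]
    _, _, vt = np.linalg.svd(design, full_matrices=False)
    return vt[-1]                                   # null-space direction

def conic_residual(coeffs, xy):
    """Absolute algebraic residual of each point w.r.t. the fitted conic."""
    x, y = xy[:, 0], xy[:, 1]
    a, b, c, d, e, f = coeffs
    return np.abs(a*x*x + b*x*y + c*y*y + d*x + e*y + f)

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 400)
    ring = np.c_[3.0 * np.cos(t), 2.0 * np.sin(t)] + np.random.normal(0, 0.01, (400, 2))
    ring[:10] *= 0.7                                # simulate bolts/fittings inside the wall
    coeffs = fit_conic(ring)
    res = conic_residual(coeffs, ring)
    keep = res < 3.0 * np.median(res)               # crude residual-based filtering
    print("filtered", (~keep).sum(), "suspected non-section points")
```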

  18. Hydrogen axion star: metallic hydrogen bound to a QCD axion BEC

    DOE PAGES

    Bai, Yang; Barger, Vernon; Berger, Joshua

    2016-12-23

    As a cold dark matter candidate, the QCD axion may form Bose-Einstein condensates, called axion stars, with masses around 10^-11 M⊙. In this paper, we point out that a brand new astrophysical object, a Hydrogen Axion Star (HAS), may well be formed by ordinary baryonic matter becoming gravitationally bound to an axion star. Here, we study the properties of the HAS and find that the hydrogen cloud has a high pressure and temperature in the center and is likely in the liquid metallic hydrogen state. Because of the high particle number densities of both the axion star and the hydrogen cloud, the feeble interaction between axion and hydrogen can still generate enough internal power, around 10^13 W (m_a/5 meV)^4, to make these objects luminous point sources. Furthermore, high resolution ultraviolet, optical and infrared telescopes can discover HAS via black-body radiation.

  19. Hydrogen axion star: metallic hydrogen bound to a QCD axion BEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Yang; Barger, Vernon; Berger, Joshua

    As a cold dark matter candidate, the QCD axion may form Bose-Einstein condensates, called axion stars, with masses around 10^-11 M⊙. In this paper, we point out that a brand new astrophysical object, a Hydrogen Axion Star (HAS), may well be formed by ordinary baryonic matter becoming gravitationally bound to an axion star. Here, we study the properties of the HAS and find that the hydrogen cloud has a high pressure and temperature in the center and is likely in the liquid metallic hydrogen state. Because of the high particle number densities of both the axion star and the hydrogen cloud, the feeble interaction between axion and hydrogen can still generate enough internal power, around 10^13 W (m_a/5 meV)^4, to make these objects luminous point sources. Furthermore, high resolution ultraviolet, optical and infrared telescopes can discover HAS via black-body radiation.

  20. A Modular Approach to Video Designation of Manipulation Targets for Manipulators

    DTIC Science & Technology

    2014-05-12

    side view of a ray going through a point cloud of a water bottle sitting on the ground. The bottom left image shows the same point cloud after it has...System (ROS), Point Cloud Library (PCL), and OpenRAVE were used to a great extent to help promote reusability of the code developed during this

  1. Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone

    NASA Astrophysics Data System (ADS)

    Xia, G.; Hu, C.

    2018-04-01

    The digitalization of cultural heritage based on terrestrial laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction with a complete point cloud and high-resolution images requires the matching of image and point cloud, the acquisition of corresponding feature points, data registration, etc. However, the one-to-one correspondence between an image and its corresponding point cloud currently depends on inefficient manual search. The effective classification and management of large numbers of images, and the matching of each image with its corresponding point cloud, are therefore the focus of this research. In this paper, we propose the automatic matching of large-scale images and terrestrial LiDAR based on the APP synergy of a mobile phone. Firstly, we develop an Android-based APP to take pictures and record the related classification information. Secondly, all the images are automatically grouped using the recorded information. Thirdly, a matching algorithm is used to match the global and local images. According to the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of each image with its corresponding LiDAR point cloud is realized. Finally, the mapping relationship between the global image, the local images and the intensity image is established according to corresponding feature points. In this way, we can establish a data structure linking the global image, the local images within the global image and the point cloud corresponding to each local image, and carry out visual management and querying of the images.

  2. Using LIDAR and UAV-derived point clouds to evaluate surface roughness in a gravel-bed braided river (Vénéon river, French Alps)

    NASA Astrophysics Data System (ADS)

    Vázquez Tarrío, Daniel; Borgniet, Laurent; Recking, Alain; Liebault, Frédéric; Vivier, Marie

    2016-04-01

    The present research focuses on the Vénéon river at Plan du Lac (Massif des Ecrins, France), an alpine braided gravel-bed stream with a glacio-nival hydrological regime. It drains a catchment area of 316 km2. The study reach is a 2.5 km braided reach located immediately upstream of a small hydropower dam. An airborne LIDAR survey was carried out in October 2014 by EDF (the company managing the small hydropower dam), and data from this LIDAR survey were available for the present research. Point density of the LIDAR-derived 3D point cloud was between 20-50 points/m2, with a vertical precision of 2-3 cm over flat surfaces. Moreover, between April and June 2015, we carried out a photogrammetric campaign based on aerial images taken with a UAV. The UAV-derived point cloud has a point density of 200-300 points/m2 and a vertical precision over flat control surfaces comparable to that of the LIDAR point cloud (2-3 cm). Simultaneously with the UAV campaign, we took several Wolman samples with the aim of characterizing the grain size distribution of the bed sediment. Wolman samples were taken following a geomorphological criterion (unit bars, head/tail of compound bars). Furthermore, some of the Wolman samples were repeated with the aim of defining the uncertainty of our sampling protocol. The LIDAR and UAV-derived point clouds were processed in order to check whether both point clouds were correctly co-aligned. After that, we estimated bed roughness using the detrended standard deviation of heights in a 40-cm window. For all this data treatment we used CloudCompare. Then, we measured the distribution of roughness in the same geomorphological units where we took the Wolman samples and compared it with the grain size distributions measured in the field: differences between the UAV point cloud roughness distributions and the measured grain size distributions (~1-2 cm) are of the same order of magnitude as the differences found between the repeated Wolman samples (~0.5-1.5 cm). Differences with the LIDAR-derived roughness distributions are only slightly higher, which could be due to the lower point density of the LIDAR point clouds.

  3. Pseudo-random bit generator based on lag time series

    NASA Astrophysics Data System (ADS)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

    In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we introduce a delay in the generation of the time series. When these new series are mapped as xn against xn+1, they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
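
    A minimal sketch of a lagged logistic-map bit generator in the spirit of the abstract: samples are discarded between outputs so that plotting xn against xn+1 no longer reveals the map, and bits are obtained by thresholding. It uses a single series with a fixed bifurcation parameter, unlike the paper's two lag series with positive and negative parameters, and all constants are illustrative.

```python
import numpy as np

def lagged_logistic_prbg(n_bits, r=3.99, lag=5, x0=0.3141, threshold=0.5):
    """Generate pseudo-random bits from a lagged logistic-map time series:
    iterate x_{n+1} = r * x_n * (1 - x_n), discard `lag` samples between
    outputs to hide the underlying map, and threshold to obtain bits."""
    x = x0
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        for _ in range(lag + 1):                    # the lag hides the xn vs xn+1 structure
            x = r * x * (1.0 - x)
        bits[i] = 1 if x > threshold else 0
    return bits

if __name__ == "__main__":
    b = lagged_logistic_prbg(10000)
    print("fraction of ones:", b.mean())            # should be close to 0.5
```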

  4. Computer generated hologram from point cloud using graphics processor.

    PubMed

    Chen, Rick H-Y; Wilkinson, Timothy D

    2009-12-20

    Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique.

  5. A Robust Registration Algorithm for Point Clouds from UAV Images for Change Detection

    NASA Astrophysics Data System (ADS)

    Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.

    2016-06-01

    Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs over the period of one year (i.e., May 2014 and May 2015). Note that due to the coarse accuracy of the on-board GPS receiver (e.g., +/- 5-10 m) the geo-tagged positions of the images were only used as initial values in the bundle block adjustment. Normal distances, signifying detected changes, varying from 20 cm to 4 m were identified between the two epochs. The accuracy of the co-registered surfaces was estimated by comparing non-active patches within the monitored area of interest. Since these non-active sub-areas are stationary, the computed normal distances should theoretically be close to zero. The quality control of the registration results showed that the average normal distance was approximately 4 cm, which is within the noise level of the reconstructed surfaces.

  6. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the low-expense, high-efficiency image collection process and the rich 3D and texture information present in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has a promising market for future commercial use such as urban planning or first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying the 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. Then the point clouds are refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, there are translation, rotation and scale differences between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix presents the parameters describing the translation, rotation and scale requirements. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough alignment result, which contains a larger offset compared to that of the test data because of the low quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups can be combined with each other to build a more complete point cloud, or be used as a complement to existing point clouds extracted from other sources. This research will both improve the state of the art of 3D city modeling and inspire new ideas in related fields.

  7. Fast Semantic Segmentation of 3D Point Clouds with Strongly Varying Density

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2016-06-01

    We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
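
    The covariance-based neighbourhood features underlying such classifiers can be sketched as follows: for each point, the eigenvalues of the local structure tensor yield linearity, planarity and scattering measures. This single-radius version is a simplification of the paper's multi-scale neighbourhood definition, and the radius is an arbitrary example value.

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points, radius=0.5):
    """Per-point covariance (structure tensor) features over a fixed-radius
    neighbourhood: linearity, planarity and scattering from the sorted
    eigenvalues l1 >= l2 >= l3."""
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3))
    for i, nbr_idx in enumerate(tree.query_ball_point(points, r=radius)):
        nbrs = points[nbr_idx]
        if len(nbrs) < 4:                           # not enough support for a covariance
            continue
        evals = np.linalg.eigvalsh(np.cov(nbrs.T))[::-1]   # descending order
        l1, l2, l3 = np.maximum(evals, 1e-12)
        feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
    return feats  # columns: linearity, planarity, scattering

if __name__ == "__main__":
    pts = np.random.rand(2000, 3)
    print(eigen_features(pts, radius=0.2)[:3])
```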

  8. Low Clouds and Fog Characterization over Iberian Peninsula using Meteosat Second Generation Images

    NASA Astrophysics Data System (ADS)

    Sánchez, Beatriz; Maqueda, Gregorio

    2014-05-01

    Fog is defined as a collection of suspended water droplets or ice crystals in the air near the Earth's surface that leads to a reduction of horizontal visibility below 1 km (National Oceanic and Atmospheric Administration, 1995). Fog is a stratiform cloud with similar radiative characteristics; for this reason, the difference between fog and low stratus clouds is of little importance for remote sensing applications. Fog and low clouds are important atmospheric phenomena, mainly because of their impact on traffic safety and air quality, acting as an obstruction to traffic on land, at sea and in the air. The purpose of this work is to develop a method for nighttime fog/low cloud detection and analysis using Meteosat Second Generation data. This study focuses on the characterization of these atmospheric phenomena in different study cases over the Iberian Peninsula with distinct orography. Firstly, fog/low cloud detection is implemented as a composite of the three infrared channels at 12.0, 10.8 and 3.9 µm from the SEVIRI radiometer on board the European geostationary satellite Meteosat-9. The detection algorithm makes use of a combination of these channels and their differences by creating RGB composite images. In this way, it displays the spatial coverage and location of fog entities. Secondly, this technique allows separating pixels flagged as fog/low clouds from clear pixels, assessing the properties of individual pixels using appropriate brightness temperature thresholds. Thus, it achieves a full analysis of the extent and distribution of fog and its evolution over time. The results of this study have been checked against ground-based point measurements available as METAR data. Despite the flaws in this sort of inter-comparison approach, the outcome points to accurate fog/low cloud detection. This work demonstrates how to obtain spatial information on this atmospheric phenomenon by means of satellite imagery.

  9. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.

  10. Assessing land leveling needs and performance with unmanned aerial system

    NASA Astrophysics Data System (ADS)

    Enciso, Juan; Jung, Jinha; Chang, Anjin; Chavez, Jose Carlos; Yeom, Junho; Landivar, Juan; Cavazos, Gabriel

    2018-01-01

    Land leveling is the initial step for increasing irrigation efficiencies in surface irrigation systems. The objective of this paper was to evaluate the potential of an unmanned aerial system (UAS) equipped with a digital camera to map ground elevations of a grower's field and compare them with field measurements. A secondary objective was to use UAS data to obtain a digital terrain model before and after land leveling. UAS data were used to generate orthomosaic images and three-dimensional (3-D) point cloud data by applying the structure from motion algorithm to the images. Ground control points (GCPs) were established around the study area, and they were surveyed using a survey-grade dual-frequency GPS unit for accurate georeferencing of the geospatial data products. A digital surface model (DSM) was then generated from the 3-D point cloud data before and after laser leveling to determine the topography before and after the leveling. The UAS-derived DSM was compared with terrain elevation measurements acquired from land surveying equipment for validation. Although an error of 0.3%, corresponding to a root mean square error of 0.11 m, was observed between the UAS-derived and ground-measured elevation data, the results indicated that UAS can be an efficient method for determining terrain elevation with acceptable accuracy when there are no plants on the ground, and that it can be used to assess the performance of a land leveling project.

  11. DTM Generation with Uav Based Photogrammetric Point Cloud

    NASA Astrophysics Data System (ADS)

    Polat, N.; Uysal, M.

    2017-11-01

    Nowadays Unmanned Aerial Vehicles (UAVs) are widely used in many applications for different purposes. Their benefits, however, are not fully realized without the integration of other equipment such as a digital camera, GPS, or laser scanner. The main scope of this paper is to evaluate the performance of a camera-equipped UAV for geomatics applications by way of Digital Terrain Model (DTM) generation over a small area. For this purpose, 7 ground control points were surveyed with RTK and 420 photographs were captured. Over 30 million georeferenced points were used in the DTM generation process. The accuracy of the DTM was evaluated with 5 check points, and the root mean square error was calculated as 17.1 cm for a flying altitude of 100 m. In addition, a LiDAR-derived DTM was used as a reference in order to calculate the correlation: the UAV-based DTM has a 94.5% correlation with the reference DTM. Outcomes of the study show that UAV photogrammetry data can be used for map production, surveying, and other engineering applications, with the advantages of low cost, time savings, and minimal fieldwork.
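
    The accuracy figures reported above can in principle be reproduced with a simple RMSE and correlation computation; the sketch below uses invented check-point elevations and synthetic rasters purely to show the arithmetic, not the paper's data.

```python
import numpy as np

def rmse(predicted, reference):
    """Root mean square error between two elevation samples."""
    d = np.asarray(predicted) - np.asarray(reference)
    return float(np.sqrt(np.mean(d ** 2)))

if __name__ == "__main__":
    # Hypothetical check-point elevations (metres): DTM values vs. surveyed values.
    dtm_z   = np.array([101.32, 98.75, 100.10, 99.48, 102.91])
    check_z = np.array([101.18, 98.90, 100.27, 99.35, 102.75])
    print("RMSE [m]:", round(rmse(dtm_z, check_z), 3))

    # Pearson correlation between a UAV-based and a reference DTM raster (synthetic).
    uav_dtm = np.random.rand(50, 50)
    ref_dtm = uav_dtm + np.random.normal(0, 0.05, uav_dtm.shape)
    r = np.corrcoef(uav_dtm.ravel(), ref_dtm.ravel())[0, 1]
    print("correlation:", round(r, 3))
```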

  12. Gaussian Radial Basis Function for Efficient Computation of Forest Indirect Illumination

    NASA Astrophysics Data System (ADS)

    Abbas, Fayçal; Babahenini, Mohamed Chaouki

    2018-06-01

    Global illumination of natural scenes such as forests in real time is one of the most complex problems to solve, because of the multiple inter-reflections between the light and the materials of the objects composing the scene. The major problem that arises is the computation of visibility. In fact, visibility is computed over the set of leaves visible from the center of a given leaf; given the enormous number of leaves present in a tree, this computation, performed for each leaf of the tree, severely reduces performance. We describe a new approach that approximates visibility queries, which proceeds in two steps. The first step is to generate a point cloud representing the foliage. We assume that the point cloud is composed of two classes (visible, not visible) that are non-linearly separable. The second step is to perform a point cloud classification by applying the Gaussian radial basis function, which measures the similarity, in terms of distance, between each leaf and a landmark leaf. This approximates the visibility queries in order to extract the leaves that will be used to compute the amount of indirect illumination exchanged between neighboring leaves. Our approach efficiently handles light exchanges in a forest scene, allows fast computation, and produces images of good visual quality, all while taking advantage of the immense computational power of the GPU.
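
    A minimal sketch of a Gaussian radial basis function used as a distance-based similarity between each leaf and a landmark leaf; the bandwidth sigma, the similarity threshold and the synthetic leaf centres are assumptions for illustration only, not the paper's parameters.

```python
import numpy as np

def gaussian_rbf_similarity(leaf_centers, landmark, sigma=1.0):
    """Similarity of each leaf to a landmark leaf using a Gaussian radial
    basis function of the Euclidean distance between leaf centres."""
    d2 = np.sum((leaf_centers - landmark) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

if __name__ == "__main__":
    leaves = np.random.rand(500, 3) * 10.0          # synthetic leaf centres
    landmark = leaves[0]
    sim = gaussian_rbf_similarity(leaves, landmark, sigma=2.0)
    candidates = np.where(sim > 0.5)[0]             # leaves kept for indirect-light exchange
    print(len(candidates), "candidate leaves for indirect-illumination exchange")
```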

  13. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds

    PubMed Central

    Dorninger, Peter; Pfeifer, Norbert

    2008-01-01

    Three dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows do exist. They are either based on photogrammetry or on LiDAR or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling require, generally, a high degree of human interaction and most automated approaches described in literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects. PMID:27873931

  14. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and it is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop them on a single dataset whose particularities could bias the development. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, going from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. The datasets provide various space configurations and contain numerous different occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  15. Efficient terrestrial laser scan segmentation exploiting data structure

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa

    2016-09-01

    New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and consequently does not require setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as another source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient compared to Random Sample Consensus (RANSAC), which is a common approach for point cloud segmentation.
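
    The panoramic representation exploited here can be sketched by converting each scan point to azimuth/elevation angles and binning a chosen layer (range, intensity, etc.) into an image. The angular resolutions and the synthetic scan are placeholders; a real scanner's fixed angular increments would define the grid directly.

```python
import numpy as np

def scan_to_panorama(points, values, h_res=0.5, v_res=0.5):
    """Map a terrestrial scan (N, 3), expressed in the scanner frame, to a
    panoramic image by binning azimuth/elevation angles; `values` is the
    per-point layer to rasterise (e.g. range or intensity)."""
    x, y, z = points.T
    rng = np.linalg.norm(points, axis=1)
    az = np.degrees(np.arctan2(y, x)) + 180.0                       # 0..360 degrees
    el = np.degrees(np.arcsin(z / np.maximum(rng, 1e-9))) + 90.0    # 0..180 degrees
    cols = np.minimum((az / h_res).astype(int), int(360 / h_res) - 1)
    rows = np.minimum((el / v_res).astype(int), int(180 / v_res) - 1)
    pano = np.full((int(180 / v_res), int(360 / h_res)), np.nan)
    pano[rows, cols] = values                        # last point wins per pixel
    return pano

if __name__ == "__main__":
    pts = np.random.randn(100000, 3)                 # synthetic scan points
    pano = scan_to_panorama(pts, np.linalg.norm(pts, axis=1))
    print(pano.shape)
```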

  16. Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data

    NASA Astrophysics Data System (ADS)

    Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun

    2014-11-01

    Ground-based LiDAR is currently one of the most effective city modeling tools and has been widely used for the three-dimensional reconstruction of outdoor objects. However, for indoor objects there are some technical bottlenecks due to the lack of a GPS signal. In this paper, based on high-precision indoor point cloud data obtained with an advanced indoor mobile LiDAR measuring system, high-precision models were built for all indoor ancillary facilities. The point cloud data we employed also contain color features, extracted by fusion with CCD images. Thus, the data carry both geometric and spectral information, which can be used to construct object surfaces and to restore the color and texture of the geometric model. Based on the Autodesk CAD platform and with the help of the PointSence plug-in, three-dimensional reconstruction of all indoor elements was realized. Specifically, Pointools Edit Pro was adopted to edit the point cloud, and then different types of indoor point cloud data were processed, including data format conversion, outline extraction and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world indoor scene was completed. Experimental results showed that high-precision 3D point cloud data obtained by indoor mobile measuring equipment can be used for the 3D reconstruction of all indoor elements, and that the methods proposed in this paper can realize this reconstruction efficiently. Moreover, the modeling precision could be controlled within 5 cm, which proved to be a satisfactory result.

  17. 3D point cloud analysis of structured light registration in computer-assisted navigation in spinal surgeries

    NASA Astrophysics Data System (ADS)

    Gupta, Shaurya; Guha, Daipayan; Jakubovic, Raphael; Yang, Victor X. D.

    2017-02-01

    Computer-assisted navigation is used by surgeons in spine procedures to guide pedicle screws to improve placement accuracy and, in some cases, to better visualize the patient's underlying anatomy. Intraoperative registration is performed to establish a correlation between the patient's anatomy and the pre/intra-operative image. Current algorithms rely on seeding points obtained directly from the exposed spinal surface to achieve clinically acceptable registration accuracy. Registration of these three-dimensional surface point clouds is prone to various systematic errors. The goal of this study was to evaluate the robustness of surgical navigation systems by looking at the relationship between the optical density of an acquired 3D point cloud and the corresponding surgical navigation error. A retrospective review of a total of 48 registrations performed using an experimental structured light navigation system developed within our lab was conducted. For each registration, the number of points in the acquired point cloud was evaluated relative to whether the registration was acceptable, the corresponding system-reported error and the target registration error. It was demonstrated that the number of points in the point cloud correlates neither with the acceptance/rejection of a registration nor with the system-reported error. However, a negative correlation was observed between the number of points in the point cloud and the corresponding sagittal angular error. Thus, the system-reported total registration points and accuracy are insufficient to gauge the accuracy of a navigation system, and the operating surgeon must verify and validate the registration based on anatomical landmarks prior to commencing surgery.

  18. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    Current point cloud registration software has high hardware requirements, involves a heavy workload with multiple interactive definitions, and the source code of the software with better processing results is not open. In view of this, a two-step registration method based on normal-vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm with a model of the local normal-vector distribution: it defines the adjacency region of the point cloud, sets up a local coordinate system for each key point, and obtains the transformation matrix to complete the rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
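
    The fine-registration stage (standard point-to-point ICP) can be sketched with nearest-neighbour matching plus a closed-form SVD (Kabsch) transform estimate, as below; the coarse FPFH/normal-distribution stage is not reproduced, and the synthetic clouds and iteration count are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Point-to-point ICP refinement: alternate nearest-neighbour matching
    with a closed-form (SVD/Kabsch) rigid transform estimate."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)                    # closest target point per source point
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                    # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t   # accumulate the composite transform
    return R_total, t_total

if __name__ == "__main__":
    tgt = np.random.rand(2000, 3)
    angle = np.radians(5.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    src = (tgt - 0.5) @ R_true.T + 0.5 + [0.02, -0.01, 0.03]
    R_est, t_est = icp(src, tgt)
    print("estimated translation:", np.round(t_est, 3))
```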

  19. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    NASA Astrophysics Data System (ADS)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  20. Object-Based Coregistration of Terrestrial Photogrammetric and ALS Point Clouds in Forested Areas

    NASA Astrophysics Data System (ADS)

    Polewski, P.; Erickson, A.; Yao, W.; Coops, N.; Krzystek, P.; Stilla, U.

    2016-06-01

    Airborne Laser Scanning (ALS) and terrestrial photogrammetry are methods applicable for mapping forested environments. While ground-based techniques provide valuable information about the forest understory, the measured point clouds are normally expressed in a local coordinate system, whose transformation into a georeferenced system requires additional effort. In contrast, ALS point clouds are usually georeferenced, yet the point density near the ground may be poor under dense overstory conditions. In this work, we propose to combine the strengths of the two data sources by co-registering the respective point clouds, thus enriching the georeferenced ALS point cloud with detailed understory information in a fully automatic manner. Due to markedly different sensor characteristics, coregistration methods which expect a high geometric similarity between keypoints are not suitable in this setting. Instead, our method focuses on the object (tree stem) level. We first calculate approximate stem positions in the terrestrial and ALS point clouds and construct, for each stem, a descriptor which quantifies the 2D and vertical distances to other stem centers (at ground height). Then, the similarities between all descriptor pairs from the two point clouds are calculated, and standard graph maximum matching techniques are employed to compute corresponding stem pairs (tiepoints). Finally, the tiepoint subset yielding the optimal rigid transformation between the terrestrial and ALS coordinate systems is determined. We test our method on simulated tree positions and a plot situated in the northern interior of the Coast Range in western Oregon, USA, using ALS data (76 x 121 m2) and a photogrammetric point cloud (33 x 35 m2) derived from terrestrial photographs taken with a handheld camera. Results on both simulated and real data show that the proposed stem descriptors are discriminative enough to derive good correspondences. Specifically, for the real plot data, 24 corresponding stems were coregistered with an average 2D position deviation of 66 cm.
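
    A minimal sketch of the object-level idea, under assumptions that are not taken from the paper: each stem is described by the sorted horizontal distances to its nearest neighbouring stems, descriptor similarity drives a one-to-one assignment (here a Hungarian assignment rather than the paper's graph maximum matching), and a rigid 2D transform is estimated from the matched stem centres with a Kabsch-style least-squares fit. Function and variable names are illustrative.

    ```python
    # Sketch: match tree stems between two position sets derived from different sensors
    # and estimate the rigid transform aligning them (assumptions noted in the text).
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist


    def stem_descriptor(positions, k=5):
        """For each stem, the sorted distances to its k nearest neighbouring stems."""
        d = cdist(positions, positions)
        np.fill_diagonal(d, np.inf)
        return np.sort(d, axis=1)[:, :k]


    def rigid_fit(src, dst):
        """Kabsch-style least-squares rotation + translation mapping src onto dst."""
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        u, _, vt = np.linalg.svd(src_c.T @ dst_c)
        r = vt.T @ u.T
        if np.linalg.det(r) < 0:            # avoid reflections
            vt[-1] *= -1
            r = vt.T @ u.T
        t = dst.mean(0) - r @ src.mean(0)
        return r, t


    terrestrial = np.random.rand(30, 2) * 30            # placeholder stem centres [m]
    angle = np.deg2rad(25.0)
    rot = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
    als = terrestrial @ rot.T + np.array([12.0, -4.0])  # simulated ALS stem centres

    cost = cdist(stem_descriptor(terrestrial), stem_descriptor(als))
    rows, cols = linear_sum_assignment(cost)            # one-to-one stem matching
    R, t = rigid_fit(terrestrial[rows], als[cols])
    residual = np.linalg.norm(terrestrial[rows] @ R.T + t - als[cols], axis=1)
    print("mean 2D residual [m]:", residual.mean())
    ```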

  1. Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.

    PubMed

    Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter

    2015-08-24

    We propose a novel fast method for full parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables in order to reduce the computational complexity in the calculation of the fields. Also, a novel technique for occlusion culling with little additional computation cost is introduced. Additionally, the method assigns a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.

  2. Large-scale urban point cloud labeling and reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu

    2018-04-01

    The large number of object categories and many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges in point cloud classification. In this paper, a novel framework is proposed for classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified linear units neural network named ReLu-NN, in which rectified linear units (ReLu) are used as the activation function instead of the traditional sigmoid in order to speed up convergence. Since the features of the point cloud are sparse, we reduce the number of neurons using dropout to avoid over-fitting during training. The set of feature descriptors for each 3D point is encoded through self-taught learning, and forms a discriminative feature representation which is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then they are reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the ReLu-NN introduced can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, and thus the intensive parameter tuning cost is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.
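
    As a hedged illustration of the general idea, and not of the paper's ReLu-NN architecture or its self-taught feature encoding, the sketch below trains a small ReLU-activated multilayer perceptron on per-point feature vectors using scikit-learn; the feature dimension, labels and regularisation choice (L2 rather than dropout) are placeholders.

    ```python
    # Sketch: ReLU-activated MLP for per-point classification of ALS features
    # (illustrative only; the paper's network, features and dropout scheme differ).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n_points, n_features = 5000, 12                  # placeholder feature dimension
    X = rng.normal(size=(n_points, n_features))      # stand-in per-point descriptors
    y = rng.integers(0, 4, size=n_points)            # stand-in labels (e.g. ground, roof, wall, tree)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(64, 32),
                        activation="relu",           # ReLU instead of sigmoid for faster convergence
                        alpha=1e-4,                  # L2 regularisation against over-fitting
                        early_stopping=True,
                        max_iter=500,
                        random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    ```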

  3. Optimal Exploitation of the Temporal and Spatial Resolution of SEVIRI for the Nowcasting of Clouds

    NASA Astrophysics Data System (ADS)

    Sirch, Tobias; Bugliaro, Luca

    2015-04-01

    An algorithm was developed to forecast the development of water and ice clouds separately for the following 5-120 minutes using satellite data from SEVIRI (Spinning Enhanced Visible and Infrared Imager) aboard Meteosat Second Generation (MSG). To derive cloud cover, optical thickness and cloud-top height of high ice clouds, the COCS algorithm ("The Cirrus Optical properties derived from CALIOP and SEVIRI during day and night", Kox et al. [2014]) is applied. For the determination of liquid water clouds, the APICS cloud algorithm ("Algorithm for the Physical Investigation of Clouds with SEVIRI", Bugliaro et al. [2011]) is used, which provides cloud cover, optical thickness and effective radius. The forecast rests upon an optical flow method that determines a motion vector field from two satellite images [Zinner et al., 2008]. To determine the ideal time separation of the satellite images used to derive the cloud motion vector field for every forecast horizon, the potential of the higher temporal resolution of the Meteosat Rapid Scan Service (5 instead of 15 minutes repetition rate) has been investigated. For the period from March to June 2013, forecasts of up to 4 hours in time steps of 5 min, based on images separated by intervals of 5, 10, 15 and 30 min, have been created. The results show that Rapid Scan data produce a small reduction of errors for forecast horizons up to 30 minutes. For the following time steps, forecasts generated with a time interval of 15 min should be used, and for forecasts up to several hours, computations with a time interval of 30 min provide the best results. For a better spatial resolution, the HRV channel (High Resolution Visible, 1 km instead of 3 km maximum spatial resolution at the sub-satellite point) has been integrated into the forecast. To detect clouds, the difference between the measured albedo from SEVIRI and the clear-sky albedo provided by MODIS has been used, together with the temporal development of this quantity. A prerequisite for this work was an adjustment of the geolocation accuracy for MSG and MODIS by shifting the MODIS data and quantifying the correlation between both data sets.

  4. Superposition and alignment of labeled point clouds.

    PubMed

    Fober, Thomas; Glinca, Serghei; Klebe, Gerhard; Hüllermeier, Eyke

    2011-01-01

    Geometric objects are often represented approximately in terms of a finite set of points in three-dimensional Euclidean space. In this paper, we extend this representation to what we call labeled point clouds. A labeled point cloud is a finite set of points, where each point is not only associated with a position in three-dimensional space, but also with a discrete class label that represents a specific property. This type of model is especially suitable for modeling biomolecules such as proteins and protein binding sites, where a label may represent an atom type or a physico-chemical property. Proceeding from this representation, we address the question of how to compare two labeled point clouds in terms of their similarity. Using fuzzy modeling techniques, we develop a suitable similarity measure as well as an efficient evolutionary algorithm to compute it. Moreover, we consider the problem of establishing an alignment of the structures in the sense of a one-to-one correspondence between their basic constituents. From a biological point of view, alignments of this kind are of great interest, since mutually corresponding molecular constituents offer important information about evolution and heredity, and can also serve as a means to explain a degree of similarity. In this paper, we therefore develop a method for computing pairwise or multiple alignments of labeled point clouds. To this end, we proceed from an optimal superposition of the corresponding point clouds and construct an alignment which is as much as possible in agreement with the neighborhood structure established by this superposition. We apply our methods to the structural analysis of protein binding sites.

  5. a Low-Cost Panoramic Camera for the 3d Documentation of Contaminated Crime Scenes

    NASA Astrophysics Data System (ADS)

    Abate, D.; Toschi, I.; Sturdy-Colls, C.; Remondino, F.

    2017-11-01

    Crime scene documentation is a fundamental task which has to be undertaken in a fast, accurate and reliable way, highlighting evidence which can be further used for ensuring justice for victims and for guaranteeing the successful prosecution of perpetrators. The main focus of this paper is on the documentation of a typical crime scene and on the rapid recording of any possible contamination that could have influenced its original appearance. A 3D reconstruction of the environment is first generated by processing panoramas acquired with the low-cost Ricoh Theta 360 camera, and further analysed to highlight the potential and limits of this emerging, consumer-grade technology. Then, a methodology is proposed for the rapid recording of changes occurring between the original and the contaminated crime scene. The approach is based on an automatic 3D feature-based data registration, followed by a cloud-to-cloud distance computation, taking as input the 3D point clouds generated before and after, e.g., the misplacement of evidence. All the algorithms adopted for panorama pre-processing, photogrammetric 3D reconstruction, and 3D geometry registration and analysis are presented, and all are currently available in open-source or low-cost software solutions.
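
    The cloud-to-cloud comparison step can be approximated with a simple nearest-neighbour distance computation. The sketch below, which assumes two already registered point clouds stored as NumPy arrays, computes per-point distances from the "after" cloud to the "before" cloud with a k-d tree and flags points whose distance exceeds a placeholder threshold as potential scene changes; this is a generic stand-in for the tool-specific computation used in the paper.

    ```python
    # Sketch: nearest-neighbour cloud-to-cloud distances between two registered scans.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(1)
    before = rng.uniform(0, 5, size=(20000, 3))              # stand-in "original scene" cloud
    after = np.vstack([before + rng.normal(0, 0.005, before.shape),
                       rng.uniform(2, 2.5, size=(200, 3))])  # same scene + a displaced object

    tree = cKDTree(before)
    dist, _ = tree.query(after, k=1)                         # distance to closest original point

    THRESHOLD = 0.05                                         # placeholder change threshold (scene units)
    changed = after[dist > THRESHOLD]
    print(f"{len(changed)} of {len(after)} points flagged as potential contamination")
    ```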

  6. A Case Study of Reverse Engineering Integrated in an Automated Design Process

    NASA Astrophysics Data System (ADS)

    Pescaru, R.; Kyratsis, P.; Oancea, G.

    2016-11-01

    This paper presents a design methodology which automates the generation of curves extracted from the point clouds that have been obtained by digitizing the physical objects. The methodology is described on a product belonging to the industry of consumables, respectively a footwear type product that has a complex shape with many curves. The final result is the automated generation of wrapping curves, surfaces and solids according to the characteristics of the customer's foot, and to the preferences for the chosen model, which leads to the development of customized products.

  7. Continuum Limit of Total Variation on Point Clouds

    NASA Astrophysics Data System (ADS)

    García Trillos, Nicolás; Slepčev, Dejan

    2016-04-01

    We consider point clouds obtained as random samples of a measure on a Euclidean domain. A graph representing the point cloud is obtained by assigning weights to edges based on the distance between the points they connect. Our goal is to develop mathematical tools needed to study the consistency, as the number of available data points increases, of graph-based machine learning algorithms for tasks such as clustering. In particular, we study when the cut capacity, and more generally total variation, on these graphs is a good approximation of the perimeter (total variation) in the continuum setting. We address this question in the setting of Γ-convergence. We obtain almost optimal conditions on the scaling, as the number of points increases, of the size of the neighborhood over which the points are connected by an edge for the Γ-convergence to hold. Taking the limit is enabled by a transportation-based metric which allows us to suitably compare functionals defined on different point clouds.
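
    Schematically, and in notation chosen here for illustration rather than copied from the paper, the graph functional under study is a rescaled sum of weighted differences over the sample points, with edge weights decaying with distance at the graph connectivity scale:

    ```latex
    % Schematic form of the graph total variation on a point cloud (notation illustrative).
    \[
      \mathrm{GTV}_{n,\varepsilon_n}(u)
      \;=\;
      \frac{1}{\varepsilon_n\, n^{2}}
      \sum_{i,j=1}^{n}
      \eta\!\left(\frac{\lvert x_i - x_j\rvert}{\varepsilon_n}\right)
      \,\bigl\lvert u(x_i) - u(x_j) \bigr\rvert ,
    \]
    % where x_1, ..., x_n are the sample points, \eta is a radially decreasing kernel defining
    % the edge weights, and \varepsilon_n -> 0 is the neighbourhood (connectivity) scale.
    % Gamma-convergence towards a weighted continuum total variation holds under almost
    % optimal conditions on how slowly \varepsilon_n may shrink as n grows.
    ```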

  8. Point cloud registration from local feature correspondences-Evaluation on challenging datasets.

    PubMed

    Petricek, Tomas; Svoboda, Tomas

    2017-01-01

    Registration of laser scans, or point clouds in general, is a crucial step of localization and mapping with mobile robots or in object modeling pipelines. A coarse alignment of the point clouds is generally needed before applying local methods such as the Iterative Closest Point (ICP) algorithm. We propose a feature-based approach to point cloud registration and evaluate the proposed method and its individual components on challenging real-world datasets. For a moderate overlap between the laser scans, the method provides a superior registration accuracy compared to state-of-the-art methods including Generalized ICP, 3D Normal-Distribution Transform, Fast Point-Feature Histograms, and 4-Points Congruent Sets. Compared to the surface normals, the points as the underlying features yield higher performance in both keypoint detection and establishing local reference frames. Moreover, sign disambiguation of the basis vectors proves to be an important aspect in creating repeatable local reference frames. A novel method for sign disambiguation is proposed which yields highly repeatable reference frames.

  9. Observational Evidence Against Mountain-Wave Generation of Ice Nuclei as a Prerequisite for the Formation of Three Solid Nitric Acid Polar Stratospheric Clouds Observed in the Arctic in Early December 1999

    NASA Technical Reports Server (NTRS)

    Pagan, Kathy L.; Tabazadeh, Azadeh; Drdla, Katja; Hervig, Mark E.; Eckermann, Stephen D.; Browell, Edward V.; Legg, Marion J.; Foschi, Patricia G.

    2004-01-01

    A number of recently published papers suggest that mountain-wave activity in the stratosphere, producing ice particles when temperatures drop below the ice frost point, may be the primary source of large NAT particles. In this paper we use measurements from the Advanced Very High Resolution Radiometer (AVHRR) instruments on board the National Oceanic and Atmospheric Administration (NOAA) polar-orbiting satellites to map out regions of ice clouds produced by stratospheric mountain-wave activity inside the Arctic vortex. Lidar observations from three DC-8 flights in early December 1999 show the presence of solid nitric acid (Type Ia or NAT) polar stratospheric clouds (PSCs). By using back trajectories and superimposing the position maps on the AVHRR cloud imagery products, we show that these observed NAT clouds could not have originated at locations of high-amplitude mountain-wave activity. We also show that mountain-wave PSC climatology data and Mountain Wave Forecast Model 2.0 (MWFM-2) raw hemispheric ray and grid box averaged hemispheric wave temperature amplitude hindcast data from the same time period are in agreement with the AVHRR data. Our results show that ice cloud formation in mountain waves cannot explain how at least three large scale NAT clouds were formed in the stratosphere in early December 1999.

  10. On the performance of metrics to predict quality in point cloud representations

    NASA Astrophysics Data System (ADS)

    Alexiou, Evangelos; Ebrahimi, Touradj

    2017-09-01

    Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.

  11. Semantic Segmentation of Building Elements Using Point Cloud Hashing

    NASA Astrophysics Data System (ADS)

    Chizhova, M.; Gurianov, A.; Hess, M.; Luhmann, T.; Brunn, A.; Stilla, U.

    2018-05-01

    For the interpretation of point clouds, the semantic definition of segments extracted from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be classified quite well and simply by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept etc.), including particular building parts that are visually detected. The key part of the procedure is a novel hashing-based method in which point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is also suitable for other buildings and objects characterized by a particular construction typology (e.g. industrial objects in standardized environments whose strict component design allows clear semantic modelling).
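
    A minimal sketch of the hashing idea as described, i.e. projecting a point cloud segment to a binary pixel image and hashing that representation; the grid resolution, projection plane and hash function below are assumptions for illustration, not the paper's implementation.

    ```python
    # Sketch: hash a point cloud segment via a binarised 2D projection.
    import hashlib
    import numpy as np


    def binary_projection_hash(points, resolution=64):
        """Project points onto the XZ plane, rasterise to a binary image, hash the bits."""
        xz = points[:, [0, 2]]
        mins, maxs = xz.min(0), xz.max(0)
        scale = (maxs - mins).max() or 1.0
        pix = np.clip(((xz - mins) / scale * (resolution - 1)).astype(int), 0, resolution - 1)
        image = np.zeros((resolution, resolution), dtype=np.uint8)
        image[pix[:, 1], pix[:, 0]] = 1                   # occupied pixels
        return hashlib.sha1(np.packbits(image).tobytes()).hexdigest()


    segment = np.random.rand(5000, 3)                     # stand-in building-element segment
    print(binary_projection_hash(segment))
    ```

    Segments whose binary images are identical (or, with a locality-sensitive scheme, nearly identical) can then be grouped and compared against a catalogue of labelled building elements.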

  12. Multiview point clouds denoising based on interference elimination

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu

    2018-03-01

    Newly emerging low-cost depth sensors offer huge potential for three-dimensional (3-D) modeling, but their high noise restricts these sensors from obtaining accurate results. Thus, we propose a method for denoising registered multiview point clouds with high noise to solve that problem. The proposed method is aimed at fully using redundant information to eliminate the interferences among point clouds of different views based on an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted average targets in accordance with two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested in experiments qualitatively and quantitatively. Results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to truncated signed distance function and moving least squares (MLS). Moreover, the resulting low-noise point clouds can be further smoothed by the MLS to achieve improved results. This study demonstrates the feasibility of obtaining fine 3-D models with high-noise devices, especially depth sensors such as Kinect.

  13. Using Radar, Lidar, and Radiometer measurements to Classify Cloud Type and Study Middle-Level Cloud Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhien

    2010-06-29

    The project is mainly focused on the characterization of cloud macrophysical and microphysical properties, especially for mixed-phase clouds and middle-level ice clouds, by combining radar, lidar, and radiometer measurements available from the ACRF sites. First, an advanced mixed-phase cloud retrieval algorithm will be developed to cover all mixed-phase clouds observed at the ACRF NSA site. The algorithm will be applied to the ACRF NSA observations to generate a long-term arctic mixed-phase cloud product for model validations and arctic mixed-phase cloud process studies. To improve the representation of arctic mixed-phase clouds in GCMs, an advanced understanding of mixed-phase cloud processes is needed. By combining retrieved mixed-phase cloud microphysical properties with in situ data and large-scale meteorological data, the project aims to better understand the generation of ice crystals in supercooled water clouds, the maintenance mechanisms of the arctic mixed-phase clouds, and their connections with large-scale dynamics. The project will try to develop a new retrieval algorithm to study more complex mixed-phase clouds observed at the ACRF SGP site. Compared with optically thin ice clouds, optically thick middle-level ice clouds are less studied because of limited available tools. The project will develop a new two-wavelength radar technique for optically thick ice cloud study at the SGP site by combining the MMCR with the W-band radar measurements. With this new algorithm, the SGP site will have a better capability to study all ice clouds. Another area of the proposal is to generate a long-term cloud type classification product for the multiple ACRF sites. The cloud type classification product will not only facilitate the generation of the integrated cloud product by applying different retrieval algorithms to different types of clouds operationally, but will also support other research to better understand cloud properties and to validate model simulations. The ultimate goal is to improve our cloud classification algorithm into a VAP.

  14. Indoor Modelling from Slam-Based Laser Scanner: Door Detection to Envelope Reconstruction

    NASA Astrophysics Data System (ADS)

    Díaz-Vilariño, L.; Verbree, E.; Zlatanova, S.; Diakité, A.

    2017-09-01

    Updated and detailed indoor models are increasingly demanded for various applications such as emergency management or navigational assistance. The consolidation of new portable and mobile acquisition systems has led to a higher availability of 3D point cloud data from indoors. In this work, we explore the combined use of point clouds and trajectories from a SLAM-based laser scanner to automate the reconstruction of building interiors. The methodology starts with door detection, since doors represent transitions from one indoor space to another and thus provide an initial indication of the global configuration of the point cloud into building rooms. For this purpose, the trajectory is used to create a vertical point cloud profile in which doors are detected as local minima of vertical distances. As point cloud and trajectory are related by time stamp, this relation is used to subdivide the point cloud into subspaces according to the location of the doors. The correspondence between subspaces and building rooms is not unambiguous: one subspace always corresponds to one room, but one room is not necessarily depicted by just one subspace, for example in the case of a room containing several doors in which the acquisition is performed in a discontinuous way. The labelling problem is formulated as a combinatorial approach solved as a minimum-energy optimization. Once the point cloud is subdivided into building rooms, the envelope (formed by walls, ceilings and floors) is reconstructed for each space. The connectivity between spaces is included by adding the previously detected doors to the reconstructed model. The methodology is tested in a real case study.
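
    The door-detection step lends itself to a compact sketch: assuming a scanner trajectory sampled over time and, for each trajectory position, the vertical clearance measured from the point cloud above the device, door passages appear as local minima of that clearance profile. The clearance values below are simulated, and names and thresholds are illustrative rather than taken from the paper.

    ```python
    # Sketch: detect door passages as local minima of a vertical-clearance profile
    # sampled along the scanner trajectory (clearance values simulated here).
    import numpy as np
    from scipy.signal import find_peaks

    t = np.linspace(0, 120, 1200)                      # trajectory time stamps [s]
    clearance = 2.6 + 0.05 * np.random.randn(t.size)   # typical ceiling height above device [m]
    for door_time in (30, 75, 100):                    # simulated door passages
        clearance -= 0.55 * np.exp(-((t - door_time) / 1.5) ** 2)

    # Doors = local minima of clearance, i.e. peaks of the negated profile that dip
    # clearly below the surrounding ceiling height.
    peaks, _ = find_peaks(-clearance, prominence=0.3)
    door_times = t[peaks]
    print("door passages detected near t =", np.round(door_times, 1))
    # The detected time stamps can then be used to split the point cloud into subspaces.
    ```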

  15. a Point Cloud Classification Approach Based on Vertical Structures of Ground Objects

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Hu, Q.; Hu, W.

    2018-04-01

    This paper proposes a novel method for point cloud classification using vertical structural characteristics of ground objects. Since urbanization is developing rapidly nowadays, urban ground objects also change frequently. Conventional photogrammetric methods cannot satisfy the requirement of updating ground object information efficiently, so LiDAR (Light Detection and Ranging) technology is employed to accomplish this task. LiDAR data, namely point cloud data, provide detailed three-dimensional coordinates of ground objects, but this kind of data is discrete and unorganized. To accomplish ground object classification with point clouds, we first construct horizontal grids and vertical layers to organize the point cloud data, then calculate vertical characteristics, including density and measures of dispersion, and form a characteristic curve for each grid cell. With the help of PCA processing and the K-means algorithm, we analyze the similarities and differences of the characteristic curves. Curves with similar features are classified into the same class, and the point clouds corresponding to these curves are classified accordingly. The whole process is simple but effective, and this approach does not need the assistance of other data sources. In this study, point cloud data are classified into three classes: vegetation, buildings, and roads. When the horizontal grid spacing and vertical layer spacing are 3 m and 1 m respectively, the vertical characteristic is set as density, and the number of dimensions after PCA processing is 11, the overall precision of the classification result is about 86.31 %. The result can help us quickly understand the distribution of various ground objects.
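
    A hedged sketch of the processing chain described above: points are binned into horizontal grid cells and vertical layers, the per-layer point density forms a characteristic curve for each cell, and PCA followed by k-means groups similar curves. The grid spacings, PCA dimension and cluster count follow the values quoted in the abstract; the input data and everything else are illustrative stand-ins.

    ```python
    # Sketch: per-cell vertical density curves, PCA compression and k-means clustering.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    points = np.random.rand(200000, 3) * [300, 300, 30]    # stand-in LiDAR points (x, y, z in m)

    GRID, LAYER, MAX_H = 3.0, 1.0, 30.0                    # 3 m cells, 1 m vertical layers
    n_layers = int(MAX_H / LAYER)
    cell_idx = np.floor(points[:, :2] / GRID).astype(int)
    layer_idx = np.clip((points[:, 2] / LAYER).astype(int), 0, n_layers - 1)

    cells, inverse = np.unique(cell_idx, axis=0, return_inverse=True)
    curves = np.zeros((len(cells), n_layers))
    np.add.at(curves, (inverse, layer_idx), 1)             # point count per cell and layer
    curves /= curves.sum(axis=1, keepdims=True)            # normalise to density curves

    features = PCA(n_components=11).fit_transform(curves)  # 11 dimensions as in the abstract
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
    # labels: one of three classes (e.g. vegetation, buildings, roads) per grid cell
    print(np.bincount(labels))
    ```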

  16. Low Cost and Efficient 3d Indoor Mapping Using Multiple Consumer Rgb-D Cameras

    NASA Astrophysics Data System (ADS)

    Chen, C.; Yang, B. S.; Song, S.

    2016-06-01

    Driven by the miniaturization and light weight of positioning and remote sensing sensors, as well as the urgent need to fuse indoor and outdoor maps for next-generation navigation, 3D indoor mapping from mobile scanning is a hot research and application topic. The point clouds with auxiliary data such as colour and infrared images derived from a 3D indoor mobile mapping suite can be used in a variety of novel applications, including indoor scene visualization, automated floorplan generation, gaming, reverse engineering, navigation, simulation, etc. State-of-the-art 3D indoor mapping systems equipped with multiple laser scanners produce accurate point clouds of building interiors containing billions of points. However, these laser scanner based systems are mostly expensive and not portable. Low-cost consumer RGB-D cameras provide an alternative way to solve the core challenge of indoor mapping, which is capturing the detailed underlying geometry of building interiors. Nevertheless, RGB-D cameras have a very limited field of view, resulting in low efficiency at the data collection stage and incomplete datasets missing major building structures (e.g. ceilings, walls). Attempting to collect a complete scene without data gaps using a single RGB-D camera is not technically sound because of the large amount of human labour required and the number of position parameters that need to be solved. To find an efficient and low-cost way to solve 3D indoor mapping, in this paper we present an indoor mapping suite prototype that is built upon a novel calibration method which calibrates the internal and external parameters of multiple RGB-D cameras. Three Kinect sensors are mounted on a rig with different view directions to form a large field of view. The calibration procedure is threefold: 1) the internal parameters of the colour and infrared camera inside each Kinect are calibrated using a chessboard pattern; 2) the external parameters between the colour and infrared camera inside each Kinect are calibrated using a chessboard pattern; 3) the external parameters between the Kinects are first calculated using a pre-set calibration field and further refined by an iterative closest point algorithm. Experiments are carried out to validate the proposed method on RGB-D datasets collected by the indoor mapping suite prototype. The effectiveness and accuracy of the proposed method are evaluated by comparing the point clouds derived from the prototype with ground truth data collected by a commercial terrestrial laser scanner at ultra-high density. The overall analysis of the results shows that the proposed method achieves seamless integration of multiple point clouds from different RGB-D cameras collected at 30 frames per second.

  17. Indoor A* Pathfinding Through an Octree Representation of a Point Cloud

    NASA Astrophysics Data System (ADS)

    Rodenberg, O. B. P. M.; Verbree, E.; Zlatanova, S.

    2016-10-01

    There is a growing demand for 3D indoor pathfinding applications. Pathfinding methods were researched in the field of robotics during the last decades of the 20th century, but focussed on 2D navigation. Nowadays we would like to have the ability to help people navigate inside buildings or send a drone inside a building when this is too dangerous for people. What these examples have in common is that an object with a certain geometry needs to find an optimal collision-free path between a start and goal point. This paper presents a new workflow for pathfinding through an octree representation of a point cloud. We applied the following steps: 1) the point cloud is processed so it fits best in an octree; 2) during the octree generation the interior empty nodes are filtered and further processed; 3) for each interior empty node the distance to the closest occupied node directly under it is computed; 4) a network graph is computed for all empty nodes; 5) the A* pathfinding algorithm is conducted. This workflow takes into account the connectivity of each node to all possible neighbours (face, edge and vertex, and all sizes). In addition, collision avoidance is pre-processed in two steps: first, the clearance of each empty node is computed, and then the maximal crossing value between two empty neighbouring nodes is computed. The clearance is used to select interior empty nodes of appropriate size and the maximal crossing value is used to filter the network graph. Finally, both these datasets are used in A* pathfinding.
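
    To make the idea concrete, here is a simplified sketch under assumptions that depart from the paper: a regular voxel grid stands in for the octree, empty voxels with sufficient clearance from occupied ones become graph nodes, and NetworkX's A* finds a path with a Euclidean heuristic. The clearance test plays the role of the collision-avoidance preprocessing; the simulated indoor cloud, voxel size and clearance threshold are placeholders.

    ```python
    # Sketch: A* pathfinding through the empty space of a voxelised point cloud
    # (a regular voxel grid stands in for the octree; clearance is enforced per node).
    import itertools

    import networkx as nx
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    VOXEL = 0.25                                           # voxel edge length [m]
    EXTENT = np.array([10.0, 10.0, 3.0])                   # room extent [m]

    # Stand-in indoor cloud: points sampled on the floor and the ceiling only.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 10, size=(20000, 2))
    points = np.vstack([np.column_stack([xy, np.zeros(len(xy))]),
                        np.column_stack([xy, np.full(len(xy), 3.0)])])

    shape = np.ceil(EXTENT / VOXEL).astype(int)
    occupied = np.zeros(shape, dtype=bool)
    idx = np.clip((points / VOXEL).astype(int), 0, shape - 1)
    occupied[tuple(idx.T)] = True

    # Clearance = distance (in voxels) from each empty voxel to the nearest occupied one.
    clearance = distance_transform_edt(~occupied)
    free = clearance >= 2                                  # agent needs roughly 0.5 m of clearance

    # Build the navigation graph over free voxels (face, edge and vertex neighbours).
    graph = nx.Graph()
    offsets = [o for o in itertools.product((-1, 0, 1), repeat=3) if any(o)]
    for v in (tuple(map(int, r)) for r in np.argwhere(free)):
        for off in offsets:
            n = tuple(int(a + b) for a, b in zip(v, off))
            if all(0 <= n[k] < shape[k] for k in range(3)) and free[n]:
                graph.add_edge(v, n, weight=float(np.linalg.norm(off)))

    free_voxels = sorted(graph.nodes)
    start, goal = free_voxels[0], free_voxels[-1]          # two far-apart free voxels
    path = nx.astar_path(graph, start, goal, weight="weight",
                         heuristic=lambda a, b: float(np.linalg.norm(np.subtract(a, b))))
    print(f"path with {len(path)} voxels from {start} to {goal}")
    ```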

  18. Integrated Change Detection and Classification in Urban Areas Based on Airborne Laser Scanning Point Clouds.

    PubMed

    Tran, Thi Huong Giang; Ressl, Camillo; Pfeifer, Norbert

    2018-02-03

    This paper suggests a new approach for change detection (CD) in 3D point clouds. It combines classification and CD in one step using machine learning. The point cloud data of both epochs are merged for computing features of four types: features describing the point distribution, a feature relating to relative terrain elevation, features specific for the multi-target capability of laser scanning, and features combining the point clouds of both epochs to identify the change. All these features are merged in the points and then training samples are acquired to create the model for supervised classification, which is then applied to the whole study area. The final results reach an overall accuracy of over 90% for both epochs of eight classes: lost tree, new tree, lost building, new building, changed ground, unchanged building, unchanged tree, and unchanged ground.
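
    As a hedged, schematic illustration of the "classification and change detection in one step" idea, and not of the paper's actual feature set or classifier configuration, the sketch below trains a random forest on per-point feature vectors assembled from both epochs and predicts one of the combined change/class labels. All features and labels are simulated, so the reported accuracy here is meaningless and only the workflow is of interest.

    ```python
    # Sketch: one-step change detection + classification from merged two-epoch features
    # (features and labels are stand-ins; the paper's descriptors differ).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    CLASSES = ["lost tree", "new tree", "lost building", "new building",
               "changed ground", "unchanged building", "unchanged tree", "unchanged ground"]

    rng = np.random.default_rng(42)
    n_points = 20000
    # Columns stand in for point-distribution features, relative terrain elevation,
    # multi-target (echo) features and cross-epoch difference features.
    X = rng.normal(size=(n_points, 10))
    y = rng.integers(0, len(CLASSES), size=n_points)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    ```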

  19. A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising

    NASA Astrophysics Data System (ADS)

    Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua

    2018-04-01

    In order to remove noise from three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. Firstly, large-scale outliers are removed based on statistics of the neighboring points within a radius r. Then, the algorithm estimates the curvature of the point cloud data using a conicoid (paraboloid) fitting method and calculates the curvature feature value. Finally, the proposed clustering algorithm is applied to calculate the weighted cluster centers, which are regarded as the new points. The experimental results show that this approach handles noise of different scales and intensities in point clouds with high precision while preserving features, and that it is robust to different noise models.
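
    The first stage of the pipeline, removing large-scale outliers from neighbour statistics within a radius r, can be sketched as follows; the radius and minimum-neighbour threshold are illustrative choices, and the subsequent curvature-weighted clustering is not reproduced here.

    ```python
    # Sketch: large-scale outlier removal by counting neighbours within radius r.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(3)
    surface = rng.uniform(0, 1, size=(20000, 3)) * [1, 1, 0.01]   # stand-in scanned surface
    outliers = rng.uniform(-0.5, 1.5, size=(200, 3))              # sparse large-scale noise
    cloud = np.vstack([surface, outliers])

    R, MIN_NEIGHBOURS = 0.03, 10                                  # illustrative parameters
    tree = cKDTree(cloud)
    counts = np.array([len(n) - 1 for n in tree.query_ball_point(cloud, R)])
    clean = cloud[counts >= MIN_NEIGHBOURS]
    print(f"removed {len(cloud) - len(clean)} outlier points")
    ```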

  20. Modeling right-lateral offset of a Late Pleistocene terrace riser along the Polaris fault using ground based LiDAR imagery

    NASA Astrophysics Data System (ADS)

    Howle, J. F.; Bawden, G. W.; Hunter, L. E.; Rose, R. S.

    2009-12-01

    High resolution (centimeter level) three-dimensional point-cloud imagery of offset glacial outwash deposits was collected using ground-based tripod LiDAR (T-LiDAR) to characterize the cumulative fault slip across the recently identified Polaris fault (Hunter et al., 2009) near Truckee, California. The type-section site for the Polaris fault is located 6.5 km east of Truckee where progressive right-lateral displacement of middle to late Pleistocene deposits is evident. Glacial outwash deposits, aggraded during the Tioga glaciation, form a flat-lying ‘fill’ terrace on both the north and south sides of the modern Truckee River. During the Tioga deglaciation, melt water incised into the terrace, producing fluvial scarps or terrace risers (Birkeland, 1964). Subsequently, the terrace risers on both banks have been right-laterally offset by the Polaris fault. By using T-LiDAR on an elevated tripod (4.25 m high), we collected 3D high-resolution (thousands of points per square meter; ± 4 mm) point-cloud imagery of the offset terrace risers. Vegetation was removed from the data using commercial software, and large protruding boulders were manually deleted to generate a bare-earth point-cloud dataset with an average data density of over 240 points per square meter. From the bare-earth point cloud we mathematically reconstructed a pristine terrace/scarp morphology on both sides of the fault, defined coupled sets of piercing points, and extracted a corresponding displacement vector. First, the Polaris fault was approximated as a vertical plane that bisects the offset terrace risers, as well as bisecting linear swales and tectonic depressions in the outwash terrace. Then, piercing points to the vertical fault plane were constructed from the geometry of the geomorphic elements on either side of the fault. On each side of the fault, the best-fit modeled outwash plane is projected laterally and the best-fit modeled terrace riser projected upward to a virtual intersection in space, creating a vector. These constructed vectors were projected to intersection with the fault plane, defining statistically significant piercing points. The distance between the coupled set of piercing points, within the plane of the fault, is the cumulative displacement vector. To assess the variability of the modeled geomorphic surfaces, including surface roughness and nonlinearity, we generated a suite of displacement models by systematically incorporating larger areas of the model domain symmetrically about the fault. Preliminary results of 10 models yield an average cumulative displacement of 5.6 m (1 Std Dev = 0.31 m). As previously described, Tioga deglaciation melt water incised into the outwash terrace leaving terrace risers that were subsequently offset by the Polaris fault. Therefore, the age of the Tioga outwash terrace represents a maximum limiting age of the tectonic displacement. Using regional age constraints of 15 to 13 kya for the Tioga outwash terrace (Benson et al., 1990; Clark and Gillespie, 1997; James et al., 2002) and the above model results, we estimate a preliminary minimum fault slip rate of 0.40 ± 0.05 mm/yr for the Polaris type-section site.

  1. Cortical Surface Registration for Image-Guided Neurosurgery Using Laser-Range Scanning

    PubMed Central

    Sinha, Tuhin K.; Cash, David M.; Galloway, Robert L.; Weil, Robert J.

    2013-01-01

    In this paper, a method of acquiring intraoperative data using a laser range scanner (LRS) is presented within the context of model-updated image-guided surgery. Registering textured point clouds generated by the LRS to tomographic data is explored using established point-based and surface techniques as well as a novel method that incorporates geometry and intensity information via mutual information (SurfaceMI). Phantom registration studies were performed to examine accuracy and robustness for each framework. In addition, an in vivo registration is performed to demonstrate feasibility of the data acquisition system in the operating room. Results indicate that SurfaceMI performed better in many cases than point-based (PBR) and iterative closest point (ICP) methods for registration of textured point clouds. Mean target registration error (TRE) for simulated deep tissue targets in a phantom was 1.0 ± 0.2, 2.0 ± 0.3, and 1.2 ± 0.3 mm for PBR, ICP, and SurfaceMI, respectively. With regard to in vivo registration, the mean TRE of vessel contour points for each framework was 1.9 ± 1.0, 0.9 ± 0.6, and 1.3 ± 0.5 mm for PBR, ICP, and SurfaceMI, respectively. The methods discussed in this paper in conjunction with the quantitative data provide impetus for using LRS technology within the model-updated image-guided surgery framework. PMID:12906252

  2. Section-Based Tree Species Identification Using Airborne LIDAR Point Cloud

    NASA Astrophysics Data System (ADS)

    Yao, C.; Zhang, X.; Liu, H.

    2017-09-01

    The application of LiDAR data in forestry initially focused on mapping forest communities, primarily intended for large-scale forest management and planning. With smaller-footprint and higher-sampling-density LiDAR data becoming available, detecting individual overstory trees, estimating crown parameters and identifying tree species have been demonstrated to be practicable. This paper proposes a section-based protocol for tree species identification, taking the palm tree as an example. The section-based method detects objects through profiles along different directions, basically along the X-axis or Y-axis, and improves the utilization of spatial information to generate accurate results. Firstly, the tree points are separated from man-made object points by decision-tree-based rules, and a Crown Height Model (CHM) is created by subtracting the Digital Terrain Model (DTM) from the Digital Surface Model (DSM). Then key points are calculated and extracted to locate individual trees, and specific tree parameters related to species information, such as crown height, crown radius and cross point, are estimated. Finally, with these parameters we are able to identify certain tree species. Compared to species information measured on the ground, the proportion of correctly identified trees across all plots reached up to 90.65 %. The identification results in this research demonstrate the ability to distinguish palm trees using LiDAR point clouds. Furthermore, with more prior knowledge, the section-based method enables trees to be classified into different classes.
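
    A minimal sketch of the CHM and tree-top localisation steps under simplified assumptions: gridded DSM and DTM rasters are assumed to exist already (they are simulated below), and a fixed-size local-maximum filter stands in for the paper's key-point extraction.

    ```python
    # Sketch: crown height model (CHM = DSM - DTM) and tree-top detection by local maxima.
    # DSM and DTM rasters are simulated; the 9 x 9 search window is an illustrative choice.
    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    rng = np.random.default_rng(7)
    dtm = gaussian_filter(rng.normal(0.0, 0.5, (200, 200)), 10)           # smooth terrain [m]
    crowns = gaussian_filter((rng.random((200, 200)) < 0.002).astype(float), 4) * 1500.0
    dsm = dtm + crowns + rng.normal(0.0, 0.1, (200, 200))                 # terrain + canopy

    chm = dsm - dtm                                                       # crown height model
    tops = (chm == maximum_filter(chm, size=9)) & (chm > 2.0)             # tall local maxima
    rows, cols = np.nonzero(tops)
    print(f"{len(rows)} candidate tree tops, mean height {chm[rows, cols].mean():.1f} m")
    ```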

  3. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations.

    PubMed

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-04-11

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10-0.20 m, and vertical accuracy was approximately 0.01-0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed.

  4. Advanced Visualization and Interactive Display Rapid Innovation and Discovery Evaluation Research (VISRIDER) Program Task 6: Point Cloud Visualization Techniques for Desktop and Web Platforms

    DTIC Science & Technology

    2017-04-01

    Report documentation excerpt (period of performance: October 2013 - September 2014). The task evaluated various point cloud visualization techniques for viewing large-scale LiDAR datasets and assessed their potential use for thick-client desktop and web platforms.

  5. Inventory of File WAFS_blended_2014102006f06.grib2

    Science.gov Websites

    GRIB2 inventory excerpt: 6-hour forecast in-cloud turbulence (CTP) fields [%] at 700 mb and 600 mb, given as spatial averages and spatial maxima (code table 4.15=3, #points=1).

  6. Observations of the boundary layer, cloud, and aerosol variability in the southeast Pacific near-coastal marine stratocumulus during VOCALS-REx

    NASA Astrophysics Data System (ADS)

    Zheng, X.; Albrecht, B.; Jonsson, H. H.; Khelif, D.; Feingold, G.; Minnis, P.; Ayers, K.; Chuang, P.; Donaher, S.; Rossiter, D.; Ghate, V.; Ruiz-Plancarte, J.; Sun-Mack, S.

    2011-09-01

    Aircraft observations made off the coast of northern Chile in the Southeastern Pacific (20° S, 72° W; named Point Alpha) from 16 October to 13 November 2008 during the VAMOS Ocean-Cloud-Atmosphere-Land Study-Regional Experiment (VOCALS-REx), combined with meteorological reanalysis, satellite measurements, and radiosonde data, are used to investigate the boundary layer (BL) and aerosol-cloud-drizzle variations in this region. On days without predominantly synoptic and meso-scale influences, the BL at Point Alpha was typical of a non-drizzling stratocumulus-topped BL. Entrainment rates calculated from the near cloud-top fluxes and turbulence in the BL at Point Alpha appeared to be weaker than those in the BL over the open ocean west of Point Alpha and the BL near the coast of the northeast Pacific. The cloud liquid water path (LWP) varied between 15 g m-2 and 160 g m-2. The BL had a depth of 1140 ± 120 m, and was generally well-mixed and capped by a sharp inversion on days without predominantly synoptic and meso-scale influences. The wind direction generally switched from southerly within the BL to northerly above the inversion. On days when a synoptic system and related mesoscale coastal circulations affected conditions at Point Alpha (29 October-4 November), a moist layer above the inversion moved over Point Alpha, and the total-water mixing ratio above the inversion was larger than that within the BL. The accumulation mode aerosol varied from 250 to 700 cm-3 within the BL, and CCN at 0.2 % supersaturation within the BL ranged between 150 and 550 cm-3. The main aerosol source at Point Alpha was horizontal advection within the BL from the south. The average cloud droplet number concentration ranged between 80 and 400 cm-3. While the mean LWP retrieved from GOES was in good agreement with the in situ measurements, the GOES-derived cloud droplet effective radius tended to be larger than that from the aircraft in situ observations near cloud top. The aerosol and cloud LWP relationship reveals that during the typical well-mixed BL days the cloud LWP increased with the CCN concentrations. On the other hand, meteorological factors and the decoupling processes have large influences on the cloud LWP variation as well.

  7. Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets

    NASA Astrophysics Data System (ADS)

    Gold, P. O.; Cowgill, E.; Kreylos, O.

    2009-12-01

    Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaiced LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is essential for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point cloud, we scan from multiple locations an object of known geometry (a cylinder mounted above a square box). Preliminary results show that even in a controlled experimental scan of an object of known dimensions, there is significant variability in the precision of the registered point cloud. For example, when 3 scans of the central object are registered using 4 known points (maximum time, maximum equipment), the point clouds align to within ~1 cm (normal to the object surface). However, when the same point clouds are registered with only 1 known point (minimum time, minimum equipment), misalignment of the point clouds can range from 2.5 to 5 cm, depending on target type. The greater misalignment of the 3 point clouds when registered with fewer known points stems from the field method employed in acquiring the dataset and demonstrates the impact of field workflow on LiDAR dataset precision. By quantifying the degree of scan mismatch in results such as this, we can provide users with the information needed to maximize efficiency in remote field surveys.

  8. Automated estimation of leaf distribution for individual trees based on TLS point clouds

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Rutzinger, Martin; Bremer, Magnus

    2017-04-01

    Light Detection and Ranging (LiDAR), especially ground-based LiDAR (Terrestrial Laser Scanning, TLS), is an operationally used and widely available measurement tool supporting forest inventory updating and research in forest ecology. High-resolution point clouds from TLS already represent single leaves, which can be used for a more precise estimation of the Leaf Area Index (LAI) and for more accurate biomass estimation. However, a methodology for extracting single leaves of individual trees from unclassified point clouds is still missing. The aim of this study is to present a novel segmentation approach to extract single leaves and derive features related to leaf morphology (such as area, slope, length and width) of each single leaf from TLS point cloud data. For the study, two exemplary single trees were scanned in leaf-on condition on the university campus of Innsbruck during calm wind conditions. A northern red oak (Quercus rubra) was scanned by a discrete-return Optech ILRIS-3D TLS scanner and a tulip tree (Liriodendron tulipifera) with a Riegl VZ-6000 scanner. During the scanning campaign a reference dataset was measured in parallel with the scanning: 230 leaves were randomly collected around the lower branches of the trees and photos were taken. The developed workflow consists of the following steps: first, normal vectors and eigenvalues are calculated based on a user-specified neighborhood. Then, using the direction of the largest eigenvalue, outliers (i.e. ghost points) are removed. After that, region-growing segmentation based on curvature and the angles between normal vectors is applied to the filtered point cloud. A RANSAC plane-fitting algorithm is applied to each segment in order to extract segment-based normal vectors. Using the related features of the calculated segments, the stem and branches are labeled as non-leaf and the other segments are classified as leaf. The segmentation parameters were validated as follows: i) the total area of the collected leaves and of the segmented point cloud, ii) the segmented leaf length-width ratios, and iii) the distributions of leaf area for the segmented and the reference leaves were compared, and the ideal parameter set was found. The results show that the leaves can be captured with the developed workflow and that the slope can be determined robustly for the segmented leaves. However, area, length and width values depend systematically on the angle and the distance from the scanner. To correct this systematic underestimation, more systematic measurements or LiDAR simulation are required for further detailed analysis. The results of the leaf segmentation algorithm show high potential for generating more precise tree models with correctly located leaves, in order to provide more precise input models for biological modeling of LAI or for atmospheric correction studies. The presented workflow can also be used to monitor changes in leaf angle due to sun irradiation, water balance, and the day-night rhythm.

  9. Comparison of 3D point clouds produced by LIDAR and UAV photoscan in the Rochefort cave (Belgium)

    NASA Astrophysics Data System (ADS)

    Watlet, Arnaud; Triantafyllou, Antoine; Kaufmann, Olivier; Le Mouelic, Stéphane

    2016-04-01

    Amongst today's techniques that are able to produce 3D point clouds, LIDAR and UAV (Unmanned Aerial Vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high-resolution and high-precision 3D point clouds, but such methods are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different kinds of environments. Moreover, the photogrammetric processing workflow for digital images taken with UAVs has become easier with the rise of many affordable software packages (e.g. Agisoft, PhotoModeler3D, VisualSFM). We present here a challenging study made at the Rochefort Cave Laboratory (South Belgium) comprising surface and underground surveys. The site is located in the Belgian Variscan fold-and-thrust belt, a region that shows many karstic networks within Devonian limestone units. A LIDAR scan has been acquired in the main chamber of the cave (~ 15000 m³) to produce a 3D point cloud of its inner walls and infer geological beds and structures. Even though the use of the LIDAR instrument was not really comfortable in such a caving environment, the collected data showed a remarkable precision according to the geometry of a few control points. We also decided to perform another challenging survey of the same cave chamber by modelling a 3D point cloud using photogrammetry of a set of DSLR camera pictures taken from the ground and UAV pictures. The aim was to compare both techniques in terms of (i) implementation of data acquisition and processing, (ii) quality of the resulting 3D point clouds (point density, field vs cloud recovery and point precision), and (iii) their application for geological purposes. Through the Rochefort case study, the main conclusions are that the LIDAR technique provides higher density point clouds with slightly higher precision than the photogrammetry method. However, 3D data modeled by photogrammetry provide visible light spectral information for each modeled voxel and interpolated vertex, which can be useful attributes for clustering during data treatment. We thus illustrate such applications at the Rochefort cave by using both sources of 3D information to quantify the orientation of inaccessible geological structures (e.g. faults, tectonic and gravitational joints, and sediment bedding), cluster these structures using color information gathered from the UAV's 3D point cloud and compare these data to structural data surveyed in the field. An additional drone photoscan was also conducted in the surface sinkhole giving access to the surveyed underground cavity to seek geological bodies' connections.

  10. Study into Point Cloud Geometric Rigidity and Accuracy of TLS-Based Identification of Geometric Bodies

    NASA Astrophysics Data System (ADS)

    Klapa, Przemyslaw; Mitka, Bartosz; Zygmunt, Mariusz

    2017-12-01

    The capability of obtaining a multimillion-point cloud in a very short time has made Terrestrial Laser Scanning (TLS) a widely used tool in many fields of science and technology. TLS accuracy matches that of traditional land surveying devices (tacheometry, GNSS-RTK), but like any measurement it is burdened with error, which affects the precise identification of objects based on their image in the form of a point cloud. The point coordinates are determined indirectly by measuring angles and calculating the travel time of the electromagnetic wave. Each such component has a measurement error which is translated into the final result. The XYZ coordinates of a measured point are therefore determined with some uncertainty, and the accuracy of these coordinates decreases as the distance to the instrument increases. The paper presents the results of an examination of the geometrical stability of a point cloud obtained by means of a terrestrial laser scanner, and an accuracy evaluation of solids determined using the cloud. A Leica P40 scanner and two different arrangements of measuring points were used in the tests. The first concept involved placing a few balls in the field and then scanning them from various sides at similar distances. The second part of the measurement involved placing balls and scanning them several times from one side but at varying distances from the instrument to the object. Each measurement encompassed a scan of the object with automatic determination of its position and geometry. The desk studies involved a semi-automatic fitting of solids, measurement of their geometrical elements, and comparison of the parameters that determine their geometry and location in space. The differences in the measured geometrical elements of the balls and in the translation vectors of the solids' centres indicate geometrical changes of the point cloud depending on the scanning distance and parameters. The results indicate changes in the geometry of scanned objects depending on the point cloud quality and the distance from the measuring instrument. Varying geometrical dimensions of the same element also suggest that the point cloud does not preserve a stable geometry of the measured objects.
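
    Fitting a sphere to the points scanned on one ball is a linear least-squares problem, which makes it easy to check how the recovered centre and radius drift with point-cloud quality. The sketch below assumes an N x 3 array of points sampled from a single ball; the hemisphere sampling and noise level are illustrative, not the paper's test setup.

    ```python
    # Sketch: linear least-squares sphere fit to TLS points sampled from a ball.
    import numpy as np


    def fit_sphere(points):
        """Return (centre, radius) of the best-fit sphere through an N x 3 point set."""
        A = np.column_stack([2 * points, np.ones(len(points))])
        b = (points ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        centre = sol[:3]
        radius = np.sqrt(sol[3] + centre @ centre)
        return centre, radius


    # Simulate a scanned ball: points on the half visible to the scanner, with range noise.
    rng = np.random.default_rng(0)
    true_centre, true_radius = np.array([10.0, 5.0, 1.5]), 0.1
    phi = rng.uniform(0, np.pi, 4000)
    theta = rng.uniform(0, np.pi, 4000)            # restricts points to one hemisphere
    directions = np.column_stack([np.sin(phi) * np.cos(theta),
                                  np.sin(phi) * np.sin(theta),
                                  np.cos(phi)])
    points = true_centre + true_radius * directions + rng.normal(0, 0.002, (4000, 3))

    centre, radius = fit_sphere(points)
    print("centre error [mm]:", 1000 * np.linalg.norm(centre - true_centre))
    print("radius error [mm]:", 1000 * abs(radius - true_radius))
    ```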

  11. Structure-From-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments that are large enough to contain a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. Images of an urban calibration environment, taken beforehand with an external camera, are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images of both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, and a point cloud of the interior of a Volkswagen test car is created as well. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
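
    As an illustrative sketch of the pose estimation from ground control points taken from such a point cloud (assumed inputs and a generic PnP solver, not necessarily the authors' pipeline):

    ```python
    import numpy as np
    import cv2

    def estimate_camera_pose(gcp_xyz, gcp_uv, K, dist=None):
        """gcp_xyz: (N, 3) control points from the point cloud; gcp_uv: (N, 2) pixel measurements."""
        ok, rvec, tvec = cv2.solvePnP(np.asarray(gcp_xyz, dtype=np.float64),
                                      np.asarray(gcp_uv, dtype=np.float64),
                                      K, dist)
        if not ok:
            raise RuntimeError("PnP failed")
        R, _ = cv2.Rodrigues(rvec)               # rotation point-cloud frame -> camera frame
        camera_centre = -R.T @ tvec              # camera position in the point-cloud frame
        return R, tvec, camera_centre.ravel()
    ```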

  12. TU-CD-304-11: Veritas 2.0: A Cloud-Based Tool to Facilitate Research and Innovation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, P; Patankar, A; Etmektzoglou, A

    Purpose: We introduce Veritas 2.0, a cloud-based, non-clinical research portal, to facilitate the translation of radiotherapy research ideas into new delivery techniques. The ecosystem of research tools includes web apps for a research beam builder for TrueBeam Developer Mode, an image reader for compressed and uncompressed XIM files, and a trajectory-log-file-based QA/beam delivery analyzer. Methods: The research beam builder can generate TrueBeam-readable XML files either from scratch or from pre-existing DICOM-RT plans. A DICOM-RT plan is first converted to XML format, and the researcher can then interactively modify or add control points. The delivered beam can be verified by reading the generated images and analyzing trajectory log files. The image reader can read both uncompressed and HND-compressed XIM images. The trajectory log analyzer lets researchers plot expected vs. actual values and deviations among 30 mechanical axes, and gives an animated view of MLC patterns for the beam delivery. Veritas 2.0 is freely available and its advantages over standalone software are: i) no software installation or maintenance needed, ii) easy accessibility across all devices, iii) seamless upgrades, and iv) OS independence. Veritas is written using open-source tools such as Twitter Bootstrap, jQuery, Flask, and Python-based modules. Results: In the first experiment, an anonymized 7-beam DICOM-RT IMRT plan was converted to an XML beam containing 1400 control points; kV and MV imaging points were inserted into this XML beam. In another experiment, a binary log file was analyzed to compare actual vs. expected values and deviations among axes. Conclusions: Veritas 2.0 is a public cloud-based web app that hosts a pool of research tools facilitating research from conceptualization to verification, and it aims to provide a platform for research and collaboration. I am a full-time employee at Varian Medical Systems, Palo Alto.

  13. The ISPRS Benchmark on Indoor Modelling

    NASA Astrophysics Data System (ADS)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  14. Cloud-point detection using a portable thickness shear mode crystal resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansure, A.J.; Spates, J.J.; Germer, J.W.

    1997-08-01

    The Thickness Shear Mode (TSM) crystal resonator monitors the crude oil by propagating a shear wave into the oil. The coupling of the shear wave and the crystal vibrations is a function of the viscosity of the oil. By driving the crystal with circuitry that incorporates feedback, it is possible to determine the change from Newtonian to non-Newtonian viscosity at the cloud point. A portable prototype TSM Cloud Point Detector (CPD) has performed flawlessly during field and lab tests, proving that the technique is less subjective and operator-dependent than the ASTM standard. The TSM CPD, in contrast to standard viscosity techniques, makes the measurement in a closed container capable of maintaining up to 100 psi. The closed container minimizes losses of low-molecular-weight volatiles, allowing samples (25 ml) to be retested with the addition of chemicals. By cycling/thermal soaking the sample, the effects of thermal history can be investigated and eliminated as a source of confusion. The CPD is portable and suitable for shipping to field offices for use by personnel without special training or experience in cloud point measurements. As such, it can make cloud point data available without the delays and inconvenience of sending samples to special labs. The crystal resonator technology can be adapted to in-line monitoring of cloud point and deposition detection.

  15. Parametric Accuracy: Building Information Modeling Process Applied to the Cultural Heritage Preservation

    NASA Astrophysics Data System (ADS)

    Garagnani, S.; Manferdini, A. M.

    2013-02-01

    Since their introduction, modeling tools aimed at architectural design have evolved into today's "digital multi-purpose drawing boards" based on enhanced parametric elements able to originate whole buildings within virtual environments. Semantic splitting and element topology are features that allow objects to be "intelligent" (i.e. self-aware of what kind of element they are and with whom they can interact), thus representing the basics of Building Information Modeling (BIM), a coordinated, consistent and always up-to-date workflow intended to reach higher quality, reliability and cost reductions throughout the design process. Even if BIM was originally intended for new architectures, its ability to store semantically inter-related information can be successfully applied to existing buildings as well, especially if they deserve particular care, such as Cultural Heritage sites. BIM engines can easily manage simple parametric geometries, collapsing them to standard primitives connected through hierarchical relationships; however, when components are generated from existing morphologies, for example by acquiring point clouds with digital photogrammetry or laser scanning equipment, complex abstractions have to be introduced while remodeling elements by hand, since automatic feature extraction in available software is still not effective. In order to introduce a methodology intended to process point cloud data in a BIM environment with high accuracy, this paper describes some experiences on the documentation of monumental sites, generated through a plug-in written for Autodesk Revit and codenamed GreenSpider after its capability to lay out points in space as if they were nodes of an ideal cobweb.

  16. Photogrammetric Analysis of Historical Image Repositories for Virtual Reconstruction in the Field of Digital Humanities

    NASA Astrophysics Data System (ADS)

    Maiwald, F.; Vietze, T.; Schneider, D.; Henze, F.; Münster, S.; Niebling, F.

    2017-02-01

    Historical photographs contain a high density of information and are of great importance as sources in humanities research. In addition to the semantic indexing of historical images based on metadata, it is also possible to reconstruct geometric information about the depicted objects or the camera position at the time of the recording by employing photogrammetric methods. The approach presented here is intended to investigate (semi-)automated photogrammetric reconstruction methods for heterogeneous collections of historical (city) photographs and photographic documentation for use in the humanities, urban research and historical sciences. From a photogrammetric point of view, these images are mostly digitized photographs. For a photogrammetric evaluation, therefore, the characteristics of scanned analog images with mostly unknown camera geometry, missing or minimal object information and low radiometric and geometric resolution have to be considered. In addition, these photographs were not created specifically for documentation purposes, so the focus of these images is often not on the object to be evaluated. The image repositories must therefore be subjected to a preprocessing analysis of their photogrammetric usability. Investigations are carried out on the basis of a repository containing historical images of the Kronentor ("crown gate") of the Dresden Zwinger. The initial step was to assess the quality and condition of the available images, determining their appropriateness for generating three-dimensional point clouds from historical photos using a structure-from-motion (SfM) evaluation. Then, the generated point clouds were assessed by comparing them with current measurement data of the same object.

  17. Allowing for Horizontally Heterogeneous Clouds and Generalized Overlap in an Atmospheric GCM

    NASA Technical Reports Server (NTRS)

    Lee, D.; Oreopoulos, L.; Suarez, M.

    2011-01-01

    While fully accounting for 3D effects in Global Climate Models (GCMs) does not appear realistic at the present time for a variety of reasons, such as computational cost and the unavailability of 3D cloud structure in the models, the incorporation into radiation schemes of subgrid cloud variability described by one-point statistics is now considered feasible and is being actively pursued. This development gained momentum once it was demonstrated that the CPU-intensive, spectrally explicit Independent Column Approximation (ICA) can be substituted by stochastic Monte Carlo ICA (McICA) calculations, where spectral integration is accomplished in a manner that produces relatively benign random noise. The McICA approach has been implemented in Goddard's GEOS-5 atmospheric GCM as part of the implementation of the RRTMG radiation package. GEOS-5 with McICA and RRTMG can handle horizontally variable clouds, which can be set via a cloud generator to overlap arbitrarily within the full spectrum between maximum and random, both in terms of cloud fraction and layer condensate distributions. In our presentation we will show radiative and other impacts of the combined horizontal and vertical cloud variability on multi-year simulations of an otherwise untuned GEOS-5 with fixed SSTs. Introducing cloud horizontal heterogeneity without changing the mean amounts of condensate reduces reflected solar and increases thermal radiation to space, but disproportionate changes may increase the radiative imbalance at TOA. The net radiation at TOA can be modulated by allowing the parameters of the generalized overlap and heterogeneity scheme to vary, a dependence whose behavior we will discuss. The sensitivity of the cloud radiative forcing to the parameters of cloud horizontal heterogeneity and comparisons with CERES-derived forcing will be shown.
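
    To make the cloud-generator idea concrete, here is a minimal sketch (a generic maximum-random-style stochastic generator under simplified assumptions, not the GEOS-5/RRTMG code) that draws binary cloudy/clear subcolumns from layer cloud fractions with a tunable overlap parameter:

    ```python
    import numpy as np

    def generate_subcolumns(cloud_frac, n_sub=100, alpha=1.0, seed=None):
        """Draw binary cloud subcolumns for one model column.

        cloud_frac : layer cloud fractions ordered top to bottom.
        alpha      : overlap parameter; 1.0 couples adjacent layers (maximum
                     overlap), 0.0 decorrelates them (random overlap).
        """
        cloud_frac = np.asarray(cloud_frac, dtype=float)
        rng = np.random.default_rng(seed)
        n_lay = cloud_frac.size
        x = np.empty((n_sub, n_lay))
        x[:, 0] = rng.random(n_sub)
        for k in range(1, n_lay):
            keep = rng.random(n_sub) < alpha          # inherit rank from layer above
            x[:, k] = np.where(keep, x[:, k - 1], rng.random(n_sub))
        return x < cloud_frac[None, :]                # True where a subcolumn is cloudy
    ```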

  18. Geomorphological activity at a rock glacier front detected with a 3D density-based clustering algorithm

    NASA Astrophysics Data System (ADS)

    Micheletti, Natan; Tonini, Marj; Lane, Stuart N.

    2017-02-01

    Acquisition of high-density point clouds using terrestrial laser scanners (TLSs) has become commonplace in geomorphic science. The derived point clouds are often interpolated onto regular grids and the grids compared to detect change (i.e. erosion and deposition/advancement movements). This procedure is necessary for some applications (e.g. digital terrain analysis), but it inevitably leads to a certain loss of potentially valuable information contained within the point clouds. In the present study, an alternative methodology for geomorphological analysis and feature detection from point clouds is proposed. It rests on the use of Density-Based Spatial Clustering of Applications with Noise (DBSCAN), applied to TLS data for a rock glacier front slope in the Swiss Alps. The proposed method allows movements to be detected and isolated directly from the point clouds, yielding subsequent volume computations whose accuracy depends only on the actual registered distance between points. We demonstrate that these values are more conservative than volumes computed with the traditional DEM comparison. The results are illustrated for the summer of 2015, a season of enhanced geomorphic activity associated with exceptionally high temperatures.
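
    A minimal sketch of the clustering step (assumed inputs, not the authors' code): points already flagged as significantly displaced between two TLS epochs are grouped into spatially coherent change features with DBSCAN, isolated points being treated as noise:

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_changes(diff_points, eps=0.10, min_samples=20):
        """diff_points: (N, 3) points flagged as displaced between two epochs."""
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(diff_points)
        clusters = [diff_points[labels == k] for k in range(labels.max() + 1)]
        noise = diff_points[labels == -1]          # isolated points treated as noise
        return clusters, noise
    ```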

  19. Metric Scale Calculation for Visual Mapping Algorithms

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.

    2018-05-01

    Visual SLAM algorithms allow localizing the camera by mapping its environment into a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, such as a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a-priori known metric extension, which can be identified in the unscaled point cloud. The extensions of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extensions of objects, like traffic signs with a known metric size, are derived using projections of their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It has been shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to other recent works.
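
    The fusion of the individual scale values can be illustrated with a simple inverse-variance weighting (an assumed scheme for illustration, not necessarily the weighting used by the authors):

    ```python
    import numpy as np

    def fuse_scales(scales, sigmas):
        """Fuse scale estimates s_i with standard deviations sigma_i by inverse-variance weighting."""
        scales = np.asarray(scales, dtype=float)
        w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        s_fused = np.sum(w * scales) / np.sum(w)
        sigma_fused = np.sqrt(1.0 / np.sum(w))
        return s_fused, sigma_fused

    # e.g. hypothetical scales from lane width, room height and a detected traffic sign
    s, sig = fuse_scales([0.101, 0.097, 0.104], [0.004, 0.006, 0.003])
    ```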

  20. Integrated system for point cloud reconstruction and simulated brain shift validation using tracked surgical microscope

    NASA Astrophysics Data System (ADS)

    Yang, Xiaochen; Clements, Logan W.; Luo, Ma; Narasimhan, Saramati; Thompson, Reid C.; Dawant, Benoit M.; Miga, Michael I.

    2017-03-01

    Intra-operative soft tissue deformation, referred to as brain shift, compromises the application of current image-guided surgery (IGS) navigation systems in neurosurgery. A computational model driven by sparse data has been used as a cost-effective method to compensate for cortical surface and volumetric displacements. Stereoscopic microscopes and laser range scanners (LRS) are the two most investigated sparse intra-operative imaging modalities for driving these systems. However, integrating these devices into the clinical workflow to facilitate development and evaluation requires systems that easily permit data acquisition and processing. In this work we present a mock environment developed to acquire stereo images from a tracked operating microscope and to reconstruct 3D point clouds from these images. A reconstruction error of 1 mm is estimated by using a phantom with a known geometry and independently measured deformation extent. The microscope is tracked via an attached rigid body that allows the position of the microscope to be recorded by a commercial optical tracking system as it moves during the procedure. Point clouds, reconstructed under different microscope positions, are registered into the same space in order to compute the feature displacements. Using our mock craniotomy device, realistic cortical deformations are generated. Our experimental results report approximately 2 mm average displacement error compared with the optical tracking system. These results demonstrate the practicality of using a tracked stereoscopic microscope as an alternative to LRS to collect sufficient intraoperative information for brain shift correction.

  1. Accuracy assessment of modeling architectural structures and details using terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Orych, A.; Czarnecka, P.

    2015-08-01

    One of the most important aspects when performing architectural documentation of cultural heritage structures is the accuracy of both the data and the products which are generated from these data: documentation in the form of 3D models or vector drawings. The paper describes an assessment of the accuracy of modelling data acquired using a terrestrial phase scanner in relation to the density of a point cloud representing the surface of different types of construction materials typical for cultural heritage structures. This analysis includes the impact of the scanning geometry: the incidence angle of the laser beam and the scanning distance. For the purposes of this research, a test field consisting of samples of different types of construction materials (brick, wood, plastic, plaster, a ceramic tile, sheet metal) was built. The study involved conducting measurements at different angles and from a range of distances for chosen scanning densities. Data, acquired in the form of point clouds, were then filtered and modelled. An accuracy assessment of the 3D model was conducted by fitting it to the point cloud. The reflection intensity of each type of material was also analyzed, to determine which construction materials have the highest and which the lowest reflectance coefficients, and in turn how this variable changes for different scanning parameters. Additionally, measurements were taken of a fragment of a building in order to compare the results obtained in laboratory conditions with those taken in field conditions.

  2. Unusual July 10, 1996, rock fall at Happy Isles, Yosemite National Park, California

    USGS Publications Warehouse

    Wieczorek, G.F.; Snyder, J.B.; Waitt, R.B.; Morrissey, M.M.; Uhrhammer, R.A.; Harp, E.L.; Norris, R.D.; Bursik, M.I.; Finewood, L.G.

    2000-01-01

    Effects of the July 10, 1996, rock fall at Happy Isles in Yosemite National Park, California, were unusual compared to most rock falls. Two main rock masses fell about 14 s apart from a 665-m-high cliff southeast of Glacier Point onto a talus slope above Happy Isles in the eastern part of Yosemite Valley. The two impacts were recorded by seismographs as much as 200 km away. Although the impact area of the rock falls was not particularly large, the falls generated an airblast and an abrasive dense sandy cloud that devastated a larger area downslope of the impact sites toward the Happy Isles Nature Center. Immediately downslope of the impacts, the airblast had velocities exceeding 110 m/s and toppled or snapped about 1000 trees. Even at distances of 0.5 km from impact, wind velocities snapped or toppled large trees, causing one fatality and several serious injuries beyond the Happy Isles Nature Center. A dense sandy cloud trailed the airblast and abraded fallen trunks and trees left standing. The Happy Isles rock fall is one of the few known worldwide to have generated an airblast and abrasive dense sandy cloud. The relatively high velocity of the rock fall at impact, estimated to be 110-120 m/s, influenced the severity and areal extent of the airblast at Happy Isles. Specific geologic and topographic conditions, typical of steep glaciated valleys and mountainous terrain, contributed to the rock-fall release and determined its travel path, resulting in a high velocity at impact that generated the devastating airblast and sandy cloud. The unusual effects of this rock fall emphasize the importance of considering collateral geologic hazards, such as airblasts from rock falls, in hazard assessment and planning development of mountainous areas.

  3. Uncertainty assessment in geodetic network adjustment by combining GUM and Monte-Carlo-simulations

    NASA Astrophysics Data System (ADS)

    Niemeier, Wolfgang; Tengen, Dieter

    2017-06-01

    In this article, first ideas are presented to extend the classical concept of geodetic network adjustment by introducing a new method for uncertainty assessment as a two-step analysis. In the first step, the raw data and possible influencing factors are analyzed using uncertainty modeling according to GUM (Guide to the Expression of Uncertainty in Measurement). This approach is well established in metrology, but rarely adopted within geodesy. The second step consists of Monte-Carlo simulations (MC simulations) for the complete processing chain from raw input data and pre-processing to adjustment computations and quality assessment. To perform these simulations, possible realizations of the raw data and the influencing factors are generated, using probability distributions for all variables and the established concept of pseudo-random number generators. The final result is a point cloud which represents the uncertainty of the estimated coordinates; a confidence region can be assigned to this point cloud as well. This concept may replace the common concept of variance propagation and the quality assessment of adjustment parameters by means of their covariance matrix. It allows a new way of uncertainty assessment in accordance with the GUM concept for uncertainty modelling and propagation. As a practical example, the local tie network at the "Metsähovi Fundamental Station", Finland, is used, where classical geodetic observations are combined with GNSS data.
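
    A minimal sketch of the second (Monte-Carlo) step, assuming a user-supplied adjust() function that encapsulates the whole processing chain from raw observations to estimated coordinates:

    ```python
    import numpy as np

    def monte_carlo_adjustment(adjust, obs_mean, obs_sigma, n_runs=10000, seed=0):
        """adjust: callable mapping an observation vector to estimated coordinates.

        obs_mean, obs_sigma: per-observation means and standard uncertainties
        from the GUM-style uncertainty budget (assumed Gaussian here).
        """
        rng = np.random.default_rng(seed)
        coords = np.array([adjust(rng.normal(obs_mean, obs_sigma))
                           for _ in range(n_runs)])
        # the coordinate point cloud itself, plus its mean and empirical covariance
        return coords.mean(axis=0), np.cov(coords, rowvar=False), coords
    ```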

  4. Point Cloud Generation from sUAS-Mounted iPhone Imagery: Performance Analysis

    NASA Astrophysics Data System (ADS)

    Ladai, A. D.; Miller, J.

    2014-11-01

    The rapidly growing use of sUAS technology and fast sensor developments continuously inspire mapping professionals to experiment with low-cost airborne systems. Smartphones have all the sensors used in modern airborne surveying systems, including GPS, IMU, camera, etc. Of course, the performance level of these sensors differs by orders of magnitude, yet it is intriguing to assess the potential of using inexpensive sensors installed on sUAS platforms for topographic applications. This paper focuses on the quality analysis of point clouds generated from overlapping images acquired by an iPhone 5s mounted on an sUAS platform. To support the investigation, test data were acquired over an area with complex topography and varying vegetation. In addition, extensive ground control, including GCPs and transects, was collected with GPS and traditional geodetic surveying methods. The statistical and visual analysis is based on a comparison of the UAS data and the reference dataset. The results of the evaluation provide a realistic measure of the data acquisition system's performance. The paper also gives recommendations for a data processing workflow to achieve the best quality of the final products: the digital terrain model and orthophoto mosaic. After a successful data collection, the main question is always the reliability and accuracy of the georeferenced data.

  5. Anatomical evaluation and stress distribution of intact canine femur.

    PubMed

    Verim, Ozgur; Tasgetiren, Suleyman; Er, Mehmet S; Ozdemir, Vural; Yuran, Ahmet F

    2013-03-01

    In the biomedical field, three-dimensional (3D) modeling and analysis of bones and tissues has steadily gained in importance. The aim of this study was to produce more accurate 3D models of the canine femur derived from computed tomography (CT) data by using several modeling software programs and two different methods. The accuracy of the analysis depends on the modeling process and the right boundary conditions. Solidworks, Rapidform, Inventor, and 3DsMax software programs were used to create 3D models. Data derived from CT were converted into 3D models using two different methods: in the first, 3D models were generated using boundary lines, while in the second, 3D models were generated using point clouds. Stress analyses in the models were made by ANSYS v12, also considering any muscle forces acting on the canine femur. When stress values and statistical values were taken into consideration, more accurate models were obtained with the point cloud method. It was found that the maximum von Mises stress on the canine femur shaft was 34.8 MPa. Stress and accuracy values were obtained from the model formed using the Rapidform software. The values obtained were similar to those in other studies in the literature. Copyright © 2012 John Wiley & Sons, Ltd.

  6. GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    PubMed

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2018-01-01

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
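
    As a hedged illustration of the underlying optimization (a simple CPU-bound (mu + lambda) evolution strategy over a 6-parameter rigid transform, not the authors' GPU implementation), the objective below rewards nearby point pairs that carry the same label:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.spatial.transform import Rotation

    def fitness(params, P, labels_P, tree_Q, labels_Q, sigma=1.0):
        """Score a rigid transform (3 rotation-vector + 3 translation parameters)."""
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        moved = P @ R.T + params[3:]
        d, idx = tree_Q.query(moved)                      # nearest neighbour in Q
        same = labels_P == labels_Q[idx]
        return np.sum(same * np.exp(-(d / sigma) ** 2))   # reward close, same-label pairs

    def evolution_strategy(P, labels_P, Q, labels_Q,
                           mu=10, lam=60, gens=200, step=0.3, seed=0):
        rng = np.random.default_rng(seed)
        tree_Q = cKDTree(Q)
        pop = rng.normal(0.0, 1.0, size=(mu, 6))
        for _ in range(gens):
            parents = pop[rng.integers(0, mu, lam)]
            children = parents + rng.normal(0.0, step, size=(lam, 6))
            candidates = np.vstack([pop, children])
            scores = np.array([fitness(x, P, labels_P, tree_Q, labels_Q)
                               for x in candidates])
            pop = candidates[np.argsort(scores)[-mu:]]    # (mu + lambda) selection
        return pop[-1]                                    # best transform parameters found
    ```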

  7. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations

    PubMed Central

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-01-01

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10–0.20 m, and vertical accuracy was approximately 0.01–0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed. PMID:28398256
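
    As a simplified illustration of mapping a point-cloud point into a panoramic image (an assumed equirectangular model and camera-frame convention, not necessarily the exact collinearity formulation of the paper):

    ```python
    import numpy as np

    def project_to_panorama(p_cam, width, height):
        """p_cam: 3D point already expressed in the panoramic-camera frame."""
        x, y, z = p_cam
        r = np.linalg.norm(p_cam)
        lon = np.arctan2(x, z)                  # azimuth in [-pi, pi)
        lat = np.arcsin(y / r)                  # elevation in [-pi/2, pi/2]
        u = (lon / (2 * np.pi) + 0.5) * width   # column in the equirectangular image
        v = (0.5 - lat / np.pi) * height        # row in the equirectangular image
        return u, v
    ```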

  8. Observational evidence for the aerosol impact on ice cloud properties regulated by cloud/aerosol types

    NASA Astrophysics Data System (ADS)

    Zhao, B.; Gu, Y.; Liou, K. N.; Jiang, J. H.; Li, Q.; Liu, X.; Huang, L.; Wang, Y.; Su, H.

    2016-12-01

    The interactions between aerosols and ice clouds (consisting only of ice) represent one of the largest uncertainties in global radiative forcing from pre-industrial time to the present. The observational evidence for the aerosol impact on ice cloud properties has been quite limited and showed conflicting results, partly because previous observational studies did not consider the distinct features of different ice cloud and aerosol types. Using 9-year satellite observations, we find that, for ice clouds generated from deep convection, cloud thickness, cloud optical thickness (COT), and ice cloud fraction increase and decrease with small-to-moderate and high aerosol loadings, respectively. For in-situ formed ice clouds, however, the preceding cloud properties increase monotonically and more sharply with aerosol loadings. The case is more complicated for ice crystal effective radius (Rei). For both convection-generated and in-situ ice clouds, the responses of Rei to aerosol loadings are modulated by water vapor amount in conjunction with several other meteorological parameters, but the sensitivities of Rei to aerosols under the same water vapor amount differ remarkably between the two ice cloud types. As a result, overall Rei slightly increases with aerosol loading for convection-generated ice clouds, but decreases for in-situ ice clouds. When aerosols are decomposed into different types, an increase in the loading of smoke aerosols generally leads to a decrease in COT of convection-generated ice clouds, while the reverse is true for dust and anthropogenic pollution. In contrast, an increase in the loading of any aerosol type can significantly enhance COT of in-situ ice clouds. The modulation of the aerosol impacts by cloud/aerosol types is demonstrated and reproduced by simulations using the Weather Research and Forecasting (WRF) model. Adequate and accurate representations of the impact of different cloud/aerosol types in climate models are crucial for reducing the substantial uncertainty in assessment of the aerosol-ice cloud radiative forcing.

  9. Observational evidence for the aerosol impact on ice cloud properties regulated by cloud/aerosol types

    NASA Astrophysics Data System (ADS)

    Zhao, B.; Gu, Y.; Liou, K. N.; Jiang, J. H.; Li, Q.; Liu, X.; Huang, L.; Wang, Y.; Su, H.

    2017-12-01

    The interactions between aerosols and ice clouds (consisting only of ice) represent one of the largest uncertainties in global radiative forcing from pre-industrial time to the present. The observational evidence for the aerosol impact on ice cloud properties has been quite limited and showed conflicting results, partly because previous observational studies did not consider the distinct features of different ice cloud and aerosol types. Using 9-year satellite observations, we find that, for ice clouds generated from deep convection, cloud thickness, cloud optical thickness (COT), and ice cloud fraction increase and decrease with small-to-moderate and high aerosol loadings, respectively. For in-situ formed ice clouds, however, the preceding cloud properties increase monotonically and more sharply with aerosol loadings. The case is more complicated for ice crystal effective radius (Rei). For both convection-generated and in-situ ice clouds, the responses of Rei to aerosol loadings are modulated by water vapor amount in conjunction with several other meteorological parameters, but the sensitivities of Rei to aerosols under the same water vapor amount differ remarkably between the two ice cloud types. As a result, overall Rei slightly increases with aerosol loading for convection-generated ice clouds, but decreases for in-situ ice clouds. When aerosols are decomposed into different types, an increase in the loading of smoke aerosols generally leads to a decrease in COT of convection-generated ice clouds, while the reverse is true for dust and anthropogenic pollution. In contrast, an increase in the loading of any aerosol type can significantly enhance COT of in-situ ice clouds. The modulation of the aerosol impacts by cloud/aerosol types is demonstrated and reproduced by simulations using the Weather Research and Forecasting (WRF) model. Adequate and accurate representations of the impact of different cloud/aerosol types in climate models are crucial for reducing the substantial uncertainty in assessment of the aerosol-ice cloud radiative forcing.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Shawn

    This code consists of Matlab routines which enable the user to perform non-manifold surface reconstruction via triangulation from high-dimensional point cloud data. The code is based on an algorithm originally developed in [Freedman (2007), An Incremental Algorithm for Reconstruction of Surfaces of Arbitrary Codimension, Computational Geometry: Theory and Applications, 36(2):106-116]. This algorithm has been modified to accommodate non-manifold surfaces according to the work described in [S. Martin and J.-P. Watson (2009), Non-Manifold Surface Reconstruction from High Dimensional Point Cloud Data, SAND #5272610]. The motivation for developing the code was a point cloud describing the molecular conformation space of cyclooctane (C8H16). Cyclooctane conformation space was represented using points in 72 dimensions (3 coordinates for each atom of the molecule). The code was used to triangulate the point cloud and thereby study the geometry and topology of cyclooctane. Future applications are envisioned for peptides and proteins.

  11. Classification of Mobile Laser Scanning Point Clouds from Height Features

    NASA Astrophysics Data System (ADS)

    Zheng, M.; Lemmens, M.; van Oosterom, P.

    2017-09-01

    The demand for 3D maps of cities and road networks is steadily growing, and mobile laser scanning (MLS) systems are often the preferred geo-data acquisition method for capturing such scenes. Because MLS systems are mounted on cars or vans, they can acquire billions of points of road scenes within a few hours of survey. Manual processing of point clouds is labour intensive and thus time consuming and expensive. Hence, the need for rapid and automated methods for 3D mapping of dense point clouds is growing exponentially. Over the last five years, research on automated 3D mapping of MLS data has intensified tremendously. In this paper, we present our work on automated classification of MLS point clouds. In the present stage of the research we exploited three features - two height components and one reflectance value - and achieved an overall accuracy of 73 %, which is encouraging for further refining our approach.
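
    A minimal sketch of such a three-feature classification (assumed feature construction and a generic random forest, not necessarily the authors' classifier):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def classify_points(points_xyz, reflectance, ground_z, labels_train, idx_train):
        """points_xyz: (N, 3); ground_z: per-point local ground-height estimate."""
        height_above_ground = points_xyz[:, 2] - ground_z
        features = np.column_stack([points_xyz[:, 2],        # absolute height
                                    height_above_ground,     # height above local ground
                                    reflectance])            # reflectance value
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(features[idx_train], labels_train)           # train on labelled subset
        return clf.predict(features)                         # class label per point
    ```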

  12. Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation

    NASA Astrophysics Data System (ADS)

    An, Lu; Guo, Baolong

    2018-03-01

    Recently, illegal constructions have been appearing frequently in our surroundings, seriously restricting the orderly development of urban modernization. 3D point cloud data technology can be used to identify illegal buildings and thus address this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. Initially, in order to save memory space and reduce processing time, a lossless point cloud compression method based on a minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground are obtained using a region growing method, and as a result the illegal constructions can be marked. The effectiveness of the proposed algorithm is verified using a public dataset provided by the International Society for Photogrammetry and Remote Sensing (ISPRS).
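
    A simplified sketch of the region-growing step (assumed thresholds, not the paper's exact implementation): after ground removal, clusters of off-ground points are grown by repeatedly adding neighbours within a distance threshold, so that each building ideally becomes one cluster:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def region_grow(points, radius=0.5, min_size=100):
        """points: (N, 3) off-ground points; returns index arrays, one per cluster."""
        tree = cKDTree(points)
        unvisited = set(range(len(points)))
        clusters = []
        while unvisited:
            seed = unvisited.pop()
            cluster, frontier = [seed], [seed]
            while frontier:
                neighbours = tree.query_ball_point(points[frontier.pop()], r=radius)
                fresh = [n for n in neighbours if n in unvisited]
                unvisited.difference_update(fresh)
                cluster.extend(fresh)
                frontier.extend(fresh)
            if len(cluster) >= min_size:                # drop tiny clusters as clutter
                clusters.append(np.array(cluster))
        return clusters
    ```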

  13. The Registration and Segmentation of Heterogeneous Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Al-Durgham, Mohannad M.

    Light Detection And Ranging (LiDAR) mapping has been emerging over the past few years as a mainstream tool for the dense acquisition of three-dimensional point data. Besides conventional mapping missions, LiDAR systems have proven to be very useful for a wide spectrum of applications such as forestry, structural deformation analysis, urban mapping, and reverse engineering. The wide application scope of LiDAR led to the development of many laser scanning technologies that are mountable on multiple platforms (i.e., airborne, mobile terrestrial, and tripod mounted), which caused variations in the characteristics and quality of the generated point clouds. As a result of the increased popularity and diversity of laser scanners, the post-processing (i.e., registration and segmentation) of heterogeneous LiDAR data should be addressed adequately. Current LiDAR integration techniques do not take into account the varying nature of laser scans originating from various platforms. In this dissertation, the author proposes a methodology designed particularly for the registration and segmentation of heterogeneous LiDAR data. A data characterization and filtering step is proposed to populate the points' attributes and remove non-planar LiDAR points. Then, a modified version of the Iterative Closest Point (ICP), denoted the Iterative Closest Projected Point (ICPP), is designed for the registration of heterogeneous scans to remove any misalignments between overlapping strips. Next, a region-growing-based heterogeneous segmentation algorithm is developed to ensure the proper extraction of planar segments from the point clouds. Validation experiments show that the proposed heterogeneous registration can successfully align airborne and terrestrial datasets despite the great differences in their point density and noise level. In addition, similar tests have been conducted to examine the heterogeneous segmentation, and it is shown that one is able to identify common planar features in airborne and terrestrial data without resampling or manipulating the data in any way. The work presented in this dissertation provides a framework for the registration and segmentation of airborne and terrestrial laser scans, which has a positive impact on the completeness of the scanned features. Therefore, the products derived from these point clouds have higher accuracy, as shown in the full manuscript.

  14. Effect of electromagnetic field on Kordylewski clouds formation

    NASA Astrophysics Data System (ADS)

    Salnikova, Tatiana; Stepanov, Sergey

    2018-05-01

    In previous papers the authors suggested a clarification of the phenomenon of appearance and disappearance of the Kordylewski clouds - accumulations of cosmic dust in the vicinity of the triangular libration points of the Earth-Moon system. Under the gravitational and light perturbations of the Sun, the triangular libration points are not points of relative equilibrium. However, there exist stable periodic motions of particles surrounding each of the triangular libration points. Due to this fact we can consider a probabilistic model of dust cloud formation. These clouds move along periodic orbits in a small vicinity of the libration points. To continue this research we suggest a mathematical model that also accounts for electromagnetic influences, which arise when charged dust particles are considered in the vicinity of the triangular libration points of the Earth-Moon system. In this model we take into account the self-induced force field within the set of charged particles; the probability distribution density evolves according to the Vlasov equation.

  15. Observations of the boundary layer, cloud, and aerosol variability in the southeast Pacific coastal marine stratocumulus during VOCALS-REx

    NASA Astrophysics Data System (ADS)

    Zheng, X.; Albrecht, B.; Jonsson, H. H.; Khelif, D.; Feingold, G.; Minnis, P.; Ayers, K.; Chuang, P.; Donaher, S.; Rossiter, D.; Ghate, V.; Ruiz-Plancarte, J.; Sun-Mack, S.

    2011-05-01

    Aircraft observations made off the coast of northern Chile in the Southeastern Pacific (20° S, 72° W; named Point Alpha) from 16 October to 13 November 2008 during the VAMOS Ocean-Cloud-Atmosphere-Land Study-Regional Experiment (VOCALS-REx), combined with meteorological reanalysis, satellite measurements, and radiosonde data, are used to investigate the boundary layer (BL) and aerosol-cloud-drizzle variations in this region. The BL at Point Alpha was typical of a non-drizzling stratocumulus-topped BL on days without predominant synoptic and meso-scale influences. The BL had a depth of 1140 ± 120 m, was well-mixed and capped by a sharp inversion. The wind direction generally switched from southerly within the BL to northerly above the inversion. The cloud liquid water path (LWP) varied between 15 g m^-2 and 160 g m^-2. From 29 October to 4 November, when a synoptic system affected conditions at Point Alpha, the cloud LWP was higher than on the other days by around 40 g m^-2. On 1 and 2 November, a moist layer above the inversion moved over Point Alpha. The total-water specific humidity above the inversion was larger than that within the BL during these days. Entrainment rates (average of 1.5 ± 0.6 mm s^-1) calculated from the near cloud-top fluxes and turbulence (vertical velocity variance) in the BL at Point Alpha appeared to be weaker than those in the BL over the open ocean west of Point Alpha and the BL near the coast of the northeast Pacific. The accumulation mode aerosol varied from 250 to 700 cm^-3 within the BL, and CCN at 0.2 % supersaturation within the BL ranged between 150 and 550 cm^-3. The main aerosol source at Point Alpha was horizontal advection within the BL from the south. The average cloud droplet number concentration ranged between 80 and 400 cm^-3, which was consistent with the satellite-derived values. The relationship between cloud droplet number concentration and CCN at 0.2 % supersaturation from 18 flights is Nd = 4.6 × CCN^0.71. While the mean LWP retrieved from GOES was in good agreement with the in situ measurements, the GOES-derived cloud droplet effective radius tended to be larger than that from the aircraft in situ observations near cloud top. The aerosol and cloud LWP relationship reveals that during the typical well-mixed BL days the cloud LWP increased with the CCN concentrations. On the other hand, meteorological factors and the decoupling processes have large influences on the cloud LWP variation as well.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warner-Schmid, D.; Hoshi, Suwaru; Armstrong, D.W.

    Aqueous solutions of nonionic surfactants are known to undergo phase separations at elevated temperatures. This phenomenon is known as 'clouding,' and the temperature at which it occurs is referred to as the cloud point. Permethylhydroxypropyl-[beta]-cyclodextrin (PMHP-[beta]-CD) was synthesized, and aqueous solutions containing it were found to undergo similar cloud-point behavior. Factors that affect the phase separation of PMHP-[beta]-CD were investigated. Subsequently, the cloud-point extractions of several aromatic compounds (i.e., acetanilide, aniline, 2,2[prime]-dihydroxybiphenyl, N-methylaniline, 2-naphthol, o-nitroaniline, m-nitroaniline, p-nitroaniline, nitrobenzene, o-nitrophenol, m-nitrophenol, p-nitrophenol, 4-phenazophenol, 3-phenylphenol, and 2-phenylbenzimidazole) from dilute aqueous solution were evaluated. Although the extraction efficiency of the compounds varied, most can be quantitatively extracted if sufficient PMHP-[beta]-CD is used. For those few compounds that are not extracted (e.g., o-nitroacetanilide), the cloud-point procedure may be an effective one-step isolation or purification method. 18 refs., 2 figs., 3 tabs.

  17. Strong Deformation of the Thick Electric Double Layer around a Charged Particle during Sedimentation or Electrophoresis.

    PubMed

    Khair, Aditya S

    2018-01-23

    The deformation of the electric double layer around a charged colloidal particle during sedimentation or electrophoresis in a binary, symmetric electrolyte is studied. The surface potential of the particle is assumed to be small compared to the thermal voltage scale. Additionally, the Debye length is assumed to be large compared to the particle size. These assumptions enable a linearization of the electrokinetic equations. The particle appears as a point charge in this thick-double-layer limit; the distribution of charge in the diffuse cloud surrounding it is determined by a balance of advection due to the particle motion, Brownian diffusion of ions, and electrostatic screening of the particle by the cloud. The ability of advection to deform the charge cloud from its equilibrium state is parametrized by a Péclet number, Pe. For weak advection (Pe ≪ 1), the cloud is only slightly deformed. In contrast, the cloud can be completely stripped from the particle at Pe ≫ 1; consequently, electrokinetic effects on the particle motion vanish in this regime. Therefore, in sedimentation the drag limits to Stokes' law for an uncharged particle as Pe → ∞. Likewise, the particle velocity for electrophoresis approaches Huckel's result. The strongly deformed cloud at large Pe is predicted to generate a concomitant increase in the sedimentation field in a dilute settling suspension.

  18. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    PubMed Central

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-01-01

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, which has the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. Either targets or virtual points corresponding to some reconstructable feature in the scene are used as feature points. The new method is demonstrated on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis. PMID:28029121

  19. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.

    PubMed

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-12-24

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, which has the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. Either targets or virtual points corresponding to some reconstructable feature in the scene are used as feature points. The new method is demonstrated on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.
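
    A minimal sketch of the baseline comparison (an assumed data layout, not the published implementation): the lengths of lines connecting the same pair of feature points are compared between two epochs, so no registration into a common coordinate system is needed:

    ```python
    import numpy as np

    def baseline_changes(points_epoch1, points_epoch2, pairs):
        """points_epoch*: dict mapping feature-point id -> (x, y, z); pairs: (id_a, id_b) tuples."""
        changes = {}
        for a, b in pairs:
            l1 = np.linalg.norm(np.subtract(points_epoch1[a], points_epoch1[b]))
            l2 = np.linalg.norm(np.subtract(points_epoch2[a], points_epoch2[b]))
            changes[(a, b)] = l2 - l1            # positive = baseline lengthened
        return changes
    ```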

  20. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in local places where the point cloud is not sufficiently dense. These limits can be overcome by deformation analyses that directly exploit the original 3D point clouds, under some hypotheses on the deformation (e.g. the classic ICP algorithm requires the user to provide an initial guess of the expected displacement pattern). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good first-order estimation of the displacement fields, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point cloud comparison algorithms (C2C, ICP, M3C2) bring additional information on the displacement fields. Displacement fields derived from both approaches are then combined and provide a better understanding of the landslide kinematics.

  1. A Threshold-Free Filtering Algorithm for Airborne LiDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them suffer from parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed on the assumption that the point cloud can be seen as a mixture of Gaussian models. The separation of ground points and non-ground points from the point cloud can then be cast as the separation of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize this separation: EM is used to calculate maximum likelihood estimates of the mixture parameters, and, using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point can be labelled as belonging to the component with the larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired with the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48 % total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
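
    A hedged sketch of the core idea (a generic two-component Gaussian mixture fitted with EM, not the authors' implementation): per-point height, optionally together with intensity, is modelled as a mixture and each point is labelled by the more likely component:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def em_ground_filter(heights, intensity=None):
        """heights: (N,) per-point elevations; intensity: optional (N,) LiDAR intensity."""
        features = heights.reshape(-1, 1) if intensity is None \
            else np.column_stack([heights, intensity])
        gmm = GaussianMixture(n_components=2, covariance_type='full',
                              random_state=0).fit(features)       # EM fit
        labels = gmm.predict(features)                             # most likely component
        ground_label = np.argmin(gmm.means_[:, 0])                 # lower-mean-height component
        return labels == ground_label                              # True = ground point
    ```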

  2. Street curb recognition in 3D point cloud data using morphological operations

    NASA Astrophysics Data System (ADS)

    Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino

    2015-04-01

    Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible. The aim is to develop algorithms that can obtain accurate results with the least possible human intervention in the process. Non-manual curb detection is an important issue in the road maintenance, 3D urban modeling, and autonomous navigation fields. This paper is focused on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. This work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, and in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud onto the XY plane is carried out, passing from the original 3D data to a 2D image. To determine the location of curbs in the image, different image processing techniques such as thresholding and morphological operations are applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm applied to the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and is applicable both to laser scanner and stereo vision 3D data due to its independence of the scanning geometry. This method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. That point cloud comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. That point cloud comprises 8,000,000 points and represents a 160-meter street. The proposed method provides success rates in curb recognition of over 85% in both datasets.
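
    An illustrative sketch of the rasterization and morphology steps (assumed cell size and curb-height thresholds, not the paper's exact parameters): project the point cloud onto the XY plane, build a maximum-height image, and highlight curb-like height steps with a morphological gradient:

    ```python
    import numpy as np
    from scipy import ndimage

    def curb_candidate_mask(points, cell=0.1, step_min=0.05, step_max=0.3):
        """points: (N, 3) local-frame point cloud; returns a 2D mask of curb-like cells."""
        xy = ((points[:, :2] - points[:, :2].min(axis=0)) / cell).astype(int)
        shape = xy.max(axis=0) + 1
        zmax = np.full(shape, -np.inf)
        np.maximum.at(zmax, (xy[:, 0], xy[:, 1]), points[:, 2])   # max height per cell
        valid = np.isfinite(zmax)
        zmax[~valid] = zmax[valid].min()                          # fill empty cells
        grad = ndimage.grey_dilation(zmax, size=(3, 3)) - \
               ndimage.grey_erosion(zmax, size=(3, 3))            # morphological gradient
        return (grad > step_min) & (grad < step_max) & valid      # curb-sized steps only
    ```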

  3. Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation

    NASA Astrophysics Data System (ADS)

    Zuo, C.; Xiao, X.; Hou, Q.; Li, B.

    2018-05-01

    WorldView-3, a high-resolution commercial Earth observation satellite launched by DigitalGlobe, provides panchromatic imagery at 0.31 m resolution. Its positioning accuracy is less than 3.5 m CE90 without ground control, which makes it suitable for large-scale topographic mapping. This paper presents a block adjustment of WorldView-3 imagery based on the RPC model and achieves the accuracy required for 1:2000 scale topographic mapping with few control points. Based on the stereo orientation results, two image matching algorithms were applied for DSM extraction: LQM and SGM. Finally, the accuracy of the point clouds generated by the two image matching methods was compared against reference data acquired by an airborne laser scanner. The results show that the RPC adjustment model for WorldView-3 imagery with a small number of GCPs can satisfy the requirements of the Chinese surveying and mapping regulations for 1:2000 scale topographic maps, and that the point cloud obtained through WorldView-3 stereo image matching has high elevation accuracy: the RMS elevation error for bare ground areas is 0.45 m, while for buildings the accuracy is close to 1 m.
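
    For readers unfamiliar with SGM, a hedged stand-in for the dense-matching step is sketched below using OpenCV's semi-global block matcher on an epipolar-rectified pair; the file names, matcher parameters and the simple disparity-to-height relation are assumptions, not the paper's settings.

    ```python
    # Dense disparity from a rectified stereo pair with semi-global matching (OpenCV).
    import cv2

    left = cv2.imread("left_rectified.tif", cv2.IMREAD_GRAYSCALE)    # placeholder paths
    right = cv2.imread("right_rectified.tif", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,      # must be divisible by 16
        blockSize=5,
        P1=8 * 5 * 5,            # smoothness penalty for small disparity changes
        P2=32 * 5 * 5,           # smoothness penalty for large disparity changes
        uniquenessRatio=10,
    )

    # compute() returns disparities scaled by 16 as 16-bit integers.
    disparity = matcher.compute(left, right).astype("float32") / 16.0

    # Under the simple normal case, disparity (pixels) converts to a height difference:
    # dh = disparity * GSD / (B / H)   with base-to-height ratio B/H and ground sampling distance GSD.
    ```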

  4. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    NASA Astrophysics Data System (ADS)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Real-time holographic video display poses a great challenge for computer-generated holography (CGH) because of the high space-bandwidth product (SBP) it requires. This paper is based on the point-cloud method and takes advantage of the reversibility of Fresnel diffraction along the propagation direction and of the spatial symmetry of the fringe pattern of a point source, known as the Gabor zone plate, which can be used as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) approach is proposed: first, the principal fringe patterns (PFPs) at a virtual plane are pre-calculated with the acceleration algorithm and stored; second, the Fresnel diffraction fringe pattern at the dummy plane is obtained; finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on a Liquid Crystal on Silicon (LCOS) device demonstrate the validity of the proposed method: while preserving the quality of the 3D reconstruction, it shortens the computation time and improves computational efficiency.
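
    As a point of reference, a brute-force point-cloud Fresnel hologram (without the N-LUT acceleration or the symmetry exploitation described above) can be written in a few lines; wavelength, pixel pitch and the object points below are assumed example values.

    ```python
    # Brute-force Fresnel (paraxial) point-cloud hologram: each point adds a Gabor-zone-plate-like term.
    import numpy as np

    wavelength = 532e-9          # m
    pitch = 8e-6                 # hologram pixel pitch, m
    N = 512                      # hologram resolution (N x N)
    x = (np.arange(N) - N / 2) * pitch
    X, Y = np.meshgrid(x, x)

    # Object points: (x, y, z, amplitude), z measured from the hologram plane.
    points = [(0.0, 0.0, 0.10, 1.0), (1e-3, -0.5e-3, 0.12, 0.8)]

    k = 2 * np.pi / wavelength
    field = np.zeros((N, N), dtype=complex)
    for px, py, pz, amp in points:
        r2 = (X - px) ** 2 + (Y - py) ** 2
        field += amp * np.exp(1j * k * (pz + r2 / (2 * pz)))   # paraxial spherical wave

    hologram = np.angle(field)   # phase-only CGH, e.g. for an LCOS modulator
    ```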

  5. 3D Reconstruction of Static Human Body with a Digital Camera

    NASA Astrophysics Data System (ADS)

    Remondino, Fabio

    2003-01-01

    The 3D reconstruction and modeling of real humans is nowadays one of the most challenging problems in the field and a topic of great interest. Human models are used for movies, video games or ergonomics applications, and they are usually created with 3D scanner devices. In this paper a new method to reconstruct the shape of a static human body is presented. Our approach is based on photogrammetric techniques and uses a sequence of images acquired around a standing person with a digital still video camera or a camcorder. First the images are calibrated and oriented using a bundle adjustment. After the establishment of a stable adjusted image block, an image matching process is performed between consecutive triplets of images. Finally the 3D coordinates of the matched points are computed by forward ray intersection with a mean accuracy of ca. 2 mm. The obtained point cloud can then be triangulated to generate a surface model of the body, or a virtual human model can be fitted to the recovered 3D data. Results of the 3D human point cloud with pixel color information are presented.
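
    The forward ray intersection step is a standard operation; a hedged sketch using OpenCV is shown below, assuming two already-oriented cameras described by 3x4 projection matrices (the paper uses its own bundle-adjusted block and intersection code).

    ```python
    # Triangulating matched image points from two oriented cameras.
    import numpy as np
    import cv2

    K = np.array([[1000.0, 0.0, 320.0],      # assumed intrinsics
                  [0.0, 1000.0, 240.0],
                  [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera 1 at the origin
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])   # 10 cm baseline

    pts1 = np.array([[340.0], [240.0]])   # 2xN matched point(s) in image 1
    pts2 = np.array([[320.0], [240.0]])   # 2xN matched point(s) in image 2

    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4xN homogeneous coordinates
    X = (X_h[:3] / X_h[3]).T                          # Nx3 object coordinates
    print(X)   # depth ~ f * B / disparity = 1000 * 0.1 / 20 = 5 m
    ```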

  6. 3D reconstruction from non-uniform point clouds via local hierarchical clustering

    NASA Astrophysics Data System (ADS)

    Yang, Jiaqi; Li, Ruibo; Xiao, Yang; Cao, Zhiguo

    2017-07-01

    Raw scanned 3D point clouds are usually irregularly distributed due to inherent limitations of laser sensors, which poses a great challenge for high-quality 3D surface reconstruction. This paper tackles the problem by proposing a local hierarchical clustering (LHC) method to improve the consistency of the point distribution. Specifically, LHC consists of two steps: 1) adaptive octree-based decomposition of the 3D space, and 2) hierarchical clustering. The former reduces the computational complexity and the latter transforms the non-uniform point set into a uniform one. Experimental results on real-world scanned point clouds validate the effectiveness of the method from both qualitative and quantitative aspects.
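
    A simplified sketch of the idea is given below, using a plain voxel grid in place of the adaptive octree: points are grouped per voxel, clustered hierarchically inside each voxel, and each cluster is replaced by its centroid, which evens out the point density. Voxel size and the clustering cut distance are assumed values.

    ```python
    # Uniform resampling by per-voxel hierarchical clustering.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def uniformize(points, voxel=0.5, cut=0.1):
        keys = np.floor(points / voxel).astype(int)
        order = np.lexsort(keys.T)                    # group point indices by voxel key
        keys_sorted = keys[order]
        splits = np.flatnonzero(np.any(np.diff(keys_sorted, axis=0), axis=1)) + 1
        out = []
        for idx in np.split(order, splits):
            cell = points[idx]
            if len(cell) == 1:
                out.append(cell[0])
                continue
            # Single-linkage hierarchical clustering, cut at a fixed distance.
            labels = fcluster(linkage(cell, method="single"), t=cut, criterion="distance")
            for lab in np.unique(labels):
                out.append(cell[labels == lab].mean(axis=0))   # cluster centroid
        return np.asarray(out)
    ```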

  7. An experimental comparison of standard stereo matching algorithms applied to cloud top height estimation from satellite IR images

    NASA Astrophysics Data System (ADS)

    Anzalone, Anna; Isgrò, Francesco

    2016-10-01

    The JEM-EUSO (Japanese Experiment Module-Extreme Universe Space Observatory) telescope will measure Ultra High Energy Cosmic Ray properties by detecting the UV fluorescent light generated in the interaction between cosmic rays and the atmosphere. Cloud information is crucial for a proper interpretation of these data. The problem of recovering the cloud-top height from satellite images in the infrared has attracted attention over the last few decades as a valuable tool for atmospheric monitoring. A number of radiative methods exist, such as CO2 slicing and Split Window algorithms, using one or more infrared bands. A different way to tackle the problem is, when possible, to exploit the availability of multiple views and recover the cloud-top height through stereo imaging and triangulation. A crucial step in the 3D reconstruction is the process that attempts to match a characteristic point or feature selected in one image with one of those detected in the second image. In this article the performance of a group of matching algorithms, including both area-based and global techniques, has been tested. They are applied to stereo pairs of satellite IR images with the final aim of evaluating the cloud-top height. Cloudy images from SEVIRI on the geostationary Meteosat Second Generation 9 and 10 (MSG-2, MSG-3) satellites have been selected. After applying the stereo matching algorithms to the cloudy scenes, the resulting disparity maps are transformed into height maps according to the geometry of the reference data system. As ground truth we have used the height maps provided by the database of MODIS (Moderate Resolution Imaging Spectroradiometer) on board the Terra/Aqua polar satellites, which contains images quasi-synchronous with the MSG imagery.

  8. Efficient characterization of inhomogeneity in contraction strain pattern.

    PubMed

    Nazzal, Christina M; Mulligan, Lawrence J; Criscione, John C

    2012-05-01

    Cardiac dyssynchrony often accompanies patients with heart failure (HF) and can lead to an increase in mortality rate. Cardiac resynchronization therapy (CRT) has been shown to provide substantial benefits to the HF population with ventricular dyssynchrony; however, there still exists a group of patients who do not respond to this treatment. In order to better understand patient response to CRT, it is necessary to quantitatively characterize both electrical and mechanical dyssynchrony. The quantification of mechanical dyssynchrony via characterization of contraction strain field inhomogeneity is the focus of this modeling investigation. Raw data from a 3D finite element (FE) model were received from Roy Kerckhoffs et al. and analyzed in MATLAB. The FE model consisted of canine left and right ventricles coupled to a closed circulation, with the effects of the pericardium acting as a pressure on the epicardial surface. For each of three simulations (normal synchronous, SYNC; right ventricular apical pacing, RVA; and left ventricular free wall pacing, LVFW) the Gauss point locations and values were used to generate lookup tables (LUTs), with each entry representing a location in the heart. In essence, we employed piecewise cubic interpolation to generate a fine point cloud (LUTs) from a coarse point cloud (Gauss points). Strain was calculated in the fiber direction and was then displayed in multiple ways to better characterize strain inhomogeneity. By plotting average strain and standard deviation over time, the point of maximum contraction and the point of maximal inhomogeneity were found for each simulation. Strain values were organized into seven strain bins to show operative strain ranges and the extent of inhomogeneity throughout the heart wall. In order to visualize strain propagation, magnitude, and inhomogeneity over time, we created 2D area maps displaying strain over the entire cardiac cycle. To visualize the spatial strain distribution at the time point of maximum inhomogeneity, a 3D point cloud was created for each simulation, and a CURE index was calculated. We found that both the RVA and LVFW simulations took longer to reach maximum contraction than the SYNC simulation, while also exhibiting larger disparities in strain values during contraction. Strain in the hoop direction was also analyzed and was found to be similar to the fiber strain results. We found that our method of analyzing the contraction strain pattern yielded more detailed spatial and temporal information about fiber strain in the heart over the cardiac cycle than the more conventional CURE index method. We also observed that our method of strain binning aids in visualization of the strain fields; in particular, the separation of the mass points into separate images associated with each strain bin allows the strain pattern to be explicitly compartmentalized.
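
    The two generic operations described above, piecewise cubic interpolation from a coarse set of sample points to a finer one followed by binning of the interpolated strain values, are illustrated below in 2D for simplicity (the study works with 3D Gauss-point data, and the bin edges here are assumed).

    ```python
    # 2D illustration: cubic interpolation of coarse "strain" samples plus binning.
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(1)
    coarse_xy = rng.uniform(0, 1, (200, 2))             # stand-in for Gauss point locations
    coarse_strain = 0.1 * np.sin(4 * coarse_xy[:, 0])   # stand-in for fiber strain values

    # Fine lookup grid (the "fine point cloud" of the abstract).
    gx, gy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
    fine_strain = griddata(coarse_xy, coarse_strain, (gx, gy), method="cubic")

    # Seven strain bins (edges assumed); NaNs outside the convex hull are excluded.
    edges = np.linspace(-0.15, 0.15, 8)
    valid = np.isfinite(fine_strain)
    bins = np.digitize(fine_strain[valid], edges)
    print("points per strain bin:", np.bincount(bins))
    ```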

  9. Characterizing the collapse of a cavitation bubble cloud in a focused ultrasound field

    NASA Astrophysics Data System (ADS)

    Maeda, Kazuki; Colonius, Tim

    2017-11-01

    We study the coherent collapse of clouds of cavitation bubbles generated by the passage of a pulse of ultrasound. In order to characterize such collapse, we conduct a parametric study on the dynamics of a spherical bubble cloud with a radius of r = O(1) mm interacting with traveling ultrasound waves with an amplitude of pa = O(10^2-10^6) Pa and a wavelength of λ = O(1-10) mm in water. Bubbles with a radius of O(10) μm are treated as spherical, radially oscillating cavities dispersed in a continuous liquid phase. The volume of the Lagrangian point bubbles is mapped with a regularization kernel as void fraction onto the Cartesian grids that define the Eulerian liquid phase. The flow field is solved using a WENO-based compressible flow solver. We identified that coherent collapse occurs when λ >> r, regardless of the value of pa, while it only occurs for sufficiently high pa when λ is comparable to r. For the long-wavelength case, the results agree with the theory on the linearized dynamics of d'Agostino and Brennen (1989). We extend the theory to the short-wavelength case. Finally, we analyze the far-field acoustics scattered by individual bubbles and correlate them with the cloud collapse, for applications to acoustic imaging of bubble cloud dynamics. Funding supported by NIH P01-DK043881.

  10. Externally fed star formation: a numerical study

    NASA Astrophysics Data System (ADS)

    Mohammadpour, Motahareh; Stahler, Steven W.

    2013-08-01

    We investigate, through a series of numerical calculations, the evolution of dense cores that are accreting external gas up to and beyond the point of star formation. Our model clouds are spherical, unmagnetized configurations with fixed outer boundaries, across which gas enters subsonically. When we start with any near-equilibrium state, we find that the cloud's internal velocity also remains subsonic for an extended period, in agreement with observations. However, the velocity becomes supersonic shortly before the star forms. Consequently, the accretion rate building up the protostar is much greater than the benchmark value c_s^3/G, where cs is the sound speed in the dense core. This accretion spike would generate a higher luminosity than those seen in even the most embedded young stars. Moreover, we find that the region of supersonic infall surrounding the protostar races out to engulf much of the cloud, again in violation of the observations, which show infall to be spatially confined. Similar problematic results have been obtained by all other hydrodynamic simulations to date, regardless of the specific infall geometry or boundary conditions adopted. Low-mass star formation is evidently a quasi-static process, in which cloud gas moves inward subsonically until the birth of the star itself. We speculate that magnetic tension in the cloud's deep interior helps restrain the infall prior to this event.

  11. Applications of low altitude photogrammetry for morphometry, displacements, and landform modeling

    NASA Astrophysics Data System (ADS)

    Gomez, F. G.; Polun, S. G.; Hickcox, K.; Miles, C.; Delisle, C.; Beem, J. R.

    2016-12-01

    Low-altitude aerial surveying is emerging as a tool that greatly improves the ease and efficiency of measuring landforms for quantitative geomorphic analyses. High-resolution, close-range photogrammetry produces dense, 3-dimensional point clouds that facilitate the construction of digital surface models, as well as providing a potential means of classifying ground targets using spatial structure. This study presents results from recent applications of UAS-based photogrammetry, including high resolution surface morphometry of a lava flow, repeat-pass applications to mass movements, and fault scarp degradation modeling. Depending upon the desired photographic resolution and the platform/payload flown, aerial photos are typically acquired at altitudes of 40 - 100 meters above the ground surface. In all cases, high-precision ground control points are key for accurate (and repeatable) orientation - relying on low-precision GPS coordinates (whether on the ground or geotags in the aerial photos) typically results in substantial rotations (tilt) of the reference frame. Using common ground control points between repeat surveys results in matching point clouds with RMS residuals better than 10 cm. In arid regions, the point cloud is used to assess lava flow surface roughness using multi-scale measurements of point cloud dimensionality. For the landslide study, the point cloud provides a basis for assessing possible displacements. In addition, the high resolution orthophotos facilitate mapping of fractures and their growth. For neotectonic applications, we compare fault scarp modeling results from UAV-derived point clouds versus field-based surveys (kinematic GPS and electronic distance measurements). In summary, a wide-ranging toolbox of low-altitude aerial platforms is becoming available for field geoscientists. In many instances, these tools will offer convenience and reduced cost compared with the effort and expense of contracting aerial imagery acquisitions.

  12. SEMANTIC3D.NET: a New Large-Scale Point Cloud Classification Benchmark

    NASA Astrophysics Data System (ADS)

    Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J. D.; Schindler, K.; Pollefeys, M.

    2017-05-01

    This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a workhorse and already show remarkable performance improvements over the state of the art. CNNs have become the de-facto standard for many tasks in computer vision and machine learning, like semantic segmentation or object detection in images, but have not yet led to a true breakthrough for 3D point cloud labelling tasks due to the lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides denser and more complete point clouds, with a much higher overall number of labelled points, than those already available to the research community. We further provide baseline method descriptions and a comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.

  13. Balloon borne Antarctic frost point measurements and their impact on polar stratospheric cloud theories

    NASA Technical Reports Server (NTRS)

    Rosen, James M.; Hofmann, D. J.; Carpenter, J. R.; Harder, J. W.; Oltmans, S. J.

    1988-01-01

    The first balloon-borne frost point measurements over Antarctica were made during September and October 1987 as part of the NOZE 2 effort at McMurdo. The results indicate water vapor mixing ratios on the order of 2 ppmv in the 15 to 20 km region, which is smaller than the typical values currently being used in polar stratospheric cloud (PSC) theories. The observed water vapor mixing ratio would correspond to saturated conditions for what is thought to be the lowest stratospheric temperatures encountered over the Antarctic. Through the use of available lidar observations, there appears to be significant evidence that some PSCs form at temperatures higher than the local frost point (with respect to water) in the 10 to 20 km region, thus supporting the nitric acid theory of PSC composition. Clouds near 15 km and below appear to form in regions saturated with respect to water and thus are probably mostly ice water clouds, although they could contain relatively small amounts of other constituents. Photographic evidence suggests that the clouds forming above the frost point probably have an appearance quite different from the lower altitude iridescent, colored nacreous clouds.

  14. An approach of point cloud denoising based on improved bilateral filtering

    NASA Astrophysics Data System (ADS)

    Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin

    2018-04-01

    An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm that is employed to process the depth images. First, the mobile platform can move flexibly and its control interface is convenient to use. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process the depth images obtained by the Kinect sensor. The results show that the noise removal is improved compared with standard bilateral filtering. Offline, the color images and the processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method improves the processing speed for the depth images as well as the quality of the resulting point clouds.
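
    As a baseline for comparison (standard bilateral filtering, not the paper's local variant LBF), the depth image can be smoothed while preserving depth discontinuities and then back-projected to a point cloud; the file path, filter parameters and camera intrinsics below are assumed example values.

    ```python
    # Bilateral filtering of a depth image and back-projection to a point cloud.
    import numpy as np
    import cv2

    depth = cv2.imread("kinect_depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)  # placeholder path

    # Arguments: source, neighbourhood diameter, sigma in depth units, sigma in pixels.
    smoothed = cv2.bilateralFilter(depth, 9, 30.0, 5.0)

    # Back-project with assumed pinhole intrinsics (fx, fy, cx, cy).
    fx = fy = 525.0
    cx, cy = 319.5, 239.5
    v, u = np.indices(depth.shape)
    z = smoothed
    cloud = np.dstack([(u - cx) * z / fx, (v - cy) * z / fy, z]).reshape(-1, 3)
    ```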

  15. Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation

    NASA Astrophysics Data System (ADS)

    Lim, Tae W.

    2015-06-01

    A modeling process to simulate point cloud range data that a lidar (light detection and ranging) sensor produces is presented in this paper in order to support the development of non-cooperative pose (relative attitude and position) estimation approaches which will help improve proximity operation capabilities between two adjacent vehicles. The algorithms in the modeling process were based on the homogeneous transformation, which has been employed extensively in robotics and computer graphics, as well as in recently developed pose estimation algorithms. Using a flash lidar in a laboratory testing environment, point cloud data of a test article was simulated and compared against the measured point cloud data. The simulated and measured data sets match closely, validating the modeling process. The modeling capability enables close examination of the characteristics of point cloud images of an object as it undergoes various translational and rotational motions. Relevant characteristics that will be crucial in non-cooperative pose estimation were identified such as shift, shadowing, perspective projection, jagged edges, and differential point cloud density. These characteristics will have to be considered in developing effective non-cooperative pose estimation algorithms. The modeling capability will allow extensive non-cooperative pose estimation performance simulations prior to field testing, saving development cost and providing performance metrics of the pose estimation concepts and algorithms under evaluation. The modeling process also provides "truth" pose of the test objects with respect to the sensor frame so that the pose estimation error can be quantified.
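
    The geometric core of such a simulator is the composition and application of a homogeneous transformation; a minimal sketch is given below (visibility, shadowing and sensor noise are not modelled, and the pose values are arbitrary examples).

    ```python
    # Applying a 4x4 homogeneous "truth" pose to a model point cloud.
    import numpy as np

    def homogeneous(rotation_deg, translation):
        """4x4 transform from Z-Y-X Euler angles (degrees) and a translation."""
        a, b, c = np.radians(rotation_deg)
        Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
        Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
        Rx = np.array([[1, 0, 0], [0, np.cos(c), -np.sin(c)], [0, np.sin(c), np.cos(c)]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx
        T[:3, 3] = translation
        return T

    truth = homogeneous((30, 10, 0), (0.5, 0.0, 5.0))     # "truth" pose w.r.t. the sensor frame

    model = np.random.rand(1000, 3)                       # stand-in model point cloud
    model_h = np.hstack([model, np.ones((len(model), 1))])
    simulated = (truth @ model_h.T).T[:, :3]              # points expressed in the sensor frame
    ranges = np.linalg.norm(simulated, axis=1)            # per-point range values
    ```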

  16. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

    Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as on the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects that are accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside it, their representation by the data and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand the possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework, which has provided a strong base for applications based on knowledge management. In the article we present and describe the knowledge technologies used for our approach, such as the Web Ontology Language (OWL), used to formulate the knowledge base, and the Semantic Web Rule Language (SWRL) with 3D processing and topological built-ins, aiming to combine geometric analysis of 3D point clouds with specialists' knowledge of the scene and of algorithmic processing.

  17. Type-Dependent Responses of Ice Cloud Properties to Aerosols From Satellite Retrievals

    NASA Astrophysics Data System (ADS)

    Zhao, Bin; Gu, Yu; Liou, Kuo-Nan; Wang, Yuan; Liu, Xiaohong; Huang, Lei; Jiang, Jonathan H.; Su, Hui

    2018-04-01

    Aerosol-cloud interactions represent one of the largest uncertainties in external forcings on our climate system. Compared with liquid clouds, the observational evidence for the aerosol impact on ice clouds is much more limited and shows conflicting results, partly because the distinct features of different ice cloud and aerosol types were seldom considered. Using 9-year satellite retrievals, we find that, for convection-generated (anvil) ice clouds, cloud optical thickness, cloud thickness, and cloud fraction increase with small-to-moderate aerosol loadings (<0.3 aerosol optical depth) and decrease with further aerosol increase. For in situ formed ice clouds, however, these cloud properties increase monotonically and more sharply with aerosol loadings. An increase in loading of smoke aerosols generally reduces cloud optical thickness of convection-generated ice clouds, while the reverse is true for dust and anthropogenic pollution aerosols. These relationships between different cloud/aerosol types provide valuable constraints on the modeling assessment of aerosol-ice cloud radiative forcing.

  18. Roughness Estimation from Point Clouds - A Comparison of Terrestrial Laser Scanning and Image Matching by Unmanned Aerial Vehicle Acquisitions

    NASA Astrophysics Data System (ADS)

    Rutzinger, Martin; Bremer, Magnus; Ragg, Hansjörg

    2013-04-01

    Recently, terrestrial laser scanning (TLS) and the matching of images acquired by unmanned aerial vehicles (UAV) have been used operationally for 3D geodata acquisition in geoscience applications. However, the two systems cover different application domains in terms of acquisition conditions and data properties, i.e. accuracy and line of sight. In this study we investigate the major differences between the two platforms for terrain roughness estimation. Terrain roughness is an important input for various applications such as morphometry studies, geomorphologic mapping, and natural process modeling (e.g. rockfall, avalanche, and hydraulic modeling). Data have been collected simultaneously by TLS using an Optech ILRIS3D and by a rotary UAV (an octocopter from twins.nrn) for a 900 m² test site located in a riverbed in Tyrol, Austria (Judenbach, Mieming). The TLS point cloud has been acquired from three scan positions. These have been registered using the iterative closest point algorithm and a target-based referencing approach. For registration, geometric targets (spheres) with a diameter of 20 cm were used. These targets were measured with dGPS for absolute georeferencing. The TLS point cloud has an average point density of 19,000 pts/m², which represents a point spacing of about 5 mm. 15 images were acquired by the UAV at a height of 20 m using a calibrated camera with a focal length of 18.3 mm. A 3D point cloud containing RGB attributes was derived using the APERO/MICMAC software, by a direct georeferencing approach based on the aircraft IMU data. The point cloud was finally co-registered with the TLS data to ensure optimal preparation for the analysis. The UAV point cloud has an average point density of 17,500 pts/m², which represents a point spacing of 7.5 mm. After registration and georeferencing, the level of detail of the roughness representation in both point clouds has been compared considering elevation differences, roughness and the representation of different grain sizes. UAV closes the gap between aerial and terrestrial surveys in terms of resolution and acquisition flexibility. This is also true for the data accuracy. Considering the data collection and data quality properties of both systems, each has its own merit in terms of scale, data quality, data collection speed and application.

  19. Large Scale Ice Water Path and 3-D Ice Water Content

    DOE Data Explorer

    Liu, Guosheng

    2008-01-15

    Cloud ice water concentration is one of the most important, yet poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM program. In addition to the vertical profiles of temperature, water vapor and condensed water at the model grids, large-scale horizontal advective tendencies of these variables are also required as forcing terms in the single-column models. Observed horizontal advection of condensed water has not been available because the radar/lidar/radiometer observations at the ARM site are single-point measurements and therefore do not provide the horizontal distribution of condensed water. The intention of this product is to provide the large-scale distribution of cloud ice water by merging available surface and satellite measurements. The satellite cloud ice water algorithm uses ARM ground-based measurements as a baseline and produces datasets of 3-D cloud ice water distributions in a 10 deg x 10 deg area near the ARM site. The approach of the study is to expand a (surface) point measurement to a (satellite) areal measurement. That is, this study takes advantage of the high-quality cloud measurements at the ARM site. We use the cloud characteristics derived from the point measurement to guide/constrain the satellite retrieval, then use the satellite algorithm to derive the cloud ice water distributions within an area, i.e., 10 deg x 10 deg centered at the ARM site.

  20. CLICK: The USGS Center for LIDAR Information Coordination & Knowledge

    USGS Publications Warehouse

    Menig, Jordan C.; Stoker, Jason M.

    2007-01-01

    While this technology has proven its use as a mapping tool - effective for generating bare earth DEMs at high resolutions (1-3 m) and with high vertical accuracies (15-18 cm) - obstacles remain for its application as a remote sensing tool:
    * The high cost of collecting LIDAR
    * The steep learning curve on research and application of using the entire point cloud
    * The challenges of discovering whether data exist for regions of interest

  1. Bipolar cloud-to-ground lightning flash observations

    NASA Astrophysics Data System (ADS)

    Saba, Marcelo M. F.; Schumann, Carina; Warner, Tom A.; Helsdon, John H.; Schulz, Wolfgang; Orville, Richard E.

    2013-10-01

    Bipolar lightning is usually defined as a lightning flash in which the current waveform exhibits a polarity reversal. There are very few reported cases in the literature of cloud-to-ground (CG) bipolar flashes using only one channel. Reports of this type of bipolar flash are not common because, in order to confirm that currents of both polarities follow the same channel to the ground, video records are necessary. This study presents five clear observations of single-channel bipolar CG flashes. High-speed video and electric field measurements are used and analyzed. Based on the video images obtained and on previous observations of positive CG flashes with high-speed cameras, we suggest that positive leader branches which do not participate in the initial return stroke of a positive cloud-to-ground flash later generate recoil leaders whose negative ends, upon reaching the branch point, traverse the return stroke channel path to the ground, resulting in a subsequent return stroke of opposite polarity.

  2. Femtosecond laser filament induced condensation and precipitation in a cloud chamber

    PubMed Central

    Ju, Jingjing; Liu, Jiansheng; Liang, Hong; Chen, Yu; Sun, Haiyi; Liu, Yonghong; Wang, Jingwei; Wang, Cheng; Wang, Tiejun; Li, Ruxin; Xu, Zhizhan; Chin, See Leang

    2016-01-01

    A unified picture of femtosecond laser induced precipitation in a cloud chamber is proposed. Among the three principal consequences of filamentation from the point of view of thermodynamics, namely the generation of chemicals, shock waves and thermal air flow motion (due to convection), the last one turns out to be the principal cause. Much of the filament-induced chemicals would stick onto the existing background CCNs (Cloud Condensation Nuclei) through collision, making the latter more active. Strong mixing of air having a large temperature gradient would result in supersaturation, in which the background CCNs would grow efficiently into water/ice/snow. This conclusion was supported by two independent experiments using pure heating or a fan to imitate the laser-induced thermal effect or the strong air flow motion, respectively. Without the assistance of any shock wave or chemical CCNs arising from the laser filament, condensation and precipitation occurred. Meanwhile, we believe that latent heat release during condensation/precipitation would enhance the air flow for mixing. PMID:27143227

  3. Design and testing of the navigation model for three axis stabilized earth oriented satellites applied to the ATS-6 satellite image data base

    NASA Technical Reports Server (NTRS)

    Kuhlow, W. W.; Chatters, G. C.

    1977-01-01

    An earth-edge methodology has been developed to account for the relative attitude changes between successive ATS-6 images, which allows reasonably high-quality wind sets to be produced. The method consists of measuring the displacements of the right and left infrared earth edges between successive ATS-6 images as a function of scan line; from these measurements the attitude changes can be deduced and used to correct the apparent cloud displacement measurements. The wind data sets generated from ATS-6 using the earth-edge methodology were compared with those derived from the SMS-1 images (and model) covering the same time period. Quantitative comparisons for low-level trade cumuli were made at interpolated uniformly spaced grid points and for selected individual comparison clouds. For the selected individual comparison clouds, the root-mean-square differences for the U and V components were 1.0 and 1.2 meters per second, with a maximum wind direction difference of 15 deg.

  4. Coarse Point Cloud Registration by Egi Matching of Voxel Clusters

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo

    2016-06-01

    Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, more scans are often required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptors of those voxel clusters are constructed using the significant eigenvectors of each voxel in the cluster. Correspondences between clusters in the source and target data are obtained according to the similarity between their EGI descriptors. The random sample consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. This new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point-to-point distance between the two input point clouds. The two tests presented resulted in mean distances of 7.6 mm and 9.5 mm respectively, which is adequate for fine registration.
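
    One building block of this pipeline, the per-voxel PCA that assigns a dimensionality (linear, planar or scattered) to each cell, is sketched below; the voxel size and the labelling rule are assumptions made for the illustration.

    ```python
    # Per-voxel dimensionality features from the covariance eigenvalues.
    import numpy as np

    def voxel_dimensionality(points, voxel=0.2):
        keys = np.floor(points / voxel).astype(int)
        _, inverse = np.unique(keys, axis=0, return_inverse=True)
        labels = {}
        for cell_id in np.unique(inverse):
            cell = points[inverse == cell_id]
            if len(cell) < 3:
                continue
            evals = np.sort(np.linalg.eigvalsh(np.cov(cell.T)))[::-1]   # l1 >= l2 >= l3
            l1, l2, l3 = np.sqrt(np.maximum(evals, 0))
            linearity = (l1 - l2) / l1
            planarity = (l2 - l3) / l1
            scattering = l3 / l1
            labels[cell_id] = int(np.argmax([linearity, planarity, scattering]))
        return labels   # per occupied voxel: 0 = linear, 1 = planar, 2 = scattered
    ```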

  5. New Perspectives of Point Clouds Color Management - the Development of Tool in Matlab for Applications in Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Pepe, M.; Ackermann, S.; Fregonese, L.; Achille, C.

    2017-02-01

    The paper describes a method for point cloud color management and integration for data obtained from Terrestrial Laser Scanner (TLS) and Image-Based (IB) survey techniques. Especially in the Cultural Heritage (CH) environment, methods and techniques to improve the color quality of point clouds have a key role, because a homogeneous texture leads to a more accurate reconstruction of the investigated object and to a more pleasant perception of the object's color as well. A color management method for point clouds can be useful for a single dataset acquired by TLS or IB techniques, as well as in cases of chromatic heterogeneity resulting from merging different datasets. The latter condition can occur when the scans are acquired at different moments of the same day or when scans of the same object are performed over a period of weeks or months, and consequently with different environment/lighting conditions. In this paper, a procedure to balance the point cloud color in order to make the different data sets uniform, to improve the chromatic quality and to highlight further details is presented and discussed.
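
    One simple way to balance color between two point cloud datasets is a per-channel mean and standard deviation transfer; the sketch below is only a hedged illustration of that idea, not the tool described in the paper.

    ```python
    # Per-channel color transfer: match the source statistics to the reference dataset.
    import numpy as np

    def transfer_color(rgb_source, rgb_reference):
        """Both inputs are (n, 3) arrays of per-point RGB values in 0-255."""
        src_mean, src_std = rgb_source.mean(axis=0), rgb_source.std(axis=0) + 1e-9
        ref_mean, ref_std = rgb_reference.mean(axis=0), rgb_reference.std(axis=0)
        balanced = (rgb_source - src_mean) / src_std * ref_std + ref_mean
        return np.clip(balanced, 0.0, 255.0)
    ```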

  6. Interpretation of multi-wavelength-retrieved cloud droplet effective radii in terms of cloud vertical inhomogeneity based on water cloud simulations using a spectral-bin microphysics cloud model

    NASA Astrophysics Data System (ADS)

    Matsui, T. N.; Suzuki, K.; Nakajima, T. Y.; Matsumae, Y.

    2011-12-01

    Clouds play an important role in the energy balance and climate changes of the Earth. IPCC AR4, however, pointed out that cloud feedback is still a large source of uncertainty in climate estimates. In the recent decade, new satellites with active instruments (e.g. CloudSat) have represented a new epoch in earth observations. Active remote sensing is powerful for illustrating the vertical structures of clouds, but passive remote sensing from satellite imagers also contributes to a better understanding of cloud systems. For instance, Nakajima et al. (2010a) and Suzuki et al. (2010) illustrated the transition of cloud growth, from cloud droplets to drizzle to rain, using a combined analysis of the cloud droplet size retrieved from passive imagery (MODIS) and the reflectivity profiles from CloudSat. Furthermore, EarthCARE, a new satellite to be launched in the coming years, carries not only active but also passive instruments for such combined analysis. On the other hand, methods to retrieve more advanced information on cloud properties are also required, because many imagers are in operation or planned (e.g. GCOM-C/SGLI) and have advantages such as a wide observation swath and more observation channels. Cloud droplet effective radius (CDR) and cloud optical thickness (COT) can be retrieved using a non-water-absorbing band (e.g. 0.86 μm) and a water-absorbing band (1.6, 2.1 or 3.7 μm) of imagers under assumptions such as a log-normal droplet size distribution and a plane-parallel cloud structure. However, differences between the three CDRs retrieved using 1.6, 2.1 or 3.7 μm (R16, R21 and R37) are found in the satellite observations. Several studies pointed out that vertical/horizontal inhomogeneity of the cloud structure, differences in the penetration depth of the water-absorbing bands, multi-modal droplet distributions and/or 3-D radiative transfer effects cause the CDR differences. In other words, advanced information about clouds may lie hidden in these differences. Nakajima et al. (2010b) investigated the differences in sensitivity to particle size and penetration depth in an attempt to explain the observed CDR differences, using a simple two-layer cloud model with bi-modal size distribution functions. Their results showed the differences in sensitivity of the 1.6, 2.1 and 3.7 μm bands to droplet sizes and their vertical stratification. In this study, we further investigate the impact of the vertical inhomogeneity structure, including drizzle, by using a spectral-bin microphysics cloud model. We apply a 1-D radiative transfer computation to the numerical cloud fields generated by the cloud model, and retrieve the CDRs from the reflectances thus simulated at each band. We then compare the statistics of these retrieved CDRs with the CDRs obtained from MODIS observations and derive the sensitivity functions of the retrieved CDRs to particle size and optical depth from the sets of droplet distribution functions predicted by the model and the retrieved CDRs. This study is an attempt to interpret the CDR differences in terms of the cloud vertical structure and the cloud particle growth processes.

  7. Microphysical Processes Affecting the Pinatubo Volcanic Plume

    NASA Technical Reports Server (NTRS)

    Hamill, Patrick; Houben, Howard; Young, Richard; Turco, Richard; Zhao, Jingxia

    1996-01-01

    In this paper we consider microphysical processes which affect the formation of sulfate particles and their size distribution in a dispersing cloud. A model for the dispersion of the Mt. Pinatubo volcanic cloud is described. We then consider a single point in the dispersing cloud and study the effects of nucleation, condensation and coagulation on the time evolution of the particle size distribution at that point.

  8. Polarization Catastrophe Contributing to Rotation and Tornadic Motion in Cumulo-Nimbus Clouds

    NASA Astrophysics Data System (ADS)

    Handel, P. H.

    2007-05-01

    When the concentration of sub-micron ice particles in a cloud exceeds 2.5E21 per cubic cm, divided by the squared average number of water molecules per crystallite, the polarization catastrophe occurs. Then all ice crystallites nucleated on aerosol dust particles align their dipole moments in the same direction, and a large polarization vector field is generated in the cloud. Often this vector field has a radial component directed away from the vertical axis of the cloud. It is induced by the pre-existing electric field caused by the charged screening layers at the cloud surface, the screening shell of the cloud. The presence of a vertical component of the magnetic field of the earth creates a density of linear momentum G = D x B in the azimuthal direction, where D = eE + P is the electric displacement vector and e is the vacuum permittivity. This linear momentum density yields an angular momentum density vector directed upward in the northern hemisphere, if the polarization vector points away from the vertical axis of the cloud. When the cloud becomes colloidally unstable, the crystallites grow beyond the size limit at which they still could carry a large ferroelectric saturation dipole moment, and the polarization vector quickly disappears. Then the cloud begins to rotate with an angular momentum that has the same direction. Due to the large average number of water molecules in a crystallite, the polarization catastrophe (PC) is present in practically all clouds, and is compensated by masking charges. In cumulo-nimbus (thunder-) clouds the collapse of the PC is rapid, and the masking charges lead to lightning, and in the upper atmosphere also to sprites, elves, and blue jets. In stratus clouds, however, the collapse is slow, and only leads to reverse polarity in dissipating clouds (minus on the bottom), as compared with growing clouds (plus on the bottom, because of the excess polarization charge). References: P.H. Handel: "Polarization Catastrophe Theory of Cloud Electricity", J. Geophysical Research 90, 5857-5863 (1985). P.H. Handel and P.B. James: "Polarization Catastrophe Model of Static Electrification and Spokes in the B-Ring of Saturn", Geophys. Res. Lett. 10, 1-4 (1983).

  9. 3D local feature BKD to extract road information from mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Dong, Zhen; Liang, Fuxun; Li, Bijun; Peng, Xiangyang

    2017-08-01

    Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation, and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously affected by varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components to encode the shape and intensity information of the 3D point clouds, which are then fed to a random forest classifier to extract curbs and markings on the road. These are in turn used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, showing that the BKD is accurate and robust against varying point density and noise.
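
    The two ingredients named above, Gaussian kernel density estimation over a local neighbourhood followed by binarization, can be sketched in simplified form as below; the grid size, bandwidth handling and binarization rule are assumptions, and the actual BKD construction (including its intensity encoding) is more involved.

    ```python
    # Simplified "density then binarize" local descriptor for a 3D neighbourhood.
    import numpy as np
    from scipy.stats import gaussian_kde

    def kde_binary_descriptor(neighbor_offsets, grid_n=4, half_width=0.5):
        """neighbor_offsets: (k, 3) offsets of neighbours from the query point (k > 3, non-degenerate)."""
        kde = gaussian_kde(neighbor_offsets.T)
        lin = np.linspace(-half_width, half_width, grid_n)
        gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
        density = kde(np.vstack([gx.ravel(), gy.ravel(), gz.ravel()]))
        return (density > density.mean()).astype(np.uint8)   # binarized descriptor of length grid_n**3

    # Descriptors computed at labelled training points would then feed a classifier,
    # e.g. sklearn.ensemble.RandomForestClassifier, as in the abstract.
    ```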

  10. Hierarchical Regularization of Polygons for Photogrammetric Point Clouds of Oblique Images

    NASA Astrophysics Data System (ADS)

    Xie, L.; Hu, H.; Zhu, Q.; Wu, B.; Zhang, Y.

    2017-05-01

    Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines; these are topologically defect-laden, free of semantic information and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of 2D polygons from MVS point clouds is still a non-trivial task, given that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, cannot be preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for the photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary point extraction, e.g. using alpha shapes, the local level is used to consolidate the original points by refining the orientation and position of the points using linear priors. The points are then grouped into local segments by forward searching. In the global level, regularities are enforced through a labeling process, which encourages segments to share the same label, where segments with the same label are parallel or orthogonal. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.

  11. 3D Building Façade Reconstruction Using Handheld Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Sadeghi, F.; Arefi, H.; Fallah, A.; Hahn, M.

    2015-12-01

    Three-dimensional building modelling has been an interesting topic of research for decades, and it seems that photogrammetric methods provide the only economic means to acquire truly 3D city data. Given the enormous developments in 3D building reconstruction, with applications such as navigation systems, location-based services and urban planning, the need to consider semantic features (such as windows and doors) has become more essential than ever, and therefore a 3D model of buildings as simple blocks is no longer sufficient. To reconstruct the façade elements completely, we employed high-density point cloud data obtained from a handheld laser scanner. The advantage of the handheld laser scanner, with its capability of directly acquiring very dense 3D point clouds, is that there is no need to derive three-dimensional data from multiple images using structure-from-motion techniques. This paper presents a grammar-based algorithm for façade reconstruction using handheld laser scanner data. The proposed method is a combination of bottom-up (data-driven) and top-down (model-driven) methods in which the basic façade elements are first extracted in a bottom-up way and then serve as prior knowledge for further processing to complete the models, especially in occluded and incomplete areas. The first step of the data-driven modelling is using a conditional RANSAC (RANdom SAmple Consensus) algorithm to detect the façade plane in the point cloud data and remove noisy objects like trees, pedestrians, traffic signs and poles. Then, the façade plane is divided into three depth layers to detect protrusion, indentation and wall points using a density histogram. Due to the poor reflection of laser beams from glass, windows appear as holes in the point cloud data and can therefore be distinguished and extracted easily compared to the other façade elements. The next step is rasterizing the indentation layer that holds the window and door information. After the rasterization process, morphological operators are applied in order to remove small irrelevant objects. Next, horizontal splitting lines are employed to determine floors and vertical splitting lines are employed to detect walls, windows, and doors. The wall, window and door elements, which are named terminals, are clustered during a classification process. Each terminal carries a width property. Among the terminals, windows and doors are named geometry tiles in the definition of the vocabulary of the grammar rules. Higher-order structures inferred by grouping the tiles result in the production rules. The rules, together with the three-dimensionally modelled façade elements, constitute a formal grammar that is named the façade grammar. This grammar holds all the information that is necessary to reconstruct façades in the style of the given building. Thus, it can be used to improve and complete the façade reconstruction in areas with no or limited sensor data. Finally, a 3D reconstructed façade model is generated whose geometric size and position accuracy depend on the density of the raw point cloud.
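
    The generic core of the plane-detection step, plain RANSAC plane fitting, is sketched below; the conditional constraints mentioned in the abstract (and all thresholds) are not reproduced and are assumptions of this illustration.

    ```python
    # RANSAC plane fitting: return the inlier mask of the dominant plane.
    import numpy as np

    def ransac_plane(points, n_iter=500, dist_thresh=0.02, seed=0):
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iter):
            sample = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-12:                 # degenerate (collinear) sample
                continue
            normal /= norm
            d = -normal @ sample[0]
            inliers = np.abs(points @ normal + d) < dist_thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return best_inliers                  # e.g. the façade plane in a street-side scan
    ```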

  12. A Machine Learning Approach to Pedestrian Detection for Autonomous Vehicles Using High-Definition 3D Range Data

    PubMed Central

    Navarro, Pedro J.; Fernández, Carlos; Borraz, Raúl; Alonso, Diego

    2016-01-01

    This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians, by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in the cube. The work reports an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), Naïve Bayes classifier (NBC), and Support Vector Machine (SVM). These algorithms have been trained with 1931 samples. The final performance of the method, measured on a real traffic scene containing 16 pedestrians and 469 samples of non-pedestrians, shows a sensitivity of 81.2%, an accuracy of 96.2% and a specificity of 96.8%. PMID:28025565
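
    The projection-and-classify pipeline can be sketched as follows under stated assumptions: points inside a candidate cube (normalised to the unit cube) are turned into three small occupancy images (XY, XZ, YZ), flattened into one feature vector, and classified with an SVM; the image resolution and SVM settings are placeholders.

    ```python
    # Feature vector from the three orthogonal projections of a candidate cube.
    import numpy as np
    from sklearn.svm import SVC

    def cube_features(points, res=24):
        """points: (n, 3) coordinates inside the candidate cube, already scaled to [0, 1]."""
        feats = []
        for a, b in [(0, 1), (0, 2), (1, 2)]:          # XY, XZ and YZ projections
            img, _, _ = np.histogram2d(points[:, a], points[:, b],
                                       bins=res, range=[[0, 1], [0, 1]])
            feats.append((img > 0).astype(np.float32).ravel())
        return np.concatenate(feats)

    # Hypothetical training data (not provided here):
    # X = np.stack([cube_features(c) for c in training_cubes])
    # y = np.array(training_labels)                     # 1 = pedestrian, 0 = other
    # clf = SVC(kernel="rbf", C=10.0).fit(X, y)
    # prediction = clf.predict([cube_features(candidate_cube)])
    ```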

  13. A Machine Learning Approach to Pedestrian Detection for Autonomous Vehicles Using High-Definition 3D Range Data.

    PubMed

    Navarro, Pedro J; Fernández, Carlos; Borraz, Raúl; Alonso, Diego

    2016-12-23

    This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians, by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in the cube. The work reports an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), Naïve Bayes classifier (NBC), and Support Vector Machine (SVM). These algorithms have been trained with 1931 samples. The final performance of the method, measured on a real traffic scene containing 16 pedestrians and 469 samples of non-pedestrians, shows a sensitivity of 81.2%, an accuracy of 96.2% and a specificity of 96.8%.

  14. A service brokering and recommendation mechanism for better selecting cloud services.

    PubMed

    Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan

    2014-01-01

    Cloud computing is becoming the new-generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences, in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed, after which relevant experiments were conducted. The results show that the proposed system collects/updates/records cloud information from multiple mainstream public cloud services in real time; generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction; assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI).

  15. Performance testing of 3D point cloud software

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-10-01

    LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for managing the data are available on the market, including open source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD 3D Civil and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in the loading time of the point clouds and in CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.
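
    A minimal example of the kind of measurement discussed (loading time and memory footprint) is given below for a Python loader using only the standard library; the paper benchmarks desktop suites, not Python code, and the file name is a placeholder.

    ```python
    # Timing and peak-memory measurement for loading an ASCII XYZ point cloud.
    import time
    import tracemalloc
    import numpy as np

    def benchmark_load(path):
        tracemalloc.start()
        t0 = time.perf_counter()
        cloud = np.loadtxt(path)                        # assumed ASCII XYZ file
        elapsed = time.perf_counter() - t0
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        return elapsed, peak / 2**20, cloud.shape[0]    # seconds, peak MiB, number of points

    # seconds, peak_mib, n_points = benchmark_load("scan.xyz")
    ```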

  16. Triangulation Error Analysis for the Barium Ion Cloud Experiment. M.S. Thesis - North Carolina State Univ.

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1973-01-01

    The triangulation method developed specifically for the Barium Ion Cloud Project is discussed. Expressions for the four displacement errors, the three slope errors, and the curvature error in the triangulation solution due to a probable error in the lines-of-sight from the observation stations to points on the cloud are derived. The triangulation method is then used to determine the effect of the following on these different errors in the solution: the number and location of the stations, the observation duration, east-west cloud drift, the number of input data points, and the addition of extra cameras to one of the stations. The pointing displacement errors and the pointing slope errors are compared. The displacement errors in the solution due to a probable error in the position of a moving station, plus the weighting factors for the data from the moving station, are also determined.

  17. 3D reconstruction of wooden member of ancient architecture from point clouds

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiju; Wang, Yanmin; Li, Deren; Zhao, Jun; Song, Daixue

    2006-10-01

    This paper presents a 3D reconstruction method for modelling wooden members of ancient architecture from point clouds based on an improved deformable model. Three steps are taken to recover the shape of a wooden member. Firstly, the Hessian matrix is adopted to compute the axis of the wooden member. Secondly, an initial model of the wooden member is built from contours orthogonal to its axis. Thirdly, an accurate model is obtained through the coupling between the initial model and the point cloud of the wooden member, according to the theory of the improved deformable model. Every step and algorithm is studied and described in the paper. Using point clouds captured from the Forbidden City of China, a shaft member and a beam member are taken as examples to test the proposed method. Results show the efficiency and robustness of the method in modelling the wooden members of ancient architecture.
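
    The paper derives the member axis from the Hessian matrix; as a rough, simpler illustration of recovering the axis of an elongated member from its points, the following sketch uses PCA instead (the dominant eigenvector of the point covariance). The PCA substitution and the synthetic data are assumptions of this example, not the authors' method.

    ```python
    # Rough illustration only: the axis of an elongated point cloud is approximated by
    # PCA, taking the eigenvector of the covariance matrix with the largest eigenvalue.
    import numpy as np

    def estimate_axis(points):
        """points: (N, 3) array sampled on an elongated member."""
        centroid = points.mean(axis=0)
        cov = np.cov((points - centroid).T)            # 3x3 covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
        axis = eigvecs[:, -1]                          # direction of largest variance
        return centroid, axis / np.linalg.norm(axis)

    # Example with synthetic points scattered around the z-axis:
    rng = np.random.default_rng(0)
    pts = np.column_stack([rng.normal(0, 0.05, 1000),
                           rng.normal(0, 0.05, 1000),
                           rng.uniform(0, 2.0, 1000)])
    print(estimate_axis(pts))
    ```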

  18. a Numerical Study of Close Approaches for a Cloud of Debris Considering Atmospheric Drag and Lift

    NASA Astrophysics Data System (ADS)

    Gomes, Vivian; Golebiewska, Justyna; Prado, Antonio

    The present paper studies close approaches between a cloud of debris and a planet. The dynamical model considers the atmosphere of the planet, in terms of both drag and lift. The cloud is created during the passage of a spacecraft through the atmosphere of the planet, which is responsible for the explosion of the spacecraft. The dynamical system is composed of the planet, the Sun, and the spacecraft, which explodes and becomes a cloud of debris. The planet and the Sun are in circular planar orbits. The equations of motion are those of the circular planar restricted three-body problem with the addition of the atmospheric forces: drag and lift. The planet Jupiter is used for the numerical simulations. The initial conditions of the spacecraft and the debris are specified at the periapsis, which is the point where the explosion occurs. The equations of motion are numerically integrated forward in time for each particle, until the particle is far enough from the planet that its effects can be disregarded and the Sun-particle pair can be treated as a two-body system. We then compute the velocity, energy and angular momentum of each particle after the passage by the planet, based on two-body celestial mechanics. From those results, the eccentricity and the semi-major axis of each particle can be obtained. The orbit of the spacecraft is then integrated backwards in time as a single body. The difference from the usual close-approach technique is the presence of the atmosphere of the planet, which generates drag and lift forces on the spacecraft, causes the explosion and modifies the trajectories of the debris generated by the explosion. The primary objective of the present paper is to map the modifications of the orbits of the debris that compose the cloud due to the close approach with the planet. Emphasis is given to mapping the orbital parameters of the debris after the close approach. The effects are then compared with the same maneuvers performed without the inclusion of the atmosphere. This type of research is useful because it helps to obtain the size and density of the cloud of debris after the passage as a function of time. That information has an impact on the evaluation of the risks that spacecraft face when passing at short distances from this cloud.
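
    Once a particle is treated as a Sun-particle two-body system, its semi-major axis and eccentricity follow from the heliocentric state vector. The sketch below is a minimal illustration under that two-body assumption; the example state vector is a placeholder value.

    ```python
    # Two-body orbital elements (semi-major axis and eccentricity) from a heliocentric
    # state vector, as used once a debris particle is far enough from the planet.
    import numpy as np

    MU_SUN = 1.32712440018e11   # km^3 / s^2

    def orbital_elements(r_vec, v_vec, mu=MU_SUN):
        r = np.linalg.norm(r_vec)
        v = np.linalg.norm(v_vec)
        energy = v**2 / 2.0 - mu / r                      # specific orbital energy
        a = -mu / (2.0 * energy)                          # semi-major axis
        h_vec = np.cross(r_vec, v_vec)                    # specific angular momentum
        e_vec = np.cross(v_vec, h_vec) / mu - r_vec / r   # eccentricity vector
        return a, np.linalg.norm(e_vec)

    # Placeholder example: a particle roughly at Jupiter's distance from the Sun.
    r = np.array([7.78e8, 0.0, 0.0])                      # km
    v = np.array([0.0, 13.1, 0.0])                        # km/s
    print(orbital_elements(r, v))
    ```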

  19. Axial-Stereo 3-D Optical Metrology for Inner Profile of Pipes Using a Scanning Laser Endoscope

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Johnston, Richard S.; Melville, C. David; Seibel, Eric J.

    2015-07-01

    With the rapid progress in the development of optoelectronic components and computational power, 3-D optical metrology is becoming more and more popular in manufacturing and quality control due to its flexibility and high speed. However, most optical metrology methods are limited to external surfaces. This article proposes a new approach to measure small internal 3-D surfaces with a scanning fiber endoscope and an axial-stereo vision algorithm. A dense, accurate point cloud of internally machined threads was generated and compared with the corresponding X-ray 3-D data as ground truth, and the agreement was quantified using the Iterative Closest Point (ICP) algorithm.

  20. Global Forest Canopy Height Maps Validation and Calibration for The Potential of Forest Biomass Estimation in The Southern United States

    NASA Astrophysics Data System (ADS)

    Ku, N. W.; Popescu, S. C.

    2015-12-01

    In the past few years, three global forest canopy height maps have been released. Lefsky (2010) first utilized the Geoscience Laser Altimeter System (GLAS) on the Ice, Cloud and land Elevation Satellite (ICESat) and Moderate Resolution Imaging Spectroradiometer (MODIS) data to generate a global forest canopy height map in 2010. Simard et al. (2011) integrated GLAS data and other ancillary variables, such as MODIS, Shuttle Radar Topography Mission (SRTM), and climatic data, to generate another global forest canopy height map in 2011. Los et al. (2012) also used GLAS data to create a vegetation height map in 2012. Several studies have attempted to compare these global height maps to other sources of data; Bolton et al. (2013) concluded that Simard's forest canopy height map shows strong agreement with airborne lidar-derived heights. The Los et al. map is a coarse vegetation height map with a horizontal resolution of 0.5 decimal degrees, around 50 km in the US, which is not suitable for the purpose of our research. Thus, Simard's global forest canopy height map is the primary map for this research study. The main objectives of this research were to validate and calibrate Simard's map with airborne lidar data and other ancillary variables in the southern United States. The airborne lidar data were collected between 2010 and 2012 from: (1) the NASA LiDAR, Hyperspectral & Thermal Imager (G-LiHT) program; (2) the National Ecological Observatory Network's (NEON) prototype data sharing program; (3) the NSF OpenTopography Facility; and (4) the Department of Ecosystem Science and Management at Texas A&M University. The airborne lidar study areas cover a wide variety of vegetation types across the southern US. The airborne lidar data were post-processed to generate lidar-derived metrics and assigned to four different classes of point cloud data: data with ground points, above 1 m, above 3 m, and above 5 m. The root mean square error (RMSE) and coefficient of determination (R²) are used to examine the discrepancies in canopy height between the airborne lidar-derived metrics and the global forest canopy height map, and regression and random forest approaches are used to calibrate the global forest canopy height map. In summary, the research produces a calibrated forest canopy height map of the southern US.
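
    The RMSE and R² comparison between map heights and airborne lidar metrics is straightforward to compute; a minimal sketch with placeholder height values is shown below.

    ```python
    # RMSE and coefficient of determination (R^2) between map heights and airborne
    # lidar reference heights; a minimal numpy sketch with placeholder arrays.
    import numpy as np

    def rmse_r2(reference, predicted):
        reference = np.asarray(reference, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        residuals = predicted - reference
        rmse = np.sqrt(np.mean(residuals**2))
        ss_res = np.sum(residuals**2)
        ss_tot = np.sum((reference - reference.mean())**2)
        r2 = 1.0 - ss_res / ss_tot
        return rmse, r2

    # Placeholder example: lidar canopy heights vs. map heights (metres).
    lidar = [18.2, 22.5, 15.1, 30.4, 12.8]
    gfch  = [16.9, 24.0, 14.0, 27.5, 13.5]
    print(rmse_r2(lidar, gfch))
    ```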

  1. Localization of Pathology on Complex Architecture Building Surfaces

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, A. A.; Lakakis, K. N.; Mouza, V. K.

    2017-02-01

    The technology of 3D laser scanning is considered one of the most common methods for heritage documentation. The point clouds that are produced provide information of high detail, both geometric and thematic. Various studies examine techniques for the best exploitation of this information. In this study, an algorithm for localizing pathology, such as cracks and fissures, on complex building surfaces is tested. The algorithm makes use of the points' positions in the point cloud and tries to distinguish them into two groups (patterns): pathology and non-pathology. The extraction of the geometric information used for recognizing the pattern of the points is accomplished via Principal Component Analysis (PCA) in user-specified neighborhoods across the whole point cloud. The implementation of PCA leads to the definition of the normal vector at each point of the cloud. Two tests that operate separately examine both local and global geometric criteria among the points and conclude which of them should be categorized as pathology. The proposed algorithm was tested on parts of the masonry of the Gazi Evrenos Baths, located in the city of Giannitsa in northern Greece.
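
    Defining a normal vector at each point via PCA of a local neighborhood is a standard construction; the sketch below is a minimal illustration of that step, with an arbitrarily chosen neighborhood radius, and is not the authors' exact implementation.

    ```python
    # PCA-based normal estimation in fixed-radius neighborhoods: the eigenvector of the
    # local covariance matrix with the smallest eigenvalue approximates the surface
    # normal at each point. Minimal numpy/scipy sketch; the radius is a placeholder.
    import numpy as np
    from scipy.spatial import cKDTree

    def estimate_normals(points, radius=0.05):
        tree = cKDTree(points)
        normals = np.zeros_like(points)
        for i, p in enumerate(points):
            idx = tree.query_ball_point(p, r=radius)      # neighborhood indices
            if len(idx) < 3:
                continue                                   # too few points to fit a plane
            nbrs = points[idx] - points[idx].mean(axis=0)
            cov = nbrs.T @ nbrs / len(idx)
            eigvals, eigvecs = np.linalg.eigh(cov)
            normals[i] = eigvecs[:, 0]                     # smallest-eigenvalue direction
        return normals

    # Example on a noisy horizontal plane: normals should be close to (0, 0, +/-1).
    rng = np.random.default_rng(1)
    pts = np.column_stack([rng.uniform(0, 1, 2000),
                           rng.uniform(0, 1, 2000),
                           rng.normal(0, 0.002, 2000)])
    print(estimate_normals(pts, radius=0.05)[:3])
    ```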

  2. Error reduction in three-dimensional metrology combining optical and touch probe data

    NASA Astrophysics Data System (ADS)

    Gerde, Janice R.; Christens-Barry, William A.

    2010-08-01

    Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample's outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data obtained using a touch probe device, a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer-surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The defined parting line location is sensitive to the polynomial used to fit the experimental data, and extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three error sources are considered: measurement error in point cloud coordinates; error from fitting a polynomial surface to a point cloud and then extrapolating beyond the data set; and error from the reference frame transformation. These error sources can influence calculated surface areas. We describe experiments to assess error magnitude, the sensitivity of calculated results to these errors, and ways to minimize the error impact on calculated quantities. Ultimately, we must ensure that statistical error from these procedures is minimized and within acceptance criteria.
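
    Fitting a polynomial surface to the insole point cloud and extrapolating it toward the outer surface can be illustrated with an ordinary least-squares fit; the quadratic basis and the synthetic data below are assumptions of this sketch, not the polynomial used in the study.

    ```python
    # Least-squares fit of a quadratic polynomial surface z = f(x, y) to scattered
    # points, then evaluation outside the data range (extrapolation). Minimal numpy sketch.
    import numpy as np

    def fit_quadratic_surface(points):
        """points: (N, 3) array; returns coefficients of
        z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
        return coeffs

    def evaluate_surface(c, x, y):
        return c[0] + c[1]*x + c[2]*y + c[3]*x**2 + c[4]*x*y + c[5]*y**2

    # Example: fit to noisy samples of z = 1 + 0.5x - 0.2y^2 and extrapolate.
    rng = np.random.default_rng(2)
    x = rng.uniform(-1, 1, 500); y = rng.uniform(-1, 1, 500)
    z = 1 + 0.5*x - 0.2*y**2 + rng.normal(0, 0.01, 500)
    c = fit_quadratic_surface(np.column_stack([x, y, z]))
    print(evaluate_surface(c, 1.5, 0.0))   # extrapolated value outside the data range
    ```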

  3. Scan-To Output Validation: Towards a Standardized Geometric Quality Assessment of Building Information Models Based on Point Clouds

    NASA Astrophysics Data System (ADS)

    Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.

    2017-11-01

    The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, the available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the mathematical model used, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on the macro (building) and micro (BIM object) scale is necessary. On the macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects, respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On the micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.

  4. Sediment Mobilization and Storage Dynamics of a Debris Flow Impacted Stream Channel using Multi-Temporal Structure from Motion Photogrammetry

    NASA Astrophysics Data System (ADS)

    Bailey, T. L.; Sutherland-Montoya, D.

    2015-12-01

    High-resolution topographic analysis methods have become important tools in geomorphology. Structure from Motion photogrammetry offers a compelling vehicle for geomorphic change detection in fluvial environments. This process can produce arbitrarily high resolution, geographically registered spectral and topographic coverages from a collection of overlapping digital imagery from consumer cameras. Cuneo Creek has had three historically observed episodes of rapid aggradation (1955, 1964, and 1997). The debris flow deposits continue to be major sources of sediment sixty years after the initial slope failure. Previous studies have monitored sediment storage volume and particle size since 1976 (in 1976, 1982, 1983, 1985, 1986, 1987, 1998, and 2003). We reoccupied three previously surveyed stream cross sections on September 30, 2014 and March 30, 2015, and produced photogrammetric point clouds using a pole-mounted camera with a remote viewfinder to take nadir-view images from 4.3 meters above the channel bed. Ground control points were registered using survey-grade GPS, and typical cross sections used over 100 images to build the structure model. This process simultaneously captures channel geometry; we also used it to generate surface texture metrics and produced DEMs with point cloud densities above 5000 points/m2. In the period between the surveys, a five-year recurrence interval discharge of 20 m3/s scoured the channel. Surface particle size distribution was determined for each observation period using image segmentation algorithms based on spectral distance and compactness. Topographic differencing between the point clouds shows substantial channel bed mobilization and reorganization. The net decline in sediment storage is in excess of 4 x 10^5 cubic meters since the 1964 aggradation peak, with associated coarsening of surface particle sizes. These new methods provide a promising rapid-assessment tool for measuring channel responses to sediment inputs.
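
    Topographic differencing of two co-registered DEMs, and conversion of elevation change to volume, can be sketched as follows; the grids, cell size and noise threshold are placeholders, not the survey's actual values.

    ```python
    # Geomorphic change detection by DEM differencing: subtract two co-registered DEM
    # grids and convert elevation change to volume using the cell area. Minimal sketch.
    import numpy as np

    def dem_difference_volume(dem_old, dem_new, cell_size, threshold=0.05):
        """Return (erosion_volume, deposition_volume) in cubic units of the DEMs.
        Changes smaller than `threshold` are treated as noise and ignored."""
        dz = dem_new - dem_old
        dz[np.abs(dz) < threshold] = 0.0
        cell_area = cell_size ** 2
        erosion = -dz[dz < 0].sum() * cell_area
        deposition = dz[dz > 0].sum() * cell_area
        return erosion, deposition

    # Placeholder DEMs (metres) on a 0.2 m grid.
    rng = np.random.default_rng(3)
    dem_2014 = rng.normal(10.0, 0.5, (500, 500))
    dem_2015 = dem_2014 + rng.normal(-0.02, 0.1, (500, 500))
    print(dem_difference_volume(dem_2014, dem_2015, cell_size=0.2))
    ```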

  5. Utilizing the Iterative Closest Point (ICP) algorithm for enhanced registration of high resolution surface models - more than a simple black-box application

    NASA Astrophysics Data System (ADS)

    Stöcker, Claudia; Eltner, Anette

    2016-04-01

    Advances in computer vision and digital photogrammetry (i.e. structure from motion) allow for fast and flexible high-resolution data supply. Within geoscience applications, and especially in the field of small surface topography, high-resolution digital terrain models and dense 3D point clouds are valuable data sources for capturing actual states as well as for multi-temporal studies. However, there are still limitations regarding robust registration and accuracy demands (e.g. systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Therefore, post-processing of 3D point clouds can greatly enhance data quality. In this matter the Iterative Closest Point (ICP) algorithm represents an alignment tool which iteratively minimizes distances of corresponding points within two datasets. Even though the tool is widely used, it is often applied as a black box within 3D data post-processing for surface reconstruction. Aiming for a precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm, including the sub-steps of point selection, point matching, weighting, rejection, error metric and minimization. To this end, an agriculturally utilized field was investigated simultaneously by terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors twice (once covered with sparse vegetation and once as bare soil). Due to the different perspectives, the two data sets differ in terms of shadowed areas and thus gaps, so that merging them would provide a more consistent surface reconstruction. Although photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset relative to the TLS point cloud. In order to obtain the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested. Statistical analyses of the results show that the final success of registration, and therefore data quality, depends particularly on the parameterization and the choice of error metric, especially for erroneous data sets as in the case of sparse vegetation cover. Here, the point-to-point metric is more sensitive to data noise than the point-to-plane metric, which results in considerably higher cloud-to-cloud distances. In conclusion, given the accuracy demands of high-resolution surface reconstruction and the fact that ground control surveys can reach their limits in both time expenditure and terrain accessibility, the ICP algorithm is a valuable tool for refining a rough initial alignment. Its different registration variants allow the method to be tailored to the quality of the input data.
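
    A minimal comparison of the point-to-point and point-to-plane ICP variants can be run with Open3D, which implements both estimation methods; the file names, correspondence threshold and normal-estimation radius below are placeholder choices, not the parameterization used in the study.

    ```python
    # Point-to-point vs. point-to-plane ICP for fine registration of a UAV point cloud
    # to a TLS point cloud. Minimal Open3D sketch with placeholder file names.
    import numpy as np
    import open3d as o3d

    source = o3d.io.read_point_cloud("uav_cloud.ply")   # cloud to be aligned
    target = o3d.io.read_point_cloud("tls_cloud.ply")   # reference cloud
    target.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))  # needed for point-to-plane

    init = np.eye(4)            # rough initial alignment (e.g. from ground control)
    threshold = 0.2             # maximum correspondence distance in metres

    icp_p2p = o3d.pipelines.registration.registration_icp(
        source, target, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())

    icp_p2l = o3d.pipelines.registration.registration_icp(
        source, target, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())

    for name, res in [("point-to-point", icp_p2p), ("point-to-plane", icp_p2l)]:
        print(name, "fitness:", res.fitness, "RMSE:", res.inlier_rmse)
    ```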

  6. Failure and Redemption of Multifilter Rotating Shadowband Radiometer (MFRSR)/Normal Incidence Multifilter Radiometer (NIMFR) Cloud Screening: Contrasting Algorithm Performance at Atmospheric Radiation Measurement (ARM) North Slope of Alaska (NSA) and Southern Great Plains (SGP) Sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kassianov, Evgueni I.; Flynn, Connor J.; Koontz, Annette S.

    2013-09-11

    Well-known cloud-screening algorithms, which are designed to remove cloud-contaminated aerosol optical depths (AOD) from AOD measurements, have shown great performance at many middle-to-low latitude sites around the world. However, they may occasionally fail under challenging observational conditions, such as when the sun is low (near the horizon) or when optically thin clouds with small spatial inhomogeneity occur. Such conditions have been observed quite frequently at the high-latitude Atmospheric Radiation Measurement (ARM) North Slope of Alaska (NSA) sites. A slightly modified cloud-screening version of the standard algorithm is proposed here with a focus on the ARM-supported Multifilter Rotating Shadowband Radiometer (MFRSR) and Normal Incidence Multifilter Radiometer (NIMFR) data. The modified version uses approximately the same techniques as the standard algorithm, but it additionally examines the magnitude of the slant-path line of sight transmittance and eliminates points when the observed magnitude is below a specified threshold. Substantial improvement of the multi-year (1999-2012) aerosol product (AOD and its Angstrom exponent) is shown for the NSA sites when the modified version is applied. Moreover, this version reproduces the AOD product at the ARM Southern Great Plains (SGP) site, which was originally generated by the standard cloud-screening algorithms. The proposed minor modification is easy to implement and its application to existing and future cloud-screening algorithms can be particularly beneficial for challenging observational conditions.

  7. Chance Encounter with a Stratospheric Kerosene Rocket Plume From Russia Over California

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Wilson, J. C.; Ross, M. N.; Brock, C. A.; Sheridan, P. J.; Schoeberl, M. R.; Lait, L. R.; Bui, T. P.; Loewenstein, M.; Podolske, J. R.; hide

    2000-01-01

    A high-altitude aircraft flight on April 18, 1997 detected an enormous aerosol cloud at 20 km altitude near California (37 N). Not visually observed, the cloud had high concentrations of soot and sulfate aerosol and was over 180 km in horizontal extent. The cloud was probably produced by a large hydrocarbon-fueled vehicle, most likely by rocket motors burning liquid oxygen and kerosene. One of two Russian Soyuz rockets could have produced the cloud: a launch from the Baikonur Cosmodrome, Kazakhstan on April 6, or from Plesetsk, Russia on April 9. Parcel trajectories and long-lived trace gas concentrations suggest the Baikonur launch as the cloud source. Cloud trajectories do not trace the Soyuz plume from Asia to North America, illustrating the uncertainties of point-to-point trajectories. This cloud encounter is the only stratospheric measurement of a plume from a hydrocarbon-fueled rocket.

  8. Segmentation of Large Unstructured Point Clouds Using Octree-Based Region Growing and Conditional Random Fields

    NASA Astrophysics Data System (ADS)

    Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.

    2017-11-01

    Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically segment only a single type of primitive, such as planes or cylinders. Also, current algorithms suffer from oversegmenting the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field, after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments prove that the proposed method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering are approximately 80%. Overall, oversegmentation is reduced by nearly 22% by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.
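
    A much-simplified, non-octree version of region growing on a point cloud conveys the idea of the oversegmentation step: neighboring points with nearly parallel normals are merged into one candidate cluster. In the sketch below, the radius and angle thresholds are placeholders and the normals are assumed to be precomputed.

    ```python
    # Greedy region growing: neighbouring points whose normals are nearly parallel are
    # merged into the same cluster. Simplified sketch, not the paper's octree-based method.
    import numpy as np
    from scipy.spatial import cKDTree

    def region_growing(points, normals, radius=0.1, angle_thresh_deg=10.0):
        """Label points so that neighbours with nearly parallel normals share a cluster."""
        tree = cKDTree(points)
        cos_thresh = np.cos(np.radians(angle_thresh_deg))
        labels = np.full(len(points), -1, dtype=int)
        current = 0
        for seed in range(len(points)):
            if labels[seed] != -1:
                continue
            labels[seed] = current
            stack = [seed]
            while stack:
                i = stack.pop()
                for j in tree.query_ball_point(points[i], r=radius):
                    if labels[j] == -1 and abs(np.dot(normals[i], normals[j])) >= cos_thresh:
                        labels[j] = current
                        stack.append(j)
            current += 1
        return labels

    # Example: two parallel planar patches 1 m apart should form roughly two clusters.
    rng = np.random.default_rng(8)
    plane = lambda z: np.column_stack([rng.random(500), rng.random(500), np.full(500, z)])
    pts = np.vstack([plane(0.0), plane(1.0)])
    nrm = np.tile([0.0, 0.0, 1.0], (1000, 1))
    print(len(np.unique(region_growing(pts, nrm, radius=0.15))))
    ```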

  9. SigVox - A 3D feature matching algorithm for automatic street object recognition in mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Menenti, Massimo

    2017-06-01

    Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Monitoring them is traditionally conducted by visual inspection, which is time-consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced and embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant eigenvector based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of an icosahedron approximating a sphere. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds obtained in opposite directions along a 4 km stretch of road. Six types of lamp pole and four types of road sign were selected as objects of interest. Ground truth validation showed that the overall accuracy of the ∼170 automatically recognized objects is approximately 95%. The results demonstrate that the proposed method is able to recognize street furniture in a practical scenario. Remaining difficult cases are touching objects, such as a lamp pole close to a tree.

  10. Identifying opportune landing sites in degraded visual environments with terrain and cultural databases

    NASA Astrophysics Data System (ADS)

    Moody, Marc; Fisher, Robert; Little, J. Kristin

    2014-06-01

    Boeing has developed a degraded visual environment navigational aid that is flying on the Boeing AH-6 light attack helicopter. The navigational aid is a two-dimensional software digital map underlay generated by the Boeing™ Geospatial Embedded Mapping Software (GEMS) and fully integrated with the operational flight program. The page format on the aircraft's multi-function displays (MFDs) is termed the Approach page. The existing work utilizes Digital Terrain Elevation Data (DTED) and OpenGL ES 2.0 graphics capabilities to compute the pertinent graphics underlay entirely on the graphics processing unit (GPU) within the AH-6 mission computer. The next release will incorporate cultural databases containing Digital Vertical Obstructions (DVO) to warn the crew of towers, buildings, and power lines when choosing an opportune landing site. Future IRAD will include Light Detection and Ranging (LIDAR) point cloud generating sensors to provide 2D and 3D synthetic vision on the final approach to the landing zone. Collision detection with respect to terrain, cultural, and point cloud datasets may be used to further augment the crew warning system. The techniques for creating the digital map underlay leverage the GPU almost entirely, making this solution viable on most embedded mission computing systems with an OpenGL ES 2.0 capable GPU. This paper focuses on the AH-6 crew interface process for determining a landing zone and flying the aircraft to it.

  11. Genomic cloud computing: legal and ethical points to consider

    PubMed Central

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Burton, Paul; Chisholm, Rex; Fortier, Isabel; Goodwin, Pat; Harris, Jennifer; Hveem, Kristian; Kaye, Jane; Kent, Alistair; Knoppers, Bartha Maria; Lindpaintner, Klaus; Little, Julian; Riegman, Peter; Ripatti, Samuli; Stolk, Ronald; Bobrow, Martin; Cambon-Thomsen, Anne; Dressler, Lynn; Joly, Yann; Kato, Kazuto; Knoppers, Bartha Maria; Rodriguez, Laura Lyman; McPherson, Treasa; Nicolás, Pilar; Ouellette, Francis; Romeo-Casabona, Carlos; Sarin, Rajiv; Wallace, Susan; Wiesner, Georgia; Wilson, Julia; Zeps, Nikolajs; Simkevitz, Howard; De Rienzo, Assunta; Knoppers, Bartha M

    2015-01-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key 'points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These 'points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure. PMID:25248396

  12. Genomic cloud computing: legal and ethical points to consider.

    PubMed

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Knoppers, Bartha M

    2015-10-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key 'points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These 'points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure.

  13. Thermal Texture Generation and 3d Model Reconstruction Using SFM and Gan

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Mizginov, V. A.

    2018-05-01

    Realistic 3D models with textures representing the thermal emission of an object are widely used in fields such as dynamic scene analysis, autonomous driving, and video surveillance. Structure from Motion (SfM) methods provide a robust approach for the generation of textured 3D models in the visible range. Still, automatic generation of 3D models from infrared imagery is challenging due to the absence of feature points and low sensor resolution. Recent advances in Generative Adversarial Networks (GANs) have proved that they can perform complex image-to-image transformations, such as transforming day to night or generating imagery in a different spectral range. In this paper, we propose a novel method for the generation of realistic 3D models with thermal textures using the SfM pipeline and a GAN. The proposed method uses visible-range images as input. The images are processed in two ways. Firstly, they are used for point matching and dense point cloud generation. Secondly, the images are fed into a GAN that performs the transformation from the visible range to the thermal range. We evaluate the proposed method using real infrared imagery captured with a FLIR ONE PRO camera. We generated a dataset with 2000 pairs of real images captured in the thermal and visible ranges. The dataset is used to train the GAN network and to generate 3D models using SfM. The evaluation of the generated 3D models and infrared textures proved that they are similar to the ground truth model in both thermal emissivity and geometrical shape.

  14. A Method for Automatic Surface Inspection Using a Model-Based 3D Descriptor.

    PubMed

    Madrigal, Carlos A; Branch, John W; Restrepo, Alejandro; Mery, Domingo

    2017-10-02

    Automatic visual inspection allows for the identification of surface defects in manufactured parts. Nevertheless, when defects are on a sub-millimeter scale, detection and recognition are a challenge. This is particularly true when the defect generates topological deformations that do not show strong contrast in the 2D image. In this paper, we present a method for recognizing surface defects in 3D point clouds. Firstly, we propose a novel 3D local descriptor called the Model Point Feature Histogram (MPFH) for defect detection. Our descriptor is inspired by earlier descriptors such as the Point Feature Histogram (PFH). To construct the MPFH descriptor, the models that best fit the local surface and their normal vectors are estimated. For each surface model, its contribution weight to the formation of the surface region is calculated, and from the relative difference between models of the same region a histogram is generated representing the underlying surface changes. Secondly, through a classification stage, the points on the surface are labeled according to five types of primitives and the defect is detected. Thirdly, the connected components of primitives are projected to a plane, forming a 2D image. Finally, 2D geometric features are extracted and the defects are recognized by a support vector machine. The database used is composed of 3D simulated surfaces and 3D reconstructions of defects in welding, artificial teeth, indentations in materials, ceramics and 3D models of defects. The quantitative and qualitative results showed that the proposed description method is robust to noise and scale, and is sufficiently discriminative for detecting some surface defects. The performance evaluation of the proposed method was carried out for a classification task of the 3D point cloud into primitives, reporting an accuracy of 95%, which is higher than for other state-of-the-art descriptors. The rate of recognition of defects was close to 94%.
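
    The final classification step, recognizing defects from 2D geometric features with a support vector machine, can be sketched with scikit-learn; the feature matrix and labels below are synthetic placeholders standing in for the extracted features.

    ```python
    # Defect recognition from 2D geometric features with a support vector machine.
    # Minimal scikit-learn sketch with a synthetic, placeholder feature matrix.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import accuracy_score

    # Placeholder data: each row = [area, perimeter, eccentricity, solidity]; label 1 = defect.
    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.3, 200) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    ```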

  15. A Method for Automatic Surface Inspection Using a Model-Based 3D Descriptor

    PubMed Central

    Branch, John W.

    2017-01-01

    Automatic visual inspection allows for the identification of surface defects in manufactured parts. Nevertheless, when defects are on a sub-millimeter scale, detection and recognition are a challenge. This is particularly true when the defect generates topological deformations that do not show strong contrast in the 2D image. In this paper, we present a method for recognizing surface defects in 3D point clouds. Firstly, we propose a novel 3D local descriptor called the Model Point Feature Histogram (MPFH) for defect detection. Our descriptor is inspired by earlier descriptors such as the Point Feature Histogram (PFH). To construct the MPFH descriptor, the models that best fit the local surface and their normal vectors are estimated. For each surface model, its contribution weight to the formation of the surface region is calculated, and from the relative difference between models of the same region a histogram is generated representing the underlying surface changes. Secondly, through a classification stage, the points on the surface are labeled according to five types of primitives and the defect is detected. Thirdly, the connected components of primitives are projected to a plane, forming a 2D image. Finally, 2D geometric features are extracted and the defects are recognized by a support vector machine. The database used is composed of 3D simulated surfaces and 3D reconstructions of defects in welding, artificial teeth, indentations in materials, ceramics and 3D models of defects. The quantitative and qualitative results showed that the proposed description method is robust to noise and scale, and is sufficiently discriminative for detecting some surface defects. The performance evaluation of the proposed method was carried out for a classification task of the 3D point cloud into primitives, reporting an accuracy of 95%, which is higher than for other state-of-the-art descriptors. The rate of recognition of defects was close to 94%. PMID:28974037

  16. Photogrammetric Point Clouds Generation in Urban Areas from Integrated Image Matching and Segmentation

    NASA Astrophysics Data System (ADS)

    Ye, L.; Wu, B.

    2017-09-01

    High-resolution imagery is an attractive option for surveying and mapping applications due to its advantages of high-quality imaging, short revisit time, and lower cost. Automated, reliable and dense image matching is essential for photogrammetric 3D data derivation. Such matching in urban areas, however, is extremely difficult, owing to the complexity of urban textures and severe occlusion problems on the images caused by tall buildings. Aimed at exploiting high-resolution imagery for 3D urban modelling applications, this paper presents an integrated image matching and segmentation approach for reliable dense matching of high-resolution imagery in urban areas. The approach is based on the framework of our existing self-adaptive triangulation constrained image matching (SATM), but incorporates three novel aspects to tackle the image matching difficulties in urban areas: 1) occlusion filtering based on image segmentation, 2) segment-adaptive similarity correlation to reduce similarity ambiguity, and 3) improved dense matching propagation to provide more reliable matches in urban areas. Experimental analyses were conducted using aerial images of Vaihingen, Germany, and high-resolution satellite images of Hong Kong. Photogrammetric point clouds were generated, from which digital surface models (DSMs) were derived. They were compared with the corresponding airborne laser scanning data and with the DSMs generated by the Semi-Global Matching (SGM) method. The experimental results show that the proposed approach is able to produce dense and reliable matches comparable to SGM in flat areas, while for densely built-up areas the proposed method performs better than SGM. The proposed method offers an alternative solution for 3D surface reconstruction in urban areas.
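
    Similarity correlation in dense matching is typically built on normalized cross-correlation between image patches; the sketch below illustrates plain NCC on synthetic patches and is not the segment-adaptive measure proposed in the paper.

    ```python
    # Normalized cross-correlation (NCC) between two image patches, the basic similarity
    # measure behind correlation-based dense matching. Minimal numpy sketch.
    import numpy as np

    def ncc(patch_a, patch_b):
        a = patch_a.astype(float).ravel()
        b = patch_b.astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.sqrt((a @ a) * (b @ b))
        return float(a @ b / denom) if denom > 0 else 0.0   # value in [-1, 1]

    # A patch correlates perfectly with itself and poorly with unrelated noise.
    rng = np.random.default_rng(5)
    patch = rng.integers(0, 256, size=(11, 11))
    noise = rng.integers(0, 256, size=(11, 11))
    print(ncc(patch, patch), ncc(patch, noise))
    ```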

  17. A Service Brokering and Recommendation Mechanism for Better Selecting Cloud Services

    PubMed Central

    Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan

    2014-01-01

    Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed after which relevant experiments were conducted. The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real time, generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI). PMID:25170937

  18. Extractive biodegradation and bioavailability assessment of phenanthrene in the cloud point system by Sphingomonas polyaromaticivorans.

    PubMed

    Pan, Tao; Deng, Tao; Zeng, Xinying; Dong, Wei; Yu, Shuijing

    2016-01-01

    The biological treatment of polycyclic aromatic hydrocarbons is an important issue. Most microbes have limited practical applications because of the poor bioavailability of polycyclic aromatic hydrocarbons. In this study, the extractive biodegradation of phenanthrene by Sphingomonas polyaromaticivorans was conducted by introducing a cloud point system. The cloud point system is composed of a mixture (40 g/L) of the nonionic surfactants Brij 30 and Tergitol TMN-3 in equal proportions. After phenanthrene degradation, a higher wet cell weight and a lower phenanthrene residue were obtained in the cloud point system than in the control system. According to the results of high-performance liquid chromatography, the residual phenanthrene preferred to partition from the dilute phase into the coacervate phase. The concentration of residual phenanthrene in the dilute phase (below 0.001 mg/L) was lower than its solubility in water (1.18 mg/L) after extractive biodegradation. Therefore, dilute-phase detoxification was achieved, indicating that the dilute phase could be discharged without causing phenanthrene pollution. Bioavailability was assessed by introducing the apparent logP in the cloud point system. The apparent logP decreased significantly, indicating that the bioavailability of phenanthrene increased remarkably in the system. This study demonstrates a potential application of biological treatment for water and soil contaminated by phenanthrene.

  19. Quantitative evaluation for small surface damage based on iterative difference and triangulation of 3D point cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong

    2018-03-01

    This paper proposes a non-contact, non-destructive evaluation method for the surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain the 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest-neighbour search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which directly reflects the trend of the curvature in the damage region. The extracted damage region is divided into triangular prism elements by triangulation. The volume and mass of a single element are calculated by geometric segmentation. Finally, the total volume and mass of the damage region are obtained by superposition. The proposed method is applied to several typical injuries and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which is useful for evaluating the damage in further research.
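
    Summing the volumes of triangular prism elements over a triangulated damage region, and converting volume to mass with an assumed density, can be sketched as follows; the example geometry and the copper-like density are placeholder assumptions.

    ```python
    # Volume of a damage region approximated by triangular prism elements: each surface
    # triangle contributes (triangle area) x (mean depth at its vertices), and the
    # element volumes are summed. Minimal numpy sketch with placeholder data.
    import numpy as np

    def prism_volume(vertices, triangles, depths):
        """vertices: (N, 2) xy coordinates; triangles: (M, 3) vertex indices;
        depths: (N,) damage depth at each vertex (positive into the surface)."""
        total = 0.0
        for tri in triangles:
            (x0, y0), (x1, y1), (x2, y2) = vertices[tri]
            area = 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))  # triangle area
            total += area * depths[tri].mean()          # prism volume = area * mean depth
        return total

    # Tiny example: two triangles covering a 1 mm x 1 mm square with 0.1 mm mean depth.
    verts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    tris = np.array([[0, 1, 2], [0, 2, 3]])
    depths = np.full(4, 0.1)
    volume = prism_volume(verts, tris, depths)          # mm^3
    print(volume, volume * 8.96, "mg (assuming a copper-like density of 8.96 mg/mm^3)")
    ```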

  20. Critical infrastructure monitoring using UAV imagery

    NASA Astrophysics Data System (ADS)

    Maltezos, Evangelos; Skitsas, Michael; Charalambous, Elisavet; Koutras, Nikolaos; Bliziotis, Dimitris; Themistocleous, Kyriacos

    2016-08-01

    The constant technological evolution in computer vision has enabled the development of new techniques which, in conjunction with the use of Unmanned Aerial Vehicles (UAVs), may extract high-quality photogrammetric products for several applications. Dense Image Matching (DIM) is a computer vision technique that can generate a dense 3D point cloud of an area or object. The use of UAV systems and DIM techniques is not only a flexible and attractive solution for producing accurate and high-quality photogrammetric results but also a major contribution to cost-effectiveness. In this context, this study aims to highlight the benefits of using UAVs for critical infrastructure monitoring by applying DIM. A Multi-View Stereo (MVS) approach using multiple images (RGB digital aerial and oblique images) to fully cover the area of interest is implemented. The application area is an Olympic venue in Attica, Greece, covering an area of 400 acres. The results of our study indicate that the UAV+DIM approach responds very well to the increasing demand for accurate and cost-effective applications, providing a 3D point cloud and an orthomosaic.

  1. Terrestrial scanning or digital images in inventory of monumental objects? - case study

    NASA Astrophysics Data System (ADS)

    Markiewicz, J. S.; Zawieska, D.

    2014-06-01

    Cultural heritage is the evidence of the past; monumental objects form an important part of the cultural heritage. The selection of a method to be applied depends on many factors, including: the objectives of the inventory, the object's volume, the sumptuousness of the architectural design, accessibility to the object, and the required time frame and accuracy of the works. The paper presents research and experimental works performed in the course of developing architectural documentation of elements of the external facades and interiors of the Wilanów Palace Museum in Warszawa. Point clouds acquired from terrestrial laser scanning (Z+F 5003h) and digital images taken with Nikon D3X and Hasselblad H4D cameras were used. Advantages and disadvantages of these measurement technologies have been analysed, with consideration of the influence of the structure and reflectance of the investigated monumental surfaces on the quality of the generated photogrammetric products. The geometric quality of surfaces obtained from terrestrial laser scanning data and from point clouds derived from digital images has been compared.

  2. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which does not require prior camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
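
    The feature extraction and matching stage that feeds SfM can be illustrated with ORB keypoints and brute-force Hamming matching in OpenCV; the image file names are placeholders, and this is a generic sketch rather than the paper's matching strategy.

    ```python
    # Feature extraction and matching for two overlapping UAV frames: ORB keypoints
    # matched by Hamming distance with cross-checking. Minimal OpenCV sketch; the
    # image file names are placeholders.
    import cv2

    img1 = cv2.imread("uav_frame_001.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("uav_frame_002.jpg", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=4000)              # detector / binary descriptor
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    print(f"{len(matches)} cross-checked matches; best distance {matches[0].distance}")
    # The matched keypoint pairs would then feed relative orientation / SfM and,
    # after bundle adjustment, dense multi-view stereo to produce the point cloud.
    ```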

  3. From structure from motion to historical building information modeling: populating a semantic-aware library of architectural elements

    NASA Astrophysics Data System (ADS)

    Santagati, Cettina; Lo Turco, Massimiliano

    2017-01-01

    In recent years, we have witnessed a huge diffusion of building information modeling (BIM) approaches in the field of architectural design, although very little research has been undertaken to explore the value, criticalities, and advantages attributable to the application of these methodologies in the cultural heritage domain. Furthermore, the latest developments in digital photogrammetry lead to the easy generation of reliable low-cost three-dimensional textured models that could be used in BIM platforms to create semantic-aware objects composing a specific library of historical architectural elements. In this case, the transfer between the point cloud and its corresponding parametric model is not trivial, and the level of geometrical abstraction may not be suitable for the scope of the BIM. The aim of this paper is to explore and retrace the milestone works on this crucial topic in order to identify the unsolved issues and to propose and test a simple, practitioner-centered workflow based on the use of the latest available solutions for point cloud management in commercial BIM platforms.

  4. Determination of total selenium in food samples by d-CPE and HG-AFS.

    PubMed

    Wang, Mei; Zhong, Yizhou; Qin, Jinpeng; Zhang, Zehua; Li, Shan; Yang, Bingyi

    2017-07-15

    A dual-cloud point extraction (d-CPE) procedure was developed for the simultaneous preconcentration and determination of trace level Se in food samples by hydride generation-atomic fluorescence spectrometry (HG-AFS). The Se(IV) was complexed with ammonium pyrrolidinedithiocarbamate (APDC) in a Triton X-114 surfactant-rich phase, which was then treated with a mixture of 16% (v/v) HCl and 20% (v/v) H2O2. This converted the Se(IV)-APDC into free Se(IV), which was back extracted into an aqueous phase at the second cloud point extraction stage. This aqueous phase was analyzed directly by HG-AFS. Optimization of the experimental conditions gave a limit of detection of 0.023 μg/L with an enhancement factor of 11.8 when 50 mL of sample solution was preconcentrated to 3 mL. The relative standard deviation was 4.04% (c = 6.0 μg/L, n = 10). The proposed method was applied to determine the Se contents in twelve food samples with satisfactory recoveries of 95.6-105.2%. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Generation of binary holograms for deep scenes captured with a camera and a depth sensor

    NASA Astrophysics Data System (ADS)

    Leportier, Thibault; Park, Min-Chul

    2017-01-01

    This work presents binary hologram generation from images of a real object acquired from a Kinect sensor. Since hologram calculation from a point-cloud or polygon model presents a heavy computational burden, we adopted a depth-layer approach to generate the holograms. This method enables us to obtain holographic data of large scenes quickly. Our investigations focus on the performance of different methods, iterative and noniterative, to convert complex holograms into binary format. Comparisons were performed to examine the reconstruction of the binary holograms at different depths. We also propose to modify the direct binary search algorithm to take into account several reference image planes. Then, deep scenes featuring multiple planes of interest can be reconstructed with better efficiency.

  6. Some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models.

    NASA Astrophysics Data System (ADS)

    Knudsen, Thomas; Aasbjerg Nielsen, Allan

    2013-04-01

    The Danish national elevation model, DK-DEM, was introduced in 2009 and is based on LiDAR data collected in the time frame 2005-2007. Hence, DK-DEM is aging, and it is time to consider how to integrate new data with the current model in a way that improves the representation of new landscape features, while still preserving the overall (very high) quality of the model. In LiDAR terms, 2005 is equivalent to some time between the palaeolithic and the neolithic. So evidently, when (and if) an update project is launched, we may expect some notable improvements due to the technical and scientific developments of the last half decade. To estimate the magnitude of these potential improvements, and to devise efficient and effective ways of integrating the new and old data, we are currently carrying out a number of case studies based on comparisons between the current terrain model (with a ground sample distance, GSD, of 1.6 m) and a number of new high-resolution point clouds (10-70 points/m2). Not knowing anything about the terms of a potential update project, we consider multiple scenarios, ranging from business as usual (a new model with the same GSD, but improved precision) to aggressive upscaling (a new model with 4 times better GSD, i.e. a 16-fold increase in the amount of data). Especially in the latter case, speeding up the gridding process is important. Luckily, recent results from one of our case studies reveal that for very high resolution data in smooth terrain (which is the common case in Denmark), using the local mean (LM) as the grid value estimator is only negligibly worse than using the theoretically "best" estimator, i.e. ordinary kriging (OK) with rigorous modelling of the semivariogram. The bias in a leave-one-out cross-validation differs at the micrometer level, while the RMSE differs at the 0.1 mm level. This is fortunate, since an LM estimator can be implemented in plain stream mode, letting the points from the unstructured point cloud (i.e. no TIN generation) stream through the processor, individually contributing to the nearest grid posts in a memory-mapped grid file. Algorithmically this is very efficient, but it would be even more efficient if we did not have to handle so much data. Another of our recent case studies focuses on this. The basic idea is to ignore data that do not tell us anything new. We do this by looking at anomalies between the current height model and the new point cloud, then computing a correction grid for the current model. Points with insignificant anomalies are simply removed from the point cloud, and the correction grid is computed using the remaining point anomalies only. Hence, we only compute updates in areas of significant change, speeding up the process and giving us new insight into the precision of the current model, which in turn results in improved metadata for both the current and the new model. Currently we focus on simple approaches for creating a smooth update process for the integration of heterogeneous data sets. On the other hand, as years go by and multiple generations of data become available, more advanced approaches will probably become necessary (e.g. a multi-campaign bundle adjustment, improving the oldest data using cross-over adjustment with newer campaigns). But to prepare for such approaches, it is important already now to organize and evaluate the ancillary (GPS, INS) and engineering-level data for the current data sets. This is essential if future generations of DEM users are to be able to benefit from future conceptions of "some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models".
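
    A streaming local-mean gridding step of the kind described above can be sketched in a few lines; the grid origin, extent and cell size below are placeholders, and the chunked iteration stands in for reading the point cloud from disk.

    ```python
    # Streaming local-mean gridding: each point contributes to the sum and count of its
    # nearest grid post, with no TIN construction and no need to hold the full point
    # cloud in memory. Minimal numpy sketch with placeholder grid parameters.
    import numpy as np

    def local_mean_grid(point_chunks, origin, shape, cell=0.4):
        """point_chunks: iterable of (N, 3) arrays streamed from disk;
        origin: (x0, y0) of the grid; shape: (rows, cols)."""
        zsum = np.zeros(shape)
        count = np.zeros(shape, dtype=np.int64)
        for pts in point_chunks:
            col = np.clip(((pts[:, 0] - origin[0]) / cell).astype(int), 0, shape[1] - 1)
            row = np.clip(((pts[:, 1] - origin[1]) / cell).astype(int), 0, shape[0] - 1)
            np.add.at(zsum, (row, col), pts[:, 2])      # accumulate heights per grid post
            np.add.at(count, (row, col), 1)
        with np.errstate(invalid="ignore"):
            return np.where(count > 0, zsum / count, np.nan)

    # Example with one synthetic chunk over a 10 m x 10 m tile.
    rng = np.random.default_rng(6)
    chunk = np.column_stack([rng.uniform(0, 10, 50000),
                             rng.uniform(0, 10, 50000),
                             rng.normal(42.0, 0.05, 50000)])
    dem = local_mean_grid([chunk], origin=(0.0, 0.0), shape=(25, 25), cell=0.4)
    print(np.nanmean(dem))
    ```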

  7. Automated Coarse Registration of Point Clouds in 3d Urban Scenes Using Voxel Based Plane Constraint

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Boerner, R.; Yao, W.; Hoegner, L.; Stilla, U.

    2017-09-01

    For obtaining full coverage of 3D scans in a large-scale urban area, registration between point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, the automatic registration of different scans is still a challenging problem. In this work, we propose an automatic, marker-free method for fast, coarse registration between point clouds using the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: voxelization of the point cloud, approximation of planar patches, matching of corresponding patches, and estimation of the transformation parameters. In the voxelization step, the point cloud of each scan is organized in a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel with an approximated plane function and select those patches resembling planar surfaces. Afterwards, a RANSAC-based strategy is applied for matching the corresponding patches. Among all the planar patches of a scan, we randomly select a set of three planar surfaces in order to build a coordinate frame via their normal vectors and intersection points. The transformation parameters between scans are calculated from these two coordinate frames. The set of planar patches whose transformation parameters yield the largest number of coplanar patches is identified as the optimal candidate set for estimating the correct transformation parameters. Experimental results using TLS datasets of different scenes reveal that our proposed method is both effective and efficient for the coarse registration task. In particular, for the fast orientation between scans, our proposed method achieves a registration error of less than around 2 degrees on the test datasets and is much more efficient than the classical baseline methods.
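
    The voxelization and planar-patch approximation steps can be illustrated with a simple per-voxel plane fit; the voxel size, planarity threshold and minimum point count below are placeholder choices, not the parameters used in the paper.

    ```python
    # Voxelization of a point cloud followed by a per-voxel plane fit: points are binned
    # by integer voxel indices, and for each sufficiently planar voxel the plane normal
    # is taken as the eigenvector of the covariance with the smallest eigenvalue.
    import numpy as np
    from collections import defaultdict

    def planar_voxel_patches(points, voxel=1.0, planarity_thresh=0.05, min_points=20):
        buckets = defaultdict(list)
        for p in points:
            buckets[tuple(np.floor(p / voxel).astype(int))].append(p)

        patches = []
        for key, pts in buckets.items():
            pts = np.asarray(pts)
            if len(pts) < min_points:
                continue
            centred = pts - pts.mean(axis=0)
            eigvals, eigvecs = np.linalg.eigh(centred.T @ centred / len(pts))
            # planarity test: smallest eigenvalue much smaller than the middle one
            if eigvals[0] < planarity_thresh * eigvals[1]:
                patches.append((key, pts.mean(axis=0), eigvecs[:, 0]))  # (voxel, centroid, normal)
        return patches
    ```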

  8. Cloud Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)

    2001-01-01

    Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion that explicitly include the vertical acceleration terms, since the vertical and horizontal scales of convection are similar. Such models are also necessary in order to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does not allow such gravity waves to be resolved. In addition, the availability of exponentially increasing computer capabilities has resulted in time integrations increasing from hours to days, domain grid boxes (points) increasing from less than 2,000 to more than 2,500,000 grid points with 500 to 1000 m resolution, and 3-D models becoming increasingly prevalent. The cloud resolving model is now at a stage where it can provide reasonably accurate statistical information on the sub-grid, cloud-resolving processes poorly parameterized in climate models and numerical prediction models.

  9. Analysis of a jet stream induced gravity wave associated with an observed ice cloud over Greenland

    NASA Astrophysics Data System (ADS)

    Buss, S.; Hertzog, A.; Hostettler, C.; Bui, T. P.; Lüthi, T.; Wernli, H.

    2003-11-01

    A polar stratospheric ice cloud (PSC type II) was observed by airborne lidar above Greenland on 14 January 2000. It was the only observation of an ice cloud over Greenland during the SOLVE/THESEO 2000 campaign. Mesoscale simulations with the hydrostatic HRM model are presented which, in contrast to global analyses, are capable of producing a vertically propagating gravity wave that induces the low temperatures required for ice formation at the level of the PSC. The simulated minimum temperature is ~8 K below the driving analyses and ~3 K below the frost point, exactly coinciding with the location of the observed ice cloud. Despite the high elevations of the Greenland orography, the simulated gravity wave is not a mountain wave. Analyses of the horizontal wind divergence, of the background wind profiles, of backward gravity wave ray-tracing trajectories, of HRM experiments with reduced Greenland topography and of several instability diagnostics near the tropopause level provide consistent evidence that the wave is emitted by the geostrophic adjustment of a jet instability associated with an intense, rapidly evolving, anticyclonically curved jet stream. In order to evaluate the potential frequency of such non-orographic polar stratospheric cloud events, an approximate jet instability diagnostic is performed for the winter 1999/2000. It indicates that ice PSCs are only occasionally generated by gravity waves emanating from an unstable jet.

  10. A Cloud Boundary Detection Scheme Combined with ASLIC and CNN Using ZY-3, GF-1/2 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Guo, Z.; Li, C.; Wang, Z.; Kwok, E.; Wei, X.

    2018-04-01

    Remote sensing optical image cloud detection is one of the most important problems in remote sensing data processing. Aiming at the information loss caused by cloud cover, a cloud detection method based on a convolutional neural network (CNN) is presented in this paper. Firstly, a deep CNN is used to extract the multi-level feature generation model of clouds from the training samples. Secondly, the adaptive simple linear iterative clustering (ASLIC) method is used to divide the detected images into superpixels. Finally, the probability of each superpixel belonging to the cloud region is predicted by the trained network model, thereby generating a cloud probability map. Typical regions of GF-1/2 and ZY-3 imagery were selected to carry out the cloud detection test, and the results were compared with the traditional SLIC method. The experimental results show that the average accuracy of cloud detection is increased by more than 5 %, and that both thin and thick clouds, as well as the whole cloud boundary, are detected well on different imaging platforms.
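
    As a hedged illustration of how a per-superpixel cloud probability map could be assembled (an assumption about the aggregation step, not the authors' code), the sketch below averages pixel-wise CNN probabilities inside each ASLIC superpixel:

        import numpy as np

        def superpixel_cloud_probability(pixel_probs, superpixel_labels):
            """pixel_probs and superpixel_labels are 2D arrays of the same shape.
            Returns a cloud probability map with one value per superpixel."""
            out = np.zeros_like(pixel_probs, dtype=float)
            for label in np.unique(superpixel_labels):
                mask = superpixel_labels == label
                out[mask] = pixel_probs[mask].mean()
            return out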

  11. Rapid Topographic Mapping Using TLS and UAV in a Beach-dune-wetland Environment: Case Study in Freeport, Texas, USA

    NASA Astrophysics Data System (ADS)

    Ding, J.; Wang, G.; Xiong, L.; Zhou, X.; England, E.

    2017-12-01

    Coastal regions are naturally vulnerable to impacts from long-term coastal erosion and episodic coastal hazards caused by extreme weather events. Major geomorphic changes can occur within a few hours during storms. Prediction of storm impact, coastal planning and resilience observation after natural events all require accurate and up-to-date topographic maps of coastal morphology. Thus, the ability to conduct rapid, high-resolution and high-accuracy topographic mapping is of critical importance for long-term coastal management and rapid response after natural hazard events. Terrestrial laser scanning (TLS) techniques have been frequently applied to beach and dune erosion studies and post-hazard responses. However, TLS surveying is relatively slow and costly for rapid surveying. Furthermore, TLS surveying unavoidably retains gray areas that cannot be reached by laser pulses, particularly in wetland areas where direct access is lacking in most cases. Aerial mapping using photogrammetry from images taken by unmanned aerial vehicles (UAV) has become a new technique for rapid topographic mapping. UAV photogrammetry mapping techniques provide the ability to map coastal features quickly, safely, inexpensively, on short notice and with minimal impact. The primary products from photogrammetry are point clouds similar to LiDAR point clouds. However, a large number of ground control points (ground truth) are essential for obtaining high-accuracy UAV maps. The ground control points are often obtained by GPS survey simultaneously with the TLS survey in the field. The GPS survey can be a slow and arduous process in the field. This study aims to develop methods for acquiring a large number of ground control points from the TLS survey and validating point clouds obtained from photogrammetry against the TLS point clouds. A Riegl VZ-2000 TLS scanner was used for developing laser point clouds and a DJI Phantom 4 Pro UAV was used for acquiring images. The aerial images were processed with the photogrammetric mapping software Agisoft PhotoScan. A workflow for conducting rapid TLS and UAV surveys in the field and integrating point clouds obtained from TLS and UAV surveying will be introduced. Key words: UAV photogrammetry, ground control points, TLS, coastal morphology, topographic mapping

  12. Sparse Unorganized Point Cloud Based Relative Pose Estimation for Uncooperative Space Target.

    PubMed

    Yin, Fang; Chou, Wusheng; Wu, Yun; Yang, Guang; Xu, Song

    2018-03-28

    This paper proposes an autonomous algorithm to determine the relative pose between a chaser spacecraft and an uncooperative space target, which is essential in advanced space applications, e.g., on-orbit servicing missions. The proposed method, named the Congruent Tetrahedron Align (CTA) algorithm, uses the very sparse unorganized 3D point cloud acquired by a LIDAR sensor, and does not require any prior pose information. The core of the method is to determine the relative pose by looking for the congruent tetrahedron in the scanning point cloud and the model point cloud on the basis of the target's known model. A two-level index hash table is built to speed up the search. In addition, the Iterative Closest Point (ICP) algorithm is used for pose tracking after CTA. In order to evaluate the method under arbitrary initial attitudes, a simulated system is presented. Specifically, the ability of the proposed method to provide the initial pose needed for the tracking algorithm is demonstrated, as well as its robustness against noise. Finally, a field experiment is conducted and the results demonstrate the effectiveness of the proposed method.
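
    A minimal point-to-point ICP sketch in Python, covering only the tracking stage mentioned above (not the CTA search); the fixed iteration count and the SciPy k-d tree are illustrative assumptions:

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_point_to_point(source, target, iters=30):
            """Refine the pose of `source` (N, 3) against `target` (M, 3); returns (R, t)."""
            R, t = np.eye(3), np.zeros(3)
            tree = cKDTree(target)
            src = source.copy()
            for _ in range(iters):
                _, idx = tree.query(src)             # closest target point per source point
                matched = target[idx]
                cs, cm = src.mean(axis=0), matched.mean(axis=0)
                H = (src - cs).T @ (matched - cm)    # cross-covariance
                U, _, Vt = np.linalg.svd(H)
                Ri = Vt.T @ U.T
                if np.linalg.det(Ri) < 0:            # avoid reflections
                    Vt[-1] *= -1
                    Ri = Vt.T @ U.T
                ti = cm - Ri @ cs
                src = src @ Ri.T + ti
                R, t = Ri @ R, Ri @ t + ti           # accumulate the incremental pose
            return R, t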

  13. Interactive Classification of Construction Materials: Feedback Driven Framework for Annotation and Analysis of 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Hess, M. R.; Petrovic, V.; Kuester, F.

    2017-08-01

    Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where these data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector or local surface geometry. Multiple case studies are presented here to demonstrate the flexibility and utility of the presented point cloud visualization framework to achieve classification objectives.

  14. Sparse Unorganized Point Cloud Based Relative Pose Estimation for Uncooperative Space Target

    PubMed Central

    Chou, Wusheng; Wu, Yun; Yang, Guang; Xu, Song

    2018-01-01

    This paper proposes an autonomous algorithm to determine the relative pose between a chaser spacecraft and an uncooperative space target, which is essential in advanced space applications, e.g., on-orbit servicing missions. The proposed method, named the Congruent Tetrahedron Align (CTA) algorithm, uses the very sparse unorganized 3D point cloud acquired by a LIDAR sensor, and does not require any prior pose information. The core of the method is to determine the relative pose by looking for the congruent tetrahedron in the scanning point cloud and the model point cloud on the basis of the target's known model. A two-level index hash table is built to speed up the search. In addition, the Iterative Closest Point (ICP) algorithm is used for pose tracking after CTA. In order to evaluate the method under arbitrary initial attitudes, a simulated system is presented. Specifically, the ability of the proposed method to provide the initial pose needed for the tracking algorithm is demonstrated, as well as its robustness against noise. Finally, a field experiment is conducted and the results demonstrate the effectiveness of the proposed method. PMID:29597323

  15. 3D granulometry: grain-scale shape and size distribution from point cloud dataset of river environments

    NASA Astrophysics Data System (ADS)

    Steer, Philippe; Lague, Dimitri; Gourdon, Aurélie; Croissant, Thomas; Crave, Alain

    2016-04-01

    The grain-scale morphology of river sediments and their size distribution are important factors controlling the efficiency of fluvial erosion and transport. In turn, constraining the spatial evolution of these two metrics offers deep insight into the dynamics of river erosion and sediment transport from hillslopes to the sea. However, the size distribution of river sediments is generally assessed using statistically biased field measurements, and determining the grain-scale shape of river sediments remains a real challenge in geomorphology. Here we determine, with new methodological approaches based on the segmentation and geomorphological fitting of 3D point cloud datasets, the size distribution and grain-scale shape of sediments located in river environments. Point cloud segmentation is performed using either machine-learning algorithms or geometrical criteria, such as local plane fitting or curvature analysis. Once the grains are individualized into several sub-clouds, each grain-scale morphology is determined using a 3D geometrical fitting algorithm applied to the sub-cloud. Although different geometrical models could be conceived and tested, only ellipsoidal models were used in this study. A results-checking phase is then performed to remove grains whose best-fitting model has a low level of confidence. The main benefits of this automatic method are that it provides 1) an unbiased estimate of the grain-size distribution over a large range of scales, from centimeters to tens of meters; 2) access to a very large number of data, only limited by the number of grains in the point-cloud dataset; 3) access to the 3D morphology of grains, in turn allowing the development of new metrics characterizing the size and shape of grains. The main limit of this method is that it can only detect grains with a characteristic size greater than the resolution of the point cloud. This new 3D granulometric method is then applied to river terraces both in the Poerua catchment in New Zealand and along the Laonong river in Taiwan, whose point clouds were obtained using both terrestrial lidar scanning and structure from motion photogrammetry.
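
    As a rough, hedged illustration of grain-scale size estimation from a segmented sub-cloud (a PCA proxy for the half-axes rather than the full ellipsoid fitting used in the study; the factor 5 assumes roughly uniform sampling of a solid ellipsoid and would differ for surface-only sampling):

        import numpy as np

        def grain_axes_pca(grain_points):
            """grain_points: (N, 3) points of one segmented grain.
            Returns approximate half-axis lengths a >= b >= c."""
            centred = grain_points - grain_points.mean(axis=0)
            eigvals = np.sort(np.linalg.eigvalsh(np.cov(centred.T)))[::-1]
            # For a uniformly sampled solid ellipsoid, the variance along axis i is a_i**2 / 5.
            return np.sqrt(5.0 * eigvals)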

  16. Surface Fitting Filtering of LIDAR Point Cloud with Waveform Information

    NASA Astrophysics Data System (ADS)

    Xing, S.; Li, P.; Xu, Q.; Wang, D.; Li, P.

    2017-09-01

    Full-waveform LiDAR is an active technology of photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point clouds and waveform information can be obtained by waveform decomposition, which can contribute to accurate filtering. A surface fitting filtering method using waveform information is proposed to exploit this advantage. Firstly, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Secondly, the ground seed points are selected, and abnormal ones are detected by waveform parameters and robust estimation. Thirdly, the terrain surface is fitted and the height difference threshold is determined in consideration of window size and mean square error. Finally, the points are classified gradually as the window size increases, and the filtering process finishes once the window size exceeds a threshold. Waveform data in urban, farmland and mountain areas from "WATER (Watershed Allied Telemetry Experimental Research)" are selected for the experiments. The results show that, compared with the traditional method, the accuracy of point cloud filtering is further improved and the proposed method has high practical value.
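
    A simplified Python sketch of the surface-fitting idea (a single quadratic fit with a fixed height-difference threshold; the iterative window growth and the waveform-based seed checking of the paper are omitted, and the parameter names are illustrative):

        import numpy as np

        def _design_matrix(x, y):
            return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

        def fit_quadratic_surface(seed_xyz):
            """Least-squares fit of z = c0 + c1*x + c2*y + c3*x*y + c4*x^2 + c5*y^2 to seed points."""
            coeffs, *_ = np.linalg.lstsq(_design_matrix(seed_xyz[:, 0], seed_xyz[:, 1]),
                                         seed_xyz[:, 2], rcond=None)
            return coeffs

        def classify_ground(points, coeffs, dz_threshold):
            """Label points whose height difference to the fitted surface is below the threshold."""
            dz = points[:, 2] - _design_matrix(points[:, 0], points[:, 1]) @ coeffs
            return np.abs(dz) < dz_threshold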

  17. Cloud Point and Liquid-Liquid Equilibrium Behavior of Thermosensitive Polymer L61 and Salt Aqueous Two-Phase System.

    PubMed

    Rao, Wenwei; Wang, Yun; Han, Juan; Wang, Lei; Chen, Tong; Liu, Yan; Ni, Liang

    2015-06-25

    The cloud point of the thermosensitive triblock polymer L61, poly(ethylene oxide)-poly(propylene oxide)-poly(ethylene oxide) (PEO-PPO-PEO), was determined in the presence of various electrolytes (K2HPO4, (NH4)3C6H5O7, and K3C6H5O7). The cloud point of L61 was lowered by the addition of electrolytes, and it decreased linearly with increasing electrolyte concentration. The efficacy of the electrolytes in reducing the cloud point followed the order K3C6H5O7 > (NH4)3C6H5O7 > K2HPO4. With the increase in salt concentration, the aqueous two-phase systems exhibited a phase inversion. In addition, increasing the temperature reduced the salt concentration needed to promote phase inversion. The phase diagrams and liquid-liquid equilibrium data of the L61-K2HPO4/(NH4)3C6H5O7/K3C6H5O7 aqueous two-phase systems (both before and after phase inversion) were determined at T = (25, 30, and 35) °C. The phase diagrams of the aqueous two-phase systems were fitted to a four-parameter empirical nonlinear expression. Moreover, the slopes of the tie-lines and the area of the two-phase region in the diagram tend to rise with increasing temperature. The capacity of different salts to induce aqueous two-phase system formation followed the same order as the ability of the salts to reduce the cloud point.

  18. A Linearized Prognostic Cloud Scheme in NASA's Goddard Earth Observing System Data Assimilation Tools

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Errico, Ronald M.; Gelaro, Ronald; Kim, Jong G.; Mahajan, Rahul

    2015-01-01

    A linearized prognostic cloud scheme has been developed to accompany the linearized convection scheme recently implemented in NASA's Goddard Earth Observing System data assimilation tools. The linearization, developed from the nonlinear cloud scheme, treats cloud variables prognostically so they are subject to linearized advection, diffusion, generation, and evaporation. Four linearized cloud variables are modeled: the ice and water phases of clouds generated by large-scale condensation and, separately, by detraining convection. For each species the scheme models its sources, sublimation, evaporation, and autoconversion. Large-scale, anvil and convective species of precipitation are modeled and evaporated. The cloud scheme exhibits linearity and realistic perturbation growth, except around the generation of clouds through large-scale condensation. Discontinuities and steep gradients are prevalent here, and severe problems occur in the calculation of cloud fraction. For data assimilation applications this poor behavior is controlled by replacing this part of the scheme with a perturbation model. For observation impacts, where efficiency is less of a concern, a filtering is developed that examines the Jacobian. The replacement scheme is only invoked if Jacobian elements or eigenvalues violate a series of tuned constants. The linearized prognostic cloud scheme is tested by comparing the linear and nonlinear perturbation trajectories for 6-, 12-, and 24-h forecast times. The tangent linear model performs well and perturbations of clouds are well captured for the lead times of interest.

  19. Individual tree detection in intact forest and degraded forest areas in the north region of Mato Grosso State, Brazilian Amazon

    NASA Astrophysics Data System (ADS)

    Santos, E. G.; Jorge, A.; Shimabukuro, Y. E.; Gasparini, K.

    2017-12-01

    The State of Mato Grosso (MT) has the second largest area of degraded forest among the states of the Brazilian Legal Amazon. Land use and land cover change processes that occur in this region cause the loss of forest biomass, releasing greenhouse gases that contribute to the increase of temperature on Earth. These degraded forest areas lose biomass according to the intensity and magnitude of the degradation type. The estimation of forest biomass, commonly performed by forest inventory through sample plots, shows high variance in degraded forest areas. Due to this variance and the complexity of tropical forests, the aim of this work was to estimate forest biomass using LiDAR point clouds in three distinct forest areas: one degraded by fire, another by selective logging, and one area of intact forest. The approach applied in these areas was Individual Tree Detection (ITD). To isolate the trees, we generated Canopy Height Model (CHM) images, which are obtained by subtracting the Digital Terrain Model (DTM) from the Digital Surface Model (DSM), both created from the LiDAR point cloud. The trees in the CHM images are isolated by an algorithm provided by the Quantitative Ecology research group at the School of Forestry at Northern Arizona University (SILVA, 2015). With these points, metrics were calculated for some areas and used in the biomass estimation model. The methodology used in this work was expected to reduce the error in the biomass estimate for the study area. The point clouds of the most representative trees were analyzed, and field data were correlated with the individual trees found by the proposed algorithm. In a pilot study, the proposed methodology was applied, generating the individual tree metrics total height and crown area. When correlating 339 isolated trees, an unsatisfactory R² was obtained, as the heights found by the algorithm were lower than those obtained in the field, with an average difference of 2.43 m. This shows that the algorithm used to isolate trees in temperate areas did not obtain satisfactory results in the tropical forest of Mato Grosso State. Due to this, in future work two algorithms, one developed by Dalponte et al. (2015) and another by Li et al. (2012), will be used.
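
    A hedged Python sketch of the CHM step together with a simple local-maxima tree-top detector (the window size and minimum height are illustrative assumptions; the study used the SILVA (2015) algorithm rather than this detector):

        import numpy as np
        from scipy.ndimage import maximum_filter

        def canopy_height_model(dsm, dtm):
            """CHM = DSM - DTM on co-registered rasters; small negative noise is clamped to zero."""
            chm = dsm - dtm
            chm[chm < 0] = 0.0
            return chm

        def detect_tree_tops(chm, window=5, min_height=2.0):
            """Return (row, col) indices of local CHM maxima taller than min_height."""
            is_local_max = maximum_filter(chm, size=window) == chm
            return np.argwhere(is_local_max & (chm > min_height))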

  20. Intensity-corrected Herschel Observations of Nearby Isolated Low-mass Clouds

    NASA Astrophysics Data System (ADS)

    Sadavoy, Sarah I.; Keto, Eric; Bourke, Tyler L.; Dunham, Michael M.; Myers, Philip C.; Stephens, Ian W.; Di Francesco, James; Webb, Kristi; Stutz, Amelia M.; Launhardt, Ralf; Tobin, John J.

    2018-01-01

    We present intensity-corrected Herschel maps at 100, 160, 250, 350, and 500 μm for 56 isolated low-mass clouds. We determine the zero-point corrections for Herschel Photodetector Array Camera and Spectrometer (PACS) and Spectral Photometric Imaging Receiver (SPIRE) maps from the Herschel Science Archive (HSA) using Planck data. Since these HSA maps are small, we cannot correct them using typical methods. Here we introduce a technique to measure the zero-point corrections for small Herschel maps. We use radial profiles to identify offsets between the observed HSA intensities and the expected intensities from Planck. Most clouds have reliable offset measurements with this technique. In addition, we find that roughly half of the clouds have underestimated HSA-SPIRE intensities in their outer envelopes relative to Planck, even though the HSA-SPIRE maps were previously zero-point corrected. Using our technique, we produce corrected Herschel intensity maps for all 56 clouds and determine their line-of-sight average dust temperatures and optical depths from modified blackbody fits. The clouds have typical temperatures of ∼14–20 K and optical depths of ∼10⁻⁵–10⁻³. Across the whole sample, we find an anticorrelation between temperature and optical depth. We also find lower temperatures than what was measured in previous Herschel studies, which subtracted out a background level from their intensity maps to circumvent the zero-point correction. Accurate Herschel observations of clouds are key to obtaining accurate density and temperature profiles. To make such future analyses possible, intensity-corrected maps for all 56 clouds are publicly available in the electronic version. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  1. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    PubMed

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient in conveying an individual's innate emotions in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established under cloud generators. With the forward cloud generator, as many facial expression images as desired can be regenerated to visually represent the three extracted features, and each feature plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, conclusions and remarks are given.
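
    A minimal sketch of a forward normal cloud generator in Python (a standard formulation of the cloud model, not necessarily the exact variant used by the authors; Ex, En and He denote expectation, entropy and hyper-entropy):

        import numpy as np

        def forward_cloud_generator(Ex, En, He, n, seed=None):
            """Generate n cloud drops (x_i, mu_i) from the numerical characteristics Ex, En, He."""
            rng = np.random.default_rng(seed)
            En_prime = rng.normal(En, He, n)            # per-drop entropy, perturbed by He
            x = rng.normal(Ex, np.abs(En_prime))        # drop positions
            mu = np.exp(-(x - Ex) ** 2 / (2.0 * En_prime ** 2 + 1e-12))  # certainty degrees
            return x, mu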

  2. Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.

    PubMed

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
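
    A simplified Python sketch of the voxel quantization and lowermost heightmap described above (the dictionary-based accumulation and the single voxel-size parameter are illustrative assumptions):

        import numpy as np

        def lowermost_heightmap(points, voxel_size):
            """Quantize points (N, 3) into 2D columns and keep the lowest z per column."""
            ij = np.floor(points[:, :2] / voxel_size).astype(np.int64)   # column indices
            heightmap = {}
            for key, z in zip(map(tuple, ij), points[:, 2]):
                if key not in heightmap or z < heightmap[key]:
                    heightmap[key] = z
            return heightmap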

  3. End-group-functionalized poly(N,N-diethylacrylamide) via free-radical chain transfer polymerization: Influence of sulfur oxidation and cyclodextrin on self-organization and cloud points in water

    PubMed Central

    Reinelt, Sebastian; Steinke, Daniel

    2014-01-01

    Summary In this work we report the synthesis of thermo-, oxidation- and cyclodextrin- (CD) responsive end-group-functionalized polymers, based on N,N-diethylacrylamide (DEAAm). In a classical free-radical chain transfer polymerization, using thiol-functionalized 4-alkylphenols, namely 3-(4-(1,1-dimethylethan-1-yl)phenoxy)propane-1-thiol and 3-(4-(2,4,4-trimethylpentan-2-yl)phenoxy)propane-1-thiol, poly(N,N-diethylacrylamide) (PDEAAm) with well-defined hydrophobic end-groups is obtained. These end-group-functionalized polymers show different cloud point values, depending on the degree of polymerization and the presence of randomly methylated β-cyclodextrin (RAMEB-CD). Additionally, the influence of the oxidation of the incorporated thioether linkages on the cloud point is investigated. The resulting hydrophilic sulfoxides show higher cloud point values for the lower critical solution temperature (LCST). A high degree of functionalization is supported by 1H NMR-, SEC-, FTIR- and MALDI–TOF measurements. PMID:24778720

  4. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep learning has been used extensively for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of a CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % total error, 4.10 % type I error, and 15.07 % type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while the type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % total error, 2.15 % type I error and 6.14 % type II error.
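
    The conversion of a whole point cloud into a single multi-band image might look roughly like the following Python sketch (the bands chosen here, minimum height, maximum height and point count, and the grid parameters are assumptions for illustration, not necessarily the feature set used in the paper):

        import numpy as np

        def rasterize_features(points, x0, y0, cell, ncols, nrows):
            """Convert a point cloud (N, 3) into one multi-band image for a single FCN pass."""
            zmin = np.full((nrows, ncols), np.inf)
            zmax = np.full((nrows, ncols), -np.inf)
            count = np.zeros((nrows, ncols))
            cols = ((points[:, 0] - x0) / cell).astype(int)
            rows = ((y0 - points[:, 1]) / cell).astype(int)
            ok = (rows >= 0) & (rows < nrows) & (cols >= 0) & (cols < ncols)
            for r, c, z in zip(rows[ok], cols[ok], points[ok, 2]):
                zmin[r, c] = min(zmin[r, c], z)
                zmax[r, c] = max(zmax[r, c], z)
                count[r, c] += 1
            zmin[np.isinf(zmin)] = 0.0
            zmax[np.isinf(zmax)] = 0.0
            return np.stack([zmin, zmax, count], axis=-1)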

  5. Research on cloud background infrared radiation simulation based on fractal and statistical data

    NASA Astrophysics Data System (ADS)

    Liu, Xingrun; Xu, Qingshan; Li, Xia; Wu, Kaifeng; Dong, Yanbing

    2018-02-01

    Clouds are an important natural phenomenon, and their radiation causes serious interference to infrared detectors. Based on fractals and statistical data, a method is proposed to realize cloud background simulation, and the cloud infrared radiation data field is assigned using satellite radiation data of clouds. A cloud infrared radiation simulation model is established using MATLAB, and it can generate cloud background infrared images for different cloud types (low cloud, middle cloud, and high cloud) in different months, bands and sensor zenith angles.

  6. Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Guo, Jiateng; Liu, Shanjun; Zhang, Peina; Wu, Lixin; Zhou, Wenhui; Yu, Yinan

    2017-06-01

    Obtaining accurate information on rock mass discontinuities for deformation analysis and the evaluation of rock mass stability is important. Obtaining measurements for high and steep zones with the traditional compass method is difficult. Photogrammetry, three-dimensional (3D) laser scanning and other remote sensing methods have gradually become mainstream methods. In this study, a method based on a 3D point cloud is proposed to semi-automatically extract rock mass structural plane information. The original data are pre-treated prior to segmentation by removing outlier points. The next step is to segment the point cloud into different point subsets. Various parameters, such as the normal vector, dip direction and dip, can be calculated for each point subset after obtaining the equation of the best-fit plane for the relevant point subset. A cluster analysis (a point subset that satisfies some conditions and thus forms a cluster) is performed based on the normal vectors by introducing the firefly algorithm (FA) and the fuzzy c-means (FCM) algorithm. Finally, clusters that belong to the same discontinuity sets are merged and coloured for visualization purposes. A prototype system is developed based on this method to extract the points of the rock discontinuities from a 3D point cloud. A comparison with existing software shows that this method is feasible. This method can provide a reference for rock mechanics, 3D geological modelling and other related fields.
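
    A small Python sketch of how dip and dip direction could be derived from the best-fit plane normal of a point subset (assuming y points north and z points up; the plane fit itself is omitted):

        import numpy as np

        def dip_and_dip_direction(normal):
            """Convert a plane normal (nx, ny, nz) into dip and dip direction in degrees."""
            nx, ny, nz = normal / np.linalg.norm(normal)
            if nz < 0:                                   # force the normal to point upwards
                nx, ny, nz = -nx, -ny, -nz
            dip = np.degrees(np.arccos(nz))              # angle between plane and horizontal
            dip_direction = np.degrees(np.arctan2(nx, ny)) % 360.0   # azimuth from north
            return dip, dip_direction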

  7. Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data

    NASA Astrophysics Data System (ADS)

    Du, L.; Zhong, R.; Sun, H.; Wu, Q.

    2017-09-01

    An automated method for tunnel deformation monitoring using high-density point cloud data is presented. Firstly, the 3D point cloud data are converted to a two-dimensional surface by projection onto the XOY plane; the projection point set of the central axis on the XOY plane, named Uxoy, is calculated by combining the Alpha Shape algorithm with the RANSAC (Random Sample Consensus) algorithm, and the projection point set of the central axis on the YOZ plane, named Uyoz, is then obtained from the highest and lowest points, which are extracted by intersecting straight lines that pass through each point of Uxoy and are perpendicular to the two-dimensional surface with the tunnel point cloud; finally, Uxoy and Uyoz together form the 3D central axis. Secondly, the buffer of each cross section is calculated by the K-nearest neighbor algorithm, and the initial cross-sectional point set is quickly constructed by a projection method. Finally, the cross sections are denoised and the section lines are fitted using an iterative ellipse fitting method. In order to improve the accuracy of the cross sections, a fine adjustment method is proposed that rotates the initial sectional plane around the intercept point in the horizontal and vertical directions within the buffer. The proposed method is applied to a Shanghai subway tunnel, and the deformation of each section in the direction of 0 to 360 degrees is calculated. The results show that the cross sections have deformed from regular circles into flattened circles due to the great pressure at the top of the tunnel.
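
    As a hedged illustration of the section-fitting idea (a simple algebraic least-squares circle fit rather than the iterative ellipse fitting used in the paper), residuals against the fitted circle already reveal flattening of a cross section:

        import numpy as np

        def kasa_circle_fit(xy):
            """Algebraic least-squares circle fit to 2D cross-section points (N, 2).
            Returns centre (cx, cy) and radius r."""
            x, y = xy[:, 0], xy[:, 1]
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            b = x**2 + y**2
            (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
            r = np.sqrt(c + cx**2 + cy**2)
            return cx, cy, r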

  8. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral LiDAR system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through a receiver of the sensor, and the return signal, together with the position and orientation information of the sensor, is recorded. These recorded data are solved with GNSS/IMU data for further post-processing, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne LiDAR sensor, the Optech Titan system is capable of collecting point cloud data in all three channels: at 532 nm (visible green), at 1064 nm (near infrared, NIR) and at 1550 nm (intermediate infrared, IR). It has become a new source of data for 3D land cover classification. The paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral LiDAR point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types. An overall accuracy of over 90% is achieved using multispectral LiDAR point clouds for 3D land cover classification.

  9. Terrestrial laser scanning in monitoring of anthropogenic objects

    NASA Astrophysics Data System (ADS)

    Zaczek-Peplinska, Janina; Kowalska, Maria

    2017-12-01

    The registered xyz coordinates in the form of a point cloud captured by a terrestrial laser scanner, and the intensity values (I) assigned to them, make it possible to perform geometric and spectral analyses. Comparison of point clouds registered in different time periods requires conversion of the data to a common coordinate system, and proper data selection is necessary. Factors like the point distribution, which depends on the distance between the scanner and the surveyed surface, the angle of incidence, the tasked scan density and the intensity value have to be taken into consideration. A prerequisite for running a correct analysis of point clouds registered during periodic measurements with a laser scanner is the ability to determine the quality and accuracy of the analysed data. The article presents a concept of spectral data adjustment based on geometric analysis of a surface, as well as examples of analyses integrating geometric and physical data in one point cloud: point coordinates, recorded intensity values, and thermal images of an object. The experiments described here show multiple possibilities for using terrestrial laser scanning data and display the necessity of using multi-aspect and multi-source analyses in anthropogenic object monitoring. The article presents examples of multi-source data analyses with regard to intensity value correction for the beam's incidence angle. The measurements were performed using a Leica Nova MS50 scanning total station, a Z+F Imager 5010 scanner and the integrated Z+F T-Cam thermal camera.
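
    A hedged Python sketch of a simple cosine correction of intensity for the beam's incidence angle (a Lambertian assumption with clipping to avoid grazing-angle blow-up; range-dependent effects and the exact correction model of the article are not reproduced):

        import numpy as np

        def correct_intensity(intensity, points, normals, scanner_origin):
            """Divide TLS intensity by the cosine of the incidence angle per point."""
            beams = points - scanner_origin
            beams /= np.linalg.norm(beams, axis=1, keepdims=True)
            cos_inc = np.abs(np.einsum('ij,ij->i', beams, normals))
            cos_inc = np.clip(cos_inc, 0.05, 1.0)    # avoid division by near-zero values
            return intensity / cos_inc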

  10. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are more and more used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from any viewpoint on point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera which took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is proven in simulation and real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.

  11. Equatorial waves simulated by the NCAR community climate model

    NASA Technical Reports Server (NTRS)

    Cheng, Xinhua; Chen, Tsing-Chang

    1988-01-01

    The equatorial planetary waves simulated by the NCAR CCM1 general circulation model were investigated in terms of space-time spectral analysis (Kao, 1968; Hayashi, 1971, 1973) and energetic analysis (Hayashi, 1980). These analyses are particularly applied to grid-point data on latitude circles. In order to test some physical factors which may affect the generation of tropical transient planetary waves, three different model simulations with the CCM1 (the control, the no-mountain, and the no-cloud experiments) were analyzed.

  12. Graphics-based intelligent search and abstracting using Data Modeling

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.; Case, Carl T.; Songy, Claude G.

    2002-11-01

    This paper presents an autonomous text and context-mining algorithm that converts text documents into point clouds for visual search cues. This algorithm is applied to the task of data-mining a scriptural database comprised of the Old and New Testaments from the Bible and the Book of Mormon, Doctrine and Covenants, and the Pearl of Great Price. Results are generated which graphically show the scripture that represents the average concept of the database and the mining of the documents down to the verse level.

  13. Point-cloud-to-point-cloud technique on tool calibration for dental implant surgical path tracking

    NASA Astrophysics Data System (ADS)

    Lorsakul, Auranuch; Suthakorn, Jackrit; Sinthanayothin, Chanjira

    2008-03-01

    Dental implants are one of the most popular methods of tooth root replacement used in prosthetic dentistry. A computerized navigation system based on a pre-surgical plan is offered to minimize the potential risk of damage to critical anatomic structures of patients. Dental tool tip calibration is an important intraoperative procedure used to determine the relation between the hand-piece tool tip and the hand-piece's markers. When transferring coordinates from preoperative CT data to reality, this relation is one component of the typical registration problem. It is part of a navigation system which will be developed for further integration. High accuracy is required, and the relation is obtained by point-cloud-to-point-cloud rigid transformations and singular value decomposition (SVD) for minimizing rigid registration errors. In earlier studies, commercial surgical navigation systems, such as those from BrainLAB and Materialise, have had flexibility problems in tool tip calibration: their systems either require a special tool tip calibration device or are unable to accommodate a different tool. The proposed procedure is to use the pointing device or hand-piece to touch a pivot point; the transformation matrix is calculated every time the hand-piece moves to a new position while the tool tip stays at the same point. The experiment was based on information from the tracking device, image acquisition and image processing algorithms. The key result is that the point-cloud-to-point-cloud approach requires only 3 pose images of the tool to converge to a minimum error of 0.77%, and the obtained result is correct when using the tool holder to track the path simulation line displayed in the graphic animation.
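
    A hedged Python sketch of one common pivot-calibration formulation matching the description above (solving R_i p_tip + t_i = p_pivot in a least-squares sense from several tracked poses; this is a standard approach, not necessarily the authors' exact SVD-based variant):

        import numpy as np

        def pivot_calibration(rotations, translations):
            """Estimate the tool-tip offset p_tip (marker frame) and the fixed pivot point
            p_pivot (tracker frame) from poses (R_i, t_i) recorded while the tip stays put."""
            n = len(rotations)
            A = np.zeros((3 * n, 6))
            b = np.zeros(3 * n)
            for i, (R, t) in enumerate(zip(rotations, translations)):
                A[3 * i:3 * i + 3, :3] = R          # R_i p_tip - p_pivot = -t_i
                A[3 * i:3 * i + 3, 3:] = -np.eye(3)
                b[3 * i:3 * i + 3] = -t
            sol, *_ = np.linalg.lstsq(A, b, rcond=None)
            return sol[:3], sol[3:]                 # p_tip, p_pivot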

  14. Cloud Collaboration: Cloud-Based Instruction for Business Writing Class

    ERIC Educational Resources Information Center

    Lin, Charlie; Yu, Wei-Chieh Wayne; Wang, Jenny

    2014-01-01

    Cloud computing technologies, such as Google Docs, Adobe Creative Cloud, Dropbox, and Microsoft Windows Live, have become increasingly appreciated as next-generation digital learning tools. Cloud computing technologies encourage students' active engagement, collaboration, and participation in their learning, facilitate group work, and support…

  15. Generation of Classical DInSAR and PSI Ground Motion Maps on a Cloud Thematic Platform

    NASA Astrophysics Data System (ADS)

    Mora, Oscar; Ordoqui, Patrick; Romero, Laia

    2016-08-01

    This paper presents the experience of ALTAMIRA INFORMATION in deploying InSAR (Synthetic Aperture Radar Interferometry) services on the Geohazard Exploitation Platform (GEP), supported by ESA. Two different processing chains are presented jointly with ground motion maps obtained from cloud computing: DIAPASON for classical DInSAR and SPN (Stable Point Network) for PSI (Persistent Scatterer Interferometry) processing. The product obtained from DIAPASON is the interferometric phase related to ground motion (phase fringes from a SAR pair). SPN provides motion data (mean velocity and time series) for high-quality pixels from a stack of SAR images. DIAPASON is already implemented, and SPN is under development to be exploited with historical data from the ERS-1/2 and ENVISAT satellites, and current acquisitions of SENTINEL-1 in SLC and TOPSAR modes.

  16. Layer stacking: A novel algorithm for individual forest tree segmentation from LiDAR point clouds

    Treesearch

    Elias Ayrey; Shawn Fraver; John A. Kershaw; Laura S. Kenefic; Daniel Hayes; Aaron R. Weiskittel; Brian E. Roth

    2017-01-01

    As light detection and ranging (LiDAR) technology advances, it has become common for datasets to be acquired at a point density high enough to capture structural information from individual trees. To process these data, an automatic method of isolating individual trees from a LiDAR point cloud is required. Traditional methods for segmenting trees attempt to isolate...

  17. From One Pixel to One Earth: Building a Living Atlas in the Cloud to Analyze and Monitor Global Patterns

    NASA Astrophysics Data System (ADS)

    Moody, D.; Brumby, S. P.; Chartrand, R.; Franco, E.; Keisler, R.; Kelton, T.; Kontgis, C.; Mathis, M.; Raleigh, D.; Rudelis, X.; Skillman, S.; Warren, M. S.; Longbotham, N.

    2016-12-01

    The recent computing performance revolution has driven improvements in sensor, communication, and storage technology. Historical, multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes per year of high-resolution imagery with daily global coverage. Cloud computing and storage, combined with recent advances in machine learning and open software, are enabling understanding of the world at an unprecedented scale and detail. We have assembled all available satellite imagery from the USGS Landsat, NASA MODIS, and ESA Sentinel programs, as well as commercial PlanetScope and RapidEye imagery, and have analyzed over 2.8 quadrillion multispectral pixels. We leveraged the commercial cloud to generate a tiled, spatio-temporal mosaic of the Earth for fast iteration and development of new algorithms combining analysis techniques from remote sensing, machine learning, and scalable compute infrastructure. Our data platform enables processing at petabytes per day rates using multi-source data to produce calibrated, georeferenced imagery stacks at desired points in time and space that can be used for pixel level or global scale analysis. We demonstrate our data platform capability by using the European Space Agency's (ESA) published 2006 and 2009 GlobCover 20+ category label maps to train and test a Land Cover Land Use (LCLU) classifier, and generate current self-consistent LCLU maps in Brazil. We train a standard classifier on 2006 GlobCover categories using temporal imagery stacks, and we validate our results on co-registered 2009 Globcover LCLU maps and 2009 imagery. We then extend the derived LCLU model to current imagery stacks to generate an updated, in-season label map. Changes in LCLU labels can now be seamlessly monitored for a given location across the years in order to track, for example, cropland expansion, forest growth, and urban developments. An example of change monitoring is illustrated in the included figure showing rainfed cropland change in the Mato Grosso region of Brazil between 2006 and 2009.

  18. A case study of microphysical structures and hydrometeor phase in convection using radar Doppler spectra at Darwin, Australia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riihimaki, Laura D.; Comstock, Jennifer M.; Luke, Edward

    To understand the microphysical processes that impact diabatic heating and cloud lifetimes in convection, we need to characterize the spatial distribution of supercooled liquid water. To address this observational challenge, vertically pointing active sensors at the Darwin Atmospheric Radiation Measurement (ARM) site are used to classify cloud phase within a deep convective cloud in a shallow-to-deep convection transitional case. The cloud cannot be fully observed by a lidar due to signal attenuation. Thus we develop an objective method for identifying hydrometeor classes, including mixed-phase conditions, using k-means clustering on parameters that describe the shape of the Doppler spectra from vertically pointing Ka band cloud radar. This approach shows that multiple, overlapping mixed-phase layers exist within the cloud, rather than a single region of supercooled liquid, indicating complexity to how ice growth and diabatic heating occur in the vertical structure of the cloud.

  19. A case study of microphysical structures and hydrometeor phase in convection using radar Doppler spectra at Darwin, Australia

    NASA Astrophysics Data System (ADS)

    Riihimaki, L. D.; Comstock, J. M.; Luke, E.; Thorsen, T. J.; Fu, Q.

    2017-07-01

    To understand the microphysical processes that impact diabatic heating and cloud lifetimes in convection, we need to characterize the spatial distribution of supercooled liquid water. To address this observational challenge, ground-based vertically pointing active sensors at the Darwin Atmospheric Radiation Measurement site are used to classify cloud phase within a deep convective cloud. The cloud cannot be fully observed by a lidar due to signal attenuation. Therefore, we developed an objective method for identifying hydrometeor classes, including mixed-phase conditions, using k-means clustering on parameters that describe the shape of the Doppler spectra from vertically pointing Ka-band cloud radar. This approach shows that multiple, overlapping mixed-phase layers exist within the cloud, rather than a single region of supercooled liquid. Diffusional growth calculations show that the conditions for the Wegener-Bergeron-Findeisen process exist within one of these mixed-phase microstructures.

  20. Practical implementation of tetrahedral mesh reconstruction in emission tomography

    PubMed Central

    Boutchko, R.; Sitek, A.; Gullberg, G. T.

    2014-01-01

    This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise. PMID:23588373

  1. Practical implementation of tetrahedral mesh reconstruction in emission tomography

    NASA Astrophysics Data System (ADS)

    Boutchko, R.; Sitek, A.; Gullberg, G. T.

    2013-05-01

    This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise.

  2. Photogrammetric 3d Building Reconstruction from Thermal Images

    NASA Astrophysics Data System (ADS)

    Maset, E.; Fusiello, A.; Crosilla, F.; Toldo, R.; Zorzetto, D.

    2017-08-01

    This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that a commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about position and attitude of the images nor camera calibration parameters. Moreover, we propose a procedure based on Iterative Closest Point (ICP) algorithm to create a model that combines high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.

  3. Evolution of bubble clouds induced by pulsed cavitational ultrasound therapy - histotripsy.

    PubMed

    Xu, Zhen; Raghavan, M; Hall, T L; Mycek, M-A; Fowlkes, J B

    2008-05-01

    Mechanical tissue fractionation can be achieved using successive, high-intensity ultrasound pulses in a process termed histotripsy. Histotripsy has many potential clinical applications where noninvasive tissue removal is desired. The primary mechanism for histotripsy is believed to be cavitation. Using fast-gated imaging, this paper studies the evolution of a cavitating bubble cloud induced by a histotripsy pulse (10 and 14 cycles) at peak negative pressures exceeding 21 MPa. Bubble clouds are generated inside a gelatin phantom and at a tissue-water interface, representing two situations encountered clinically. In both environments, the imaging results show that the bubble clouds share the same evolutionary trend. The bubble cloud and individual bubbles in the cloud were generated by the first cycle of the pulse, grew with each cycle during the pulse, and continued to grow and then collapsed several hundred microseconds after the pulse. For example, the bubbles started under 10 μm, grew to 50 μm during the pulse, and continued to grow to 100 μm after the pulse. The results also suggest that the bubble clouds generated in the two environments differ in growth and collapse duration, void fraction, shape, and size. This study furthers our understanding of the dynamics of bubble clouds induced by histotripsy.

  4. Sensor data fusion for textured reconstruction and virtual representation of alpine scenes

    NASA Astrophysics Data System (ADS)

    Häufel, Gisela; Bulatov, Dimitri; Solbrig, Peter

    2017-10-01

    The concept of remote sensing is to provide information about a wide-range area without making physical contact with that area. If, in addition to satellite imagery, images and videos taken by drones provide more up-to-date data at a higher resolution, or accurate vector data is downloadable from the Internet, one speaks of sensor data fusion. The concept of sensor data fusion is relevant for many applications, such as virtual tourism, automatic navigation, hazard assessment, etc. In this work, we describe sensor data fusion aiming to create a semantic 3D model of an extremely interesting yet challenging dataset: an alpine region in Southern Germany. A particular challenge of this work is that rock faces including overhangs are present in the input airborne laser point cloud. The proposed procedure for identification and reconstruction of overhangs from point clouds comprises four steps: point cloud preparation, filtering out vegetation, mesh generation and texturing. Further object types are extracted in several interesting subsections of the dataset: building models with textures from UAV (Unmanned Aerial Vehicle) videos, hills reconstructed as generic surfaces and textured by the orthophoto, individual trees detected by the watershed algorithm, as well as the vector data for roads retrieved from openly available shapefiles and GPS-device tracks. We pursue geo-specific reconstruction by assigning texture and width to roads of several pre-determined types and modeling isolated trees and rocks using commercial software. For visualization and simulation of the area, we have chosen the simulation system Virtual Battlespace 3 (VBS3). It becomes clear that the proposed concept of sensor data fusion allows a coarse reconstruction of a large scene and, at the same time, an accurate and up-to-date representation of its relevant subsections, in which simulation can take place.

  5. Quantification of skeletal fraction volume of a soil pit by means of photogrammetry

    NASA Astrophysics Data System (ADS)

    Baruck, Jasmin; Zieher, Thomas; Bremer, Magnus; Rutzinger, Martin; Geitner, Clemens

    2015-04-01

    The grain size distribution of a soil is a key parameter determining soil water behaviour, soil fertility and land use potential. It plays an important role in soil classification and allows drawing conclusions on landscape development as well as soil formation processes. However, fine soil material (i.e. particle diameter ≤2 mm) is usually documented more thoroughly than the skeletal fraction (i.e. particle diameter >2 mm). While fine soil material is commonly analysed in the laboratory in order to determine the soil type, the skeletal fraction is typically estimated in the field at the profile. For a more precise determination of the skeletal fraction other methods can be applied and combined. These methods can be volume-related (sampling rings, percussion coring tubes) or non-volume-related (sieving of spade excavations). In this study we present a framework for the quantification of skeletal fraction volumes of a soil pit by means of photogrammetry. As a first step, 3D point clouds of both the soil pit and the skeletal grains were generated. To this end, all skeletal grains of the pit were spread out onto a plain, clean plastic sheet in the field and numerous digital photos were taken using a reflex camera. With the help of the open source tool VisualSFM (structure from motion) two scaled 3D point clouds were derived. As a second step, the skeletal fraction point cloud was segmented by radiometric attributes in order to determine the volumes of single skeletal grains. The comparison of the total skeletal fraction volume with the volume of the pit (closed by spline interpolation) yields an estimate of the volumetric proportion of skeletal grains. The presented framework therefore provides an objective reference value of the skeletal fraction for the support of qualitative field records.
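
    For illustration, a minimal sketch (not the authors' code) of the final volume-comparison step is given below, assuming the grain point cloud has already been segmented into per-grain arrays and the pit cloud has been closed; convex hulls are used here as a simple volume approximation, and all array names are hypothetical.

      import numpy as np
      from scipy.spatial import ConvexHull

      def grain_volumes(grains):
          """grains: list of (N_i, 3) arrays, one per segmented skeletal grain."""
          # Convex hulls slightly overestimate concave grains; treated here as an approximation.
          return np.array([ConvexHull(g).volume for g in grains if len(g) >= 4])

      def skeletal_fraction(grains, pit_points):
          """Volumetric proportion of skeletal grains relative to the excavated pit."""
          v_grains = grain_volumes(grains).sum()
          v_pit = ConvexHull(pit_points).volume   # stand-in for the spline-closed pit volume
          return v_grains / v_pit

      # Example with synthetic data:
      # rng = np.random.default_rng(0)
      # grains = [rng.random((200, 3)) * 0.02 for _ in range(50)]   # ~2 cm grains
      # pit = rng.random((5000, 3)) * np.array([1.0, 1.0, 0.5])     # ~0.5 m deep pit
      # print(skeletal_fraction(grains, pit))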

  6. Automated matching of multiple terrestrial laser scans for stem mapping without the use of artificial references

    NASA Astrophysics Data System (ADS)

    Liu, Jingbin; Liang, Xinlian; Hyyppä, Juha; Yu, Xiaowei; Lehtomäki, Matti; Pyörälä, Jiri; Zhu, Lingli; Wang, Yunsheng; Chen, Ruizhi

    2017-04-01

    Terrestrial laser scanning has been widely used to analyze the 3D structure of a forest in detail and to generate data at the level of a reference plot for forest inventories without destructive measurements. Multi-scan terrestrial laser scanning is more commonly applied to collect plot-level data so that all of the stems can be detected and analyzed. However, the point clouds of multiple scans must be matched, ideally with automated processing, to yield a unified point cloud. Mismatches between datasets will lead to errors during the processing of multi-scan data. Classic registration methods based on flat surfaces cannot be directly applied in forest environments; therefore, artificial reference objects have conventionally been used to assist with scan matching. The use of artificial references requires additional labor and expertise, and greatly increases the cost. In this study, we present an automated processing method for plot-level stem mapping that matches multiple scans without artificial references. In contrast to previous studies, the registration method developed in this study exploits the natural geometric characteristics among a set of tree stems in a plot and combines the point clouds of multiple scans into a unified coordinate system. Integrating multiple scans improves the overall performance of stem mapping in terms of the correctness of tree detection, as well as the bias and the root-mean-square errors of forest attributes such as diameter at breast height and tree height. In addition, the automated processing method makes stem mapping more reliable and consistent among plots, reduces the costs associated with plot-based stem mapping, and enhances efficiency.
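
    A minimal sketch of the final alignment step only is shown below, assuming stem centres have already been detected in each scan and put into correspondence (the geometric matching of stem patterns, which is the core contribution of the paper, is not reproduced here); variable names are hypothetical.

      import numpy as np

      def rigid_2d_from_stems(src, dst):
          """Least-squares 2D rotation + translation mapping src stem centres onto dst.
          src, dst: (N, 2) arrays of matched stem centres from two scans."""
          c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
          H = (src - c_src).T @ (dst - c_dst)          # cross-covariance of centred sets
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:                     # enforce a proper rotation (no reflection)
              Vt[-1, :] *= -1
              R = Vt.T @ U.T
          t = c_dst - R @ c_src
          return R, t

      # R, t = rigid_2d_from_stems(scan2_stems, scan1_stems)
      # aligned = (R @ scan2_stems.T).T + t   # bring scan 2 into scan 1's coordinate system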

  7. Three-Dimensional Recording of Bastion Middleburg Monument Using Terrestrial Laser Scanner

    NASA Astrophysics Data System (ADS)

    Majid, Z.; Lau, C. L.; Yusoff, A. R.

    2016-06-01

    This paper describes the use of terrestrial laser scanning for the full three-dimensional (3D) recording of a historical monument known as Bastion Middleburg. The monument is located in Melaka, Malaysia, and was built by the Dutch in 1660. It served as a major hub for the community's commercial activities in the Malacca estuary, and the Dutch built it as a control tower, or fortress. The monument stands on the banks of the Malacca River, between the Stadhuys (better known as the Red House) and the Mill Quayside. The fort was rediscovered on 25 November 2006 as a result of in-depth research on old maps by the National Heritage Department. The recording process begins with the placement of measuring targets at strategic locations around the monument. Spherical targets were used for the registration of the point cloud data. The scanning was carried out using a Leica C10 terrestrial laser scanner. The monument was scanned from seven scanning stations surrounding it, using a medium scanning resolution mode. Images of the monument were also captured using the digital camera built into the scanner. For a proper registration process, all the spherical targets were scanned separately using a high scanning resolution mode. The point cloud data were pre-processed using Leica Cyclone software, starting with the registration of the seven scan datasets through the overlapping spherical targets. Post-processing involved the generation of a coloured point cloud model of the monument using third-party software. An orthophoto of the monument was also produced. This research shows that laser scanning provides an excellent solution for recording historical monuments with true scale and texture.

  8. Genes2WordCloud: a quick way to identify biological themes from gene lists and free text.

    PubMed

    Baroukh, Caroline; Jenkins, Sherry L; Dannenfelser, Ruth; Ma'ayan, Avi

    2011-10-13

    Word-clouds recently emerged on the web as a solution for quickly summarizing text by maximizing the display of most relevant terms about a specific topic in the minimum amount of space. As biologists are faced with the daunting amount of new research data commonly presented in textual formats, word-clouds can be used to summarize and represent biological and/or biomedical content for various applications. Genes2WordCloud is a web application that enables users to quickly identify biological themes from gene lists and research relevant text by constructing and displaying word-clouds. It provides users with several different options and ideas for the sources that can be used to generate a word-cloud. Different options for rendering and coloring the word-clouds give users the flexibility to quickly generate customized word-clouds of their choice. Genes2WordCloud is a word-cloud generator and a word-cloud viewer that is based on WordCram implemented using Java, Processing, AJAX, mySQL, and PHP. Text is fetched from several sources and then processed to extract the most relevant terms with their computed weights based on word frequencies. Genes2WordCloud is freely available for use online; it is open source software and is available for installation on any web-site along with supporting documentation at http://www.maayanlab.net/G2W. Genes2WordCloud provides a useful way to summarize and visualize large amounts of textual biological data or to find biological themes from several different sources. The open source availability of the software enables users to implement customized word-clouds on their own web-sites and desktop applications.
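
    As a small illustration of the underlying idea (not the Genes2WordCloud code, which is Java/PHP based), the sketch below extracts the most relevant terms and frequency-based weights from free text; the stop-word list and file name are placeholders.

      import re
      from collections import Counter

      STOPWORDS = {"the", "and", "of", "in", "to", "a", "is", "for", "that", "with", "are", "on"}

      def term_weights(text, top_n=50):
          """Return the top_n terms with relative-frequency weights for a word-cloud renderer."""
          words = re.findall(r"[a-z]{3,}", text.lower())
          counts = Counter(w for w in words if w not in STOPWORDS)
          total = sum(counts.values()) or 1
          # weight = relative frequency; a word-cloud renderer maps this to font size
          return {w: c / total for w, c in counts.most_common(top_n)}

      # weights = term_weights(open("abstracts.txt").read())  # e.g. text fetched from PubMed abstracts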

  9. Genes2WordCloud: a quick way to identify biological themes from gene lists and free text

    PubMed Central

    2011-01-01

    Background Word-clouds recently emerged on the web as a solution for quickly summarizing text by maximizing the display of most relevant terms about a specific topic in the minimum amount of space. As biologists are faced with the daunting amount of new research data commonly presented in textual formats, word-clouds can be used to summarize and represent biological and/or biomedical content for various applications. Results Genes2WordCloud is a web application that enables users to quickly identify biological themes from gene lists and research relevant text by constructing and displaying word-clouds. It provides users with several different options and ideas for the sources that can be used to generate a word-cloud. Different options for rendering and coloring the word-clouds give users the flexibility to quickly generate customized word-clouds of their choice. Methods Genes2WordCloud is a word-cloud generator and a word-cloud viewer that is based on WordCram implemented using Java, Processing, AJAX, mySQL, and PHP. Text is fetched from several sources and then processed to extract the most relevant terms with their computed weights based on word frequencies. Genes2WordCloud is freely available for use online; it is open source software and is available for installation on any web-site along with supporting documentation at http://www.maayanlab.net/G2W. Conclusions Genes2WordCloud provides a useful way to summarize and visualize large amounts of textual biological data or to find biological themes from several different sources. The open source availability of the software enables users to implement customized word-clouds on their own web-sites and desktop applications. PMID:21995939

  10. Assessment of different models for computing the probability of a clear line of sight

    NASA Astrophysics Data System (ADS)

    Bojin, Sorin; Paulescu, Marius; Badescu, Viorel

    2017-12-01

    This paper is focused on modeling the morphological properties of cloud fields in terms of the probability of a clear line of sight (PCLOS). PCLOS is defined as the probability that a line of sight between the observer and a given point of the celestial vault passes freely without intersecting a cloud. A variety of PCLOS models assuming hemispherical, semi-ellipsoidal and ellipsoidal cloud shapes are tested. The effective parameters (cloud aspect ratio and absolute cloud fraction) are extracted from high-resolution series of sunshine number measurements. The performance of the PCLOS models is evaluated from the perspective of their ability to retrieve the point cloudiness. The advantages and disadvantages of the tested models are discussed, aiming at a simplified parameterization of the PCLOS models.

  11. The Taurus Spitzer Legacy Project

    NASA Astrophysics Data System (ADS)

    McCabe, Caer-Eve; Padgett, D. L.; Rebull, L.; Noriega-Crespo, A.; Carey, S.; Brooke, T.; Stapelfeldt, K. R.; Fukagawa, M.; Hines, D.; Terebey, S.; Huard, T.; Hillenbrand, L.; Guedel, M.; Audard, M.; Monin, J.; Guieu, S.; Knapp, G.; Evans, N. J., III; Menard, F.; Harvey, P.; Allen, L.; Wolf, S.; Skinner, S.; Strom, S.; Glauser, A.; Saavedra, C.; Koerner, D.; Myers, P.; Shupe, D.; Latter, W.; Grosso, N.; Heyer, M.; Dougados, C.; Bouvier, J.

    2009-01-01

    Without massive stars and dense stellar clusters, Taurus plays host to a distributed mode of low-mass star formation particularly amenable to observational and theoretical study. In 2005-2007, our team mapped the central 43 square degrees of the main Taurus clouds at wavelengths from 3.6 - 160 microns with the IRAC and MIPS cameras on the Spitzer Space Telescope. Together, these images form the largest contiguous Spitzer map of a single star-forming region (and any region outside the galactic plane). Our Legacy team has generated re-reduced mosaic images and source catalogs, available to the community via the Spitzer Science Center website http://ssc.spitzer.caltech.edu/legacy/all.html . This Spitzer survey is a central and crucial part of a multiwavelength study of the Taurus cloud complex that we have performed using XMM, CFHT, and the SDSS. The seven photometry data points from Spitzer allow us to characterize the circumstellar environment of each object, and, in conjunction with optical and NIR photometry, construct a complete luminosity function for the cloud members that will place constraints on the initial mass function. We present results drawing upon our catalog of several hundred thousand IRAC and thousands of MIPS sources. Initial results from our study of the Taurus clouds include new disks around brown dwarfs, new low luminosity YSO candidates, and new Herbig-Haro objects.

  12. Formation of massive, dense cores by cloud-cloud collisions

    NASA Astrophysics Data System (ADS)

    Takahira, Ken; Shima, Kazuhiro; Habe, Asao; Tasker, Elizabeth J.

    2018-03-01

    We performed sub-parsec (~0.014 pc) scale simulations of cloud-cloud collisions of two idealized turbulent molecular clouds (MCs) with different masses in the range of (0.76-2.67) × 10^4 M_⊙ and with collision speeds of 5-30 km s^-1. Those parameters are larger than in Takahira, Tasker, and Habe (2014, ApJ, 792, 63), in which the colliding system showed a partial gaseous arc morphology that supports the NANTEN observations of objects indicated to be colliding MCs by numerical simulations. Gas clumps with densities greater than 10^-20 g cm^-3 were identified as pre-stellar cores and tracked through the simulation to investigate the effects of the mass of the colliding clouds and the collision speed on the resulting core population. Our results demonstrate that the properties of the smaller cloud are more important for the outcome of cloud-cloud collisions. The mass function of the formed cores can be approximated by a power-law relation with an index γ = -1.6 in slower cloud-cloud collisions (v ~ 5 km s^-1), in good agreement with observations of MCs. A faster relative speed increases the number of cores formed in the early stage of collisions and shortens the gas accretion phase of cores in the shocked region, leading to the suppression of core growth. A bending point appears in the high-mass part of the core mass function, and the bending-point mass decreases with increasing collision speed for the same combination of colliding clouds. The part of the core mass function above the bending-point mass can be approximated by a power law with γ = -2 to -3, similar to the power index of the massive part of the observed stellar initial mass function. We discuss the implications of our results for massive-star formation in our Galaxy.
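
    A hedged sketch of how a power-law index like the quoted γ ≈ -1.6 can be estimated from a set of core masses is given below; it uses the standard maximum-likelihood estimator for a power law dN/dM ∝ M^γ above a completeness limit m_min (the abstract does not state the authors' exact fitting or binning convention, so this is illustrative only).

      import numpy as np

      def power_law_index(masses, m_min):
          """Maximum-likelihood slope for p(m) ∝ m^(-alpha), m >= m_min; returns gamma = -alpha."""
          m = np.asarray(masses, dtype=float)
          m = m[m >= m_min]
          alpha = 1.0 + m.size / np.sum(np.log(m / m_min))
          return -alpha                                   # convention dN/dM ∝ M^gamma

      # gamma = power_law_index(core_masses_solar, m_min=1.0)  # core_masses_solar is hypothetical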

  13. Formation of massive, dense cores by cloud-cloud collisions

    NASA Astrophysics Data System (ADS)

    Takahira, Ken; Shima, Kazuhiro; Habe, Asao; Tasker, Elizabeth J.

    2018-05-01

    We performed sub-parsec (~0.014 pc) scale simulations of cloud-cloud collisions of two idealized turbulent molecular clouds (MCs) with different masses in the range of (0.76-2.67) × 10^4 M_⊙ and with collision speeds of 5-30 km s^-1. Those parameters are larger than in Takahira, Tasker, and Habe (2014, ApJ, 792, 63), in which the colliding system showed a partial gaseous arc morphology that supports the NANTEN observations of objects indicated to be colliding MCs by numerical simulations. Gas clumps with densities greater than 10^-20 g cm^-3 were identified as pre-stellar cores and tracked through the simulation to investigate the effects of the mass of the colliding clouds and the collision speed on the resulting core population. Our results demonstrate that the properties of the smaller cloud are more important for the outcome of cloud-cloud collisions. The mass function of the formed cores can be approximated by a power-law relation with an index γ = -1.6 in slower cloud-cloud collisions (v ~ 5 km s^-1), in good agreement with observations of MCs. A faster relative speed increases the number of cores formed in the early stage of collisions and shortens the gas accretion phase of cores in the shocked region, leading to the suppression of core growth. A bending point appears in the high-mass part of the core mass function, and the bending-point mass decreases with increasing collision speed for the same combination of colliding clouds. The part of the core mass function above the bending-point mass can be approximated by a power law with γ = -2 to -3, similar to the power index of the massive part of the observed stellar initial mass function. We discuss the implications of our results for massive-star formation in our Galaxy.

  14. Clouds off the Aleutian Islands

    NASA Image and Video Library

    2017-12-08

    March 23, 2010 - Clouds off the Aleutian Islands Interesting cloud patterns were visible over the Aleutian Islands in this image, captured by the MODIS on the Aqua satellite on March 14, 2010. Turbulence, caused by the wind passing over the highest points of the islands, is producing the pronounced eddies that swirl the clouds into a pattern called a vortex "street". In this image, the clouds have also aligned in parallel rows or streets. Cloud streets form when low-level winds move between and over obstacles causing the clouds to line up into rows (much like streets) that match the direction of the winds. At the point where the clouds first form streets, they're very narrow and well-defined. But as they age, they lose their definition, and begin to spread out and rejoin each other into a larger cloud mass. The Aleutians are a chain of islands that extend from Alaska toward the Kamchatka Peninsula in Russia. For more information related to this image go to: modis.gsfc.nasa.gov/gallery/individual.php?db_date=2010-0... For more information about Goddard Space Flight Center go here: www.nasa.gov/centers/goddard/home/index.html

  15. Standoff detection of bioaerosols over wide area using a newly developed sensor combining a cloud mapper and a spectrometric LIF lidar

    NASA Astrophysics Data System (ADS)

    Buteau, Sylvie; Simard, Jean-Robert; Roy, Gilles; Lahaie, Pierre; Nadeau, Denis; Mathieu, Pierre

    2013-10-01

    A standoff sensor called BioSense was developed to demonstrate the capacity to map, track and classify bioaerosol clouds from a distant range and over a wide area. The concept of the system is based on a two-step dynamic surveillance: 1) cloud detection using an infrared (IR) scanning cloud mapper and 2) cloud classification based on a staring ultraviolet (UV) Laser Induced Fluorescence (LIF) interrogation. The system can be operated either in an automatic surveillance mode or with manual intervention. The automatic surveillance operation includes several steps: mission planning, sensor deployment, background monitoring, surveillance, cloud detection, classification and finally alarm generation based on the classification result. One of the main challenges is the classification step, which relies on a spectrally resolved UV LIF signature library. The construction of this library currently relies on in-chamber releases of various materials that are simultaneously characterized with the standoff sensor and referenced with point sensors such as an Aerodynamic Particle Sizer® (APS). The system was tested at three different locations in order to evaluate its capacity to operate in diverse types of surroundings and various environmental conditions. The system generally showed good performance even though its troubleshooting was not completed before initiating the Test and Evaluation (T&E) process. The standoff system's performance appeared to be highly dependent on the type of challenge, the climatic conditions and the time of day. The real-time results, combined with the experience acquired during the 2012 T&E, allowed future improvements and avenues of investigation to be identified.

  16. X-ray pulsars in nearby irregular galaxies

    NASA Astrophysics Data System (ADS)

    Yang, Jun

    2018-01-01

    The Small Magellanic Cloud (SMC), Large Magellanic Cloud (LMC) and the irregular galaxy IC 10 are valuable laboratories for studying the physical, temporal and statistical properties of the X-ray pulsar population with multi-satellite observations, in order to probe fundamental physics. The known distances of these galaxies help us easily categorize the luminosities of the pulsars, and their age differences are helpful for studying the origin and evolution of compact objects. Therefore, a complete archive of 116 XMM-Newton PN, 151 Chandra (Advanced CCD Imaging Spectrometer) ACIS, and 952 RXTE PCA observations of the pulsars in the SMC was collected and analyzed, along with 42 XMM-Newton and 30 Chandra observations of the Large Magellanic Cloud, spanning 1997-2014. From a sample of 67 SMC pulsars we generate a suite of products for each pulsar detection: spin period, flux, event list, high time-resolution light curve, pulse profile, periodogram, and X-ray spectrum. Combining all three satellites, we generated complete histories of the spin periods, pulse amplitudes, pulsed fractions and X-ray luminosities. Many of the pulsars show variations in pulse period due to the combination of orbital motion and accretion torques. Long-term spin-up/down trends are seen in 28/25 pulsars respectively, pointing to sustained transfer of mass and angular momentum to the neutron star on decadal timescales. The distributions of pulse detection and flux as functions of spin period provide interesting findings: they map the boundaries of accretion-driven X-ray luminosity and show that fast pulsars (P<10 s) are rarely detected, yet are more prone to giant outbursts. In parallel, we compare the observed pulse profiles to our general relativity (GR) model of X-ray emission in order to constrain the physical parameters of the pulsars. In addition, we conduct a search for optical counterparts to X-ray sources in the local dwarf galaxy IC 10 to form a comparison sample for Magellanic Cloud X-ray pulsars.

  17. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimate of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploration data analysis. The identification and geometrical characterization of discontinuity features is divided in steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and principal vector similarity criteria. Poles to points are assigned to individual discontinuity objects using easy custom vector clustering and Jaccard distance approaches, and each object is segmented into planar clusters using an improved version of the DBSCAN algorithm. Modal set orientations are then recomputed by cluster-based orientation statistics to avoid the effects of biases related to cluster size and density heterogeneity of the point cloud. Finally, spacing values are measured between individual discontinuity clusters along scanlines parallel to modal pole vectors, whereas individual feature size (persistence) is measured using 3D convex hull bounding boxes. Spacing and size are provided both as raw population data and as summary statistics. The tool is optimized for parallel computing on 64bit systems, and a Graphic User Interface (GUI) has been developed to manage data processing, provide several outputs, including reclassified point clouds, tables, plots, derived fracture intensity parameters, and export to modelling software tools. We present test applications performed both on synthetic 3D data (simple 3D solids) and real case studies, validating the results with existing geomechanical datasets.
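
    A compact sketch of the first step only (identifying locally planar facets via K-Nearest Neighbours and Principal Component Analysis) is given below; the set clustering, DBSCAN segmentation and spacing/persistence measurements of the tool are not reproduced, and the neighbourhood size and threshold are assumptions.

      import numpy as np
      from scipy.spatial import cKDTree

      def local_normals_and_planarity(points, k=30):
          """Per-point unit normals and a planarity score from local PCA.
          points: (N, 3) array; k: neighbourhood size (tuning assumption)."""
          tree = cKDTree(points)
          _, idx = tree.query(points, k=k)
          normals = np.empty_like(points)
          planarity = np.empty(len(points))
          for i, nb in enumerate(idx):
              q = points[nb] - points[nb].mean(axis=0)
              w, v = np.linalg.eigh(q.T @ q / k)      # eigenvalues in ascending order
              normals[i] = v[:, 0]                    # smallest eigenvector = facet normal
              planarity[i] = (w[1] - w[0]) / w[2]     # high for coplanar neighbourhoods
          return normals, planarity

      # normals, planarity = local_normals_and_planarity(cloud_xyz)
      # candidate = planarity > 0.6    # points likely lying on a planar discontinuity facet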

  18. Applications of 3D-EDGE Detection for ALS Point Cloud

    NASA Astrophysics Data System (ADS)

    Ni, H.; Lin, X. G.; Zhang, J. X.

    2017-09-01

    Edge detection has been one of the major issues in the field of remote sensing and photogrammetry. With the fast development of the sensor technology of laser scanning systems, dense point clouds have become increasingly common. Precise 3D-edges can be detected from these point clouds, and a great number of edge or feature line extraction methods have been proposed. Among these methods is an easy-to-use 3D-edge detection method, AGPN (Analyzing Geometric Properties of Neighborhoods). The AGPN method detects edges based on the analysis of the geometric properties of a query point's neighbourhood. It detects two kinds of 3D-edges, boundary elements and fold edges, and it has many applications. This paper presents three applications of AGPN, i.e., 3D line segment extraction, ground point filtering, and ground breakline extraction. Experiments show that the AGPN method gives a straightforward solution to these applications.
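
    An illustrative sketch (not the AGPN implementation) of one geometric property such neighbourhood analysis can exploit is shown below: a point on a boundary edge leaves a large angular gap when its neighbours are projected onto the local tangent plane. The neighbourhood size and threshold are assumptions.

      import numpy as np
      from scipy.spatial import cKDTree

      def boundary_score(points, normals, k=20):
          """Largest angular gap of the projected neighbours around each point (radians)."""
          tree = cKDTree(points)
          _, idx = tree.query(points, k=k + 1)
          score = np.empty(len(points))
          for i, nb in enumerate(idx[:, 1:]):                  # skip the query point itself
              n = normals[i] / np.linalg.norm(normals[i])
              # build an orthonormal basis (a1, a2) of the local tangent plane
              helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
              a1 = np.cross(n, helper); a1 /= np.linalg.norm(a1)
              a2 = np.cross(n, a1)
              d = points[nb] - points[i]
              ang = np.sort(np.arctan2(d @ a2, d @ a1))        # neighbour directions in the plane
              gaps = np.append(np.diff(ang), 2.0 * np.pi - (ang[-1] - ang[0]))
              score[i] = gaps.max()                            # large gap -> likely boundary element
          return score

      # boundary = boundary_score(cloud_xyz, normals) > np.deg2rad(120)   # threshold is an assumption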

  19. Percolation analysis for cosmic web with discrete points

    NASA Astrophysics Data System (ADS)

    Zhang, Jiajun; Cheng, Dalong; Chu, Ming-Chung

    2016-03-01

    Percolation analysis has long been used to quantify the connectivity of the cosmic web. Unlike most of the previous works using density field on grids, we have studied percolation analysis based on discrete points. Using a Friends-of-Friends (FoF) algorithm, we generate the S-bb relation, between the fractional mass of the largest connected group (S) and the FoF linking length (bb). We propose a new model, the Probability Cloud Cluster Expansion Theory (PCCET) to relate the S-bb relation with correlation functions. We show that the S-bb relation reflects a combination of all orders of correlation functions. We have studied the S-bb relation with simulation and find that the S-bb relation is robust against redshift distortion and incompleteness in observation. From the Bolshoi simulation, with Halo Abundance Matching (HAM), we have generated a mock galaxy catalogue. Good matching of the projected two-point correlation function with observation is confirmed. However, comparing the mock catalogue with the latest galaxy catalogue from SDSS DR12, we have found significant differences in their S-bb relations. This indicates that the mock catalogue cannot accurately recover higher order correlation functions than the two-point correlation function, which reveals the limit of HAM method.
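
    The following is a hedged sketch of generating an S(b) curve for equal-mass tracer points with a Friends-of-Friends grouping: two points are "friends" if closer than the linking length b, and S is the mass (here, number) fraction in the largest connected group. This illustrates the definitions above, not the authors' pipeline or the PCCET model.

      import numpy as np
      from scipy.spatial import cKDTree
      from scipy.sparse import coo_matrix
      from scipy.sparse.csgraph import connected_components

      def largest_group_fraction(points, b):
          """Fraction of points in the largest FoF group for linking length b."""
          n = len(points)
          pairs = np.array(list(cKDTree(points).query_pairs(r=b)))
          if pairs.size == 0:
              return 1.0 / n
          adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
          _, labels = connected_components(adj, directed=False)
          return np.bincount(labels).max() / n

      # S_of_b = [(b, largest_group_fraction(positions, b)) for b in np.linspace(0.1, 5.0, 25)]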

  20. Sideloading - Ingestion of Large Point Clouds Into the Apache Spark Big Data Engine

    NASA Astrophysics Data System (ADS)

    Boehm, J.; Liu, K.; Alis, C.

    2016-06-01

    In the geospatial domain we have now reached the point where data volumes we handle have clearly grown beyond the capacity of most desktop computers. This is particularly true in the area of point cloud processing. It is therefore naturally lucrative to explore established big data frameworks for big geospatial data. The very first hurdle is the import of geospatial data into big data frameworks, commonly referred to as data ingestion. Geospatial data is typically encoded in specialised binary file formats, which are not naturally supported by the existing big data frameworks. Instead such file formats are supported by software libraries that are restricted to single CPU execution. We present an approach that allows the use of existing point cloud file format libraries on the Apache Spark big data framework. We demonstrate the ingestion of large volumes of point cloud data into a compute cluster. The approach uses a map function to distribute the data ingestion across the nodes of a cluster. We test the capabilities of the proposed method to load billions of points into a commodity hardware compute cluster and we discuss the implications on scalability and performance. The performance is benchmarked against an existing native Apache Spark data import implementation.
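
    A minimal sketch of the general idea (distributing file-based ingestion with a map over file names) follows; laspy is used here only as an example of a single-threaded LAS reader and is an assumption, not necessarily the library used in the paper, and the tile directory is hypothetical.

      import glob
      import numpy as np
      import laspy
      from pyspark.sql import SparkSession

      def read_las(path):
          """Runs on a worker node: read one LAS tile and return its points as lists."""
          las = laspy.read(path)
          return np.vstack((las.x, las.y, las.z)).T.tolist()

      spark = SparkSession.builder.appName("pointcloud-ingest").getOrCreate()
      files = glob.glob("/data/tiles/*.las")                   # hypothetical tile directory
      points = (spark.sparkContext
                .parallelize(files, numSlices=len(files))      # one task per tile
                .flatMap(read_las))                            # each worker parses its own files
      print(points.count())                                    # total number of ingested points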

  1. Forest understory trees can be segmented accurately within sufficiently dense airborne laser scanning point clouds.

    PubMed

    Hamraz, Hamid; Contreras, Marco A; Zhang, Jun

    2017-07-28

    Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning by offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the required density for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m² understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can efficiently be processed to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.

  2. The Iqmulus Urban Showcase: Automatic Tree Classification and Identification in Huge Mobile Mapping Point Clouds

    NASA Astrophysics Data System (ADS)

    Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.

    2016-06-01

    Current 3D data capture, as implemented on, for example, airborne or mobile laser scanning systems, is able to efficiently sample the surface of a city with billions of unselective points during one working day. What is still difficult is to extract and visualize the meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered into the tree class and are then separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.

  3. Automatic Modelling of Rubble Mound Breakwaters from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bueno, M.; Díaz-Vilariño, L.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P.

    2015-08-01

    Rubble mound breakwater maintenance is critical to the protection of beaches and ports. LiDAR systems provide accurate point clouds of the emerged part of the structure that can be modelled to make the data more useful and easier to handle. This work introduces a methodology for the automatic modelling of breakwaters with armour units of cubic shape. The algorithm is divided into three main steps: normal vector computation, plane segmentation, and cube reconstruction. Plane segmentation uses the normal orientation of the points and the edge length of the cube. Cube reconstruction uses the intersection of three perpendicular planes and the edge length. Three point clouds cropped from the main point cloud of the structure are used for the tests. The number of cubes detected, relative to the total number of physical cubes, is around 56% for two of the point clouds and 32% for the third. Accuracy is assessed by comparison with manually drawn cubes, calculating the differences between the vertices; it ranges between 6.4 cm and 15 cm. Computing time ranges between 578.5 s and 8018.2 s and increases with the number of cubes and the requirements of collision detection.
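
    A hedged sketch of the cube-reconstruction idea described above is given below: the corner shared by three (near-)perpendicular segmented faces is the intersection of their three planes, and the known edge length then fixes the remaining vertices. The plane parameters are assumed to come from the earlier segmentation step, and the sign convention of the normals (pointing along the cube edges) is an assumption.

      import numpy as np

      def corner_from_three_planes(normals, offsets):
          """Each plane is n·x = d; normals: (3, 3), offsets: (3,). Returns the shared corner."""
          return np.linalg.solve(np.asarray(normals, dtype=float), np.asarray(offsets, dtype=float))

      def cube_vertices(normals, offsets, edge):
          """Reconstruct the 8 vertices from one corner, three face normals and the edge length."""
          n = np.asarray(normals, dtype=float)
          n /= np.linalg.norm(n, axis=1, keepdims=True)
          c = corner_from_three_planes(n, offsets)
          # The three unit normals of mutually perpendicular faces serve as edge directions.
          return np.array([c + edge * (i * n[0] + j * n[1] + k * n[2])
                           for i in (0, 1) for j in (0, 1) for k in (0, 1)])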

  4. Application of Template Matching for Improving Classification of Urban Railroad Point Clouds

    PubMed Central

    Arastounia, Mostafa; Oude Elberink, Sander

    2016-01-01

    This study develops an integrated data-driven and model-driven approach (template matching) that clusters the urban railroad point clouds into three classes of rail track, contact cable, and catenary cable. The employed dataset covers 630 m of the Dutch urban railroad corridors in which there are four rail tracks, two contact cables, and two catenary cables. The dataset includes only geometrical information (three dimensional (3D) coordinates of the points) with no intensity data and no RGB data. The obtained results indicate that all objects of interest are successfully classified at the object level with no false positives and no false negatives. The results also show that an average 97.3% precision and an average 97.7% accuracy at the point cloud level are achieved. The high precision and high accuracy of the rail track classification (both greater than 96%) at the point cloud level stems from the great impact of the employed template matching method on excluding the false positives. The cables also achieve quite high average precision (96.8%) and accuracy (98.4%) due to their high sampling and isolated position in the railroad corridor. PMID:27973452

  5. Vertical stratification of forest canopy for segmentation of understory trees within small-footprint airborne LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Hamraz, Hamid; Contreras, Marco A.; Zhang, Jun

    2017-08-01

    Airborne LiDAR point cloud representing a forest contains 3D data, from which vertical stand structure even of understory layers can be derived. This paper presents a tree segmentation approach for multi-story stands that stratifies the point cloud to canopy layers and segments individual tree crowns within each layer using a digital surface model based tree segmentation method. The novelty of the approach is the stratification procedure that separates the point cloud to an overstory and multiple understory tree canopy layers by analyzing vertical distributions of LiDAR points within overlapping locales. The procedure does not make a priori assumptions about the shape and size of the tree crowns and can, independent of the tree segmentation method, be utilized to vertically stratify tree crowns of forest canopies. We applied the proposed approach to the University of Kentucky Robinson Forest - a natural deciduous forest with complex and highly variable terrain and vegetation structure. The segmentation results showed that using the stratification procedure strongly improved detecting understory trees (from 46% to 68%) at the cost of introducing a fair number of over-segmented understory trees (increased from 1% to 16%), while barely affecting the overall segmentation quality of overstory trees. Results of vertical stratification of the canopy showed that the point density of understory canopy layers were suboptimal for performing a reasonable tree segmentation, suggesting that acquiring denser LiDAR point clouds would allow more improvements in segmenting understory trees. As shown by inspecting correlations of the results with forest structure, the segmentation approach is applicable to a variety of forest types.

  6. Fusion of 3D models derived from TLS and image-based techniques for CH enhanced documentation

    NASA Astrophysics Data System (ADS)

    Bastonero, P.; Donadio, E.; Chiabrando, F.; Spanò, A.

    2014-05-01

    Recognizing the various advantages offered by new 3D metric survey technologies in the Cultural Heritage documentation phase, this paper presents some tests of 3D model generation, using different methods, and of their possible fusion. With the aim of defining the potential and the problems deriving from the integration or fusion of metric data acquired with different survey techniques, the selected test case is an outstanding Cultural Heritage item, presenting both widespread and specific complexities connected to the conservation of historical buildings. The site is the Staffarda Abbey, the most relevant evidence of medieval architecture in Piedmont. This application faced one of the most topical architectural issues: the opportunity to study and analyze an object as a whole, from two sensor acquisition locations, both terrestrial and aerial. In particular, the work consists in evaluating the possibilities deriving from a simple union or from the fusion of different 3D cloud models of the abbey, achieved by multi-sensor techniques. The aerial survey is based on a photogrammetric RPAS (Remotely Piloted Aircraft System) flight, while the terrestrial acquisition was accomplished by a laser scanning survey. Both techniques allowed different point clouds to be extracted and processed and the corresponding continuous 3D models to be generated, which are characterized by different scales, that is to say, different resolutions and different levels of detail and precision. Starting from these models, the proposed process, applied to a sample area of the building, aimed to test the generation of a unique 3D model through a fusion of the different sensor point clouds. The descriptive potential and the metric and thematic gains achievable with the final model clearly exceeded those offered by the two separate models.

  7. Unmanned aerial vehicle (UAV)-based monitoring of a landslide: Gallenzerkogel landslide (Ybbs-Lower Austria) case study.

    PubMed

    Eker, Remzi; Aydın, Abdurrahim; Hübl, Johannes

    2017-12-19

    In the present study, UAV-based monitoring of the Gallenzerkogel landslide (Ybbs, Lower Austria) was carried out with three flight missions. High-resolution digital elevation models (DEMs), orthophotos, and dense point clouds were generated from UAV-based aerial photos via structure-from-motion (SfM). According to the ground control points (GCPs), an average root mean square error (RMSE) of 4 cm was found for all models. In addition, light detection and ranging (LIDAR) data from 2009, representing the pre-failure topography, were utilized as a digital terrain model (DTM) and digital surface model (DSM). First, the DEM of difference (DoD) between the first UAV flight data and the LIDAR-DTM was determined, and according to the generated DoD deformation map, an elevation difference of between -6.6 and 2 m was found. Over the landslide area, a total of 4380.1 m³ of slope material had been eroded, while 297.4 m³ of material had accumulated within the most active part of the slope. In addition, 688.3 m³ of the total eroded material belonged to the road destroyed by the landslide. Because of the vegetation surrounding the landslide area, the Multiscale Model-to-Model Cloud Comparison (M3C2) algorithm was then applied to compare the first and second UAV flight data. After eliminating both the distance uncertainty values higher than 15 cm and the non-significant changes, the M3C2 distances obtained were between -2.5 and 2.5 m. Moreover, the high-resolution orthophoto generated by the third flight allowed visual monitoring of the ongoing control/stabilization work in the area.
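
    A small sketch of the DEM-of-difference (DoD) computation behind the quoted volumes follows, assuming the two elevation models are already co-registered NumPy grids with a common cell size; the uncertainty thresholding used for the M3C2 comparison is not shown, and the change threshold is an assumption.

      import numpy as np

      def dod_volumes(dem_post, dem_pre, cell_size, min_change=0.05):
          """Return (eroded_volume, deposited_volume) in cubic metres from two co-registered DEMs."""
          dod = dem_post - dem_pre                       # negative = surface lowering (erosion)
          dod = np.where(np.abs(dod) < min_change, 0.0, dod)   # ignore changes below detection level
          cell_area = cell_size ** 2
          eroded = -dod[dod < 0].sum() * cell_area
          deposited = dod[dod > 0].sum() * cell_area
          return eroded, deposited

      # eroded, deposited = dod_volumes(uav_dem, lidar_dtm_2009, cell_size=0.5)  # names are hypothetical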

  8. A classifying method analysis on the number of returns for given pulse of post-earthquake airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Wang, Jinxia; Dou, Aixia; Wang, Xiaoqing; Huang, Shusong; Yuan, Xiaoxiang

    2016-11-01

    Compared to remote sensing imagery, post-earthquake airborne Light Detection And Ranging (LiDAR) point cloud data contain high-precision three-dimensional information on earthquake damage, which can improve the accuracy of identifying destroyed buildings. However, after an earthquake, damaged buildings show so many different characteristics that tree points and damaged-building points cannot currently be distinguished by the most commonly used pre-processing methods. In this study, we analyse the number of returns per pulse for tree and damaged-building point clouds and explore methods to distinguish tree points from damaged-building points. We propose a new method that searches a neighbourhood of a certain number of points around each point and calculates the ratio (R) of neighbourhood points whose number of returns per pulse is greater than 1, in order to separate trees from buildings. We select sample point clouds of typical undamaged buildings, collapsed buildings and trees, by means of human-computer interaction, from airborne LiDAR point cloud data acquired after the 2010 Mw 7.0 Haiti earthquake. The R-value that distinguishes trees from buildings is determined by testing and is then applied to the test areas. The experimental results show that the proposed method can distinguish building points (undamaged and damaged) from tree points effectively, but it is limited in areas where buildings are varied, damage is complex and trees are dense, so the method will need further improvement.
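
    The following is a hedged sketch of the proposed ratio R: for each point, a fixed number of spatial neighbours is inspected and the fraction whose number of returns per pulse exceeds 1 is computed; vegetation tends to produce multiple returns, so a high R suggests a tree point. Neighbourhood size and the classification threshold are assumptions.

      import numpy as np
      from scipy.spatial import cKDTree

      def returns_ratio(xyz, num_returns, k=30):
          """xyz: (N, 3) coordinates; num_returns: (N,) number of returns of each point's pulse."""
          _, idx = cKDTree(xyz).query(xyz, k=k)
          return (num_returns[idx] > 1).mean(axis=1)     # R value per point

      # R = returns_ratio(points_xyz, points_num_returns)
      # tree_like = R > 0.5        # the actual threshold must be learned from the training samples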

  9. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

    Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, the residual value distribution being also normal, but in the case of the tests on real data the residual value distribution is often mixed or unknown. The residual values are found to depend mainly on two input parameters (standard deviation and maximum point-plane distance, both defining distance thresholds for assigning points to a segment), and the curvature of the surface affected the distributions most. The results of the analysis helped to decide which parameter set is best for further modelling and provides the highest accuracy. With these results in mind, quasi-automatic modelling of planar (for example plateau-like) features became more successful and often more accurate. These studies were carried out partly in the framework of the TMIS.ascrea project (Nr. 2001978) financed by the Austrian Research Promotion Agency (FFG); the contribution of ZsK was partly funded by Campus Hungary Internship TÁMOP-424B1.

  10. Diffuse cloud chemistry. [in interstellar matter

    NASA Technical Reports Server (NTRS)

    Van Dishoeck, Ewine F.; Black, John H.

    1988-01-01

    The current status of models of diffuse interstellar clouds is reviewed. A detailed comparison of recent gas-phase steady-state models shows that both the physical conditions and the molecular abundances in diffuse clouds are still not fully understood. Alternative mechanisms are discussed and observational tests which may discriminate between the various models are suggested. Recent developments regarding the velocity structure of diffuse clouds are mentioned. Similarities and differences between the chemistries in diffuse clouds and those in translucent and high latitude clouds are pointed out.

  11. The pointing errors of geosynchronous satellites

    NASA Technical Reports Server (NTRS)

    Sikdar, D. N.; Das, A.

    1971-01-01

    A study of the correlation between cloud motion and wind field was initiated. Cloud heights and displacements were being obtained from a ceilometer and movie pictures, while winds were measured from pilot balloon observations on a near-simultaneous basis. Cloud motion vectors were obtained from time-lapse cloud pictures, using the WINDCO program, for 27, 28 July, 1969, in the Atlantic. The relationship between observed features of cloud clusters and the ambient wind field derived from cloud trajectories on a wide range of space and time scales is discussed.

  12. Development of Three-Dimensional Dental Scanning Apparatus Using Structured Illumination

    PubMed Central

    Park, Anjin; Lee, Byeong Ha; Eom, Joo Beom

    2017-01-01

    We demonstrated a three-dimensional (3D) dental scanning apparatus based on structured illumination. A liquid lens was used for tuning focus and a piezomotor stage was used for the shift of structured light. A simple algorithm, which detects intensity modulation, was used to perform optical sectioning with structured illumination. We reconstructed a 3D point cloud, which represents the 3D coordinates of the digitized surface of a dental gypsum cast by piling up sectioned images. We performed 3D registration of an individual 3D point cloud, which includes alignment and merging the 3D point clouds to exhibit a 3D model of the dental cast. PMID:28714897
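
    One standard way to recover an optically sectioned image from three structured-illumination frames shifted by 2π/3 is the square-root-of-differences demodulation sketched below; the abstract's "simple algorithm detecting intensity modulation" is of this kind, but the exact variant used by the authors is not specified, so this is illustrative only (result is up to a constant scale factor).

      import numpy as np

      def sectioned_image(i1, i2, i3):
          """i1, i2, i3: 2D arrays captured with grid phases 0, 2π/3 and 4π/3."""
          # Only in-focus structures are modulated by the projected grid, so the modulation
          # amplitude acts as an optical section; out-of-focus light largely cancels out.
          return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

      # section = sectioned_image(frame0, frame1, frame2)   # repeated per focus position of the liquid lens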

  13. Automatic Detection and Classification of Pole-Like Objects for Urban Cartography Using Mobile Laser Scanning Data

    PubMed Central

    Ordóñez, Celestino; Cabo, Carlos; Sanz-Ablanedo, Enoc

    2017-01-01

    Mobile laser scanning (MLS) is a modern and powerful technology capable of obtaining massive point clouds of objects in a short period of time. Although this technology is nowadays being widely applied in urban cartography and 3D city modelling, it has some drawbacks that need to be avoided in order to strengthen it. One of the most important shortcomings of MLS data is concerned with the fact that it provides an unstructured dataset whose processing is very time-consuming. Consequently, there is a growing interest in developing algorithms for the automatic extraction of useful information from MLS point clouds. This work is focused on establishing a methodology and developing an algorithm to detect pole-like objects and classify them into several categories using MLS datasets. The developed procedure starts with the discretization of the point cloud by means of a voxelization, in order to simplify and reduce the processing time in the segmentation process. In turn, a heuristic segmentation algorithm was developed to detect pole-like objects in the MLS point cloud. Finally, two supervised classification algorithms, linear discriminant analysis and support vector machines, were used to distinguish between the different types of poles in the point cloud. The predictors are the principal component eigenvalues obtained from the Cartesian coordinates of the laser points, the range of the Z coordinate, and some shape-related indexes. The performance of the method was tested in an urban area with 123 poles of different categories. Very encouraging results were obtained, since the accuracy rate was over 90%. PMID:28640189

  14. Estimating Aircraft Heading Based on Laserscanner Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C., K.

    2015-03-01

    Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane. The point clouds captured at different times were used for heading estimation. After addressing the problem and specifying the equation of motion to reconstruct the aircraft point cloud from the consecutive scans, three methods are investigated here. The first requires a reference model to estimate the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is used between consecutive point clouds to determine the horizontal translation of the captured aircraft body. Regarding ICP, three different versions were compared, namely, the ordinary 3D, 3-DoF 3D and 2-DoF 3D ICP. It was found that the 2-DoF 3D ICP provides the best performance. Finally, the last algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed plane. The three methods were compared using three test data types, which are distinguished by object-sensor distance, heading and velocity. We found that the ICP algorithm fails at long distances and when the aircraft motion direction is perpendicular to the scan plane, but the first and third methods give robust and accurate results at 40 m object distance and at ~12 knots for a small Cessna airplane.

  15. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    NASA Astrophysics Data System (ADS)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    In order to address the lack of an applicable analysis method when applying three-dimensional laser scanning technology to the field of deformation monitoring, an efficient method for extracting datum features and analysing deformation based on point cloud normal vectors is proposed. Firstly, a kd-tree is used to establish the topological relations. Datum points are detected by tracking the point cloud normal vectors, which are determined from the normal vectors of local planes. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial points are calculated according to the fitted curve, and the deformation information is then analysed. The proposed approach was verified on a real large-scale tank dataset captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain complete information about the monitored object quickly and comprehensively, and accurately reflects the deformation of the datum features.
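
    A brief sketch of the curve-fitting step is given below: once datum points have been extracted and ordered along the feature, a cubic B-spline is fitted so that elevations and inclination angles can be read off the smooth curve. SciPy's splprep/splev are used purely for illustration and the smoothing value is an assumption, not the authors' implementation.

      import numpy as np
      from scipy.interpolate import splprep, splev

      def fit_datum_curve(datum_points, smoothing=0.01, n_samples=500):
          """datum_points: (N, 3) array of detected datum points, ordered along the feature."""
          x, y, z = datum_points[:, 0], datum_points[:, 1], datum_points[:, 2]
          tck, _ = splprep([x, y, z], k=3, s=smoothing)         # cubic B-spline fit
          u = np.linspace(0.0, 1.0, n_samples)
          cx, cy, cz = splev(u, tck)
          dx, dy, dz = splev(u, tck, der=1)                     # first derivative along the curve
          inclination = np.degrees(np.arctan2(dz, np.hypot(dx, dy)))
          return np.column_stack((cx, cy, cz)), inclination

      # curve_xyz, incl_deg = fit_datum_curve(datum_pts)        # datum_pts is hypothetical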

  16. Stability analysis of chalk sea cliffs using UAV photogrammetry

    NASA Astrophysics Data System (ADS)

    Barlow, John; Gilham, Jamie

    2017-04-01

    Cliff erosion and instability pose a significant hazard to communities and infrastructure located in coastal areas. We use point cloud and spectral data derived from close-range digital photogrammetry to assess the stability of the chalk sea cliffs located at Telscombe, UK. Data captured from an unmanned aerial vehicle (UAV) were used to generate dense point clouds for a 712 m section of cliff face which ranges from 20 to 49 m in height. The generated models fitted our ground control network within a standard error of 0.03 m. Structural features such as joints, bedding planes, and faults were manually mapped and are consistent with results from other studies that have been conducted using direct measurement in the field. Kinematic analysis of these data was used to identify the primary modes of failure at the site. Our results indicate that wedge failure is by far the most likely mode of slope instability. An analysis of sequential surveys taken from the summer of 2016 to the winter of 2017 indicates that several large failures occurred at the site. We establish the volume of each failure through change detection between sequential datasets and use back analysis to determine the strength of the shear surfaces for each failure. Our results show that data capture through UAV photogrammetry can provide useful information for slope stability analysis over long sections of cliff. The use of this technology offers significant benefits in equipment costs and field time over existing methods.

  17. FPGA Based Adaptive Rate and Manifold Pattern Projection for Structured Light 3D Camera System †

    PubMed Central

    Lee, Sukhan

    2018-01-01

    The quality of the captured point cloud and the scanning speed of a structured light 3D camera system depend upon its capability of handling object surfaces with a large reflectance variation, traded off against the required number of patterns to be projected. In this paper, we propose and implement a flexible embedded framework that is capable of triggering the camera once or multiple times for capturing single or multiple projections within a single camera exposure setting. This allows the 3D camera system to synchronize the camera and projector even for mismatched frame rates, so that the system is capable of projecting different types of patterns for different scan-speed applications. This lets the system capture a high-quality 3D point cloud even for surfaces with a large reflectance variation, while achieving a high scan speed. The proposed framework is implemented on a Field Programmable Gate Array (FPGA), where the camera trigger is adaptively generated in such a way that the position and the number of triggers are automatically determined according to the camera exposure settings. In other words, the projection frequency is adaptive to different scanning applications without altering the architecture. In addition, the proposed framework is unique in that it does not require any external memory for storage, because the pattern pixels are generated in real time, which minimizes the complexity and size of the application-specific integrated circuit (ASIC) design and implementation.

  18. Use of Assisted Photogrammetry for Indoor and Outdoor Navigation Purposes

    NASA Astrophysics Data System (ADS)

    Pagliari, D.; Cazzaniga, N. E.; Pinto, L.

    2015-05-01

    Nowadays, the number of devices and applications that require navigation solutions is continuously growing; consider, for instance, the increasing demand for mapping information or the development of location-based applications. In some cases an approximate solution (e.g. at room level) may be sufficient, but in most cases a better solution is required. The navigation problem has long been solved using Global Navigation Satellite Systems (GNSS); however, GNSS can be useless in obstructed areas, such as dense urban environments or building interiors. An interesting low-cost alternative is photogrammetry, assisted by additional information to scale the photogrammetric problem and to recover a solution even in situations that are critical for image-based methods (e.g. poorly textured surfaces). In this paper, the use of assisted photogrammetry has been tested for both outdoor and indoor scenarios. The outdoor navigation problem was addressed by developing a positioning system that uses Ground Control Points extracted from urban maps as constraints and tie points automatically extracted from the images acquired during the survey. The proposed approach was tested under different scenarios, recovering the followed trajectory with an accuracy of 0.20 m. For indoor navigation, a solution was devised that integrates the data delivered by a Microsoft Kinect: interesting features are identified on the RGB images and re-projected onto the point clouds generated from the delivered depth maps. These points are then used to estimate the rotation matrix between subsequent point clouds and, consequently, to recover the trajectory with an error of a few centimetres.
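    A standard way to estimate the rotation (and translation) between two sets of matched 3D features, such as the Kinect features described above, is the SVD-based rigid alignment (Kabsch) sketched below; the authors' exact estimator is not specified in the abstract.

    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping matched points src onto dst (Kabsch/SVD)."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = c_dst - R @ c_src
        return R, t
    ```

    Chaining the per-frame transforms estimated this way yields the sensor trajectory.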

  19. Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System

    NASA Astrophysics Data System (ADS)

    Chan, T. O.; Lichti, D. D.; Belton, D.

    2013-10-01

    At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a predominant sensor for applications that require low sensor size, weight and cost. For high-accuracy applications, cost-effective calibration methods with minimal manual intervention are always desired by users. However, calibration is complicated by the Velodyne LiDAR's narrow vertical field of view and the highly time-variant nature of its measurements. In this paper, the temporal stability of the HDL-32E is first analysed as the motivation for developing a new, automated calibration method. This is followed by a detailed description of the calibration method, which is driven by a novel segmentation method for extracting vertical cylindrical features from the Velodyne point clouds. The proposed segmentation exploits the slice-like nature of the Velodyne point cloud and first decomposes the cloud into 2D layers. The layers are then treated as 2D images and processed with the Generalized Hough Transform, which extracts points distributed in circular patterns from the point cloud layers. Subsequently, the vertical cylindrical features can be readily extracted from the whole point cloud based on the previously extracted points. These points are passed to the calibration, which estimates the cylinder parameters and the LiDAR's additional parameters simultaneously by constraining the segmented points to fit the cylindrical geometric model such that the weighted sum of the adjustment residuals is minimized. The proposed calibration is highly automatic, allowing end users to obtain the time-variant additional parameters instantly and as frequently as needed whenever vertical cylindrical features are present in the scene. The methods were verified with two different real datasets, and the results suggest that an accuracy improvement of up to 78.43% can be achieved for the HDL-32E using the proposed calibration method.
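    The layer-wise circle extraction step can be illustrated with the sketch below, which rasterises one 2D layer of the cloud and applies scikit-image's circular Hough transform (a simplification of the Generalized Hough Transform used in the paper). The cell size, candidate radii and number of peaks are illustrative values, not the paper's parameters.

    ```python
    import numpy as np
    from skimage.transform import hough_circle, hough_circle_peaks

    def detect_circles_in_layer(layer_xy, cell=0.05, radii_m=(0.2, 0.3, 0.4), num_peaks=3):
        """Rasterise one 2D layer of the scan and detect circular cross-sections."""
        origin = layer_xy.min(axis=0)
        ij = np.floor((layer_xy - origin) / cell).astype(int)
        img = np.zeros(tuple(ij.max(axis=0) + 1), dtype=bool)
        img[ij[:, 0], ij[:, 1]] = True                       # row index <- x, column index <- y
        radii_px = np.round(np.asarray(radii_m) / cell).astype(int)
        accum = hough_circle(img, radii_px)
        _, cx, cy, r_px = hough_circle_peaks(accum, radii_px, total_num_peaks=num_peaks)
        centres = origin + np.column_stack([cy, cx]) * cell  # cy = row -> x, cx = column -> y
        return centres, r_px * cell
    ```

    Points near the detected circles across many layers would then be pooled per cylinder and passed to the joint cylinder/LiDAR parameter estimation.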

  20. A Low-Cost Approach to Automatically Obtain Accurate 3D Models of Woody Crops.

    PubMed

    Bengochea-Guevara, José M; Andújar, Dionisio; Sanchez-Sardana, Francisco L; Cantuña, Karla; Ribeiro, Angela

    2017-12-24

    Crop monitoring is an essential practice within the field of precision agriculture, since it is based on observing, measuring and properly responding to inter- and intra-field variability. In particular, "on ground crop inspection" potentially allows early detection of certain crop problems or precision treatment to be carried out simultaneously with pest detection. "On ground monitoring" is also of great interest for woody crops. This paper explores the development of a low-cost crop monitoring system that can automatically create accurate 3D models (clouds of coloured points) of woody crop rows. The system consists of a mobile platform that allows easy acquisition of information in the field at an average speed of 3 km/h. Among other sensors, the platform integrates an RGB-D sensor that provides RGB information as well as an array of distances to the objects closest to the sensor. The RGB-D information, plus the geographical positions of relevant points such as the starting and ending points of the row, allows the generation of a 3D reconstruction of a woody crop row in which every point of the cloud has a geographical location as well as RGB colour values. The proposed approach for automatic 3D reconstruction is not limited by the size of the sampled space and includes a method for removing the drift that appears in the reconstruction of large crop rows.
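    The abstract does not detail the drift-removal method. One simple illustrative correction, assuming the drift grows roughly linearly along the row and that the reconstruction is already anchored at the known start-of-row position, distributes the end-of-row misclosure proportionally along the trajectory:

    ```python
    import numpy as np

    def remove_linear_drift(points, progress, end_position):
        """Distribute the end-of-row misclosure linearly along the reconstruction.

        points       : Nx3 reconstructed coordinates, assumed already anchored at the
                       known start-of-row position.
        progress     : per-point progress in [0, 1] along the row (e.g. normalised frame index).
        end_position : known 3D position of the end of the row (e.g. from GNSS).

        Assumes drift grows roughly linearly with distance along the row; this is an
        illustrative correction, not the method described in the paper.
        """
        misclosure = end_position - points[np.argmax(progress)]
        return points + np.outer(progress, misclosure)
    ```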
