Sample records for point clouds acquired

  1. A portable low-cost 3D point cloud acquiring method based on structure light

    NASA Astrophysics Data System (ADS)

    Gui, Li; Zheng, Shunyi; Huang, Xia; Zhao, Like; Ma, Hao; Ge, Chao; Tang, Qiuxia

    2018-03-01

    A fast, low-cost method of acquiring 3D point cloud data is proposed in this paper. Using only one pair of cheap cameras and a projector, it addresses the lack of surface texture and the low efficiency of point cloud acquisition. First, we put forward a scene-adaptive design method for a random encoding pattern: a coding pattern is projected onto the target surface to create texture information that favours image matching. We then design an efficient dense matching algorithm fitted to the projected texture. After global optimization and multi-kernel parallel development combining hardware and software, a fast point-cloud acquisition system is accomplished. Evaluation of point cloud accuracy shows that point clouds acquired by the proposed method have higher precision. What's more, the scanning speed meets the demands of dynamic scenes and has good practical application value.
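The pattern-projection step above can be sketched as follows; this is a minimal illustration assuming a plain binary speckle pattern with a tunable dot density (the paper's scene-adaptive design derives these parameters from the scene, which is not reproduced here):

```python
import numpy as np

def random_speckle_pattern(height, width, dot_fraction=0.3, seed=0):
    """Generate a binary random speckle pattern to project onto the scene.

    dot_fraction is a hypothetical tuning knob (fraction of bright pixels);
    a scene-adaptive design would choose speckle size/density from scene
    properties such as surface reflectance and camera-projector baseline.
    """
    rng = np.random.default_rng(seed)
    return (rng.random((height, width)) < dot_fraction).astype(np.uint8) * 255

pattern = random_speckle_pattern(480, 640)
```

Projecting such a pattern gives otherwise textureless surfaces the local uniqueness that dense stereo matching needs.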

  2. Point clouds segmentation as base for as-built BIM creation

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2015-08-01

    In this paper, a three-step segmentation approach is proposed in order to create 3D models from point clouds acquired by TLS inside buildings. The three scales of segmentation are floors, rooms, and the planes composing the rooms. First, floor segmentation is performed based on an analysis of the point distribution along the Z axis. Then, for each floor, room segmentation is achieved by considering a slice of the point cloud at ceiling level. Finally, planes are segmented for each room, and the planes corresponding to ceilings and floors are identified. Results of each step are analysed and potential improvements are proposed. Based on the segmented point clouds, the creation of as-built BIM is considered in a future work section: not only the classification of planes into several categories is proposed, but the potential use of point clouds acquired outside buildings is also considered.
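The first step (floor segmentation from the Z distribution) can be sketched roughly as follows; bin size and peak threshold are illustrative values, not the authors':

```python
import numpy as np

def segment_floors(points, bin_size=0.1, peak_fraction=0.5):
    """Find candidate floor/ceiling levels from the Z histogram.

    Dense horizontal structures (floor and ceiling slabs) appear as peaks
    in the distribution of points along Z. bin_size and peak_fraction are
    assumed parameters for this sketch.
    """
    z = points[:, 2]
    bins = np.arange(z.min(), z.max() + bin_size, bin_size)
    hist, edges = np.histogram(z, bins=bins)
    # left edges of bins whose count exceeds a fraction of the largest peak
    peaks = edges[:-1][hist > peak_fraction * hist.max()]
    return hist, peaks
```

Points between two consecutive slab levels would then be assigned to one storey before room and plane segmentation.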

  3. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) method are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method both on clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, comparing the reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.

  4. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) method are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method both on clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, comparing the reconstruction results against those from the variational method. On clinical point clouds, both the SR and MSR models achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.
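The SR idea of approximating the target cloud as a sparse linear combination of training clouds can be sketched with a generic L1-regularized least-squares solver (ISTA); the solver choice and the regularization weight here are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def ista_sparse_weights(D, y, lam=0.1, n_iter=200):
    """Solve min_w 0.5 * ||D w - y||^2 + lam * ||w||_1 by ISTA.

    D: (m, k) dictionary whose columns are flattened training point clouds
    (after ICP correspondence); y: the flattened target cloud. A minimal
    sketch of the sparse-regression idea with an assumed solver/lambda.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ w - y)              # gradient step
        w = w - g / L
        # soft-thresholding (proximal operator of the L1 penalty)
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)
    return w
```

The reconstructed surface would then be the same sparse combination applied to the training surfaces, which is what makes the method fast at query time.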

  5. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) method are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method both on clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, comparing the reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications. PMID:27147347

  6. Automatic Recognition of Indoor Navigation Elements from Kinect Point Clouds

    NASA Astrophysics Data System (ADS)

    Zeng, L.; Kang, Z.

    2017-09-01

    This paper automatically recognizes the navigation elements defined by the IndoorGML data standard: doors, stairways and walls. The data used are indoor 3D point clouds collected by a Kinect v2 through ORB-SLAM. Compared with lidar, this sensor is cheaper and more convenient, but its point clouds also suffer from noise, registration error and large data volume. Hence, we adopt a shape descriptor proposed by Osada (the histogram of distances between two randomly chosen points) in conjunction with a random forest classifier to recognize the navigation elements (doors, stairways and walls) from the Kinect point clouds. This research acquires the navigation elements and their 3D location information from each single data frame through segmentation of the point cloud, boundary extraction, feature calculation and classification. Finally, this paper uses the acquired navigation elements and their information to generate the state data of the indoor navigation module automatically. The experimental results demonstrate a high recognition accuracy for the proposed method.
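Osada's distance-histogram descriptor mentioned above (the D2 shape distribution) is straightforward to sketch; the pair count, bin count and distance range below are illustrative choices:

```python
import numpy as np

def d2_descriptor(points, n_pairs=10000, n_bins=32, max_dist=None, seed=0):
    """D2 shape distribution: normalized histogram of distances between
    randomly chosen point pairs. Rotation- and translation-invariant,
    which makes it usable as a feature vector for a classifier."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    if max_dist is None:
        max_dist = d.max()
    hist, _ = np.histogram(d, bins=n_bins, range=(0, max_dist))
    return hist / hist.sum()
```

The resulting fixed-length vectors can be fed directly to a random forest, as the paper does for door/stairway/wall segments.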

  7. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    2016-06-15

    Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurements for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods, which are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or the presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method both on clinically acquired point clouds under consistent conditions and on simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real-time and robust surface reconstruction method on point clouds acquired by photogrammetry systems. It serves as an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.

  8. Traffic sign detection in MLS acquired point clouds for geometric and image-based semantic inventory

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro

    2016-04-01

    Nowadays, mobile laser scanning has become a valid technology for infrastructure inspection. It permits collecting accurate 3D point clouds of urban and road environments, and the geometric and semantic analysis of these data has become an active research topic in recent years. This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system comprising laser scanners and RGB cameras. Each traffic sign is automatically detected in the LiDAR point cloud, and its main geometric parameters can be automatically extracted, thereby aiding the inventory process. Furthermore, the 3D positions of the traffic signs are reprojected onto the 2D images, which are spatially and temporally synced with the point cloud. Image analysis allows recognizing the traffic sign semantics using machine learning approaches. The presented method was tested in road and urban scenarios in Galicia (Spain). The recall for traffic sign detection is close to 98%, and existing false positives can be easily filtered after point cloud projection. Finally, the lack of a large, publicly available Spanish traffic sign database is pointed out.
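The reprojection of 3D sign positions onto the synced 2D images amounts to a standard pinhole projection; the intrinsics/pose variable names below are generic illustrations, not the LYNX system's actual calibration format:

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project 3D points (world frame) to pixel coordinates: x ~ K [R|t] X.

    K is the 3x3 camera intrinsic matrix; R, t the world-to-camera pose
    that a mobile mapping system stores per exposure (assumed names).
    """
    X_cam = points_world @ R.T + t   # world frame -> camera frame
    uvw = X_cam @ K.T                # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide
```

Each detected sign centre can be projected this way into every synced image, and detections without image support discarded as false positives.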

  9. Road Signs Detection and Recognition Utilizing Images and 3d Point Cloud Acquired by Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y. H.; Shinohara, T.; Satoh, T.; Tachibana, K.

    2016-06-01

    High-definition and highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is needed that can acquire information about all kinds of road signs automatically and efficiently. Owing to the continuous technical advancement of Mobile Mapping Systems (MMS), it has become possible to acquire large numbers of images and 3D point clouds efficiently, with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, the candidate regions and camera information, and 3) road sign recognition using a template matching method after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show a very high success rate in the detection and recognition of road signs, even under challenging conditions such as discoloration, deformation and partial occlusion.

  10. Classification by Using Multispectral Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Liao, C. T.; Huang, H. H.

    2012-07-01

    Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. Their semantic content is clearly visualized, so ground features can be recognized and classified easily via supervised or unsupervised classification methods. Nevertheless, multispectral images depend highly on light conditions, and their classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. Its advantages are a high data acquisition rate, independence from light conditions, and the ability to directly produce three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the shortage of multispectral information, which remains a challenge for ground feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible light and near-infrared images via close-range photogrammetry and matches the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, given thresholds on height and color information are used for classification.

  11. Georeferencing UAS Derivatives Through Point Cloud Registration with Archived Lidar Datasets

    NASA Astrophysics Data System (ADS)

    Magtalas, M. S. L. Y.; Aves, J. C. L.; Blanco, A. C.

    2016-10-01

    Georeferencing gathered images is a common step before performing spatial analysis and other processes on datasets acquired using unmanned aerial systems (UAS). Spatial information is applied to aerial images or their derivatives through onboard GPS (Global Positioning System) geotagging, or by tying models to GCPs (Ground Control Points) acquired in the field. Currently, UAS derivatives are limited to meter-level accuracy when their generation is unaided by points of known position on the ground. The use of ground control points established using survey-grade GPS or GNSS receivers can greatly reduce model errors to centimeter levels. However, this comes with additional costs, not only in instrument acquisition and survey operations but also in actual time spent in the field. This study uses a workflow for cloud-based post-processing of UAS data in combination with already existing LiDAR data. The georeferencing of the UAV point cloud is executed using the Iterative Closest Point (ICP) algorithm. It is applied through the open-source CloudCompare software (Girardeau-Montaut, 2006) on a 'skeleton point cloud'. This skeleton point cloud consists of manually extracted features consistent in both the LiDAR and UAV data. For this cloud, roads and buildings with minimal deviations given their differing dates of acquisition are considered consistent. Transformation parameters are computed for the skeleton cloud and can then be applied to the whole UAS dataset. In addition, a separate cloud consisting of non-vegetation features automatically derived using the CANUPO classification algorithm (Brodu and Lague, 2012) was used to generate a separate set of parameters. A ground survey was done to validate the transformed clouds. An RMSE value of around 16 centimeters was found when comparing validation data to the models georeferenced using the CANUPO cloud and the manual skeleton cloud. Cloud-to-cloud distance computations of the CANUPO and manual skeleton clouds both yielded values of around 0.67 meters at 1.73 standard deviation.
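The ICP alignment applied to the skeleton cloud repeatedly solves a best-fit rigid transform for the current point correspondences; that inner least-squares step (the standard SVD/Kabsch solution, not CloudCompare's internal code) can be sketched as:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src -> dst for
    corresponding points, via the SVD (Kabsch) solution. This is the
    closed-form step executed inside every ICP iteration."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Solving this on a small skeleton of stable features, then applying R and t to the full UAS cloud, is exactly the transfer of transformation parameters the study describes.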

  12. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

    A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. To do so, multiple depth cameras are utilized to acquire depth information around the object. The 3D point cloud representations of the real object are then reconstructed from the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by the depth cameras are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized model at the angular step of the given anamorphic optic system. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display is an excellent way to display a real object in the 360-degree viewing zone.

  13. Automatic registration of terrestrial point clouds based on panoramic reflectance images and efficient BaySAC

    NASA Astrophysics Data System (ADS)

    Kang, Zhizhong

    2013-10-01

    This paper presents a new approach to the automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from the 3D point clouds and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which the transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probability of each data point using a simplified Bayes rule, in order to improve computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC needs fewer iterations and incurs lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm achieves high registration accuracy on all experimental datasets.
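The conditional sampling and probability update can be sketched generically as below; the down-weighting factor, support threshold and model callbacks are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def baysac(data, fit, residual, n_sample, n_iter=50, thresh=0.05,
           min_support=0.5, p0=0.5):
    """BaySAC sketch: deterministically pick the n_sample points with the
    highest current inlier probabilities as the hypothesis set, fit a
    model, and down-weight the hypothesis points when the model finds
    little support (a simplified Bayes-style update). fit/residual are
    user-supplied callables; thresh, min_support and p0 are illustrative."""
    p = np.full(len(data), p0)
    best_model, best_count = None, -1
    for _ in range(n_iter):
        idx = np.argsort(-p)[:n_sample]      # conditional (non-random) sample
        model = fit(data[idx])
        inliers = residual(model, data) < thresh
        if inliers.sum() > best_count:
            best_model, best_count = model, int(inliers.sum())
        if inliers.mean() < min_support:     # hypothesis likely contaminated
            p[idx] *= 0.5                    # simplified posterior down-weight
    return best_model, best_count
```

Because contaminated hypothesis sets lose probability mass, outliers quickly stop being sampled, which is the source of BaySAC's reduced iteration count relative to RANSAC.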

  14. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera, and the point cloud model is reconstructed virtually. Because each point lies precisely at the coordinates of one depth layer, the points can be classified into grids according to their depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain the CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
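The gridding-plus-FFT idea can be sketched as a layer-based angular-spectrum propagation; grid size, pixel pitch and wavelength below are illustrative values, and the paper's exact diffraction formulation may differ:

```python
import numpy as np

def layer_hologram(points, amps, z_layers, N=256, pitch=8e-6, wl=532e-9):
    """Point-cloud-gridding CGH sketch: snap each point to its nearest
    depth layer, rasterize the layer, then diffract it to the hologram
    plane with an FFT-based angular-spectrum propagation and sum."""
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    holo = np.zeros((N, N), dtype=complex)
    layer_idx = np.argmin(np.abs(points[:, 2:3] - z_layers[None, :]), axis=1)
    for k, z in enumerate(z_layers):
        sel = layer_idx == k
        if not sel.any():
            continue
        img = np.zeros((N, N), dtype=complex)
        # map x, y (metres) onto the grid
        ix = np.clip((points[sel, 0] / pitch + N // 2).astype(int), 0, N - 1)
        iy = np.clip((points[sel, 1] / pitch + N // 2).astype(int), 0, N - 1)
        img[iy, ix] += amps[sel]
        # angular-spectrum transfer function for propagation distance z
        arg = 1 - (wl * FX) ** 2 - (wl * FY) ** 2
        H = np.exp(2j * np.pi / wl * z * np.sqrt(np.maximum(arg, 0)))
        holo += np.fft.ifft2(np.fft.fft2(img) * H)
    return holo
```

One FFT pair per occupied layer replaces a per-point diffraction sum, which is where the dramatic complexity reduction comes from.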

  15. Low cost digital photogrammetry: From the extraction of point clouds by SFM technique to 3D mathematical modeling

    NASA Astrophysics Data System (ADS)

    Michele, Mangiameli; Giuseppe, Mussumeci; Salvatore, Zito

    2017-07-01

    The Structure From Motion (SFM) technique, applied to a series of photographs of an object, returns a 3D reconstruction made up of points in space (a point cloud). This research aims at comparing the results of the SFM approach with those of 3D laser scanning in terms of density and accuracy of the model. The experiment was conducted by surveying several architectural elements (walls and portals of historical buildings) both with a latest-generation 3D laser scanner and with an amateur photographic camera. The point clouds acquired by the laser scanner and those acquired by the photo camera have been systematically compared. In particular, we present the work carried out on the "Don Diego Pappalardo Palace" site in Pedara (Catania, Sicily).
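A systematic comparison of laser and SfM clouds typically starts from cloud-to-cloud nearest-neighbour distances; a brute-force sketch (production tools such as CloudCompare use a KD-tree or octree instead):

```python
import numpy as np

def cloud_to_cloud_distance(a, b):
    """For each point of cloud a, the Euclidean distance to its nearest
    neighbour in cloud b: the basic cloud-to-cloud comparison metric.
    O(len(a) * len(b)) for clarity; use a spatial index for real data."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1))
```

Summary statistics of these distances (mean, RMSE, standard deviation) then quantify how closely the photogrammetric model matches the laser reference.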

  16. Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.

    PubMed

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous driving. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, the 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data are eliminated. We reduce the non-overlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach that minimizes the comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed in about 19.31 ms per frame.
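The quantization and lowermost-heightmap steps can be sketched as follows; the voxel size and height tolerance are illustrative parameters, not the paper's:

```python
import numpy as np

def ground_mask(points, voxel=0.2, height_tol=0.3):
    """Lowermost-heightmap sketch: quantize points into an XY grid, keep
    the lowest Z per cell, and label points within height_tol of that
    lowest surface as ground."""
    ij = np.floor(points[:, :2] / voxel).astype(int)
    ij -= ij.min(axis=0)                      # shift to non-negative indices
    key = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]   # flatten cell index
    lowest = np.full(key.max() + 1, np.inf)
    np.minimum.at(lowest, key, points[:, 2])  # lowermost height per cell
    return points[:, 2] <= lowest[key] + height_tol
```

Reducing each cell to a single height value is what collapses the 3D problem to 2D and keeps per-frame cost low.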

  17. New Perspectives of Point Clouds Color Management - the Development of Tool in Matlab for Applications in Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Pepe, M.; Ackermann, S.; Fregonese, L.; Achille, C.

    2017-02-01

    The paper describes a method for point cloud color management and integration for data obtained from Terrestrial Laser Scanner (TLS) and Image-Based (IB) survey techniques. Especially in the Cultural Heritage (CH) environment, methods and techniques to improve the color quality of point clouds have a key role, because a homogeneous texture leads to a more accurate reconstruction of the investigated object and to a more pleasant perception of the object's color as well. A color management method for point clouds can be useful for a single data set acquired by a TLS or IB technique, as well as in case of chromatic heterogeneity resulting from merging different datasets. The latter condition can occur when the scans are acquired at different moments of the same day, or when scans of the same object are performed over a period of weeks or months and consequently under different environment/lighting conditions. In this paper, a procedure to balance the point cloud colors in order to make the different data sets uniform, to improve the chromatic quality and to highlight further details is presented and discussed.
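A minimal stand-in for such color balancing is per-channel mean/variance matching against a reference scan; the authors' actual procedure is richer, so treat this only as the basic idea:

```python
import numpy as np

def match_colors(src_rgb, ref_rgb):
    """Balance one scan's RGB values against a reference scan by matching
    per-channel mean and standard deviation: a simple global color
    transfer, assumed here as a stand-in for the paper's procedure."""
    src = src_rgb.astype(float)
    ref = ref_rgb.astype(float)
    out = (src - src.mean(0)) / (src.std(0) + 1e-9) * ref.std(0) + ref.mean(0)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applying this per scan before merging removes the gross lighting differences between acquisition sessions, leaving only local texture variation.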

  18. 3D point cloud analysis of structured light registration in computer-assisted navigation in spinal surgeries

    NASA Astrophysics Data System (ADS)

    Gupta, Shaurya; Guha, Daipayan; Jakubovic, Raphael; Yang, Victor X. D.

    2017-02-01

    Computer-assisted navigation is used by surgeons in spine procedures to guide pedicle screws, improving placement accuracy and, in some cases, better visualizing the patient's underlying anatomy. Intraoperative registration is performed to establish a correlation between the patient's anatomy and the pre/intra-operative image. Current algorithms rely on seeding points obtained directly from the exposed spinal surface to achieve clinically acceptable registration accuracy. Registration of these three-dimensional surface point clouds is prone to various systematic errors. The goal of this study was to evaluate the robustness of surgical navigation systems by examining the relationship between the density of an acquired 3D point cloud and the corresponding surgical navigation error. A retrospective review of 48 registrations performed using an experimental structured light navigation system developed within our lab was conducted. For each registration, the number of points in the acquired point cloud was evaluated relative to whether the registration was acceptable, the corresponding system-reported error, and the target registration error. It was demonstrated that the number of points in the point cloud correlates neither with the acceptance/rejection of a registration nor with the system-reported error. However, a negative correlation was observed between the number of points in the point cloud and the corresponding sagittal angular error. Thus, system-reported total registration points and accuracy are insufficient to gauge the accuracy of a navigation system, and the operating surgeon must verify and validate registration based on anatomical landmarks prior to commencing surgery.

  19. Study of Huizhou architecture component point cloud in surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Runmei; Wang, Guangyin; Ma, Jixiang; Wu, Yulu; Zhang, Guangbin

    2017-06-01

    Surface reconstruction software packages suffer from problems such as complicated operation on point cloud data, too many interaction definitions, and too stringent requirements on input data. Thus, they have not been widely popularized so far. This paper selects the unique Huizhou-architecture chuandou wooden beam framework as the research object and presents a complete pipeline from point data acquisition through point cloud preprocessing to surface reconstruction. Firstly, the acquired point cloud data are preprocessed, including segmentation and filtering. Secondly, the surface's normals are deduced directly from the point cloud dataset. Finally, surface reconstruction is studied using the Greedy Projection Triangulation algorithm. Comparing the reconstructed model with those of three-dimensional surface reconstruction software packages, the results show that the proposed scheme is smoother, more time-efficient and more portable.
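The normal-deduction step that precedes greedy projection triangulation is commonly done by PCA over local neighbourhoods; a brute-force sketch (real pipelines use a KD-tree for the neighbour search):

```python
import numpy as np

def estimate_normals(points, k=10):
    """Estimate per-point normals as the eigenvector of the smallest
    eigenvalue of each point's k-nearest-neighbour covariance matrix:
    the standard PCA approach to normal estimation."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, :k]      # brute-force kNN for clarity
    normals = np.empty_like(points)
    for i, nb in enumerate(knn):
        q = points[nb] - points[nb].mean(0)  # centred neighbourhood
        w, v = np.linalg.eigh(q.T @ q)       # eigenvalues in ascending order
        normals[i] = v[:, 0]                 # direction of least variance
    return normals
```

Normal signs still need to be made consistent (e.g. oriented toward the scanner) before triangulation.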

  20. Roughness Estimation from Point Clouds - A Comparison of Terrestrial Laser Scanning and Image Matching by Unmanned Aerial Vehicle Acquisitions

    NASA Astrophysics Data System (ADS)

    Rutzinger, Martin; Bremer, Magnus; Ragg, Hansjörg

    2013-04-01

    Recently, terrestrial laser scanning (TLS) and matching of images acquired by unmanned aerial vehicles (UAV) have become operationally used for 3D geodata acquisition in geoscience applications. However, the two systems cover different application domains in terms of acquisition conditions and data properties, i.e. accuracy and line of sight. In this study we investigate the major differences between the two platforms for terrain roughness estimation. Terrain roughness is an important input for various applications such as morphometry studies, geomorphologic mapping, and natural process modeling (e.g. rockfall, avalanche, and hydraulic modeling). Data have been collected simultaneously by TLS using an Optech ILRIS3D and by a rotary UAV (an octocopter from twins.nrn) for a 900 m² test site located in a riverbed in Tyrol, Austria (Judenbach, Mieming). The TLS point cloud has been acquired from three scan positions. These have been registered using the iterative closest point algorithm and a target-based referencing approach. For registration, geometric targets (spheres) with a diameter of 20 cm were used. These targets were measured with dGPS for absolute georeferencing. The TLS point cloud has an average point density of 19,000 pts/m², which represents a point spacing of about 5 mm. 15 images were acquired by the UAV at a height of 20 m using a calibrated camera with a focal length of 18.3 mm. A 3D point cloud containing RGB attributes was derived using the APERO/MICMAC software by a direct georeferencing approach based on the aircraft IMU data. The point cloud was finally co-registered with the TLS data to guarantee an optimal preparation for the analysis. The UAV point cloud has an average point density of 17,500 pts/m², which represents a point spacing of 7.5 mm. 
After registration and georeferencing, the level of detail of the roughness representation in both point clouds has been compared considering elevation differences, roughness and the representation of different grain sizes. UAV acquisition closes the gap between aerial and terrestrial surveys in terms of resolution and acquisition flexibility. This is also true for data accuracy. Considering the data collection and data quality properties of both systems, each has its own merit in terms of scale, data quality, data collection speed and application.

  1. Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor

    NASA Astrophysics Data System (ADS)

    Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul

    2017-05-01

    Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation reduces the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes has to be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth layer approach can also be adopted. This technique is appropriate for fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth layer approach is convenient for real-time applications, but the point cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
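The depth-layer approach mentioned above reduces to FFT-based angular-spectrum propagation of each depth slice, followed by a sum over layers. A compact sketch (the wavelength, pixel pitch and layer depths are arbitrary example values, not from the paper):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field over distance z between parallel
    planes using the angular spectrum method (evanescent waves dropped)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                  # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)                       # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A depth-layer hologram sums the propagated wavefronts of each depth slice.
layers = {0.01: np.ones((64, 64)), 0.02: np.zeros((64, 64))}
holo = sum(angular_spectrum_propagate(a, 633e-9, 10e-6, z)
           for z, a in layers.items())
```

Because the transfer function has unit magnitude for propagating waves, the total energy of each slice is preserved, which is a quick sanity check on an implementation.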

  2. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    PubMed Central

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data are eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We achieve real-time ground segmentation with an approach that minimizes the comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204
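The quantisation and lowermost-heightmap idea can be illustrated with a small NumPy sketch; the cell size and height tolerance are made-up values, and a 2D cell map stands in for the paper's voxel pipeline:

```python
import numpy as np

def segment_ground(points, cell=0.5, height_tol=0.3):
    """Quantize points into (x, y) grid cells, record each cell's lowest z
    (the 'lowermost heightmap'), and label points near that minimum as ground."""
    pts = np.asarray(points, dtype=float)
    cells = np.floor(pts[:, :2] / cell).astype(int)
    lowest = {}
    for key, z in zip(map(tuple, cells), pts[:, 2]):
        lowest[key] = min(lowest.get(key, np.inf), z)
    return np.array([z - lowest[tuple(key)] <= height_tol
                     for key, z in zip(cells, pts[:, 2])])

# Regular ground grid at z = 0 plus one obstacle point 2 m above it.
xs, ys = np.meshgrid(np.arange(0.0, 10.0, 0.25), np.arange(0.0, 10.0, 0.25))
ground = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
cloud = np.vstack([ground, [[5.1, 5.1, 2.0]]])
mask = segment_ground(cloud)   # True for ground points, False for the obstacle
```

The real system additionally groups voxels and counts voxels per group to decide what is ground; this sketch only shows the height-map thresholding step.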

  3. Mapping with Small UAS: A Point Cloud Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Toth, Charles; Jozkow, Grzegorz; Grejner-Brzezinska, Dorota

    2015-12-01

    Interest in using inexpensive Unmanned Aerial System (UAS) technology for topographic mapping has increased significantly in recent years. Small UAS platforms equipped with consumer grade cameras can easily acquire high-resolution aerial imagery allowing for dense point cloud generation, followed by surface model creation and orthophoto production. In contrast to conventional airborne mapping systems, UAS has limited ground coverage due to low flying height and limited flying time, yet it offers an attractive alternative to high performance airborne systems, as the cost of the sensors and platform, and the flight logistics, are relatively low. In addition, UAS is better suited for small-area data acquisition and for acquiring data in difficult-to-access areas, such as urban canyons or densely built-up environments. The main question with respect to the use of UAS is whether the inexpensive consumer sensors installed in UAS platforms can provide geospatial data quality comparable to that provided by conventional systems. This study aims at the performance evaluation of the current practice of UAS-based topographic mapping by reviewing the practical aspects of sensor configuration, georeferencing and point cloud generation, including comparisons between sensor types and processing tools. The main objective is to provide accuracy characterization and practical information for selecting and using UAS solutions in general mapping applications. The analysis is based on statistical evaluation as well as visual examination of experimental data acquired by a Bergen octocopter with three different image sensor configurations, including a GoPro HERO3+ Black Edition, a Nikon D800 DSLR and a Velodyne HDL-32. In addition, georeferencing data of varying quality were acquired and evaluated. The optical imagery was processed using three commercial point cloud generation tools. 
Comparing point clouds created by active and passive sensors of different quality, and processed by different commercial software tools, provides essential information for the performance validation of UAS technology.

  4. Layer stacking: A novel algorithm for individual forest tree segmentation from LiDAR point clouds

    Treesearch

    Elias Ayrey; Shawn Fraver; John A. Kershaw; Laura S. Kenefic; Daniel Hayes; Aaron R. Weiskittel; Brian E. Roth

    2017-01-01

    As light detection and ranging (LiDAR) technology advances, it has become common for datasets to be acquired at a point density high enough to capture structural information from individual trees. To process these data, an automatic method of isolating individual trees from a LiDAR point cloud is required. Traditional methods for segmenting trees attempt to isolate...

  5. Smart Point Cloud: Definition and Remaining Challenges

    NASA Astrophysics Data System (ADS)

    Poux, F.; Hallot, P.; Neuville, R.; Billen, R.

    2016-10-01

    Dealing with coloured point clouds acquired from terrestrial laser scanners, this paper identifies remaining challenges for a new data structure: the smart point cloud. This concept arises from the observation that massive and discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data, together with the heterogeneity and temporality of such datasets, is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. A review of feature detection, machine learning frameworks and database systems indexed both for mining queries and data visualisation is presented. Based on existing approaches, we propose a new flexible three-block framework around device expertise, analytic expertise and domain-based reflection. This contribution serves as the first step towards the realisation of a comprehensive smart point cloud data structure.

  6. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites.

    PubMed

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-03-08

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.
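The classification of coplanar and collinear points in the first stage rests on the ordered eigenvalues of a local covariance matrix. A minimal, non-robust sketch of that eigenvalue test (the paper's robust PCA procedure and its actual thresholds are not reproduced here; the `ratio` cutoff is an invented example value):

```python
import numpy as np

def classify_by_pca(neighborhood, ratio=0.01):
    """Classify a local neighbourhood as 'linear', 'planar' or 'volumetric'
    from the ordered eigenvalues of its covariance matrix."""
    pts = np.asarray(neighborhood, dtype=float)
    w = np.linalg.eigvalsh(np.cov((pts - pts.mean(0)).T))  # l1 <= l2 <= l3
    l1, l2, l3 = w / w[2]                                  # normalize by largest
    if l2 <= ratio:        # one dominant direction -> collinear points
        return "linear"
    if l1 <= ratio:        # two dominant directions -> coplanar points
        return "planar"
    return "volumetric"

rng = np.random.default_rng(3)
line = np.column_stack([np.linspace(0, 1, 100), np.zeros(100), np.zeros(100)])
plane = np.column_stack([rng.uniform(0, 1, 100), rng.uniform(0, 1, 100),
                         np.zeros(100)])
```

Classical PCA like this is sensitive to the very outliers the paper targets; the robust variant replaces the sample covariance with a robust estimate before thresholding.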

  7. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites

    PubMed Central

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-01-01

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062

  8. Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds

    NASA Astrophysics Data System (ADS)

    Boerner, R.; Kröhnert, M.

    2016-06-01

    3D point clouds acquired by state-of-the-art terrestrial laser scanning (TLS) techniques provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data carry no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images and point cloud data, by fitting optical camera systems on top of laser scanners, or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The free movement is a key advantage for augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, generated by projecting the 3D point cloud data to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal, distortion-free camera.
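Rendering such a synthetic image amounts to projecting every point through an ideal pinhole model with the chosen exterior orientation. A minimal sketch (the intrinsics and test points are invented example values):

```python
import numpy as np

def render_synthetic_image(points, K, R, t, size):
    """Project 3D world points through an ideal pinhole camera (intrinsics K,
    rotation R, projection centre t) into a binary synthetic image."""
    h, w = size
    cam = (np.asarray(points, dtype=float) - t) @ R.T   # world -> camera frame
    cam = cam[cam[:, 2] > 0]                            # keep points in front
    uv = (K @ cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)           # perspective division
    img = np.zeros(size, dtype=np.uint8)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    img[uv[ok, 1], uv[ok, 0]] = 255                     # mark hit pixels
    return img

K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 5.0], [0.1, 0.0, 5.0]])
img = render_synthetic_image(pts, K, np.eye(3), np.zeros(3), (64, 64))
```

A full renderer would also handle occlusion (e.g. a z-buffer) and splat point attributes rather than a binary mask.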

  9. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are increasingly used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems. We propose an accurate method for registering digital images acquired from any viewpoint on point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric virtual visual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any prior information about the camera which took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is proven in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.

  10. Comparison of DSMs acquired by terrestrial laser scanning, UAV-based aerial images and ground-based optical images at the Super-Sauze landslide

    NASA Astrophysics Data System (ADS)

    Rothmund, Sabrina; Niethammer, Uwe; Walter, Marco; Joswig, Manfred

    2013-04-01

    In recent years, the high-resolution and multi-temporal 3D mapping of the Earth's surface using terrestrial laser scanning (TLS), ground-based optical images and especially low-cost aerial images from unmanned aerial vehicles (UAVs) has grown in importance. This development has resulted from the progressive technical improvement of the imaging systems and from freely available multi-view stereo (MVS) software packages. These different methods of data acquisition for the generation of accurate, high-resolution digital surface models (DSMs) were applied as part of an eight-week field campaign at the Super-Sauze landslide (South French Alps). An area of approximately 10,000 m² with long-term average displacement rates greater than 0.01 m/day has been investigated. The TLS-based point clouds were acquired from different viewpoints, with an average point spacing between 10 and 40 mm, and at different dates. On these days, more than 50 optical images were taken with a low-cost digital compact camera from points along a predefined line on the side of the landslide. Additionally, aerial images were taken by a radio-controlled mini quad-rotor UAV equipped with another low-cost digital compact camera. The flight altitude ranged between 20 m and 250 m, producing a corresponding ground resolution between 0.6 cm and 7 cm. DGPS measurements were carried out as well in order to geo-reference and validate the point cloud data. To generate unscaled photogrammetric 3D point clouds from a disordered and tilted image set, we use the widespread open-source software packages Bundler and PMVS2 (University of Washington). These multi-temporal DSMs are required on the one hand to determine the three-dimensional surface deformations, and on the other hand for the differential correction needed for orthophoto production. Drawing on the example of the data acquired at the Super-Sauze landslide, we demonstrate the potential but also the limitations of the photogrammetric point clouds. 
To determine the quality of the photogrammetric point clouds, they are compared with the TLS-based DSMs. The comparison shows that the accuracy of the photogrammetric points is in the range of centimetres to decimetres and therefore does not reach the quality of the high-resolution TLS-based DSMs. Further, the validation of the photogrammetric point clouds reveals that some of them exhibit internal curvature effects. The advantages of photogrammetric 3D data acquisition are the use of low-cost equipment and less time-consuming data collection in the field. While the accuracy of the photogrammetric point clouds is not as high as that of the TLS-based DSMs, the former method is advantageous in applications where dm-range accuracy is sufficient.
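The cloud-to-cloud comparison used here to rate photogrammetric clouds against TLS reference data reduces to nearest-neighbour distances. A brute-force sketch with a synthetic 3 cm vertical offset (a real comparison would use a k-d tree and far larger clouds):

```python
import numpy as np

def cloud_to_cloud(reference, compared):
    """For each point of `compared`, distance to its nearest point in
    `reference` (the basic C2C quality measure)."""
    ref = np.asarray(reference, dtype=float)
    cmp_ = np.asarray(compared, dtype=float)
    d = np.linalg.norm(cmp_[:, None, :] - ref[None, :, :], axis=2)
    return d.min(axis=1)

# A 5 x 5 TLS grid and a 'photogrammetric' copy shifted 3 cm in z.
tls = np.column_stack([np.repeat(np.arange(5.0), 5),
                       np.tile(np.arange(5.0), 5),
                       np.zeros(25)])
photo = tls + np.array([0.0, 0.0, 0.03])
dists = cloud_to_cloud(tls, photo)   # all distances equal the 3 cm offset
```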

  11. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan

    2015-11-01

    To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT), the authors have developed a level-set based surface reconstruction method. The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking, by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. 
On phantom point clouds, the method achieved submillimeter reconstruction RMSE under the different configurations, demonstrating quantitatively the faithfulness of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both the reconstructed surface and the CT surface, with mean and standard deviation of (μrecon=-2.7×10(-3) mm(-1), σrecon=7.0×10(-3) mm(-1)) and (μCT=-2.5×10(-3) mm(-1), σCT=5.3×10(-3) mm(-1)), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrates the ability of the proposed method to faithfully represent the underlying patient surface. The authors have developed an accurate level-set based continuous surface reconstruction method for point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.

  12. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm

    PubMed Central

    Yan, Li; Xie, Hong; Chen, Changjun

    2017-01-01

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on a genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor during data acquisition are used as constraints to narrow the search space in the GA. A new fitness function to evaluate the solutions of the GA, named the Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of the population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3-5 mm, and that of TLS-MLS point clouds is 2-4 cm. A registration scheme integrating the well-known ICP algorithm with the GA is further proposed to accelerate the optimization, and its optimization time decreases by about 50%. PMID:28850100
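The GA machinery described here (population, fitness evaluation, genetic operation) can be illustrated on a toy 2D rigid registration. This sketch uses plain truncation selection and Gaussian mutation with a decaying step size, not the paper's operators or its Normalized Sum of Matching Scores fitness; all parameter values are invented:

```python
import numpy as np

def fitness(params, src, dst):
    """Negative mean nearest-neighbour distance after applying (theta, tx, ty)."""
    th, tx, ty = params
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    moved = src @ R.T + np.array([tx, ty])
    d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2).min(axis=1)
    return -d.mean()

def ga_register(src, dst, pop_size=40, gens=60, seed=4):
    """Toy GA: keep the fittest half each generation, mutate it to get children."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform([-0.5, -1.0, -1.0], [0.5, 1.0, 1.0], size=(pop_size, 3))
    for g in range(gens):
        scores = np.array([fitness(p, src, dst) for p in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]       # truncation selection
        children = elite + rng.normal(0.0, 0.05 * 0.9 ** g, elite.shape)
        pop = np.vstack([elite, children])
    scores = np.array([fitness(p, src, dst) for p in pop])
    return pop[scores.argmax()]

# Build a source cloud that maps onto dst under theta = 0.2, t = (0.3, -0.2).
dst = np.random.default_rng(5).uniform(0.0, 1.0, (30, 2))
theta, t = 0.2, np.array([0.3, -0.2])
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
src = (dst - t) @ R            # so that src @ R.T + t == dst exactly
best = ga_register(src, dst)
```

As in the paper, prior knowledge (here, the bounds on the initial population) narrows the search space; the found transform should fit far better than the identity.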

  13. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm.

    PubMed

    Yan, Li; Tan, Junxiang; Liu, Hua; Xie, Hong; Chen, Changjun

    2017-08-29

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on a genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor during data acquisition are used as constraints to narrow the search space in the GA. A new fitness function to evaluate the solutions of the GA, named the Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of the population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3-5 mm, and that of TLS-MLS point clouds is 2-4 cm. A registration scheme integrating the well-known ICP algorithm with the GA is further proposed to accelerate the optimization, and its optimization time decreases by about 50%.

  14. Integrated Change Detection and Classification in Urban Areas Based on Airborne Laser Scanning Point Clouds.

    PubMed

    Tran, Thi Huong Giang; Ressl, Camillo; Pfeifer, Norbert

    2018-02-03

    This paper suggests a new approach for change detection (CD) in 3D point clouds. It combines classification and CD in one step using machine learning. The point cloud data of both epochs are merged for computing features of four types: features describing the point distribution, a feature relating to relative terrain elevation, features specific to the multi-target capability of laser scanning, and features combining the point clouds of both epochs to identify changes. All these features are attached to the points, and training samples are then acquired to create the model for supervised classification, which is applied to the whole study area. The final results reach an overall accuracy of over 90% for both epochs across eight classes: lost tree, new tree, lost building, new building, changed ground, unchanged building, unchanged tree, and unchanged ground.

  15. Generation of Ground Truth Datasets for the Analysis of 3d Point Clouds in Urban Scenes Acquired via Different Sensors

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.

    2018-04-01

    In this work, we report a novel way of generating ground truth datasets for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling large amounts of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid of the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating the new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels, in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained by various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of different sensors of the same scene, directly by considering the corresponding labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
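The voting step that gives each voxel a semantic label from the reference points inside it can be sketched at a single resolution (the paper uses an octree over multiple resolutions; the labels and coordinates below are invented):

```python
import numpy as np
from collections import Counter

def label_voxels(ref_points, ref_labels, voxel=1.0):
    """Majority vote of reference-point labels inside each voxel."""
    votes = {}
    keys = map(tuple, np.floor(np.asarray(ref_points) / voxel).astype(int))
    for key, lab in zip(keys, ref_labels):
        votes.setdefault(key, Counter())[lab] += 1
    return {key: c.most_common(1)[0][0] for key, c in votes.items()}

def annotate(new_points, voxel_labels, voxel=1.0):
    """Transfer each voxel's label to any new point falling inside it."""
    keys = map(tuple, np.floor(np.asarray(new_points) / voxel).astype(int))
    return [voxel_labels.get(key, "unlabelled") for key in keys]

ref = np.array([[0.2, 0.2, 0.1], [0.8, 0.4, 0.3], [3.1, 0.2, 0.0]])
vox = label_voxels(ref, ["ground", "ground", "building"])
new = annotate(np.array([[0.5, 0.5, 0.5], [3.4, 0.9, 0.2], [9.0, 9.0, 9.0]]), vox)
```

Points of a new sensor's cloud that fall into unlabelled space stay unlabelled, which is where the coarser octree levels of the full method step in.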

  16. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Malet, Jean-Philippe; Stumpf, André; Puissant, Anne; Travelletti, Julien

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in some local places where the point cloud information is not sufficiently dense. Those limits can be overcome by using deformation analysis exploiting directly the original 3D point clouds assuming some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. 
Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the displacement fields. Displacement fields derived from both approaches are then combined and provide a better understanding of the landslide kinematics.

  17. Classification of Mobile Laser Scanning Point Clouds from Height Features

    NASA Astrophysics Data System (ADS)

    Zheng, M.; Lemmens, M.; van Oosterom, P.

    2017-09-01

    The demand for 3D maps of cities and road networks is steadily growing, and mobile laser scanning (MLS) systems are often the preferred geo-data acquisition method for capturing such scenes. Because MLS systems are mounted on cars or vans, they can acquire billions of points of road scenes within a few hours of survey. Manual processing of point clouds is labour intensive and thus time consuming and expensive. Hence, the need for rapid and automated methods for 3D mapping of dense point clouds is growing exponentially. Over the last five years, research on automated 3D mapping of MLS data has intensified tremendously. In this paper, we present our work on automated classification of MLS point clouds. In the present stage of the research we exploit three features - two height components and one reflectance value - and achieve an overall accuracy of 73%, which is encouraging for further refinement of our approach.

  18. Processing UAV and LIDAR Point Clouds in GRASS GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties, such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion (SfM) technique, and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques, which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the resulting digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely the Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.
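
    One of the simple decimation strategies of the kind compared in the abstract is grid (cell-based) thinning: keep one point per planimetric cell. A minimal sketch, not the GRASS GIS implementation:

```python
import numpy as np

def grid_decimate(points, cell):
    """Keep one (the first) point per 2D grid cell of size `cell` metres.
    A sketch of grid-based thinning for dense UAV point clouds."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    # np.unique returns the index of the first occurrence of each cell key.
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

cloud = np.array([[0.1, 0.1, 5.0],
                  [0.2, 0.3, 5.1],   # same 1 m cell as the first point
                  [1.4, 0.2, 4.9],
                  [1.6, 1.7, 5.3]])
thinned = grid_decimate(cloud, cell=1.0)
```

    Other strategies (random thinning, keeping the lowest or highest point per cell) trade off differently between DSM fidelity and point count.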

  19. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, but it has received considerably less attention in 3D point clouds. Thanks to current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces, and line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures to urban scene abstraction.
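
    The geometric core of plane-pair intersection is standard: two fitted planes n·x = d meet along a line whose direction is the cross product of the normals. A self-contained sketch (the paper's full pipeline, with line-support regions and LSHP fitting, goes well beyond this):

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Line where planes n1.x = d1 and n2.x = d2 meet: the direction is the
    cross product of the normals, and one point on the line is the
    minimum-norm least-squares solution of the two plane equations."""
    direction = np.cross(n1, n2)
    A = np.vstack([n1, n2])
    point, *_ = np.linalg.lstsq(A, np.array([d1, d2]), rcond=None)
    return point, direction / np.linalg.norm(direction)

# Floor z = 0 meeting wall x = 2: expect a line along y through (2, 0, 0).
p, v = plane_intersection(np.array([0.0, 0, 1]), 0.0,
                          np.array([1.0, 0, 0]), 2.0)
```

    In practice the planes would first be estimated from the scan points (e.g. by RANSAC or region growing), and the infinite line clipped to the extent of the supporting points.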

  20. Assessing the consistency of UAV-derived point clouds and images acquired at different altitudes

    NASA Astrophysics Data System (ADS)

    Ozcan, O.

    2016-12-01

    Unmanned Aerial Vehicles (UAVs) offer several advantages in terms of cost and image resolution compared to terrestrial photogrammetry and satellite remote sensing systems. Bridging the gap between satellite-scale and field-scale applications, UAVs have begun to be used in various application areas to acquire hyperspatial, high-temporal-resolution imagery, thanks to their operational flexibility and short acquisition times compared with conventional photogrammetric methods. UAVs have been used in various fields, such as the creation of 3-D earth models, production of high-resolution orthophotos, network planning, and the monitoring of field sites and agricultural land. Thus, the geometric accuracy of orthophotos and the volumetric accuracy of point clouds are of critical importance for land surveying applications. Correspondingly, Structure from Motion (SfM) photogrammetry, which is frequently used in conjunction with UAVs, has recently appeared in the environmental sciences as an impressive tool allowing the creation of 3-D models from unstructured imagery. This study aims to reveal the spatial accuracy of images acquired from the integrated digital camera and the volumetric accuracy of Digital Surface Models (DSMs) derived from UAV flight plans at different altitudes using the SfM methodology. Low-altitude multispectral overlapping aerial photography was collected at altitudes of 30 to 100 meters and georeferenced with RTK-GPS ground control points. These altitudes allow hyperspatial imagery with resolutions of 1-5 cm, depending on the sensor used. Preliminary results reveal that the vertical comparison of UAV-derived point clouds with GPS measurements yields average distances at the cm level. Larger values are found in areas with abrupt changes in the surface.
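
    The vertical comparison described above can be sketched as follows: for each GPS checkpoint, find the horizontally nearest cloud point and take the elevation difference. The numbers below are illustrative, not the study's data:

```python
import numpy as np

def vertical_errors(cloud, checkpoints):
    """For each GPS checkpoint (x, y, z), find the horizontally nearest
    cloud point and report the vertical difference (cloud minus GPS)."""
    errs = []
    for cx, cy, cz in checkpoints:
        d2 = (cloud[:, 0] - cx) ** 2 + (cloud[:, 1] - cy) ** 2
        errs.append(cloud[np.argmin(d2), 2] - cz)
    return np.array(errs)

# Toy UAV-derived cloud and two RTK-GPS checkpoints.
cloud = np.array([[0.0, 0.0, 10.02], [5.0, 0.0, 12.95], [0.0, 5.0, 11.04]])
gps = np.array([[0.0, 0.1, 10.00], [5.1, 0.0, 13.00]])
e = vertical_errors(cloud, gps)
rmse = np.sqrt((e ** 2).mean())
```

    Real comparisons would average over a neighbourhood or an interpolated surface rather than a single nearest point, which matters precisely in the abrupt-surface areas where the abstract reports larger discrepancies.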

  1. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya

    2015-11-15

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. 
Results: On phantom point clouds, their method achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10⁻³ mm⁻¹, σ_recon = 7.0 × 10⁻³ mm⁻¹) and (μ_CT = −2.5 × 10⁻³ mm⁻¹, σ_CT = 5.3 × 10⁻³ mm⁻¹), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method to faithfully represent the underlying patient surface. Conclusions: The authors have integrated and developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.
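
    The local-geometry validation above amounts to comparing mean-curvature samples from the two surfaces by their mean and spread. A sketch of that check with illustrative values (not the paper's data):

```python
import numpy as np

# Hypothetical mean-curvature samples (mm^-1) drawn from comparable regions
# of a reconstructed surface and a CT-derived surface.
recon_curv = np.array([-2.0e-3, -3.4e-3, -2.7e-3, -2.7e-3])
ct_curv = np.array([-2.1e-3, -2.9e-3, -2.5e-3, -2.5e-3])

mu_r, sd_r = recon_curv.mean(), recon_curv.std()
mu_c, sd_c = ct_curv.mean(), ct_curv.std()
# Agreement criterion: curvature means within 1e-3 mm^-1 (assumed tolerance).
agree = abs(mu_r - mu_c) < 1e-3
```

    A fuller comparison would test the whole distributions (e.g. histograms or a two-sample statistic), not only the first two moments.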

  2. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan

    2015-01-01

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. 
Results: On phantom point clouds, their method achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10⁻³ mm⁻¹, σ_recon = 7.0 × 10⁻³ mm⁻¹) and (μ_CT = −2.5 × 10⁻³ mm⁻¹, σ_CT = 5.3 × 10⁻³ mm⁻¹), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method to faithfully represent the underlying patient surface. Conclusions: The authors have integrated and developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy. PMID:26520747

  3. Mobile Laser Scanning along Dieppe coastal cliffs: reliability of the acquired point clouds applied to rockfall assessments

    NASA Astrophysics Data System (ADS)

    Michoud, Clément; Carrea, Dario; Augereau, Emmanuel; Cancouët, Romain; Costa, Stéphane; Davidson, Robert; Delacourt, Christophe; Derron, Marc-Henri; Jaboyedoff, Michel; Letortu, Pauline; Maquaire, Olivier

    2013-04-01

    Dieppe coastal cliffs, in Normandy, France, are mainly formed by sub-horizontal deposits of chalk and flintstone. Largely destabilized by intense weathering and erosion by the English Channel, small and large rockfalls are regularly observed and contribute to retrogressive cliff processes. During autumn 2012, cliff and intertidal topographies were acquired with a Terrestrial Laser Scanner (TLS) and a Mobile Laser Scanner (MLS), coupled with seafloor bathymetries realized with a multibeam echosounder (MBES). MLS is a recent development of laser scanning based on the same theoretical principles as aerial LiDAR, but using smaller, cheaper and portable devices. The MLS system, which is composed of an accurate dynamic positioning and orientation (INS) device and a long-range LiDAR, is mounted on a marine vessel; it is then possible to quickly acquire, in motion, georeferenced LiDAR point clouds with a resolution of about 15 cm. For example, it takes about 1 h to scan 2 km of shoreline. MLS is becoming a promising technique supporting erosion and rockfall assessments along the shores of lakes, fjords or seas. In this study, the MLS system used to acquire the cliffs and intertidal areas of the Cap d'Ailly was composed of the Applanix POS-MV 320 V4 INS and the Optech ILRIS-LR LiDAR. On the same day, three MLS scans with large overlaps (J1, J21 and J3) were performed at ranges from 600 m at 4 knots (low tide) down to 200 m at 2.2 knots (rising tide), with a calm sea at 2.5 Beaufort (small wavelets). Mean scan resolutions range from 26 cm for the far scan (J1) to about 8.1 cm for the close scan (J3). Moreover, one TLS point cloud of this test site was acquired with a mean resolution of about 2.3 cm, using a Riegl LMS-Z390i. In order to quantify the reliability of the methodology, comparisons between scans were carried out with the software PolyWorks™, computing the shortest distances between the points of one cloud and the interpolated surface of the reference point cloud. 
A MATLAB™ routine was also written to extract relevant statistics. First, mean distances between points of the reference point cloud (J21) and its interpolated surface are about 0.35 cm with a standard deviation of 15 cm; errors introduced during the surface interpolation step, especially in vegetated areas, may explain these differences. Then, mean distances between J1's points (resp. J3's) and the J21 reference surface are about 4 cm (resp. -17 cm) with a standard deviation of 53 cm (resp. 55 cm). After a best-fit alignment of J1 and J3 on J21, mean distances between J1 (resp. J3) and the J21 reference surface decrease to about 0.15 cm (resp. 1.6 cm) with a standard deviation of 41 cm (resp. 21 cm). Finally, mean distances between the TLS point cloud and the J21 reference surface are about 3.2 cm with a standard deviation of 26 cm. In conclusion, MLS devices are able to quickly scan long shorelines with a resolution of up to about 10 cm. The precision of the acquired data is sufficient to investigate geomorphological features of coastal cliffs. The ability of the MLS technique to detect and monitor small and large rockfalls will be investigated in the near future thanks to new acquisitions of the Dieppe cliffs and enhanced post-processing steps.
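
    The statistics above are signed point-to-surface distances summarized by their mean and standard deviation. With the reference surface approximated locally by a plane, that computation reduces to a projection onto the plane normal. A toy sketch (not the PolyWorks/MATLAB workflow):

```python
import numpy as np

def point_to_plane_stats(points, plane_point, plane_normal):
    """Signed shortest distances from scan points to a locally planar
    reference surface, plus the mean/std used in the scan comparison."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n
    return d.mean(), d.std()

# Reference surface approximated by the plane z = 0; scan points scatter
# around it the way a repeated MLS pass scatters around a reference scan.
scan = np.array([[0.0, 0, 0.10], [1, 0, -0.10], [2, 0, 0.20], [3, 0, -0.20]])
mean_d, std_d = point_to_plane_stats(scan, np.zeros(3), np.array([0.0, 0, 1]))
```

    A near-zero mean with a large standard deviation, as in this toy case, is exactly the pattern the abstract reports: unbiased scans with substantial per-point noise.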

  4. Rapid Topographic Mapping Using TLS and UAV in a Beach-dune-wetland Environment: Case Study in Freeport, Texas, USA

    NASA Astrophysics Data System (ADS)

    Ding, J.; Wang, G.; Xiong, L.; Zhou, X.; England, E.

    2017-12-01

    Coastal regions are naturally vulnerable to impact from long-term coastal erosion and episodic coastal hazards caused by extreme weather events. Major geomorphic changes can occur within a few hours during storms. Prediction of storm impact, coastal planning and resilience observation after natural events all require accurate and up-to-date topographic maps of coastal morphology. Thus, the ability to conduct rapid, high-resolution and high-accuracy topographic mapping is of critical importance for long-term coastal management and rapid response after natural hazard events. Terrestrial laser scanning (TLS) techniques have frequently been applied to beach and dune erosion studies and post-hazard responses. However, TLS surveying is relatively slow and costly for rapid surveying. Furthermore, TLS surveying unavoidably leaves gray areas that cannot be reached by laser pulses, particularly in wetland areas, which in most cases lack direct access. Aerial mapping using photogrammetry from images taken by unmanned aerial vehicles (UAV) has become a new technique for rapid topographic mapping. UAV photogrammetry mapping techniques provide the ability to map coastal features quickly, safely, inexpensively, on short notice and with minimal impact. The primary products from photogrammetry are point clouds similar to LiDAR point clouds. However, a large number of ground control points (ground truth) is essential for obtaining high-accuracy UAV maps. The ground control points are often obtained by GPS survey simultaneously with the TLS survey in the field. The GPS survey can be a slow and arduous process in the field. This study aims to develop methods for acquiring a large number of ground control points from the TLS survey and validating the point clouds obtained from photogrammetry against the TLS point clouds. A Riegl VZ-2000 TLS scanner was used for generating laser point clouds and a DJI Phantom 4 Pro UAV was used for acquiring images. 
The aerial images were processed with the photogrammetric mapping software Agisoft PhotoScan. A workflow for conducting rapid TLS and UAV surveys in the field and integrating the point clouds obtained from TLS and UAV surveying is introduced. Key words: UAV photogrammetry, ground control points, TLS, coastal morphology, topographic mapping

  5. Filtering Photogrammetric Point Clouds Using Standard LIDAR Filters Towards DTM Generation

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Gerke, M.; Vosselman, G.; Yang, M. Y.

    2018-05-01

    Digital Terrain Models (DTMs) can be generated from point clouds acquired by laser scanning or photogrammetric dense matching. During the last two decades, much effort has been devoted to developing robust filtering algorithms for airborne laser scanning (ALS) data. With the quality of point clouds from dense image matching (DIM) steadily improving, the research question that arises is whether those standard Lidar filters can be used to filter photogrammetric point clouds as well. Experiments were carried out to filter two dense matching point clouds with different noise levels. Results show that the standard Lidar filter is robust to random noise. However, artefacts and blunders in the DIM points often appear due to low contrast or poor texture in the images, and filtering is erroneous in these locations. Filtering DIM points pre-processed by a ranking filter brings a higher Type II error (i.e. non-ground points labelled as ground points) but a much lower Type I error (i.e. bare ground points labelled as non-ground points). Finally, the potential DTM accuracy that can be achieved with DIM points is evaluated. Two DIM point clouds, derived with Pix4Dmapper and SURE, are compared. On grassland, dense matching generates points above the true terrain surface, which results in incorrectly elevated DTMs. The application of the ranking filter leads to a reduced bias in the DTM height, but a slightly increased noise level.
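
    The Type I / Type II error rates used above are straightforward to compute once each point carries a true and a predicted ground/non-ground label. A minimal sketch:

```python
def filter_error_rates(true_labels, pred_labels):
    """Type I error: fraction of bare-ground points labelled non-ground.
    Type II error: fraction of non-ground points labelled ground.
    Labels are 'g' (ground) / 'n' (non-ground)."""
    ground = [p for t, p in zip(true_labels, pred_labels) if t == "g"]
    nonground = [p for t, p in zip(true_labels, pred_labels) if t == "n"]
    type1 = sum(p != "g" for p in ground) / len(ground)
    type2 = sum(p == "g" for p in nonground) / len(nonground)
    return type1, type2

# Toy reference vs. filter output: one miss in each class.
truth = ["g", "g", "g", "g", "n", "n", "n", "n"]
pred  = ["g", "g", "g", "n", "g", "n", "n", "n"]
t1, t2 = filter_error_rates(truth, pred)
```

    The abstract's trade-off reads directly off these two numbers: the ranking-filter pre-processing lowers `t1` at the cost of raising `t2`.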

  6. The Feasibility of 3D Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of directly geo-referenced, image-based 3D point clouds generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching (SGM) algorithm is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.

  7. Point Cloud Management Through the Realization of the Intelligent Cloud Viewer Software

    NASA Astrophysics Data System (ADS)

    Costantino, D.; Angelini, M. G.; Settembrini, F.

    2017-05-01

    The paper presents software dedicated to the processing of point clouds, called Intelligent Cloud Viewer (ICV), made in-house by AESEI software (a spin-off of Politecnico di Bari), which can display point clouds of several tens of millions of points, even on systems without very high performance. Processing is carried out on the whole point cloud, while only part of it is displayed in order to speed up rendering. It is designed for 64-bit Windows, is fully written in C++, and integrates specialized modules for computer graphics (Open Inventor by SGI, Silicon Graphics Inc.), mathematics (BLAS, EIGEN), computational geometry (CGAL, Computational Geometry Algorithms Library), registration and advanced point cloud algorithms (PCL, Point Cloud Library), advanced data structures (BOOST, Basic Object Oriented Supporting Tools), etc. ICV incorporates a number of features such as cropping, transformation and georeferencing, matching, registration, decimation, sections, and distance calculation between clouds. It has been tested on photographic and TLS (Terrestrial Laser Scanner) data, with satisfactory results. The potential of the software was tested by carrying out a photogrammetric survey of Castel del Monte, which was already covered by a previous ground-based laser scanner survey by the same authors. For the aerophotogrammetric survey, a flight height of approximately 1000 ft AGL (Above Ground Level) was adopted and, overall, more than 800 photos were acquired in just over 15 minutes, with an overlap of no less than 80% and a planned speed of about 90 knots.

  8. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera

    PubMed Central

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479

  9. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera.

    PubMed

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft®'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation.

  10. Achievable Rate Estimation of IEEE 802.11ad Visual Big-Data Uplink Access in Cloud-Enabled Surveillance Applications.

    PubMed

    Kim, Joongheon; Kim, Jong-Kook

    2016-01-01

    This paper addresses computation procedures for estimating the impact of interference on 60 GHz IEEE 802.11ad uplink access, used to construct a visual big-data database from randomly deployed surveillance camera sensing devices. The acquired large-scale visual information from the surveillance cameras will be used to organize the big-data database, i.e., this estimation is essential for constructing a centralized cloud-enabled surveillance database. The study captures the interference impact on target cloud access points from multiple interference components generated by 60 GHz wireless transmissions of nearby surveillance cameras to their associated cloud access points. Under this uplink interference scenario, the impact on the main wireless transmission, from a target surveillance camera to its associated target cloud access point, is measured and estimated for a number of settings, taking 60 GHz radiation characteristics and antenna radiation pattern models into consideration.
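
    The textbook form of such an SINR-based estimate is the Shannon bound with the aggregated interference added to the noise floor; this sketch illustrates that form only and is not the paper's exact model (the 2.16 GHz figure is the standard 802.11ad channel bandwidth; the power values are made up):

```python
import math

def achievable_rate(bandwidth_hz, signal_w, interference_w, noise_w):
    """Shannon-style achievable rate under aggregated uplink interference:
    R = B * log2(1 + S / (N + sum(I)))."""
    sinr = signal_w / (noise_w + sum(interference_w))
    return bandwidth_hz * math.log2(1 + sinr)

# One camera's 60 GHz uplink with two interfering cameras nearby.
r = achievable_rate(2.16e9, 1e-6, [2e-8, 3e-8], 5e-8)
```

    Directional 60 GHz antennas shrink the interference terms in the sum, which is why the antenna radiation pattern models matter so much in the paper's estimates.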

  11. Monitoring Aircraft Motion at Airports by LIDAR

    NASA Astrophysics Data System (ADS)

    Toth, C.; Jozkow, G.; Koppanyi, Z.; Young, S.; Grejner-Brzezinska, D.

    2016-06-01

    Improving sensor performance, combined with better affordability, provides better object space observability, resulting in new applications. Remote sensing systems are primarily concerned with acquiring data on the static components of our environment, such as the topographic surface of the earth, transportation infrastructure, city models, etc. Observing the dynamic component of the object space is still rather rare in the geospatial application field; vehicle extraction and traffic flow monitoring are a few examples of using remote sensing to detect and model moving objects. Deploying a network of inexpensive LiDAR sensors along taxiways and runways can provide geospatial data rich in both geometry and time, such that the aircraft body can be extracted from the point cloud and, based on consecutive point clouds, its motion parameters can be estimated. Acquiring accurate aircraft trajectory data is essential to improve aviation safety at airports. This paper reports the initial experience obtained by using a network of four Velodyne VLP-16 sensors to acquire data along a runway segment.
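
    The coarsest motion estimate from consecutive point clouds is the displacement of the segmented body's centroid over the frame interval; finer methods register the clouds instead. A minimal sketch of the centroid version (illustrative only, not the paper's estimator):

```python
import numpy as np

def body_velocity(cloud_t0, cloud_t1, dt):
    """Coarse motion estimate between two consecutive LiDAR frames of the
    same segmented body: centroid displacement divided by the interval."""
    return (cloud_t1.mean(axis=0) - cloud_t0.mean(axis=0)) / dt

f0 = np.array([[0.0, 0, 2], [1, 0, 2], [0, 1, 2]])
f1 = f0 + np.array([3.0, 0, 0])      # body moved 3 m along the runway
v = body_velocity(f0, f1, dt=0.1)    # 10 Hz frame interval
```

    Centroid tracking breaks down when the visible part of the body changes between frames, which is one motivation for using several sensors with overlapping fields of view.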

  12. Displacement fields from point cloud data: Application of particle imaging velocimetry to landslide geodesy

    USGS Publications Warehouse

    Aryal, Arjun; Brooks, Benjamin A.; Reid, Mark E.; Bawden, Gerald W.; Pawlak, Geno

    2012-01-01

    Acquiring spatially continuous ground-surface displacement fields from Terrestrial Laser Scanners (TLS) will allow better understanding of the physical processes governing landslide motion at detailed spatial and temporal scales. Problems arise, however, when estimating continuous displacement fields from TLS point-clouds because reflecting points from sequential scans of moving ground are not defined uniquely, thus repeat TLS surveys typically do not track individual reflectors. Here, we implemented the cross-correlation-based Particle Image Velocimetry (PIV) method to derive a surface deformation field using TLS point-cloud data. We estimated associated errors using the shape of the cross-correlation function and tested the method's performance with synthetic displacements applied to a TLS point cloud. We applied the method to the toe of the episodically active Cleveland Corral Landslide in northern California using TLS data acquired in June 2005–January 2007 and January–May 2010. Estimated displacements ranged from decimeters to several meters and they agreed well with independent measurements at better than 9% root mean squared (RMS) error. For each of the time periods, the method provided a smooth, nearly continuous displacement field that coincides with independently mapped boundaries of the slide and permits further kinematic and mechanical inference. For the 2010 data set, for instance, the PIV-derived displacement field identified a diffuse zone of displacement that preceded by over a month the development of a new lateral shear zone. Additionally, the upslope and downslope displacement gradients delineated by the dense PIV field elucidated the non-rigid behavior of the slide.
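    
    At its core, PIV estimates the shift between two interrogation windows as the peak of their cross-correlation, which is cheap to compute via the FFT once the point cloud has been gridded. A minimal integer-pixel sketch of that step (the full method adds sub-pixel peak fitting and error estimation from the correlation shape):

```python
import numpy as np

def piv_shift(window_a, window_b):
    """Integer-cell PIV: the displacement is the argmax of the circular
    cross-correlation between two gridded elevation/intensity windows."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(window_a).conj() *
                                np.fft.fft2(window_b)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    # Wrap circular indices to signed shifts.
    if dy > ny // 2:
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return int(dy), int(dx)

# Toy gridded surface with two features, then the "ground" moves (2, 3) cells.
a = np.zeros((16, 16))
a[4, 5] = 1.0
a[7, 9] = 1.0
b = np.roll(np.roll(a, 2, axis=0), 3, axis=1)
shift = piv_shift(a, b)
```

    Repeating this per window over the gridded TLS data is what yields the nearly continuous displacement field described in the abstract.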

  13. Sparse Unorganized Point Cloud Based Relative Pose Estimation for Uncooperative Space Target.

    PubMed

    Yin, Fang; Chou, Wusheng; Wu, Yun; Yang, Guang; Xu, Song

    2018-03-28

    This paper proposes an autonomous algorithm to determine the relative pose between a chaser spacecraft and an uncooperative space target, which is essential in advanced space applications, e.g., on-orbit servicing missions. The proposed method, named the Congruent Tetrahedron Align (CTA) algorithm, uses the very sparse unorganized 3D point cloud acquired by a LIDAR sensor and does not require any prior pose information. The core of the method is to determine the relative pose by searching for congruent tetrahedra in the scanning point cloud and the model point cloud on the basis of the target's known model. A two-level index hash table is built to speed up the search. In addition, the Iterative Closest Point (ICP) algorithm is used for pose tracking after CTA. In order to evaluate the method under arbitrary initial attitudes, a simulation system is presented. Specifically, the ability of the proposed method to provide the initial pose needed for the tracking algorithm is demonstrated, as well as its robustness against noise. Finally, a field experiment is conducted, and the results demonstrate the effectiveness of the proposed method.
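The ICP pose-tracking stage mentioned above can be illustrated with a minimal 2D sketch (the paper works in 3D; 2D is used here because the best-fit rotation then has a simple closed form). The function below performs a single point-to-point ICP iteration with brute-force nearest-neighbour matching; all names are illustrative.

```python
import math

def icp_step_2d(src, dst):
    """One point-to-point ICP iteration in 2D: match each source point to its
    nearest destination point, then solve the best-fit rotation/translation in
    closed form (2D Kabsch). Returns (theta, tx, ty) and the moved points."""
    # Nearest-neighbour correspondences (brute force).
    pairs = [(p, min(dst, key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2))
             for p in src]
    n = len(pairs)
    cx_s = sum(p[0] for p, _ in pairs) / n
    cy_s = sum(p[1] for p, _ in pairs) / n
    cx_d = sum(q[0] for _, q in pairs) / n
    cy_d = sum(q[1] for _, q in pairs) / n
    # Closed-form 2D rotation from the centred correspondences.
    sxx = sum((p[0]-cx_s)*(q[0]-cx_d) + (p[1]-cy_s)*(q[1]-cy_d) for p, q in pairs)
    sxy = sum((p[0]-cx_s)*(q[1]-cy_d) - (p[1]-cy_s)*(q[0]-cx_d) for p, q in pairs)
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c*cx_s - s*cy_s)
    ty = cy_d - (s*cx_s + c*cy_s)
    moved = [(c*x - s*y + tx, s*x + c*y + ty) for x, y in src]
    return (theta, tx, ty), moved
```

Repeating the step until the transform stops changing gives the usual ICP loop; CTA supplies the coarse initial pose that keeps the nearest-neighbour matching from falling into a wrong local minimum.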

  14. Sparse Unorganized Point Cloud Based Relative Pose Estimation for Uncooperative Space Target

    PubMed Central

    Chou, Wusheng; Wu, Yun; Yang, Guang; Xu, Song

    2018-01-01

    This paper proposes an autonomous algorithm to determine the relative pose between a chaser spacecraft and an uncooperative space target, which is essential in advanced space applications, e.g., on-orbit servicing missions. The proposed method, named the Congruent Tetrahedron Align (CTA) algorithm, uses the very sparse unorganized 3D point cloud acquired by a LIDAR sensor and does not require any prior pose information. The core of the method is to determine the relative pose by searching for congruent tetrahedra in the scanning point cloud and the model point cloud on the basis of the target's known model. A two-level index hash table is built to speed up the search. In addition, the Iterative Closest Point (ICP) algorithm is used for pose tracking after CTA. In order to evaluate the method under arbitrary initial attitudes, a simulation system is presented. Specifically, the ability of the proposed method to provide the initial pose needed for the tracking algorithm is demonstrated, as well as its robustness against noise. Finally, a field experiment is conducted, and the results demonstrate the effectiveness of the proposed method. PMID:29597323

  15. Pointo - a Low Cost Solution to Point Cloud Processing

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Winkler, S.

    2017-11-01

    With advances in technology, access to data, especially 3D point cloud data, is becoming an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners, or with very time-consuming methods such as photogrammetry. Most available software for 3D point cloud processing is designed for experts and specialists in this field and usually comes in large packages containing a variety of methods and tools. The result is software that is expensive to acquire and difficult to use, the difficulty being caused by the complicated user interfaces required to accommodate long feature lists. These complex packages aim to provide a powerful tool for a specific group of specialists, but most of their features are not required by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, they generally rely on expensive, modern hardware and are often compatible with only one operating system. Many point cloud customers are not point cloud processing experts and are unwilling to bear the high acquisition costs of such software and hardware. In this paper we introduce a solution for low-cost point cloud processing, designed to accommodate the needs of the average point cloud user. To reduce cost and complexity, our approach focuses on one functionality at a time, in contrast with most available software and tools, which aim to solve as many problems as possible at once. This simple, user-oriented design improves the user experience and allows us to optimize our methods into efficient software. We introduce the Pointo family as a series of connected programs providing easy-to-use tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family, providing fast and efficient visualization with the ability to add annotation and documentation to the point clouds.

  16. MLS data segmentation using Point Cloud Library procedures. (Polish Title: Segmentacja danych MLS z użyciem procedur Point Cloud Library)

    NASA Astrophysics Data System (ADS)

    Grochocka, M.

    2013-12-01

    Mobile laser scanning (MLS) is a dynamically developing measurement technology that is becoming increasingly widespread for acquiring three-dimensional spatial information. Continuous technical progress, based on the use of new tools and the development of technology, and thus on better use of existing resources, reveals new horizons for the extensive use of MLS technology. Mobile laser scanning systems are usually used for mapping linear objects, in particular for inventories of roads, railways, bridges, shorelines, shafts, tunnels, and even geometrically complex urban spaces. The measurement is performed from the perspective of the object's users and does not interfere with traffic or ongoing work. This paper presents initial results of the segmentation of data acquired by MLS. The data used in this work were obtained as part of an inventory measurement of railway line infrastructure. The point clouds were measured using profile scanners installed on a railway platform. To process the data, the open-source Point Cloud Library (PCL) was used. PCL provides template-based programming libraries; it is an open, independent, large-scale project for 2D/3D image and point cloud processing. PCL is released under the terms of the BSD license (Berkeley Software Distribution License), which means it is free for commercial and research use. The article presents a number of issues related to the use of this software and its capabilities. The segmentation of the data is based on the pcl_segmentation template library, which contains algorithms for separating clusters. These algorithms are best suited to processing point clouds consisting of a number of spatially isolated regions. The library performs cluster extraction based on sample-consensus model fitting for various parametric models (planes, cylinders, spheres, lines, etc.). Most of the mathematical operations are carried out using the Eigen library, a set of templates for linear algebra.
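The cluster-separation behaviour of `pcl_segmentation` can be sketched without PCL itself. The following pure-Python, brute-force version of Euclidean cluster extraction groups points that are connected through neighbours closer than a distance tolerance; it mirrors the idea, not the library's actual (kd-tree accelerated) implementation.

```python
from collections import deque

def euclidean_clusters(points, tol):
    """Group 3D points into spatially isolated clusters: two points belong to
    the same cluster if they are connected by a chain of neighbours closer
    than `tol`. Brute-force sketch of Euclidean cluster extraction."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            # All still-unvisited points within `tol` of point i.
            near = [j for j in unvisited
                    if sum((points[i][k] - points[j][k])**2
                           for k in range(3)) <= tol * tol]
            for j in near:
                unvisited.discard(j)
            queue.extend(near)
            cluster.extend(near)
        clusters.append(sorted(cluster))
    return clusters
```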

  17. PET attenuation correction for flexible MRI surface coils in hybrid PET/MRI using a 3D depth camera

    NASA Astrophysics Data System (ADS)

    Frohwein, Lynn J.; Heß, Mirco; Schlicher, Dominik; Bolwin, Konstantin; Büther, Florian; Jiang, Xiaoyi; Schäfers, Klaus P.

    2018-01-01

    PET attenuation correction for flexible MRI radio frequency surface coils in hybrid PET/MRI is still a challenging task, as the position and shape of these coils are subject to large inter-patient variability. The purpose of this feasibility study is to develop a novel method for incorporating attenuation information about flexible surface coils into PET reconstruction using the Microsoft Kinect V2 depth camera. The depth information is used to determine a dense point cloud of the coil's surface representing the shape of the coil. From a CT template, acquired once in advance, surface information of the coil is likewise extracted and converted into a point cloud. The two point clouds are then registered using a combination of an iterative-closest-point (ICP) method and a partially rigid registration step. Using the transformation derived from the point clouds, the CT template is warped and thereby adapted to the PET/MRI scan setup. The transformed CT template is then converted from Hounsfield units into linear attenuation coefficients to yield an attenuation map. The resulting fitted attenuation map is integrated into the patient-specific DIXON-based attenuation map of the actual PET/MRI scan. A reconstruction of phantom PET data acquired with the coil present in the field-of-view (FoV), but without the corresponding coil attenuation map, shows large artifacts in regions close to the coil. The overall count loss is determined to be around 13% compared to a PET scan without the coil present in the FoV. A reconstruction using the new μ-map resulted in strongly reduced artifacts as well as increased overall PET intensities, with a remaining relative difference of about 1% relative to a PET scan without the coil in the FoV.

  18. A Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed on the assumption that a point cloud can be seen as a mixture of Gaussian models, so that separating ground points from non-ground points can be recast as separating the components of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize the separation: EM calculates maximum likelihood estimates of the mixture parameters, and using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled with the component of larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired with the EM method. The proposed algorithm was tested on two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To evaluate the proposed method quantitatively, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48% total error, which is much lower than that of most of the eight classical filtering algorithms reported by the ISPRS.
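The EM separation described above can be sketched for the one-dimensional case, treating point height as the only feature (the paper's feature choice may be richer). The routine below fits a two-component Gaussian mixture by EM and returns each point's probability of belonging to the lower, ground-like component; the function name and initialization are illustrative.

```python
import math

def em_two_gaussians(heights, iters=50):
    """Fit a two-component 1D Gaussian mixture to point heights with EM and
    return per-point probabilities of belonging to the lower (ground-like)
    component. A scalar stand-in for the mixture separation described above."""
    lo, hi = min(heights), max(heights)
    mu, var, w = [lo, hi], [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for h in heights:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(h - mu[k])**2 / (2 * var[k])) for k in (0, 1)]
            tot = p[0] + p[1]
            resp.append((p[0] / tot, p[1] / tot))
        # M-step: re-estimate weights, means and variances.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(heights)
            mu[k] = sum(r[k] * h for r, h in zip(resp, heights)) / nk
            var[k] = max(1e-6, sum(r[k] * (h - mu[k])**2
                                   for r, h in zip(resp, heights)) / nk)
    return [r[0] for r in resp]
```

Thresholding the returned probability at 0.5 yields the ground/object labelling; the intensity-based refinement of the paper would act on top of this.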

  19. 3D Modeling of Components of a Garden by Using Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Kumazaki, R.; Kunii, Y.

    2016-06-01

    Laser measurement is currently applied to several tasks such as plumbing management, road investigation through mobile mapping systems, and elevation model utilization through airborne LiDAR. Effective laser measurement methods have been well-documented in civil engineering, but few attempts have been made to establish equally effective methods in landscape engineering. By using point cloud data acquired through laser measurement, the aesthetic landscaping of Japanese gardens can be enhanced. This study focuses on simple landscape simulations for pruning and rearranging trees as well as rearranging rocks, lanterns, and other garden features by using point cloud data. However, such simulations lack concreteness. Therefore, this study considers the construction of a library of garden features extracted from point cloud data. The library would serve as a resource for creating new gardens and simulating gardens prior to conducting repairs. Extracted garden features are imported as 3ds Max objects, and realistic 3D models are generated by using a material editor system. As further work toward the publication of a 3D model library, file formats for tree crowns and trunks should be adjusted. Moreover, reducing the size of created models is necessary. Models created using point cloud data are informative because simply shaped garden features such as trees are often seen in the 3D industry.

  20. Object recognition and localization from 3D point clouds by maximum-likelihood estimation

    NASA Astrophysics Data System (ADS)

    Dantanarayana, Harshana G.; Huntley, Jonathan M.

    2017-08-01

    We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike 'interest point'-based algorithms, which normally discard such data. Compared to the 6D Hough transform it has negligible memory requirements, and it is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem and subsequent pose refinement, through an appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for the two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, demonstrating an RMS alignment error as low as 0.3 mm.

  1. Road traffic sign detection and classification from mobile LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Weng, Shengxia; Li, Jonathan; Chen, Yiping; Wang, Cheng

    2016-03-01

    Traffic signs are important roadway assets that provide valuable information about the road, helping drivers behave more safely and efficiently. Due to the development of mobile mapping systems that can efficiently acquire dense point clouds along the road, automated detection and recognition of road assets has become an important research issue. This paper deals with the detection and classification of traffic signs in outdoor environments using mobile light detection and ranging (LiDAR) and inertial navigation technologies. The proposed method contains two main steps. It starts with an initial detection of traffic signs based on the intensity attributes of point clouds, as traffic signs are always painted with highly reflective materials. Then, the classification of traffic signs is achieved based on geometric shape and the pairwise 3D shape context. Some results and performance analyses are provided to show the effectiveness and limits of the proposed method. The experimental results demonstrate the feasibility and effectiveness of the proposed method in detecting and classifying traffic signs from mobile LiDAR point clouds.
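The first step, intensity-based detection, can be sketched as a simple reflectance threshold followed by coarse spatial grouping. This is only an illustration of the idea: the 1 m grid-cell grouping, the threshold value and the function name are hypothetical, not the authors' actual procedure.

```python
def detect_sign_candidates(points, intensity_threshold, min_points):
    """First-stage detection sketch: keep highly reflective points (traffic
    signs are painted with retro-reflective materials), then group them by
    rounding to a coarse 1 m grid and keep groups with enough support.
    `points` are (x, y, z, intensity) tuples."""
    bright = [p for p in points if p[3] >= intensity_threshold]
    groups = {}
    for x, y, z, i in bright:
        groups.setdefault((round(x), round(y)), []).append((x, y, z))
    return [pts for pts in groups.values() if len(pts) >= min_points]
```

Each surviving group would then be passed to the shape-based classification stage.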

  2. See-Through Imaging of Laser-Scanned 3D Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  3. An Improved Method for Real-Time 3D Construction of DTM

    NASA Astrophysics Data System (ADS)

    Wei, Yi

    This paper discusses the real-time optimal construction of DTMs through two measures. The first is to improve the coordinate transformation of discrete points acquired from lidar: after processing a total of 10,000 data points, the formula-based transformation took 0.810 s while the table look-up method took 0.188 s, indicating that the latter is superior. The second is to adjust the density of the point cloud acquired from lidar: a suitable proportion of the data points is used for 3D construction to meet different needs of 3D imaging, ultimately increasing the efficiency of DTM construction while saving system resources.
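The table look-up transformation can be sketched as follows: the trigonometric factors are precomputed once for a discrete set of angles, so each point conversion costs an index lookup and two multiplications instead of two trigonometric evaluations. The 0.01° step and the function name are illustrative choices, not the paper's.

```python
import math

# Pre-computed sine/cosine tables at 0.01-degree resolution: the per-point
# transformation then needs only an index lookup instead of two trig calls.
STEP = 0.01
SIN = [math.sin(math.radians(i * STEP)) for i in range(36000)]
COS = [math.cos(math.radians(i * STEP)) for i in range(36000)]

def polar_to_xy(r, angle_deg):
    """Polar (range, bearing) to Cartesian using the lookup tables."""
    idx = int(round(angle_deg / STEP)) % 36000
    return r * COS[idx], r * SIN[idx]
```

The trade-off is the table's quantization error (here at most half of 0.01°), which is usually negligible against lidar ranging noise.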

  4. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by a depth map. Computer vision (CV) algorithms are used for orientation and are assessed for the correctness of tie-point detection, computation time, and difficulty of implementation. The BRISK, FASRT, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.
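The "panoramic image enriched by the depth map" representation can be sketched as an equirectangular projection of the point cloud, where each pixel stores the range of the nearest point falling into it. The function name, the zero-as-empty convention and the resolution handling below are simplifying assumptions, not the paper's implementation.

```python
import math

def to_panorama(points, width, height):
    """Project a TLS point cloud onto an equirectangular grid: azimuth maps
    to columns, elevation to rows (row 0 = zenith), and each cell stores the
    distance to the nearest point that falls into it (a depth 'panorama')."""
    img = [[0.0] * width for _ in range(height)]
    for x, y, z in points:
        r = math.sqrt(x*x + y*y + z*z)
        if r == 0.0:
            continue
        az = math.atan2(y, x)                      # -pi .. pi
        el = math.asin(z / r)                      # -pi/2 .. pi/2
        col = min(width - 1, int((az + math.pi) / (2 * math.pi) * width))
        row = min(height - 1, int((math.pi / 2 - el) / math.pi * height))
        if img[row][col] == 0.0 or r < img[row][col]:
            img[row][col] = r                      # keep the nearest return
    return img
```

Feature detectors such as SIFT or SURF can then be run on this 2D image exactly as on a photograph.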

  5. Applications of Panoramic Images: from 720° Panorama to Interior 3D Models of Augmented Reality

    NASA Astrophysics Data System (ADS)

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only a 720° panorama, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, which can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama: the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud assisted in determining the locations of building objects using a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model, replacing the guide map or floor plan commonly used in an online touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models; however, this is currently a manual and labor-intensive process, and research is being carried out to increase the degree of automation of these procedures.

  6. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers, while mobile laser scanning (MLS) data acquired at different epochs provide accurate 3D geometry for change detection but are very expensive for periodic acquisition. This paper proposes a new method for change detection at street level using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured by an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometric changes between epochs. The method automatically marks possible changes in each view, providing a cost-efficient approach to frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with the data cleaned and classified by semi-automatic means. In the second step, terrestrial or mobile mapping images from a later epoch are taken and registered to the point cloud, and the point clouds are then projected onto each image by a weighted-window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometric consistency between point clouds and stereo images. Finally, an over-segmentation-based graph cut optimization is carried out, taking into account the color, depth and class information to compute the changed areas in image space. The proposed method is invariant to light changes, robust to small co-registration errors between images and point clouds, and can be applied straightforwardly to 3D polyhedral models. It can be used for 3D street data updating, city infrastructure management and damage monitoring in complex urban scenes.
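The z-buffering projection used in the second step can be sketched in its plain (unweighted) form: each 3D point is projected through a pinhole model and only the nearest depth per pixel is kept, so occluded points do not contaminate the image. The weighted-window variant in the paper refines this; the sketch below, including its parameter names, is illustrative.

```python
def zbuffer_project(points, width, height, scale):
    """Project 3D points (camera frame, z forward) onto a width x height
    pixel grid with a pinhole model, keeping only the nearest depth per
    pixel so occluded points do not leak into the image."""
    depth = [[float("inf")] * width for _ in range(height)]
    for x, y, z in points:
        if z <= 0:
            continue                           # behind the camera
        u = int(width / 2 + scale * x / z)     # perspective division
        v = int(height / 2 + scale * y / z)
        if 0 <= u < width and 0 <= v < height and z < depth[v][u]:
            depth[v][u] = z
    return depth
```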

  7. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    NASA Astrophysics Data System (ADS)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for extracting discontinuity orientation automatically from a rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using the Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of the discontinuity planes. The method is first validated on the point cloud of a small piece of a rock slope acquired by photogrammetry, with the extracted discontinuity orientations compared against orientations measured in the field. It is then applied to publicly available LiDAR data of a road-cut rock slope from the Rockbench repository, and the extracted discontinuity orientations are compared with those of the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable, of high accuracy, and able to meet engineering needs.
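Step (3), RANSAC plane fitting, can be sketched as follows: repeatedly sample three points, build the plane through them, count the points within a distance tolerance, and keep the best-supported plane. The iteration count, tolerance and seeding below are illustrative defaults, not the authors' settings.

```python
import random

def ransac_plane(points, iters=200, tol=0.05, seed=1):
    """Fit a plane to noisy 3D points with RANSAC: repeatedly take 3 random
    points, form the plane through them, and keep the plane supported by the
    most inliers (points within `tol` of it). Returns (normal, d, inliers)
    for the plane n . p = d with a unit normal n."""
    rng = random.Random(seed)
    best = (None, 0.0, [])
    for _ in range(iters):
        a, b, c = rng.sample(points, 3)
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        # Plane normal = cross product of two in-plane vectors.
        n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
        norm = sum(t * t for t in n) ** 0.5
        if norm < 1e-12:
            continue                  # degenerate (collinear) sample
        n = [t / norm for t in n]
        d = sum(n[i] * a[i] for i in range(3))
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) - d) <= tol]
        if len(inliers) > len(best[2]):
            best = (n, d, inliers)
    return best
```

The fitted normal of each discontinuity plane is what gets converted to dip/dip-direction in step (4).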

  8. Applications of low altitude photogrammetry for morphometry, displacements, and landform modeling

    NASA Astrophysics Data System (ADS)

    Gomez, F. G.; Polun, S. G.; Hickcox, K.; Miles, C.; Delisle, C.; Beem, J. R.

    2016-12-01

    Low-altitude aerial surveying is emerging as a tool that greatly improves the ease and efficiency of measuring landforms for quantitative geomorphic analyses. High-resolution, close-range photogrammetry produces dense, 3-dimensional point clouds that facilitate the construction of digital surface models, as well as a potential means of classifying ground targets using spatial structure. This study presents results from recent applications of UAS-based photogrammetry, including high-resolution surface morphometry of a lava flow, repeat-pass applications to mass movements, and fault scarp degradation modeling. Depending upon the desired photographic resolution and the platform/payload flown, aerial photos are typically acquired at altitudes of 40 - 100 meters above the ground surface. In all cases, high-precision ground control points are key for accurate (and repeatable) orientation: relying on low-precision GPS coordinates (whether on the ground or geotags in the aerial photos) typically results in substantial rotations (tilt) of the reference frame. Using common ground control points between repeat surveys results in matching point clouds with RMS residuals better than 10 cm. In arid regions, the point cloud is used to assess lava flow surface roughness using multi-scale measurements of point cloud dimensionality. For the landslide study, the point cloud provides a basis for assessing possible displacements. In addition, the high-resolution orthophotos facilitate mapping of fractures and their growth. For neotectonic applications, we compare fault scarp modeling results from UAV-derived point clouds with field-based surveys (kinematic GPS and electronic distance measurements). In summary, a wide-ranging toolbox of low-altitude aerial platforms is becoming available to field geoscientists. In many instances, these tools offer convenience and reduced cost compared with the effort and expense of contracting acquisition of aerial imagery.

  9. SEMANTIC3D.NET: A New Large-Scale Point Cloud Classification Benchmark

    NASA Astrophysics Data System (ADS)

    Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J. D.; Schindler, K.; Pollefeys, M.

    2017-05-01

    This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a workhorse and already show remarkable performance improvements over the state of the art. CNNs have become the de facto standard for many tasks in computer vision and machine learning, like semantic segmentation or object detection in images, but have not yet led to a true breakthrough for 3D point cloud labelling tasks due to the lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides denser and more complete point clouds, with a much higher overall number of labelled points than those already available to the research community. We further provide baseline method descriptions and a comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.

  10. Above-bottom biomass retrieval of aquatic plants with regression models and SfM data acquired by a UAV platform - A case study in Wild Duck Lake Wetland, Beijing, China

    NASA Astrophysics Data System (ADS)

    Jing, Ran; Gong, Zhaoning; Zhao, Wenji; Pu, Ruiliang; Deng, Lei

    2017-12-01

    Above-bottom biomass (ABB) is considered an important parameter for measuring the growth status of aquatic plants and is of great significance for assessing the health status of wetland ecosystems. In this study, the Structure from Motion (SfM) technique was used to reconstruct the study area from highly overlapping images acquired by an unmanned aerial vehicle (UAV). We generated orthoimages and SfM dense point cloud data, from which vegetation indices (VIs) and SfM point cloud variables, including the average height (HAVG), standard deviation of height (HSD) and coefficient of variation of height (HCV), were extracted. These VIs and SfM point cloud variables can effectively characterize the growth status of aquatic plants, and thus were used to develop a simple linear regression model (SLR) and a stepwise linear regression model (SWL) from field-measured ABB samples of aquatic plants. We also utilized a decision tree method to discriminate different types of aquatic plants. The experimental results indicated that (1) the SfM technique can effectively process highly overlapping UAV images and is thus suitable for reconstructing the fine texture features of aquatic plant canopy structure; and (2) an SWL model with the point cloud variables HAVG, HSD and HCV and two VIs, NGRDI and ExGR, as independent variables produced the best prediction of the ABB of aquatic plants in the study area, with a coefficient of determination of 0.84 and a relative root mean square error of 7.13%. This analysis demonstrated a novel method for the quantitative inversion of a growth parameter (i.e., ABB) of aquatic plants in wetlands.
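The point-cloud predictors (HAVG, HSD, HCV) and the SLR model form can be sketched directly. The helper names are illustrative; HCV is taken here as HSD/HAVG, the usual definition of a coefficient of variation, which may differ in detail from the authors' computation.

```python
import math

def height_stats(heights):
    """HAVG, HSD and HCV of point-cloud heights, i.e. the SfM point cloud
    variables used as regression predictors above (population std. dev.)."""
    n = len(heights)
    havg = sum(heights) / n
    hsd = math.sqrt(sum((h - havg)**2 for h in heights) / n)
    return havg, hsd, hsd / havg

def fit_simple_linear(xs, ys):
    """Least-squares fit of y = a + b*x, the SLR model form: biomass as a
    linear function of a single predictor. Returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx)**2 for x in xs))
    return my - b * mx, b
```

The SWL model simply extends this to several predictors, adding or dropping them stepwise by significance.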

  11. Automatic registration of Iphone images to LASER point clouds of the urban structures using shape features

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.

    2013-10-01

    Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two kinds of data from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After obtaining the photo capture position and orientation from the iPhone photograph's metadata, we automatically select the area of interest in the point cloud and transform it into a range image, which has only grayscale intensity levels according to the distance from the image acquisition position. We use local features for registering the iPhone image to the generated range image; the registration process is based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh generated from the LIDAR point cloud. Our experimental results indicate the possible use of the proposed algorithm framework for 3D urban map updating and enhancement purposes.

  12. Integration of Geodata in Documenting Castle Ruins

    NASA Astrophysics Data System (ADS)

    Delis, P.; Wojtkowska, M.; Nerc, P.; Ewiak, I.; Lada, A.

    2016-06-01

    Textured three-dimensional models are currently one of the standard methods of representing the results of photogrammetric work. A realistic 3D model combines the geometrical relations between the structure's elements with realistic textures of each of its elements. Data used to create 3D models of structures can be derived from many different sources. The most commonly used tools for documentation purposes are a digital camera and, nowadays, terrestrial laser scanning (TLS). Integration of data acquired from different sources allows modelling and visualization of 3D models of historical structures. An additional benefit of data integration is the possibility of filling in missing points, for example in point clouds. The paper shows the possibility of integrating data from terrestrial laser scanning with digital imagery and provides an analysis of the accuracy of the presented methods. The paper describes results obtained from raw data consisting of a point cloud measured using terrestrial laser scanning acquired with a Leica ScanStation2 and digital imagery taken using a Kodak DCS Pro 14N camera. The studied structure is the ruins of the Ilza castle in Poland.

  13. Event-by-event PET image reconstruction using list-mode origin ensembles algorithm

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy

    2016-03-01

    There is a great demand for real time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it should be used in the image reconstruction software. This would greatly speed up the image reconstruction, since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that the reconstruction of the image does not start until all the data for the current image frame have been acquired. Implementing EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward, as multiple (computationally expensive) updates to the image estimate are required. In this work, an alternative Origin Ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode, and we investigate whether it is a viable alternative for real-time image reconstruction. In the OE algorithm, all acquired events are seen as points that are located somewhere along the corresponding lines-of-response (LORs), together forming a point cloud. Iteratively, with a multitude of quasi-random shifts following the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy that is similar to MLEM. New data can be naturally added into the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, proving the feasibility of the proposed approach.
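
    A toy sketch of the origin-ensembles idea described above, with events as origins on horizontal LORs in a 2D grid and a simplified density-driven acceptance rule (the published algorithm uses a proper likelihood ratio; the acceptance rule below is an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
GRID = 32  # 2D toy image, GRID x GRID voxels

# Each "event" is a horizontal LOR (a row of the grid) with a current
# origin position x along that row; the image is the histogram of origins.
n_events = 2000
rows = rng.integers(0, GRID, n_events)
xs = rng.integers(0, GRID, n_events)  # initial origins: uniform on each LOR

def image(rows, xs):
    img = np.zeros((GRID, GRID), dtype=int)
    np.add.at(img, (rows, xs), 1)
    return img

img = image(rows, xs)
for _ in range(20000):
    e = rng.integers(n_events)        # pick a random event
    x_new = rng.integers(0, GRID)     # propose a new origin on its LOR
    r, x_old = rows[e], xs[e]
    # Simplified acceptance favouring moves into denser voxels (assumed rule)
    if rng.random() < min(1.0, (img[r, x_new] + 1) / max(img[r, x_old], 1)):
        img[r, x_old] -= 1
        img[r, x_new] += 1
        xs[e] = x_new
```

    The EBE property is visible here: a newly detected event simply appends a new (row, x) pair and joins the shifting loop, with no global image update.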

  14. Automated Detection of Geomorphic Features in LiDAR Point Clouds of Various Spatial Density

    NASA Astrophysics Data System (ADS)

    Dorninger, Peter; Székely, Balázs; Zámolyi, András; Nothegger, Clemens

    2010-05-01

    LiDAR, also referred to as laser scanning, has proved to be an important tool for topographic data acquisition. Terrestrial laser scanning allows for accurate (several millimeter) and high resolution (several centimeter) data acquisition at distances of up to some hundred meters. By contrast, airborne laser scanning allows for acquiring homogeneous data for large areas, albeit with lower accuracy (decimeter) and resolution (some ten points per square meter) compared to terrestrial laser scanning. Hence, terrestrial laser scanning is preferably used for precise data acquisition of limited areas such as landslides or steep structures, while airborne laser scanning is well suited for the acquisition of topographic data of huge areas or even whole countries. Laser scanners acquire more or less homogeneously distributed point clouds. These points represent natural objects like terrain and vegetation and artificial objects like buildings, streets or power lines. Typical products derived from such data are geometric models such as digital surface models representing all natural and artificial objects and digital terrain models representing the geomorphic topography only. As the LiDAR technology evolves, the amount of data produced increases almost exponentially even in smaller projects. This poses a considerable challenge for the end user of the data: the experimenter has to have enough knowledge, experience and computer capacity in order to manage the acquired dataset and to derive geomorphologically relevant information from the raw or intermediate data products. Additionally, all this information might need to be integrated with other data like orthophotos. In all these cases, in general, interactive interpretation is necessary to determine geomorphic structures from such models to achieve effective data reduction. There is little support for the automatic determination of characteristic features and their statistical evaluation.
    From the lessons learnt from automated extraction and modeling of buildings (Dorninger & Pfeifer, 2008) we expected that similar generalizations for geomorphic features can be achieved. Our aim is to recognize as many features as possible from the point cloud in the same processing loop, provided they can be geometrically described with appropriate accuracy (e.g., as a plane). For this, we propose to apply a segmentation process that determines connected, planar structures within a surface represented by a point cloud. It is based on a robust determination of local tangential planes for all points acquired (Nothegger & Dorninger, 2009). It assumes that for points belonging to a distinct planar structure, similar tangential planes can be determined. In passing, points acquired on non-planar structures such as vegetation can be identified and eliminated. The plane parameters are used to define a four-dimensional feature space which is used to determine seed clusters globally for the whole area of interest. Starting from these seeds, all points defining a connected, planar region are assigned to a segment. Due to the design of the algorithm, millions of input points can be processed in acceptable processing time on standard computer systems. This allows for processing geomorphically representative areas at once. For each segment, numerous parameters are derived which can be used for further exploitation. These are, for example, location, area, aspect, slope, and roughness. To prove the applicability of our method for automated geomorphic terrain analysis, we used terrestrial and airborne laser scanning data acquired at two locations. The data of the Doren landslide located in Vorarlberg, Austria, was acquired by a terrestrial Riegl LS-321 laser scanner in 2008, by a terrestrial Riegl LMS-Z420i laser scanner in 2009, and additionally by three airborne LiDAR measurement campaigns, organized by the Landesvermessungsamt Vorarlberg, Feldkirch, in 2003, 2006, and 2007.
    The measurement distance of the terrestrial measurements varied considerably because of the various base points that were needed to cover the whole landslide. The resulting point spacing is approximately 20 cm. The achievable accuracy was about 10 cm. The airborne data was acquired with mean point densities of 2 points per square meter. The accuracy of this dataset was about 15 cm. The second testing site is an area of the Leithagebirge in Burgenland, Austria. The data was acquired by an airborne Riegl LMS-Q560 laser scanner mounted on a helicopter. The mean point density was 6-8 points per square meter with an accuracy better than 10 cm. We applied our processing chain to the datasets individually. First, they were transformed to local reference frames and fine adjustments of the individual scans and flight strips, respectively, were applied. Subsequently, the local regression planes were determined for each point of the point clouds and planar features were extracted by means of the proposed approach. It turned out that even small displacements can be detected if the number of points used for the fit is sufficient to define a parallel but somewhat displaced plane. Smaller cracks and erosional incisions do not disturb the plane fitting, because they are mostly filtered out as outliers. A comparison of the different campaigns of the Doren site showed striking matches of the detected geomorphic structures. Although the geomorphic structure of the Leithagebirge differs from the Doren landslide, and the scales of the two studies were also different, reliable results were achieved in both cases. Additionally, the approach turned out to be highly robust against points which were not located on the terrain. Hence, no false positives were determined within the dense vegetation above the terrain, while it was possible to cover the investigated areas completely with reliable planes.
    In some cases, however, some structures in the tree crowns were also recognized, but these small patches could easily be sorted out from the geomorphically relevant results. Consequently, it could be verified that a topographic surface can be properly represented by a set of distinct planar structures. Therefore, the subsequent interpretation of those planes with respect to geomorphic characteristics is acceptable. The additional in situ geological measurements verified some of our findings, in the sense that similar primary directions were found in the LiDAR data set and in the geologic field evidence (Zámolyi et al., 2010, this volume). References: P. Dorninger, N. Pfeifer: "A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds"; Sensors, 8 (2008), 11; 7323-7343. C. Nothegger, P. Dorninger: "3D Filtering of High-Resolution Terrestrial Laser Scanner Point Clouds for Cultural Heritage Documentation"; Photogrammetrie, Fernerkundung, Geoinformation, 1 (2009), 53-63. A. Zámolyi, B. Székely, G. Molnár, A. Roncat, P. Dorninger, A. Pocsai, M. Wyszyski, P. Drexel: "Comparison of LiDAR derived directional topographic features with geologic field evidence: a case study of the Doren landslide (Vorarlberg, Austria)"; EGU General Assembly 2010, Vienna, Austria
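
    The core of the segmentation described above is the determination of a local tangential plane for every point. A minimal, non-robust stand-in using a plain PCA fit over the k nearest neighbours (the cited method uses a robust estimator; k and the flatness measure below are assumptions):

```python
import numpy as np

def local_planes(points, k=10):
    """Fit a tangential plane to the k nearest neighbours of each point
    via PCA (least squares; the cited method uses a robust variant).
    Returns a unit normal and a flatness score per point."""
    n = len(points)
    normals = np.zeros((n, 3))
    flatness = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(d)[:k]]       # brute-force kNN
        c = nbrs - nbrs.mean(axis=0)
        # Eigenvector of the smallest eigenvalue of the scatter = plane normal
        w, v = np.linalg.eigh(c.T @ c)
        normals[i] = v[:, 0]
        flatness[i] = w[0] / (w.sum() + 1e-12)  # small = planar, large = scattered
    return normals, flatness
```

    Points with a large flatness score (e.g. vegetation) can be discarded, and the plane parameters of the remaining points populate the four-dimensional feature space used for seed clustering.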

  15. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    NASA Astrophysics Data System (ADS)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We focus our work in particular on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes a probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting results on industrial data sets.

  16. Pairwise registration of TLS point clouds using covariance descriptors and a non-cooperative game

    NASA Astrophysics Data System (ADS)

    Zai, Dawei; Li, Jonathan; Guo, Yulan; Cheng, Ming; Huang, Pengdi; Cao, Xiaofei; Wang, Cheng

    2017-12-01

    It is challenging to automatically register TLS point clouds with noise, outliers and varying overlap. In this paper, we propose a new method for pairwise registration of TLS point clouds. We first generate covariance matrix descriptors with an adaptive neighborhood size from the point clouds to find candidate correspondences; we then construct a non-cooperative game to isolate mutually compatible correspondences, which are considered as true positives. The method was tested on three models acquired by two different TLS systems. Experimental results demonstrate that our proposed adaptive covariance (ACOV) descriptor is invariant to rigid transformation and robust to noise and varying resolutions. The average registration errors achieved on the three models are 0.46 cm, 0.32 cm and 1.73 cm, respectively. The computation times on these models are about 288 s, 184 s and 903 s, respectively. Moreover, our registration framework using ACOV descriptors and a game theoretic method is superior to the state-of-the-art methods in terms of both registration error and computational time. The experiment on a large outdoor scene further demonstrates the feasibility and effectiveness of our proposed pairwise registration framework.
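
    The covariance-descriptor idea can be sketched as follows: a 3x3 covariance matrix over a local neighbourhood, compared with an affine-invariant metric (a simplified stand-in; the paper's adaptive neighbourhood selection and the game-theoretic matching are not reproduced here):

```python
import numpy as np

def covariance_descriptor(points, center, radius):
    """3x3 covariance of the points inside a spherical neighbourhood --
    a fixed-radius stand-in for the adaptive-neighbourhood ACOV descriptor."""
    nbrs = points[np.linalg.norm(points - center, axis=1) <= radius]
    c = nbrs - nbrs.mean(axis=0)
    return (c.T @ c) / max(len(nbrs) - 1, 1)

def covariance_distance(c1, c2):
    """Affine-invariant Riemannian distance, a common metric for comparing
    covariance descriptors: sqrt(sum(log(eigvals(c1^-1 c2))^2))."""
    w = np.linalg.eigvals(np.linalg.solve(c1, c2))
    return float(np.sqrt(np.sum(np.log(np.real(w)) ** 2)))
```

    Under a rigid motion the descriptor transforms as R C Rᵀ, and the eigenvalue-based distance between two correspondingly transformed descriptors is unchanged, which is one way to see the rigid-transformation invariance claimed above.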

  17. Multibeam 3D Underwater SLAM with Probabilistic Registration.

    PubMed

    Palomer, Albert; Ridao, Pere; Ribas, David

    2016-04-20

    This paper describes a pose-based underwater 3D Simultaneous Localization and Mapping (SLAM) approach using a multibeam echosounder to produce highly consistent underwater maps. The proposed algorithm compounds swath profiles of the seafloor with dead reckoning localization to build surface patches (i.e., point clouds). An Iterative Closest Point (ICP) algorithm with a probabilistic implementation is then used to register the point clouds, taking into account their uncertainties. The registration process is divided into two steps: (1) point-to-point association for coarse registration and (2) point-to-plane association for fine registration. The point clouds of the surfaces to be registered are sub-sampled in order to decrease both the computation time and the potential of falling into local minima during the registration. In addition, a heuristic is used to decrease the complexity of the association step of the ICP from O(n²) to O(n). The performance of the SLAM framework is tested using two real world datasets: first, a 2.5D bathymetric dataset obtained with the usual down-looking multibeam sonar configuration, and second, a full 3D underwater dataset acquired with a multibeam sonar mounted on a pan and tilt unit.
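
    The two-step registration above builds on the classical ICP loop. A minimal sketch of one point-to-point iteration without the probabilistic weighting (brute-force association; the paper's uncertainty handling and point-to-plane stage are omitted):

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: associate each source point with
    its nearest destination point, then solve the best-fit rigid transform
    in closed form with an SVD (Kabsch/Umeyama)."""
    # Association: nearest neighbour, brute force (O(n^2))
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    matched = dst[np.argmin(d, axis=1)]
    # Closed-form rigid alignment of src onto its matched points
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t  # apply as: src @ R.T + t
```

    The O(n²) association matrix is exactly what the heuristic mentioned in the abstract avoids; the sub-sampling also shrinks this matrix directly.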

  18. Combination of Tls Point Clouds and 3d Data from Kinect v2 Sensor to Complete Indoor Models

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    The combination of data coming from multiple sensors is more and more applied to remote sensing issues (multi-sensor imagery) but also in cultural heritage or robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented. Their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor make it possible to reach a better level of detail in the final model, an adapted acquisition protocol may also provide several benefits, such as time savings. The paper aims at analyzing whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.

  19. Integration of Point Clouds from Terrestrial Laser Scanning and Image-Based Matching for Generating High-Resolution Orthoimages

    NASA Astrophysics Data System (ADS)

    Salach, A.; Markiewicz, J. S.; Zawieska, D.

    2016-06-01

    An orthoimage is one of the basic photogrammetric products used for architectural documentation of historical objects; recently, it has become a standard in such work. Considering the increasing popularity of photogrammetric techniques applied in the cultural heritage domain, this research examines the two most popular measuring technologies: terrestrial laser scanning and automatic processing of digital photographs. The basic objective of the work presented in this paper was to optimize the quality of generated high-resolution orthoimages using the integration of data acquired by a Z+F 5006 terrestrial laser scanner and a Canon EOS 5D Mark II digital camera. The subject was one of the walls of the "Blue Chamber" of the Museum of King Jan III's Palace at Wilanów (Warsaw, Poland). The high-resolution images resulting from the integration of the point clouds acquired by the different methods were analysed in detail with respect to geometric and radiometric correctness.

  20. Acquiring Semantically Meaningful Models for Robotic Localization, Mapping and Target Recognition

    DTIC Science & Technology

    2014-12-21

    Recoverable fragments from the report abstract: point feature tracking; recovery of relative motion and visual odometry; loop closure; environment models and sparse clouds of points; object-level segmentation of objects that co-occur with the object of interest.

  1. Geospatial Field Methods: An Undergraduate Course Built Around Point Cloud Construction and Analysis to Promote Spatial Learning and Use of Emerging Technology in Geoscience

    NASA Astrophysics Data System (ADS)

    Bunds, M. P.

    2017-12-01

    Point clouds are a powerful data source in the geosciences, and the emergence of structure-from-motion (SfM) photogrammetric techniques has allowed them to be generated quickly and inexpensively. Consequently, applications of them as well as methods to generate, manipulate, and analyze them warrant inclusion in undergraduate curriculum. In a new course called Geospatial Field Methods at Utah Valley University, students in small groups use SfM to generate a point cloud from imagery collected with a small unmanned aerial system (sUAS) and use it as a primary data source for a research project. Before creating their point clouds, students develop needed technical skills in laboratory and class activities. The students then apply the skills to construct the point clouds, and the research projects and point cloud construction serve as a central theme for the class. Intended student outcomes for the class include: technical skills related to acquiring, processing, and analyzing geospatial data; improved ability to carry out a research project; and increased knowledge related to their specific project. To construct the point clouds, students first plan their field work by outlining the field site, identifying locations for ground control points (GCPs), and loading them onto a handheld GPS for use in the field. They also estimate sUAS flight elevation, speed, and the flight path grid spacing required to produce a point cloud with the resolution required for their project goals. In the field, the students place the GCPs using handheld GPS, and survey the GCP locations using post-processed-kinematic (PPK) or real-time-kinematic (RTK) methods. The students pilot the sUAS and operate its camera according to the parameters that they estimated in planning their field work. Data processing includes obtaining accurate locations for the PPK/RTK base station and GCPs, and SfM processing with Agisoft Photoscan. 
The resulting point clouds are rasterized into digital surface models, assessed for accuracy, and analyzed in Geographic Information System software. Student projects have included mapping and analyzing landslide morphology, fault scarps, and earthquake ground surface rupture. Students have praised the geospatial skills they learn, while helping them stay on schedule to finish their projects remains a challenge.
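
    The flight-planning step mentioned above (estimating flight elevation and flight line spacing for a target point-cloud resolution) follows the standard pinhole ground-sample-distance relation; the camera parameters below are assumed for illustration, not those of the course sUAS:

```python
def flight_altitude_m(gsd_m, focal_mm, pixel_um):
    """Altitude above ground yielding the target ground sample distance,
    from the pinhole relation GSD = pixel_size * H / f."""
    return gsd_m * (focal_mm * 1e-3) / (pixel_um * 1e-6)

def grid_spacing_m(image_width_px, gsd_m, sidelap=0.7):
    """Distance between adjacent flight lines for a given sidelap fraction."""
    footprint = image_width_px * gsd_m  # ground footprint across track
    return footprint * (1.0 - sidelap)

# Assumed camera: 8.8 mm focal length, 2.4 um pixels, 4864 px wide sensor.
# A 2 cm target GSD then implies roughly a 73 m flight altitude and
# about 29 m between flight lines at 70 % sidelap.
alt = flight_altitude_m(0.02, 8.8, 2.4)
spacing = grid_spacing_m(4864, 0.02, sidelap=0.7)
```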

  2. A New Approach for Inspection of Selected Geometric Parameters of a Railway Track Using Image-Based Point Clouds

    PubMed Central

    Gabara, Grzegorz; Sawicki, Piotr

    2018-01-01

    The paper presents the results of testing a proposed image-based point clouds measuring method for geometric parameters determination of a railway track. The study was performed based on a configuration of digital images and reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which have implemented different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition of measurements and inspection of the rail tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011. PMID:29509679

  3. A New Approach for Inspection of Selected Geometric Parameters of a Railway Track Using Image-Based Point Clouds.

    PubMed

    Gabara, Grzegorz; Sawicki, Piotr

    2018-03-06

    The paper presents the results of testing a proposed image-based point clouds measuring method for geometric parameters determination of a railway track. The study was performed based on a configuration of digital images and reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which have implemented different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition of measurements and inspection of the rail tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011.
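
    The cross-section quantities inspected above can be sketched directly: gauge as the horizontal separation and cant as the vertical difference between corresponding rail points, with an RMS comparison against geodetic reference values (the coordinate conventions below are assumptions):

```python
import numpy as np

def gauge_and_cant(left_rail, right_rail):
    """Track gauge (horizontal separation) and cant (vertical difference)
    between corresponding rail points of one cross-section, in metres."""
    dx = np.asarray(right_rail, dtype=float) - np.asarray(left_rail, dtype=float)
    gauge = float(np.hypot(dx[0], dx[1]))  # horizontal plane
    cant = float(abs(dx[2]))               # vertical plane
    return gauge, cant

def rms_difference(measured, reference):
    """RMS of the differences against direct geodetic measurements."""
    d = np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))
```

    The paper's acceptance check then amounts to verifying that the RMS value stays below the 1 mm tolerance of the cited norms.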

  4. Automatic Generation of Indoor Navigable Space Using a Point Cloud and its Scanner Trajectory

    NASA Astrophysics Data System (ADS)

    Staats, B. R.; Diakité, A. A.; Voûte, R. L.; Zlatanova, S.

    2017-09-01

    Automatic generation of indoor navigable models is mostly based on 2D floor plans. However, in many cases the floor plans are out of date. Buildings are not always built according to their blueprints, interiors might change after a few years because of modified walls and doors, and furniture may be repositioned to the user's preferences. Therefore, new approaches for the quick recording of indoor environments should be investigated. This paper concentrates on laser scanning with a Mobile Laser Scanner (MLS) device. The MLS device stores a point cloud and its trajectory. If the MLS device is operated by a human, the trajectory contains information which can be used to distinguish different surfaces. In this paper a method is presented for the identification of walkable surfaces based on the analysis of the point cloud and the trajectory of the MLS scanner. This method consists of several steps. First, the point cloud is voxelized. Second, the trajectory is analysed and projected to acquire seed voxels. Third, these seed voxels are grown into floor regions by means of a region growing process. By identifying dynamic objects, doors and furniture, these floor regions can be modified so that each region represents a specific navigable space inside a building as a free navigable voxel space. By combining the point cloud and its corresponding trajectory, the walkable space can be identified for any type of building, even if the interior is scanned during business hours.
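
    The first two steps (voxelization and trajectory-based seed extraction) can be sketched as follows (the voxel size and the sensor height above the floor are assumed values):

```python
import numpy as np

def voxelize(points, voxel_size=0.1):
    """Map every point to an integer voxel index; return the set of
    occupied voxels and the per-point indices."""
    idx = np.floor(points / voxel_size).astype(int)
    return set(map(tuple, idx)), idx

def seed_voxels(trajectory, occupied, voxel_size=0.1, sensor_height=1.5):
    """Project the scanner trajectory down by the (assumed) sensor height
    to find occupied voxels on the walked floor -- the seeds from which
    the floor regions are grown."""
    feet = trajectory - np.array([0.0, 0.0, sensor_height])
    idx = np.floor(feet / voxel_size).astype(int)
    return {t for t in map(tuple, idx) if t in occupied}
```

    Region growing then expands each seed to neighbouring occupied voxels of similar height until the walkable surface is covered.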

  5. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly by using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems, because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts in an airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of the data encoding is that the models in the database and the input point clouds can be encoded consistently. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors can be extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for the evaluation of data retrieval.
The results of the proposed method show a clear superiority over related methods.
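
    The top-view depth-image encoding described above can be sketched by rasterizing the point cloud onto a horizontal grid and keeping the highest return per cell (the cell size and image size are assumed values):

```python
import numpy as np

def topview_depth_image(points, cell=0.5, size=64):
    """Rasterize an airborne LiDAR building segment into a top-view depth
    image: each pixel keeps the highest point falling into it, so the
    image represents the roof surface. Empty pixels stay NaN."""
    xy = points[:, :2] - points[:, :2].min(axis=0)
    ij = np.minimum((xy / cell).astype(int), size - 1)
    img = np.full((size, size), np.nan)
    for (i, j), z in zip(ij, points[:, 2]):
        if np.isnan(img[j, i]) or z > img[j, i]:
            img[j, i] = z
    return img
```

    Height, edge and plane features are then extracted from this raster exactly as they would be from a depth image rendered from a polygonal model, which is what makes the two encodings consistent.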

  6. The Complex Point Cloud for the Knowledge of the Architectural Heritage. Some Experiences

    NASA Astrophysics Data System (ADS)

    Aveta, C.; Salvatori, M.; Vitelli, G. P.

    2017-05-01

    The present paper aims to present a series of experiences and experimentations that a group of PhD researchers from the University of Naples Federico II conducted over the past decade. This work has concerned the survey and the graphic restitution of monuments and works of art, aimed at their conservation. The targeted querying of complex point clouds acquired by 3D scanners, integrated with photo sensors and thermal imaging, has made it possible to explore new possibilities of investigation. In particular, we will present the scientific results of the experiments carried out on some important historical artifacts with distinct morphological and typological characteristics. According to the aims and needs that emerged during the connotative process, with the support of archival and iconographic historical research, the laser scanner technology has been used in many different ways. New forms of representation, obtained directly from the point cloud, have been tested for the elaboration of thematic studies for documenting the pathologies and the decay of materials, and for correlating visible aspects with invisible aspects of the artifact.

  7. Automatic co-registration of 3D multi-sensor point clouds

    NASA Astrophysics Data System (ADS)

    Persad, Ravi Ancil; Armenakis, Costas

    2017-08-01

    We propose an approach for the automatic coarse alignment of 3D point clouds which have been acquired from various platforms. The method is based on 2D keypoint matching performed on height map images of the point clouds. Initially, a multi-scale wavelet keypoint detector is applied, followed by adaptive non-maxima suppression. A scale, rotation and translation-invariant descriptor is then computed for all keypoints. The descriptor is built using the log-polar mapping of Gabor filter derivatives in combination with the so-called Rapid Transform. In the final step, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour similarity check, together with a threshold-free modified-RANSAC. Experiments with urban and non-urban scenes are presented and results show scale errors ranging from 0.01 to 0.03, 3D rotation errors in the order of 0.2° to 0.3° and 3D translation errors from 0.09 m to 1.1 m.
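
    The bi-directional nearest-neighbour similarity check mentioned above can be sketched as a mutual nearest-neighbour filter over descriptor distances (plain Euclidean distances are assumed here; the paper's Gabor/Rapid Transform descriptor and modified-RANSAC are not reproduced):

```python
import numpy as np

def mutual_matches(desc_src, desc_dst):
    """Bi-directional nearest-neighbour check: keep a pair (i, j) only when
    j is i's nearest descriptor AND i is j's nearest descriptor."""
    d = np.linalg.norm(desc_src[:, None, :] - desc_dst[None, :, :], axis=2)
    fwd = np.argmin(d, axis=1)  # source -> target
    bwd = np.argmin(d, axis=0)  # target -> source
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```

    The surviving correspondences are then handed to the RANSAC-style step to estimate the scale, rotation and translation between the height maps.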

  8. Point-cloud-to-point-cloud technique on tool calibration for dental implant surgical path tracking

    NASA Astrophysics Data System (ADS)

    Lorsakul, Auranuch; Suthakorn, Jackrit; Sinthanayothin, Chanjira

    2008-03-01

    Dental implants are one of the most popular methods of tooth root replacement used in prosthetic dentistry. A computerized navigation system based on a pre-surgical plan is offered to minimize the potential risk of damage to critical anatomic structures of the patient. Dental tool tip calibration is an important intraoperative procedure to determine the relation between the hand-piece tool tip and the hand-piece's markers. With coordinates transferred from preoperative CT data to reality, this parameter is one component of the typical registration problem, and part of a navigation system to be developed for further integration. High accuracy is required, and the relation is obtained by point-cloud-to-point-cloud rigid transformations with singular value decomposition (SVD) for minimizing rigid registration errors. In earlier studies, commercial surgical navigation systems, such as those from BrainLAB and Materialize, have had flexibility problems with tool tip calibration: they either require a special tool tip calibration device or are unable to change to a different tool. The proposed procedure uses the pointing device or hand-piece to touch the pivot, and the transformation matrix is calculated every time the device moves to a new position while the tool tip stays at the same point. The experiment relied on information from the tracking device, image acquisition and image processing algorithms. The key result is that point-cloud-to-point-cloud calibration requires only three images of the tool to converge to a minimum error of 0.77%, and the obtained result is correct when using the tool holder to track the path simulation line displayed in the graphic animation.
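
    Pivot calibration of this kind is commonly formulated as a linear least-squares problem over the tracked poses; the following sketch shows that standard formulation, not necessarily the authors' exact implementation:

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Solve for the tool-tip offset (in the marker frame) and the fixed
    pivot point from several tracked poses: R_i @ t_tip + p_i = p_pivot.
    Stacked as [R_i | -I] @ [t_tip; p_pivot] = -p_i and solved by
    linear least squares."""
    A = np.vstack([np.hstack([R, -np.eye(3)]) for R in rotations])
    b = np.concatenate([-np.asarray(p, dtype=float) for p in translations])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]  # tip offset in marker frame, pivot in tracker frame
```

    Three sufficiently different poses already make the 6-unknown system full rank, which is consistent with the three-image result reported above.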

  9. Point Cloud and Digital Surface Model Generation from High Resolution Multiple View Stereo Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Gong, K.; Fritsch, D.

    2018-05-01

    Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized public multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivated us to explore methods that can generate accurate digital surface models from a large number of high-resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a preprocessing step, we filter all possible image pairs according to incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply a median filter to generate the fused point cloud and DSM. By comparison with the reference LiDAR DSM, the accuracy, completeness and robustness are evaluated. The results show that the point cloud reconstructs the surface including small structures, and that the fused DSM generated by our pipeline is accurate and robust.

  10. Automated Coarse Registration of Point Clouds in 3D Urban Scenes Using Voxel-Based Plane Constraint

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Boerner, R.; Yao, W.; Hoegner, L.; Stilla, U.

    2017-09-01

    For obtaining full coverage of 3D scans in a large-scale urban area, registration between point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, the automatic registration of different scans is still a challenging problem. In this work, we propose an automatic, marker-free method for fast and coarse registration between point clouds using the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: the voxelization of the point cloud, the approximation of planar patches, the matching of corresponding patches, and the estimation of transformation parameters. In the voxelization step, the point cloud of each scan is organized with a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel with an approximated plane function, and select those patches resembling planar surfaces. Afterwards, a RANSAC-based strategy is applied for matching the corresponding patches. Among all the planar patches of a scan, we randomly select sets of three planar surfaces, in order to build a coordinate frame via their normal vectors and intersection points. The transformation parameters between scans are calculated from these two coordinate frames. The set whose transformation parameters yield the largest number of coplanar patches is identified as the optimal candidate for estimating the correct transformation parameters. The experimental results using TLS datasets of different scenes reveal that our proposed method is both effective and efficient for the coarse registration task. In particular, for the fast orientation between scans, our proposed method achieves a registration error of less than about 2 degrees on the test datasets, and is considerably more efficient than the classical baseline methods.
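    The per-voxel plane approximation described above can be sketched with a PCA-based fit; the planarity threshold and function name below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fit_patch_plane(points, planarity_thresh=0.01):
    """Fit a plane to one voxel's points via PCA. Returns (normal, centroid)
    if the patch is planar enough, else None. The smallest eigenvalue of the
    covariance measures out-of-plane variance; its share of the total variance
    serves as a planarity test (threshold is illustrative)."""
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)            # 3x3 covariance of the patch
    evals, evecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    if evals[0] / evals.sum() > planarity_thresh:
        return None                         # patch is not planar
    normal = evecs[:, 0]                    # direction of least variance
    return normal, c
```

    A scan's planar patches would then be collected by running this over every occupied voxel, and triples of their normals used to build the coordinate frames described above.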

  11. Considerations for achieving cross-platform point cloud data fusion across different dryland ecosystem structural states

    USDA-ARS?s Scientific Manuscript database

    Dryland ecosystems undergo long periods of senescence punctuated by rapid growth following seasonal precipitation events. Remote sensing of vegetation dynamics which capture new growth as well as herbivory and disturbance require both high spatial and temporal resolution data acquired by various op...

  12. A New Approach for Subway Tunnel Deformation Monitoring: High-Resolution Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Li, J.; Wan, Y.; Gao, X.

    2012-07-01

    With the improvement of the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can obtain highly precise and densely distributed point clouds, and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges and other fields. In this paper, a new approach using a point cloud segmentation method based on the vectors of neighboring points, and a surface fitting method based on moving least squares, was proposed and applied to subway tunnel deformation monitoring in Tianjin, combined with a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered by a linearized iterative least squares approach to improve the accuracy of registration, and several control points were acquired by total station (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on the vectors of neighboring points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least squares algorithm. A series of parallel sections obtained from a temporal series of fitted tunnel surfaces were then compared to analyze the deformation. Finally, the results of the approach in the z direction were compared with a fiber-optic displacement sensor approach, and the results in the x, y directions were compared with TS; the comparison showed that the accuracy errors in the x, y and z directions were about 1.5 mm, 2 mm and 1 mm respectively. Therefore, the new approach using high-resolution TLS can meet the demands of subway tunnel deformation monitoring.
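    The moving least squares surface fit used in the third step can be illustrated with a weighted plane fit around a query location; the Gaussian weight, bandwidth, and function name below are common choices assumed for illustration, not taken from the paper.

```python
import numpy as np

def mls_height(xy_query, pts, h=0.1):
    """Moving-least-squares estimate of surface height z at xy_query from
    scattered points pts of shape (N, 3), using a Gaussian weight of
    bandwidth h. Fits a weighted linear polynomial z = a + b*x + c*y."""
    d2 = ((pts[:, :2] - xy_query) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * h * h))               # weights decay with distance
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    AtW = A.T * w                               # weighted normal equations
    coef = np.linalg.solve(AtW @ A, AtW @ pts[:, 2])
    return coef[0] + coef[1] * xy_query[0] + coef[2] * xy_query[1]
```

    Evaluating this at many query locations over a tunnel cross-section yields the smooth fitted surface from which parallel sections can be extracted and compared between epochs.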

  13. Feature-constrained surface reconstruction approach for point cloud data acquired with 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai

    2008-04-01

    Surface reconstruction is an important task in the fields of 3D GIS, computer aided design and computer graphics (CAD & CG), virtual simulation and so on. Based on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. Firstly, features are extracted from the point cloud under the rules of curvature extremes and a minimum spanning tree. By projecting local sample points onto fitted tangent planes and using the extracted features to guide and constrain the process of local triangulation and surface propagation, the topological relationship among sample points can be established. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust the normal of each face so that a correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction methods; meanwhile, it avoids improper propagation of normals across sharp edges, which greatly improves the applicability of incremental surface reconstruction. Moreover, since an appropriate k-neighborhood helps to recognize insufficiently sampled areas and boundary parts, the presented approach can be used to reconstruct both open and closed surfaces without additional intervention.

  14. Vertical stratification of forest canopy for segmentation of understory trees within small-footprint airborne LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Hamraz, Hamid; Contreras, Marco A.; Zhang, Jun

    2017-08-01

    An airborne LiDAR point cloud representing a forest contains 3D data from which vertical stand structure, even of understory layers, can be derived. This paper presents a tree segmentation approach for multi-story stands that stratifies the point cloud into canopy layers and segments individual tree crowns within each layer using a digital surface model based tree segmentation method. The novelty of the approach is the stratification procedure, which separates the point cloud into an overstory and multiple understory tree canopy layers by analyzing vertical distributions of LiDAR points within overlapping locales. The procedure makes no a priori assumptions about the shape and size of the tree crowns and can, independent of the tree segmentation method, be utilized to vertically stratify tree crowns of forest canopies. We applied the proposed approach to the University of Kentucky Robinson Forest, a natural deciduous forest with complex and highly variable terrain and vegetation structure. The segmentation results showed that using the stratification procedure strongly improved the detection of understory trees (from 46% to 68%) at the cost of introducing a fair number of over-segmented understory trees (increased from 1% to 16%), while barely affecting the overall segmentation quality of overstory trees. The results of the vertical stratification of the canopy showed that the point density of the understory canopy layers was suboptimal for performing a reasonable tree segmentation, suggesting that acquiring denser LiDAR point clouds would allow further improvements in segmenting understory trees. As shown by inspecting correlations of the results with forest structure, the segmentation approach is applicable to a variety of forest types.
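    The stratification idea, splitting the vertical distribution of points within a locale wherever it shows a pronounced gap, can be sketched as follows; the gap threshold and function name are illustrative assumptions, and the paper's actual procedure is more elaborate.

```python
import numpy as np

def stratify_heights(z, min_gap=2.0):
    """Split the heights z of one locale's LiDAR points into canopy layers
    wherever the sorted heights show a vertical gap of at least min_gap
    metres. Returns a list of height arrays, one per layer (bottom to top)."""
    zs = np.sort(np.asarray(z))
    breaks = np.where(np.diff(zs) >= min_gap)[0]   # index just below each gap
    layers, start = [], 0
    for b in breaks:
        layers.append(zs[start:b + 1])
        start = b + 1
    layers.append(zs[start:])
    return layers
```

    Each returned layer would then be passed separately to the DSM-based crown segmentation described above.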

  15. Drogue tracking using 3D flash lidar for autonomous aerial refueling

    NASA Astrophysics Data System (ADS)

    Chen, Chao-I.; Stettner, Roger

    2011-06-01

    Autonomous aerial refueling (AAR) is an important capability for an unmanned aerial vehicle (UAV) to increase its flying range and endurance without increasing its size. This paper presents a novel tracking method that utilizes both 2D intensity and 3D point-cloud data acquired with a 3D Flash LIDAR sensor to establish relative position and orientation between the receiver vehicle and drogue during an aerial refueling process. Unlike classic, vision-based sensors, a 3D Flash LIDAR sensor can provide 3D point-cloud data in real time without motion blur, in the day or night, and is capable of imaging through fog and clouds. The proposed method segments out the drogue through 2D analysis and estimates the center of the drogue from 3D point-cloud data for flight trajectory determination. A level-set front propagation routine is first employed to identify the target of interest and establish its silhouette information. Sufficient domain knowledge, such as the size of the drogue and the expected operable distance, is integrated into our approach to quickly eliminate unlikely target candidates. A statistical analysis along with a random sample consensus (RANSAC) is performed on the target to reduce noise and estimate the center of the drogue after all 3D points on the drogue are identified. The estimated center and drogue silhouette serve as the seed points to efficiently locate the target in the next frame.
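    The RANSAC-style robust estimate of the drogue center can be sketched as below. The sampling scheme, radius parameter, and function name are our illustrative assumptions, not the authors' exact statistical procedure.

```python
import numpy as np

def ransac_center(pts, radius, n_iter=200, sample=4, rng=None):
    """RANSAC-style robust centre of a 3D point cluster: repeatedly average a
    random subset, score it by how many points fall within `radius` of that
    centre, and refit the centre from the best consensus set. Noise points far
    from the drogue are thereby excluded from the final estimate."""
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(n_iter):
        c = pts[rng.choice(len(pts), sample, replace=False)].mean(axis=0)
        inliers = np.linalg.norm(pts - c, axis=1) <= radius
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return pts[best_inliers].mean(axis=0)
```

    The estimated centre would serve as the seed for locating the drogue in the next frame, as described above.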

  16. Accuracy assessment of modeling architectural structures and details using terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Orych, A.; Czarnecka, P.

    2015-08-01

    One of the most important aspects when performing architectural documentation of cultural heritage structures is the accuracy of both the data and the products generated from these data: documentation in the form of 3D models or vector drawings. The paper describes an assessment of the accuracy of modelling data acquired using a terrestrial phase scanner in relation to the density of a point cloud representing the surface of different types of construction materials typical for cultural heritage structures. This analysis includes the impact of the scanning geometry: the incidence angle of the laser beam and the scanning distance. For the purposes of this research, a test field consisting of samples of different types of construction materials (brick, wood, plastic, plaster, ceramic tile, sheet metal) was built. The study involved conducting measurements at different angles and from a range of distances for chosen scanning densities. Data, acquired in the form of point clouds, were then filtered and modelled. An accuracy assessment of the 3D model was conducted by fitting it to the point cloud. The reflection intensity of each type of material was also analyzed, to determine which construction materials have the highest and which the lowest reflectance coefficients, and in turn how this variable changes for different scanning parameters. Additionally, measurements were taken of a fragment of a building in order to compare the results obtained in laboratory conditions with those taken in field conditions.
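    The incidence angle, one of the scanning-geometry variables analyzed above, is simply the angle between the laser beam and the surface normal at the scanned point; a minimal sketch (function name ours):

```python
import numpy as np

def incidence_angle(scanner_pos, point, normal):
    """Angle in degrees between the laser beam (scanner -> point) and the
    surface normal at the scanned point; 0 deg means a perpendicular hit."""
    beam = point - scanner_pos
    beam = beam / np.linalg.norm(beam)
    n = normal / np.linalg.norm(normal)
    cosang = np.clip(abs(beam @ n), 0.0, 1.0)   # sign of normal is irrelevant
    return np.degrees(np.arccos(cosang))
```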

  17. Comparison of 3D point clouds produced by LIDAR and UAV photoscan in the Rochefort cave (Belgium)

    NASA Astrophysics Data System (ADS)

    Watlet, Arnaud; Triantafyllou, Antoine; Kaufmann, Olivier; Le Mouelic, Stéphane

    2016-04-01

    Amongst today's techniques able to produce 3D point clouds, LIDAR and UAV (Unmanned Aerial Vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high-resolution and high-precision 3D point clouds, but such methods are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different kinds of environments. Moreover, the photogrammetric processing workflow for digital images taken with a UAV is becoming easier with the rise of many affordable software packages (e.g. Agisoft, PhotoModeler3D, VisualSFM). We present here a challenging study made at the Rochefort Cave Laboratory (South Belgium) comprising surface and underground surveys. The site is located in the Belgian Variscan fold-and-thrust belt, a region that shows many karstic networks within Devonian limestone units. A LIDAR scan was acquired in the main chamber of the cave (~ 15000 m³) to produce a 3D point cloud of its inner walls and infer geological beds and structures. Even if the use of a LIDAR instrument was not really comfortable in such a caving environment, the collected data showed remarkable precision when checked against a few control points. We also decided to perform another challenging survey of the same cave chamber by modelling a 3D point cloud using photogrammetry from a set of DSLR camera pictures taken from the ground and UAV pictures. The aim was to compare both techniques in terms of (i) implementation of data acquisition and processing, (ii) quality of the resulting 3D point clouds (point density, field vs. cloud recovery and point precision), and (iii) their application for geological purposes. Through the Rochefort case study, the main conclusions are that the LIDAR technique provides higher-density point clouds with slightly higher precision than the photogrammetry method. However, 3D data modeled by photogrammetry provide visible-light spectral information for each modeled voxel and interpolated vertex, which can be a useful attribute for clustering during data treatment. We illustrate such applications to the Rochefort cave by using both sources of 3D information to quantify the orientation of inaccessible geological structures (e.g. faults, tectonic and gravitational joints, and sediment bedding), cluster these structures using color information gathered from the UAV's 3D point cloud, and compare these data to structural data surveyed in the field. An additional drone photoscan was also conducted in the surface sinkhole giving access to the surveyed underground cavity, in order to seek connections between geological bodies.
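    A common way to compare two such clouds quantitatively is the cloud-to-cloud nearest-neighbour distance, sketched here with SciPy's k-d tree (an illustration; the abstract does not specify which comparison tool was used):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distance(ref, test):
    """For each point of the `test` cloud, the Euclidean distance to its
    nearest neighbour in the `ref` cloud; summary statistics of this array
    (mean, RMS, percentiles) characterize how closely the clouds agree."""
    d, _ = cKDTree(ref).query(test)
    return d
```

    Applied to the LIDAR cloud as reference and the photogrammetric cloud as test (or vice versa), the distribution of these distances quantifies the precision differences discussed above.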

  18. SigVox - A 3D feature matching algorithm for automatic street object recognition in mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Menenti, Massimo

    2017-06-01

    Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Their monitoring is traditionally conducted by visual inspection, which is time consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced and embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant-eigenvector-based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of a sphere-approximating icosahedron. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds obtained in opposite directions along a 4 km stretch of road. Six types of lamp pole and four types of road sign were selected as objects of interest. Ground truth validation showed that the overall accuracy of the ∼170 automatically recognized objects is approximately 95%. The results demonstrate that the proposed method is able to recognize street furniture in a practical scenario. Remaining difficult cases are touching objects, such as a lamp pole close to a tree.
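    The per-voxel PCA step of the SigVox descriptor, keeping only "significant" eigenvectors, can be sketched as follows; the significance threshold and function name are illustrative assumptions, and the mapping onto the icosahedron is omitted.

```python
import numpy as np

def voxel_significant_eigenvectors(pts, eval_ratio=0.05):
    """Principal directions of one voxel's points, keeping only eigenvectors
    whose eigenvalue exceeds eval_ratio of the total variance. For an
    elongated pole-like voxel this yields one dominant direction; for a flat
    sign panel, two. Returns (eigenvectors as columns, eigenvalues)."""
    c = pts - pts.mean(axis=0)
    evals, evecs = np.linalg.eigh(c.T @ c / len(pts))   # ascending order
    keep = evals / evals.sum() > eval_ratio
    return evecs[:, keep], evals[keep]
```

    In the full SigVox workflow, each retained eigenvector would then be binned to a triangle of the sphere-approximating icosahedron, and the resulting occupancy pattern compared between candidate clusters and training objects.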

  19. Combining 3D Volume and Mesh Models for Representing Complicated Heritage Buildings

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Chang, H.; Lin, Y.-W.

    2017-08-01

    This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove the points it encloses from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries, integrating the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and to point clouds of a local historical structure. Preliminary results indicated that hybrid models reconstructed using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets at different levels of detail according to user and system requirements in different applications.

  20. Supervised Outlier Detection in Large-Scale MVS Point Clouds for 3D City Modeling Applications

    NASA Astrophysics Data System (ADS)

    Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.

    2018-05-01

    We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are the varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results: although most outliers can be identified correctly (high recall), many inliers are erroneously removed (low precision), too. This aggravates 3D object reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud, and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. In the second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roofs, ground, vegetation, fields, and water). The performance of learned filtering is evaluated on several large SfM point clouds of cities. The results confirm our underlying assumption that discriminatively learning inlier-outlier distributions improves precision over global heuristics by up to ≈ 12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈ 10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
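    A minimal sketch of the non-semantic variant, with two hypothetical per-point features (local density and nearest-neighbour distance) standing in for the paper's feature set; the synthetic data and parameter choices are ours, for illustration only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic per-point features: inliers are dense with close neighbours,
# outliers sparse with distant neighbours (hypothetical, well separated).
rng = np.random.default_rng(1)
inlier_feats = rng.normal([10.0, 0.1], [1.0, 0.02], size=(200, 2))
outlier_feats = rng.normal([2.0, 0.8], [1.0, 0.2], size=(200, 2))
X = np.vstack([inlier_feats, outlier_feats])
y = np.r_[np.ones(200, dtype=int), np.zeros(200, dtype=int)]  # 1 = inlier

# A standard Random Forest infers the binary inlier/outlier label per point.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict(X)
```

    The semantic variant would simply train one such classifier per semantic class instead of a single global one.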

  1. Low cost sensing of vegetation volume and structure with a Microsoft Kinect sensor

    NASA Astrophysics Data System (ADS)

    Azzari, G.; Goulden, M.

    2011-12-01

    The market for videogames and digital entertainment has decreased the cost of advanced technology to affordable levels. The Microsoft Kinect sensor for Xbox 360 is an infrared time of flight camera designed to track body position and movement at a single-articulation level. Using open source drivers and libraries, we acquired point clouds of vegetation directly from the Kinect sensor. The data were filtered for outliers, co-registered, and cropped to isolate the plant of interest from the surroundings and soil. The volume of single plants was then estimated with several techniques, including fitting with solid shapes (cylinders, spheres, boxes), voxel counts, and 3D convex/concave hulls. Preliminary results are presented here. The volume of a series of wild artichoke plants was measured from nadir using a Kinect on a 3m-tall tower. The calculated volumes were compared with harvested biomass; comparisons and derived allometric relations will be presented, along with examples of the acquired point clouds. This Kinect sensor shows promise for ground-based, automated, biomass measurement systems, and possibly for comparison/validation of remotely sensed LIDAR.
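    One of the volume estimators mentioned above, the 3D convex hull, is essentially a one-liner with SciPy (a sketch under the assumption that the cloud has already been filtered and cropped to a single plant):

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_volume(pts):
    """Convex-hull volume of a plant point cloud of shape (N, 3); one of the
    single-plant volume estimators compared against harvested biomass."""
    return ConvexHull(pts).volume
```

    Voxel-count and solid-shape-fitting estimators would typically over- or under-estimate in different ways, which is why the study compares several of them against harvested biomass.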

  2. Accuracy Analysis of a Dam Model from Drone Surveys

    PubMed Central

    Buffi, Giulia; Venturi, Sara

    2017-01-01

    This paper investigates the accuracy of models obtained by drone surveys. To this end, this work analyzes how the placement of ground control points (GCPs) used to georeference the dense point cloud of a dam affects the resulting three-dimensional (3D) model. Images of a double arch masonry dam upstream face are acquired from drone survey and used to build the 3D model of the dam for vulnerability analysis purposes. However, there still remained the issue of understanding the real impact of a correct GCPs location choice to properly georeference the images and thus, the model. To this end, a high number of GCPs configurations were investigated, building a series of dense point clouds. The accuracy of these resulting dense clouds was estimated comparing the coordinates of check points extracted from the model and their true coordinates measured via traditional topography. The paper aims at providing information about the optimal choice of GCPs placement not only for dams but also for all surveys of high-rise structures. The knowledge a priori of the effect of the GCPs number and location on the model accuracy can increase survey reliability and accuracy and speed up the survey set-up operations. PMID:28771185

  3. Accuracy Analysis of a Dam Model from Drone Surveys.

    PubMed

    Ridolfi, Elena; Buffi, Giulia; Venturi, Sara; Manciola, Piergiorgio

    2017-08-03

    This paper investigates the accuracy of models obtained by drone surveys. To this end, this work analyzes how the placement of ground control points (GCPs) used to georeference the dense point cloud of a dam affects the resulting three-dimensional (3D) model. Images of a double arch masonry dam upstream face are acquired from drone survey and used to build the 3D model of the dam for vulnerability analysis purposes. However, there still remained the issue of understanding the real impact of a correct GCPs location choice to properly georeference the images and thus, the model. To this end, a high number of GCPs configurations were investigated, building a series of dense point clouds. The accuracy of these resulting dense clouds was estimated comparing the coordinates of check points extracted from the model and their true coordinates measured via traditional topography. The paper aims at providing information about the optimal choice of GCPs placement not only for dams but also for all surveys of high-rise structures. The knowledge a priori of the effect of the GCPs number and location on the model accuracy can increase survey reliability and accuracy and speed up the survey set-up operations.
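    The accuracy measure described, comparing check-point coordinates extracted from the georeferenced model against their surveyed coordinates, reduces to a root-mean-square error over 3D distances; a minimal sketch (function name ours):

```python
import numpy as np

def checkpoint_rmse(model_xyz, true_xyz):
    """Root-mean-square 3D error between check-point coordinates extracted
    from the model and the 'true' coordinates measured by traditional
    topography; both arrays have shape (N, 3)."""
    d = np.linalg.norm(model_xyz - true_xyz, axis=1)
    return np.sqrt((d ** 2).mean())
```

    Evaluating this for each GCP configuration yields the per-configuration accuracy figures from which the optimal GCP placement is inferred.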

  4. Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets

    NASA Astrophysics Data System (ADS)

    Gold, P. O.; Cowgill, E.; Kreylos, O.

    2009-12-01

    Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaiced LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is essential for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. 
To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point cloud, we scan from multiple locations an object of known geometry (a cylinder mounted above a square box). Preliminary results show that even in a controlled experimental scan of an object of known dimensions, there is significant variability in the precision of the registered point cloud. For example, when 3 scans of the central object are registered using 4 known points (maximum time, maximum equipment), the point clouds align to within ~1 cm (normal to the object surface). However, when the same point clouds are registered with only 1 known point (minimum time, minimum equipment), misalignment of the point clouds can range from 2.5 to 5 cm, depending on target type. The greater misalignment of the 3 point clouds when registered with fewer known points stems from the field method employed in acquiring the dataset and demonstrates the impact of field workflow on LiDAR dataset precision. By quantifying the degree of scan mismatch in results such as this, we can provide users with the information needed to maximize efficiency in remote field surveys.

  5. Assessment of Photogrammetry Structure-from-Motion Compared to Terrestrial LiDAR Scanning for Generating Digital Elevation Models. Application to the Austre Lovénbreen Polar Glacier Basin, Spitsbergen 79°N

    NASA Astrophysics Data System (ADS)

    Tolle, F.; Friedt, J. M.; Bernard, É.; Prokop, A.; Griselin, M.

    2014-12-01

    Digital Elevation Models (DEMs) are a key tool for analyzing spatially dependent processes such as snow accumulation on slopes or glacier mass balance. Acquiring DEMs at short time intervals provides new opportunities to evaluate such phenomena at daily to seasonal rates. DEMs are usually generated from satellite imagery, aerial photography, airborne and ground-based LiDAR, and GPS surveys. In addition to these classical methods, we consider another alternative for periodic DEM acquisition with lower logistics requirements: digital processing of ground-based, oblique-view digital photography. Such a dataset, acquired using commercial off-the-shelf cameras, provides the source for generating elevation models using Structure from Motion (SfM) algorithms. Sets of pictures of the same structure, taken from various points of view, are acquired. Selected features are identified across the images and allow for the reconstruction of the three-dimensional (3D) point cloud after computing the camera positions and optical properties. This point cloud, generated in an arbitrary coordinate system, is converted to an absolute coordinate system either by adding Ground Control Point (GCP) constraints or by including the GPS positions of the cameras in the processing chain. We selected the open-source processing library Micmac, provided by the French Geographic Institute (IGN), for its fine processing granularity and the ability to assess the quality of each processing step. Although operating in snow-covered environments appears challenging due to the lack of relevant features, we observed that enough reference points could be identified for 3D reconstruction. While the harsh climatic environment of the Arctic region considered (Ny Alesund area, 79°N) is not a problem for SfM itself, the low-lying spring sun and the shadows it casts are a limitation because of the limited color dynamics of the digital cameras we used.
    A detailed understanding of the processing steps is essential during the image acquisition phase: compliance with acquisition rules that reduce digital processing errors helps minimize the uncertainty of the point cloud's absolute position in its coordinate system. 3D models from SfM are compared with terrestrial LiDAR acquisitions for resolution assessment.
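
The GCP-based conversion from the arbitrary SfM frame to an absolute coordinate system described above amounts to estimating a seven-parameter similarity transform (scale, rotation, translation). A minimal numpy sketch using the closed-form Umeyama solution; the GCP coordinates below are invented for illustration and are not from the study:

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form (Umeyama) estimate of scale s, rotation R, translation t
    such that dst ~= s * R @ src + t, from matched GCP pairs."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / ((src_c ** 2).sum() / len(src))
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: four invented "GCPs", known transform, exact recovery.
rng = np.random.default_rng(0)
gcp_local = rng.random((4, 3))                       # SfM (arbitrary) frame
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
gcp_world = 2.5 * gcp_local @ R_true.T + np.array([100.0, 200.0, 50.0])
s, R, t = similarity_transform(gcp_local, gcp_world)
cloud_world = s * gcp_local @ R.T + t                # apply to the whole cloud
```

With noiseless control points the known transform is recovered exactly; with real GCPs the same formula gives the least-squares fit.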

  6. Control methods for merging ALSM and ground-based laser point clouds acquired under forest canopies

    NASA Astrophysics Data System (ADS)

    Slatton, Kenneth C.; Coleman, Matt; Carter, William E.; Shrestha, Ramesh L.; Sartori, Michael

    2004-12-01

    Merging of point data acquired from ground-based and airborne scanning laser rangers has been demonstrated for cases in which a common set of targets can be readily located in both data sets. However, direct merging of point data was not generally possible if the two data sets did not share common targets. This is often the case for ranging measurements acquired in forest canopies, where airborne systems image the canopy crowns well, but receive a relatively sparse set of points from the ground and understory. Conversely, ground-based scans of the understory do not generally sample the upper canopy. An experiment was conducted to establish a viable procedure for acquiring and georeferencing laser ranging data underneath a forest canopy. Once georeferenced, the ground-based data points can be merged with airborne points even in cases where no natural targets are common to both data sets. Two ground-based laser scans are merged and georeferenced with a final absolute error in the target locations of less than 10 cm. This is comparable to the accuracy of the georeferenced airborne data. Thus, merging of the georeferenced ground-based and airborne data should be feasible. The motivation for this investigation is to facilitate a thorough characterization of airborne laser ranging phenomenology over forested terrain as a function of vertical location in the canopy.

  7. Point Cloud Refinement with a Target-Free Intrinsic Calibration of a Mobile Multi-Beam LIDAR System

    NASA Astrophysics Data System (ADS)

    Nouira, H.; Deschaud, J. E.; Goulette, F.

    2016-06-01

    LIDAR sensors are widely used in mobile mapping systems. Mobile mapping platforms allow fast acquisition, in cities for example, which would take much longer with static mapping systems. LIDAR sensors provide reliable and precise 3D information that can be used in various applications: mapping of the environment, localization of objects, detection of changes. With recent developments, multi-beam LIDAR sensors have appeared, able to provide a large amount of data with a high level of detail. A mono-beam LIDAR sensor mounted on a mobile platform requires an extrinsic calibration, so that data acquired and registered in the sensor reference frame can be represented in the body reference frame modeling the mobile system. For a multi-beam LIDAR sensor, the calibration can be separated into two distinct parts: on one hand, an extrinsic calibration, in common with mono-beam LIDAR sensors, which gives the transformation between the sensor Cartesian reference frame and the body reference frame; on the other hand, an intrinsic calibration, which gives the relations between the beams of the multi-beam sensor. This calibration depends on a model given by the manufacturer, but the model can be non-optimal, which introduces errors and noise into the acquired point clouds. In the literature, some optimizations of the calibration parameters have been proposed, but they need a specific routine or environment, which can be constraining and time-consuming. In this article, we present an automatic method for improving the intrinsic calibration of a multi-beam LIDAR sensor, the Velodyne HDL-32E. The proposed approach does not need any calibration target and only uses information from the acquired point clouds, which makes it simple and fast to use. A corrected model for the Velodyne sensor is also proposed.
    An energy function that penalizes points far from local planar surfaces is used to optimize the proposed parameters of the corrected model, and we are able to give a confidence value for the calibration parameters found. Optimization results on both synthetic and real data are presented.
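
The idea behind such a target-free energy can be sketched as follows: a miscalibrated multi-beam sensor "thickens" planar surfaces, so summing each point's residual variance normal to a locally fitted plane yields a quantity that a calibration optimizer can minimize. A small numpy illustration (brute-force neighbour search, synthetic data; the paper optimizes an energy of this kind over the calibration parameters rather than merely evaluating it):

```python
import numpy as np

def planarity_energy(points, k=10):
    """Sum over points of the smallest eigenvalue of the covariance of each
    point's k nearest neighbours, i.e. the residual variance normal to the
    locally fitted plane (brute-force neighbour search for brevity)."""
    energy = 0.0
    for p in points:
        d2 = ((points - p) ** 2).sum(axis=1)
        nbrs = points[np.argsort(d2)[:k]]
        c = nbrs - nbrs.mean(axis=0)
        energy += np.linalg.eigvalsh(c.T @ c / k)[0]  # eigenvalues ascending
    return energy

# A perfectly planar "wall" has near-zero energy; miscalibration-like noise
# thickens the wall and raises the energy.
rng = np.random.default_rng(1)
flat = np.c_[rng.random((50, 2)), np.zeros(50)]
noisy = flat + rng.normal(scale=0.05, size=flat.shape)
e_flat, e_noisy = planarity_energy(flat), planarity_energy(noisy)
```

Minimizing this energy over the per-beam correction parameters drives the beams back into agreement on planar structures.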

  8. Learned Compact Local Feature Descriptor for TLS-Based Geodetic Monitoring of Natural Outdoor Scenes

    NASA Astrophysics Data System (ADS)

    Gojcic, Z.; Zhou, C.; Wieser, A.

    2018-05-01

    The advantages of terrestrial laser scanning (TLS) for geodetic monitoring of man-made and natural objects are not yet fully exploited. Herein we address one of the open challenges by proposing feature-based methods for the identification of corresponding points in point clouds of two or more epochs. We propose a learned compact feature descriptor tailored for point clouds of natural outdoor scenes obtained using TLS. We evaluate our method both on a benchmark dataset and on a specially acquired outdoor dataset resembling a simplified monitoring scenario, in which we successfully estimate the 3D displacement vectors of a rock that was displaced between the scans. We show that the proposed descriptor has the capacity to generalize to unseen data and achieves state-of-the-art performance while being time-efficient at the matching step due to its low dimensionality.

  9. Feature Relevance Assessment of Multispectral Airborne LIDAR Data for Tree Species Classification

    NASA Astrophysics Data System (ADS)

    Amiri, N.; Heurich, M.; Krzystek, P.; Skidmore, A. K.

    2018-04-01

    The presented experiment investigates the potential of multispectral laser scanning (MLS) point clouds for single tree species classification. The basic idea is to simulate an MLS sensor by combining two different lidar sensors providing three different wavelengths. The available data were acquired in summer 2016 on the same date in leaf-on condition, with an average point density of 37 points/m². For the purpose of classification, we segmented the combined 3D point clouds, consisting of three different spectral channels, into 3D clusters using a normalized cut segmentation approach. Then, we extracted four groups of features from the 3D point cloud space. Once a variety of features had been extracted, we applied forward stepwise feature selection in order to reduce the number of irrelevant or redundant features. For the classification, we used multinomial logistic regression with L1 regularization. Our study is conducted using 586 ground-measured single trees from 20 sample plots in the Bavarian Forest National Park, Germany. Due to a lack of reference data for some rare species, we focused on four species classes. The results show an improvement of 4-10 percentage points in tree species classification when using MLS data in comparison to a single-wavelength approach. A cross-validated (15-fold) accuracy of 0.75 can be achieved when all feature sets from the three spectral channels are used. Our results clearly indicate that the use of MLS point clouds has great potential to improve detailed forest species mapping.
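
Forward stepwise selection greedily adds whichever feature most improves a classification score until no candidate helps. A minimal numpy sketch under stated assumptions: a nearest-centroid training score stands in for the paper's cross-validated L1-regularized logistic regression, and the data are synthetic:

```python
import numpy as np

def nearest_centroid_acc(X, y):
    """Training accuracy of a nearest-centroid classifier -- a simple stand-in
    for the paper's cross-validated logistic-regression score."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    d = ((X[:, None, :] - centroids[None]) ** 2).sum(axis=2)
    return float((classes[d.argmin(axis=1)] == y).mean())

def forward_stepwise(X, y, max_feats=3):
    """Greedily add the feature that most improves the score; stop when no
    remaining feature helps."""
    selected, remaining, best = [], list(range(X.shape[1])), 0.0
    while remaining and len(selected) < max_feats:
        score, j = max((nearest_centroid_acc(X[:, selected + [j]], y), j)
                       for j in remaining)
        if score <= best:
            break
        selected.append(j)
        remaining.remove(j)
        best = score
    return selected

# Synthetic data: features 0 and 2 are informative, the rest are noise.
rng = np.random.default_rng(2)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 5))
X[:, 0] += 3 * y
X[:, 2] -= 3 * y
chosen = forward_stepwise(X, y)
```

The wrapper picks the informative features first and ignores noise dimensions; swapping in a cross-validated classifier score is a one-line change.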

  10. Analysis, Thematic Maps and Data Mining from Point Cloud to Ontology for Software Development

    NASA Astrophysics Data System (ADS)

    Nespeca, R.; De Luca, L.

    2016-06-01

    The primary purpose of the survey for the restoration of Cultural Heritage is the interpretation of the state of building preservation. To this end, the advantages of remote sensing systems that generate dense point clouds (range-based or image-based) are not limited to the acquired data alone. The paper shows that it is possible to extract very useful diagnostic information using spatial annotation, with algorithms already implemented in open-source software. Generally, the drawing of degradation maps is the result of manual work, and so depends on the subjectivity of the operator. This paper describes a method of extraction and visualization of information obtained by mathematical procedures that are quantitative, repeatable and verifiable. The case study is a part of the east facade of the Eglise collégiale Saint-Maurice, also called Notre Dame des Grâces, in Caromb, southern France. The work was conducted on the matrix of information contained in the point cloud in ASCII format. The first result is the extraction of new geometric descriptors. First, we create digital maps of the calculated quantities. Subsequently, we move to semi-quantitative analyses that transform the new data into useful information. We have written algorithms for accurate selection, for the segmentation of the point cloud, and for automatic calculation of the real surface area and the volume. Furthermore, we have created graphs of the spatial distribution of the descriptors. This work shows that, by intervening at the data-processing stage, we can transform the point cloud into an enriched database whose use, management and mining are easy, fast and effective for everyone involved in the restoration process.

  11. Combining structure-from-motion derived point clouds from satellites and unmanned aircraft systems images with ground-truth data to create high-resolution digital elevation models

    NASA Astrophysics Data System (ADS)

    Palaseanu, M.; Thatcher, C.; Danielson, J.; Gesch, D. B.; Poppenga, S.; Kottermair, M.; Jalandoni, A.; Carlson, E.

    2016-12-01

    Coastal topographic and bathymetric (topobathymetric) data with high spatial resolution (1-meter or better) and high vertical accuracy are needed to assess the vulnerability of Pacific Islands to climate change impacts, including sea level rise. According to the Intergovernmental Panel on Climate Change reports, low-lying atolls in the Pacific Ocean are extremely vulnerable to king tide events, storm surge, tsunamis, and sea-level rise. The lack of coastal topobathymetric data has been identified as a critical data gap for climate vulnerability and adaptation efforts in the Republic of the Marshall Islands (RMI). For Majuro Atoll, home to the largest city of the RMI, the only elevation dataset currently available is the Shuttle Radar Topography Mission data, which has a 30-meter spatial resolution and 16-meter vertical accuracy (expressed as linear error at 90%). To generate high-resolution digital elevation models (DEMs) in the RMI, elevation information and photographic imagery have been collected from field surveys using GNSS/total station and unmanned aerial vehicles for Structure-from-Motion (SfM) point cloud generation. DigitalGlobe WorldView-2 imagery was processed to create SfM point clouds to fill gaps in the point cloud derived from the higher-resolution UAS photos. The combined point cloud data are filtered, classified to bare earth, and georeferenced using the GNSS data acquired on roads and along survey transects perpendicular to the coast. A total station was used to collect elevation data under tree canopies where heavy vegetation cover blocked the view of GNSS satellites. A subset of the GPS/total station data was set aside for error assessment of the resulting DEM.

  12. Min-Cut Based Segmentation of Airborne LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.

    2012-07-01

    Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating points with similar features into 3-D segments that comply with the nature of actual objects is affected by the neighborhood, scale, features and noise, among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within each point's local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing, used especially in pixel labeling problems, and establish it for unstructured 3-D point clouds. The edges of the graph connecting the points with each other and with nodes representing feature clusters hold the smoothness costs in the spatial domain and the data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem, and allows an approximate solution to this NP-hard minimization problem by min-cuts, reaching a strong minimum in low-order polynomial time. We test our method on an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm (RMSE). We present the effects of neighborhood and feature determination on the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm as well as its sensitivity to the parameters of the smoothness and data cost functions.
    We find that a smoothness cost considering only a simple distance parameter does not conform well to the natural structure of the points. Including shape information in the energy function, by assigning costs based on local properties, may help achieve a better representation for segmentation.
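
The graph construction above can be illustrated on a toy binary problem: a 1-D strip of points with a scalar feature, two feature clusters acting as source and sink terminals, data costs on the terminal edges, and a constant smoothness cost between spatial neighbours. The sketch below solves it exactly with a plain Edmonds-Karp max-flow; the paper's multi-cluster setting would instead use approximate moves such as alpha-expansion, and all numbers here are invented:

```python
from collections import deque

def add_edge(cap, u, v, c):
    """Add capacity c on edge u->v and make sure the reverse key exists."""
    cap.setdefault(u, {})
    cap.setdefault(v, {})
    cap[u][v] = cap[u].get(v, 0.0) + c
    cap[v].setdefault(u, 0.0)

def min_cut_source_side(cap, s, t):
    """Edmonds-Karp max-flow; returns the nodes on the source side of the
    minimum cut (which is the graph-cut labeling)."""
    flow = {}
    res = lambda u, v: cap[u].get(v, 0.0) - flow.get((u, v), 0.0)
    def bfs():
        parent, q = {s: None}, deque([s])
        while q:
            u = q.popleft()
            for v in cap[u]:
                if v not in parent and res(u, v) > 1e-9:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None
    while (parent := bfs()) is not None:
        b, v = float("inf"), t
        while parent[v] is not None:          # bottleneck along the path
            b, v = min(b, res(parent[v], v)), parent[v]
        v = t
        while parent[v] is not None:          # augment along the path
            u = parent[v]
            flow[(u, v)] = flow.get((u, v), 0.0) + b
            flow[(v, u)] = flow.get((v, u), 0.0) - b
            v = u
    side, q = {s}, deque([s])                 # residual reachability = cut side
    while q:
        u = q.popleft()
        for v in cap[u]:
            if v not in side and res(u, v) > 1e-9:
                side.add(v)
                q.append(v)
    return side

# Strip of points; feature clusters c0=0 and c1=10; index 4 is a noisy point.
f = [0, 0, 0, 10, 0, 10, 10, 10]
lam = 60.0                                    # smoothness weight
cap = {}
for i, fi in enumerate(f):
    add_edge(cap, "s", i, (fi - 10.0) ** 2)   # data cost of label c1 (paid on sink side)
    add_edge(cap, i, "t", fi ** 2)            # data cost of label c0 (paid on source side)
for i in range(len(f) - 1):                   # spatial coherence terms
    add_edge(cap, i, i + 1, lam)
    add_edge(cap, i + 1, i, lam)
side = min_cut_source_side(cap, "s", "t")
labels = [0 if i in side else 1 for i in range(len(f))]
```

With the smoothness weight above, the isolated noisy point at index 4 is absorbed into the surrounding c1 segment, which is exactly the spatial-coherence effect the data/smoothness trade-off is designed to produce.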

  13. Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs

    NASA Astrophysics Data System (ADS)

    Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.

    2016-06-01

    Recent advances in the automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments, such as underwater, utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single-medium cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated using images acquired from a system camera mounted in an underwater housing and from the popular GoPro cameras, respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in the air and underwater and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.

  14. Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds

    NASA Astrophysics Data System (ADS)

    Roynard, X.; Deschaud, J.-E.; Goulette, F.

    2016-06-01

    Change detection is an important issue in city monitoring, for analysing street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify changes in car locations. In this paper, we propose a method that performs fast and robust segmentation and classification of urban point clouds and can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds, using elevation images. The appeal of working on images is that processing is much faster and relies on proven, robust techniques. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region-growing using an octree for the segmentation, and on specific descriptors with a Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art, and that it gives more robust results in complex 3D cases.
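
The octree region-growing step can be approximated with a simpler voxel-grid flood fill: group points whose occupied voxels are 26-connected. This is an illustrative stand-in on synthetic data, not the paper's implementation:

```python
import numpy as np
from collections import deque

def voxel_segments(points, size=0.5):
    """Group points whose occupied voxels are 26-connected: a voxel-grid
    flood fill standing in for octree-based region growing."""
    keys = np.floor(points / size).astype(int)
    occupied = {}
    for i, k in enumerate(map(tuple, keys)):
        occupied.setdefault(k, []).append(i)
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    seen, segments = set(), []
    for start in occupied:
        if start in seen:
            continue
        seen.add(start)
        queue, comp = deque([start]), []
        while queue:                       # breadth-first voxel flood fill
            v = queue.popleft()
            comp.extend(occupied[v])
            for dx, dy, dz in offsets:
                n = (v[0] + dx, v[1] + dy, v[2] + dz)
                if n in occupied and n not in seen:
                    seen.add(n)
                    queue.append(n)
        segments.append(sorted(comp))
    return segments

# Two well-separated synthetic blobs should come out as two segments.
rng = np.random.default_rng(7)
a = rng.uniform(0, 1, (100, 3))
b = rng.uniform(0, 1, (100, 3)) + 5.0
segs = voxel_segments(np.vstack([a, b]))
```

An octree replaces the flat hash map when the cloud is too large for a uniform grid, but the connectivity logic stays the same.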

  15. Automated interpretation of 3D laserscanned point clouds for plant organ segmentation.

    PubMed

    Wahabzada, Mirwaes; Paulus, Stefan; Kersting, Kristian; Mahlein, Anne-Katrin

    2015-08-08

    Plant organ segmentation from 3D point clouds is a relevant task for plant phenotyping and plant growth observation. Automated solutions are required to increase the efficiency of recent high-throughput plant phenotyping pipelines. However, plant geometrical properties vary with time, among observation scales and between plant types. The main objective of the present research is to develop a fully automated, fast and reliable data-driven approach for plant organ segmentation. The automated segmentation of plant organs using unsupervised clustering methods is crucial in cases where the goal is to get fast insights into the data, or where no labeled data is available or labeling is costly. For this we propose and compare data-driven approaches that are easy to realize and make the use of standard algorithms possible. Since normalized histograms, acquired from 3D point clouds, can be seen as samples from a probability simplex, we propose to map the data from the simplex space into Euclidean space using Aitchison's log-ratio transformation, or into the positive quadrant of the unit sphere using the square-root transformation. This, in turn, paves the way to a wide range of commonly used analysis techniques that are based on measuring the similarities between data points using Euclidean distance. We investigate the performance of the resulting approaches in the practical context of grouping 3D point clouds and demonstrate empirically that they lead to clustering results with high accuracy for monocotyledonous and dicotyledonous plant species with diverse shoot architecture. An automated segmentation of 3D point clouds is demonstrated in the present work. Within seconds, first insights into plant data can be derived, even from non-labelled data. This approach is applicable to different plant species with high accuracy.
The analysis cascade can be implemented in future high-throughput phenotyping scenarios and will support the evaluation of the performance of different plant genotypes exposed to stress or in different environmental scenarios.
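
The square-root transformation mentioned above maps a normalized histogram h to sqrt(h), a unit vector in the positive quadrant of the sphere, after which standard Euclidean tools such as k-means apply directly. A self-contained numpy sketch on synthetic histograms (the Dirichlet parameters are invented for illustration):

```python
import numpy as np

def sqrt_transform(hists):
    """Map normalized histograms (points on the probability simplex) onto the
    positive quadrant of the unit sphere, where Euclidean distance is sound."""
    return np.sqrt(hists)

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd k-means in Euclidean space."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.stack([X[lab == j].mean(0) if (lab == j).any() else centers[j]
                            for j in range(k)])
    return lab

# Two synthetic histogram families standing in for per-organ feature histograms.
rng = np.random.default_rng(3)
h1 = rng.dirichlet([12, 1, 1], 30)   # mass concentrated in bin 0
h2 = rng.dirichlet([1, 1, 12], 30)   # mass concentrated in bin 2
labels = kmeans(sqrt_transform(np.vstack([h1, h2])), 2)
```

Aitchison's log-ratio transform would serve the same purpose (mapping the simplex into Euclidean space) at the cost of handling zero bins.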

  16. Pose estimation and tracking of non-cooperative rocket bodies using Time-of-Flight cameras

    NASA Astrophysics Data System (ADS)

    Gómez Martínez, Harvey; Giorgi, Gabriele; Eissfeller, Bernd

    2017-10-01

    This paper presents a methodology for estimating the position and orientation of a rocket body in orbit - the target - undergoing a roto-translational motion, with respect to a chaser spacecraft, whose task is to match the target dynamics for a safe rendezvous. During the rendezvous maneuver the chaser employs a Time-of-Flight camera that acquires a point cloud of 3D coordinates mapping the sensed target surface. Once the system identifies the target, it initializes the chaser-to-target relative position and orientation. After initialization, a tracking procedure enables the system to sense the evolution of the target's pose between frames. The proposed algorithm is evaluated using simulated point clouds, generated with a CAD model of the Cosmos-3M upper stage and the PMD CamCube 3.0 camera specifications.

  17. Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area

    NASA Astrophysics Data System (ADS)

    Min, Li; Xin, Yang; Liyang, Xiong

    2016-06-01

    The shoulder line is a significant feature in the hilly area of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because point cloud vegetation-removal methods differ between the two terrain types, shoulder line extraction is an imperative prerequisite. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most noisy points; (ii) based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the natural-breaks classification method; (iii) the common boundary between the two classes is extracted as the shoulder line candidate; (iv) the filter grid size is adjusted and steps i-iii repeated until the shoulder line candidate matches its real location; (v) the shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County, Shaanxi Province, China. A total of 600 million points were acquired over the 0.23 km² test area using a Riegl VZ400 3D laser scanner in August 2014. Due to limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for filter grid size optimization. The experimental results show that the optimal filter grid size varies across sample areas, and that a power-function relation exists between filter grid size and point density. The optimal grid size was determined from this relation and the shoulder lines of the 60 blocks were then extracted. Compared with manual interpretation results, the overall accuracy reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
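
Step (i), the grid filter for ground-point selection, is commonly realized by keeping the lowest return per XY cell; whether the authors use exactly this variant is not stated, so the sketch below is an assumption-laden illustration on synthetic data:

```python
import numpy as np

def grid_lowest(points, cell):
    """Keep only the lowest point in each XY grid cell -- a basic grid filter
    that discards most vegetation returns before DEM interpolation."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    keep = {}
    for idx, key in enumerate(map(tuple, ij)):
        if key not in keep or points[idx, 2] < points[keep[key], 2]:
            keep[key] = idx
    return points[sorted(keep.values())]

# Synthetic 10 m x 10 m tile: flat ground at z=0 under scattered vegetation.
rng = np.random.default_rng(4)
ground = np.c_[rng.uniform(0, 10, (500, 2)), np.zeros(500)]
veg = np.c_[rng.uniform(0, 10, (100, 2)), rng.uniform(1, 5, 100)]
cloud = np.vstack([ground, veg])
rng.shuffle(cloud)
filtered = grid_lowest(cloud, cell=1.0)
```

Shrinking or enlarging `cell` is exactly the grid-size tuning loop of steps (i)-(iv): too small a cell lets vegetation through, too large a cell flattens the terrain.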

  18. 3D change detection in staggered voxels model for robotic sensing and navigation

    NASA Astrophysics Data System (ADS)

    Liu, Ruixu; Hampshire, Brandon; Asari, Vijayan K.

    2016-05-01

    3D scene change detection is a challenging problem in robotic sensing and navigation, with several unpredictable aspects. A change detection method that can support various applications under varying environmental conditions is proposed. Point cloud models are acquired from an RGB-D sensor, which provides the required color and depth information. Change detection is performed on the robot-view point cloud model. A bilateral filter smooths the surface and fills holes while preserving edge details in the depth image. Registration of the point cloud model is implemented using the Random Sample Consensus (RANSAC) algorithm, with surface normals used in a preliminary stage to estimate the ground and walls. After preprocessing the data, we create a point voxel model that marks each voxel as surface or free space, and a color model that assigns each occupied voxel the mean color of the points it contains. Preliminary changes are detected by an XOR difference of the point voxel models. Next, the eight neighbors of each center voxel are examined: if they are neither all 'changed' voxels nor all 'unchanged' voxels, a histogram of location and hue-channel color is estimated. The experimental evaluations performed to assess the capability of our algorithm show promising results, indicating all the changed objects with a very low false-alarm rate.
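
The XOR difference of two voxel occupancy models can be sketched very compactly: quantize each epoch's points to voxel indices and take the symmetric set difference. A numpy illustration on invented data (the real pipeline additionally filters, registers and colour-checks the voxels):

```python
import numpy as np

def voxel_set(points, size):
    """Quantize points into a set of occupied-voxel indices."""
    return set(map(tuple, np.floor(points / size).astype(int)))

def changed_voxels(ref, cur, size=0.5):
    """Symmetric difference (XOR) of the two occupancy sets: voxels occupied
    in exactly one of the two epochs."""
    return voxel_set(ref, size) ^ voxel_set(cur, size)

# Synthetic scene; a new box appears far from the unchanged background.
rng = np.random.default_rng(5)
scene = rng.uniform(0, 10, (1000, 3))
box = rng.uniform(0, 1, (200, 3)) + np.array([20.0, 20.0, 0.0])
appeared = changed_voxels(scene, np.vstack([scene, box]))
```

Voxels shared by both epochs cancel in the XOR, so only the appeared (or disappeared) object survives; the neighbour-voting stage then cleans up isolated false positives.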

  19. Gridless, pattern-driven point cloud completion and extension

    NASA Astrophysics Data System (ADS)

    Gravey, Mathieu; Mariethoz, Gregoire

    2016-04-01

    While satellites offer Earth observation with wide coverage, other remote sensing techniques such as terrestrial LiDAR can acquire very high-resolution data over an area that is limited in extent and often discontinuous due to shadow effects. Here we propose a numerical approach to merge these two types of information, thereby reconstructing high-resolution data over a continuous large area. It is based on a pattern matching process that completes the areas where only low-resolution data is available, using bootstrapped high-resolution patterns. Currently, the most common approach to pattern matching is to interpolate the point data on a grid. While this approach is computationally efficient, it presents major drawbacks for point cloud processing, because a significant part of the information is lost in the point-to-grid resampling and a prohibitive amount of memory is needed to store large grids. To address these issues, we propose a gridless method that compares point cloud subsets without the need for a grid. On-the-fly interpolation involves a heavy computational load, which is met by a highly optimized GPU implementation and a hierarchical pattern searching strategy. The method is illustrated using data from the Val d'Arolla, Swiss Alps, where high-resolution terrestrial LiDAR data are fused with lower-resolution Landsat and WorldView-3 acquisitions, such that the density of points is homogenized (data completion) and extended to a larger area (data extension).

  20. Assessing and Correcting Topographic Effects on Forest Canopy Height Retrieval Using Airborne LiDAR Data

    PubMed Central

    Duan, Zhugeng; Zhao, Dan; Zeng, Yuan; Zhao, Yujin; Wu, Bingfang; Zhu, Jianjun

    2015-01-01

    Topography strongly affects forest canopy height retrieval based on airborne Light Detection and Ranging (LiDAR) data. This paper proposes a method for correcting deviations caused by topography based on individual tree crown segmentation. The point cloud of an individual tree is extracted according to the crown boundaries of isolated individual trees from digital orthophoto maps (DOMs), and normalized canopy height is calculated by subtracting the elevation of the centre of gravity from the elevation of the point cloud. First, individual tree crown boundaries are obtained by carrying out segmentation on the DOM. Second, point clouds of the individual trees are extracted based on the boundaries. Third, a precise DEM is derived from the point cloud classified by a multi-scale curvature classification algorithm. Finally, a height-weighted correction method is applied to correct the topographic effects. The method is applied to LiDAR data acquired in South China, and its effectiveness is tested using 41 field survey plots. The results show that the terrain impacts the canopy height of individual trees: the downslope side of the tree trunk is elevated and the upslope side is depressed. This further affects the extraction of the location and crown of individual trees. A strong correlation was detected between the slope gradient and the proportions of returns with height differences of more than 0.3, 0.5 and 0.8 m in the total returns, with coefficients of determination (R²) of 0.83, 0.76, and 0.60 (n = 41), respectively. PMID:26016907

  1. Probabilistic change mapping from airborne LiDAR for post-disaster damage assessment

    NASA Astrophysics Data System (ADS)

    Jalobeanu, A.; Runyon, S. C.; Kruse, F. A.

    2013-12-01

    When both pre- and post-event LiDAR point clouds are available, change detection can be performed to identify areas that were most affected by a disaster event, and to obtain a map of quantitative changes in terms of height differences. In the case of earthquakes in built-up areas for instance, first responders can use a LiDAR change map to help prioritize search and recovery efforts. The main challenge consists of producing reliable change maps, robust to collection conditions, free of processing artifacts (due for instance to triangulation or gridding), and taking into account the various sources of uncertainty. Indeed, datasets acquired within a few years of each other are often of different point density (sometimes an order of magnitude higher for recent data) and different acquisition geometries, and very likely suffer from georeferencing errors and geometric discrepancies. All these differences might not be important for producing maps from each dataset separately, but they are crucial when performing change detection. We have developed a novel technique for the estimation of uncertainty maps from the LiDAR point clouds, using Bayesian inference, treating all variables as random. The main principle is to grid all points on a common grid before attempting any comparison, as working directly with point clouds is cumbersome and time-consuming. A non-parametric approach based on local linear regression was implemented, assuming a locally linear model for the surface. This enabled us to derive error bars on gridded elevations, and then on elevation differences. In this way, a map of statistically significant changes could be computed, whereas a deterministic approach would not allow testing of the significance of differences between the two datasets. This approach allowed us to take into account not only the observation noise (due to ranging, position and attitude errors) but also the intrinsic roughness of the observed surfaces occurring when scanning vegetation.
    As only elevation differences above a predefined noise level are accounted for (according to a specified confidence interval related to the allowable false alarm rate), the change detection is robust to all these sources of noise. To first validate the approach, we built small-scale models and scanned them using a terrestrial laser scanner to establish 'ground truth'. Changes were manually applied to the models, then new scans were performed and analyzed. Additionally, two airborne datasets of the Monterey Peninsula, California, were processed and analyzed. The first one was acquired during 2010 (with relatively low point density, 1-3 pts/m²), and the second one was acquired during 2012 (with up to 30 pts/m²). To perform the comparison, a new point cloud registration technique was developed and the data were registered to a common 1 m grid. The goal was to correct systematic shifts due to GPS and INS errors, and focus on the actual height differences regardless of the absolute planimetric accuracy of the datasets. Though no major disaster event occurred between the two acquisition dates, sparse changes were detected and interpreted mostly as construction and natural landscape evolution.
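
The gridded local linear regression with error bars can be sketched for a single grid node: fit a plane to the points near the node, read the node elevation off the intercept, propagate the residual variance into a standard error, and test an elevation difference against the combined uncertainty. A plain least-squares numpy illustration on synthetic data (the Bayesian treatment in the study is richer than this sketch):

```python
import numpy as np

def node_elevation(points, x0, y0, radius=1.5):
    """Locally linear (plane) fit around a grid node: returns the node
    elevation (the intercept) and its standard error from the ordinary
    least-squares normal equations."""
    near = points[np.hypot(points[:, 0] - x0, points[:, 1] - y0) < radius]
    A = np.c_[np.ones(len(near)), near[:, 0] - x0, near[:, 1] - y0]
    beta, res, *_ = np.linalg.lstsq(A, near[:, 2], rcond=None)
    sigma2 = float(res[0]) / (len(near) - 3) if res.size else 0.0
    return beta[0], np.sqrt(sigma2 * np.linalg.inv(A.T @ A)[0, 0])

def significant_change(pts1, pts2, x0, y0, z_crit=3.0):
    """Elevation difference at a node and whether it exceeds z_crit combined
    standard errors (i.e. is statistically significant)."""
    z1, s1 = node_elevation(pts1, x0, y0)
    z2, s2 = node_elevation(pts2, x0, y0)
    dz = z2 - z1
    return dz, abs(dz) > z_crit * np.hypot(s1, s2)

# Synthetic sloping surface, raised by 1 m between the two epochs.
rng = np.random.default_rng(6)
xy1, xy2 = rng.uniform(-2, 2, (400, 2)), rng.uniform(-2, 2, (400, 2))
t1 = np.c_[xy1, 0.1 * xy1[:, 0] + rng.normal(scale=0.05, size=400)]
t2 = np.c_[xy2, 0.1 * xy2[:, 0] + 1.0 + rng.normal(scale=0.05, size=400)]
dz, sig = significant_change(t1, t2, 0.0, 0.0)
```

Repeating this per grid node yields both the change map and a per-node significance mask; surface roughness (e.g. vegetation) simply inflates the residual variance and thus the detection threshold.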

  2. Reconstruction of Building Outlines in Dense Urban Areas Based on LIDAR Data and Address Points

    NASA Astrophysics Data System (ADS)

    Jarzabek-Rychard, M.

    2012-07-01

    The paper presents a comprehensive method for automated extraction and delineation of building outlines in densely built-up areas. A novel aspect of the outline reconstruction is the use of geocoded building address points: they provide information about building locations and thus greatly reduce task complexity. The reconstruction process is executed on 3D point clouds acquired by an airborne laser scanner. The method consists of three steps: building detection, delineation and contour refinement. The algorithm is tested against a data set covering an old market town and its surroundings. The results are discussed and evaluated by comparison to reference cadastral data.

  3. A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information on terrain and surface objects within a short time, from which high-quality Digital Elevation Models (DEMs) can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms, so as to separate terrain points from non-terrain points, followed by a procedure of interpolating the selected points to turn them into DEM data. The whole procedure is time-consuming and computationally intensive due to the high point density, which has been the focus of much research. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is well suited to improving the efficiency of DEM generation algorithms. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner was used as the original data to generate a DEM with a Hadoop-based algorithm implemented on Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid, and that the non-Hadoop implementation can achieve high performance when memory is sufficient, while the multi-node Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.
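
The Map/Reduce decomposition of DEM gridding can be sketched in a few lines: the map phase keys each point by its grid cell, and the reduce phase aggregates the elevations per cell (here by plain averaging; the study interpolates classified ground points). This is a single-process illustration of the pattern, not actual Hadoop code:

```python
from collections import defaultdict

def map_phase(points, cell=1.0):
    """Mapper: emit (grid-cell key, elevation) pairs for each ground point."""
    for x, y, z in points:
        yield (int(x // cell), int(y // cell)), z

def reduce_phase(pairs):
    """Reducer: aggregate the elevations sharing a cell key into one DEM value."""
    acc = defaultdict(list)
    for key, z in pairs:
        acc[key].append(z)
    return {key: sum(zs) / len(zs) for key, zs in acc.items()}

points = [(0.2, 0.3, 5.0), (0.8, 0.9, 7.0), (1.5, 0.1, 3.0)]
dem = reduce_phase(map_phase(points))
```

Because each cell key is reduced independently, the reduce phase parallelizes across cluster nodes with no coordination, which is where the performance-cost advantage on very large point sets comes from.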

  4. Topographic lidar survey of the Chandeleur Islands, Louisiana, February 6, 2012

    USGS Publications Warehouse

    Guy, Kristy K.; Plant, Nathaniel G.; Bonisteel-Cormier, Jamie M.

    2014-01-01

    This Data Series Report contains lidar elevation data collected February 6, 2012, for Chandeleur Islands, Louisiana. Point cloud data in lidar data exchange format (LAS) and bare earth digital elevation models (DEMs) in ERDAS Imagine raster format (IMG) are available as downloadable files. The point cloud data—data points described in three dimensions—were processed to extract bare earth data; therefore, the point cloud data are organized into the following classes: 1– and 17–unclassified, 2–ground, 9–water, and 10–breakline proximity. Digital Aerial Solutions, LLC, (DAS) was contracted by the U.S. Geological Survey (USGS) to collect and process these data. The lidar data were acquired at a horizontal spacing (or nominal pulse spacing) of 0.5 meters (m) or less. The USGS conducted two ground surveys in small areas on the Chandeleur Islands on February 5, 2012. DAS calculated a root mean square error (RMSEz) of 0.034 m by comparing the USGS ground survey point data to triangulated irregular network (TIN) models built from the lidar elevation data. This lidar survey was conducted to document the topography and topographic change of the Chandeleur Islands. The survey supports detailed studies of Louisiana, Mississippi and Alabama barrier islands that resolve annual and episodic changes in beaches, berms and dunes associated with processes driven by storms, sea-level rise, and even human restoration activities. These lidar data are available to Federal, State and local governments, emergency-response officials, resource managers, and the general public.
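    The RMSEz quoted above is the root mean square of vertical residuals between the ground survey checkpoints and the lidar-derived surface. A minimal sketch, with a hypothetical flat surface function standing in for the TIN interpolation:

```python
import math

def rmse_z(ground_pts, surface_z):
    """RMSEz: root mean square of vertical differences between independently
    surveyed checkpoints (x, y, z) and the elevation interpolated from the
    lidar surface at the same (x, y)."""
    residuals = [gz - surface_z(x, y) for x, y, gz in ground_pts]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# hypothetical surface at z = 1.0 m and three survey checkpoints
checkpoints = [(0, 0, 1.03), (5, 5, 0.97), (10, 0, 1.00)]
print(round(rmse_z(checkpoints, lambda x, y: 1.0), 4))  # 0.0245
```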

  5. Point Cloud Generation from sUAS-Mounted iPhone Imagery: Performance Analysis

    NASA Astrophysics Data System (ADS)

    Ladai, A. D.; Miller, J.

    2014-11-01

    The rapidly growing use of sUAS technology and fast sensor developments continuously inspire mapping professionals to experiment with low-cost airborne systems. Smartphones have all the sensors used in modern airborne surveying systems, including GPS, IMU, camera, etc. Of course, the performance level of the sensors differs by orders of magnitude, yet it is intriguing to assess the potential of using inexpensive sensors installed on sUAS platforms for topographic applications. This paper focuses on the quality analysis of point clouds generated from overlapping images acquired by an iPhone 5s mounted on a sUAS platform. To support the investigation, test data were acquired over an area with complex topography and varying vegetation. In addition, extensive ground control, including GCPs and transects, was collected with GPS and traditional geodetic surveying methods. After a successful data collection, the main question is always the reliability and accuracy of the georeferenced data; the statistical and visual analysis is therefore based on a comparison of the UAS data and the reference dataset. The results and their evaluation provide a realistic measure of data acquisition system performance. The paper also gives a recommendation for a data processing workflow to achieve the best quality of the final products: the digital terrain model and orthophoto mosaic.

  6. Stereovision-based integrated system for point cloud reconstruction and simulated brain shift validation.

    PubMed

    Yang, Xiaochen; Clements, Logan W; Luo, Ma; Narasimhan, Saramati; Thompson, Reid C; Dawant, Benoit M; Miga, Michael I

    2017-07-01

    Intraoperative soft tissue deformation, referred to as brain shift, compromises the application of current image-guided surgery navigation systems in neurosurgery. A computational model driven by sparse data has been proposed as a cost-effective method to compensate for cortical surface and volumetric displacements. We present a mock environment developed to acquire stereoimages from a tracked operating microscope and to reconstruct three-dimensional point clouds from these images. A reconstruction error of 1 mm is estimated using a phantom with a known geometry and independently measured deformation extent. The microscope is tracked via an attached rigid body that allows a commercial optical tracking system to record the microscope's position as it moves during the procedure. Point clouds reconstructed at different microscope positions are registered into the same space to compute feature displacements. Using our mock craniotomy device, realistic cortical deformations are generated. When comparing our tracked-microscope stereo-pair measurements of mock vessel displacements to those determined by independent optically tracked stylus marking, the displacement error was [Formula: see text] on average. These results demonstrate the practicality of using a tracked stereoscopic microscope as an alternative to laser range scanners to collect sufficient intraoperative information for brain shift correction.

  7. A Method of 3D Measurement and Reconstruction for Cultural Relics in Museums

    NASA Astrophysics Data System (ADS)

    Zheng, S.; Zhou, Y.; Huang, R.; Zhou, L.; Xu, X.; Wang, C.

    2012-07-01

    Three-dimensional measurement and reconstruction during the conservation and restoration of cultural relics have become an essential part of a modern museum's regular work. Although many kinds of methods, including laser scanning, computer vision and close-range photogrammetry, have been put forward, problems still exist, such as the trade-offs between cost and quality of results, and between time and fineness of effect. Aimed at these problems, this paper proposes a structure-light based method for the 3D measurement and reconstruction of cultural relics in museums. Firstly, based on the structure-light principle, digitalization hardware has been built, with whose help a dense point cloud of a cultural relic's surface can easily be acquired. To produce an accurate 3D geometric model from the point cloud data, multiple processing algorithms have been developed and corresponding software implemented, whose functions include blunder detection and removal, point cloud alignment and merging, and 3D mesh construction and simplification. Finally, high-resolution images are captured and aligned with the 3D geometric model, and a realistic, accurate 3D model is constructed. Based on this method, a complete system including hardware and software has been built. Many kinds of cultural relics have been used to test the method, and the results demonstrate its features, such as high efficiency, high accuracy and easy operation.

  8. Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis

    NASA Astrophysics Data System (ADS)

    Che, E.; Olsen, M. J.

    2017-09-01

    Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common post-processing procedure that groups the point cloud into a number of clusters, simplifying the data for the subsequent modelling and analysis needed by most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing a normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern produced during acquisition of TLS data by most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimating the normal at each point, which prevents errors in normal estimation from propagating into the segmentation. Both an indoor and an outdoor scene are used in experiments to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
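    The region growing stage the abstract describes can be illustrated on a gridded range image. The sketch below grows segments over grid neighbours with similar range values, a simple stand-in for the paper's normal-variation edge criterion, which is not reproduced here:

```python
from collections import deque

def segment_grid(depth, thresh=0.1):
    """Simplified region growing on a gridded range image: neighbouring pixels
    whose range difference is below `thresh` are grouped into one segment via
    breadth-first search over the 4-connected grid."""
    rows, cols = len(depth), len(depth[0])
    labels = [[None] * cols for _ in range(rows)]
    seg = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            labels[r][c] = seg
            queue = deque([(r, c)])
            while queue:
                i, j = queue.popleft()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols
                            and labels[ni][nj] is None
                            and abs(depth[ni][nj] - depth[i][j]) < thresh):
                        labels[ni][nj] = seg
                        queue.append((ni, nj))
            seg += 1
    return labels

# toy range image: a near surface (~1 m) beside a far surface (~5 m)
depth = [[1.0, 1.02, 5.0],
         [1.01, 1.03, 5.01]]
print(segment_grid(depth))  # [[0, 0, 1], [0, 0, 1]]
```

Operating directly on the grid is what lets the full method avoid a costly nearest-neighbour search over the unordered point cloud.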

  9. Object-Based Point Cloud Analysis of Full-Waveform Airborne Laser Scanning Data for Urban Vegetation Classification

    PubMed Central

    Rutzinger, Martin; Höfle, Bernhard; Hollaus, Markus; Pfeifer, Norbert

    2008-01-01

    Airborne laser scanning (ALS) is a remote sensing technique well suited to 3D vegetation mapping and structure characterization because the emitted laser pulses are able to penetrate small gaps in the vegetation canopy. The backscattered echoes from the foliage, woody vegetation, the terrain, and other objects are detected, leading to a cloud of points. Higher echo densities (>20 echoes/m2) and additional classification variables from full-waveform (FWF) ALS data, namely echo amplitude, echo width and information on multiple echoes from one shot, offer new possibilities for classifying the ALS point cloud. Currently, FWF sensor information is hardly used for classification purposes. This contribution presents an object-based point cloud analysis (OBPA) approach, combining segmentation and classification of the 3D FWF ALS points, designed to detect tall vegetation in urban environments. The definition of tall vegetation includes trees and shrubs, but excludes grassland and herbage. In the applied procedure, FWF ALS echoes are segmented by a seeded region growing procedure. All echoes, sorted in descending order by their surface roughness, are used as seed points, and segments are grown based on echo width homogeneity. Next, segment statistics (mean, standard deviation, and coefficient of variation) are calculated by aggregating echo features such as amplitude and surface roughness. For classification, a rule base is derived automatically from a training area using a statistical classification tree. To demonstrate our method we present data from three sites with around 500,000 echoes each. The accuracy of the classified vegetation segments is evaluated for two independent validation sites. In a point-wise error assessment, where the classification is compared with manually classified 3D points, completeness and correctness better than 90% are reached for the validation sites. In comparison to many other algorithms, the proposed 3D point classification works directly on the original measurements, i.e. the acquired points; gridding of the data, a process inherently coupled with loss of data and precision, is not necessary. In particular, the 3D properties provide good separability of building and terrain points, even where these are occluded by vegetation. PMID:27873771

  10. An Empirical Point Error Model for TLS Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin

    2016-06-01

    The random error pattern of point clouds has a significant effect on the quality of the final 3D model, so the magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined and practical measurement configurations performed in real-world test environments. The a priori precisions of the horizontal (σ_θ) and vertical (σ_α) angles are constant for every point of a data set, and can be determined directly through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σ_θ = ±36.6cc and σ_α = ±17.8cc, respectively. On the other hand, the a priori precision of the range observation (σ_ρ) is assumed to be a function of the range, the incidence angle of the incoming laser ray, and the reflectivity of the object surface. Hence, it is a variable, computed for each point individually by an empirically developed formula, varying between σ_ρ = ±2 and ±12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of the error ellipsoid of each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model were investigated in real-world scenarios, and these investigations validated the suitability and practicality of the proposed method.
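    The variance-covariance propagation step can be illustrated in 2D (range and horizontal angle only; the paper works in 3D): propagate the a priori precisions through the polar-to-Cartesian Jacobian and take the principal components of the resulting covariance. Here 1 cc (centesimal arc second, 1/10000 gon) is π/2,000,000 rad, and the range precision used is a hypothetical value inside the ±2-12 mm interval quoted above.

```python
import math

def error_ellipse(rho, theta, s_rho, s_theta):
    """Propagate a priori precisions of range (s_rho, metres) and horizontal
    angle (s_theta, radians) into the XY covariance of the Cartesian point,
    then return the semi-axes of the resulting error ellipse (2D analogue of
    the 3D error-ellipsoid computation)."""
    c, s = math.cos(theta), math.sin(theta)
    # Jacobian of (x, y) = (rho*cos(theta), rho*sin(theta))
    J = [[c, -rho * s], [s, rho * c]]
    S = [[s_rho ** 2, 0.0], [0.0, s_theta ** 2]]
    # C = J * S * J^T (law of variance-covariance propagation)
    C = [[sum(J[i][k] * S[k][k] * J[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]
    # eigenvalues of the symmetric 2x2 covariance (principal components)
    tr, det = C[0][0] + C[1][1], C[0][0] * C[1][1] - C[0][1] ** 2
    d = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return math.sqrt(tr / 2 + d), math.sqrt(max(tr / 2 - d, 0.0))

CC = math.pi / 2_000_000  # 1 cc in radians
a, b = error_ellipse(rho=30.0, theta=0.5, s_rho=0.004, s_theta=36.6 * CC)
print(a, b)  # semi-major axis = s_rho, semi-minor axis = rho * s_theta (metres)
```

As expected, the ellipse axes recover σ_ρ along the ray and ρ·σ_θ across it, regardless of the pointing direction θ.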

  11. Cloud-top structure of tornadic storms on 10 April 1979 from rapid scan and stereo satellite observations

    NASA Technical Reports Server (NTRS)

    Negri, A. J.

    1982-01-01

    Stereoscopic data from near-synchronous eastern and western GOES satellite visible and IR measurements at 3-min intervals, together with ground-based radar, are used to examine the Wichita Falls, TX tornado of April 1979. The visible-wavelength scan was at 0.6 microns, the IR at 11 microns, and additional IR blackbody temperatures were acquired from the Tiros-N spacecraft. A minimum cloud top temperature of 208 K located the point of tornadogenesis. The cloud top cooling rate above the tropopause was determined to be 7 K/21 min preceding the tornado, while a warm area at 221 K developed downwind at the same time. It was found that temperature differences of 10 K can exist between GOES and Tiros-N anvil top measurements, reaching 20 K in the case of a young thunderstorm.

  12. Fusion of multi-temporal Airborne Snow Observatory (ASO) lidar data for mountainous vegetation ecosystems studies.

    NASA Astrophysics Data System (ADS)

    Ferraz, A.; Painter, T. H.; Saatchi, S.; Bormann, K. J.

    2016-12-01

    The NASA Jet Propulsion Laboratory developed the Airborne Snow Observatory (ASO), a coupled scanning lidar system and imaging spectrometer, to quantify the spatial distribution of snow volume and dynamics over mountain watersheds (Painter et al., 2015). To do this, ASO flies weekly over mountainous areas during the snowfall and snowmelt seasons; additional flights are made in snow-off conditions to compute Digital Terrain Models (DTM). In this study, we focus on the reliability of ASO lidar data for characterizing 3D forest vegetation structure. The density of a single point cloud acquisition is nearly 1 pt/m2, which is not optimal for properly characterizing vegetation. However, ASO covers a given study site up to 14 times a year, which enables computing a high-resolution point cloud by merging single acquisitions. We therefore present a method to automatically register ASO multi-temporal lidar 3D point clouds. Although flight specifications do not change between acquisition dates, the lidar datasets may show significant planimetric shifts due to inaccuracies in platform trajectory estimation introduced by the GPS system and drifts of the IMU. A large number of methodologies address the problem of 3D data registration (Gressin et al., 2013). Briefly, they look for common primitive features in both datasets, such as building corners, structures like electric poles, DTM breaklines or deformations. However, they are not suited to our experiment: first, single-acquisition point clouds have a density too low for reliable extraction of primitive features; second, the landscape changes significantly between flights due to snowfall and snowmelt. 
    We therefore developed a method that automatically registers the point clouds using tree apexes as keypoints, since these features are expected to change little during the winter season. We applied the method to 14 lidar datasets (12 snow-on and 2 snow-off) acquired over the Tuolumne River Basin (California) in 2014. To assess the reliability of the merged point cloud, we analyze the quality of vegetation-related products such as canopy height models (CHM) and vertical vegetation profiles.
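    The apex-based registration idea can be sketched as follows: given tree apexes matched between two acquisitions, a robust estimate of the planimetric shift is the component-wise median of their offsets. This is a simplification for illustration; the authors' full pipeline is not detailed in the abstract.

```python
import statistics

def estimate_shift(apexes_ref, apexes_new):
    """Estimate the planimetric (XY) shift between two acquisitions from
    matched tree-apex keypoints, using the component-wise median of the
    offsets for robustness against occasional mismatches."""
    dx = statistics.median(b[0] - a[0] for a, b in zip(apexes_ref, apexes_new))
    dy = statistics.median(b[1] - a[1] for a, b in zip(apexes_ref, apexes_new))
    return dx, dy

# hypothetical apexes; the second flight is shifted by (+0.4, -0.3) metres
ref = [(10.0, 20.0), (35.0, 5.0), (50.0, 42.0)]
new = [(10.4, 19.7), (35.4, 4.7), (50.4, 41.7)]
print(estimate_shift(ref, new))  # ≈ (0.4, -0.3)
```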

  13. Image Capture with Synchronized Multiple-Cameras for Extraction of Accurate Geometries

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Delacourt, T.; Boutry, C.

    2016-06-01

    This paper presents a project for recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and accurate 3D modelling of a selection of road infrastructures as dense point clouds, in order to extract profiles and metrics from them. These models will be used for the sizing of infrastructures when simulating routes for exceptional convoy trucks. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels and the diameter of gyratories, and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain usable point clouds. The presented solution is based on a combination of multiple low-cost cameras mounted on an on-board device allowing dynamic captures. An experimental device comprising GoPro Hero4 cameras has been set up and used for tests in static and mobile acquisitions, and various configurations using multiple synchronized cameras have been evaluated. These configurations are discussed in order to identify the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics is a major factor in the creation of accurate dense point clouds, and in order to reach the best quality available from such cameras, the internal parameters of the cameras' fisheye lenses have been estimated. Reference measurements were also made using a 3D TLS (Faro Focus 3D) to allow accuracy assessment.

  14. Automatic Reconstruction of 3D Building Models from Terrestrial Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    El Meouche, R.; Rezoug, M.; Hijazi, I.; Maes, D.

    2013-11-01

    With modern 3D laser scanners we can acquire a large amount of 3D data in only a few minutes. This technology is finding a growing number of applications, ranging from the digitalization of historical artifacts to facial authentication. The modeling process demands a lot of time and work (Tim Volodine, 2007); in comparison with the other two stages, acquisition and registration, the degree of automation of the modeling stage is almost zero. In this paper, we propose a new surface reconstruction technique for buildings to process the data obtained by a 3D laser scanner. These data are called a point cloud: a collection of points sampled from the surface of a 3D object, which can consist of millions of points. In order to work more efficiently, we worked with simplified models which contain fewer points, and thus less detail, than a point cloud obtained in situ. The goal of this study was to facilitate the modeling of a building starting from 3D laser scanner data. To this end, we wrote two scripts for Rhinoceros 5.0 based on intelligent algorithms. The first script finds the exterior outline of a building. With a minimum of human interaction, a thin box is drawn around the surface of a wall. This box can rotate 360° around an axis in a corner of the wall in search of the points of other walls; in this way we can eliminate noise points, i.e. unwanted or irrelevant points. If there is an angled roof, the box can also rotate around the edge between the wall and the roof. From the different positions of the box we can calculate the exterior outline. The second script draws the interior outline in a surface of a building, by which we mean the outline of openings such as windows or doors. This script is based on the distances between points and on vector characteristics: two consecutive points with a relatively large distance between them form the outline of an opening. 
    Once those points are found, the interior outline can be drawn. For simple point clouds, the scripts are able to eliminate almost all noise points and reconstruct a CAD model.
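    The opening-detection rule of the second script (two consecutive points separated by a relatively large distance mark the outline of an opening) can be sketched for a single scanline of a wall:

```python
def find_openings(xs, max_spacing=0.05):
    """Detect openings (windows/doors) along one scanline of a wall surface:
    two consecutive points separated by much more than the sampling spacing
    mark the left and right edges of an opening."""
    xs = sorted(xs)
    return [(a, b) for a, b in zip(xs, xs[1:]) if b - a > max_spacing]

# hypothetical scanline: dense wall points with a 0.9 m gap where a window is
scan = [0.00, 0.02, 0.04, 0.06, 0.96, 0.98, 1.00]
print(find_openings(scan))  # [(0.06, 0.96)]
```

The `max_spacing` threshold is an assumed parameter; in practice it would be set relative to the local point density of the scan.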

  15. Classification of Dual-Wavelength Airborne Laser Scanning Point Cloud Based on the Radiometric Properties of the Objects

    NASA Astrophysics Data System (ADS)

    Pilarska, M.

    2018-05-01

    Airborne laser scanning (ALS) is a well-known and widely used technology, one of whose main advantages is fast and accurate data registration. In recent years ALS has been continuously developed; one of the latest achievements is multispectral ALS, which acquires data simultaneously at more than one laser wavelength. In this article the results of dual-wavelength ALS data classification are presented. The data were acquired with a RIEGL VQ-1560i sensor, which is equipped with two laser scanners operating at different wavelengths: 532 nm and 1064 nm. Two classification approaches are presented: one based on geometric relationships between points, and one that relies mostly on the radiometric properties of the registered objects. The overall accuracy of the geometric classification was 86 %, whereas for the radiometric classification it was 81 %. It can therefore be assumed that the radiometric features provided by multispectral ALS have the potential to be successfully used in ALS point cloud classification.

  16. The Influence of Cloud Field Uniformity on Observed Cloud Amount

    NASA Astrophysics Data System (ADS)

    Riley, E.; Kleiss, J.; Kassianov, E.; Long, C. N.; Riihimaki, L.; Berg, L. K.

    2017-12-01

    Two ground-based measurements of cloud amount are cloud fraction (CF), obtained from time series of zenith-pointing radar-lidar observations, and fractional sky cover (FSC), acquired from a Total Sky Imager (TSI). In comparison with the radars and lidars, the TSI has a considerably larger field of view (FOV: 100° vs. 0.2°) and is therefore expected to have a different sensitivity to inhomogeneity in a cloud field. Radiative transfer calculations based on cloud properties retrieved from narrow-FOV overhead cloud observations may differ from shortwave and longwave flux observations due to spatial variability in local cloud cover. This bias impedes radiative closure for sampling reasons rather than because of the accuracy of cloud microphysics retrievals or radiative transfer calculations. Furthermore, comparisons between observed and modeled cloud amount from large eddy simulation (LES) models may be affected by cloud field inhomogeneity. The main goal of our study is to estimate the anticipated impact of cloud field inhomogeneity on the level of agreement between CF and FSC. We focus on shallow cumulus clouds observed at the U.S. Department of Energy Atmospheric Radiation Measurement Facility's Southern Great Plains (SGP) site in Oklahoma, USA. Our analysis identifies cloud field inhomogeneity using a novel metric that quantifies the spatial and temporal uniformity of FSC over 100-degree-FOV TSI images. We demonstrate that (1) large differences between CF and FSC are partly attributable to increases in inhomogeneity and (2) the uniformity metric can provide a meaningful assessment of uncertainties in observed cloud amount to aid in comparing ground-based measurements to radiative transfer or LES model outputs at SGP.

  17. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds

    PubMed Central

    Dorninger, Peter; Pfeifer, Norbert

    2008-01-01

    Three-dimensional city models are necessary to support numerous management applications. For the determination of city models for visualization purposes, several standardized workflows exist, based on photogrammetry, on LiDAR, or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps, the most relevant being building detection, building outline generation, building modeling, and finally building quality analysis. Commercial software tools for building modeling generally require a high degree of human interaction, and most automated approaches described in the literature address the steps of such a workflow individually. In this article, we propose a comprehensive approach for the automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces, and hence on a reliable 3D segmentation algorithm that detects planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects. PMID:27873931

  18. Point Cloud Analysis for Conservation and Enhancement of Modernist Architecture

    NASA Astrophysics Data System (ADS)

    Balzani, M.; Maietti, F.; Mugayar Kühl, B.

    2017-02-01

    Documentation of cultural assets through improved acquisition processes for advanced 3D modelling is one of the main challenges to be faced in order to address, through digital representation, advanced analysis of the shape, appearance and conservation condition of cultural heritage. 3D modelling can open new avenues in the way tangible cultural heritage is studied, visualized, curated, displayed and monitored, improving key features such as the analysis and visualization of material degradation and state of conservation. Applied research focused on the analysis of surface specifications and material properties by means of a 3D laser scanner survey has been developed within the Digital Preservation project of the FAUUSP building, Faculdade de Arquitetura e Urbanismo da Universidade de São Paulo, Brazil. The integrated 3D survey was performed by the DIAPReM Center of the Department of Architecture of the University of Ferrara in cooperation with the FAUUSP. The 3D survey allowed the realization of a point cloud model of the external surfaces, as the basis for investigating in detail the formal characteristics, geometric textures and surface features. The digital geometric model was also the basis for processing the intensity values acquired by the laser scanning instrument; this method of analysis was an essential complement to the macroscopic investigations, making it possible to manage additional information related to surface characteristics displayable on the point cloud.

  19. Evaluating Continuous-Time SLAM Using a Predefined Trajectory Provided by a Robotic Arm

    NASA Astrophysics Data System (ADS)

    Koch, B.; Leblebici, R.; Martell, A.; Jörissen, S.; Schilling, K.; Nüchter, A.

    2017-09-01

    Recently published SLAM approaches process laser sensor measurements and output a map as a point cloud of the environment. The actual precision of the map often remains unclear, since SLAM algorithms apply local improvements to the resulting map. Unfortunately, it is not trivial to compare the performance of SLAM algorithms objectively, especially without an accurate ground truth. This paper presents a novel benchmarking technique that allows comparing a precise map generated with an accurate ground truth trajectory to a map whose trajectory was distorted by different forms of noise. The accurate ground truth is acquired by mounting a laser scanner on an industrial robotic arm. The robotic arm is moved on a predefined path while the position and orientation of the end-effector tool are monitored. During this process the 2D profile measurements of the laser scanner are recorded in six degrees of freedom and afterwards used to generate a precise point cloud of the test environment. For benchmarking, an offline continuous-time SLAM algorithm is subsequently applied to remove the inserted distortions. Finally, it is shown that the manipulated point cloud can be restored to its previous state and is even slightly improved compared to the original version, since small errors introduced by imprecise assumptions, sensor noise and calibration errors are removed as well.

  20. UAS-SfM for coastal research: Geomorphic feature extraction and land cover classification from high-resolution elevation and optical imagery

    USGS Publications Warehouse

    Sturdivant, Emily; Lentz, Erika; Thieler, E. Robert; Farris, Amy; Weber, Kathryn; Remsen, David P.; Miner, Simon; Henderson, Rachel

    2017-01-01

    The vulnerability of coastal systems to hazards such as storms and sea-level rise is typically characterized using a combination of ground and manned airborne systems that have limited spatial or temporal scales. Structure-from-motion (SfM) photogrammetry applied to imagery acquired by unmanned aerial systems (UAS) offers a rapid and inexpensive means to produce high-resolution topographic and visual reflectance datasets that rival existing lidar and imagery standards. Here, we use SfM to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM) from data collected by UAS at a beach and wetland site in Massachusetts, USA. We apply existing methods to (a) determine the position of shorelines and foredunes using a feature extraction routine developed for lidar point clouds and (b) map land cover from the rasterized surfaces using a supervised classification routine. In both analyses, we experimentally vary the input datasets to understand the benefits and limitations of UAS-SfM for coastal vulnerability assessment. We find that (a) geomorphic features are extracted from the SfM point cloud with near-continuous coverage and sub-meter precision, better than was possible from a recent lidar dataset covering the same area; and (b) land cover classification is greatly improved by including topographic data with visual reflectance, but changes to resolution (when <50 cm) have little influence on the classification accuracy.

  1. Large Scale Textured Mesh Reconstruction from Mobile Mapping Images and LIDAR Scans

    NASA Astrophysics Data System (ADS)

    Boussaha, M.; Vallet, B.; Rives, P.

    2018-05-01

    The representation of 3D geometric and photometric information of the real world is one of the most challenging and extensively studied research topics in the photogrammetry and robotics communities. In this paper, we present a fully automatic framework for 3D high quality large scale urban texture mapping using oriented images and LiDAR scans acquired by a terrestrial Mobile Mapping System (MMS). First, the acquired points and images are sliced into temporal chunks ensuring a reasonable size and time consistency between geometry (points) and photometry (images). Then, a simple, fast and scalable 3D surface reconstruction relying on the sensor space topology is performed on each chunk after an isotropic sampling of the point cloud obtained from the raw LiDAR scans. Finally, the algorithm proposed in (Waechter et al., 2014) is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France resulting in nearly 2 billion points and 40000 full HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours, the entire process being highly parallel as each chunk can be processed independently in a separate thread or computer.

  2. Expedient Gap Definition Using 3D LADAR

    DTIC Science & Technology

    2006-09-01

    Research and Development Center (ERDC), ASI has developed an algorithm to reduce the 3D point cloud acquired with the LADAR system into sets of 2D...ATO IV.GC.2004.02. The GAP Program is conducted by the U.S. Army Engineer Research and Development Center (ERDC) in conjunction with the U.S. Army...Introduction 1 1 Introduction Background The Battlespace Gap Definition and Defeat ( GAP ) Program is conducted by the U.S. Army Engineer Research and

  3. Classification of Mls Point Clouds in Urban Scenes Using Detrended Geometric Features from Supervoxel-Based Local Contexts

    NASA Astrophysics Data System (ADS)

    Sun, Z.; Xu, Y.; Hoegner, L.; Stilla, U.

    2018-05-01

    In this work, we propose a classification method designed for the labeling of MLS point clouds, with detrended geometric features extracted from the points of a supervoxel-based local context. For the analysis of complex 3D urban scenes, the acquired points should be tagged with individual labels of different classes; thus, assigning a unique label to the points of an object belonging to the same category plays an essential role in the entire 3D scene analysis workflow. Although plenty of studies in this field have been reported, the task remains challenging. Specifically, in this work: 1) A novel geometric feature extraction method, detrending the redundant and non-salient information in the local context, is proposed and proved effective for extracting local geometric features from the 3D scene. 2) Instead of using individual points as basic elements, the supervoxel-based local context is designed to encapsulate the geometric characteristics of points, providing a flexible and robust solution for feature extraction. 3) Experiments on a complex urban scene with manually labeled ground truth are conducted, and the performance of the proposed method is compared with that of other methods. With the testing dataset, we obtained an overall accuracy of 0.92 for assigning eight semantic classes.
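Eigenvalue-based local geometry features of the kind described above are commonly derived from the covariance of a neighborhood. The sketch below (hypothetical function name; feature definitions in the common linearity/planarity/sphericity style) illustrates the general idea, not the paper's exact detrended features; subtracting the centroid is only the simplest form of "detrending".

```python
import numpy as np

def covariance_features(neighbors):
    """Linearity/planarity/sphericity from eigenvalues of a local covariance.

    `neighbors` is an iterable of (x, y, z) points, e.g. one supervoxel's
    local context.
    """
    pts = np.asarray(neighbors, float)
    pts = pts - pts.mean(axis=0)                     # remove local centroid
    lam = np.linalg.eigvalsh(np.cov(pts.T))[::-1]    # l1 >= l2 >= l3
    l1, l2, l3 = np.maximum(lam, 1e-12)
    return {"linearity": (l1 - l2) / l1,
            "planarity": (l2 - l3) / l1,
            "sphericity": l3 / l1}
```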

  4. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    NASA Astrophysics Data System (ADS)

    Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.

    2014-06-01

    This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain results comparable to those of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from distances of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, the latter being taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.
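The cloud-to-cloud comparison performed in CloudCompare boils down to nearest-neighbour distances from each camera-derived point to the TLS reference. A brute-force sketch (illustrative only; real tools use octrees or k-d trees for the same computation):

```python
import math

def cloud_to_cloud_distances(test_cloud, reference):
    """Distance from each test point to its nearest reference point.

    O(N*M) brute force for clarity; the per-point distances can then be
    summarized (mean, std) or colour-mapped onto the test cloud.
    """
    return [min(math.dist(p, q) for q in reference) for p in test_cloud]
```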

  5. Structure from motion, a low cost, very high resolution method for surveying glaciers using GoPros and opportunistic helicopter flights

    NASA Astrophysics Data System (ADS)

    Girod, L.; Nuth, C.; Schellenberger, T.

    2014-12-01

    The capability of structure from motion techniques to survey glaciers with a very high spatial and temporal resolution is a promising tool for better understanding the dynamic changes of glaciers. Modern software and computing power allow us to produce accurate data sets from low cost surveys, thus improving the observational capabilities on a wider range of glaciers and glacial processes. In particular, highly accurate glacier volume change monitoring and 3D movement computations will be possible. Taking advantage of the helicopter flight needed to survey the ice stakes on Kronenbreen, NW Svalbard, we acquired high resolution photogrammetric data over the well-studied Midre Lovénbreen in September 2013. GoPro Hero 2 cameras were attached to the landing gear of the helicopter, acquiring two images per second. A C/A-code-based GPS was used for registering the stereoscopic model. Camera clock calibration was obtained by fitting together the flight trajectories given by the GPS logger and by the relative orientation of the images. A DEM and an ortho-image were generated at 30 cm resolution from the 300 images collected. The comparison with a 2005 LiDAR DEM (5 m resolution) shows an absolute error in the direct registration of about 6±3 m in 3D, which could easily be reduced to 1.5±1 m by using fine point cloud alignment algorithms on stable ground. Due to the different nature of the acquisition method, it was not possible to use tie-point-based co-registration. A combination of the DEM and ortho-image is shown with the point cloud in the figure below. A second photogrammetric data set will be acquired in September 2014 to survey the annual volume change and movement. These measurements will then be compared to the annual glaciological stake mass balance and velocity measurements to assess the precision of the method for monitoring at an annual resolution.

  6. Updating a preoperative surface model with information from real-time tracked 2D ultrasound using a Poisson surface reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Deyu; Rettmann, Maryam E.; Holmes, David R.; Linte, Cristian A.; Packer, Douglas; Robb, Richard A.

    2014-03-01

    In this work, we propose a method for intraoperative reconstruction of a left atrial surface model for the application of cardiac ablation therapy. In this approach, the intraoperative point cloud is acquired by a tracked, 2D freehand intra-cardiac echocardiography device, which is registered to and merged with a preoperative, high-resolution left atrial surface model built from computed tomography data. For the surface reconstruction, we introduce a novel method to estimate the normal vectors of the point cloud from the preoperative left atrial model, which are required by the Poisson surface reconstruction algorithm. In the current work, the algorithm is evaluated using a preoperative surface model from patient computed tomography data and simulated intraoperative ultrasound data. Factors such as intraoperative deformation of the left atrium, the proportion of the left atrial surface sampled by the ultrasound, sampling resolution, sampling noise, and registration error were considered through a series of simulation experiments.
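The normal-estimation idea (borrowing normals from the registered preoperative model for the Poisson step) can be sketched as a nearest-vertex lookup. The function name and the brute-force search are illustrative assumptions; it presumes the ultrasound points are already registered to the model.

```python
import math

def borrow_normals(cloud, model_pts, model_normals):
    """Assign each intraoperative point the normal of its nearest model vertex.

    `model_pts` and `model_normals` are parallel lists from the preoperative
    surface model; brute-force nearest-neighbour search for clarity.
    """
    normals = []
    for p in cloud:
        j = min(range(len(model_pts)), key=lambda k: math.dist(p, model_pts[k]))
        normals.append(model_normals[j])
    return normals
```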

  7. Study on the application of mobile internet cloud computing platform

    NASA Astrophysics Data System (ADS)

    Gong, Songchun; Fu, Songyin; Chen, Zheng

    2012-04-01

    The innovative development of computer technology promotes the application of the cloud computing platform, which is essentially a resource-service model that, after adjustments in multiple respects, meets users' needs for different resources. Cloud computing offers advantages in many respects: it not only reduces the difficulty of operating the system, but also makes it easy for users to search for, acquire and process resources. Accordingly, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile internet cloud computing platform in the operation process. The popularization of computer technology drives people to create digital library models, whose core idea is to strengthen the management of library resource information through computers and to construct a high-performance inquiry and search platform, allowing users to access the necessary information resources at any time. Cloud computing distributes computation across a large number of networked computers and hence implements a connection service among multiple machines. Digital libraries, as a typical representative of cloud computing applications, can thus be used to analyze the key technologies of cloud computing.

  8. Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics

    NASA Astrophysics Data System (ADS)

    Kohira, K.; Masuda, H.

    2017-09-01

    A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormous: a large storage capacity is required to store them, and networks bear heavy loads when point-clouds are transferred over them. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
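The core of such a scheme is quantizing each return and storing it losslessly in an image codec. The sketch below substitutes raw DEFLATE (zlib, the codec inside PNG) for an actual PNG writer, so it shows only the quantize-and-compress round trip, not the paper's 2D pixel mapping; names and the millimetre quantization step are assumptions.

```python
import struct
import zlib

def pack_ranges(ranges, scale=0.001):
    """Quantize ranges to integer millimetres and DEFLATE-compress them losslessly."""
    q = [round(r / scale) for r in ranges]
    return zlib.compress(struct.pack("<%dI" % len(q), *q), 9)

def unpack_ranges(blob, n, scale=0.001):
    """Invert pack_ranges: decompress and rescale n values."""
    q = struct.unpack("<%dI" % n, zlib.decompress(blob))
    return [v * scale for v in q]
```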

  9. The registration of non-cooperative moving targets laser point cloud in different view point

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Sun, Huayan; Guo, Huichao

    2018-01-01

    Non-cooperative moving target multi-view cloud registration is the key technology of 3D reconstruction of laser threedimension imaging. The main problem is that the density changes greatly and noise exists under different acquisition conditions of point cloud. In this paper, firstly, the feature descriptor is used to find the most similar point cloud, and then based on the registration algorithm of region segmentation, the geometric structure of the point is extracted by the geometric similarity between point and point, The point cloud is divided into regions based on spectral clustering, feature descriptors are created for each region, searching to find the most similar regions in the most similar point of view cloud, and then aligning the pair of point clouds by aligning their minimum bounding boxes. Repeat the above steps again until registration of all point clouds is completed. Experiments show that this method is insensitive to the density of point clouds and performs well on the noise of laser three-dimension imaging.

  10. Filtering Airborne LIDAR Data by AN Improved Morphological Method Based on Multi-Gradient Analysis

    NASA Astrophysics Data System (ADS)

    Li, Y.

    2013-05-01

    The technology of airborne Light Detection And Ranging (LIDAR) is capable of acquiring dense and accurate 3D geospatial data. Although many related efforts have been made in the last few years, LIDAR data filtering is still a challenging task, especially for areas with high relief or hybrid geographic features. To address bare-ground extraction from LIDAR point clouds of complex landscapes, this paper proposes a novel morphological filtering algorithm based on multi-gradient analysis of the characteristics of LIDAR data distribution. First, the point cloud is organized by an index mesh. Then, the multi-gradient of each point is calculated using morphological operators, and objects are removed gradually by iteratively applying an improved opening operation, constrained by the multi-gradient, to selected points. Fifteen samples provided by ISPRS Working Group III/3, including environments that typically cause filtering difficulty, are employed to test the proposed algorithm. Experimental results show that the algorithm adapts well to various scenes, including urban and rural areas; omission error, commission error and total error are simultaneously kept within a relatively small interval. The algorithm efficiently removes object points while preserving ground points to a great degree.
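The opening operation at the heart of morphological filters is easiest to see in one dimension: erosion (window minimum) followed by dilation (window maximum) removes narrow above-ground objects from an elevation profile. A minimal sketch of plain grey-scale opening, not the paper's multi-gradient variant:

```python
def opening_1d(profile, w=1):
    """Grey-scale morphological opening on a 1-D elevation profile.

    Erosion then dilation with a window of half-width w; objects narrower
    than the window are flattened to the surrounding ground level.
    """
    n = len(profile)
    eroded = [min(profile[max(0, i - w):i + w + 1]) for i in range(n)]
    return [max(eroded[max(0, i - w):i + w + 1]) for i in range(n)]
```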

  11. Cortical Surface Registration for Image-Guided Neurosurgery Using Laser-Range Scanning

    PubMed Central

    Sinha, Tuhin K.; Cash, David M.; Galloway, Robert L.; Weil, Robert J.

    2013-01-01

    In this paper, a method of acquiring intraoperative data using a laser range scanner (LRS) is presented within the context of model-updated image-guided surgery. Registering textured point clouds generated by the LRS to tomographic data is explored using established point-based and surface techniques as well as a novel method that incorporates geometry and intensity information via mutual information (SurfaceMI). Phantom registration studies were performed to examine accuracy and robustness for each framework. In addition, an in vivo registration is performed to demonstrate feasibility of the data acquisition system in the operating room. Results indicate that SurfaceMI performed better in many cases than point-based (PBR) and iterative closest point (ICP) methods for registration of textured point clouds. Mean target registration error (TRE) for simulated deep tissue targets in a phantom was 1.0 ± 0.2, 2.0 ± 0.3, and 1.2 ± 0.3 mm for PBR, ICP, and SurfaceMI, respectively. With regard to in vivo registration, the mean TRE of vessel contour points for each framework was 1.9 ± 1.0, 0.9 ± 0.6, and 1.3 ± 0.5 mm for PBR, ICP, and SurfaceMI, respectively. The methods discussed in this paper, in conjunction with the quantitative data, provide impetus for using LRS technology within the model-updated image-guided surgery framework. PMID:12906252

  12. An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features.

    PubMed

    He, Ying; Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin

    2017-08-11

    The Iterative Closest Points (ICP) algorithm is the mainstream algorithm for accurate registration of 3D point cloud data. It requires a proper initial value and an approximate pre-alignment of the two point clouds to prevent convergence to local extremes, but in practical point cloud matching it is difficult to guarantee this requirement. In this paper, we propose an ICP algorithm based on point cloud features (GF-ICP). This method uses geometrical features of the point clouds to be registered, such as curvature, surface normals and point cloud density, to search for correspondences between the two point clouds, and introduces these geometric features into the error function to realize accurate registration. The experimental results showed that the algorithm can improve the convergence speed and widen the convergence region without requiring a proper initial value.
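For reference, a plain point-to-point ICP (nearest-neighbour correspondences plus a Kabsch/SVD rigid update) is sketched below; GF-ICP augments exactly these two steps with geometric features, which are not reproduced here. Function names are illustrative.

```python
import numpy as np

def best_rigid(src, dst):
    """Least-squares rotation R and translation t with R @ p + t mapping src onto dst (Kabsch)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:       # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Classic point-to-point ICP; assumes a rough initial alignment."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    for _ in range(iters):
        # nearest dst point for every src point (brute force)
        idx = np.argmin(((src[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        R, t = best_rigid(src, dst[idx])
        src = src @ R.T + t
    return src
```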

  13. An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features

    PubMed Central

    Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin

    2017-01-01

    The Iterative Closest Points (ICP) algorithm is the mainstream algorithm for accurate registration of 3D point cloud data. It requires a proper initial value and an approximate pre-alignment of the two point clouds to prevent convergence to local extremes, but in practical point cloud matching it is difficult to guarantee this requirement. In this paper, we propose an ICP algorithm based on point cloud features (GF-ICP). This method uses geometrical features of the point clouds to be registered, such as curvature, surface normals and point cloud density, to search for correspondences between the two point clouds, and introduces these geometric features into the error function to realize accurate registration. The experimental results showed that the algorithm can improve the convergence speed and widen the convergence region without requiring a proper initial value. PMID:28800096

  14. Automatic determination of trunk diameter, crown base and height of scots pine (Pinus Sylvestris L.) Based on analysis of 3D point clouds gathered from multi-station terrestrial laser scanning. (Polish Title: Automatyczne okreslanie srednicy pnia, podstawy korony oraz wysokosci sosny zwyczajnej (Pinus Silvestris L.) Na podstawie analiz chmur punktow 3D pochodzacych z wielostanowiskowego naziemnego skanowania laserowego)

    NASA Astrophysics Data System (ADS)

    Ratajczak, M.; Wężyk, P.

    2015-12-01

    The rapid development of terrestrial laser scanning (TLS) in recent years has resulted in its recognition and implementation in many industries, including forestry and nature conservation. The use of 3D TLS point clouds in the inventory of trees and stands, as well as in the determination of their biometric features (trunk diameter, tree height, crown base, trunk shape) and lumber size (tree volume), is slowly becoming practice. Besides measurement precision, the primary added value of TLS is the ability to automate the processing of 3D point clouds towards the extraction of selected features of trees and stands. The paper presents original software (GNOM) for the automatic measurement of selected tree features, based on point clouds obtained with a FARO terrestrial laser scanner. With the developed GNOM algorithms, tree trunks were located on the circular research plot and measurements were performed, covering the DBH (at 1.3 m), trunk diameters at further heights, the base of the tree crown, the volume of the tree trunk (sectional measurement method), and the tree crown. Research was performed in the Niepolomice Forest in an unmixed pine stand (Pinus sylvestris L.) on a circular plot with a radius of 18 m, within which there were 16 pine trees (14 of them later cut down). The stand had a two-storey, even-aged structure (147 years old) and was devoid of undergrowth. Ground scanning was performed just before harvesting. The DBH of the 16 pine trees was determined fully automatically by the GNOM algorithm with an accuracy of +2.1% compared to the reference measurement with a DBH measurement device.
The mean absolute measurement errors in the point cloud obtained with the semi-automatic methods "PIXEL" (between points) and "PIPE" (cylinder fitting) in FARO Scene 5.x were 3.5% and 5.0%, respectively. The reference height was the tape measurement performed on the felled tree. The average error of automatic tree height determination by the GNOM algorithm based on the TLS point clouds amounted to 6.3% and was slightly higher than with the manual method of measurement on profiles in TerraScan (Terrasolid; error of 5.6%). The relatively high error may be related mainly to the small number of TLS points in the upper parts of the crowns. The crown height measurement showed an error of +9.5%; the reference in this case was the tape measurement performed on the trunks of the felled pines. Processing the point clouds with the GNOM algorithms for the 16 analyzed trees took no longer than 10 min (37 sec per tree). The paper demonstrates the innovation of TLS measurement and its high precision in acquiring biometric data in forestry, and at the same time the need to further automate the processing of 3D point clouds from terrestrial laser scanning.

  15. Venus in motion: An animated video catalog of Pioneer Venus Orbiter Cloud Photopolarimeter images

    NASA Technical Reports Server (NTRS)

    Limaye, Sanjay S.

    1992-01-01

    Images of Venus acquired by the Pioneer Venus Orbiter Cloud Photopolarimeter (OCPP) during the 1982 opportunity have been utilized to create a short video summary of the data. The raw roll by roll images were first navigated using the spacecraft attitude and orbit information along with the CPP instrument pointing information. The limb darkening introduced by the variation of solar illumination geometry and the viewing angle was then modelled and removed. The images were then projected to simulate a view obtained from a fixed perspective with the observer at 10 Venus radii away and located above a Venus latitude of 30 degrees south and a longitude 60 degrees west. A total of 156 images from the 1982 opportunity have been animated at different dwell rates.

  16. A cost-effective laser scanning method for mapping stream channel geometry and roughness

    NASA Astrophysics Data System (ADS)

    Lam, Norris; Nathanson, Marcus; Lundgren, Niclas; Rehnström, Robin; Lyon, Steve

    2015-04-01

    In this pilot project, we combine an Arduino Uno and a SICK LMS111 outdoor laser ranging camera to acquire high-resolution topographic area scans of a stream channel. The microprocessor and imaging system were installed in a custom gondola and suspended from a wire cable system. To demonstrate the system's capabilities for capturing stream channel topography, a small stream (< 2 m wide) in the Krycklan Catchment Study was temporarily diverted and scanned. Area scans along the stream channel resulted in a point spacing of 4 mm and a point cloud density of 5600 points/m² for the 5 m by 2 m area. A grain size distribution of the streambed material was extracted from the point cloud using a moving-window local-maxima search algorithm. The median, 84th and 90th percentiles (common metrics to describe channel roughness) of this distribution were found to be within the range of measured values, while the largest modelled element was approximately 35% smaller than its measured counterpart. The laser scanning system captured grain sizes between 30 mm and 255 mm (coarse gravel/pebbles and boulders on the Wentworth (1922) scale). This demonstrates that our system was capable of resolving both large-scale geometry (e.g. bed slope and stream channel width) and small-scale channel roughness elements (e.g. coarse gravel/pebbles and boulders) for the study area. We further show that the point cloud resolution is suitable for estimating ecohydraulic parameters such as Manning's n and hydraulic radius. Although more work is needed to fine-tune our system's design, these preliminary results are encouraging, specifically for those with a limited operational budget.
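The moving-window local-maxima search used to pick out individual grains can be sketched in one dimension; the function name and window half-width are illustrative assumptions, not the authors' implementation.

```python
def local_maxima(heights, w=2):
    """Indices whose value is the maximum within a +/- w moving window.

    Applied to a streambed elevation profile, the surviving indices mark
    protruding grains whose extent can then be measured.
    """
    return [i for i in range(len(heights))
            if heights[i] == max(heights[max(0, i - w):i + w + 1])]
```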

  17. Individual Rocks Segmentation in Terrestrial Laser Scanning Point Cloud Using Iterative Dbscan Algorithm

    NASA Astrophysics Data System (ADS)

    Walicka, A.; Jóźków, G.; Borkowski, A.

    2018-05-01

    Fluvial transport is an important aspect of hydrological and geomorphological studies. Knowledge of the movement parameters of different-size fractions is essential in many applications, such as the exploration of watercourse changes, the calculation of river bed parameters or the investigation of the frequency and nature of weather events. Traditional techniques used for fluvial transport investigations do not provide information about the long-term horizontal movement of rocks. This information can be gained by means of terrestrial laser scanning (TLS); however, this is a complex task consisting of several stages of data processing. In this study, a methodology for segmenting individual rocks from a TLS point cloud is proposed as the first step of a semi-automatic algorithm for detecting the movement of individual rocks. The proposed algorithm is executed in two steps. First, points are classified as rock or background using only geometrical information. Second, the DBSCAN algorithm is executed iteratively on the points classified as rock until only one stone is detected in each segment. The number of rocks in each segment is determined using principal component analysis (PCA) and a simple derivative method for peak detection. As a result, segments corresponding to individual rocks are formed. Numerical tests were executed on two test samples, and the results of the semi-automatic segmentation were compared to results acquired by manual segmentation. The proposed methodology successfully segmented 76 % and 72 % of the rocks in test sample 1 and test sample 2, respectively.
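The clustering stage can be reproduced with a plain DBSCAN; the minimal implementation below (brute-force neighbour search, illustrative parameters) shows the labelling step that the paper then applies iteratively with PCA-based peak counting.

```python
import math
from collections import deque

def dbscan(points, eps, min_pts):
    """Plain DBSCAN over 2-D/3-D tuples; returns one label per point (-1 = noise)."""
    labels = [None] * len(points)

    def neighbours(i):
        return [j for j, q in enumerate(points) if math.dist(points[i], q) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1            # noise (may later join a cluster as a border point)
            continue
        cluster += 1
        labels[i] = cluster
        queue = deque(seeds)
        while queue:
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cluster   # border point: claimed, but not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbours(j)
            if len(nj) >= min_pts:    # core point: keep expanding the cluster
                queue.extend(nj)
    return labels
```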

  18. Atmospheric movies acquired at the Mars Science Laboratory landing site: Cloud morphology, frequency and significance to the Gale Crater water cycle and Phoenix mission results

    NASA Astrophysics Data System (ADS)

    Moores, John E.; Lemmon, Mark T.; Rafkin, Scot C. R.; Francis, Raymond; Pla-Garcia, Jorge; de la Torre Juárez, Manuel; Bean, Keri; Kass, David; Haberle, Robert; Newman, Claire; Mischna, Michael; Vasavada, Ashwin; Rennó, Nilton; Bell, Jim; Calef, Fred; Cantor, Bruce; Mcconnochie, Timothy H.; Harri, Ari-Matti; Genzer, Maria; Wong, Michael; Smith, Michael D.; Javier Martín-Torres, F.; Zorzano, María-Paz; Kemppinen, Osku; McCullough, Emily

    2015-05-01

    We report on the first 360 sols (Ls 150° to 5°), representing just over half a Martian year, of atmospheric monitoring movies acquired using the NavCam imager on the Mars Science Laboratory (MSL) rover Curiosity. Such movies reveal faint clouds that are difficult to discern in single images. The data set acquired was divided into two classifications depending upon the orientation and intent of the observation. Up to sol 360, 73 Zenith movies and 79 Supra-Horizon movies were acquired, and time-variable features could be discerned in 25 of each. The MSL data set is compared to similar observations made by the Surface Stereo Imager (SSI) onboard the Phoenix lander and suggests a much drier environment at Gale Crater (4.6°S) during this season than was observed in Green Valley (68.2°N), as would be expected based on latitude and the global water cycle. The optical depth of the variable component of clouds seen in images with features is up to 0.047 ± 0.009, with a granularity of the observed features averaging 3.8°. During the same period, MCS also observes clouds of comparable optical depth at 30 and 50 km, which would suggest a cloud spacing of 2.0 to 3.3 km. Multiple motions visible in atmospheric movies support the presence of two distinct layers of clouds. At Gale Crater, these clouds are likely caused by atmospheric waves, given the regular spacing of features observed in many Zenith movies and the decreased spacing towards the horizon in sunset movies, consistent with clouds forming at a constant elevation. Reanalysis of Phoenix data in the light of the NavCam equatorial dataset suggests that clouds may have been more frequent in the earlier portion of the Phoenix mission than previously thought.

  19. Registration algorithm of point clouds based on multiscale normal features

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua

    2015-01-01

    Point cloud registration for obtaining three-dimensional digital models is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed method comprises three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on changes in the magnitude of multiscale curvatures obtained using principal component analysis. Then a 21-element feature descriptor, based on multiscale normal vectors and curvatures, is constructed for each key point. The correspondences between a pair of point clouds are determined according to the similarity of key-point descriptors in the source and target point clouds, and are optimized using a random sample consensus algorithm and clustering. Finally, singular value decomposition is applied to the optimized correspondences to obtain the rigid transformation matrix between the two point clouds. Experimental results show that the proposed registration algorithm has a faster calculation speed, higher registration accuracy, and better anti-noise performance.
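Determining correspondences from descriptor similarity, before any RANSAC-style pruning, amounts to a nearest-neighbour search in feature space. A greedy sketch with hypothetical names (descriptors as fixed-length tuples, 21 elements in the paper's case):

```python
import math

def match_descriptors(desc_src, desc_dst):
    """Match each source key-point descriptor to its nearest target descriptor.

    Returns (source_index, target_index) pairs; in a full pipeline these
    candidate matches would then be pruned by RANSAC and clustering.
    """
    matches = []
    for i, d in enumerate(desc_src):
        j = min(range(len(desc_dst)), key=lambda k: math.dist(d, desc_dst[k]))
        matches.append((i, j))
    return matches
```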

  20. Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation

    NASA Astrophysics Data System (ADS)

    Zuo, C.; Xiao, X.; Hou, Q.; Li, B.

    2018-05-01

    WorldView-3, a high-resolution commercial Earth observation satellite launched by DigitalGlobe, provides panchromatic imagery at 0.31 m resolution. Its positioning accuracy is better than 3.5 m CE90 without ground control, which is usable for large-scale topographic mapping. This paper presents block adjustment for WorldView-3 based on the RPC model and achieves the accuracy required for 1 : 2000 scale topographic mapping with few control points. Based on the stereo orientation result, two image matching algorithms are applied for DSM extraction: LQM and SGM. Finally, the accuracy of the point clouds generated by the two matching methods is compared with reference data acquired by an airborne laser scanner. The results show that the RPC adjustment model of WorldView-3 imagery with a small number of GCPs can satisfy the requirements of the Chinese surveying and mapping regulations for 1 : 2000 scale topographic maps. The point cloud obtained through WorldView-3 stereo image matching also has high elevation accuracy: the RMS elevation error for bare ground is 0.45 m, while for buildings the accuracy is close to 1 m.

  1. 3D Reconstruction of Static Human Body with a Digital Camera

    NASA Astrophysics Data System (ADS)

    Remondino, Fabio

    2003-01-01

    Nowadays, the 3D reconstruction and modeling of real humans is one of the most challenging problems and a topic of great interest. Human models are used for movies, video games or ergonomics applications, and they are usually created with 3D scanner devices. In this paper a new method to reconstruct the shape of a static human is presented. Our approach is based on photogrammetric techniques and uses a sequence of images acquired around a standing person with a digital still video camera or a camcorder. First the images are calibrated and oriented using a bundle adjustment. After the establishment of a stable adjusted image block, an image matching process is performed between consecutive triplets of images. Finally the 3D coordinates of the matched points are computed with a mean accuracy of ca. 2 mm by forward ray intersection. The obtained point cloud can then be triangulated to generate a surface model of the body, or a virtual human model can be fitted to the recovered 3D data. Results of the 3D human point cloud with pixel color information are presented.

  2. The Hyper-Angular Rainbow Polarimeter (HARP) CubeSat Observatory and the Characterization of Cloud Properties

    NASA Astrophysics Data System (ADS)

    Neilsen, T. L.; Martins, J. V.; Fernandez Borda, R. A.; Weston, C.; Frazier, C.; Cieslak, D.; Townsend, K.

    2015-12-01

    The Hyper-Angular Rainbow Polarimeter (HARP) instrument is a wide field-of-view imager that splits three spatially identical images onto three independent polarizers and detector arrays. This technique achieves simultaneous imagery of the same ground target in three polarization states and is the key innovation for achieving high polarimetric accuracy with no moving parts. The spacecraft is a 3U CubeSat with 3-axis stabilization designed to keep the imaging optics pointing nadir during data collection while maximizing solar panel sun pointing otherwise. The hyper-angular capability is achieved by acquiring overlapping images at very fast speeds. An imaging polarimeter with hyper-angular capability can make a strong contribution to characterizing cloud properties. Non-polarized multi-angle measurements have been shown to be sensitive to thin cirrus and can be used to provide a climatology of these clouds. Adding polarization and increasing the number of observation angles allows the retrieval of the complete size distribution of cloud droplets, including accurate information on the width of the droplet distribution in addition to the currently retrieved effective radius. The HARP mission is funded by the NASA Earth Science Technology Office as part of the In-Space Validation of Earth Science Technologies (InVEST) program. The HARP instrument is designed and built by a team of students and professionals led by Dr. Vanderlei Martins at the University of Maryland, Baltimore County. The HARP spacecraft is designed and built by a team of students and professionals at the Space Dynamics Laboratory.

  3. Accuracy assessment of building point clouds automatically generated from iPhone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of such models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register the automatically generated point cloud to a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of iPhone-generated point clouds. For the chosen showcase, we classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate the possible use of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancement, and quick, real-time change detection. However, further insight is needed first into the circumstances required to guarantee successful point cloud generation from smartphone images.
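The two quality measures reported here, cloud-to-cloud distance and local roughness, can be computed along the following lines. A minimal numpy sketch assuming small clouds so a brute-force neighbour search suffices; the authors' actual tooling is not specified in the abstract:

```python
import numpy as np

def cloud_to_cloud_distances(src, ref):
    """Nearest-neighbour distance from each src point to the ref cloud
    (brute force; a KD-tree would be used for real-sized clouds)."""
    d = np.linalg.norm(src[:, None, :] - ref[None, :, :], axis=2)
    return d.min(axis=1)

def local_roughness(cloud, k=8):
    """Per-point roughness: distance to the best-fit plane of the k
    nearest neighbours (plane through the neighbourhood centroid,
    normal = direction of least variance, via SVD)."""
    d = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=2)
    rough = np.empty(len(cloud))
    for i, p in enumerate(cloud):
        nbrs = cloud[np.argsort(d[i])[1:k + 1]]   # skip the point itself
        c = nbrs.mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs - c)
        rough[i] = abs((p - c) @ vt[-1])          # offset along the normal
    return rough
```

The mean of `cloud_to_cloud_distances` gives the 0.11 m style figure, and histograms of `local_roughness` give the (μ, σ) statistics quoted above.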

  4. Integrated system for point cloud reconstruction and simulated brain shift validation using tracked surgical microscope

    NASA Astrophysics Data System (ADS)

    Yang, Xiaochen; Clements, Logan W.; Luo, Ma; Narasimhan, Saramati; Thompson, Reid C.; Dawant, Benoit M.; Miga, Michael I.

    2017-03-01

    Intra-operative soft tissue deformation, referred to as brain shift, compromises the application of current image-guided surgery (IGS) navigation systems in neurosurgery. A computational model driven by sparse data has been used as a cost-effective method to compensate for cortical surface and volumetric displacements. Stereoscopic microscopes and laser range scanners (LRS) are the two most investigated sparse intra-operative imaging modalities for driving these systems. However, integrating these devices into the clinical workflow to facilitate development and evaluation requires systems that easily permit data acquisition and processing. In this work we present a mock environment developed to acquire stereo images from a tracked operating microscope and to reconstruct 3D point clouds from these images. A reconstruction error of 1 mm is estimated by using a phantom with a known geometry and independently measured deformation extent. The microscope is tracked via an attached rigid body that allows a commercial optical tracking system to record the position of the microscope as it moves during the procedure. Point clouds reconstructed under different microscope positions are registered into the same space in order to compute the feature displacements. Using our mock craniotomy device, realistic cortical deformations are generated. Our experimental results report approximately 2 mm average displacement error compared with the optical tracking system. These results demonstrate the practicality of using a tracked stereoscopic microscope as an alternative to LRS for collecting sufficient intraoperative information for brain shift correction.

  5. Accuracy analysis of point cloud modeling for evaluating concrete specimens

    NASA Astrophysics Data System (ADS)

    D'Amico, Nicolas; Yu, Tzuyang

    2017-04-01

    Photogrammetric methods such as structure from motion (SFM) can acquire accurate information about the geometric features, surface cracks, and mechanical properties of specimens and structures in civil engineering. Conventional approaches to verifying the accuracy of photogrammetric models usually require other optical techniques such as LiDAR. In this paper, the geometric accuracy of photogrammetric modeling is investigated by studying the effects of the number of photos, the radius of curvature, and the point cloud density (PCD) on estimated lengths, areas, volumes, and different stress states of concrete cylinders and panels. Four plain concrete cylinders and two plain mortar panels were used in the study. A commercially available mobile phone camera was used to collect all photographs, and Agisoft PhotoScan software was applied for photogrammetric modeling of all concrete specimens. From our results, it was found that increasing the number of photos does not necessarily improve the geometric accuracy of point cloud models (PCM). It was also found that the effect of radius of curvature is not significant compared with those of the number of photos and PCD. A PCD threshold of 15.7194 pts/cm3 is proposed for constructing reliable and accurate PCM for condition assessment; at this threshold, all errors in estimating lengths, areas, and volumes were less than 5%. Finally, from the study of the mechanical response of a plain concrete cylinder, we found that an increase of the stress level inside the cylinder can be captured by the increase of radial strain in its PCM.

  6. Multi-temporal monitoring of a regional riparian buffer network (>12,000 km) with LiDAR and photogrammetric point clouds.

    PubMed

    Michez, Adrien; Piégay, Hervé; Lejeune, Philippe; Claessens, Hugues

    2017-11-01

    Riparian buffers are of major concern for land and water resource managers despite their relatively low spatial coverage. In Europe, this concern has been acknowledged by different environmental directives which recommend multi-scale monitoring (from local to regional scales). Remote sensing methods could be a cost-effective alternative to field-based monitoring for building replicable "wall-to-wall" monitoring strategies of large river networks and associated riparian buffers. The main goal of our study is to extract and analyze various parameters of the riparian buffers of up to 12,000 km of river in southern Belgium (Wallonia) from three-dimensional (3D) point clouds based on LiDAR and photogrammetric surveys, in order to i) map riparian buffer parameters at different scales, ii) interpret the regional patterns of the riparian buffers and iii) propose new riparian buffer management indicators. We propose different strategies to synthesize and visualize relevant information at spatial scales ranging from local (<10 km) to regional (>12,000 km). Our results showed that the selected parameters had a clear regional pattern. The reaches of the Ardenne ecoregion have channels with the highest flow widths and shallowest depths. In contrast, the reaches of the Loam ecoregion have the narrowest and deepest flow channels. Regional variability in channel width and depth is used to locate management units potentially affected by human impact. The riparian forest of the Loam ecoregion is characterized by the lowest longitudinal continuity and mean tree height, underlining significant human disturbance. As the availability of 3D point clouds at the regional scale is constantly growing, our study proposes reproducible methods which can be integrated into regional monitoring by land managers.
With LiDAR still being relatively expensive to acquire, the use of photogrammetric point clouds combined with LiDAR data is a cost-effective means to update the characterization of the riparian forest conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. High-Resolution Surface Reconstruction from Imagery for Close Range Cultural Heritage Applications

    NASA Astrophysics Data System (ADS)

    Wenzel, K.; Abdel-Wahab, M.; Cefalu, A.; Fritsch, D.

    2012-07-01

    The recording of high-resolution point clouds with sub-mm resolution is a demanding and cost-intensive task, especially with current equipment such as handheld laser scanners. We present an image-based approach in which techniques of image matching and dense surface reconstruction are combined with a compact and affordable rig of off-the-shelf industrial cameras. Such cameras provide high spatial resolution with low radiometric noise, which enables a one-shot solution and thus efficient data acquisition while satisfying high accuracy requirements. However, the largest drawback of image-based solutions is often the acquisition of surfaces with low texture, where the image matching process might fail. Thus, an additional structured light projector is employed, represented here by the pseudo-random pattern projector of the Microsoft Kinect. Its infrared laser projects speckles of different sizes. By using dense image matching techniques on the acquired images, a 3D point can be derived for almost every pixel. The use of multiple cameras enables the acquisition of a high-resolution, high-accuracy point cloud with each shot: for the proposed system, up to 3.5 million 3D points with sub-mm accuracy can be derived per shot. The registration of multiple shots is performed by structure-from-motion reconstruction techniques, where feature points are used to derive the camera positions and rotations automatically without initial information.

  8. Use of Fisheye Parrot Bebop 2 Images for 3d Modelling Using Commercial Photogrammetric Software

    NASA Astrophysics Data System (ADS)

    Pagliari, D.; Pinto, L.

    2018-05-01

    Fisheye cameras installed on board mass-market UASs are becoming very popular, and such platforms are increasingly used for photogrammetric purposes. The interest in wide-angle images for 3D modelling is confirmed by the introduction of fisheye models in several commercial software packages. The paper investigates the different mathematical models implemented in the best-known commercial photogrammetric software packages, highlighting the different processing pipelines and analysing the achievable results in terms of checkpoint residuals, as well as the quality of the delivered 3D point clouds. A two-step approach based on the creation of undistorted images has been tested too. An experimental test was carried out with a Parrot Bebop 2 UAS by performing a flight over a historical complex located near Piacenza (Northern Italy), which is characterized by the simultaneous presence of horizontal, vertical and oblique surfaces. Different flight configurations were tested to evaluate the potential and possible drawbacks of the platform. Results confirm that the fisheye images acquired with the Parrot Bebop 2 are suitable for 3D modelling, ensuring photogrammetric block accuracies of the order of the GSD (about 0.05 m normal to the optical axis for a flight height of 35 m). The generated point clouds were compared to a reference scan, acquired by means of a MS60 MultiStation, resulting in differences below 0.05 m in all directions.

  9. Photogrammetric analysis of concrete specimens and structures for condition assessment

    NASA Astrophysics Data System (ADS)

    D'Amico, Nicolas; Yu, Tzuyang

    2016-04-01

    The deterioration of civil infrastructure in America demands routine inspection and maintenance to prevent catastrophic failures. Among many non-destructive evaluation (NDE) techniques, photogrammetry is an accessible and practical approach for evaluating civil infrastructure systems. The objective of this paper is to explore the capabilities of photogrammetry for locating, sizing, and analyzing the remaining capacity of a specimen or system using point cloud data. Geometric models, composed from up to 70 photographs, are analyzed as mesh or point cloud models. In this case study, concrete, which exhibits a large amount of surface texture, was thoroughly examined. The evaluative techniques discussed were applied to concrete cylinder models as well as portions of civil infrastructure including buildings, retaining walls, and bridge abutments. The aim of this paper is to demonstrate the basic analytical functionality of photogrammetry, as well as its applicability to in-situ civil infrastructure systems. In concrete specimens, defect length and location can be evaluated in a fully defined model (one with the maximum number of correctly acquired photographs) with less than 2% error. Error was found to be inversely proportional to the number of acceptable photographs acquired, remaining well under 10% for any model with enough data to render. Furthermore, volumetric stress evaluations were applied using a cross-sectional evaluation technique to locate the critical area and determine the severity of damage. Finally, the findings and the accuracy of the results are discussed.

  10. Performance Evaluation of sUAS Equipped with Velodyne HDL-32E LiDAR Sensor

    NASA Astrophysics Data System (ADS)

    Jozkow, G.; Wieczorek, P.; Karpina, M.; Walicka, A.; Borkowski, A.

    2017-08-01

    The Velodyne HDL-32E laser scanner is used ever more frequently as the main mapping sensor in small commercial UASs. However, there is still little information about the actual accuracy of point clouds collected with such UASs. This work empirically evaluates the accuracy of the point cloud collected with such a UAS. The accuracy assessment addressed four aspects: the impact of the sensors on theoretical point cloud accuracy, the quality of trajectory reconstruction, and the internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by calculating the 3D position error given the known errors of the sensors used. The quality of trajectory reconstruction was assessed by comparing position and attitude differences between the forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted for planar surfaces. In addition, the absolute accuracy was also determined by calculating 3D point distances between the LiDAR UAS and reference TLS point clouds. The test data consisted of point clouds collected in two separate flights over the same area. The experiments showed that in the tested UAS, trajectory reconstruction, especially attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of the point clouds collected during both test flights was better than 10 cm; thus the investigated UAS fits the mapping-grade category.
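The plane-fitting check for internal accuracy can be sketched with PCA: the plane normal is the direction of least variance of the sampled patch, and the spread of the point-to-plane residuals estimates the ranging noise. A minimal illustration with a simulated planar patch and 2 cm vertical noise; the flight data and fitting software used in the paper are not specified in the abstract:

```python
import numpy as np

def plane_fit_residuals(points):
    """Fit a plane to a point cloud patch by PCA and return the signed
    point-to-plane residuals; their spread gauges internal accuracy."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    return (points - centroid) @ normal

# Simulated planar patch with ~2 cm vertical ranging noise:
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))
z = 0.3 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0, 0.02, size=500)
patch = np.column_stack([xy, z])
print(plane_fit_residuals(patch).std())   # close to the simulated noise level
```

The standard deviation of these residuals over many patches is one way to report the internal-accuracy figures quoted in the paper.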

  11. A shape-based segmentation method for mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen

    2013-07-01

    Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for segmentation. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive its associated geometric features, and then classifies the point cloud according to these features using support vector machines (SVMs). Second, a set of rules is defined to segment the classified point cloud, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy at reasonable computational cost, and that it segments pole-like objects particularly well.
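Geometric features derived from a point's optimal neighborhood are commonly eigenvalue-based shape measures of the local covariance. A minimal sketch of such features, illustrative only, since the abstract does not list the paper's exact feature set:

```python
import numpy as np

def shape_features(neighbors):
    """Eigenvalue-based shape features of a local neighbourhood
    (linearity, planarity, scattering), the kind of geometric features
    typically fed to a per-point classifier such as an SVM."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

# A noisy 3D line scores high linearity and near-zero scattering:
t = np.linspace(0, 1, 50)
line = np.column_stack([t, 2 * t, -t])
line += np.random.default_rng(1).normal(0, 1e-3, line.shape)
linearity, planarity, scattering = shape_features(line)
```

High linearity flags pole-like structures, high planarity flags facades and ground, and high scattering flags vegetation, which is why such features separate urban objects well.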

  12. Multi-layer Clouds Over the South Indian Ocean

    NASA Image and Video Library

    2003-05-07

    The complex structure and beauty of polar clouds are highlighted by these images acquired by NASA's Terra spacecraft on April 23, 2003. These clouds occur at multiple altitudes and exhibit a noticeable cyclonic circulation over the Southern Indian Ocean.

  13. Seasonal Change in Titan's Cloud Activity

    NASA Astrophysics Data System (ADS)

    Schaller, E. L.; Brown, M. E.; Roe, H. G.

    2006-12-01

    We have acquired whole-disk spectra of Titan on nineteen nights with IRTF/SpeX over a three-month period in the spring of 2006 and will acquire data on ~50 additional nights between September and December 2006. The data encompass the spectral range of 0.8 to 2.4 microns at a resolution of 375. These disk-integrated spectra allow us to determine Titan's total fractional cloud coverage and the altitudes of the clouds present. We find that Titan had less than 0.15% fractional cloud coverage on all but one of the nineteen nights. The near lack of cloud activity in these spectra is in sharp contrast to nearly every spectrum taken from 1995-1999 with UKIRT by Griffith et al. (1998 & 2000), who found rapidly varying clouds covering ~0.5% of Titan's disk. The differences between these two similar datasets indicate a striking seasonal change in the behavior of Titan's clouds. Observations of the latitudes, magnitudes, altitudes, and frequencies of Titan's clouds as Titan moves toward southern autumnal equinox in 2009 will help elucidate when and how Titan's methane hydrological cycle changes with season.

  14. LSAH: a fast and efficient local surface feature for point cloud registration

    NASA Astrophysics Data System (ADS)

    Lu, Rongrong; Zhu, Feng; Wu, Qingxiao; Kong, Yanzi

    2018-04-01

    Point cloud registration is a fundamental task in high-level three-dimensional applications. Noise, uneven point density and varying point cloud resolution are the three main challenges for point cloud registration. In this paper, we design a robust and compact local surface descriptor called the Local Surface Angles Histogram (LSAH) and propose an effective coarse-to-fine algorithm for point cloud registration. The LSAH descriptor is formed by concatenating five normalized sub-histograms into one histogram, each sub-histogram created by accumulating a different type of angle over a local surface patch. The experimental results show that LSAH is more robust to uneven point density and varying point cloud resolution than four state-of-the-art local descriptors in terms of feature matching. Moreover, we tested our LSAH-based coarse-to-fine algorithm for point cloud registration, and the experimental results demonstrate that it is both robust and efficient.
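One sub-histogram of such a descriptor might be built as below. This is illustrative only: the abstract does not define the five angle types, so the choice here (angles between the keypoint normal and the unit directions to its neighbours) is an assumption, as is the normalization that makes the histogram density-independent:

```python
import numpy as np

def angle_histogram(keypoint, normal, neighbors, bins=8):
    """One illustrative sub-histogram: the angles between the keypoint
    normal and the unit directions to neighbouring points, binned over
    [0, pi] and normalized to be independent of point density."""
    v = neighbors - keypoint
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    angles = np.arccos(np.clip(v @ normal, -1.0, 1.0))
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi))
    return hist / hist.sum()

# An LSAH-style descriptor would concatenate several such sub-histograms:
rng = np.random.default_rng(2)
h = angle_histogram(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                    rng.normal(size=(100, 3)))
```

Because each sub-histogram is normalized before concatenation, the final descriptor compares clouds of different densities and resolutions on an equal footing.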

  15. Multi-layer Clouds Over the South Indian Ocean

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The complex structure and beauty of polar clouds are highlighted by these images acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 23, 2003. These clouds occur at multiple altitudes and exhibit a noticeable cyclonic circulation over the Southern Indian Ocean, to the north of Enderbyland, East Antarctica.

    The image at left was created by overlaying a natural-color view from MISR's downward-pointing (nadir) camera with a color-coded stereo height field. MISR retrieves heights by a pattern recognition algorithm that utilizes multiple view angles to derive cloud height and motion. The opacity of the height field was then reduced until the field appears as a translucent wash over the natural-color image. The resulting purple, cyan and green hues of this aesthetic display indicate low, medium and high altitudes, respectively, with heights ranging from less than 2 kilometers (purple) to about 8 kilometers (green). In the lower right corner, the edge of the Antarctic coastline and some sea ice can be seen through thin, high cirrus clouds.

    The right-hand panel is a natural-color image from MISR's 70-degree backward viewing camera. This camera looks backwards along the path of Terra's flight, and in the southern hemisphere the Sun is in front of this camera. This perspective causes the cloud-tops to be brightly outlined by the sun behind them, and enhances the shadows cast by clouds with significant vertical structure. An oblique observation angle also enhances the reflection of light by atmospheric particles, and accentuates the appearance of polar clouds. The dark ocean and sea ice that were apparent through the cirrus clouds at the bottom right corner of the nadir image are overwhelmed by the brightness of these clouds at the oblique view.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17794. The panels cover an area of 335 kilometers x 605 kilometers, and utilize data from blocks 142 to 145 within World Reference System-2 path 155.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  16. Terrestrial scanning or digital images in inventory of monumental objects? - case study

    NASA Astrophysics Data System (ADS)

    Markiewicz, J. S.; Zawieska, D.

    2014-06-01

    Cultural heritage is the evidence of the past, and monumental objects form an important part of it. The selection of a measurement method depends on many factors, including the objectives of the inventory, the object's volume, the sumptuousness of its architectural design, accessibility to the object, and the required schedule and accuracy of the work. The paper presents research and experimental work performed in the course of developing architectural documentation of elements of the external facades and interiors of the Wilanów Palace Museum in Warsaw. Point clouds acquired from terrestrial laser scanning (Z+F 5003h) and digital images taken with Nikon D3X and Hasselblad H4D cameras were used. The advantages and disadvantages of these measurement technologies have been analysed, considering the influence of the structure and reflectance of the investigated monumental surfaces on the quality of the generated photogrammetric products. The geometric quality of surfaces obtained from terrestrial laser scanning data has been compared with that of point clouds derived from the digital images.

  17. On-field mounting position estimation of a lidar sensor

    NASA Astrophysics Data System (ADS)

    Khan, Owes; Bergelt, René; Hardt, Wolfram

    2017-10-01

    In order to retrieve a highly accurate view of their environment, autonomous cars are often equipped with LiDAR sensors. These sensors deliver a three-dimensional point cloud in their own co-ordinate frame, whose origin is the sensor itself. However, the common co-ordinate system required by HAD (Highly Autonomous Driving) software systems has its origin at the center of the vehicle's rear axle. Thus, a transformation of the acquired point clouds to car co-ordinates is necessary, which in turn requires determining the exact mounting position of the LiDAR system in car co-ordinates. Unfortunately, directly measuring this position is a time-consuming and error-prone task. Therefore, different approaches have been suggested for its estimation, most of which require an exhaustive test setup and are again time-consuming to prepare. When preparing a high number of LiDAR-mounted test vehicles for data acquisition, most approaches fall short due to time or money constraints. In this paper we propose an approach for mounting position estimation that features an easy execution and setup, making it feasible for on-field calibration.

  18. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on images from low-altitude unmanned aerial vehicle systems (UAVs) is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms in order, overcoming many of the cost and time limitations of rigorous photogrammetric techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV: the image topology map significantly reduces the running time of feature matching by limiting the combinations of images to be matched. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangulated irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
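The image-topology idea can be sketched simply: instead of matching features across all O(n²) image pairs, candidate pairs are restricted to images whose flight-control positions lie close together. A hypothetical sketch using a plain distance cutoff; the paper's actual topology-map construction is not detailed in the abstract:

```python
import numpy as np
from itertools import combinations

def candidate_pairs(positions, max_dist):
    """Restrict feature matching to image pairs whose camera positions
    (e.g. from UAV flight-control logs) lie within max_dist of each
    other, instead of trying all O(n^2) combinations."""
    positions = np.asarray(positions, float)
    return [(i, j) for i, j in combinations(range(len(positions)), 2)
            if np.linalg.norm(positions[i] - positions[j]) <= max_dist]

# Three images along a flight line, 10 m apart: only adjacent pairs remain.
print(candidate_pairs([[0, 0], [10, 0], [20, 0]], max_dist=15))  # → [(0, 1), (1, 2)]
```

For a long strip this prunes the pair list from quadratic to roughly linear in the number of images, which is where the reported speed-up in feature matching comes from.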

  19. Pedestrian Detection by Laser Scanning and Depth Imagery

    NASA Astrophysics Data System (ADS)

    Barsi, A.; Lovas, T.; Molnar, B.; Somogyi, A.; Igazvolgyi, Z.

    2016-06-01

    Pedestrian flow is much less regulated and controlled than vehicle traffic. Estimating flow parameters would support many safety, security and commercial applications. This paper discusses a method for acquiring information on pedestrian movements without disturbing or changing their motion. A profile laser scanner and a depth camera were applied to capture the geometry of moving people as time series. Procedures have been developed to derive complex flow parameters, such as count, volume, walking direction and velocity, from laser-scanned point clouds. Since no images are captured of the pedestrians' faces, no privacy issues are raised. The paper includes an accuracy analysis of the estimated parameters using video footage as reference. Owing to the dense point clouds, detailed geometric analysis has been conducted to obtain the height and shoulder width of pedestrians and to detect whether luggage is being carried. The derived parameters support safety (e.g. detecting critical pedestrian density at mass events), security (e.g. detecting prohibited baggage in endangered areas) and commercial applications (e.g. counting pedestrians at all entrances/exits of a shopping mall).

  20. 3D Reconstruction of Irregular Buildings and Buddha Statues

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Li, M.-j.

    2014-04-01

    Three-dimensional laser scanning can acquire an object's surface data quickly and accurately; however, the post-processing of point clouds is not perfect and can be improved. Based on a study of 3D laser scanning technology, this paper describes solutions for modelling the irregular ancient buildings and Buddha statues in Jinshan Temple, covering data acquisition, modelling and texture mapping. In order to model the irregular ancient buildings effectively, the structure of each building is extracted manually from the point cloud and the textures are mapped in 3ds Max. This combines 3D laser scanning technology with traditional modelling methods and greatly improves the efficiency and accuracy of restoring the ancient buildings. The statues, on the other hand, are modelled as objects in reverse engineering. The resulting digital models of the statues are not only vivid but also accurate in the surveying and mapping sense. On this basis, a 3D scene of Jinshan Temple is reconstructed, which proves the validity of the solutions.

  1. Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection

    NASA Astrophysics Data System (ADS)

    Kang, Z.; Lindenbergh, R.; Pu, S.

    2016-06-01

    This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method -- threshold-independent BaySAC (BAYes SAmpling Consensus) -- and employs the error metric of average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually require an artificially determined threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we apply the LMedS method to construct the cost function used to determine the optimum model, reducing the influence of human factors and improving the robustness of the model estimate. Point-to-point and point-to-surface error metrics are the most commonly used. However, the point-to-point error in general consists of at least two components: random measurement error, and systematic error resulting from a remaining error in the estimated rigid body transformation. Thus we employ the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of computational efficiency and quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to RANSAC, our BaySAC strategies need fewer iterations and incur lower computational cost when the hypothesis set is contaminated with more outliers.
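The threshold-free LMedS cost used to rank hypotheses can be sketched as follows: each candidate rigid transform is scored by the median of its squared residuals, so no inlier threshold ever needs to be chosen. A minimal illustration with synthetic correspondences; the BaySAC hypothesis generation itself is omitted:

```python
import numpy as np

def lmeds_cost(transform, src, dst):
    """Threshold-free cost of a candidate rigid transform (R, t): the
    median of squared residuals (LMedS), so no inlier threshold is needed."""
    R, t = transform
    residuals = np.linalg.norm(src @ R.T + t - dst, axis=1)
    return np.median(residuals ** 2)

def best_hypothesis(hypotheses, src, dst):
    """Pick the candidate transform with the lowest LMedS cost."""
    return min(hypotheses, key=lambda h: lmeds_cost(h, src, dst))

# Synthetic check: a 30-degree rotation plus translation, with gross
# outliers that the median-based cost simply ignores.
th = np.deg2rad(30.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
src = np.random.default_rng(0).uniform(-1, 1, (30, 3))
dst = src @ R_true.T + t_true
dst[:5] += 5.0                                  # 5 gross outliers
hypotheses = [(np.eye(3), np.zeros(3)), (R_true, t_true)]
R_best, t_best = best_hypothesis(hypotheses, src, dst)
```

Because the median tolerates up to half the correspondences being wrong, the correct transform wins here even though a sixth of the matches are grossly off, which is the behaviour the paper exploits to drop the manual threshold.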

  2. WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves

    NASA Astrophysics Data System (ADS)

    Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise

    2017-10-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult for a practitioner to master, so that the implementation of a sea-wave 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an open-source stereo processing pipeline for sea wave 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig, so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques, both on the disparity map and on the produced point cloud, to remove the vast majority of erroneous points that naturally arise when analyzing the optically complex water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step by step and demonstrated on real datasets acquired at sea.
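
    The triangulation at the core of any such dense stereo pipeline reduces to the classic relation Z = f * B / d. A minimal sketch, not WASS code, with invented parameter values:

```python
def disparity_to_depth(disparity, focal_px, baseline_m):
    """Triangulate depth from stereo disparity: Z = f * B / d.
    Returns None for non-positive disparities (no valid match)."""
    if disparity <= 0:
        return None
    return focal_px * baseline_m / disparity

# example: 1200 px focal length, 2.5 m rig baseline, 30 px disparity
print(disparity_to_depth(30.0, 1200.0, 2.5))  # -> 100.0 (metres)
```

    This also shows why disparity filtering matters: small disparity errors on distant points translate into large depth errors, since depth varies with 1/d.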

  3. Lidar-based door and stair detection from a mobile robot

    NASA Astrophysics Data System (ADS)

    Bansal, Mayank; Southall, Ben; Matei, Bogdan; Eledath, Jayan; Sawhney, Harpreet

    2010-04-01

    We present an on-the-move LIDAR-based object detection system for autonomous and semi-autonomous unmanned vehicle systems. In this paper we make several contributions: (i) we describe an algorithm for real-time detection of objects such as doors and stairs in indoor environments; (ii) we describe efficient data structures and algorithms for processing 3D point clouds acquired by laser scanners in a streaming manner, minimizing memory copying and access. We show qualitative results demonstrating the effectiveness of our approach on runs in an indoor office environment.

  4. Reconstruction of 3d Objects of Assets and Facilities by Using Benchmark Points

    NASA Astrophysics Data System (ADS)

    Baig, S. U.; Rahman, A. A.

    2013-08-01

    Acquiring and modeling 3D geo-data of building assets and facility objects remains a challenge, and a number of methods and technologies are utilized for this purpose: total station, GPS, photogrammetry and terrestrial laser scanning are a few of them. In this paper, points commonly shared by potential facades of assets and facilities modeled from point clouds are identified. These points are useful in the modeling process for reconstructing 3D models of assets and facilities stored for management purposes. These models are segmented along different planes to produce accurate 2D plans. This novel method improves the efficiency and quality of constructing models of assets and facilities, with the aim of using them in 3D management projects such as the maintenance of buildings, or of groups of items that need to be replaced or renovated for new services.

  5. WindCam and MSPI: two cloud and aerosol instrument concepts derived from Terra/MISR heritage

    NASA Astrophysics Data System (ADS)

    Diner, David J.; Mischna, Michael; Chipman, Russell A.; Davis, Ab; Cairns, Brian; Davies, Roger; Kahn, Ralph A.; Muller, Jan-Peter; Torres, Omar

    2008-08-01

    The Multi-angle Imaging SpectroRadiometer (MISR) has been acquiring global cloud and aerosol data from polar orbit since February 2000. MISR acquires moderately high-resolution imagery at nine view angles from nadir to 70.5°, in four visible/near-infrared spectral bands. Stereoscopic parallax, time lapse among the nine views, and the variation of radiance with angle and wavelength enable retrieval of geometric cloud and aerosol plume heights, height-resolved cloud-tracked winds, and aerosol optical depth and particle property information. Two instrument concepts based upon MISR heritage are in development. The Cloud Motion Vector Camera, or WindCam, is a simplified version comprised of a lightweight, compact, wide-angle camera to acquire multiangle stereo imagery at a single visible wavelength. A constellation of three WindCam instruments in polar Earth orbit would obtain height-resolved cloud-motion winds with daily global coverage, making it a low-cost complement to a spaceborne lidar wind measurement system. The Multiangle SpectroPolarimetric Imager (MSPI) is aimed at aerosol and cloud microphysical properties, and is a candidate for the National Research Council Decadal Survey's Aerosol-Cloud-Ecosystem (ACE) mission. MSPI combines the capabilities of MISR with those of other aerosol sensors, extending the spectral coverage to the ultraviolet and shortwave infrared and incorporating high-accuracy polarimetric imaging. Based on requirements for the nonimaging Aerosol Polarimeter Sensor on NASA's Glory mission, a degree of linear polarization uncertainty of 0.5% is specified within a subset of the MSPI bands. We are developing a polarization imaging approach using photoelastic modulators (PEMs) to accomplish this objective.

  6. 77 FR 68122 - Formations of, Acquisitions by, and Mergers of Savings and Loan Holding Companies

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-15

    ..., Minneapolis, Minnesota 55480-0291: 1. The Miller Family 2012 Trust U/A Dated December 21, 2012, St. Cloud... Services of Saint Cloud, Inc., Saint Cloud, MN, and thereby indirectly acquire control of Liberty Savings Bank, FSB, Saint Cloud, Minnesota. Board of Governors of the Federal Reserve System, November 9, 2012...

  7. Integration of Point Clouds and Images Acquired from a Low-Cost NIR Camera Sensor for Cultural Heritage Purposes

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Wojtkowska, M.; Fryskowska, A.

    2017-08-01

    Terrestrial laser scanning is currently one of the most common techniques for modelling and documenting structures of cultural heritage. However, geometric information on its own, without the addition of imagery data, is insufficient for formulating a precise statement about the status of the studied structure, for feature extraction, or for indicating the sites to be restored. Therefore, the authors propose the integration of spatial data from terrestrial laser scanning with imaging data from low-cost cameras. The use of images from low-cost cameras limits the cost of such a study, making it possible to photograph and monitor the given structure more frequently. As a result, the analysed cultural heritage structures can be monitored more closely and in more detail, and the technical documentation concerning them becomes more precise. To supplement the laser scanning information, the authors propose using images taken both in the near-infrared range and in the visible range. This choice is motivated by the fact that not all important features of historical structures are visible in RGB, but they can be identified in NIR imagery, which, when merged with a three-dimensional point cloud, gives full spatial information about the cultural heritage structure in question. The authors propose an algorithm that automates the process of integrating NIR images with a point cloud using parameters calculated during the transformation of RGB images.
A number of conditions affecting the accuracy of the texturing were studied: in particular, the impact of the geometry and number of adjustment points on the accuracy of the integration process, the correlation between the intensity value and the error at specific points using images in different ranges of the electromagnetic spectrum, and the selection of the optimal method of transforming the acquired imagery. The research yielded an innovative solution with high-accuracy results that takes into account a number of factors important in creating the documentation of historical structures. In addition, thanks to the designed algorithm, the final result can be obtained in a very short time and at a high level of automation compared with similar studies, making it possible to obtain a significant data set for further analyses and more detailed monitoring of the state of the historical structures.

  8. The Hyper-Angular Rainbow Polarimeter (HARP) CubeSat Observatory and the Characterization of Cloud Properties

    NASA Astrophysics Data System (ADS)

    Neilsen, T. L.; Martins, J. V.; Fish, C. S.; Fernandez Borda, R. A.

    2014-12-01

    The Hyper-Angular Rainbow Polarimeter (HARP) instrument is a wide field-of-view imager that splits three spatially identical images onto three independent polarizers and detector arrays. This technique achieves simultaneous imagery of the same ground target in three polarization states and is the key innovation for achieving high polarimetric accuracy with no moving parts. The spacecraft is a 3U CubeSat with 3-axis stabilization designed to keep the imaging optics pointing nadir during data collection while maximizing solar panel sun pointing otherwise. The hyper-angular capability is achieved by acquiring overlapping images at very fast speeds. An imaging polarimeter with hyper-angular capability can make a strong contribution to characterizing cloud properties. Non-polarized multi-angle measurements have been shown to be sensitive to thin cirrus and can be used to provide a climatology of these clouds. Adding polarization and increasing the number of observation angles allows retrieval of the complete size distribution of cloud droplets, including accurate information on the width of the droplet distribution in addition to the currently retrieved effective radius. The HARP mission is funded by the NASA Earth Science Technology Office as part of the In-Space Validation of Earth Science Technologies (InVEST) program. The HARP instrument is designed and built by a team of students and professionals led by Dr. Vanderlei Martins at the University of Maryland, Baltimore County. The HARP spacecraft is designed and built by a team of students and professionals at the Space Dynamics Laboratory.

  9. a Comparison Between Active and Passive Techniques for Underwater 3d Applications

    NASA Astrophysics Data System (ADS)

    Bianco, G.; Gallo, A.; Bruno, F.; Muzzupappa, M.

    2011-09-01

    In the field of 3D scanning, there is an increasing need for more accurate technologies to acquire 3D models of close-range objects. Underwater exploration, for example, is very hard to perform due to the hostile conditions and bad visibility of the environment. Some application fields, like underwater archaeology, require recovering three-dimensional data of objects that cannot be moved from their site or touched, in order to avoid possible damage. Photogrammetry is widely used for underwater 3D acquisition, because it requires just one or two digital still or video cameras to acquire a sequence of images taken from different viewpoints. Stereo systems composed of a pair of cameras are often employed on underwater robots (i.e. ROVs, Remotely Operated Vehicles) and used by scuba divers to survey archaeological sites, reconstruct complex 3D structures in the aquatic environment, estimate in situ the length of marine organisms, etc. Stereo 3D reconstruction is based on the triangulation of corresponding points in the two views. This requires finding common points in both images and matching them (the correspondence problem), determining the 3D point on the object by intersecting the corresponding viewing rays. Another 3D technique, frequently used for acquisition in air, solves this point-matching problem by projecting structured lighting patterns that codify the acquired scene; corresponding points are then identified by associating a binary code in both images. In this work we have tested and compared two whole-field 3D imaging techniques (active and passive) based on stereo vision in an underwater environment. A 3D system was designed, composed of a digital projector and two still cameras mounted in waterproof housings, so that the various acquisitions can be performed without changing the configuration of the optical devices. The tests were conducted in a water tank under different turbidity conditions, on objects with different surface properties.
To simulate a typical seafloor, we used various concentrations of clay. The performances of the two techniques are described and discussed. In particular, the point clouds obtained are compared in terms of the number of acquired 3D points and geometrical deviation.
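
    The code-based correspondence idea behind the active technique can be sketched as follows (a hypothetical minimal illustration, not the authors' implementation): each camera pixel is tagged with the binary code it observed across the pattern sequence, and pixels sharing a code are matched between the two views.

```python
def match_by_code(codes_left, codes_right):
    """Structured-light correspondence: pixels observing the same projected
    binary code are matched across the two views, sidestepping the
    texture-based correspondence problem entirely."""
    right_index = {code: px for px, code in codes_right.items()}
    return {pl: right_index[c] for pl, c in codes_left.items()
            if c in right_index}

# toy data: pixel (10, 5) in the left view and (40, 5) in the right view
# both decoded the pattern code 0b1011, so they are matched
left  = {(10, 5): 0b1011, (11, 5): 0b1100}
right = {(40, 5): 0b1011, (41, 5): 0b0001}
print(match_by_code(left, right))  # -> {(10, 5): (40, 5)}
```

    Each matched pixel pair then feeds the same triangulation step used by the passive method.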

  10. Possibilities for Using LIDAR and Photogrammetric Data Obtained with AN Unmanned Aerial Vehicle for Levee Monitoring

    NASA Astrophysics Data System (ADS)

    Bakuła, K.; Ostrowski, W.; Szender, M.; Plutecki, W.; Salach, A.; Górski, K.

    2016-06-01

    This paper presents the possibilities of using an unmanned aerial system for evaluating the condition of levees. The unmanned aerial system is equipped with two types of sensors: an ultra-light laser scanner integrated with a GNSS receiver and an INS system, and a digital camera that acquires data with stereoscopic coverage. The sensors were mounted on the multirotor unmanned platform Hawk Moth, constructed by the MSP company. LiDAR data and images of levees several hundred metres in length were acquired during testing of the platform. Flights were performed in several variants. Control points measured with the GNSS technique were used as reference data. The obtained results are presented in this paper; the methodology of processing the acquired LiDAR data, which improves accuracy when the navigation systems perform poorly as a result of systematic errors, is also discussed. The Iterative Closest Point (ICP) algorithm, as well as measurements of control points, were used to georeference the LiDAR data. Final accuracy on the order of centimetres was obtained for the generated digital terrain model. The final products of the proposed UAV data processing are digital elevation models, an orthophotomap and colour point clouds. The authors conclude that such a platform offers wide possibilities for low-budget flights to deliver data that may compete with typical direct surveying measurements performed during the monitoring of such objects. However, the biggest advantage is the density and continuity of the data, which allows for detection of changes in the monitored objects.
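
    The ICP step used for georeferencing can be shown in a much reduced form. The sketch below is translation-only and 2D with invented data; real ICP also estimates rotation and works on 3D clouds with spatial indexing:

```python
def icp_translation(src, ref, iters=10):
    """Minimal translation-only ICP sketch: repeatedly match each source
    point to its nearest reference point, then shift the source cloud
    by the mean residual of those matches."""
    tx = ty = 0.0
    for _ in range(iters):
        pairs = []
        for sx, sy in src:
            # nearest-neighbour match in the reference cloud
            nx, ny = min(ref, key=lambda r: (sx + tx - r[0]) ** 2
                                            + (sy + ty - r[1]) ** 2)
            pairs.append((nx - (sx + tx), ny - (sy + ty)))
        # update the translation by the mean matched residual
        tx += sum(p[0] for p in pairs) / len(pairs)
        ty += sum(p[1] for p in pairs) / len(pairs)
    return tx, ty

# toy clouds offset by (1, 2); ICP recovers that translation
print(icp_translation([(0, 0), (10, 0)], [(1, 2), (11, 2)]))
```

    In practice the nearest-neighbour search dominates the cost, which is why production implementations use k-d trees rather than this brute-force scan.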

  11. The Segmentation of Point Clouds with K-Means and ANN (Artificial Neural Network)

    NASA Astrophysics Data System (ADS)

    Kuçak, R. A.; Özdemir, E.; Erol, S.

    2017-05-01

    Segmentation of point clouds is now used in many geomatics engineering applications, such as building extraction in urban areas, digital terrain model (DTM) generation, and road or urban furniture extraction. Segmentation is the process of dividing a point cloud into layers according to special characteristics. The present paper discusses point cloud segmentation with K-means and the self-organizing map (SOM), a type of ANN (Artificial Neural Network) algorithm. Point clouds generated with the photogrammetric method and with a Terrestrial Lidar System (TLS) were segmented according to surface normals, intensity and curvature, and the results were evaluated. LIDAR (Light Detection and Ranging) and photogrammetry are commonly used to obtain point clouds in many remote sensing and geodesy applications; with either method, point clouds can be obtained from terrestrial or airborne systems. In this study, the LIDAR measurements were made with a Leica C10 laser scanner, while the photogrammetric point cloud was obtained from photographs taken from the ground with a 13 MP non-metric camera.
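
    The K-means side of the comparison can be sketched on a single per-point feature. This is an invented one-dimensional illustration (e.g. clustering points by intensity), not the paper's multi-feature implementation:

```python
def kmeans_1d(values, centers, iters=20):
    """Plain k-means on one per-point feature (e.g. intensity):
    each point joins its nearest center, then centers move to the
    mean of their cluster; repeat until stable."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)),
                          key=lambda i: (v - centers[i]) ** 2)
            clusters[nearest].append(v)
        # empty clusters keep their previous center
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# two intensity populations: converges to centers 2.0 and 11.0
print(kmeans_1d([1, 2, 3, 10, 11, 12], [0, 15]))  # -> [2.0, 11.0]
```

    For the segmentation described in the paper, the same assignment-update loop runs on vectors combining normal, intensity and curvature rather than a single scalar.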

  12. Aerial Images and Convolutional Neural Network for Cotton Bloom Detection.

    PubMed

    Xu, Rui; Li, Changying; Paterson, Andrew H; Jiang, Yu; Sun, Shangpeng; Robertson, Jon S

    2017-01-01

    Monitoring flower development can provide useful information for production management, estimating yield and selecting specific genotypes of crops. The main goal of this study was to develop a methodology to detect and count cotton flowers, or blooms, using color images acquired by an unmanned aerial system. The aerial images were collected from two test fields over 4 days. A convolutional neural network (CNN) was designed and trained to detect cotton blooms in raw images, and their 3D locations were calculated using the dense point cloud constructed from the aerial images with the structure-from-motion method. The quality of the dense point cloud was analyzed and plots with poor quality were excluded from data analysis. A constrained clustering algorithm was developed to register the same bloom detected in different images based on the 3D location of the bloom. The accuracy and incompleteness of the dense point cloud were analyzed because they affect the accuracy of the 3D locations of the blooms and thus the accuracy of the bloom registration. The constrained clustering algorithm was validated using simulated data, showing good efficiency and accuracy. The bloom count from the proposed method was comparable with the number counted manually, with an error of -4 to 3 blooms for the field with a single plant per plot. However, more plots were underestimated in the field with multiple plants per plot due to hidden blooms that were not captured by the aerial images. The proposed methodology provides a high-throughput method to continuously monitor the flowering progress of cotton.
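
    The registration idea, merging detections of the same bloom across images while never merging two detections from the same image, can be sketched as a greedy constrained clustering. All names, the radius value and the data below are invented for illustration; the paper's algorithm is more sophisticated:

```python
def register_blooms(detections, radius=0.5):
    """Greedy constrained clustering sketch: detections whose 3D locations
    fall within `radius` of a cluster center are treated as the same bloom,
    but two detections from the same image are never merged
    (cannot-link constraint). Returns the bloom count."""
    clusters = []  # each cluster: (list of points, set of image ids)
    for img_id, (x, y, z) in detections:
        for pts, imgs in clusters:
            cx = sum(p[0] for p in pts) / len(pts)
            cy = sum(p[1] for p in pts) / len(pts)
            cz = sum(p[2] for p in pts) / len(pts)
            near = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius ** 2
            if near and img_id not in imgs:
                pts.append((x, y, z))
                imgs.add(img_id)
                break
        else:  # no compatible cluster found: this is a new bloom
            clusters.append(([(x, y, z)], {img_id}))
    return len(clusters)

# two images see the same bloom near the origin; img1 also sees a second one
dets = [("img1", (0, 0, 0)), ("img2", (0.1, 0, 0)), ("img1", (5, 5, 5))]
print(register_blooms(dets))  # -> 2
```

    The cannot-link constraint is what prevents two distinct blooms detected in the same photo from collapsing into one count.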

  13. Applicability Analysis of Cloth Simulation Filtering Algorithm for Mobile LIDAR Point Cloud

    NASA Astrophysics Data System (ADS)

    Cai, S.; Zhang, W.; Qi, J.; Wan, P.; Shao, J.; Shen, A.

    2018-04-01

    Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated as an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has gradually been applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds differ in point density, distribution and complexity. Some filtering algorithms designed for airborne LiDAR data have been applied directly to mobile LiDAR point clouds but did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm to handle mobile LiDAR point clouds. Three samples with different terrain shapes were selected to test the performance of the algorithm, yielding total errors of 0.44 %, 0.77 % and 1.20 %, respectively. Additionally, a large-area dataset was tested to further validate the effectiveness of the algorithm, and the results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, the algorithm is efficient and reliable for mobile LiDAR point clouds.
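
    CSF itself simulates a cloth draped over the inverted point cloud; as a far simpler stand-in that conveys only the goal of ground/non-ground separation (not the CSF method, and with invented cell size and threshold), the sketch below labels a point as ground when it lies near the lowest point of its XY grid cell:

```python
def ground_filter_grid(points, cell=1.0, dz=0.2):
    """Not CSF -- a much simpler illustrative ground filter: a point is
    'ground' if it lies within dz of the lowest point falling in its
    (cell x cell) XY bin."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        lowest[key] = min(lowest.get(key, z), z)
    return [(x, y, z) for x, y, z in points
            if z - lowest[(int(x // cell), int(y // cell))] <= dz]

# two low points survive; the 2 m point (e.g. vegetation) is dropped
pts = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.1), (0.5, 0.5, 2.0)]
print(ground_filter_grid(pts))
```

    Such per-cell minimum filters break down on slopes and under dense canopies, which is exactly the kind of situation the physically-based cloth simulation handles more gracefully.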

  14. Investigating the Accuracy of Point Clouds Generated for Rock Surfaces

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.

    2016-12-01

    Point clouds produced by different techniques are widely used to model rocks and to obtain properties of rock surfaces such as roughness, volume and area. These point clouds can be generated by laser scanning and by close-range photogrammetry. Laser scanning is the most common method: the laser scanner produces a 3D point cloud at regular intervals. In close-range photogrammetry, a point cloud can be produced from photographs taken under appropriate conditions, thanks to developing hardware and software technology; many photogrammetric software packages, open source or commercial, now support point cloud generation. The two methods are close to each other in terms of accuracy: with a qualified digital camera or laser scanner, sufficient accuracy in the mm to cm range can be obtained. With both methods, field work is completed in less time than with conventional techniques. In close-range photogrammetry, any part of a rock surface can be completely represented thanks to overlapping oblique photographs. Despite the similarity of the resulting data, the two methods differ considerably in cost. In this study, we investigate whether a point cloud produced from photographs can be used instead of one produced by a laser scanner. For this purpose, rock surfaces with complex and irregular shapes, located on the İstanbul Technical University Ayazaga Campus, were selected as the study object. The selected object is a mixture of different rock types and consists of both partly weathered and fresh parts. The study was performed on a 30 m x 10 m portion of the rock surface. 2D (area-based) and 3D (volume-based) analyses were performed for several regions selected from the point clouds of the surface models. The analyses showed that the point clouds from the two methods are similar and can be used as alternatives to each other.
This proves that a point cloud produced from photographs, which is both more economical and faster to produce, can be used in several kinds of studies instead of a point cloud produced by a laser scanner.
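
    One simple way to quantify how closely two such point clouds agree (a generic sketch, not the analysis method used in the paper) is the mean nearest-neighbour distance from one cloud to the other:

```python
import math

def mean_cloud_deviation(cloud_a, cloud_b):
    """Mean nearest-neighbour distance from cloud_a to cloud_b -- a basic
    cloud-to-cloud deviation measure. Brute force; real tools use k-d trees."""
    total = 0.0
    for a in cloud_a:
        total += min(math.dist(a, b) for b in cloud_b)
    return total / len(cloud_a)

# photogrammetric cloud offset 0.1 units in Z from the laser cloud
photo = [(0, 0, 0), (1, 0, 0)]
laser = [(0, 0, 0.1), (1, 0, 0.1)]
print(mean_cloud_deviation(photo, laser))
```

    Note the measure is asymmetric; comparing in both directions guards against one cloud covering only part of the other.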

  15. Evaluating Dense 3d Reconstruction Software Packages for Oblique Monitoring of Crop Canopy Surface

    NASA Astrophysics Data System (ADS)

    Brocks, S.; Bareth, G.

    2016-06-01

    Crop Surface Models (CSMs) are 2.5D raster surfaces representing absolute plant canopy height. Using multiple CSMs generated from data acquired at multiple time steps, crop surface monitoring is enabled. This makes it possible to monitor crop growth over time and can be used for monitoring in-field crop growth variability, which is useful in the context of high-throughput phenotyping. This study aims to evaluate several software packages for dense 3D reconstruction from multiple overlapping RGB images at field and plot scale. A summer barley field experiment located at the Campus Klein-Altendorf of the University of Bonn was observed by acquiring stereo images from an oblique angle using consumer-grade smart cameras. Two such cameras were mounted at an elevation of 10 m and acquired images for a period of two months during the 2014 growing period. The field experiment consisted of nine barley cultivars that were cultivated in multiple repetitions and nitrogen treatments. Manual plant height measurements were carried out on four dates during the observation period. The software packages Agisoft PhotoScan, VisualSfM with CMVS/PMVS2, and SURE are investigated. The point clouds are georeferenced through a set of ground control points. Where adequate results are reached, a statistical analysis is performed.
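
    The quantity a CSM ultimately delivers, absolute canopy height, is simply the canopy surface elevation minus the ground elevation per raster cell. A minimal sketch with invented toy rasters:

```python
def plot_heights(csm, dtm):
    """Per-cell plant height from a crop surface model: canopy elevation
    minus ground elevation, clamped at zero (the canopy cannot be below
    the ground; negative differences are treated as noise)."""
    return [[max(c - g, 0.0) for c, g in zip(crow, grow)]
            for crow, grow in zip(csm, dtm)]

# one-row toy raster: canopy at 101.2 m over ground at 100.0 m -> ~1.2 m crop
print(plot_heights([[101.2, 100.9]], [[100.0, 101.5]]))
```

    Differencing CSMs from successive acquisition dates in the same way yields per-cell growth rates.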

  16. Jovian Antarctica.

    NASA Image and Video Library

    2017-02-04

    Cyclones swirl around the south pole, and white oval storms can be seen near the limb -- the apparent edge of the planet -- in this image of Jupiter's south polar region taken by the JunoCam imager aboard NASA's Juno spacecraft. The image was acquired on February 2, 2017, at 5:52 a.m. PST (8:52 a.m. EST) from an altitude of 47,600 miles (76,600 kilometers) above Jupiter's swirling cloud deck. Prior to the Feb. 2 flyby, the public was invited to vote for their favorite points of interest in the Jovian atmosphere for JunoCam to image. The point of interest captured here was titled "Jovian Antarctica" by a member of the public, in reference to Earth's Antarctica. http://photojournal.jpl.nasa.gov/catalog/PIA21380

  17. Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination

    PubMed Central

    Fasano, Giancarmine; Grassi, Michele

    2017-01-01

    In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal. PMID:28946651

  18. Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination.

    PubMed

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2017-09-24

    In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal.
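
    Benchmarking a LIDAR pose estimate against the camera-derived reference requires an attitude error metric. A standard choice (a generic sketch, not the paper's code) is the geodesic angle between the estimated and reference rotation matrices:

```python
import math

def rotation_angle_error_deg(R_est, R_ref):
    """Geodesic angle (degrees) between two 3x3 rotation matrices: the
    single-axis rotation that maps one attitude onto the other."""
    # relative rotation R_est * R_ref^T
    rel = [[sum(R_est[i][k] * R_ref[j][k] for k in range(3))
            for j in range(3)] for i in range(3)]
    # angle from the trace: cos(theta) = (trace - 1) / 2, clamped for safety
    tr = rel[0][0] + rel[1][1] + rel[2][2]
    return math.degrees(math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0))))

# a 10-degree rotation about Z versus the identity -> 10-degree error
a = math.radians(10)
Rz = [[math.cos(a), -math.sin(a), 0], [math.sin(a), math.cos(a), 0], [0, 0, 1]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(rotation_angle_error_deg(Rz, I3))  # ~10.0
```

    The clamp on the cosine guards against floating-point round-off pushing the argument of acos marginally outside [-1, 1].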

  19. Effective Detection of Sub-Surface Archeological Features from Laser Scanning Point Clouds and Imagery Data

    NASA Astrophysics Data System (ADS)

    Fryskowska, A.; Kedzierski, M.; Walczykowski, P.; Wierzbicki, D.; Delis, P.; Lada, A.

    2017-08-01

    The archaeological heritage is non-renewable, and any invasive research or other mechanical or chemical intervention into the ground destroys the archaeological site in whole or in part. For this reason, modern archeology is looking for alternative, non-destructive and non-invasive methods of identifying new objects. The concept of aerial archeology rests on the relation between the presence of an archaeological site at a particular location and the phenomena that can be observed at that place on the terrain surface from an airborne platform. One of the most appreciated, and extremely precise, methods of such measurement is airborne laser scanning. In this research an airborne laser scanning point cloud with a density of 5 points/sq. m was used. Additionally, unmanned aerial vehicle imagery was acquired. The test area is located in central Europe. The preliminary verification of potential microstructure locations was the creation of digital terrain and surface models. These models provided information about differences in elevation, as well as regular shapes and sizes that can be related to former settlements or sub-surface features. The paper presents the results of the detection of potential sub-surface microstructure fields in the forested area.

  20. Ground volume assessment using 'Structure from Motion' photogrammetry with a smartphone and a compact camera

    NASA Astrophysics Data System (ADS)

    Wróżyński, Rafał; Pyszny, Krzysztof; Sojka, Mariusz; Przybyła, Czesław; Murat-Błażejewska, Sadżide

    2017-06-01

    The article describes how the Structure-from-Motion (SfM) method can be used to calculate the volume of anthropogenic microtopography. In the proposed workflow, data is obtained using mass-market devices such as a compact camera (Canon G9) and a smartphone (iPhone 5). The volume is computed using free open-source software (VisualSFM v0.5.23, CMPMVS v0.6.0, MeshLab) on a PC-class computer. The input data is acquired from video frames. To verify the method, laboratory tests on an embankment of known volume were carried out. Models of the test embankment were built using two independent measurements made with the two devices. No significant differences were found between the models in a comparative analysis; the volumes of the models differed from the actual volume by just 0.7‰ and 2‰. After successful laboratory verification, field measurements were carried out in the same way. While building the model from the data acquired with the smartphone, it was observed that a series of frames, approximately 14% of the total, was rejected. The missing frames made the point cloud less dense where they had been rejected, and as a result the model's volume differed by 7% from the volume obtained with the camera. To improve homogeneity, the frame extraction frequency was increased where frames had previously been missing, yielding a uniform model with evenly distributed point cloud density. There was then a 1.5% difference between the embankment's volume and the volume calculated from the camera-recorded video. The presented method permits the number of input frames to be increased and the model's accuracy to be enhanced without making an additional measurement, which may not be possible in the case of temporary features.
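
    Once a dense point cloud of the feature exists, its volume above a base plane can be estimated by gridding. This is a generic sketch of the idea (invented cell size and data), not the MeshLab-based computation used in the article:

```python
def volume_above_base(points, base_z=0.0, cell=0.1):
    """Volume estimate from a point cloud: bin points into an XY grid and
    sum (cell area) * (mean height above the base plane) per occupied cell."""
    cells = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        cells.setdefault(key, []).append(max(z - base_z, 0.0))
    return sum(cell * cell * sum(h) / len(h) for h in cells.values())

# two points at 1.0 m height in a single 1 m x 1 m cell -> 1.0 cubic metre
print(volume_above_base([(0.2, 0.2, 1.0), (0.8, 0.8, 1.0)], cell=1.0))
```

    The cell size trades accuracy against sensitivity to sparse regions, which is why the uneven point density caused by the rejected smartphone frames biased the volume estimate.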

  1. Cloud Detection from Satellite Imagery: A Comparison of Expert-Generated and Automatically-Generated Decision Trees

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar

    2004-01-01

Automated cloud detection and tracking is an important step in assessing global climate change via remote sensing. Cloud masks, which indicate whether individual pixels depict clouds, are included in many of the data products that are based on data acquired on board Earth satellites. Many cloud-mask algorithms have the form of decision trees, which employ sequential tests that scientists designed based on empirical studies and physics simulations. Limitations of existing cloud masks restrict our ability to accurately track changes in cloud patterns over time. In this study we explored the potential benefits of automatically learned decision trees for detecting clouds from images acquired using the Advanced Very High Resolution Radiometer (AVHRR) instrument on board the NOAA-14 weather satellite of the National Oceanic and Atmospheric Administration. We constructed three decision trees for a sample of 8 km daily AVHRR data from 2000 using a decision-tree learning procedure provided within MATLAB(R), and compared the accuracy of the decision trees to the accuracy of the cloud mask. We used ground observations collected by the National Aeronautics and Space Administration's Clouds and the Earth's Radiant Energy System S'COOL project as the gold standard. For the sample data, the accuracy of the automatically learned decision trees was greater than the accuracy of the cloud masks included in the AVHRR data product.
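The learned sequential tests described above can be illustrated with a toy, dependency-free version of the basic building block a tree learner applies recursively: a single learned threshold test. The features, values and "bright and cold" rule of thumb below are hypothetical synthetic data, not the AVHRR product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "pixels": column 0 ~ visible reflectance, column 1 ~ brightness
# temperature (K). Hypothetical rule of thumb: clouds are bright and cold.
n = 400
cloud = np.column_stack([rng.normal(0.7, 0.1, n), rng.normal(240, 8, n)])
clear = np.column_stack([rng.normal(0.2, 0.1, n), rng.normal(285, 8, n)])
X, y = np.vstack([cloud, clear]), np.array([1] * n + [0] * n)

def best_stump(X, y):
    """Greedy search for the single feature/threshold test that minimises
    misclassification -- the split criterion a tree learner applies at each node."""
    best = (None, None, None, 1.0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                err = np.mean(((sign * (X[:, j] - t) > 0).astype(int)) != y)
                if err < best[3]:
                    best = (j, t, sign, err)
    return best

feature, threshold, sign, error = best_stump(X, y)
print(feature, round(error, 3))
```

A full decision tree repeats this search recursively on each side of the chosen split; libraries such as MATLAB's tree learner or scikit-learn automate exactly this.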

  2. LiDAR Point Cloud and Stereo Image Point Cloud Fusion

    DTIC Science & Technology

    2013-09-01

LiDAR point cloud (right) highlighting linear edge features ideal for automatic registration... Areas where topography is being derived, unfortunately, do...with the least amount of automatic correlation errors was used. The following graphic (Figure 12) shows the coverage of the WV1 stereo triplet as

  3. LIDAR Point Cloud Data Extraction and Establishment of 3D Modeling of Buildings

    NASA Astrophysics Data System (ADS)

    Zhang, Yujuan; Li, Xiuhai; Wang, Qiang; Liu, Jiang; Liang, Xin; Li, Dan; Ni, Chundi; Liu, Yan

    2018-01-01

This paper applies Shepard's method to the original LIDAR point cloud data to generate a regular-grid DSM, filters ground points from non-ground points with a double least-squares method, and thereby obtains a regularised DSM. A region-growing method is used to segment the regularised DSM and remove non-building points, yielding the building point cloud. The Canny operator then extracts the edges of the segmented buildings, and Hough-transform line detection regularises and smooths those edges. Finally, the E3De3 software is used to establish the 3D model of the buildings.
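The Hough-transform line-detection step can be sketched in a few lines. The following toy implementation (illustrative only, run on a synthetic edge image rather than real Canny output) accumulates votes in (theta, rho) space and recovers the dominant building edge:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Minimal Hough transform: every edge pixel votes for all lines
    x*cos(theta) + y*sin(theta) = rho passing through it; the strongest
    accumulator cell is the dominant line."""
    ys, xs = np.nonzero(edges)
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[np.arange(n_theta), rhos] += 1
    t, r = np.unravel_index(np.argmax(acc), acc.shape)
    return np.rad2deg(thetas[t]), r - diag, acc.max()

# Synthetic "building edge": a vertical line of 16 edge pixels at x = 5
img = np.zeros((20, 20), dtype=bool)
img[2:18, 5] = True
theta, rho, votes = hough_lines(img)
print(theta, rho, votes)  # dominant line: x*cos(0) + y*sin(0) = 5, 16 votes
```

Production code would use an optimised implementation (e.g. OpenCV's `HoughLines`), but the voting scheme is the same.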

  4. Developing Remote Sensing Capabilities for Meter-Scale Sea Ice Properties

    DTIC Science & Technology

    2013-09-30

such as MODIS. APPROACH 1. Task and acquire high resolution panchromatic and multispectral optical (e.g. Quickbird, Worldview, National Assets...cloud cover, an excessive percentage of the imagery acquired over drifting sites was cloud covered, and the vendor did not delay acquisitions or

  5. Automatic Classification of Trees from Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2015-08-01

Development of laser scanning technologies has promoted tree monitoring studies to a new level, as laser scanning point clouds enable accurate 3D measurements in a fast and environmentally friendly manner. In this paper, we introduce a probability-matrix-based algorithm for automatically classifying laser scanning point clouds into 'tree' and 'non-tree' classes. Our method uses the 3D coordinates of the laser scanning points as input and generates a new point cloud in which each point carries a label indicating whether it belongs to the 'tree' or 'non-tree' class. To do so, a grid surface is assigned to the lowest height level of the point cloud. The grid cells are filled with probability values calculated from the point density above each cell. Since tree trunk locations appear as very high values in the probability matrix, selecting the local maxima of the grid surface helps to detect the tree trunks. Further points are assigned to tree trunks if they appear in close proximity to them. Since heavy mathematical computations (such as point cloud organisation, detailed 3D shape detection methods, or graph network generation) are not required, the proposed algorithm is very fast compared to existing methods. The tree classification results are found to be reliable even on point clouds of cities containing many different objects. The most significant weakness is that false detection of light poles, traffic signs and other objects close to trees cannot be prevented. Nevertheless, the experimental results on mobile and airborne laser scanning point clouds indicate the algorithm's possible use as an important step in tree growth observation, tree counting and similar applications. While laser scanning point clouds make it possible to classify even very small trees, the accuracy of the results is reduced in low-point-density areas further away from the scanning location. 
These advantages and disadvantages of the two laser scanning point cloud sources are discussed in detail.
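The probability-matrix idea, projecting the cloud onto a ground-level grid and filling each cell with the density of points above it, can be sketched as follows. The synthetic points and the 1 m cell size are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def trunk_probability_grid(points, cell=1.0):
    """Project points onto an xy-grid at the lowest height level and count the
    points stacked above each cell; dense vertical columns (trunk plus crown)
    show up as local maxima of the normalised grid."""
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=int)
    np.add.at(grid, (ij[:, 0], ij[:, 1]), 1)
    return grid / grid.sum()          # normalise counts to probabilities

rng = np.random.default_rng(1)
# Hypothetical scene: one dense vertical "tree" column near (5, 5) plus sparse ground
tree = np.column_stack([rng.normal(5, 0.2, 300),
                        rng.normal(5, 0.2, 300),
                        rng.uniform(0, 8, 300)])
ground = np.column_stack([rng.uniform(0, 10, 200),
                          rng.uniform(0, 10, 200),
                          np.zeros(200)])
grid = trunk_probability_grid(np.vstack([tree, ground]))
peak = np.unravel_index(np.argmax(grid), grid.shape)
print(peak)  # grid cell of the trunk candidate, near (5, 5)
```

Selecting all local maxima of `grid` (rather than the single `argmax`) would yield one trunk candidate per tree.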

  6. Overview of the ICESat Mission and Results

    NASA Astrophysics Data System (ADS)

    Zwally, H.

    2004-12-01

NASA's Ice, Cloud, and Land Elevation Satellite (ICESat), launched in January 2003, has been measuring surface elevations of ice and land, vertical distributions of clouds and aerosols, vegetation-canopy heights, and other features with unprecedented accuracy and sensitivity. The ICESat mission, which was designed to operate continuously for 3 to 5 years, has so far acquired science data during five periods of laser operation ranging from 33 to 54 days each. The primary purpose of ICESat has been to acquire time-series of ice-sheet elevation changes for determination of the present-day mass balance of the ice sheets, study of associations between observed ice changes and polar climate, and improved estimates of the present and future contributions to global sea level rise. ICESat's atmospheric measurements are providing fundamentally new information on the precise vertical structure of clouds and aerosols. In particular, cloud heights are important for understanding radiation balance and its effects on climate change. Other applications include mapping of polar sea-ice freeboard and thickness, high-resolution mapping of ocean eddies, glacier topography, and lake and river levels. ICESat has a 1064 nm laser channel for near-surface altimetry with a designed range precision of 10 cm that is actually 2 cm on-orbit. Vertical distributions of clouds and aerosols are obtained with 75 m resolution from both the 1064 nm channel and the more sensitive 532 nm channel. The laser footprints are about 70 m, spaced at 170 m along-track. The accuracy of the satellite-orbital heights is about 3 cm. The star-tracking attitude-determination system should enable footprints to be located to 6 m horizontally when attitude calibration is completed. The spacecraft attitude is controlled to point the laser beam to within 100 m (35 m goal) of reference surface tracks at high latitudes and to point off-nadir up to 5 degrees to targets of interest. 
The remaining laser lifetime will be used for approximately 33-day operating periods at 3- to 6-month intervals to optimize the science return. The first ICESat was intended to be followed by successive missions to measure changes over 15 years, and it has clearly proven the unique capability of laser measurements to meet multi-disciplinary science objectives. An example of continuing requirements is: "Continued observations with satellite altimeters, including ... the laser altimeter on ICESat ... should be continued for at least 15 years ... to establish the climate sensitivities of the ice mass balance and decadal-scale trends" (Climate Change 2001, IPCC, 2001).

  7. Clouds over the summertime Sahara: an evaluation of Met Office retrievals from Meteosat Second Generation using airborne remote sensing

    NASA Astrophysics Data System (ADS)

    Kealy, John C.; Marenco, Franco; Marsham, John H.; Garcia-Carreras, Luis; Francis, Pete N.; Cooke, Michael C.; Hocking, James

    2017-05-01

    Novel methods of cloud detection are applied to airborne remote sensing observations from the unique Fennec aircraft dataset, to evaluate the Met Office-derived products on cloud properties over the Sahara based on the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) on-board the Meteosat Second Generation (MSG) satellite. Two cloud mask configurations are considered, as well as the retrievals of cloud-top height (CTH), and these products are compared to airborne cloud remote sensing products acquired during the Fennec campaign in June 2011 and June 2012. Most detected clouds (67 % of the total) have a horizontal extent that is smaller than a SEVIRI pixel (3 km × 3 km). We show that, when partially cloud-contaminated pixels are included, a match between the SEVIRI and aircraft datasets is found in 80 ± 8 % of the pixels. Moreover, under clear skies the datasets are shown to agree for more than 90 % of the pixels. The mean cloud field, derived from the satellite cloud mask acquired during the Fennec flights, shows that areas of high surface albedo and orography are preferred sites for Saharan cloud cover, consistent with published theories. Cloud-top height retrievals however show large discrepancies over the region, which are ascribed to limiting factors such as the cloud horizontal extent, the derived effective cloud amount, and the absorption by mineral dust. The results of the CTH analysis presented here may also have further-reaching implications for the techniques employed by other satellite applications facilities across the world.

  8. A scalable and multi-purpose point cloud server (PCS) for easier and faster point cloud data management and processing

    NASA Astrophysics Data System (ADS)

    Cura, Rémi; Perret, Julien; Paparoditis, Nicolas

    2017-05-01

In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can conjointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ~50 million pts/h per process range, transparent-for-the-user compression at ratios of 2:1 to 4:1 or better, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods such as object detection.
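The patch-based idea, storing and filtering groups of points as single compressed rows rather than as individual points, can be illustrated with an in-memory SQLite toy. The 10 m patch size and the schema are illustrative assumptions; the actual system runs on a full database server:

```python
import sqlite3
import zlib

import numpy as np

# Group points into 10 m x 10 m patches and store each patch as one compressed
# blob row -- a toy, in-memory version of the patch-based server idea.
rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, (50_000, 3))
keys = np.floor(pts[:, :2] / 10).astype(int)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE patch (i INT, j INT, n INT, data BLOB)")
for key in np.unique(keys, axis=0):
    patch = pts[(keys == key).all(axis=1)]
    blob = zlib.compress(patch.astype(np.float32).tobytes())
    db.execute("INSERT INTO patch VALUES (?,?,?,?)",
               (int(key[0]), int(key[1]), len(patch), blob))

# Patch-level filtering touches one row instead of hundreds of points
n = db.execute("SELECT n FROM patch WHERE i=0 AND j=0").fetchone()[0]
n_patches = db.execute("SELECT COUNT(*) FROM patch").fetchone()[0]
print(n_patches, n)
```

Queries that only need patch metadata (extent, point count) never decompress the blobs, which is what makes patch-level filtering fast.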

  9. Accuracy Assessment of a Complex Building 3d Model Reconstructed from Images Acquired with a Low-Cost Uas

    NASA Astrophysics Data System (ADS)

    Oniga, E.; Chirilă, C.; Stătescu, F.

    2017-02-01

Nowadays, Unmanned Aerial Systems (UASs) are a widely used acquisition technique for creating building 3D models, providing a high number of images at very high resolution, or video sequences, in a very short time. Since low-cost UASs are preferred, the accuracy of a building 3D model created using these platforms must be evaluated. To this end, the dean's office building of the Faculty of "Hydrotechnical Engineering, Geodesy and Environmental Engineering" of Iasi, Romania, was chosen: a complex-shaped building whose roof is formed of two hyperbolic paraboloids. Seven points were placed on the ground around the building, three of them used as GCPs and the remaining four as check points (CPs) for accuracy assessment. Additionally, the coordinates of 10 natural CPs representing characteristic points of the building were measured with a Leica TCR 405 total station. The building 3D model was created as a point cloud automatically generated from the digital images acquired with the low-cost UAS, using image matching algorithms in different software packages: 3DF Zephyr, Visual SfM, PhotoModeler Scanner and Drone2Map for ArcGIS. Except for the PhotoModeler Scanner software, the interior and exterior orientation parameters were determined simultaneously by solving a self-calibrating bundle adjustment. Based on the UAS point clouds automatically generated with the above-mentioned software, and on GNSS data respectively, the parameters of the east-side hyperbolic paraboloid were calculated using the least-squares method and statistical blunder detection. Then, to assess the accuracy of the building 3D model, several comparisons were made for the facades and the roof against reference data considered to have minimal errors: a TLS mesh for the facades and a GNSS mesh for the roof. 
Finally, the front facade of the building was created in 3D from its characteristic points using the PhotoModeler Scanner software, resulting in a CAD (Computer Aided Design) model. The results showed the high potential of low-cost UASs for building 3D model creation, and that the accuracy improves significantly when the building 3D model is created from its characteristic points.
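Because the hyperbolic paraboloid's parameters enter the surface equation linearly, the least-squares fit mentioned above is an ordinary linear regression. A sketch on synthetic saddle-shaped data (the coefficients below are made up, and the paper's statistical blunder detection is omitted):

```python
import numpy as np

def fit_hyperbolic_paraboloid(x, y, z):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f.
    All parameters enter linearly, so ordinary least squares applies."""
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

rng = np.random.default_rng(3)
x, y = rng.uniform(-5, 5, (2, 1000))
z = 0.3 * x**2 - 0.2 * y**2 + rng.normal(0, 0.01, 1000)  # noisy saddle surface
a, b, c, d, e, f = fit_hyperbolic_paraboloid(x, y, z)
print(round(a, 2), round(b, 2))  # recovers approximately 0.3 and -0.2
```

Blunder detection would then iterate this fit, down-weighting or removing points whose residuals exceed a statistical threshold.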

  10. Deadly Fires Engulfing Madeira seen by NASA MISR

    NASA Image and Video Library

    2016-08-12

A wildfire spread to the capital city of Funchal on the island of Madeira, an autonomous region of Portugal, over the nighttime hours of Tuesday, Aug. 9, 2016, with three deaths reported and hundreds of others hospitalized. Several homes and a luxury hotel have burned, and a thousand people have been evacuated. The three fatalities are reported to be elderly people who were unable to escape when their homes caught fire. The fire ignited Monday, Aug. 8, after several weeks of scorching temperatures topping 95 degrees Fahrenheit and very dry weather. The entire island is only 30 miles (48 kilometers) from end to end, which naturally makes protecting the island's 270,000 residents and many tourists more difficult. The MISR (Multi-angle Imaging SpectroRadiometer) instrument aboard NASA's Terra satellite passed directly over the island of Madeira on Wednesday, Aug. 10, 2016. The left image is a true-color image taken by MISR's 60-degree forward-pointing camera. This oblique view gives a better view of the smoke than a downward-pointing view. The island of Madeira is the only land within the field of view, and the smoke from the wildfire is being blown to the southwest. The city of Funchal is located on the southeastern coast of the island. MISR's nine cameras, each viewing Earth at a different angle, can be used to determine the height of clouds and smoke above the surface in much the same way that our two eyes, pointing in slightly different directions, give us depth perception. The right-hand image shows MISR's publicly available standard cloud top height product. These data show that the main body of clouds is indeed very low, less than 0.6 miles (1 kilometer) above sea level, while the smoke plume is about 1.9 miles (3 kilometers) high at the source, dropping lower as it is blown to the southwest. A stereo "anaglyph" of this scene is also available at PIA20886. 
As can be seen from both the MISR height product and the 3D anaglyph, the isolated clouds to the south are much higher than either the low clouds or the plume. Interestingly, the low clouds drop to almost sea level and then die out near where the smoke is present. These data were acquired during Terra orbit 88524. http://photojournal.jpl.nasa.gov/catalog/PIA20887

  11. Cloud Computing E-Communication Services in the University Environment

    ERIC Educational Resources Information Center

    Babin, Ron; Halilovic, Branka

    2017-01-01

    The use of cloud computing services has grown dramatically in post-secondary institutions in the last decade. In particular, universities have been attracted to the low-cost and flexibility of acquiring cloud software services from Google, Microsoft and others, to implement e-mail, calendar and document management and other basic office software.…

  12. A 3D clustering approach for point clouds to detect and quantify changes at a rock glacier front

    NASA Astrophysics Data System (ADS)

    Micheletti, Natan; Tonini, Marj; Lane, Stuart N.

    2016-04-01

Terrestrial Laser Scanners (TLS) are extensively used in geomorphology to remotely sense landforms and surfaces of any type and to derive digital elevation models (DEMs). Modern devices are able to collect many millions of points, so that working on the resulting dataset is often troublesome in terms of computational effort. Indeed, it is not unusual for raw point clouds to be filtered prior to DEM creation, so that only a subset of points is retained and the interpolation process becomes less of a burden. Whilst this procedure is in many cases necessary, it entails a considerable loss of valuable information. First, and even without eliminating points, the common interpolation of points to a regular grid causes a loss of potentially useful detail. Second, it inevitably causes the transition from 3D information to only 2.5D data, where each (x,y) pair must have a unique z-value. Vector-based DEMs (e.g. triangulated irregular networks) partially mitigate these issues, but still require a set of parameters to be set and impose a considerable burden in terms of calculation and storage. For these reasons, being able to perform geomorphological research directly on point clouds would be profitable. Here, we propose an approach to identify erosion and deposition patterns on a very active rock glacier front in the Swiss Alps to monitor sediment dynamics. The general aim is to set up a semiautomatic method to isolate mass movements using 3D feature identification directly from LiDAR data. An ultra-long-range LiDAR RIEGL VZ-6000 scanner was employed to acquire point clouds during three consecutive summers. In order to isolate single clusters of erosion and deposition we applied Density-Based Spatial Clustering of Applications with Noise (DBSCAN), previously employed successfully by Tonini and Abellan (2014) in a similar case for rockfall detection. 
DBSCAN requires two input parameters, which strongly influence the number, shape and size of the detected clusters: the minimum number of points (i) within a maximum distance (ii) around each core point. Under this condition, seed points are said to be density-reachable by a core point delimiting a cluster around it. A chain of intermediate seed points can connect contiguous clusters, allowing clusters of arbitrary shape to be defined. The novelty of the proposed approach consists in the implementation of the DBSCAN 3D module, where the xyz-coordinates identify each point and the density of points within a sphere is considered. This allows volumetric features to be detected with a higher accuracy, depending only on the actual sampling resolution. The approach is truly 3D and exploits all TLS measurements without the need for interpolation or data reduction. Using this method, enhanced geomorphological activity during the summer of 2015 with respect to the previous two years was observed. We attribute this result to the exceptionally high temperatures of that summer, which we deem responsible for accelerating the melting process at the rock glacier front and probably also increasing creep velocities. References: - Tonini, M. and Abellan, A. (2014). Rockfall detection from terrestrial LiDAR point clouds: A clustering approach using R. Journal of Spatial Information Science, No. 8, pp. 95-110. - Hennig, C. Package fpc: Flexible procedures for clustering. https://cran.r-project.org/web/packages/fpc/index.html, 2015. Accessed 2016-01-12.
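A minimal, dependency-free sketch of DBSCAN's two-parameter clustering on 3D points, with `eps` as the maximum distance and `min_pts` as the minimum number of points around each core point (the synthetic "deposits" are illustrative, not the study's TLS data):

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal 3D DBSCAN: returns labels, where labels[i] is the cluster id
    of point i and -1 marks noise. Clusters grow from core points through
    chains of density-reachable points, so they can take arbitrary shapes."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    neighbors = [np.nonzero(row <= eps)[0] for row in dist]
    core = np.array([len(nb) >= min_pts for nb in neighbors])
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        labels[i] = cluster
        stack = [i]
        while stack:               # expand through density-reachable points
            j = stack.pop()
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cluster
                    if core[k]:    # only core points propagate the cluster
                        stack.append(k)
        cluster += 1
    return labels

rng = np.random.default_rng(4)
blob1 = rng.normal([0, 0, 0], 0.2, (50, 3))   # dense "deposit" 1
blob2 = rng.normal([5, 5, 5], 0.2, (50, 3))   # dense "deposit" 2
noise = rng.uniform(-2, 7, (5, 3))            # scattered points
labels = dbscan(np.vstack([blob1, blob2, noise]), eps=0.8, min_pts=5)
print(len(set(labels.tolist()) - {-1}))       # number of dense clusters found
```

The pairwise distance matrix makes this O(n²); for millions of TLS points, production implementations (e.g. the R `fpc` package cited above, or scikit-learn) use spatial indexing instead.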

  13. Optical and morphological properties of Cirrus clouds determined by the high spectral resolution lidar during FIRE

    NASA Technical Reports Server (NTRS)

    Grund, Christian John; Eloranta, Edwin W.

    1990-01-01

Cirrus clouds reflect incoming solar radiation and trap outgoing terrestrial radiation; therefore, accurate estimation of the global energy balance depends upon knowledge of the optical and physical properties of these clouds. Scattering and absorption by cirrus clouds affect measurements made by many satellite-borne and ground-based remote sensors. Scattering of ambient light by the cloud, and thermal emission from the cloud, can increase measurement background noise. Multiple scattering processes can adversely affect the divergence of optical beams propagating through these clouds. Determination of the optical thickness and the vertical and horizontal extent of cirrus clouds is necessary for the evaluation of all of these effects. Lidar can be an effective tool for investigating these properties. During the FIRE cirrus IFO in October to November 1986, the High Spectral Resolution Lidar (HSRL) was operated from a rooftop site on the campus of the University of Wisconsin at Madison, Wisconsin. Approximately 124 hours of fall-season data were acquired under a variety of cloud optical thickness conditions. Since the IFO, the HSRL data set has been expanded by more than 63.5 hours of additional data acquired during all seasons. Measurements are presented for the range in optical thickness and backscattering phase function of the cirrus clouds, as well as contour maps of extinction-corrected backscatter cross sections indicating cloud morphology. Color-enhanced range-time indicator (RTI) images displaying a variety of cirrus clouds with approximately 30 s time resolution are presented. The importance of extinction correction in the interpretation of cloud height and structure from lidar observations of optically thick cirrus is demonstrated.
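The extinction correction referred to in the last sentence amounts to dividing the attenuated backscatter by the two-way transmission accumulated along the beam. A schematic illustration with made-up profile values (not HSRL data or its actual retrieval, which estimates extinction from the two spectral channels):

```python
import numpy as np

# Schematic lidar profile: a cirrus layer between 5 and 6 km attenuates the
# beam, so backscatter from within and above the layer appears too weak.
dr = 7.5e-3                                   # range bin size, km
r = np.arange(1, 1001) * dr                   # ranges up to 7.5 km
extinction = np.where((r > 5.0) & (r < 6.0), 0.5, 0.0)     # km^-1, made up
true_backscatter = np.where((r > 5.0) & (r < 6.0), 1e-3, 1e-5)

# Two-way transmission T^2(r) = exp(-2 * integral of extinction up to r)
T2 = np.exp(-2 * np.cumsum(extinction) * dr)
measured = true_backscatter * T2              # what the lidar records
corrected = measured / T2                     # extinction-corrected profile
print(np.allclose(corrected, true_backscatter))  # True
```

In practice the extinction profile is itself unknown and must be retrieved before the correction can be applied, which is precisely what the HSRL's separation of molecular and particulate scattering enables.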

  14. Uav Photogrammetry with Oblique Images: First Analysis on Data Acquisition and Processing

    NASA Astrophysics Data System (ADS)

    Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A. M.; Noardo, F.; Spanò, A.

    2016-06-01

In recent years, many studies have revealed the advantages of using airborne oblique images to obtain improved 3D city models (e.g. including façades and building footprints). The data have usually been acquired by expensive airborne cameras installed on traditional aerial platforms. The purpose of this paper is to evaluate the possibility of acquiring and using oblique images for the 3D reconstruction of a historical building, obtained by a UAV (Unmanned Aerial Vehicle) carrying traditional COTS (Commercial Off-the-Shelf) digital cameras (more compact and lighter than the devices generally used), for the realization of a high-level-of-detail architectural survey. The critical issues of acquisition from a common UAV (flight planning strategies, ground control point and check point distribution and measurement, etc.) are described. Another important aspect considered was the evaluation of the possibility of using such systems as low-cost methods for obtaining complete information from an aerial point of view in emergency situations or, as in the present paper, in the cultural heritage application field. The data processing used an SfM-based approach for point cloud generation: different dense image-matching algorithms implemented in several commercial and open-source software packages were tested. The achieved results are analysed and the discrepancies from reference LiDAR data are computed for a final evaluation. The system was tested on the S. Maria Chapel, part of the Novalesa Abbey (Italy).

  15. a Global Registration Algorithm of the Single-Closed Ring Multi-Stations Point Cloud

    NASA Astrophysics Data System (ADS)

    Yang, R.; Pan, L.; Xiang, Z.; Zeng, H.

    2018-04-01

Aimed at the global registration problem of a single closed ring of multi-station point clouds, a formula for the rotation matrix error was constructed according to the definition of error. A global registration algorithm for multi-station point clouds was then derived that minimises this rotation matrix error, and fast-computing formulas for the transformation matrix were given, together with implementation steps and a simulation experiment scheme. Comparing three different processing schemes for multi-station point clouds, the experimental results verified the effectiveness of the new global registration method, which can effectively complete the global registration of the point clouds.
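The pairwise building block of such multi-station registration, estimating the rotation and translation between two stations from common points and measuring the rotation-matrix error, can be sketched with the standard SVD (Kabsch) solution. This illustrates the underlying step, not the paper's specific algorithm:

```python
import numpy as np

def rigid_transform(P, Q):
    """Best-fit R, t with Q ~ R @ P + t (Kabsch/SVD solution): align the
    centred point sets, then read the rotation off the SVD of the covariance."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def rotation_error(R_est, R_true):
    """Angular error (radians) between two rotation matrices."""
    c = (np.trace(R_est @ R_true.T) - 1) / 2
    return np.arccos(np.clip(c, -1, 1))

rng = np.random.default_rng(5)
P = rng.uniform(-1, 1, (100, 3))          # common points seen from station A
angle = 0.4                               # ground-truth rotation about z
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([2.0, -1.0, 0.5])   # same points from station B
R_est, t_est = rigid_transform(P, Q)
print(rotation_error(R_est, R_true) < 1e-8)  # True
```

In a closed ring of stations, the composed pairwise rotations should return to the identity; the residual angular error of that composition is the kind of rotation-matrix error a global adjustment redistributes over the ring.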

  16. 3D indoor modeling using a hand-held embedded system with multiple laser range scanners

    NASA Astrophysics Data System (ADS)

    Hu, Shaoxing; Wang, Duhu; Xu, Shike

    2016-10-01

Accurate three-dimensional perception is a key technology for many engineering applications, including mobile mapping, obstacle detection and virtual reality. In this article, we present a hand-held embedded system designed for constructing 3D representations of structured indoor environments. Different from traditional vehicle-borne mobile mapping methods, the system presented here is capable of efficiently acquiring 3D data while an operator carrying the device traverses the site. It consists of a simultaneous localization and mapping (SLAM) module, a 3D attitude estimation module and a point cloud processing module. The SLAM is based on a scan matching approach using a modern LIDAR system, and the 3D attitude estimate is generated by a navigation filter using inertial sensors. The hardware comprises three 2D time-of-flight laser range finders and an inertial measurement unit (IMU), all rigidly mounted on a body frame. The algorithms are developed within the Robot Operating System (ROS) framework, and the 3D model is constructed using the Point Cloud Library (PCL). Multiple datasets have shown robust performance of the presented system in indoor scenarios.

  17. Use of Remotely Piloted Aircraft System (RPAS) in the analysis of historical landslide occurred in 1885 in the Rječina River Valley, Croatia

    NASA Astrophysics Data System (ADS)

    Dugonjić Jovančević, Sanja; Peranić, Josip; Ružić, Igor; Arbanas, Željko; Kalajžić, Duje; Benac, Čedomir

    2016-04-01

Numerous instability phenomena have been recorded in the Rječina River Valley, near the City of Rijeka, over the past 250 years. Large landslides triggered by rainfall and floods were registered on both sides of the Valley, and a landslide inventory for the Valley was established based on recorded historical events and LiDAR imagery. The Rječina River is a typical karstic river, 18.7 km long, originating in the Gorski Kotar Mountains. The central part of the Valley belongs to the dominant morphostructural unit that strikes in the northwest-southeast direction along the Rječina River. Karstified limestone rock mass is visible at the top of the slopes, while flysch rock mass is present on the lower slopes and at the bottom of the Valley. Different types of movements can be distinguished in the area, such as sliding of slope deposits over the flysch bedrock, rockfalls from limestone cliffs, sliding of huge rocky blocks, and an active landslide on the north-eastern slope. The paper presents an investigation of the dormant landslide located on the south-western slope of the Valley, recorded in 1870 in numerous historical descriptions. Due to intense and long-term rainfall, the landslide was reactivated in 1885, destroying and damaging houses in the eastern part of the Grohovo Village. To predict possible reactivation of the dormant landslide on the south-western side of the Valley, 2D stability back-analyses were performed on the basis of landslide features, in order to approximate the position of the sliding surface and the landslide dimensions. The landslide topography is very steep and the slope is covered by unstable debris material, making any terrestrial geodetic survey hard to perform. A consumer-grade DJI Phantom 2 Remotely Piloted Aircraft System (RPAS) was therefore used to provide data about the present slope topography. 
The landslide 3D point cloud was derived from approximately 200 photographs taken with the RPAS, using structure-from-motion (SfM) photogrammetry. Images were processed using the online Autodesk service "ReCap". Ground control points (GCPs) collected with a total station were identified on the photorealistic point cloud and used for geo-referencing, and the CloudCompare software was used for point cloud processing. This study compared the georeferenced landslide point cloud derived from images with data acquired from laser scanning. The RPAS and SfM application produced a high-accuracy landslide 3D point cloud, characterized by safe and quick data acquisition. Based on the rock mass strength parameters obtained from the back-analysis, a stability analysis of the present slope situation was performed, and the present stability of the landslide body was determined. Unfavourable conditions and possible triggering factors, such as saturation of the slope caused by heavy rain, and earthquakes, were included in the analyses, which enabled estimation of future landslide hazard and risk.
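The point-cloud comparison step (done here in CloudCompare) boils down to nearest-neighbour distances from one cloud to the other. A brute-force sketch on synthetic clouds (real tools use octrees or k-d trees to scale to millions of points):

```python
import numpy as np

def cloud_to_cloud_distance(source, reference):
    """For every source point, the distance to its nearest reference point --
    the basic cloud-to-cloud comparison behind tools like CloudCompare."""
    d = np.linalg.norm(source[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1)

rng = np.random.default_rng(6)
reference = rng.uniform(0, 10, (2000, 3))                 # e.g. laser-scan cloud
source = reference[:500] + rng.normal(0, 0.02, (500, 3))  # e.g. noisy SfM cloud
dist = cloud_to_cloud_distance(source, reference)
print(round(float(dist.mean()), 3))
```

The mean and distribution of these distances summarise how well the photogrammetric cloud matches the laser-scan reference; localised high-distance regions indicate real surface change rather than noise.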

  18. The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Hu, C.; Xia, G.; Xue, H.

    2018-04-01

The registration of terrestrial laser point clouds with close-range images is key to high-precision 3D reconstruction of cultural relics. Given the current demand for high texture resolution in this field, registering point cloud and image data during object reconstruction leads to a one-point-cloud-to-multiple-images problem. In current commercial software, this registration is achieved by manually partitioning the point cloud data, manually matching point cloud and image data, and manually selecting corresponding two-dimensional points in the image and the point cloud; this process not only greatly reduces working efficiency, but also limits the precision of the registration and causes seams in the coloured point cloud texture. To solve these problems, this paper takes an image of the whole object as intermediate data and uses matching techniques to establish an automatic one-to-one correspondence between the point cloud and multiple images. Matching between a central-projection reflectance-intensity image of the point cloud and the optical images is applied to automatically match corresponding feature points, and a Rodrigues-matrix spatial similarity transformation model with iterative weight selection is used to achieve automatic high-accuracy registration of the two kinds of data. This method is expected to serve high-precision and high-efficiency automatic 3D reconstruction of cultural relic objects, and has both scientific research value and practical significance.

  19. Quantitative Analysis of Cotton Canopy Size in Field Conditions Using a Consumer-Grade RGB-D Camera.

    PubMed

    Jiang, Yu; Li, Changying; Paterson, Andrew H; Sun, Shangpeng; Xu, Rui; Robertson, Jon

    2017-01-01

    Plant canopy structure can strongly affect crop functions such as yield and stress tolerance, and canopy size is an important aspect of canopy structure. Manual assessment of canopy size is laborious and imprecise, and cannot measure multi-dimensional traits such as projected leaf area and canopy volume. Field-based high throughput phenotyping systems with imaging capabilities can rapidly acquire data about plants in field conditions, making it possible to quantify and monitor plant canopy development. The goal of this study was to develop a 3D imaging approach to quantitatively analyze cotton canopy development in field conditions. A cotton field was planted with 128 plots, including four genotypes of 32 plots each. The field was scanned by GPhenoVision (a customized field-based high throughput phenotyping system) to acquire color and depth images with GPS information in 2016 covering two growth stages: canopy development, and flowering and boll development. A data processing pipeline was developed, consisting of three steps: plot point cloud reconstruction, plant canopy segmentation, and trait extraction. Plot point clouds were reconstructed using color and depth images with GPS information. In colorized point clouds, vegetation was segmented from the background using an excess-green (ExG) color filter, and cotton canopies were further separated from weeds based on height, size, and position information. Static morphological traits were extracted on each day, including univariate traits (maximum and mean canopy height and width, projected canopy area, and concave and convex volumes) and a multivariate trait (cumulative height profile). Growth rates were calculated for univariate static traits, quantifying canopy growth and development. Linear regressions were performed between the traits and fiber yield to identify the best traits and measurement time for yield prediction. 
The results showed that fiber yield was correlated with static traits after the canopy development stage (R² = 0.35-0.71) and with growth rates in early canopy development stages (R² = 0.29-0.52). Multi-dimensional traits (e.g., projected canopy area and volume) outperformed one-dimensional traits, and the multivariate trait (cumulative height profile) outperformed univariate traits. The proposed approach would be useful for identification of quantitative trait loci (QTLs) controlling canopy size in genetics/genomics studies or for fiber yield prediction in breeding programs and production environments.
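The excess-green segmentation step described above can be sketched as follows; the 0.1 threshold and the toy image are illustrative assumptions, not values from the paper:

```python
import numpy as np

def excess_green_mask(rgb, threshold=0.1):
    """Segment vegetation with the excess-green index ExG = 2g - r - b,
    where r, g, b are channel values normalised by their per-pixel sum."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0                  # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    exg = 2.0 * g - r - b
    return exg > threshold

# toy 1x2 image: a green vegetation pixel and a grey soil pixel
img = np.array([[[40, 180, 30], [120, 120, 120]]], dtype=np.uint8)
print(excess_green_mask(img))  # [[ True False]]
```

In the pipeline above this mask separates vegetation from background before height, size, and position rules split cotton canopies from weeds.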

  20. MODIS Views Variations in Cloud Types

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This MODIS image, centered over the Great Lakes region in North America, shows a variety of cloud types. The clouds at the top of the image, colored pink, are cold, high-level snow and ice clouds, while the neon green clouds are lower-level water clouds. Because different cloud types reflect and emit radiant energy differently, scientists can use MODIS' unique data set to measure the sizes of cloud particles and distinguish between water, snow, and ice clouds. This scene was acquired on Feb. 24, 2000, and is a red, green, blue composite of bands 1, 6, and 31 (0.66, 1.6, and 11.0 microns, respectively). Image by Liam Gumley, Space Science and Engineering Center, University of Wisconsin-Madison

  1. Terrestrial and Aerial Laser Scanning Data Integration Using Wavelet Analysis for the Purpose of 3D Building Modeling

    PubMed Central

    Kedzierski, Michal; Fryskowska, Anna

    2014-01-01

    Visualization techniques have developed greatly in the past few years. Three-dimensional models based on satellite and aerial imagery are now being enhanced by models generated using Aerial Laser Scanning (ALS) data. The most modern of such scanning systems can acquire over 50 points per square meter and register multiple echoes, which allows reconstruction of the terrain together with the terrain cover. However, ALS data accuracy is less than 10 cm and the data is often incomplete: there is no information about ground level (in most scanning systems), and often none around facades or structures that are covered by other structures. Terrestrial Laser Scanning (TLS), in contrast, not only acquires higher-accuracy data (1–5 cm) but is also capable of registering elements which are incomplete or not visible using ALS methods (facades, complicated structures, interiors, etc.). Therefore, to generate a complete 3D model of a building at a high Level of Detail, integration of TLS and ALS data is necessary. This paper presents a wavelet-based method of processing and integrating data from ALS and TLS. Methods of choosing tie points to combine point clouds in different datums are analyzed. PMID:25004157

  2. Terrestrial and aerial laser scanning data integration using wavelet analysis for the purpose of 3D building modeling.

    PubMed

    Kedzierski, Michal; Fryskowska, Anna

    2014-07-07

    Visualization techniques have developed greatly in the past few years. Three-dimensional models based on satellite and aerial imagery are now being enhanced by models generated using Aerial Laser Scanning (ALS) data. The most modern of such scanning systems can acquire over 50 points per square meter and register multiple echoes, which allows reconstruction of the terrain together with the terrain cover. However, ALS data accuracy is less than 10 cm and the data is often incomplete: there is no information about ground level (in most scanning systems), and often none around facades or structures that are covered by other structures. Terrestrial Laser Scanning (TLS), in contrast, not only acquires higher-accuracy data (1-5 cm) but is also capable of registering elements which are incomplete or not visible using ALS methods (facades, complicated structures, interiors, etc.). Therefore, to generate a complete 3D model of a building at a high Level of Detail, integration of TLS and ALS data is necessary. This paper presents a wavelet-based method of processing and integrating data from ALS and TLS. Methods of choosing tie points to combine point clouds in different datums are analyzed.
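As a rough illustration of the kind of decomposition that wavelet analysis provides for such data, here is a single-level 2D Haar transform in plain NumPy; this is a generic sketch, not the authors' actual wavelet machinery:

```python
import numpy as np

def haar2d(x):
    """One level of a 2D Haar wavelet transform.

    Returns the approximation (LL) band and the horizontal, vertical,
    and diagonal detail bands. Input dimensions must be even.
    """
    # transform along rows: pairwise averages (low-pass) and differences (high-pass)
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # transform along columns of each half
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

z = np.arange(16, dtype=float).reshape(4, 4)   # a smooth, plane-like height grid
ll, lh, hl, hh = haar2d(z)
print(hh)  # diagonal details vanish on a plane: all zeros
```

The detail bands localise sharp features (edges, facade breaks) while the approximation band carries the smooth terrain signal, which is what makes wavelets useful for tie-point selection between datasets.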

  3. Spatially explicit spectral analysis of point clouds and geospatial data

    USGS Publications Warehouse

    Buscombe, Daniel D.

    2015-01-01

    The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for the purposes of characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness), however the horizontal scale (wavelength) and spacing of roughness elements is rarely considered. This is despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, therefore readily incorporated into, and combined with, other data analysis tools and frameworks with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing. 
The analytical and computational structure of the toolbox is described, and its functionality is illustrated with an example using high-resolution bathymetric point cloud data collected with a multibeam echosounder.
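As a simplified illustration of spectral analysis of spatially distributed data (PySESA itself works on point clouds; this sketch assumes the data have already been gridded to a regular raster), the dominant roughness wavelength can be read off the 2D FFT power spectrum:

```python
import numpy as np

def power_spectrum_peak_wavelength(z, spacing=1.0):
    """Dominant wavelength of a gridded surface from its 2D FFT power spectrum."""
    z = z - z.mean()                              # remove the DC component
    power = np.abs(np.fft.fft2(z)) ** 2
    fy = np.fft.fftfreq(z.shape[0], d=spacing)    # cycles per unit distance
    fx = np.fft.fftfreq(z.shape[1], d=spacing)
    fr = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # radial frequency
    i, j = np.unravel_index(np.argmax(power), power.shape)
    return 1.0 / fr[i, j]                         # wavelength = 1 / frequency

# synthetic bedform-like ripples with a 16-unit wavelength
x = np.arange(64)
z = np.sin(2.0 * np.pi * x / 16.0)[None, :] * np.ones((64, 1))
print(power_spectrum_peak_wavelength(z))  # ~16.0
```

This is the sense in which spectral analysis recovers amplitude and wavelength scales simultaneously: the peak location gives the horizontal scale, while the integrated power gives the roughness amplitude.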

  4. Uav-Based Photogrammetric Point Clouds and Hyperspectral Imaging for Mapping Biodiversity Indicators in Boreal Forests

    NASA Astrophysics Data System (ADS)

    Saarinen, N.; Vastaranta, M.; Näsi, R.; Rosnell, T.; Hakala, T.; Honkavaara, E.; Wulder, M. A.; Luoma, V.; Tommaselli, A. M. G.; Imai, N. N.; Ribeiro, E. A. W.; Guimarães, R. B.; Holopainen, M.; Hyyppä, J.

    2017-10-01

    Biodiversity is commonly understood as species diversity, but in forest ecosystems variability in structural and functional characteristics can also be treated as a measure of biodiversity. Small unmanned aerial vehicles (UAVs) provide a means for characterizing a forest ecosystem with high spatial resolution, permitting measurement of its physical characteristics from a biodiversity viewpoint. The objective of this study is to examine the applicability of photogrammetric point clouds and hyperspectral imaging acquired with a small UAV helicopter for mapping biodiversity indicators, such as structural complexity and the amount of deciduous and dead trees, at plot level in southern boreal forests. The standard deviation of tree heights within a sample plot, used as a proxy for structural complexity, was the most accurately derived biodiversity indicator, with a mean error of 0.5 m and a standard deviation of 0.9 m. The volume predictions for deciduous and dead trees were underestimated by 32.4 m3/ha and 1.7 m3/ha, respectively, with standard deviations of 50.2 m3/ha for deciduous and 3.2 m3/ha for dead trees. Spectral features describing brightness (i.e. higher reflectance values) prevailed in feature selection, but several wavelengths were represented. Thus, structural complexity can be predicted reliably, though it can be expected to be underestimated, with photogrammetric point clouds obtained with a small UAV. Additionally, plot-level volume of dead trees can be predicted with small mean error, whereas identifying deciduous species was more challenging at plot level.

  5. A Robotic Platform for Corn Seedling Morphological Traits Characterization

    PubMed Central

    Lu, Hang; Tang, Lie; Whitham, Steven A.; Mei, Yu

    2017-01-01

    Crop breeding plays an important role in modern agriculture, improving plant performance, and increasing yield. Identifying the genes that are responsible for beneficial traits greatly facilitates plant breeding efforts for increasing crop production. However, associating genes and their functions with agronomic traits requires researchers to observe, measure, record, and analyze phenotypes of large numbers of plants, a repetitive and error-prone job if performed manually. An automated seedling phenotyping system aimed at replacing manual measurement, reducing sampling time, and increasing the allowable work time is thus highly valuable. Toward this goal, we developed an automated corn seedling phenotyping platform based on a time-of-flight (ToF) camera and an industrial robot arm. The ToF camera is mounted on the end effector of the robot arm. The arm positions the ToF camera at different viewpoints for acquiring 3D point cloud data. A camera-to-arm transformation matrix was calculated using a hand-eye calibration procedure and applied to transfer different viewpoints into an arm-based coordinate frame. Point cloud data filters were developed to remove the noise in the background and in the merged seedling point clouds. A 3D-to-2D projection and an x-axis pixel density distribution method were used to segment the stem and leaves. Finally, separated leaves were fitted with 3D curves for morphological traits characterization. This platform was tested on a sample of 60 corn plants at early growth stages, with two to five leaves. The error ratios of the stem height and leaf length measurements are 13.7% and 13.1%, respectively, demonstrating the feasibility of this robotic system for automated corn seedling phenotyping. PMID:28895892
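The camera-to-arm step described above amounts to applying a 4x4 homogeneous transformation to each viewpoint's points before merging. A minimal sketch, with a hypothetical calibration matrix standing in for the hand-eye result:

```python
import numpy as np

def transform_points(points, T):
    """Map Nx3 camera-frame points into the arm frame with a 4x4 homogeneous matrix."""
    homog = np.hstack([points, np.ones((len(points), 1))])  # append w = 1
    return (homog @ T.T)[:, :3]

# hypothetical calibration result: 90-degree rotation about z plus a translation
T = np.array([[0.0, -1.0, 0.0, 0.10],
              [1.0,  0.0, 0.0, 0.00],
              [0.0,  0.0, 1.0, 0.25],
              [0.0,  0.0, 0.0, 1.00]])
cam_pts = np.array([[1.0, 0.0, 0.0]])
print(transform_points(cam_pts, T))  # [[0.1, 1.0, 0.25]]
```

Once all viewpoints share the arm-based frame, their clouds can simply be concatenated and passed to the noise filters.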

  6. A Robotic Platform for Corn Seedling Morphological Traits Characterization.

    PubMed

    Lu, Hang; Tang, Lie; Whitham, Steven A; Mei, Yu

    2017-09-12

    Crop breeding plays an important role in modern agriculture, improving plant performance, and increasing yield. Identifying the genes that are responsible for beneficial traits greatly facilitates plant breeding efforts for increasing crop production. However, associating genes and their functions with agronomic traits requires researchers to observe, measure, record, and analyze phenotypes of large numbers of plants, a repetitive and error-prone job if performed manually. An automated seedling phenotyping system aimed at replacing manual measurement, reducing sampling time, and increasing the allowable work time is thus highly valuable. Toward this goal, we developed an automated corn seedling phenotyping platform based on a time-of-flight (ToF) camera and an industrial robot arm. The ToF camera is mounted on the end effector of the robot arm. The arm positions the ToF camera at different viewpoints for acquiring 3D point cloud data. A camera-to-arm transformation matrix was calculated using a hand-eye calibration procedure and applied to transfer different viewpoints into an arm-based coordinate frame. Point cloud data filters were developed to remove the noise in the background and in the merged seedling point clouds. A 3D-to-2D projection and an x-axis pixel density distribution method were used to segment the stem and leaves. Finally, separated leaves were fitted with 3D curves for morphological traits characterization. This platform was tested on a sample of 60 corn plants at early growth stages, with two to five leaves. The error ratios of the stem height and leaf length measurements are 13.7% and 13.1%, respectively, demonstrating the feasibility of this robotic system for automated corn seedling phenotyping.

  7. Standoff detection of bioaerosols over wide area using a newly developed sensor combining a cloud mapper and a spectrometric LIF lidar

    NASA Astrophysics Data System (ADS)

    Buteau, Sylvie; Simard, Jean-Robert; Roy, Gilles; Lahaie, Pierre; Nadeau, Denis; Mathieu, Pierre

    2013-10-01

    A standoff sensor called BioSense was developed to demonstrate the capacity to map, track and classify bioaerosol clouds from a distant range and over a wide area. The system is based on a two-step dynamic surveillance concept: 1) cloud detection using an infrared (IR) scanning cloud mapper and 2) cloud classification based on staring ultraviolet (UV) Laser Induced Fluorescence (LIF) interrogation. The system can be operated either in an automatic surveillance mode or with manual intervention. Automatic surveillance comprises several steps: mission planning, sensor deployment, background monitoring, surveillance, cloud detection, classification and finally alarm generation based on the classification result. One of the main challenges is the classification step, which relies on a spectrally resolved UV LIF signature library. This library is currently built from in-chamber releases of various materials that are simultaneously characterized with the standoff sensor and referenced with point sensors such as an Aerodynamic Particle Sizer® (APS). The system was tested at three different locations in order to evaluate its capacity to operate in diverse surroundings and various environmental conditions. It showed generally good performance even though troubleshooting of the system was not completed before initiating the Test and Evaluation (T&E) process. Standoff system performance appeared to be highly dependent on the type of challenge, the climatic conditions and the period of day. The real-time results, combined with the experience acquired during the 2012 T&E, made it possible to identify future improvements and avenues of investigation.

  8. An Approach of Web-based Point Cloud Visualization without Plug-in

    NASA Astrophysics Data System (ADS)

    Ye, Mengxuan; Wei, Shuangfeng; Zhang, Dongmei

    2016-11-01

    With advances in three-dimensional laser scanning technology, the demand for visualization of massive point clouds is increasingly urgent. Until a few years ago, point cloud visualization was limited to desktop-based solutions; since the introduction of WebGL, several web renderers have become available. This paper addresses the current issues in web-based point cloud visualization and proposes a method that requires no plug-in. The method combines ASP.NET and WebGL technologies, using the spatial database PostgreSQL to store data and the open web technologies HTML5 and CSS3 to implement the user interface; an online visualization system for 3D point clouds is developed in JavaScript with web interactions. Finally, the method is applied to a real case. Experiments show that the new model is of great practical value and avoids the shortcomings of existing WebGIS solutions.

  9. Model for Semantically Rich Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Hallot, P.; Billen, R.

    2017-10-01

    This paper proposes an interoperable model for managing high dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing the 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The smart point cloud model developed here brings intelligence to point clouds via three connected meta-models, linking available knowledge and classification procedures to permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python with a PostgreSQL database, allows combining semantic and spatial concepts for basic hybrid queries on different point clouds.
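A toy illustration of the hybrid semantic-plus-spatial query idea: filter points by both a semantic label and a spatial predicate. The actual model uses connected meta-models and a PostgreSQL back end; the flat schema below is invented purely for illustration:

```python
import numpy as np

# minimal in-memory stand-in for a semantically enriched point cloud:
# XYZ coordinates plus one semantic label per point
points = np.array([[0.0, 0.0, 0.0],
                   [0.5, 0.2, 2.8],
                   [4.0, 4.0, 0.1],
                   [0.2, 0.4, 2.9]])
labels = np.array(["floor", "ceiling", "floor", "ceiling"])

def hybrid_query(pts, labs, label, center, radius):
    """Hybrid query: points carrying a semantic label within `radius` of `center`."""
    spatial = np.linalg.norm(pts - center, axis=1) <= radius   # spatial predicate
    semantic = labs == label                                   # semantic predicate
    return pts[spatial & semantic]

# "ceiling points near this location" combines both kinds of knowledge
print(hybrid_query(points, labels, "ceiling", np.array([0.0, 0.0, 3.0]), 1.0))
```

In the proposed model the semantic side would come from domain ontologies and classification procedures rather than a hand-written label array.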

  10. Self-Similar Spin Images for Point Cloud Matching

    NASA Astrophysics Data System (ADS)

    Pulido, Daniel

    The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds have allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) have provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research will address the problem of fusing two point clouds from potentially different sources. Specifically, we will consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds. 
The specific focus of this research will be in developing a novel local descriptor based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two point clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and a stereo-image-derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, thereby defining the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case, being just the maximum order statistic. Therefore, by studying the entire histogram of these nearest neighbors, a more robust method to detect points that are present in one cloud but not the other is expected. This approach is applied at multiple resolutions: changes detected at the coarsest level will yield large missing targets, and finer levels will yield smaller targets.
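The Nearest Neighbor Order Statistic idea can be sketched directly: sort the nearest-neighbour distances from one cloud to the other, and the maximum order statistic reproduces the directed Hausdorff distance. Toy data; the multiresolution machinery is omitted:

```python
import numpy as np

def nn_order_statistics(cloud_a, cloud_b):
    """Sorted nearest-neighbour distances from cloud_a to cloud_b.

    The maximum order statistic equals the directed Hausdorff distance;
    examining the whole sorted sequence (histogram) is more robust to noise.
    """
    d = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=2)
    return np.sort(d.min(axis=1))

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [5.0, 0.0, 0.0]])  # last point is new
b = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
stats = nn_order_statistics(a, b)
print(stats)       # [0. 0. 4.]
print(stats[-1])   # directed Hausdorff distance: 4.0
```

Points whose order statistic sits far in the upper tail are the candidates for content present in one cloud but missing from the other.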

  11. Simultaneous colour visualizations of multiple ALS point cloud attributes for land cover and vegetation analysis

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert

    2014-05-01

    LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered best results: Echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width usually have colours contrasting from terrain or vegetation. 
This allows efficient visual interpretation of the point cloud in planar, profile and 3D views since it reduces crowding of the scene and delivers intuitive contextual information. The resulting visualization has proved useful for vegetation analysis for habitat mapping, and can also be applied as a first step for point cloud level classification. An interactive demonstration of the visualization script is shown during poster attendance, including the opportunity to view your own point cloud sample files.
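The colour-channel assignment described above (echo amplitude to Red, echo width to Green, normalised height above the DTM to Blue) can be sketched as follows; the scaling ranges are illustrative assumptions, not the OPALS parameters:

```python
import numpy as np

def attributes_to_rgb(amplitude, echo_width, height, height_max=30.0):
    """Map LIDAR point attributes to 8-bit RGB channels:
    amplitude -> R, echo width -> G, normalised height above DTM -> B."""
    def scale(v, lo, hi):
        return np.clip((v - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)
    r = scale(amplitude, amplitude.min(), amplitude.max())
    g = scale(echo_width, echo_width.min(), echo_width.max())
    b = scale(height, 0.0, height_max)
    return np.stack([r, g, b], axis=-1)

# three points: bare soil (strong narrow echo, low), a shrub, and a high tree
rgb = attributes_to_rgb(np.array([200.0, 120.0, 60.0]),   # echo amplitude
                        np.array([1.0, 2.5, 4.0]),        # echo width
                        np.array([0.1, 2.0, 25.0]))       # height above DTM
print(rgb)  # soil comes out red-dominant, the high tree blue/green-dominant
```

Because the colour is baked into the R, G, B fields of the point file itself, any standard viewer reproduces the visualization without special support.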

  12. Desktop Cloud Visualization: the new technology to remote access 3D interactive applications in the Cloud.

    PubMed

    Torterolo, Livia; Ruffino, Francesco

    2012-01-01

    In the proposed demonstration we will present DCV (Desktop Cloud Visualization): a unique technology that allows users to remotely access 2D and 3D interactive applications over a standard network. This allows geographically dispersed doctors to work collaboratively, and to acquire anatomical or pathological images and visualize them for further investigation.

  13. Lightweight Electronic Camera for Research on Clouds

    NASA Technical Reports Server (NTRS)

    Lawson, Paul

    2006-01-01

    "Micro-CPI" (wherein "CPI" signifies "cloud-particle imager") is the name of a small, lightweight electronic camera that has been proposed for use in research on clouds. It would acquire and digitize high-resolution (3- m-pixel) images of ice particles and water drops at a rate up to 1,000 particles (and/or drops) per second.

  14. A Cut in the Clouds

    NASA Image and Video Library

    2017-12-08

    Like a ship carving its way through the sea, the South Georgia and South Sandwich Islands parted the clouds. The Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite acquired this natural-color image on February 2, 2017. The ripples in the clouds are known as gravity waves. NASA image by Jeff Schmaltz, LANCE/EOSDIS Rapid Response #nasagoddard

  15. Illuminating Northern California’s Active Faults

    USGS Publications Warehouse

    Prentice, Carol S.; Crosby, Christopher J.; Whitehill, Caroline S.; Arrowsmith, J. Ramon; Furlong, Kevin P.; Philips, David A.

    2009-01-01

    Newly acquired light detection and ranging (lidar) topographic data provide a powerful community resource for the study of landforms associated with the plate boundary faults of northern California (Figure 1). In the spring of 2007, GeoEarthScope, a component of the EarthScope Facility construction project funded by the U.S. National Science Foundation, acquired approximately 2000 square kilometers of airborne lidar topographic data along major active fault zones of northern California. These data are now freely available in point cloud (x, y, z coordinate data for every laser return), digital elevation model (DEM), and KMZ (zipped Keyhole Markup Language, for use in Google Earth™ and other similar software) formats through the GEON OpenTopography Portal (http://www.OpenTopography.org/data). Importantly, vegetation can be digitally removed from lidar data, producing high-resolution images (0.5- or 1.0-meter DEMs) of the ground surface beneath forested regions that reveal landforms typically obscured by vegetation canopy (Figure 2).

  16. a Fast Method for Measuring the Similarity Between 3d Model and 3d Point Cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods, which are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
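A hedged sketch of the ratio definition, SimMC = weighted surface area / DistMC, using uniform weights and toy data; the paper's sampling density and distance-weighting schemes are not reproduced here:

```python
import numpy as np

def dist_model_to_cloud(model_samples, cloud, weights=None):
    """DistMC: weighted mean of nearest-neighbour distances from points
    sampled on the model surface to the point cloud (weights illustrative)."""
    d = np.linalg.norm(model_samples[:, None, :] - cloud[None, :, :], axis=2).min(axis=1)
    if weights is None:
        weights = np.ones(len(model_samples))
    return np.average(d, weights=weights)

def sim_mc(surface_area, model_samples, cloud):
    """SimMC as the ratio of model surface area to DistMC: larger is more similar."""
    return surface_area / dist_model_to_cloud(model_samples, cloud)

samples = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
cloud_near = samples + 0.01   # cloud hugging the model surface
cloud_far = samples + 1.0     # displaced cloud
print(sim_mc(1.0, samples, cloud_near) > sim_mc(1.0, samples, cloud_far))  # True
```

The key design choice mirrors the abstract: only the model-to-cloud direction is evaluated, avoiding the expensive cloud-to-model pass over every point in the cloud.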

  17. On the Quality of Point-Clouds Derived from Sfm-Photogrammetry Applied to UAS Imagery

    NASA Astrophysics Data System (ADS)

    Carbonneau, P.; James, T.

    2014-12-01

    Structure from Motion photogrammetry (SfM-photogrammetry) recently appeared in environmental sciences as an impressive tool allowing for the creation of topographic data from unstructured imagery. Several authors have tested the performance of SfM-photogrammetry vs that of TLS or dGPS. Whilst the initial results were very promising, there is currently a growing awareness that systematic deformations occur in DEMs and point-clouds derived from SfM-photogrammetry. Notably, some authors have identified a systematic doming manifest as an increasing error vs distance to the model centre. Simulation studies have confirmed that this error is due to errors in the calibration of camera distortions. This work aims to further investigate these effects in the presence of real data. We start with a dataset of 220 images acquired from a sUAS. After obtaining an initial self-calibration of the camera lens with Agisoft Photoscan, our method consists in applying systematic perturbations to 2 key lens parameters: Focal length and the k1 distortion parameter. For each perturbation, a point-cloud was produced and compared to LiDAR data. After deriving the mean and standard deviation of the error residuals (ɛ), a 2nd order polynomial surface was fitted to the errors point-cloud and the peak ɛ defined as the mathematical extrema of this surface. The results are presented in figure 1. This figure shows that lens perturbations can induce a range of errors with systematic behaviours. Peak ɛ is primarily controlled by K1 with a secondary control exerted by the focal length. These results allow us to state that: To limit the peak ɛ to 10cm, the K1 parameter must be calibrated to within 0.00025 and the focal length to within 2.5 pixels (≈10 µm). This level of calibration accuracy can only be achieved with proper design of image acquisition and control network geometry. Our main point is therefore that SfM is not a bypass to a rigorous and well-informed photogrammetric approach. 
Users of SfM-photogrammetry will still require basic training and knowledge in the fundamentals of photogrammetry. This is especially true for applications where very small topographic changes need to be detected or where gradient-sensitive processes need to be modelled.
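The doming mechanism discussed above stems from first-order radial distortion: a residual error in K1 displaces image points by an amount that grows with the square of the radial distance, which integrates into a systematic dome in the reconstructed surface. A minimal sketch in normalised image coordinates, using the 0.00025 tolerance quoted above:

```python
import numpy as np

def apply_k1(xy, k1):
    """First-order radial distortion: x_d = x * (1 + k1 * r^2),
    for x, y in normalised image coordinates."""
    r2 = (xy ** 2).sum(axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2)

# displacement grows quadratically with radius, so a small k1 calibration
# error maps to a smooth, dome-shaped deformation of the point cloud
corner = np.array([[0.7, 0.5]])                 # near the image corner, r^2 = 0.74
shift = apply_k1(corner, 0.00025) - corner
print(shift)   # ~[[1.30e-04, 9.25e-05]] in normalised units
```

At the image centre the shift vanishes, which is why the resulting DEM error is smallest at the model centre and grows outward.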

  18. Use of Assisted Photogrammetry for Indoor and Outdoor Navigation Purposes

    NASA Astrophysics Data System (ADS)

    Pagliari, D.; Cazzaniga, N. E.; Pinto, L.

    2015-05-01

    Nowadays, devices and applications that require navigation solutions are continuously growing: consider, for instance, the increasing demand for mapping information or the development of applications based on users' location. In some cases an approximate solution (e.g. at room level) may be sufficient, but in most cases a better solution is required. The navigation problem has long been solved using Global Navigation Satellite Systems (GNSS). However, GNSS can be useless in obstructed areas, such as urban canyons or inside buildings. An interesting low-cost alternative is photogrammetry, assisted by additional information to scale the photogrammetric problem and to recover a solution even in situations that are critical for image-based methods (e.g. poorly textured surfaces). In this paper, the use of assisted photogrammetry has been tested for both outdoor and indoor scenarios. The outdoor navigation problem has been addressed by developing a positioning system that uses Ground Control Points extracted from urban maps as constraints and tie points automatically extracted from the images acquired during the survey. The proposed approach has been tested under different scenarios, recovering the followed trajectory with an accuracy of 0.20 m. For indoor navigation, a solution has been designed to integrate the data delivered by a Microsoft Kinect, identifying interesting features on the RGB images and re-projecting them onto the point clouds generated from the delivered depth maps. These points have then been used to estimate the rotation matrix between subsequent point clouds and, consequently, to recover the trajectory with an error of a few centimetres.
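Estimating the rotation (and translation) between subsequent point clouds from matched features, as in the indoor pipeline above, is classically done with the SVD-based Kabsch algorithm. A minimal sketch, assuming already-matched point pairs rather than the paper's Kinect feature tracks:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t such that Q ~ P @ R.T + t
    (Kabsch algorithm via SVD). P and Q are (N, 3) matched point sets."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Recover a known 30-degree rotation about Z plus a translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -1.0, 2.0])
rng = np.random.default_rng(1)
P = rng.normal(size=(100, 3))
Q = P @ R_true.T + t_true
R_est, t_est = rigid_transform(P, Q)
```

In practice the feature matches contain outliers, so a robust wrapper (e.g. RANSAC around this estimator) is normally needed.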

  19. Motion Estimation System Utilizing Point Cloud Registration

    NASA Technical Reports Server (NTRS)

    Chen, Qi (Inventor)

    2016-01-01

    A system and method for estimating the motion of a machine are disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in the vicinity of the machine. The method may further include generating a first extended Gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two-dimensional distribution for points in the first EGI segment and a second two-dimensional distribution for points in the second EGI segment. The method may further include estimating the motion of the machine based on the first and second two-dimensional distributions.
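An extended Gaussian image is, in essence, a histogram of surface-normal directions on the unit sphere. A crude sketch of building one (the latitude/longitude bin layout is an assumption for illustration, not the patent's segmentation):

```python
import math
from collections import Counter

def egi_histogram(normals, n_lat=8, n_lon=16):
    """Bin unit surface normals into latitude/longitude cells on the
    unit sphere, producing a crude extended Gaussian image (EGI)."""
    hist = Counter()
    for nx, ny, nz in normals:
        lat = math.acos(max(-1.0, min(1.0, nz)))     # polar angle, 0..pi
        lon = math.atan2(ny, nx) % (2 * math.pi)     # azimuth, 0..2pi
        i = min(int(lat / math.pi * n_lat), n_lat - 1)
        j = min(int(lon / (2 * math.pi) * n_lon), n_lon - 1)
        hist[(i, j)] += 1
    return hist

# All normals pointing +Z fall into a single polar bin
h = egi_histogram([(0.0, 0.0, 1.0)] * 5)
```

Comparing the EGIs of two clouds then reduces to comparing histograms, which is how rotation can be estimated independently of translation.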

  20. From the air to digital landscapes: generating reach-scale topographic models from aerial photography in gravel-bed rivers

    NASA Astrophysics Data System (ADS)

    Vericat, Damià; Narciso, Efrén; Béjar, Maria; Tena, Álvaro; Brasington, James; Gibbins, Chris; Batalla, Ramon J.

    2014-05-01

    Digital Terrain Models are fundamental to characterise landscapes, to support numerical modelling and to monitor topographic changes. Recent advances in topography, remote sensing and geomatics are providing new opportunities to obtain high-density, high-quality topographic data rapidly. In this paper we present an integrated methodology to rapidly obtain reach-scale topographic models of fluvial systems. This methodology has been tested and is being applied to develop event-scale terrain models of an 11-km river reach in the highly dynamic Upper Cinca (NE Iberian Peninsula). This research is conducted within the framework of the MorphSed project. The methodology integrates (a) the acquisition of dense point clouds of the exposed floodplain (aerial photography and digital photogrammetry); (b) the registration of all observations to the same coordinate system (using RTK-GPS surveyed GCPs); (c) the acquisition of bathymetric data (using aDcp measurements integrated with RTK-GPS); (d) the intelligent decimation of survey observations (using the open source TopCat toolkit); and, finally, (e) data fusion (elaborating Digital Elevation Models). In this paper, special emphasis is given to the acquisition and registration of point clouds. 3D point clouds are obtained from aerial photography by means of automated digital photogrammetry. Aerial photographs are taken at 275 m above the ground with an SLR digital camera manually operated from an autogyro. Four flight paths are defined in order to cover the 11 km long and 500 m wide river reach. A total of 45 minutes is required to fly along these paths. The camera was calibrated beforehand to ensure an image resolution of around 5 cm. A total of 220 GCPs are deployed and surveyed by RTK-GPS before the flight is conducted. Two people and one full workday are necessary to deploy and survey the full set of GCPs. Field data acquisition may be finalised in less than 2 days. 
Structure-from-Motion is subsequently applied in the lab using Agisoft PhotoScan: photographs are aligned and a 3D point cloud is generated. GCPs are used to geo-register all point clouds. This task may be time consuming, since each GCP needs to be identified in at least two of the pictures. A first automatic identification of GCP positions is performed in the rest of the photos, although user supervision is necessary. Preliminary results show that geo-registration errors between 0.08 and 0.10 m can be obtained. The number of GCPs is being progressively reduced and the quality of the point cloud assessed against check points (the withheld GCPs). A critical analysis of GCP density and scene locations is being performed (results in preparation). The results show that automated digital photogrammetry may provide new opportunities in the acquisition of topographic data at multiple temporal and spatial scales, being competitive with other, more expensive techniques that, in turn, may require much more time to acquire field observations. SfM offers new opportunities to develop event-scale terrain models of fluvial systems suitable for hydraulic modelling and for studying topographic change in highly dynamic environments.
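Intelligent decimation of survey observations, step (d) above, can be illustrated with a simple grid-based thinning that keeps one representative point per cell; this is only in the spirit of toolkits like TopCat, not their actual algorithm:

```python
def decimate_min_z(points, cell=1.0):
    """Keep only the lowest point per (x, y) grid cell: a simple
    ground-biased decimation of a dense (x, y, z) point cloud."""
    best = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in best or z < best[key][2]:
            best[key] = (x, y, z)
    return list(best.values())

# Two points share a cell; the lower one survives
pts = [(0.1, 0.1, 5.0), (0.2, 0.3, 1.0), (1.5, 0.2, 2.0)]
thinned = decimate_min_z(pts, cell=1.0)
```

Real decimation toolkits also retain per-cell statistics (mean, minimum, maximum, roughness) rather than a single point, so that sub-grid variability is not lost.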

  1. 3D Building Façade Reconstruction Using Handheld Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Sadeghi, F.; Arefi, H.; Fallah, A.; Hahn, M.

    2015-12-01

    Three-dimensional (3D) building modelling has been an interesting topic of research for decades, and it seems that photogrammetric methods provide the only economic means to acquire truly 3D city data. Given the enormous development of 3D building reconstruction for applications such as navigation systems, location-based services and urban planning, the need to consider semantic features (such as windows and doors) becomes more essential than ever; a 3D model of a building as a simple block is no longer sufficient. To reconstruct the façade elements completely, we employed high-density point cloud data obtained from a handheld laser scanner. The advantage of a handheld laser scanner, with its capability of directly acquiring very dense 3D point clouds, is that there is no need to derive three-dimensional data from multiple images using structure-from-motion techniques. This paper presents a grammar-based algorithm for façade reconstruction using handheld laser scanner data. The proposed method is a combination of bottom-up (data-driven) and top-down (model-driven) methods in which the façade's basic elements are first extracted in a bottom-up way and then serve as prior knowledge for further processing to complete the models, especially in occluded and incomplete areas. The first step of data-driven modelling is using a conditional RANSAC (RANdom SAmple Consensus) algorithm to detect the façade plane in the point cloud data and remove noisy objects such as trees, pedestrians, traffic signs and poles. Then, the façade planes are divided into three depth layers to detect protrusion, indentation and wall points using a density histogram. Owing to the inappropriate reflection of laser beams from glass, windows appear as holes in the point cloud data and can therefore be distinguished and extracted more easily than the other façade elements. 
The next step is rasterizing the indentation layer, which holds the window and door information. After the rasterization process, morphological operators are applied in order to remove small irrelevant objects. Next, horizontal splitting lines are employed to determine floors, and vertical splitting lines are employed to detect walls, windows and doors. The window, door and wall elements, named terminals, are clustered during the classification process. Each terminal has width as a characteristic property. Among the terminals, windows and doors are designated as geometry tiles in the definition of the vocabulary of the grammar rules. Higher-order structures inferred by grouping the tiles yield the production rules. Together with the three-dimensionally modelled façade elements, the rules constitute a formal grammar, named the façade grammar. This grammar holds all the information that is necessary to reconstruct façades in the style of the given building. Thus, it can be used to improve and complete façade reconstruction in areas with no or limited sensor data. Finally, a 3D reconstructed façade model is generated whose geometric accuracy, in both size and position, depends on the density of the raw point cloud.
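The plane-detection step above rests on RANSAC. A plain (unconditioned) RANSAC plane fit, which the paper's conditional variant extends, might look like this:

```python
import random

def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
    """Plain RANSAC plane fit: sample 3 points, form the plane normal
    via a cross product, and count inliers within `tol` of the plane."""
    rng = random.Random(seed)
    best_inliers, best_plane = [], None
    for _ in range(n_iter):
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = rng.sample(points, 3)
        u = (x2 - x1, y2 - y1, z2 - z1)
        v = (x3 - x1, y3 - y1, z3 - z1)
        a = u[1] * v[2] - u[2] * v[1]
        b = u[2] * v[0] - u[0] * v[2]
        c = u[0] * v[1] - u[1] * v[0]
        norm = (a * a + b * b + c * c) ** 0.5
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        a, b, c = a / norm, b / norm, c / norm
        d = -(a * x1 + b * y1 + c * z1)
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c * p[2] + d) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers, best_plane = inliers, (a, b, c, d)
    return best_plane, best_inliers

# A z = 0 facade-like plane plus a few off-plane "tree" points
pts = [(i * 0.1, j * 0.1, 0.0) for i in range(10) for j in range(10)]
pts += [(0.5, 0.5, 2.0), (0.2, 0.8, 3.0)]
plane, inliers = ransac_plane(pts)
```

The "conditional" variant in the paper adds constraints on which samples and planes are acceptable; those conditions are not reproduced here.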

  2. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    NASA Astrophysics Data System (ADS)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter. Hence, the camera is accessible to public users and convenient for reaching narrow areas. The acquired images cover various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 m in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC open-source software developed by IGN, France. The output 3D models are represented using standard 3D point cloud and triangulated surface formats such as .ply, .off and .obj. To obtain efficient 3D models, post-processing techniques such as noise reduction, surface simplification and surface reconstruction are required for the final results. The reconstructed 3D models can be provided for public access via websites, DVDs and printed materials. The highly accurate 3D models can also serve as reference data for heritage objects that must be restored after deterioration over their lifetime, natural disasters, etc.

  3. Using street view imagery for 3-D survey of rock slope failures

    NASA Astrophysics Data System (ADS)

    Voumard, Jérémie; Abellán, Antonio; Nicolet, Pierrick; Penna, Ivanna; Chanut, Marie-Aurélie; Derron, Marc-Henri; Jaboyedoff, Michel

    2017-12-01

    We discuss here different challenges and limitations of surveying rock slope failures using 3-D reconstruction from image sets acquired from street view imagery (SVI). We show how rock slope surveying can be performed using two or more image sets of the same site acquired from online imagery at different times. Three sites in the French Alps were selected as pilot study areas: (1) a cliff beside a road where a protective wall collapsed, with two image sets (60 and 50 images) captured within a 6-year time frame; (2) a large-scale active landslide on a slope 250 m from the road, with seven image sets (50 to 80 images per set) from five different periods, including three image sets for one period; (3) a collapsed cliff above a tunnel, with two image sets captured within a 4-year time frame. The analysis includes the use of different structure from motion (SfM) programs and a comparison between the extracted photogrammetric point clouds and a lidar-derived mesh used as ground truth. Results show that both landslide deformation and fallen volumes were clearly identifiable in the different point clouds. Results are site- and software-dependent, varying with the image set and the number of images, with model accuracies ranging between 0.2 m and 3.8 m in the best and worst scenarios, respectively. Although the generation of 3-D models from SVI has some limitations, this approach allowed us to obtain preliminary 3-D models of an area without on-field images, enabling extraction of the pre-failure topography that would not otherwise be available.

  4. Assessing land leveling needs and performance with unmanned aerial system

    NASA Astrophysics Data System (ADS)

    Enciso, Juan; Jung, Jinha; Chang, Anjin; Chavez, Jose Carlos; Yeom, Junho; Landivar, Juan; Cavazos, Gabriel

    2018-01-01

    Land leveling is the initial step for increasing irrigation efficiencies in surface irrigation systems. The objective of this paper was to evaluate the potential utilization of an unmanned aerial system (UAS) equipped with a digital camera to map ground elevations of a grower's field and compare them with field measurements. A secondary objective was to use UAS data to obtain a digital terrain model before and after land leveling. UAS data were used to generate orthomosaic images and three-dimensional (3-D) point cloud data by applying the structure from motion algorithm to the images. Ground control points (GCPs) were established around the study area, and they were surveyed using a survey-grade dual-frequency GPS unit for accurate georeferencing of the geospatial data products. A digital surface model (DSM) was then generated from the 3-D point cloud data before and after laser leveling to determine the topography before and after the leveling. The UAS-derived DSM was compared with terrain elevation measurements acquired from land surveying equipment for validation. An error of 0.3%, corresponding to a root mean square error of 0.11 m, was observed between the UAS-derived and ground-measured elevation data. The results indicated that UAS could be an efficient method for determining terrain elevation with acceptable accuracy when there are no plants on the ground, and that it can be used to assess the performance of a land leveling project.
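The reported agreement can be reproduced in form with a simple root-mean-square-error computation; the elevation values below are hypothetical, not the study's data:

```python
import math

def rmse(predicted, observed):
    """Root mean square error between paired elevation samples."""
    assert len(predicted) == len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(predicted))

# Hypothetical UAS-derived vs ground-surveyed spot elevations (metres)
uas    = [10.05, 12.10, 11.95, 13.20]
survey = [10.00, 12.00, 12.00, 13.00]
err = rmse(uas, survey)
```

The same computation, applied to DSM cells sampled at the surveyed locations, yields the validation statistic quoted in the abstract.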

  5. Remote sensing of smoke, clouds, and fire using AVIRIS data

    NASA Technical Reports Server (NTRS)

    Gao, Bo-Cai; Kaufman, Yoram J.; Green, Robert O.

    1993-01-01

    Clouds remain the greatest element of uncertainty in predicting global climate change. During deforestation and biomass burning processes, a variety of atmospheric gases, including CO2 and SO2, and smoke particles are released into the atmosphere. The smoke particles can have important effects on the formation of clouds because of the increased concentration of cloud condensation nuclei. They can also affect cloud albedo through changes in cloud microphysical properties. Recently, great interest has arisen in understanding the interaction between smoke particles and clouds. We describe our studies of smoke, clouds, and fire using the high spatial and spectral resolution data acquired with the NASA/JPL Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).

  6. What's the Point of a Raster? Advantages of 3D Point Cloud Processing over Raster Based Methods for Accurate Geomorphic Analysis of High Resolution Topography.

    NASA Astrophysics Data System (ADS)

    Lague, D.

    2014-12-01

    High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments and methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output unstructured 3D point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of positional accuracy and spatial resolution, and in a more or less controlled degree of interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster-based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements, and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare point-cloud-based and raster-based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates computed directly from point clouds) and the interaction of vegetation, hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.
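Point-cloud change detection ultimately reduces to distances between two epochs of points. A brute-force cloud-to-cloud distance (the naive baseline that methods like M3C2 improve on with local plane fits and spatial indexing) can be sketched as:

```python
def cloud_to_cloud_distances(cloud_a, cloud_b):
    """For each point in cloud_a, the Euclidean distance to its nearest
    neighbour in cloud_b.  Brute force, O(len(a) * len(b)); real tools
    use k-d trees and surface-normal-oriented distances instead."""
    dists = []
    for ax, ay, az in cloud_a:
        d2 = min((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                 for bx, by, bz in cloud_b)
        dists.append(d2 ** 0.5)
    return dists

# One point moved up by 0.5 m between epochs; the other is unchanged
before = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
after  = [(0.0, 0.0, 0.5), (1.0, 0.0, 0.0)]
d = cloud_to_cloud_distances(before, after)
```

Note that raw nearest-neighbour distance is unsigned and sensitive to point density, which is precisely why normal-oriented, averaging approaches such as M3C2 were developed.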

  7. Topographic lidar survey of Dauphin Island, Alabama and Chandeleur, Stake, Grand Gosier and Breton Islands, Louisiana, July 12-14, 2013

    USGS Publications Warehouse

    Guy, Kristy K.; Plant, Nathaniel G.

    2014-01-01

    This Data Series Report contains lidar elevation data collected on July 12 and 14, 2013, for Dauphin Island, Alabama, and Chandeleur, Stake, Grand Gosier and Breton Islands, Louisiana. Classified point cloud data—data points described in three dimensions—in lidar data exchange format (LAS) and bare earth digital elevation models (DEMs) in ERDAS Imagine raster format (IMG) are available as downloadable files. Photo Science, Inc., was contracted by the U.S. Geological Survey (USGS) to collect and process these data. The lidar data were acquired at a horizontal spacing (or nominal pulse spacing) of 1 meter (m) or less. The USGS surveyed points within the project area from July 14–23, 2013, for use in ground control and accuracy assessment. Photo Science, Inc., calculated a vertical root mean square error (RMSEz) of 0.012 m by comparing 10 surveyed points to an interpolated elevation surface of unclassified lidar data. The USGS also checked the data using 80 surveyed points and unclassified lidar point elevation data and found an RMSEz of 0.073 m. The project specified an RMSEz of 0.0925 m or less. The lidar survey was acquired to document the short- and long-term changes of several different barrier island systems. Specifically, this survey supports detailed studies of Chandeleur and Dauphin Islands that resolve annual changes in beaches, berms and dunes associated with processes driven by storms, sea-level rise, and even human restoration activities. These lidar data are available to Federal, State and local governments, emergency-response officials, resource managers, and the general public.

  8. Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.

    PubMed

    De Queiroz, Ricardo; Chou, Philip A

    2016-06-01

    In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capture and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using well-established octree scanning. Results show that the proposed solution performs comparably to the current state of the art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state of the art in intra-frame compression of point clouds for real-time 3D video.
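The core of such a region-adaptive Haar transform is a weight-aware butterfly that merges two occupied sibling nodes into a low-pass and a high-pass coefficient. A minimal sketch of one merge step and its exact inverse (a simplification for illustration, not the paper's full transform):

```python
import math

def raht_merge(c1, w1, c2, w2):
    """One butterfly of a region-adaptive Haar transform: combine two
    sibling colour values with occupancy weights w1, w2 into a low-pass
    (DC-like) and a high-pass (detail) coefficient."""
    a, b = math.sqrt(w1), math.sqrt(w2)
    n = math.sqrt(w1 + w2)
    low = (a * c1 + b * c2) / n
    high = (-b * c1 + a * c2) / n
    return low, high, w1 + w2          # merged node carries summed weight

def raht_split(low, high, w1, w2):
    """Inverse butterfly: the 2x2 transform is orthonormal, so the
    inverse is its transpose and reconstruction is exact."""
    a, b = math.sqrt(w1), math.sqrt(w2)
    n = math.sqrt(w1 + w2)
    return (a * low - b * high) / n, (b * low + a * high) / n

# A node holding 1 point merged with a sibling holding 3 points
low, high, w = raht_merge(100.0, 1, 60.0, 3)
c1, c2 = raht_split(low, high, 1, 3)
```

Applied recursively up the octree, this concentrates energy in the low-pass coefficients, which are then quantized and arithmetic-coded under per-sub-band Laplace models.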

  9. a Low-Cost Panoramic Camera for the 3D Documentation of Contaminated Crime Scenes

    NASA Astrophysics Data System (ADS)

    Abate, D.; Toschi, I.; Sturdy-Colls, C.; Remondino, F.

    2017-11-01

    Crime scene documentation is a fundamental task which has to be undertaken in a fast, accurate and reliable way, highlighting evidence which can be further used for ensuring justice for victims and for guaranteeing the successful prosecution of perpetrators. The main focus of this paper is on the documentation of a typical crime scene and on the rapid recording of any possible contamination that could have influenced its original appearance. A 3D reconstruction of the environment is first generated by processing panoramas acquired with the low-cost Ricoh Theta 360 camera, and further analysed to highlight the potential and limits of this emerging consumer-grade technology. Then, a methodology is proposed for the rapid recording of changes occurring between the original and the contaminated crime scene. The approach is based on an automatic 3D feature-based data registration, followed by a cloud-to-cloud distance computation, taking as input the 3D point clouds generated before and after, e.g., the misplacement of evidence. All the algorithms adopted for panorama pre-processing, photogrammetric 3D reconstruction, and 3D geometry registration and analysis are presented and are currently available in open-source or low-cost software solutions.

  10. Arctic Atmospheric Measurements Using Manned and Unmanned Aircraft, Tethered Balloons, and Ground-Based Systems at U.S. DOE ARM Facilities on the North Slope Of Alaska

    NASA Astrophysics Data System (ADS)

    Ivey, M.; Dexheimer, D.; Roesler, E. L.; Hillman, B. R.; Hardesty, J. O.

    2016-12-01

    The U.S. Department of Energy (DOE) provides scientific infrastructure and data to the international Arctic research community via research sites located on the North Slope of Alaska and an open data archive maintained by the ARM program. In 2016, DOE continued investments in improvements to facilities and infrastructure at Oliktok Point, Alaska, to support operations of ground-based facilities and unmanned aerial systems for science missions in the Arctic. The Third ARM Mobile Facility, AMF3, now deployed at Oliktok Point, was further expanded in 2016. Tethered instrumented balloons were used at Oliktok to make measurements of clouds in the boundary layer, including mixed-phase clouds, and to compare measurements with those from the ground and from unmanned aircraft operating in the airspace above AMF3. The ARM facility at Oliktok Point includes Special Use Airspace. A Restricted Area, R-2204, is located at Oliktok Point. Roughly 4 miles in diameter, it facilitates operations of tethered balloons and unmanned aircraft. R-2204 and a new Warning Area north of Oliktok, W-220, are managed by Sandia National Laboratories for the DOE Office of Science/BER. These Special Use Airspaces have been successfully used to launch and operate unmanned aircraft over the Arctic Ocean and in international airspace north of Oliktok Point. A steady progression towards routine operations of unmanned aircraft and tethered balloon systems continues at Oliktok. Small unmanned aircraft (DataHawks) and tethered balloons were successfully flown at Oliktok starting in June of 2016. This poster will discuss how principal investigators may apply for use of these Special Use Airspaces, acquire data from the Third ARM Mobile Facility, or bring their own instrumentation for deployment at Oliktok Point, Alaska.

  11. FPFH-based graph matching for 3D point cloud registration

    NASA Astrophysics Data System (ADS)

    Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua

    2018-04-01

    Correspondence detection is a vital step in point cloud registration, and it can help obtain a reliable initial alignment. In this paper, we put forward an advanced point-feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial candidate correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by a simulated annealing algorithm to obtain the final set of correct correspondences. Finally, we present a novel set partitioning method which can transform the NP-hard optimization problem into an O(n³)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method obtains better results in terms of both accuracy and time cost compared with other point cloud registration methods.
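The initial-correspondence stage can be illustrated with nearest-neighbour descriptor matching plus a ratio test; the 2-D vectors below are toy stand-ins for real 33-bin FPFH descriptors, and the ratio threshold is an assumption, not the paper's parameter:

```python
def match_descriptors(desc_a, desc_b, ratio=0.8):
    """For each descriptor in desc_a, find its nearest and second-nearest
    neighbours in desc_b and keep the match only if the nearest distance
    is at most `ratio` times the second-nearest (Lowe's ratio test)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted(
            (sum((x - y) ** 2 for x, y in zip(da, db)) ** 0.5, j)
            for j, db in enumerate(desc_b))
        (d1, j1), (d2, _) = dists[0], dists[1]
        if d1 <= ratio * d2:
            matches.append((i, j1))
    return matches

# Three toy descriptors per cloud, each with an obvious counterpart
A = [(0.0, 0.0), (5.0, 5.0), (2.0, 2.0)]
B = [(0.1, 0.0), (5.0, 5.1), (2.4, 2.5)]
m = match_descriptors(A, B)
```

Such putative matches still contain outliers, which is exactly what the graph-matching and simulated-annealing stages described above are designed to prune.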

  12. Systemic Approach to Elevation Data Acquisition for Geophysical Survey Alignments in Hilly Terrains Using UAVs

    NASA Astrophysics Data System (ADS)

    Ismail, M. A. M.; Kumar, N. S.; Abidin, M. H. Z.; Madun, A.

    2018-04-01

    This study presents a systematic approach to photogrammetric surveying for the extraction of elevation data for geophysical surveys in hilly terrain using Unmanned Aerial Vehicles (UAVs). The aim is to acquire high-quality geophysical data from areas of varying elevation by locating the best survey lines. The study area is located at the proposed construction site for the development of a water reservoir and related infrastructure in Kampus Pauh Putra, Universiti Malaysia Perlis. Seismic refraction surveys were carried out to model the subsurface for detailed site investigation. The study assessed the accuracy of the digital elevation model (DEM) produced from a UAV. At a 100 m flying height, over 135 overlapping images were acquired using a DJI Phantom 3 quadcopter. All acquired images were processed for automatic 3D photo-reconstruction using Agisoft PhotoScan digital photogrammetric software, which was applied at all photogrammetric stages. The products generated included a 3D model, a dense point cloud, a mesh surface, a digital orthophoto, and a DEM. To validate the accuracy of the produced DEM, the coordinates of selected ground control points (GCPs) along the survey line in the imaging area were extracted from the generated DEM with the aid of Global Mapper software. These coordinates were compared with the GCPs obtained using a real-time kinematic global positioning system. The maximum difference between the GCPs and the photogrammetric survey is 13.3%. UAVs are suitable for acquiring elevation data for geophysical surveys and can save time and cost.

  13. Motion-Compensated Compression of Dynamic Voxelized Point Clouds.

    PubMed

    De Queiroz, Ricardo L; Chou, Philip A

    2017-05-24

    Dynamic point clouds are a potential new frontier in visual communication systems. A few articles have addressed the compression of point clouds, but very few references exist on exploring temporal redundancies. This paper presents a novel motion-compensated approach to encoding dynamic voxelized point clouds at low bit rates. A simple coder breaks the voxelized point cloud at each frame into blocks of voxels. Each block is either encoded in intra-frame mode or is replaced by a motion-compensated version of a block in the previous frame. The decision is optimized in a rate-distortion sense. In this way, both the geometry and the color are encoded with distortion, allowing for reduced bit-rates. In-loop filtering is employed to minimize compression artifacts caused by distortion in the geometry information. Simulations reveal that this simple motion compensated coder can efficiently extend the compression range of dynamic voxelized point clouds to rates below what intra-frame coding alone can accommodate, trading rate for geometry accuracy.
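The per-block intra/inter decision described above is a standard Lagrangian rate-distortion choice. A sketch with hypothetical distortion and rate numbers (the values and the lambda are illustrative, not from the paper):

```python
def choose_mode(dist_intra, rate_intra, dist_inter, rate_inter, lam):
    """Lagrangian mode decision per block of voxels: pick whichever of
    intra coding or motion compensation minimises J = D + lambda * R."""
    j_intra = dist_intra + lam * rate_intra
    j_inter = dist_inter + lam * rate_inter
    return "intra" if j_intra <= j_inter else "inter"

# A well-predicted block: slightly higher distortion but far fewer bits
mode_static = choose_mode(dist_intra=4.0, rate_intra=900,
                          dist_inter=5.0, rate_inter=40, lam=0.01)
# A block with new content: motion compensation fails badly
mode_new = choose_mode(dist_intra=4.0, rate_intra=900,
                       dist_inter=250.0, rate_inter=40, lam=0.01)
```

Sweeping lambda trades rate against distortion: large lambda favours the cheap motion-compensated mode, small lambda favours the accurate intra mode, which is how the coder extends the low-bit-rate end of the range.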

  14. Solubilization of phenanthrene above cloud point of Brij 30: a new application in biodegradation.

    PubMed

    Pantsyrnaya, T; Delaunay, S; Goergen, J L; Guseva, E; Boudrant, J

    2013-06-01

    In the present study, a new application of the solubilization of phenanthrene above the cloud point of Brij 30 in biodegradation was developed. It was shown that a temporary solubilization of phenanthrene above the cloud point of Brij 30 (5 wt%) made it possible to obtain a stable increase in the solubility of phenanthrene even when the temperature was decreased to the culture conditions of the microorganism used, Pseudomonas putida (28 °C). A higher initial concentration of soluble phenanthrene was obtained after the cloud point treatment: 200 µM against 120 µM without treatment. All soluble phenanthrene was metabolized, and a higher final concentration of its major metabolite, 1-hydroxy-2-naphthoic acid (160 against 85 µM), was measured in the culture medium in the case of a preliminary cloud point treatment. Therefore, temporary solubilization at the cloud point may have a prospective application in enhancing the biodegradation of polycyclic aromatic hydrocarbons. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Joint classification and contour extraction of large 3D point clouds

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several million points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This makes it possible both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems: point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and a small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10⁹ points.

  16. Using a Remotely Piloted Aircraft System (RPAS) to analyze the stability of a natural rock slope

    NASA Astrophysics Data System (ADS)

    Salvini, Riccardo; Esposito, Giuseppe; Mastrorocco, Giovanni; Seddaiu, Marcello

    2016-04-01

    This paper describes the application of a rotary-wing RPAS for monitoring the stability of a natural rock slope in the municipality of Vecchiano (Pisa, Italy). The slope under investigation is approximately oriented NNW-SSE and has a length of about 320 m; elevation ranges from about 7 to 80 m a.s.l. The hill consists of stratified limestone, in places densely fractured, with dip direction predominantly normal to the slope. Fracture traces of variable length, from decimetres to metres, penetrate the rock face to depths that are difficult to estimate, often exceeding one metre. The intersection between different fracture systems and the slope surface generates rock blocks and wedges of variable size that may be subject to gravitational instability (particularly under varying hydraulic and dynamic conditions). Geometrical and structural information about the rock mass, necessary for the slope stability analysis, was obtained in this work from geo-referenced 3D point clouds acquired using photogrammetric and laser scanning techniques. In particular, terrestrial laser scanning was carried out from two different points of view using a Leica Scanstation2. The laser survey left many shadows in the data, due to vegetation in the lower parts of the slope, limiting the feasibility of the geo-structural survey. To overcome this limitation, we used a rotary-wing Aibotix Aibot X6 RPAS equipped with a Nikon D3200 camera. The drone flights were executed in manual mode and the images were acquired, according to the characteristics of the outcrops, under different acquisition angles. Furthermore, photos were captured very close to the rock face (a few metres), allowing a dense 3D point cloud (about 80 million points) to be produced by image processing. 
A topographic survey was carried out to guarantee the necessary spatial accuracy for the exterior orientation of the images. The coordinates of the GCPs were calculated through post-processing of data collected with two GPS receivers, operating in static mode, and a Total Station. The photogrammetric processing of the image blocks allowed us to create a 3D point cloud, DTM, orthophoto, and 3D textured model with a high level of cartographic detail. Discontinuities were deterministically characterized in terms of attitude, persistence, and spacing. Moreover, the main discontinuity sets were identified through a density analysis of attitudes in stereographic projection. In addition, the size and shape of potentially unstable blocks identified along the rock slope were measured. Finally, using additional data from traditional engineering-geological surveys carried out in accessible outcrops, the kinematic and dynamic stability analysis of the rock slope was performed. Results from this step provide deterministic safety factors for the rock blocks and wedges, and will be used by local authorities to plan protection works. This application shows the great advantage of modern RPAS, which can be successfully applied to the analysis of sub-vertical rock slopes, especially in areas that are difficult to access with traditional techniques or masked by vegetation. KEY WORDS: 3D point cloud, RPAS photogrammetry, Terrestrial laser scanning, Rock slope, Fracture mapping, Stability analysis

  17. High-Precision Registration of Point Clouds Based on Sphere Feature Constraints.

    PubMed

    Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter

    2016-12-30

    Point cloud registration is a key process in multi-view 3D measurements, and its precision directly affects the measurement precision. However, for point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. In this paper, a high-precision registration method based on sphere feature constraints is presented to overcome this difficulty. Known sphere features with constraints are used to construct virtual overlapping areas, which provide more accurate corresponding point pairs and reduce the influence of noise. The transformation parameters between the registered point clouds are then solved by an optimization method with a weight function, so that the impact of large noise in the point clouds is reduced and high-precision registration is achieved. Simulations and experiments validate the proposed method.
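
Once sphere centers have been fitted in both scans, the rigid transformation can be recovered from the corresponding centers. A minimal unweighted sketch using the SVD (Kabsch) solution is shown below; the paper itself uses a weighted optimization, so this is only the standard closed-form baseline:

```python
import numpy as np

def rigid_from_correspondences(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    solved by the SVD (Kabsch) method on corresponding points,
    e.g. fitted sphere centers from two scans (rows = points)."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if det < 0
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Three or more non-collinear sphere centers suffice to determine the transform uniquely; a weighted variant would scale the rows of the centered matrices by per-feature confidence before forming `H`.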

  18. High-Precision Registration of Point Clouds Based on Sphere Feature Constraints

    PubMed Central

    Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter

    2016-01-01

    Point cloud registration is a key process in multi-view 3D measurements, and its precision directly affects the measurement precision. However, for point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. In this paper, a high-precision registration method based on sphere feature constraints is presented to overcome this difficulty. Known sphere features with constraints are used to construct virtual overlapping areas, which provide more accurate corresponding point pairs and reduce the influence of noise. The transformation parameters between the registered point clouds are then solved by an optimization method with a weight function, so that the impact of large noise in the point clouds is reduced and high-precision registration is achieved. Simulations and experiments validate the proposed method. PMID:28042846

  19. Biotoxicity and bioavailability of hydrophobic organic compounds solubilized in nonionic surfactant micelle phase and cloud point system.

    PubMed

    Pan, Tao; Liu, Chunyan; Zeng, Xinying; Xin, Qiao; Xu, Meiying; Deng, Yangwu; Dong, Wei

    2017-06-01

    Recent work has shown that hydrophobic organic compounds solubilized in the micelle phase of some nonionic surfactants exhibit substrate toxicity to microorganisms as bioavailability increases. In cloud point systems, however, biotoxicity is prevented, because the compounds are solubilized into a coacervate phase, leaving only a fraction of the compounds with the cells in a dilute phase. This study extends the understanding of the relationship between substrate toxicity and bioavailability of hydrophobic organic compounds solubilized in a nonionic surfactant micelle phase and in a cloud point system. Biotoxicity experiments were conducted with naphthalene and phenanthrene in the presence of the mixed nonionic surfactants Brij 30 and TMN-3, which formed a micelle phase or a cloud point system at different concentrations. Saccharomyces cerevisiae, unable to degrade these compounds, was used for the biotoxicity experiments. Glucose in the cloud point system was consumed faster than in the nonionic surfactant micelle phase, indicating that the solubilized compounds were more toxic to cells in the micelle phase. The results were verified by subsequent biodegradation experiments: the compounds were degraded faster by a PAH-degrading bacterium in the cloud point system than in the micelle phase. All these results show that the biotoxicity of hydrophobic organic compounds increases with bioavailability in the surfactant micelle phase but remains low in the cloud point system. These results provide a guideline for the application of cloud point systems as novel media for microbial transformation or biodegradation.

  20. Stereo Cloud Height and Wind Determination Using Measurements from a Single Focal Plane

    NASA Astrophysics Data System (ADS)

    Demajistre, R.; Kelly, M. A.

    2014-12-01

    We present here a method for extracting cloud heights and winds from an aircraft or orbital platform using measurements from a single focal plane, exploiting the motion of the platform to provide multiple views of the cloud tops. To illustrate this method we use data acquired during aircraft flight tests of a set of simple stereo imagers that are well suited to this purpose. Each of these imagers has three linear arrays on the focal plane: one looking forward, one looking aft, and one looking down. Push-broom images from each of these arrays are constructed, and then a spatial correlation analysis is used to deduce the delays and displacements required for wind and cloud height determination. We will present the algorithms necessary for the retrievals, as well as the methods used to determine the uncertainties of the derived cloud heights and winds. We will apply the retrievals and uncertainty determination to a number of image sets acquired by the airborne sensors. We then generalize these results to potential space-based observations made by similar types of sensors.
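
The spatial correlation step can be illustrated in one dimension: find the sample lag that maximizes the normalized cross-correlation between two push-broom intensity profiles. This is a generic sketch under my own assumptions, not the authors' retrieval algorithm:

```python
import numpy as np

def best_lag(forward, aft, max_lag):
    """Lag (in samples) that maximizes the normalized cross-correlation
    between two equal-length push-broom intensity profiles; the recovered
    delay is what a height/wind retrieval would convert to geometry."""
    f = (forward - forward.mean()) / forward.std()
    a = (aft - aft.mean()) / aft.std()
    lags = list(range(-max_lag, max_lag + 1))
    scores = []
    for lag in lags:
        if lag >= 0:
            x, y = f[lag:], a[:len(a) - lag]
        else:
            x, y = f[:len(f) + lag], a[-lag:]
        scores.append(np.mean(x * y))     # correlation score at this lag
    return lags[int(np.argmax(scores))]
```

In practice the analysis would be two-dimensional and sub-sample (e.g. by interpolating the correlation peak), but the principle of locating the delay between the forward- and aft-looking views is the same.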

  1. Semantic Labelling of Ultra Dense Mls Point Clouds in Urban Road Corridors Based on Fusing Crf with Shape Priors

    NASA Astrophysics Data System (ADS)

    Yao, W.; Polewski, P.; Krzystek, P.

    2017-09-01

    In this paper, a labelling method for the semantic analysis of ultra-high-density MLS point clouds (up to 4000 points/m²) in urban road corridors is developed, based on combining a conditional random field (CRF) for context-based classification of 3D point clouds with shape priors. The CRF uses a Random Forest (RF) to generate the unary potentials of nodes and a variant of the contrast-sensitive Potts model for the pair-wise potentials of node edges. The classification is founded on various geometric features derived from covariance matrices and a local accumulation map of spatial coordinates, both computed over local neighbourhoods. Meanwhile, to cope with the ultra-high point density, a plane-based region-growing method combined with a rule-based classifier is first applied to fix semantic labels for man-made objects. Once these points, which usually account for the majority of the data, are pre-labeled, the CRF classifier can be solved by optimizing the discriminative probability for nodes within the subgraph that excludes the pre-labeled nodes. The process can be viewed as an evidence fusion step that infers a degree of belief for point labelling from different sources. The MLS data used for this study were acquired with a vehicle-borne Z+F phase-based laser scanner, which permits the generation of a point cloud with an ultra-high sampling rate and accuracy. The test sites are parts of Munich City, assumed to consist of seven object classes including impervious surfaces, tree, building roof/facade, low vegetation, vehicle and pole. The competitive classification performance can be explained by several factors: for example, the above-ground height highlights the vertical dimension of houses, trees and even cars; performance is also attributable to the decision-level fusion of the graph-based contextual classification approach with shape priors. 
The use of context-based classification mainly contributed to smoothing the labelling by removing outliers, and to improvements in underrepresented object classes. In addition, the routine operation of context-based classification on such high-density MLS data becomes much more efficient, comparable to non-contextual classification schemes.

  2. Uranus’ cloud structure and seasonal variability from Gemini-North and UKIRT observations

    NASA Astrophysics Data System (ADS)

    Irwin, P. G. J.; Teanby, N. A.; Davis, G. R.; Fletcher, L. N.; Orton, G. S.; Tice, D.; Kyffin, A.

    2011-03-01

    Observations of Uranus were made in September 2009 with the Gemini-North telescope in Hawaii, using both the NIFS and NIRI instruments. Observations were acquired in Adaptive Optics mode and have a spatial resolution of approximately 0.1″. NIRI images were recorded with three spectral filters to constrain the overall appearance of the planet: J, H-continuum and CH4(long), and long-slit spectroscopy measurements were also made (1.49-1.79 μm) with the entrance slit aligned on Uranus’ central meridian. To acquire spectra from other points on the planet, the NIFS instrument was used and its 3″ × 3″ field of view stepped across Uranus’ disc. These observations were combined to yield complete images of Uranus at 2040 wavelengths between 1.476 and 1.803 μm. The observed spectra along Uranus’ central meridian were analysed with the NEMESIS retrieval tool and used to infer the vertical/latitudinal variation in cloud optical depth. We find that the 2009 Gemini data perfectly complement our observations and conclusions from UKIRT/UIST observations made in 2006-2008 and show that the north polar zone at 45°N has continued to brighten steadily while that at 45°S has continued to fade. The improved spatial resolution of the Gemini observations compared with the non-AO UKIRT/UIST data removes some of the ambiguities in our previous analyses: it shows that the opacity of clouds deeper than the 2-bar level does indeed diminish towards the poles, and also reveals a darkening of the deeper cloud deck near the equator, perhaps coinciding with a region of subduction. We find that the clouds at 45°N,S lie at slightly lower pressures than the clouds at more equatorial latitudes, which suggests that they might be composed of a different condensate, presumably CH4 ice rather than the H2S or NH3 ice assumed for the deeper cloud. 
In addition, analysis of the centre-to-limb curves of both the Gemini/NIFS and earlier UKIRT/UIST IFU observations shows that the main cloud deck has a well-defined top, and also allows us to better constrain the particle scattering properties. Overall, Uranus appeared to be less convectively active in 2009 than in the previous three years, which suggests that now that the northern spring equinox (which occurred in 2007) has passed, the atmosphere is settling back into the quiescent state seen by Voyager 2 in 1986. However, a number of discrete clouds were still observed, with one at 15°N found to lie near the 500 mb level, while another at 30°N was seen to be much higher, near the 200 mb level. Such high clouds are assumed to be composed of CH4 ice.

  3. Integrated Survey Procedures for the Virtual Reading and Fruition of Historical Buildings

    NASA Astrophysics Data System (ADS)

    Scandurra, S.; Pulcrano, M.; Cirillo, V.; Campi, M.; di Luggo, A.; Zerlenga, O.

    2018-05-01

    This paper presents developments in research on the integration of digital survey methodologies, with reference to image-based and range-based technologies. Starting from the processing of point clouds, the data were used both for the geometric interpretation of the space and for the production of three-dimensional models that describe its constitutive and morphological relationships. The subject of the study was the church of San Carlo all'Arena in Naples (Italy), for which an HBIM model was produced that is semantically consistent with the real building. Starting from the acquired data, a visualization system was created for the virtual exploration of the building.

  4. Americas from the Moon

    NASA Image and Video Library

    2010-09-15

    The western hemisphere of our home planet Earth. North (upper left), Central, and South America (lower right) were nicely free of clouds when LRO pointed home on 9 August 2010 to acquire this beautiful view. LROC NAC E136013771. As LRO orbits the Moon every two hours sending down a stream of science data, it is easy to forget how close the Moon is to the Earth. The average distance between the two heavenly bodies is just 384,399 km (238,854 miles). Check your airline frequent flyer totals, perhaps you have already flown the distance to the Moon and back on a single airline. http://photojournal.jpl.nasa.gov/catalog/PIA13519

  5. Impact of Surface Active Ionic Liquids on the Cloud Points of Nonionic Surfactants and the Formation of Aqueous Micellar Two-Phase Systems.

    PubMed

    Vicente, Filipa A; Cardoso, Inês S; Sintra, Tânia E; Lemus, Jesus; Marques, Eduardo F; Ventura, Sónia P M; Coutinho, João A P

    2017-09-21

    Aqueous micellar two-phase systems (AMTPS) hold large potential for the cloud point extraction of biomolecules but remain poorly studied and characterized, with few phase diagrams reported for these systems, hence limiting their use in extraction processes. This work reports a systematic investigation of the effect of different surface-active ionic liquids (SAILs)-covering a wide range of molecular properties-upon the clouding behavior of three nonionic Tergitol surfactants. Two different effects of the SAILs on the cloud points and mixed micelle size have been observed: ILs with a more hydrophilic character and lower critical packing parameter (CPP < 1/2) lead to the formation of smaller micelles and concomitantly increase the cloud points; in contrast, ILs with a more hydrophobic character and higher CPP (CPP ≥ 1) induce significant micellar growth and a decrease in the cloud points. The latter effect is particularly interesting and unusual, since it was previously accepted that cloud point reduction is induced only by inorganic salts. The effects of nonionic surfactant concentration, SAIL concentration, pH, and micelle ζ potential are also studied and rationalized.

  6. Cloud classification in polar regions using AVHRR textural and spectral signatures

    NASA Technical Reports Server (NTRS)

    Welch, R. M.; Sengupta, S. K.; Weger, R. C.; Christopher, S. A.; Kuo, K. S.; Carsey, F. D.

    1990-01-01

    Arctic clouds and ice-covered surfaces are classified on the basis of textural and spectral features obtained with AVHRR 1.1-km spatial resolution imagery over the Beaufort Sea during May-October, 1989. Scenes were acquired about every 5 days, for a total of 38 cases. A list comprising 20 arctic-surface and cloud classes is compiled using spectral measures defined by Garand (1988).

  7. Jovian Stormy Weather

    NASA Image and Video Library

    2017-02-17

    NASA's Juno spacecraft soared directly over Jupiter's south pole when JunoCam acquired this image on February 2, 2017 at 6:06 a.m. PT (9:06 a.m. ET), from an altitude of about 62,800 miles (101,000 kilometers) above the cloud tops. From this unique vantage point we see the terminator (where day meets night) cutting across the Jovian south polar region's restless, marbled atmosphere, with the south pole itself approximately in the center of that border. The terminator is offset a bit because it's summer in Jupiter's southern hemisphere; however, the tilt of Jupiter's spin axis is only 3 degrees, much less than Earth's 23.5-degree tilt. This image was processed by citizen scientist John Landino. This enhanced color version highlights the bright high clouds and numerous meandering oval storms. Away from the pole, the seeming chaos of the polar region gives way to the more familiar color banding that Jupiter is known for. http://photojournal.jpl.nasa.gov/catalog/PIA21382

  8. Design of relative motion and attitude profiles for three-dimensional resident space object imaging with a laser rangefinder

    NASA Astrophysics Data System (ADS)

    Nayak, M.; Beck, J.; Udrea, B.

    This paper focuses on the aerospace application of a single-beam laser rangefinder (LRF) for 3D imaging, shape detection, and reconstruction in the context of a space-based space situational awareness (SSA) mission scenario. The primary limitation to 3D imaging from LRF point clouds is the one-dimensional nature of the single-beam measurements. A method that combines relative orbital motion and scanning attitude motion to generate point clouds has been developed, and the design and characterization of multiple relative motion and attitude maneuver profiles are presented. The target resident space object (RSO) has the shape of a generic telecommunications satellite. The shape and attitude of the RSO are unknown to the chaser satellite; however, it is assumed that the RSO is uncooperative and has fixed inertial pointing. All sensors in the metrology chain are assumed ideal. A previous study by the authors used pure Keplerian motion to perform a similar 3D imaging mission at an asteroid. A new baseline for proximity operations maneuvers for LRF scanning, based on a waypoint adaptation of the Hill-Clohessy-Wiltshire (HCW) equations, is examined. Propellant expenditure for each waypoint profile is discussed, and combinations of relative motion and attitude maneuvers that minimize the propellant required to achieve a minimum point cloud density are studied. Both LRF strike-point coverage and point cloud density are maximized; the capability for 3D shape registration and reconstruction from point clouds generated with a single-beam LRF, without catalog comparison, is demonstrated. Next, a method of using edge detection algorithms to process a point cloud into a 3D modeled image containing reconstructed shapes is presented. The weighted accuracy of edge reconstruction with respect to the true model is used to calculate a qualitative "metric" that evaluates the effectiveness of coverage. 
Both the edge recognition algorithms and the metric are independent of point cloud density; therefore, they are used to compare the quality of point clouds generated by various attitude and waypoint command profiles. The RSO model incorporates diverse irregular protruding shapes, such as open sensor covers, instrument pods and solar arrays, to test the limits of the algorithms. This analysis is used to show mathematically that point clouds generated by a single-beam LRF can achieve sufficient edge recognition accuracy for SSA applications, with meaningful shape information extractable even from sparse point clouds. For all command profiles, reconstructions of RSO shapes from the point clouds generated with the proposed method are compared to the truth model and conclusions are drawn regarding their fidelity.
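
The HCW baseline referred to above can be illustrated with the standard textbook closed-form solution of the Hill-Clohessy-Wiltshire equations (not the authors' waypoint scheme); axes are x radial, y along-track, z cross-track, and n is the chief's mean motion:

```python
import numpy as np

def hcw_state(t, n, x0, y0, z0, vx0, vy0, vz0):
    """Closed-form Hill-Clohessy-Wiltshire solution: relative position and
    velocity of a deputy about a chief in a circular orbit with mean
    motion n. Axes: x radial, y along-track, z cross-track."""
    s, c = np.sin(n * t), np.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = (6 * (s - n * t) * x0 + y0 + (2 / n) * (c - 1) * vx0
         + (4 * s - 3 * n * t) / n * vy0)
    z = c * z0 + (s / n) * vz0
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = 6 * n * (c - 1) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
    vz = -n * s * z0 + c * vz0
    return np.array([x, y, z, vx, vy, vz])
```

Setting vy0 = -2*n*x0 cancels the secular along-track drift and yields a closed relative orbit, a common starting point when designing waypoint profiles around a target.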

  9. An Efficient Method to Create Digital Terrain Models from Point Clouds Collected by Mobile LiDAR Systems

    NASA Astrophysics Data System (ADS)

    Gézero, L.; Antunes, C.

    2017-05-01

    Digital terrain models (DTM) play an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is higher. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can be a solution for obtaining the data needed to produce DTMs of remote areas, mainly due to the safety, precision, speed of acquisition and the detail of the information gathered. However, filtering point clouds, and algorithms to separate "terrain points" from "non-terrain points" quickly and consistently, remain a challenge that has caught the interest of researchers. This work presents a method to create DTMs from point clouds collected by MLS. The method is based on two steps. The first step reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step applies a Delaunay triangulation to the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
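
The two-step pipeline can be sketched in miniature. The thinning rule below (lowest point per XY grid cell) is a crude stand-in for the paper's terrain-adaptive reduction, which is not specified in detail here; the triangulation step uses a standard Delaunay TIN:

```python
import numpy as np
from scipy.spatial import Delaunay

def dtm_from_cloud(points, cell=1.0):
    """Toy DTM pipeline: (1) thin the cloud by keeping the lowest point in
    each XY grid cell (rejecting vegetation/objects above the ground),
    (2) build a Delaunay TIN over the kept points."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    lowest = {}
    for i, k in enumerate(map(tuple, keys)):
        if k not in lowest or points[i, 2] < points[lowest[k], 2]:
            lowest[k] = i
    ground = points[sorted(lowest.values())]
    tin = Delaunay(ground[:, :2])     # 2.5D triangulation of ground points
    return ground, tin
```

An adaptive version would vary `cell` with local terrain roughness, which matches the paper's idea of point spacing inversely proportional to terrain variation.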

  10. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    NASA Astrophysics Data System (ADS)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.

  11. Clouds on Neptune: Motions, Evolution, and Structure

    NASA Technical Reports Server (NTRS)

    Sromovsky, Larry A.; Morgan, Thomas (Technical Monitor)

    2001-01-01

    The aims of our original proposal were these: (1) improving measurements of Neptune's circulation, (2) understanding the spatial distribution of cloud features, (3) discovery of new cloud features and understanding their evolutionary process, (4) understanding the vertical structure of zonal cloud patterns, (5) defining the structure of discrete cloud features, and (6) defining the near IR albedo and light curve of Triton. Towards these aims we proposed analysis of existing 1996 groundbased NSFCAM/IRTF observations and nearly simultaneous WFPC2 observations from the Hubble Space Telescope. We also proposed to acquire new observations from both HST and the IRTF.

  12. Effect of target color and scanning geometry on terrestrial LiDAR point-cloud noise and plane fitting

    NASA Astrophysics Data System (ADS)

    Bolkas, Dimitrios; Martinez, Aaron

    2018-01-01

    Point-cloud coordinate information derived from terrestrial Light Detection And Ranging (LiDAR) is important for several applications in surveying and civil engineering. Plane fitting and segmentation of target-surfaces is an important step in several applications such as in the monitoring of structures. Reliable parametric modeling and segmentation relies on the underlying quality of the point-cloud. Therefore, understanding how point-cloud errors affect fitting of planes and segmentation is important. Point-cloud intensity, which accompanies the point-cloud data, often goes hand-in-hand with point-cloud noise. This study uses industrial particle boards painted with eight different colors (black, white, grey, red, green, blue, brown, and yellow) and two different sheens (flat and semi-gloss) to explore how noise and plane residuals vary with scanning geometry (i.e., distance and incidence angle) and target-color. Results show that darker colors, such as black and brown, can produce point clouds that are several times noisier than bright targets, such as white. In addition, semi-gloss targets manage to reduce noise in dark targets by about 2-3 times. The study of plane residuals with scanning geometry reveals that, in many of the cases tested, residuals decrease with increasing incidence angles, which can assist in understanding the distribution of plane residuals in a dataset. Finally, a scheme is developed to derive survey guidelines based on the data collected in this experiment. Three examples demonstrate that users should consider instrument specification, required precision of plane residuals, required point-spacing, target-color, and target-sheen, when selecting scanning locations. Outcomes of this study can aid users to select appropriate instrumentation and improve planning of terrestrial LiDAR data-acquisition.
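
Plane residuals of the kind analyzed in this study can be computed from an orthogonal least-squares plane fit; the SVD-based sketch below is a standard formulation, not the authors' code:

```python
import numpy as np

def plane_residuals(points):
    """Orthogonal-distance plane fit via SVD: the best-fit normal is the
    right singular vector of the centered data with the smallest singular
    value; residuals are signed point-to-plane distances."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]                       # direction of least variance
    res = (points - centroid) @ normal    # signed orthogonal distances
    return normal, res
```

The RMS of `res` over a scanned target board is exactly the kind of noise statistic that can be compared across target colors, sheens, ranges, and incidence angles.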

  13. Point Cloud Classification of Tesserae from Terrestrial Laser Data Combined with Dense Image Matching for Archaeological Information Extraction

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Billen, R.

    2017-08-01

    Reasoning from information extracted by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining the information relevant to the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud obtained from both terrestrial laser scanning and dense image matching. Using 18 features, including the sensors' biased data, each tessera in the high-density point cloud of the 3D-captured complex mosaics of Germigny-des-prés (France) is segmented via a colour-based multi-scale abstraction that extracts connectivity. A 2D surface and outline polygon of each tessera is generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
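
The outline/surface step can be sketched as follows: project a roughly planar tessera's points onto its best-fit plane and take the 2D convex hull. This is illustrative only; the paper precedes it with RANSAC plane extraction, which is omitted here:

```python
import numpy as np
from scipy.spatial import ConvexHull

def tessera_outline(points):
    """2D outline and area of a roughly planar segment: project points
    onto the best-fit plane (SVD basis of the centered data) and take
    the convex hull of the in-plane coordinates."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    uv = (points - c) @ Vt[:2].T          # in-plane 2D coordinates
    hull = ConvexHull(uv)
    return uv[hull.vertices], hull.volume  # in 2D, hull.volume is the area
```

The hull area and vertex polygon are exactly the kind of size/shape attributes that a rule-based classifier could then combine with colour and neighbourhood cues.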

  14. Parameterized approximation of lacunarity functions derived from airborne laser scanning point clouds of forested areas

    NASA Astrophysics Data System (ADS)

    Székely, Balázs; Kania, Adam; Varga, Katalin; Heilmeier, Hermann

    2017-04-01

    Lacunarity, a measure of the spatial distribution of empty space, is found to be a useful descriptive quantity of forest structure. Its calculation, based on laser-scanned point clouds, results in a four-dimensional data set whose evaluation requires sophisticated tools and visualization techniques. To simplify the evaluation, it is straightforward to use approximation functions fitted to the results. The lacunarity function L(r), being a measure of scale-independent structural properties, has a power-law character. Previous studies showed that the log(log(L(r))) transformation is suitable for the analysis of spatial patterns. Accordingly, transformed lacunarity functions can be approximated by appropriate functions either in the original or in the transformed domain. As input data we used a number of laser-scanned point clouds of various forests. The lacunarity distribution was calculated along a regular horizontal grid at various (relative) elevations. The lacunarity data cube was then logarithm-transformed, and the resulting values became the input of parameter estimation at each point of interest (POI). In this way a parameter set is generated at each POI that is suitable for spatial analysis. The expectation is that the horizontal variation and vertical layering of the vegetation can be characterized by this procedure. The results show that the transformed L(r) functions can typically be approximated by exponentials individually, and the residual values remain low in most cases. However, (1) in most cases the residuals may vary considerably, and (2) neighbouring POIs often give rather different estimates both in horizontal and in vertical directions, of which the vertical variation seems to be the more characteristic. In the vertical sense, the distribution of estimates shows abrupt changes at places, presumably related to the vertical structure of the forest. 
In low-relief areas horizontal similarity is more typical; in higher-relief areas it fades out over short distances. Some of the input data were acquired in the framework of the ChangeHabitats2 project financed by the European Union. BS contributed as an Alexander von Humboldt Research Fellow.
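
The per-POI parameter estimation can be sketched as a fit of an exponential to the doubly log-transformed lacunarity. The two-parameter form a*exp(b*r) and the synthetic data are my own assumptions for illustration; the abstract does not specify the exact parameterization:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_transformed_lacunarity(r, L):
    """Fit y(r) = a*exp(b*r) to the doubly log-transformed lacunarity
    y = log(log(L(r))); returns the parameters (a, b) and the residuals.
    Requires L(r) > 1 so that the double logarithm is defined."""
    y = np.log(np.log(L))
    model = lambda r, a, b: a * np.exp(b * r)
    (a, b), _ = curve_fit(model, r, y, p0=(y[0], -0.1))
    resid = y - model(r, a, b)
    return (a, b), resid
```

Repeating this fit at every POI turns the four-dimensional lacunarity data cube into a small parameter map (plus residuals) that is much easier to analyze spatially.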

  15. Temporally consistent segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Owens, Jason L.; Osteen, Philip R.; Daniilidis, Kostas

    2014-06-01

    We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly-generated RGB-D datasets.

  16. Evaluation and Applications of Cloud Climatologies from CALIOP

    NASA Technical Reports Server (NTRS)

    Winker, David; Getzewitch, Brian; Vaughan, Mark

    2008-01-01

    Clouds have a major impact on the Earth radiation budget and differences in the representation of clouds in global climate models are responsible for much of the spread in predicted climate sensitivity. Existing cloud climatologies, against which these models can be tested, have many limitations. The CALIOP lidar, carried on the CALIPSO satellite, has now acquired over two years of nearly continuous cloud and aerosol observations. This dataset provides an improved basis for the characterization of 3-D global cloudiness. Global average cloud cover measured by CALIOP is about 75%, significantly higher than for existing cloud climatologies due to the sensitivity of CALIOP to optically thin cloud. Day/night biases in cloud detection appear to be small. This presentation will discuss detection sensitivity and other issues associated with producing a cloud climatology, characteristics of cloud cover statistics derived from CALIOP data, and applications of those statistics.

  17. a Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    NASA Astrophysics Data System (ADS)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps in point cloud pre-processing are gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper presents a method that builds a Kd-tree over the data, searches it with a k-nearest-neighbour algorithm, and applies an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm removes gross errors from point cloud data while decreasing memory consumption and improving efficiency.
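    The described pipeline (Kd-tree construction, k-nearest-neighbour search, thresholding) can be sketched as follows, here using SciPy's `cKDTree` with a mean-plus-n-sigma threshold as one plausible decision rule (the paper does not specify its exact threshold, so `k` and `n_sigma` are illustrative assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_gross_errors(points, k=8, n_sigma=3.0):
    """Flag points whose mean distance to their k nearest neighbours
    exceeds the global mean of that statistic by n_sigma std devs."""
    tree = cKDTree(points)
    # query k+1 neighbours; column 0 is each point's zero self-distance
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)
    threshold = mean_knn.mean() + n_sigma * mean_knn.std()
    keep = mean_knn <= threshold
    return points[keep], keep

# dense cluster plus one far-away gross error (synthetic data)
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(size=(200, 3)) * 0.1, [[10.0, 10.0, 10.0]]])
cleaned, keep = remove_gross_errors(cloud)
```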

  18. Line Segmentation of 2d Laser Scanner Point Clouds for Indoor Slam Based on a Range of Residuals

    NASA Astrophysics Data System (ADS)

    Peter, M.; Jafri, S. R. U. N.; Vosselman, G.

    2017-09-01

    Indoor mobile laser scanning (IMLS) based on the Simultaneous Localization and Mapping (SLAM) principle has proven to be the preferred method to acquire data of indoor environments at a large scale. In previous work, we proposed a backpack IMLS system containing three 2D laser scanners and a corresponding SLAM approach. The feature-based SLAM approach solves all six degrees of freedom simultaneously and builds on the association of lines to planes. Because of the iterative character of the SLAM process, the quality and reliability of the segmentation of linear segments in the scanlines plays a crucial role in the quality of the derived poses and, consequently, the point clouds. The orientations of the resulting lines can be influenced negatively by narrow objects that are nearly coplanar with walls (e.g., doors), which cause the line to be tilted if those objects are not detected as separate segments. State-of-the-art methods from the robotics domain, such as Iterative End Point Fit and Line Tracking, were found not to handle such situations well. Thus, we describe a novel segmentation method based on the comparison of a range of residuals to a range of thresholds. For the definition of the thresholds we employ the fact that the expected value for the average of the residuals of n points with respect to the line is σ / √n. As shown by the experiments and the comparison to other methods, our method delivers more accurate results than the two approaches it was tested against.
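    The residual test can be sketched as a greedy scanline grower: a segment is extended while, for every window length n, the mean of the last n orthogonal residuals stays below a bound proportional to σ/√n. This is a simplified reading of the method, assuming a known range noise `sigma` and a hypothetical `factor` on the bound:

```python
import numpy as np

def line_residuals(pts):
    """Orthogonal residuals of 2D points w.r.t. their total-least-squares line."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    normal = vt[-1]                        # direction of least variance
    return np.abs((pts - c) @ normal)

def grow_segments(pts, sigma=0.01, factor=3.0, min_pts=4):
    """Extend each segment while the mean of the last m residuals stays
    below factor * sigma / sqrt(m) for every window length m."""
    segments, start, end = [], 0, min_pts
    while end <= len(pts):
        res = line_residuals(pts[start:end])
        ok = all(res[-m:].mean() <= factor * sigma / np.sqrt(m)
                 for m in range(1, len(res) + 1))
        if ok:
            end += 1
        else:                              # split just before the offender
            segments.append((start, end - 1))
            start, end = end - 1, end - 1 + min_pts
    segments.append((start, len(pts)))
    return segments

# synthetic corner scanline: a horizontal wall, then a vertical wall
scan = np.array([[i, 0.0] for i in range(10)] + [[10.0, j] for j in range(1, 11)])
segments = grow_segments(scan)
```

    On the corner example the grower splits exactly where the wall direction changes, which is the behaviour that tilted lines from undetected near-coplanar objects would violate.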

  19. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models, which have limited accuracy; geometric models with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
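    Convex hulling of a manually separated segment directly yields a volume, and hence a mass under a uniform-density assumption; a minimal sketch (the density value and the vertex-centroid approximation of the centre of mass are illustrative simplifications, not the paper's exact procedure):

```python
import numpy as np
from scipy.spatial import ConvexHull

def segment_parameters(points, density=1000.0):
    """Estimate mass and centre of mass of a body segment from its
    convex hull, assuming uniform density (kg/m^3)."""
    hull = ConvexHull(points)
    volume = hull.volume                       # m^3 for 3D input
    # centroid of hull vertices: crude stand-in for the centre of mass
    com = points[hull.vertices].mean(axis=0)
    return volume * density, com

# sanity check on a 1 m cube of water-like density
cube = np.array([[x, y, z] for x in (0.0, 1.0)
                 for y in (0.0, 1.0) for z in (0.0, 1.0)])
mass, com = segment_parameters(cube, density=1000.0)
```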

  20. 4D-SFM Photogrammetry for Monitoring Sediment Dynamics in a Debris-Flow Catchment: Software Testing and Results Comparison

    NASA Astrophysics Data System (ADS)

    Cucchiaro, S.; Maset, E.; Fusiello, A.; Cazorzi, F.

    2018-05-01

    In recent years, the combination of Structure-from-Motion (SfM) algorithms and UAV-based aerial images has revolutionised 3D topographic surveys for natural environment monitoring, offering low-cost, fast and high-quality data acquisition and processing. Continuous monitoring of morphological changes through multi-temporal (4D) SfM surveys allows, e.g., the analysis of torrent dynamics even in complex topographic environments such as debris-flow catchments, provided that appropriate tools and procedures are employed in the data processing steps. In this work we test two different software packages (3DF Zephyr Aerial and Agisoft Photoscan) on a dataset composed of both UAV and terrestrial images acquired on a debris-flow reach (Moscardo torrent - North-eastern Italian Alps). Unlike other papers in the literature, we evaluate the results not only on the raw point clouds generated by the Structure-from-Motion and Multi-View Stereo algorithms, but also on the Digital Terrain Models (DTMs) created after post-processing. Outcomes show differences between the DTMs that can be considered irrelevant for the geomorphological phenomena under analysis. This study confirms that SfM photogrammetry can be a valuable tool for monitoring sediment dynamics, but accurate point cloud post-processing is required to reliably localize geomorphological changes.

  1. Use of Vertical Aerial Images for Semi-Oblique Mapping

    NASA Astrophysics Data System (ADS)

    Poli, D.; Moe, K.; Legat, K.; Toschi, I.; Lago, F.; Remondino, F.

    2017-05-01

    The paper proposes a methodology for the use of the oblique sections of images from large-format photogrammetric cameras, exploiting the central-perspective geometry in the lateral parts of nadir images ("semi-oblique" images). The starting point of the investigation was the execution of a photogrammetric flight over Norcia (Italy), which was seriously damaged by the earthquake of 30/10/2016. Contrary to the original plan of oblique acquisitions, the flight was executed on 15/11/2017 using an UltraCam Eagle camera with an 80 mm focal length, combining two flight plans rotated by 90º ("crisscross" flight). The images (GSD 5 cm) were used to extract a 2.5D DSM, sampled to an XY-grid size of 2 GSD, a 3D point cloud with a mean spatial resolution of 1 GSD, and a 3D mesh model at a resolution of 10 cm of the historic centre of Norcia for a quantitative assessment of the damage. From the acquired nadir images, the "semi-oblique" images (forward, backward, left and right views) could be extracted and processed in a modified version of the GEOBLY software for measurement and restitution purposes. The potential of such semi-oblique image acquisitions from nadir-view cameras is shown and discussed.

  2. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.

    PubMed

    Peyer, Kathrin E; Morris, Mark; Sellers, William I

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models, which have limited accuracy; geometric models with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.

  3. Object Detection using the Kinect

    DTIC Science & Technology

    2012-03-01

    Kinect camera and point cloud data from the Kinect's structured light stereo system (figure 1). We obtain reasonable results using a single prototype...same manner we present in this report. For example, at Willow Garage, Steder uses a 3-D feature he developed to classify objects directly from point...detecting backpacks using the data available from the Kinect sensor. 4 3.1 Point Cloud Filtering Dense point clouds derived from stereo are notoriously

  4. Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model

    NASA Astrophysics Data System (ADS)

    Zhu, Ningning; Jia, Yonghong; Luo, Lun

    2016-06-01

    The large number of bolts and screws attached to the subway shield ring plates, along with the many metal-stent accessories and electrical equipment mounted on the tunnel walls, cause laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud is first projected onto a horizontal plane, and a search algorithm extracts the edge points of both sides, which are then used to fit the tunnel central axis. Along this axis the point cloud is segmented into regions and then iteratively fitted with a smooth elliptic cylindrical surface. This processing enables automatic filtering of the non-points on the inner wall. Two groups of experiments showed consistent results: the elliptic cylindrical model-based method effectively filters out non-points and meets the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic all-around deformation monitoring of tunnel sections in routine subway operation and maintenance.
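    The per-section fit-and-filter step can be sketched for the simplified case of a centred, axis-aligned ellipse in the cross-section plane (the paper fits a general elliptic cylinder iteratively; the tolerance and function names here are hypothetical):

```python
import numpy as np

def fit_axis_aligned_ellipse(yz):
    """Least-squares fit of (y/a)^2 + (z/b)^2 = 1, centred and
    axis-aligned: linear in (1/a^2, 1/b^2)."""
    A = yz ** 2
    p, *_ = np.linalg.lstsq(A, np.ones(len(yz)), rcond=None)
    return 1.0 / np.sqrt(p)                # semi-axes (a, b)

def filter_section(yz, tol=0.05):
    """Keep points close to the fitted elliptic wall; drop bolts and
    brackets that protrude inward from the tunnel lining."""
    a, b = fit_axis_aligned_ellipse(yz)
    r = np.sqrt((yz[:, 0] / a) ** 2 + (yz[:, 1] / b) ** 2)
    return np.abs(r - 1.0) <= tol, (a, b)

# synthetic cross-section: elliptic wall (a=3, b=2) plus one inward bolt
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
section = np.vstack([np.c_[3 * np.cos(t), 2 * np.sin(t)], [[1.5, 0.0]]])
mask, (a, b) = filter_section(section, tol=0.05)
```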

  5. A Modular Approach to Video Designation of Manipulation Targets for Manipulators

    DTIC Science & Technology

    2014-05-12

    side view of a ray going through a point cloud of a water bottle sitting on the ground. The bottom left image shows the same point cloud after it has...System (ROS), Point Cloud Library (PCL), and OpenRAVE were used to a great extent to help promote reusability of the code developed during this

  6. Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone

    NASA Astrophysics Data System (ADS)

    Xia, G.; Hu, C.

    2018-04-01

    The digitalization of cultural heritage based on terrestrial laser scanning has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction from a complete point cloud and high-resolution images requires matching the images to the point cloud, acquiring homonymous feature points, registering the data, etc. However, the one-to-one correspondence between an image and its corresponding point cloud currently depends on inefficient manual search. The effective classification and management of large numbers of images, and the matching of large images to their corresponding point clouds, are therefore the focus of this research. In this paper, we propose automatic matching of large-scale images and terrestrial LiDAR based on the app synergy of a mobile phone. Firstly, we develop an Android app that takes pictures and records the related classification information. Secondly, all images are automatically grouped using the recorded information. Thirdly, a matching algorithm matches the global and local images. Through the one-to-one correspondence between the global image and the point cloud reflection-intensity image, automatic matching of each image to its corresponding LiDAR point cloud is achieved. Finally, the mapping relationships between global images, local images and intensity images are established from homonymous feature points, yielding a data structure that links each global image, the local images within it, and the point clouds corresponding to those local images, and that supports visual management and querying of the images.

  7. Stormy Day at Jupiter

    NASA Image and Video Library

    2017-05-25

    Small bright clouds dot Jupiter's entire south tropical zone in this image acquired by JunoCam on NASA's Juno spacecraft on May 19, 2017, at an altitude of 7,990 miles (12,858 kilometers). Although the bright clouds appear tiny in this vast Jovian cloudscape, they actually are cloud towers roughly 30 miles (50 kilometers) wide and 30 miles (50 kilometers) high that cast shadows on the clouds below. On Jupiter, clouds this high are almost certainly composed of water and/or ammonia ice, and they may be sources of lightning. This is the first time so many cloud towers have been visible, possibly because the late-afternoon lighting is particularly good at this geometry. https://photojournal.jpl.nasa.gov/catalog/PIA21647

  8. Study of the transport parameters of cloud lightning plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Z. S.; Yuan, P.; Zhao, N.

    2010-11-15

    Three spectra of cloud lightning have been acquired in Tibet (China) using a slitless grating spectrograph. The electrical conductivity, the electron thermal conductivity, and the electron thermal diffusivity of the cloud lightning are calculated, for the first time, by applying the transport theory of air plasma. In addition, we investigate the behavior of these parameters (the temperature, the electron density, the electrical conductivity, the electron thermal conductivity, and the electron thermal diffusivity) in one of the cloud lightning channels. The results show that these parameters decrease slightly along the developing direction of the cloud lightning channel. Moreover, they exhibit similar sudden changes at tortuous positions and at the branch of the cloud lightning channel.

  9. Using LIDAR and UAV-derived point clouds to evaluate surface roughness in a gravel-bed braided river (Vénéon river, French Alps)

    NASA Astrophysics Data System (ADS)

    Vázquez Tarrío, Daniel; Borgniet, Laurent; Recking, Alain; Liebault, Frédéric; Vivier, Marie

    2016-04-01

    The present research focuses on the Vénéon river at Plan du Lac (Massif des Ecrins, France), an alpine braided gravel-bed stream with a glacio-nival hydrological regime draining a catchment area of 316 km2. The study area is a 2.5 km braided reach located immediately upstream of a small hydropower dam. An airborne LIDAR survey was carried out in October 2014 by EDF (the company managing the dam), and data from this survey were available for the present research. The point density of the LIDAR-derived 3D point cloud was between 20-50 points/m2, with a vertical precision of 2-3 cm over flat surfaces. Moreover, between April and June 2015, we carried out a photogrammetric campaign based on aerial images taken with a UAV. The UAV-derived point cloud has a point density of 200-300 points/m2 and a vertical precision over flat control surfaces comparable to that of the LIDAR point cloud (2-3 cm). Simultaneously with the UAV campaign, we took several Wolman samples with the aim of characterizing the grain size distribution of the bed sediment. Wolman samples were taken following a geomorphological criterion (unit bars, head/tail of compound bars). Furthermore, some of the Wolman samples were repeated with the aim of defining the uncertainty of our sampling protocol. The LIDAR and UAV-derived point clouds were processed to check that both point clouds were correctly co-aligned. After that, we estimated bed roughness using the detrended standard deviation of heights in a 40-cm window. All this data treatment was performed in CloudCompare.
Then, we measured the distribution of roughness in the same geomorphological units where we took the Wolman samples and compared it with the grain size distributions measured in the field: the differences between UAV point cloud roughness distributions and the measured grain size distributions (~1-2 cm) are of the same order of magnitude as the differences found between the repeated Wolman samples (~0.5-1.5 cm). Differences with the LIDAR-derived roughness distributions are only slightly higher, which could be due to the lower point density of the LIDAR point clouds.
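    The roughness metric (detrended standard deviation of heights in a window) can be sketched as a per-cell plane fit followed by the standard deviation of the residuals; this is a gridded approximation of the moving-window computation, with hypothetical function and parameter names:

```python
import numpy as np

def roughness_grid(points, cell=0.4):
    """Detrended standard deviation of heights per planimetric cell:
    fit a plane z = ax + by + c in each cell, then take the standard
    deviation of the residuals as a grain-roughness proxy."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    out = {}
    for key in {tuple(k) for k in ij}:
        sel = np.all(ij == key, axis=1)
        p = points[sel]
        if len(p) < 4:                     # too few points to detrend
            continue
        A = np.c_[p[:, 0], p[:, 1], np.ones(len(p))]
        coef, *_ = np.linalg.lstsq(A, p[:, 2], rcond=None)
        out[key] = float(np.std(p[:, 2] - A @ coef))
    return out

# a perfectly planar (tilted) patch should have ~zero roughness
rng = np.random.default_rng(1)
flat = np.c_[rng.uniform(0, 0.39, 100), rng.uniform(0, 0.39, 100), np.zeros(100)]
flat[:, 2] = 0.1 * flat[:, 0] + 0.2 * flat[:, 1] + 0.05
cells = roughness_grid(flat, cell=0.4)
```

    Detrending matters on sloping bars: without the plane fit, the cell's tilt would inflate the height standard deviation and masquerade as grain roughness.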

  10. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction

    PubMed Central

    Berveglieri, Adilson; Liang, Xinlian; Honkavaara, Eija

    2017-01-01

    This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras. PMID:29207468

  11. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction.

    PubMed

    Berveglieri, Adilson; Tommaselli, Antonio M G; Liang, Xinlian; Honkavaara, Eija

    2017-12-02

    This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras.

  12. Parametric Accuracy: Building Information Modeling Process Applied to the Cultural Heritage Preservation

    NASA Astrophysics Data System (ADS)

    Garagnani, S.; Manferdini, A. M.

    2013-02-01

    Since their introduction, modeling tools aimed at architectural design have evolved into today's "digital multi-purpose drawing boards" based on enhanced parametric elements able to originate whole buildings within virtual environments. Semantic splitting and element topology are features that allow objects to be "intelligent" (i.e., self-aware of what kind of element they are and with whom they can interact), thus representing the basics of Building Information Modeling (BIM), a coordinated, consistent and always up-to-date workflow designed to reach higher quality, reliability and cost reductions across the whole design process. Even if BIM was originally intended for new architectures, its ability to store semantically inter-related information can be successfully applied to existing buildings as well, especially if they deserve particular care, such as Cultural Heritage sites. BIM engines can easily manage simple parametric geometries, collapsing them to standard primitives connected through hierarchical relationships; however, when components are generated from existing morphologies, for example by acquiring point clouds with digital photogrammetry or laser scanning equipment, complex abstractions have to be introduced while remodeling elements by hand, since automatic feature extraction in available software is still not effective. In order to introduce a methodology for processing point cloud data in a BIM environment with high accuracy, this paper describes some experiences in the documentation of monumental sites, carried out with a plug-in written for Autodesk Revit and codenamed GreenSpider after its capability to lay out points in space as if they were nodes of an ideal cobweb.

  13. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the low-cost, high-efficiency image collection process and the rich 3D and texture information present in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has promising commercial applications, such as urban planning and first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. The point clouds are then refined by noise removal and surface smoothing. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation and scale differences exist between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix contains the parameters describing the required translation, rotation and scale. The methodology presented in the thesis has been shown to behave well on test data. The robustness of the method is assessed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough result containing a larger offset than for the test data, because of the lower quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration is evaluated by comparing it to the result obtained from manual selection of matched points.
Using the method introduced, point clouds extracted from different image groups can be combined with each other to build a more complete point cloud, or be used as a complement to existing point clouds extracted from other sources. This research will both improve the state of the art of 3D city modeling and inspire new ideas in related fields.
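    The sought transform, combining translation, rotation and scale, is a 3D similarity; one standard closed-form estimator given matched keypoints is Umeyama's method, sketched here as a generic illustration rather than the thesis' exact optimization:

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form (Umeyama) estimate of scale s, rotation R and
    translation t such that dst ≈ s * R @ src_i + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    H = B.T @ A / len(src)                  # cross-covariance
    U, D, Vt = np.linalg.svd(H)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                        # avoid reflections
    R = U @ S @ Vt
    var_src = (A ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# recover a known similarity from synthetic matched keypoints
rng = np.random.default_rng(2)
src = rng.normal(size=(20, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1.0]])
dst = 2.0 * src @ R_true.T + np.array([1.0, 2.0, 3.0])
s, R, t = similarity_transform(src, dst)
```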

  14. Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2016-06-01

    We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
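    Multi-scale neighborhood features of this kind are commonly built from covariance eigenvalues (linearity, planarity, scattering); a simplified sketch using fixed radius neighborhoods rather than the paper's exact multi-scale scheme (radii values hypothetical):

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points, radii=(0.25, 0.5, 1.0)):
    """Per-point covariance eigenvalue features (linearity, planarity,
    scattering) at several neighbourhood radii."""
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3 * len(radii)))
    for j, r in enumerate(radii):
        for i, idx in enumerate(tree.query_ball_point(points, r)):
            nb = points[idx]
            if len(nb) < 3:                 # degenerate neighbourhood
                continue
            lam = np.sort(np.linalg.eigvalsh(np.cov(nb.T)))[::-1]
            if lam[0] <= 0:
                continue
            l1, l2, l3 = lam
            feats[i, 3 * j:3 * j + 3] = ((l1 - l2) / l1,
                                         (l2 - l3) / l1,
                                         l3 / l1)
    return feats

# points on a straight line should score high linearity at every scale
line_pts = np.c_[np.arange(21) * 0.1, np.zeros(21), np.zeros(21)]
feats = eigen_features(line_pts)
```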

  15. A Voxel-Based Approach for Imaging Voids in Three-Dimensional Point Clouds

    NASA Astrophysics Data System (ADS)

    Salvaggio, Katie N.

    Geographically accurate scene models have enormous potential beyond simple visualization, particularly with regard to automated scene generation. In recent years, thanks to ever-increasing computational efficiency, there has been significant growth in both the computer vision and photogrammetry communities pertaining to automatic scene reconstruction from multiple-view imagery. The result of these algorithms is a three-dimensional (3D) point cloud, which can be used to derive a final model using surface reconstruction techniques. However, the fidelity of these point clouds has not been well studied, and voids often exist within the point cloud. Voids exist in texturally difficult areas, as well as in areas where multiple views were not obtained during collection, where constant occlusion existed due to collection angles or overlapping scene geometry, or in regions that failed to triangulate accurately. It may be possible to fill in small voids in the scene using surface reconstruction or hole-filling techniques, but this is not the case with larger, more complex voids, and attempting to reconstruct them using only the knowledge of the incomplete point cloud is neither accurate nor aesthetically pleasing. A method is presented for identifying voids in point clouds by using a voxel-based approach to partition the 3D space. By using the collection geometry and information derived from the point cloud, it is possible to detect unsampled voxels such that voids can be identified. This analysis takes into account the location of the camera and the 3D points themselves to capitalize on the idea of free space: voxels that lie on the ray between the camera and a point are devoid of obstruction, as a clear line of sight is a necessary requirement for reconstruction.
Using this approach, voxels are classified into three categories: occupied (contains points from the point cloud), free (rays from the camera to the point passed through the voxel), and unsampled (does not contain points and no rays passed through the area). Voids in the voxel space are manifested as unsampled voxels. A similar line-of-sight analysis can then be used to pinpoint locations at aircraft altitude at which the voids in the point clouds could theoretically be imaged. This work is based on the assumption that inclusion of more images of the void areas in the 3D reconstruction process will reduce the number of voids in the point cloud that were a result of lack of coverage. Voids resulting from texturally difficult areas will not benefit from more imagery in the reconstruction process, and thus are identified and removed prior to the determination of future potential imaging locations.
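    The occupied/free/unsampled classification can be sketched by stepping along each camera-to-point ray and marking the traversed voxels; anything never occupied and never traversed remains unsampled, i.e. a void candidate. Uniform ray stepping is a simplification of exact voxel traversal, and all names and parameters here are hypothetical:

```python
import numpy as np

def classify_voxels(points, camera, voxel=1.0, step=0.25):
    """Return 'occupied' voxels (contain points) and 'free' voxels
    (a camera-to-point ray passed through); every other voxel in the
    volume of interest is implicitly unsampled."""
    def key(p):
        return tuple(np.floor(p / voxel).astype(int))

    occupied = {key(p) for p in points}
    free = set()
    for p in points:
        d = p - camera
        n = int(np.linalg.norm(d) / step)
        for t in range(1, n):               # sample strictly between ends
            v = key(camera + d * t / n)
            if v not in occupied:
                free.add(v)
    return occupied, free

# single point seen from the origin: voxels along the ray become free
target = np.array([[4.5, 0.5, 0.5]])
occupied, free = classify_voxels(target, camera=np.zeros(3))
```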

  16. Characterizing Sorghum Panicles using 3D Point Clouds

    NASA Astrophysics Data System (ADS)

    Lonesome, M.; Popescu, S. C.; Horne, D. W.; Pugh, N. A.; Rooney, W.

    2017-12-01

    To address the demands of population growth and the impacts of global climate change, plant breeders must increase crop yield through genetic improvement. However, plant phenotyping, the characterization of a plant's physical attributes, remains a primary bottleneck in modern crop improvement programs. 3D point clouds generated from terrestrial laser scanning (TLS) and unmanned aerial system (UAS) based structure from motion (SfM) are a promising data source to increase the efficiency of screening plant material in breeding programs. This study develops and evaluates methods for characterizing sorghum (Sorghum bicolor) panicles (heads) in field plots from both TLS and UAS-based SfM point clouds. The TLS point cloud over an experimental sorghum field at the Texas A&M farm in Burleson County, TX, was collected using a FARO Focus X330 3D laser scanner. The SfM point cloud was generated from UAS imagery captured using a Phantom 3 Professional UAS at 10 m altitude and 85% image overlap. The panicle detection method applies point cloud reflectance, height and point density attributes characteristic of sorghum panicles to detect them and estimate their dimensions (panicle length and width) through image classification and clustering procedures. We compare the derived panicle counts and panicle sizes with field-based and manually digitized measurements in selected plots and study the strengths and limitations of each data source for sorghum panicle characterization.

  17. Measuring the Internal Structure and Physical Conditions in Star and Planet Forming Cloud Cores: Toward a Quantitative Description of Cloud Evolution

    NASA Technical Reports Server (NTRS)

    Lada, Charles J.

    2005-01-01

    This grant funds a research program to use infrared extinction measurements to probe the detailed structure of dark molecular cloud cores and investigate the physical conditions which give rise to star and planet formation. The goals of this program are to acquire, reduce and analyze deep infrared and molecular-line observations of a carefully selected sample of nearby dark clouds in order to determine the detailed initial conditions for star formation from quantitative measurements of the internal structure of starless cloud cores, and to quantitatively investigate the evolution of such structure through the star and planet formation process. Progress toward these goals during the second year of this grant is discussed.

  18. Measuring the Internal Structure and Physical Conditions in Star and Planet Forming Cloud Cores: Towards a Quantitative Description of Cloud Evolution

    NASA Technical Reports Server (NTRS)

    Lada, Charles J.

    2004-01-01

    This grant funds a research program to use infrared extinction measurements to probe the detailed structure of dark molecular cloud cores and investigate the physical conditions which give rise to star and planet formation. The goals of this program are to acquire, reduce and analyze deep infrared and molecular-line observations of a carefully selected sample of nearby dark clouds in order to determine the detailed initial conditions for star formation from quantitative measurements of the internal structure of starless cloud cores and to quantitatively investigate the evolution of such structure through the star and planet formation process.

  19. Liquid water content variation with altitude in clouds over Europe

    NASA Astrophysics Data System (ADS)

    Andreea, Boscornea; Sabina, Stefan

    2013-04-01

    Cloud water content is one of the most fundamental measurements in cloud physics. Knowledge of the vertical variability of cloud microphysical characteristics is important for a variety of reasons: the profile of liquid water content (LWC) partially governs the radiative transfer of cloudy atmospheres, and LWC profiles improve our understanding of the processes acting to form and maintain cloud systems, which may lead to improvements in the representation of clouds in numerical models. Presently, in situ airborne measurements provide the most accurate information about cloud microphysical characteristics. This information can be used for verification of both numerical models and cloud remote sensing techniques. The aim of this paper is to analyze liquid water content (LWC) measurements in clouds acquired during aircraft flights. The aircraft and its platform ATMOSLAB - Airborne Laboratory for Environmental Atmospheric Research are the property of the National Institute for Aerospace Research "Elie Carafoli" (INCAS), Bucharest, Romania. The airborne laboratory, equipped for special research missions, is based on a Hawker Beechcraft - King Air C90 GTx aircraft and carries a CAPS sensor system - Cloud, Aerosol and Precipitation Spectrometer (30 bins, 0.51-50 µm). The processed and analyzed measurements were acquired during 4 flights from Romania (Bucharest, 44°25'57″N 26°06'14″E) to Germany (Berlin, 52°30'2″N 13°23'56″E) above the same region of Europe. The flight path started from Bucharest toward the western part of Romania, continuing above Hungary and Austria at a cruise altitude between 6000-8500 m, and reached Berlin after 5 hours. In total we acquired data during approximately 20 flight hours, and we present the vertical and horizontal LWC variations for different cloud types. The LWC values for each cloud type are similar to values from the literature. The vertical LWC profiles measured during takeoff and landing of the aircraft show their dependence on the meteorological parameters.

  20. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.

    PubMed

    Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio

    2009-01-01

    3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and are reported in this paper. Two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, which covers evaluation of the camera warm-up time, evaluation of the distance measurement error, and a study of how the camera orientation with respect to the observed object influences the distance measurements. The second aspect concerns the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high contrast targets.
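One common way to express a distance calibration like the one described is a per-camera correction fitted against reference distances. The sketch below fits only a linear bias via least squares; this is a simplified stand-in (real ToF errors also include periodic "wiggling" components that a line cannot capture), and all names are illustrative.

```python
def fit_distance_correction(measured, true):
    """Least-squares linear correction: d_true ~ a * d_meas + b.

    measured, true: equal-length lists of distances (e.g. camera
    readings against a calibrated reference track).
    """
    n = len(measured)
    sx = sum(measured)
    sy = sum(true)
    sxx = sum(m * m for m in measured)
    sxy = sum(m * t for m, t in zip(measured, true))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

Applying `a * d + b` to each subsequent reading then removes the fitted systematic offset and scale error.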

  1. The Use of Video-Tacheometric Technology for Documenting and Analysing Geometric Features of Objects

    NASA Astrophysics Data System (ADS)

    Woźniak, Marek; Świerczyńska, Ewa; Jastrzębski, Sławomir

    2015-12-01

    This paper analyzes selected aspects of the use of video-tacheometric technology for inventorying and documenting geometric features of objects. Data were collected with the video-tacheometer Topcon Image Station IS-3 and the professional camera Canon EOS 5D Mark II. During the field work and data processing the following experiments were performed: multiple determination of the camera interior orientation parameters and distortion parameters for five lenses with different focal lengths, and reflectorless measurement of elevation profiles for the inventory of the decorative surface wall of the building of the Warsaw Ballet School. The research analysed the process of acquiring and integrating video-tacheometric data, as well as the process of combining the point cloud acquired by the video-tacheometer during scanning with independent photographs taken by a digital camera. On the basis of the tests performed, the utility of video-tacheometric technology in geodetic surveys of the geometric features of buildings was established.

  2. Efficient terrestrial laser scan segmentation exploiting data structure

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa

    2016-09-01

    New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which samples at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. The segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models, and consequently does not require setting parameters for them. Unlike common geometric point cloud segmentation methods, the proposed method employs colorimetric and intensity data as additional sources of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) quality of segmentation and thereby the feasibility of the proposed algorithm, which is also more efficient than Random Sample Consensus (RANSAC), a common approach for point cloud segmentation.
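The panorama construction the paper relies on amounts to mapping each scanner-centered point to a (row, column) pixel by its spherical angles. A minimal sketch, with an assumed 1-degree angular resolution (real scanners use much finer increments) and only range stored per pixel:

```python
import math

def scan_to_panorama(points, h_res_deg=1.0, v_res_deg=1.0):
    """Map scanner-centered 3D points to (row, col) panorama pixels.

    Azimuth maps to column, elevation to row, exploiting the fixed
    angular increments of a TLS. Each pixel could equally carry
    intensity, normals, or color as separate input layers for an
    image segmentation algorithm.
    """
    pano = {}
    for x, y, z in points:
        rng = math.sqrt(x * x + y * y + z * z)
        az = math.degrees(math.atan2(y, x)) % 360.0
        el = math.degrees(math.asin(z / rng))
        col = int(az / h_res_deg)
        row = int((90.0 - el) / v_res_deg)  # zenith at row 0
        pano[(row, col)] = rng  # in practice keep the nearest return
    return pano
```

Segments computed on this image can then be mapped back to 3D simply by remembering which points fell in each pixel.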

  3. Pole-Like Street Furniture Decomposition in Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Li, F.; Oude Elberink, S.; Vosselman, G.

    2016-06-01

    Automatic semantic interpretation of street furniture has become a popular topic in recent years. Current studies detect street furniture as connected components of points above street level. Street furniture classification based on properties of such components suffers from large intra-class variability of shapes and cannot deal with mixed classes like traffic signs attached to light poles. In this paper, we focus on the decomposition of point clouds of pole-like street furniture. A novel street furniture decomposition method is proposed which consists of three steps: (i) acquisition of prior knowledge, (ii) pole extraction, and (iii) component separation. For the pole extraction, a novel global pole extraction approach is proposed to handle three different cases of street furniture. In an evaluation involving the decomposition of 27 different instances of street furniture, we demonstrate that our method decomposes mixed-class street furniture into poles and components corresponding to different functionalities.

  4. Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data

    NASA Astrophysics Data System (ADS)

    Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun

    2014-11-01

    Ground-based LiDAR is one of the most effective city modeling tools at present and has been widely used for three-dimensional reconstruction of outdoor objects. For indoor objects, however, there are technical bottlenecks due to the lack of a GPS signal. In this paper, based on high-precision indoor point cloud data obtained with advanced indoor mobile LiDAR measuring equipment, high-precision models were built for all indoor ancillary facilities. The point cloud data we employed also contain a color feature, extracted by fusion with CCD images; the data thus carry both geometric and spectral information, which can be used for constructing object surfaces and restoring the color and texture of the geometric model. Three-dimensional reconstruction of all indoor elements was realized on the Autodesk CAD platform with the help of the PointSense plug-in. Specifically, Pointools Edit Pro was adopted to edit the point cloud, and different types of indoor point cloud data were processed, including data format conversion, outline extraction, and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world interior was completed. Experimental results showed that high-precision 3D point cloud data obtained by indoor mobile measuring equipment can be used for 3D reconstruction of all indoor elements, and that the methods proposed in this paper realize this reconstruction efficiently. Moreover, the modeling precision could be kept within 5 cm, which is a satisfactory result.

  5. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    Current point cloud registration software has high hardware requirements and a heavy interactive workload, and the sources of the better-performing packages are not open. To address this, a two-step registration method is proposed in this paper, based on normal-vector distribution features for coarse registration followed by the iterative closest point (ICP) algorithm. The method combines the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and a calculation model for the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix that completes the rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has clear advantages in both time and precision for large point clouds.
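The fine-registration step can be illustrated with a minimal ICP loop. The sketch below is 2D only, so the rigid transform has a closed form without matrix decompositions; it assumes the paper's coarse step (FPFH/normal-vector features) has already roughly pre-aligned the clouds, so brute-force nearest neighbours find correct correspondences. It is an illustration of the ICP principle, not the authors' optimized algorithm.

```python
import math

def icp_2d(src, dst, iters=20):
    """Iteratively align 2D point list src onto dst (rigid transform)."""
    pts = list(src)
    for _ in range(iters):
        # 1. nearest-neighbour correspondences (brute force)
        pairs = [(p, min(dst, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2))
                 for p in pts]
        # 2. closed-form best rigid transform (2D Procrustes)
        n = len(pairs)
        cpx = sum(p[0] for p, _ in pairs) / n
        cpy = sum(p[1] for p, _ in pairs) / n
        cqx = sum(q[0] for _, q in pairs) / n
        cqy = sum(q[1] for _, q in pairs) / n
        s = c = 0.0
        for p, q in pairs:
            px, py = p[0] - cpx, p[1] - cpy
            qx, qy = q[0] - cqx, q[1] - cqy
            c += px * qx + py * qy
            s += px * qy - py * qx
        th = math.atan2(s, c)
        ct, st = math.cos(th), math.sin(th)
        tx = cqx - (ct * cpx - st * cpy)
        ty = cqy - (st * cpx + ct * cpy)
        # 3. apply the transform and repeat
        pts = [(ct * x - st * y + tx, st * x + ct * y + ty) for x, y in pts]
    return pts
```

Real TLS registration works in 3D, where the rotation is usually recovered with an SVD of the cross-covariance matrix instead of the `atan2` form used here.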

  6. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    NASA Astrophysics Data System (ADS)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  7. An efficient cloud detection method for high resolution remote sensing panchromatic imagery

    NASA Astrophysics Data System (ADS)

    Li, Chaowei; Lin, Zaiping; Deng, Xinpu

    2018-04-01

    In order to increase the accuracy of cloud detection for remote sensing satellite imagery, we propose an efficient cloud detection method for remote sensing satellite panchromatic images. The method includes three main steps. First, an adaptive intensity threshold combined with a median filter is adopted to extract coarse cloud regions. Second, a guided filtering process is conducted to strengthen differences in textural features, and texture is then detected via a gray-level co-occurrence matrix computed on the acquired texture detail image. Finally, candidate cloud regions are extracted as the intersection of the two coarse cloud regions above, and an adaptive morphological dilation is applied to refine them and recover thin clouds at their boundaries. The experimental results demonstrate the effectiveness of the proposed method.
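The first step can be sketched directly. The choice of mean-plus-k-standard-deviations as the "adaptive" threshold and the 3x3 median window are assumptions for illustration; the paper does not specify its exact rule.

```python
def coarse_cloud_mask(img, k=1.0):
    """Adaptive intensity threshold plus 3x3 median filter.

    img: 2D list of grayscale intensities. Pixels brighter than
    mean + k*std become cloud candidates; the median filter then
    suppresses isolated bright pixels (borders are left unfiltered).
    """
    h, w = len(img), len(img[0])
    flat = [v for row in img for v in row]
    mean = sum(flat) / len(flat)
    std = (sum((v - mean) ** 2 for v in flat) / len(flat)) ** 0.5
    thr = mean + k * std
    mask = [[1 if v > thr else 0 for v in row] for row in img]
    out = [row[:] for row in mask]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = sorted(mask[i + di][j + dj]
                         for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = win[4]  # median of the 9 values
    return out
```

The later steps would intersect this mask with the texture-based detection before the morphological refinement.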

  8. Proactive replica checking to assure reliability of data in cloud storage with minimum replication

    NASA Astrophysics Data System (ADS)

    Murarka, Damini; Maheswari, G. Uma

    2017-11-01

    The two major issues for cloud storage systems are data reliability and storage cost. For data reliability protection, the multi-replica strategy used in most current clouds incurs huge storage consumption, leading to a large storage cost for applications in the cloud. This paper presents a cost-efficient data reliability mechanism named PRCR to reduce cloud storage consumption. PRCR ensures the reliability of large cloud datasets with minimum replication, and can also serve as a cost-effective benchmark for replication. Evaluation shows that, compared with the conventional three-replica approach, PRCR can cut storage consumption by from one-third up to two-thirds, hence significantly reducing the cloud storage cost.

  9. Object-Based Coregistration of Terrestrial Photogrammetric and ALS Point Clouds in Forested Areas

    NASA Astrophysics Data System (ADS)

    Polewski, P.; Erickson, A.; Yao, W.; Coops, N.; Krzystek, P.; Stilla, U.

    2016-06-01

    Airborne Laser Scanning (ALS) and terrestrial photogrammetry are methods applicable for mapping forested environments. While ground-based techniques provide valuable information about the forest understory, the measured point clouds are normally expressed in a local coordinate system, whose transformation into a georeferenced system requires additional effort. In contrast, ALS point clouds are usually georeferenced, yet the point density near the ground may be poor under dense overstory conditions. In this work, we propose to combine the strengths of the two data sources by co-registering the respective point clouds, thus enriching the georeferenced ALS point cloud with detailed understory information in a fully automatic manner. Due to markedly different sensor characteristics, coregistration methods which expect a high geometric similarity between keypoints are not suitable in this setting. Instead, our method focuses on the object (tree stem) level. We first calculate approximate stem positions in the terrestrial and ALS point clouds and construct, for each stem, a descriptor which quantifies the 2D and vertical distances to other stem centers (at ground height). Then, the similarities between all descriptor pairs from the two point clouds are calculated, and standard graph maximum matching techniques are employed to compute corresponding stem pairs (tiepoints). Finally, the tiepoint subset yielding the optimal rigid transformation between the terrestrial and ALS coordinate systems is determined. We test our method on simulated tree positions and a plot situated in the northern interior of the Coast Range in western Oregon, USA, using ALS data (76 x 121 m2) and a photogrammetric point cloud (33 x 35 m2) derived from terrestrial photographs taken with a handheld camera. Results on both simulated and real data show that the proposed stem descriptors are discriminative enough to derive good correspondences. 
Specifically, for the real plot data, 24 corresponding stems were coregistered with an average 2D position deviation of 66 cm.
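The stem-descriptor idea can be illustrated compactly: describe each stem by a histogram of its 2D distances to all other stems, then match descriptors across the two clouds. The sketch below simplifies the paper's method in two labeled ways: it drops the vertical-distance component of the descriptor, and it matches greedily rather than by graph maximum matching.

```python
import math

def stem_descriptor(center, others, n_bins=8, bin_w=5.0):
    """Histogram of 2D distances from one stem center to all others."""
    h = [0] * n_bins
    for o in others:
        d = math.hypot(o[0] - center[0], o[1] - center[1])
        h[min(int(d // bin_w), n_bins - 1)] += 1
    return h

def match_stems(desc_a, desc_b):
    """Greedy pairing of descriptors by L1 distance.

    Returns (i, j) index pairs: stem i in cloud A matched to stem j
    in cloud B. These pairs play the role of tiepoints from which a
    rigid transform between the two coordinate systems is estimated.
    """
    pairs, used = [], set()
    for i, da in enumerate(desc_a):
        best, best_d = None, float("inf")
        for j, db in enumerate(desc_b):
            if j in used:
                continue
            d = sum(abs(x - y) for x, y in zip(da, db))
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs
```

Because the descriptor only encodes relative distances between stems, it is invariant to the unknown rotation and translation between the terrestrial and ALS coordinate systems, which is what makes it usable before registration.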

  10. Cloud Height Estimation with a Single Digital Camera and Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Carretas, Filipe; Janeiro, Fernando M.

    2014-05-01

    Clouds influence the local weather and the global climate and are an important parameter in weather prediction models. Clouds are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as at most small aerodromes where it is not economically viable to install instruments for assisted flying. It is therefore important to develop low-cost and robust systems that can be easily deployed in the field, enabling large-scale acquisition of cloud parameters. Recently, the authors developed a low-cost system for the measurement of cloud base height using stereo vision and digital photography. However, the stereo nature of that system presented some challenges: the relative camera orientation requires calibration, and the two cameras need to be synchronized so that the photos from both cameras are acquired simultaneously. In this work we present a new system that estimates cloud height between 1000 and 5000 meters. The prototype is composed of one digital camera controlled by a Raspberry Pi and is installed at Centro de Geofísica de Évora (CGE) in Évora, Portugal. The camera is periodically triggered to acquire images of the overhead sky, and the photos are downloaded to the Raspberry Pi, which forwards them to a central computer that processes the images and estimates the cloud height in real time. Estimating cloud height from a single image requires a computer model that is able to learn from previous experience and perform pattern recognition. The model proposed in this work is an Artificial Neural Network (ANN) that was previously trained with cloud features at different heights. The chosen network has three layers, with six parameters in the input layer, 12 neurons in the hidden intermediate layer, and an output layer with only one output. The six input parameters are the average intensity values and the intensity standard deviation of each RGB channel.
The output parameter in the output layer is the cloud height estimated by the ANN. The training procedure was performed, using the back-propagation method, on a set of 260 different clouds with heights in the range [1000, 5000] m. The training of the ANN resulted in a correlation ratio of 0.74, so the trained ANN can be used to estimate the cloud height. The system can also measure the wind speed and direction at cloud height by measuring the displacement, in pixels, of a cloud feature between consecutively acquired photos, and the geographical north direction can be estimated through sequential night images with long exposure times. A further advantage of this single-camera system is that no camera calibration or synchronization is needed, which significantly reduces the cost and complexity of field deployment of cloud height measurement systems based on digital photography.
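The 6-12-1 architecture described above is small enough to write out as a plain forward pass. This is a sketch of inference only, under stated assumptions: sigmoid hidden units and an output rescaled onto the trained [1000, 5000] m range are our guesses, since the abstract does not give the activation functions or output scaling, and the weights would come from the back-propagation training (not shown).

```python
import math

def ann_forward(features, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a 6-12-1 network.

    features: 6 values (mean and std of each RGB channel).
    w_hidden: 12 lists of 6 weights; b_hidden: 12 biases.
    w_out: 12 weights; b_out: scalar bias.
    Returns an estimated cloud height in meters.
    """
    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))

    hidden = [sigmoid(sum(w * x for w, x in zip(ws, features)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    out = sum(w * h for w, h in zip(w_out, hidden)) + b_out
    # Map the single output onto the trained height range (assumed).
    return 1000.0 + 4000.0 * sigmoid(out)
```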

  11. Large-scale urban point cloud labeling and reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu

    2018-04-01

    The large number of object categories and many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges for point cloud classification. In this paper, a novel framework is proposed for classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified linear units neural network named ReLu-NN, in which rectified linear units (ReLU) are taken as the activation function instead of the traditional sigmoid in order to speed up convergence. Since the features of the point cloud are sparse, we reduce the number of active neurons by dropout to avoid over-fitting during training. The set of feature descriptors for each 3D point is encoded through self-taught learning, forming a discriminative feature representation which is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then reconstructed into lightweight 3D models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the ReLu-NN can classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, so the cost of intensive parameter tuning is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than related algorithms in terms of classification accuracy and reconstruction quality.
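The two network ingredients the abstract singles out, ReLU activation and dropout, can be shown in one small layer function. This is a generic sketch, not the ReLu-NN implementation; the inverted-dropout scaling is a standard convention assumed here, and all names are illustrative.

```python
import random

def relu_layer(x, weights, biases, drop_p=0.0, training=False):
    """One fully connected layer with ReLU and optional dropout.

    ReLU's non-saturating response (max(0, v) rather than a sigmoid)
    is what speeds up convergence; dropout randomly zeroes
    activations during training to curb over-fitting on sparse
    point-cloud features (inverted-dropout form: survivors are
    scaled by 1/(1-p) so inference needs no rescaling).
    """
    out = []
    for ws, b in zip(weights, biases):
        a = max(0.0, sum(w * xi for w, xi in zip(ws, x)) + b)
        if training and drop_p > 0.0:
            a = 0.0 if random.random() < drop_p else a / (1.0 - drop_p)
        out.append(a)
    return out
```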

  12. Indirect and Semi-Direct Aerosol Campaign: The Impact of Arctic Aerosols on Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McFarquhar, Greg; Ghan, Steven J.; Verlinde, J.

    2011-02-01

    A comprehensive dataset of microphysical and radiative properties of aerosols and clouds in the arctic boundary layer in the vicinity of Barrow, Alaska was collected in April 2008 during the Indirect and Semi-Direct Aerosol Campaign (ISDAC) sponsored by the Department of Energy Atmospheric Radiation Measurement (ARM) and Atmospheric Science Programs. The primary aim of ISDAC was to examine indirect effects of aerosols on clouds that contain both liquid and ice water. The experiment utilized the ARM permanent observational facilities at the North Slope of Alaska (NSA) in Barrow. These include a cloud radar, a polarized micropulse lidar, and an atmospheric emitted radiance interferometer, as well as instruments specially deployed for ISDAC measuring aerosol, ice fog, precipitation and spectral shortwave radiation. The National Research Council of Canada Convair-580 flew 27 sorties during ISDAC, collecting data using an unprecedented 42 cloud and aerosol instruments for more than 100 hours on 12 different days. Data were obtained above, below and within single-layer stratus on 8 April and 26 April 2008. These data enable a process-oriented understanding of how aerosols affect the microphysical and radiative properties of arctic clouds influenced by different surface conditions. Observations acquired on a heavily polluted day, 19 April 2008, are enhancing this understanding. Data acquired in cirrus on transit flights between Fairbanks and Barrow are improving our understanding of the performance of cloud probes in ice. Ultimately the ISDAC data will be used to improve the representation of cloud and aerosol processes in models covering a variety of spatial and temporal scales, and to determine the extent to which long-term surface-based measurements can provide retrievals of aerosols, clouds, precipitation and radiative heating in the Arctic.

  13. A Cloudy Day on Mars

    NASA Technical Reports Server (NTRS)

    2002-01-01

    (Released 23 April 2002) The Science This image, centered near 49.7 N and 43.0 W (317.0 E), displays splotchy water ice clouds that obscure the surface. Most of Mars was in a relatively clear period when this image was acquired, which is why many of the other THEMIS images acquired during the same period do not have obvious signs of atmospheric dust or water ice clouds. This image is far enough north to catch the edge of the north polar hood that develops during the northern winter. This is a cap of water ice and CO2 ice clouds that forms over the Martian north pole. Mars has a number of interesting atmospheric phenomena which THEMIS will be able to view in addition to water ice clouds, including dust devils, dust storms, and tracking atmospheric temperatures with the infrared camera. The Story Anyone who's been on an airplane in a storm knows how clouds on Earth can block the view below. The thin water ice clouds on Mars might make things slightly blurry, but at least we can still see the surface. While the surface features may not be as clear in this image, it's actually kind of fascinating to see clouds at work, because we can get a sense of how the north pole on Mars influences the weather and the climate. In this image, the north pole is responsible for the presence of the clouds. Made of water ice and carbon dioxide, these clouds 'mist out' in an atmospheric 'hood' that caps the surface during the northern Martian winter, hiding it from full view of eager observers here on Earth.

  14. Applications of UAS-SfM for coastal vulnerability assessment: Geomorphic feature extraction and land cover classification from fine-scale elevation and imagery data

    NASA Astrophysics Data System (ADS)

    Sturdivant, E. J.; Lentz, E. E.; Thieler, E. R.; Remsen, D.; Miner, S.

    2016-12-01

    Characterizing the vulnerability of coastal systems to storm events, chronic change and sea-level rise can be improved with high-resolution data that capture timely snapshots of biogeomorphology. Imagery acquired with unmanned aerial systems (UAS) coupled with structure from motion (SfM) photogrammetry can produce high-resolution topographic and visual reflectance datasets that rival or exceed lidar and orthoimagery. Here we compare SfM-derived data to lidar and visual imagery for their utility in a) geomorphic feature extraction and b) land cover classification for coastal habitat assessment. At a beach and wetland site on Cape Cod, Massachusetts, we used UAS to capture photographs over a 15-hectare coastal area with a resulting pixel resolution of 2.5 cm. We used standard SfM processing in Agisoft PhotoScan to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM). The SfM-derived products have a horizontal uncertainty of +/- 2.8 cm. Using the point cloud in an extraction routine developed for lidar data, we determined the position of shorelines, dune crests, and dune toes. We used the output imagery and DEM to map land cover with a pixel-based supervised classification. The dense and highly precise SfM point cloud enabled extraction of geomorphic features with greater detail than with lidar. The feature positions are reported with near-continuous coverage and sub-meter accuracy. The orthomosaic image produced with SfM provides visual reflectance with higher resolution than those available from aerial flight surveys, which enables visual identification of small features and thus aids the training and validation of the automated classification. 
We find that the high resolution and correspondingly high density of UAS data require some simple modifications to existing measurement techniques and processing workflows, and that the types of data and the quality provided are equivalent to, and in some cases surpass, those of data collected using other methods.

  15. Superposition and alignment of labeled point clouds.

    PubMed

    Fober, Thomas; Glinca, Serghei; Klebe, Gerhard; Hüllermeier, Eyke

    2011-01-01

    Geometric objects are often represented approximately in terms of a finite set of points in three-dimensional Euclidean space. In this paper, we extend this representation to what we call labeled point clouds. A labeled point cloud is a finite set of points, where each point is not only associated with a position in three-dimensional space, but also with a discrete class label that represents a specific property. This type of model is especially suitable for modeling biomolecules such as proteins and protein binding sites, where a label may represent an atom type or a physico-chemical property. Proceeding from this representation, we address the question of how to compare two labeled point clouds in terms of their similarity. Using fuzzy modeling techniques, we develop a suitable similarity measure as well as an efficient evolutionary algorithm to compute it. Moreover, we consider the problem of establishing an alignment of the structures in the sense of a one-to-one correspondence between their basic constituents. From a biological point of view, alignments of this kind are of great interest, since mutually corresponding molecular constituents offer important information about evolution and heredity, and can also serve as a means to explain a degree of similarity. In this paper, we therefore develop a method for computing pairwise or multiple alignments of labeled point clouds. To this end, we proceed from an optimal superposition of the corresponding point clouds and construct an alignment which is as much as possible in agreement with the neighborhood structure established by this superposition. We apply our methods to the structural analysis of protein binding sites.

  16. Analysis and modeling of summertime convective cloud and precipitation structure over the Southeastern United States

    NASA Technical Reports Server (NTRS)

    Knupp, Kevin R.

    1991-01-01

    A summary of an investigation of deep convective cloud systems that typify the summertime subtropical environment of northern Alabama is presented. The major portion of the research effort included analysis of data acquired during the 1986 Cooperative Huntsville Meteorological Experiment (COHMEX), which consisted of the joint programs Satellite Precipitation and Cloud Experiment (SPACE) under NASA direction, the Microburst and Severe Thunderstorm (MIST) Program under NSF sponsorship, and the FAA-Lincoln Laboratory Weather Study (FLOWS). This work relates closely to the SPACE component of COHMEX, one of the general goals of which was to further the understanding of kinematic and precipitation structure of convective cloud systems. The special observational platforms that were available under the SPACE/COHMEX Program are shown. The original objectives included studies of both isolated deep convection and of (small) mesoscale convective systems that are observed in the Southeast environment. In addition, it was proposed to include both observational and comparative numerical modeling studies of these characteristic cloud systems. Changes in scope were made during the course of this investigation to better accommodate both the manpower available and the data that were acquired. A greater emphasis was placed on determination of the internal structure of small mesoscale convective systems, and the relationship of internal dynamical and microphysical processes to the observed cloud top behavior as inferred from GOES IR (30 min) data. The major accomplishments of this investigation are presented.

  17. Cloud Detection of Optical Satellite Images Using Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lee, Kuan-Yi; Lin, Chao-Hung

    2016-06-01

    Cloud covers are generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, such as image compositing, correction of atmospheric effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding has been a common and useful method for cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases. In other words, thresholding-based methods are data-sensitive. Besides, there are many exceptions to handle, and the environment changes dynamically, so using the same threshold value on various data is not effective. In this study, a threshold-free method based on Support Vector Machine (SVM) is proposed, which can avoid the abovementioned problems. The main idea of this study is to adopt a statistical model to detect clouds instead of a subjective thresholding-based method. The features used in a classifier are the key to a successful classification; therefore, the Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on physical characteristics of clouds, is used to distinguish clouds from other objects. Similarly, the Fmask algorithm (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. The feature extraction is therefore based on the ACCA algorithm and Fmask. Spatial and temporal information is also important for satellite images; consequently, the co-occurrence matrix and the temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud and the others. In experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and images containing the landscapes of agriculture, snow areas, and islands are tested. Experimental results demonstrate that the detection accuracy of the proposed method is better than that of related methods.
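The threshold-free idea can be sketched with a minimal linear SVM trained by sub-gradient descent on the hinge loss. The two per-pixel features (a brightness-like and a temperature-like value) and their values are synthetic illustrations, not the paper's ACCA/Fmask-derived features.

```python
# Minimal linear SVM sketch: learn a cloud / non-cloud decision from labeled
# feature vectors instead of hand-picking a threshold.

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Train weights w and bias b by sub-gradient descent on the hinge loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):                   # yi in {-1, +1}
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:                         # sample violates the margin
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:                                  # only apply regularisation
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy features (brightness, temperature-like value): clouds are bright and cold.
X = [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15), (0.2, 0.8), (0.3, 0.9), (0.25, 0.7)]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
print(predict(w, b, (0.95, 0.05)))  # +1: classified as cloud
```

Unlike a fixed threshold, the decision boundary here is fit to whatever training data is supplied, which is what makes the approach less data-sensitive.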

  18. Continuum Limit of Total Variation on Point Clouds

    NASA Astrophysics Data System (ADS)

    García Trillos, Nicolás; Slepčev, Dejan

    2016-04-01

    We consider point clouds obtained as random samples of a measure on a Euclidean domain. A graph representing the point cloud is obtained by assigning weights to edges based on the distance between the points they connect. Our goal is to develop mathematical tools needed to study the consistency, as the number of available data points increases, of graph-based machine learning algorithms for tasks such as clustering. In particular, we study when the cut capacity, and more generally total variation, on these graphs is a good approximation of the perimeter (total variation) in the continuum setting. We address this question in the setting of Γ-convergence. We obtain almost optimal conditions on the scaling, as the number of points increases, of the size of the neighborhood over which the points are connected by an edge for the Γ-convergence to hold. Taking the limit is enabled by a transportation-based metric which allows us to suitably compare functionals defined on different point clouds.
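The discrete objects in question are easy to compute. Below is a sketch: a random sample from the unit square, an ε-neighbourhood graph with a decreasing distance-based weight profile, and the graph total variation of a labelling. The weight profile, ε, and normalisation are illustrative choices consistent with the setting, not the paper's exact scaling analysis.

```python
# Graph total variation of a labelling u on a random point cloud:
# TV(u) = (normalisation) * sum over edges of w_ij * |u_i - u_j|.
import math, random

random.seed(1)
points = [(random.random(), random.random()) for _ in range(200)]  # sample of the measure
eps = 0.15                                                         # connectivity radius

def graph_total_variation(points, u, eps):
    n = len(points)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            if d < eps:                      # points close enough share an edge
                total += (1.0 - d / eps) * abs(u[i] - u[j])
    # one common normalisation so the value stays O(1) as n grows
    return 2 * total / (n * n * eps ** 3)

# Indicator of the half-plane {x < 0.5}: its continuum perimeter in the unit
# square is 1, so the graph TV should be of that order (up to a constant
# depending on the weight profile).
u = [1.0 if p[0] < 0.5 else 0.0 for p in points]
tv = graph_total_variation(points, u, eps)
print(round(tv, 3))
```

A constant labelling has zero graph TV, and as n grows (with ε shrinking at an admissible rate) the discrete value concentrates around a weighted continuum perimeter, which is the consistency statement the paper makes precise.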

  19. Point cloud registration from local feature correspondences-Evaluation on challenging datasets.

    PubMed

    Petricek, Tomas; Svoboda, Tomas

    2017-01-01

    Registration of laser scans, or point clouds in general, is a crucial step of localization and mapping with mobile robots or in object modeling pipelines. A coarse alignment of the point clouds is generally needed before applying local methods such as the Iterative Closest Point (ICP) algorithm. We propose a feature-based approach to point cloud registration and evaluate the proposed method and its individual components on challenging real-world datasets. For a moderate overlap between the laser scans, the method provides a superior registration accuracy compared to state-of-the-art methods including Generalized ICP, 3D Normal-Distribution Transform, Fast Point-Feature Histograms, and 4-Points Congruent Sets. Compared to the surface normals, the points as the underlying features yield higher performance in both keypoint detection and establishing local reference frames. Moreover, sign disambiguation of the basis vectors proves to be an important aspect in creating repeatable local reference frames. A novel method for sign disambiguation is proposed which yields highly repeatable reference frames.

  20. On the performance of metrics to predict quality in point cloud representations

    NASA Astrophysics Data System (ADS)

    Alexiou, Evangelos; Ebrahimi, Touradj

    2017-09-01

    Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.

  1. Semantic Segmentation of Building Elements Using Point Cloud Hashing

    NASA Astrophysics Data System (ADS)

    Chizhova, M.; Gurianov, A.; Hess, M.; Luhmann, T.; Brunn, A.; Stilla, U.

    2018-05-01

    For the interpretation of point clouds, the semantic definition of extracted segments from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be quite well and simply classified by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept etc.), including particular building parts which are visually detected. The key part of the procedure is a novel method based on hashing, where point cloud projections are transformed into binary pixel representations. A segmentation approach realised on the example of classical Orthodox churches is suitable for other buildings and objects characterized by a particular typology in their construction (e.g. industrial objects in standardized environments with strict component design allowing clear semantic modelling).
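The hashing idea can be illustrated minimally: project the point cloud onto a plane, rasterise the projection into a small binary occupancy image, and pack that image into a single integer that can index a catalogue of typologies. The resolution, the XY projection plane, and the packing scheme are assumed choices, not the paper's specific method.

```python
# Hash a point cloud via a binary pixel representation of its 2D projection.

def binary_projection_hash(points, res=8):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    w = max(max(xs) - x0, 1e-9)        # guard against degenerate extents
    h = max(max(ys) - y0, 1e-9)
    bits = [[0] * res for _ in range(res)]
    for x, y, _z in points:            # orthographic projection along Z
        j = min(int((x - x0) / w * res), res - 1)
        i = min(int((y - y0) / h * res), res - 1)
        bits[i][j] = 1                 # binary occupancy image
    # pack the res x res binary image row-major into one integer hash
    return sum(b << k for k, b in enumerate(v for row in bits for v in row))

cube = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)]
print(binary_projection_hash(cube) == binary_projection_hash(cube[::-1]))  # True
```

The hash is invariant to point order and to the point density inside occupied cells, which is what makes such binary representations useful as compact shape signatures.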

  2. Comparison of Uas-Based Photogrammetry Software for 3d Point Cloud Generation: a Survey Over a Historical Site

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2017-11-01

    Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topography mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate a high density point cloud as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are next processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.

  3. Multiview point clouds denoising based on interference elimination

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu

    2018-03-01

    Newly emerging low-cost depth sensors offer huge potentials for three-dimensional (3-D) modeling, but existing high noise restricts these sensors from obtaining accurate results. Thus, we propose a method for denoising registered multiview point clouds with high noise. The proposed method is aimed at fully using redundant information to eliminate the interferences among point clouds of different views based on an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted average targets in accordance with two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested in experiments qualitatively and quantitatively. Results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to truncated signed distance function and moving least squares (MLS). Moreover, the resulting low-noise point clouds can be further smoothed by the MLS to achieve improved results. This study demonstrates the feasibility of obtaining fine 3-D models with high-noise devices, especially for depth sensors, such as Kinect.

  4. Feature-based three-dimensional registration for repetitive geometry in machine vision

    PubMed Central

    Gong, Yuanzheng; Seibel, Eric J.

    2016-01-01

    As an important step in three-dimensional (3D) machine vision, 3D registration is a process of aligning two or multiple 3D point clouds that are collected from different perspectives together into a complete one. The most popular approach to register point clouds is to minimize the difference between these point clouds iteratively by the Iterative Closest Point (ICP) algorithm. However, ICP does not work well for repetitive geometries. To solve this problem, a feature-based 3D registration algorithm is proposed to align the point clouds that are generated by vision-based 3D reconstruction. By utilizing texture information of the object and the robustness of image features, 3D correspondences can be retrieved so that the 3D registration of two point clouds reduces to solving for a rigid transformation. The comparison of our method and different ICP algorithms demonstrates that our proposed algorithm is more accurate, efficient and robust for repetitive geometry registration. Moreover, this method can also be used to solve the high depth uncertainty problem caused by a small camera baseline in vision-based 3D reconstruction. PMID:28286703
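Once feature matches give 3D point correspondences, the registration reduces to a rigid transform with a standard closed-form solution (the Kabsch/Horn SVD construction shown here is a common choice, not necessarily the authors' exact solver):

```python
# Recover the rigid transform (R, t) with R @ P_i + t ~= Q_i from
# corresponding point sets P and Q.
import numpy as np

def rigid_transform(P, Q):
    cp, cq = P.mean(axis=0), Q.mean(axis=0)     # centroids
    H = (P - cp).T @ (Q - cq)                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate and translate a small cloud, then recover the motion.
rng = np.random.default_rng(0)
P = rng.random((10, 3))
angle = 0.3
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(P, Q)
print(np.allclose(R, Rz), np.allclose(t, [0.5, -0.2, 1.0]))  # True True
```

Because the solution is closed-form, it needs no initial alignment, which is exactly why feature correspondences sidestep ICP's sensitivity to repetitive geometry.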

  5. Advanced Optical Diagnostics for Ice Crystal Cloud Measurements in the NASA Glenn Propulsion Systems Laboratory

    NASA Technical Reports Server (NTRS)

    Bencic, Timothy J.; Fagan, Amy; Van Zante, Judith F.; Kirkegaard, Jonathan P.; Rohler, David P.; Maniyedath, Arjun; Izen, Steven H.

    2013-01-01

    A light extinction tomography technique has been developed to monitor ice water clouds upstream of a direct connected engine in the Propulsion Systems Laboratory (PSL) at NASA Glenn Research Center (GRC). The system consists of 60 laser diodes with sheet generating optics and 120 detectors mounted around a 36-inch diameter ring. The sources are pulsed sequentially while the detectors acquire line-of-sight extinction data for each laser pulse. Using computed tomography algorithms, the extinction data are analyzed to produce a plot of the relative water content in the measurement plane. To target the low-spatial-frequency nature of ice water clouds, unique tomography algorithms were developed using filtered back-projection methods and direct inversion methods that use Gaussian basis functions. With the availability of a priori knowledge of the mean droplet size and the total water content at some point in the measurement plane, the tomography system can provide near real-time in-situ quantitative full-field total water content data at a measurement plane approximately 5 feet upstream of the engine inlet. Results from ice crystal clouds in the PSL are presented. In addition to the optical tomography technique, laser sheet imaging has also been applied in the PSL to provide planar ice cloud uniformity and relative water content data during facility calibration before the tomography system was available and also as validation data for the tomography system. A comparison between the laser sheet system and light extinction tomography resulting data are also presented. Very good agreement of imaged intensity and water content is demonstrated for both techniques. Also, comparative studies between the two techniques show excellent agreement in calculation of bulk total water content averaged over the center of the pipe.

  6. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performances of the proposed method are evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)
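A minimal flavour of the hybrid region-growing stage can be sketched by merging neighbouring units whose normals are nearly parallel. Units here are single points with precomputed normals; supervoxels, the planarity features, and the global energy refinement are beyond this sketch, and the radius and angle threshold are assumed parameters.

```python
# Grow planar regions by flooding between nearby points with similar normals.
import math

def region_grow(points, normals, radius=1.5, max_angle_deg=10.0):
    cos_thr = math.cos(math.radians(max_angle_deg))
    labels = [-1] * len(points)
    next_label = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = next_label
        queue = [seed]
        while queue:
            i = queue.pop()
            for j in range(len(points)):
                if labels[j] != -1 or math.dist(points[i], points[j]) >= radius:
                    continue
                dot = sum(a * b for a, b in zip(normals[i], normals[j]))
                if abs(dot) > cos_thr:        # abs() tolerates flipped normals
                    labels[j] = next_label
                    queue.append(j)
        next_label += 1
    return labels

# A floor and a perpendicular wall meeting at a corner: expect two regions.
floor = [(x, y, 0.0) for x in range(5) for y in range(3)]
wall = [(x, 0.0, z) for x in range(5) for z in range(1, 4)]
pts = floor + wall
nrm = [(0, 0, 1)] * len(floor) + [(0, 1, 0)] * len(wall)
labels = region_grow(pts, nrm)
print(len(set(labels)))  # 2
```

The corner points of the two planes are within the merge radius of each other, so the normal-similarity test, not proximity alone, is what keeps the segments apart.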

  7. Modeling the topography of shallow braided rivers using Structure-from-Motion photogrammetry

    NASA Astrophysics Data System (ADS)

    Javernick, L.; Brasington, J.; Caruso, B.

    2014-05-01

    Recent advances in computer vision and image analysis have led to the development of a novel, fully automated photogrammetric method to generate dense 3d point cloud data. This approach, termed Structure-from-Motion or SfM, requires only limited ground-control and is ideally suited to imagery obtained from low-cost, non-metric cameras acquired either at close-range or using aerial platforms. Terrain models generated using SfM have begun to emerge recently and with a growing spectrum of software now available, there is an urgent need to provide a robust quality assessment of the data products generated using standard field and computational workflows. To address this demand, we present a detailed error analysis of sub-meter resolution terrain models of two contiguous reaches (1.6 and 1.7 km long) of the braided Ahuriri River, New Zealand, generated using SfM. A six stage methodology is described, involving: i) hand-held image acquisition from an aerial platform, ii) 3d point cloud extraction modeling using Agisoft PhotoScan, iii) georeferencing on a redundant network of GPS-surveyed ground-control points, iv) point cloud filtering to reduce computational demand as well as reduce vegetation noise, v) optical bathymetric modeling of inundated areas; and vi) data fusion and surface modeling to generate sub-meter raster terrain models. Bootstrapped geo-registration as well as extensive distributed GPS and sonar-based bathymetric check-data were used to quantify the quality of the models generated after each processing step. The results obtained provide the first quantified analysis of SfM applied to model the complex terrain of a braided river. Results indicate that geo-registration errors of 0.04 m (planar) and 0.10 m (elevation) and vertical surface errors of 0.10 m in non-vegetation areas can be achieved from a dataset of photographs taken at 600 m and 800 m above the ground level. 
These encouraging results suggest that this low-cost, logistically simple method can deliver high quality terrain datasets competitive with those obtained with significantly more expensive laser scanning, and suitable for geomorphic change detection and hydrodynamic modeling.

  8. Indoor Modelling from Slam-Based Laser Scanner: Door Detection to Envelope Reconstruction

    NASA Astrophysics Data System (ADS)

    Díaz-Vilariño, L.; Verbree, E.; Zlatanova, S.; Diakité, A.

    2017-09-01

    Updated and detailed indoor models are being increasingly demanded for various applications such as emergency management or navigational assistance. The consolidation of new portable and mobile acquisition systems has led to a higher availability of 3D point cloud data from indoors. In this work, we explore the combined use of point clouds and trajectories from a SLAM-based laser scanner to automate the reconstruction of building indoors. The methodology starts with door detection, since doors represent transitions from one indoor space to another, which constitutes an initial approach to the global configuration of the point cloud into building rooms. For this purpose, the trajectory is used to create a vertical point cloud profile in which doors are detected as local minima of vertical distances. As point cloud and trajectory are related by time stamp, this feature is used to subdivide the point cloud into subspaces according to the location of the doors. The correspondence between subspaces and building rooms is not unambiguous. One subspace always corresponds to one room, but one room is not necessarily depicted by just one subspace, for example, in the case of a room containing several doors and in which the acquisition is performed in a discontinuous way. The labelling problem is formulated as a combinatorial approach solved as a minimum energy optimization. Once the point cloud is subdivided into building rooms, the envelope (formed by walls, ceilings and floors) is reconstructed for each space. The connectivity between spaces is included by adding the previously detected doors to the reconstructed model. The methodology is tested in a real case study.
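The door-detection step has a simple core: along the trajectory, measure the vertical clearance above the scanner; doors show up as local minima of that profile. The profile values and the lintel-height threshold below are synthetic illustrations of the idea.

```python
# Detect doors as low local minima in a vertical-clearance profile sampled
# along the scanner trajectory.

def detect_doors(profile, max_door_height=2.2):
    """Return indices where clearance is a local minimum and low enough to be
    a door lintel rather than the ceiling."""
    doors = []
    for i in range(1, len(profile) - 1):
        if (profile[i] < profile[i - 1] and profile[i] <= profile[i + 1]
                and profile[i] < max_door_height):
            doors.append(i)
    return doors

# Ceiling at ~2.8 m with two door passages at ~2.0-2.1 m clearance.
profile = [2.8, 2.8, 2.7, 2.0, 2.7, 2.8, 2.8, 2.1, 2.8, 2.8]
print(detect_doors(profile))  # [3, 7]
```

Because trajectory samples carry time stamps shared with the point cloud, each detected index also marks where the cloud can be cut into the subspaces described above.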

  9. a Point Cloud Classification Approach Based on Vertical Structures of Ground Objects

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Hu, Q.; Hu, W.

    2018-04-01

    This paper proposes a novel method for point cloud classification using vertical structural characteristics of ground objects. Since urbanization develops rapidly nowadays, urban ground objects also change frequently. Conventional photogrammetric methods cannot satisfy the requirements of updating the ground objects' information efficiently, so LiDAR (Light Detection and Ranging) technology is employed to accomplish this task. LiDAR data, namely point cloud data, can provide detailed three-dimensional coordinates of ground objects, but this kind of data is discrete and unorganized. To accomplish ground object classification with point clouds, we first construct horizontal grids and vertical layers to organize the point cloud data, and then calculate vertical characteristics, including density and measures of dispersion, and form characteristic curves for each grid. With the help of PCA processing and the K-means algorithm, we analyze the similarities and differences of the characteristic curves. Curves that have similar features will be classified into the same class, and the point clouds corresponding to these curves will be classified as well. The whole process is simple but effective, and this approach does not need the assistance of other data sources. In this study, point cloud data are classified into three classes, which are vegetation, buildings, and roads. When horizontal grid spacing and vertical layer spacing are 3 m and 1 m respectively, the vertical characteristic is set as density, and the number of dimensions after PCA processing is 11, the overall precision of the classification result is about 86.31 %. The result can help us quickly understand the distribution of various ground objects.
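The characteristic-curve step can be sketched directly: bin points into horizontal grid cells and vertical layers, and use the normalised per-layer point density as each cell's vertical signature. The grid and layer spacings follow the abstract (3 m, 1 m); the PCA + K-means clustering of the curves is omitted, and the toy points are illustrative.

```python
# Per-cell vertical density curves from a 3D point list.
from collections import defaultdict

def vertical_signatures(points, cell=3.0, layer=1.0, n_layers=10):
    """points: (x, y, z) tuples -> {cell index: density-per-layer list}."""
    counts = defaultdict(lambda: [0] * n_layers)
    for x, y, z in points:
        k = min(int(z / layer), n_layers - 1)          # vertical layer index
        counts[(int(x / cell), int(y / cell))][k] += 1
    # normalise each curve so cells with different point counts compare fairly
    return {c: [v / max(sum(h), 1) for v in h] for c, h in counts.items()}

# A "building" cell has points spread over many layers; a "road" cell only
# near the ground.
pts = [(1, 1, z / 2) for z in range(20)] + [(4, 4, 0.1)] * 10
sig = vertical_signatures(pts)
print(sig[(1, 1)][0], sig[(0, 0)][0])
```

Curves like these are exactly what the paper feeds into PCA and K-means: a road cell concentrates all its mass in the lowest layer, while a building cell spreads it across many.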

  10. A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising

    NASA Astrophysics Data System (ADS)

    Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua

    2018-04-01

    In order to remove the noise of three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. Firstly, large-scale outliers are removed using statistics of the neighboring points within radius r. Then, the algorithm estimates the curvature of the point cloud data by using a conicoid parabolic fitting method and calculates the curvature feature value. Finally, the proposed clustering algorithm is applied to calculate the weighted cluster centers, which are regarded as the new points. The experimental results show that this approach is effective for noise of different scales and intensities in point clouds, achieves high precision, and preserves features at the same time. It is also robust to different noise models.
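A generic weighted fuzzy c-means iteration gives the flavour of the clustering stage: each point carries a weight (standing in here for the paper's curvature feature value) that biases the cluster centres. The initialisation, the curvature estimation itself, and the outlier pre-filter are simplified away.

```python
# Weighted fuzzy c-means: membership update + weighted centre update.
import numpy as np

def weighted_fcm(X, w, init, m=2.0, iters=50):
    """X: (n, d) points, w: (n,) per-point weights, init: seed indices."""
    centers = X[list(init)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)   # fuzzy memberships in [0, 1]
        um = (u ** m) * w[:, None]                 # the per-point weight enters here
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u

# Two noisy 2D blobs around (0, 0) and (3, 3), uniform weights.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(3, 0.1, (30, 2))])
w = np.ones(len(X))
centers, u = weighted_fcm(X, w, init=(0, len(X) - 1))
print(np.sort(centers[:, 0]))  # first coordinates close to 0 and 3
```

Raising the weight of high-curvature points pulls the centres (the "new points" of the abstract) toward sharp features, which is the mechanism behind the feature-preserving behaviour.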

  11. Evaluation of Methods for Coregistration and Fusion of Rpas-Based 3d Point Clouds and Thermal Infrared Images

    NASA Astrophysics Data System (ADS)

    Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.

    2016-06-01

    This paper discusses the automatic coregistration and fusion of 3d point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from an RPAS platform with a predefined flight path where every RGB image has a corresponding TIR image taken from the same position and with the same orientation with respect to the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method implies a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.

  12. Delineating Beach and Dune Morphology from Massive Terrestrial Laser Scanning Data Using the Generic Mapping Tools

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Wang, G.; Yan, B.; Kearns, T.

    2016-12-01

    Terrestrial laser scanning (TLS) techniques have been proven to be efficient tools to collect three-dimensional high-density and high-accuracy point clouds for coastal research and resource management. However, the processing and presenting of massive TLS data is always a challenge for research when targeting a large area with high resolution. This article introduces a workflow using shell-scripting techniques to chain together tools from the Generic Mapping Tools (GMT), Geographic Resources Analysis Support System (GRASS), and other command-based open-source utilities for automating TLS data processing. TLS point clouds acquired in the beach and dune area near Freeport, Texas in May 2015 were used for the case study. Shell scripts for rotating the coordinate system, removing anomalous points, assessing data quality, generating high-accuracy bare-earth DEMs, and quantifying beach and sand dune features (shoreline, cross-dune section, dune ridge, toe, and volume) are presented in this article. According to this investigation, the accuracy of the laser measurements (distance from the scanner to the targets) is within a couple of centimeters. However, the positional accuracy of TLS points with respect to a global coordinate system is about 5 cm, which is dominated by the accuracy of GPS solutions for obtaining the positions of the scanner and reflector. The accuracy of the TLS-derived bare-earth DEM is primarily determined by the size of grid cells and the roughness of the terrain surface for the case study. A DEM with grid cells of 4 m x 1 m (shoreline by cross-shore) provides a suitable spatial resolution and accuracy for deriving major beach and dune features.

  13. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser scanning (TLS) is becoming a common tool in Geosciences, with clear applications ranging from the generation of high resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, different critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied in point cloud data treatment, from alignment to monitoring. To this end, we built, in a MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of errors related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and of a varying point of view on the Iterative Closest Point (ICP) alignment, and also on a deformation tracking algorithm with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes in different environments (erosion, landslide monitoring, etc.), and we then tested the use of filtering techniques based on 3D moving windows along space and time, which considerably reduce data scattering thanks to data redundancy. In conclusion, the simulator allowed us to improve our different algorithms and to understand how instrumental error affects the final results. It also allowed us to improve the scan acquisition methodology, finding the best compromise between point density, positioning and acquisition time for the best possible accuracy in characterizing topographic change.
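The single-pulse error model from the simulator's first step can be sketched as range noise that grows with distance and with incidence angle. The functional form and the coefficients below are illustrative assumptions, not calibrated instrument values.

```python
# Simulate one noisy laser range measurement with range- and
# incidence-angle-dependent noise.
import math, random

rng = random.Random(0)

def simulate_range(true_range, incidence_deg, sigma0=0.005, k_range=1e-5,
                   k_angle=0.01):
    sigma = (sigma0
             + k_range * true_range                            # range-dependent term
             + k_angle * math.tan(math.radians(incidence_deg)))  # grazing angles hurt
    return rng.gauss(true_range, sigma)

# Noise at a grazing 80 deg incidence is much larger than at normal incidence.
r = 50.0
err_normal = [abs(simulate_range(r, 0) - r) for _ in range(500)]
err_grazing = [abs(simulate_range(r, 80) - r) for _ in range(500)]
print(sum(err_grazing) > sum(err_normal))  # True
```

Feeding such per-pulse noise into a scan-pattern generator is what lets the simulator test how alignment and change-detection algorithms degrade under realistic instrumental error.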

  14. Mapping Urban Tree Canopy Cover Using Fused Airborne LIDAR and Satellite Imagery Data

    NASA Astrophysics Data System (ADS)

    Parmehr, Ebadat G.; Amati, Marco; Fraser, Clive S.

    2016-06-01

    Urban green spaces, particularly urban trees, play a key role in enhancing the liveability of cities. The availability of accurate and up-to-date maps of tree canopy cover is important for sustainable development of urban green spaces. LiDAR point clouds are widely used for the mapping of buildings and trees, and several LiDAR point cloud classification techniques have been proposed for automatic mapping. However, the effectiveness of point cloud classification techniques for automated tree extraction from LiDAR data can be impacted to the point of failure by the complexity of tree canopy shapes in urban areas. Multispectral imagery, which provides complementary information to LiDAR data, can improve point cloud classification quality. This paper proposes a reliable method for the extraction of tree canopy cover from fused LiDAR point cloud and multispectral satellite imagery data. The proposed method initially associates each LiDAR point with spectral information from the co-registered satellite imagery data. It calculates the normalised difference vegetation index (NDVI) value for each LiDAR point and corrects tree points which have been misclassified as buildings. Then, region growing of tree points, taking the NDVI value into account, is applied. Finally, the LiDAR points classified as tree points are utilised to generate a canopy cover map. The performance of the proposed tree canopy cover mapping method is experimentally evaluated on a data set of airborne LiDAR and WorldView 2 imagery covering a suburb in Melbourne, Australia.
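The per-point NDVI step is simple to illustrate: each LiDAR point, once associated with red and near-infrared reflectance from the co-registered imagery, gets an NDVI value, and points above a threshold are kept as vegetation candidates. The band values and the 0.3 threshold are illustrative assumptions, not the paper's parameters.

```python
# NDVI = (NIR - red) / (NIR + red) per LiDAR point, then threshold.

def ndvi(nir, red):
    return (nir - red) / (nir + red) if (nir + red) else 0.0

points = [
    {"xyz": (1.0, 2.0, 8.5), "nir": 0.55, "red": 0.08},  # tree canopy
    {"xyz": (4.0, 2.0, 8.3), "nir": 0.30, "red": 0.28},  # building roof
]
vegetation = [p for p in points if ndvi(p["nir"], p["red"]) > 0.3]
print(len(vegetation))  # 1
```

Both sample points sit at similar heights, which is the failure mode of geometry-only classification that the spectral term is meant to resolve before the NDVI-aware region growing.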

  15. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations.

    PubMed

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-04-11

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10-0.20 m, and vertical accuracy was approximately 0.01-0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed.
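The core of the point-to-pixel step can be sketched with an ideal equirectangular camera model: a 3D point expressed in the panoramic camera's frame maps to a pixel through its azimuth and elevation. This is a simplification; the paper's collinear function also involves the measured offsets among the GPS antenna, scanners, and camera, and the image size is an assumed value.

```python
# Map a 3D point in the panoramic camera frame to an equirectangular pixel.
import math

def point_to_panorama_pixel(x, y, z, width=4000, height=2000):
    azimuth = math.atan2(y, x)                                   # -pi..pi around the vehicle
    elevation = math.asin(z / math.sqrt(x * x + y * y + z * z))  # -pi/2..pi/2
    col = (azimuth + math.pi) / (2 * math.pi) * width
    row = (math.pi / 2 - elevation) / math.pi * height
    return int(col) % width, int(row)

# A point straight along +x at camera height lands on the horizon row.
print(point_to_panorama_pixel(10.0, 0.0, 0.0))  # (2000, 1000)
```

Restricting this mapping to points inside the pre-segmented blocks, as the paper's search strategy does, is what keeps the per-pixel lookup tractable for full scans.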

  16. An Integrated Photogrammetric and Photoclinometric Approach for Pixel-Resolution 3D Modelling of the Lunar Surface

    NASA Astrophysics Data System (ADS)

    Liu, W. C.; Wu, B.

    2018-04-01

    High-resolution 3D modelling of the lunar surface is important for lunar scientific research and exploration missions. Photogrammetry is known for 3D mapping and modelling from a pair of stereo images based on dense image matching. However, dense matching may fail in poorly textured areas and in situations where the image pair has large illumination differences. As a result, the actual achievable spatial resolution of the 3D model from photogrammetry is limited by the performance of dense image matching. On the other hand, photoclinometry (i.e., shape from shading) is characterised by its ability to recover pixel-wise surface shapes based on image intensity and imaging conditions such as illumination and viewing directions. More robust shape reconstruction through photoclinometry can be achieved by incorporating images acquired under different illumination conditions (i.e., photometric stereo). Introducing photoclinometry into photogrammetric processing can therefore effectively increase the achievable resolution of the mapping result while maintaining its overall accuracy. This research presents an integrated photogrammetric and photoclinometric approach for pixel-resolution 3D modelling of the lunar surface. First, photoclinometry is integrated with stereo image matching to create robust and spatially well-distributed dense conjugate points. Then, based on the 3D point cloud derived from photogrammetric processing of the dense conjugate points, photoclinometry is further introduced to derive the 3D positions of the unmatched points and to refine the final point cloud. The approach is able to produce one 3D point for each image pixel within the overlapping area of the stereo pair, so as to obtain pixel-resolution 3D models. Experiments using Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) images show the superior performance of the approach compared with traditional photogrammetric techniques.
The results and findings from this research contribute to optimal exploitation of image information for high-resolution 3D modelling of the lunar surface, which is of significance for the advancement of lunar and planetary mapping.
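
    Photoclinometry links image intensity to surface orientation through a reflectance model. As a toy illustration only, a Lambertian surface is assumed below; actual lunar photoclinometry typically uses lunar-Lambert or Hapke reflectance models rather than this simplification:

```python
import math

def lambertian_intensity(normal, sun_dir, albedo=1.0):
    """Predicted image intensity of a surface facet under a Lambertian
    reflectance model: albedo times the cosine of the incidence angle
    (clamped at zero for facets in shadow)."""
    dot = sum(n * s for n, s in zip(normal, sun_dir))
    return albedo * max(0.0, dot)

# A facet tilted away from the sun is darker than a flat one,
# which is the intensity cue photoclinometry inverts to recover slope.
flat = (0.0, 0.0, 1.0)
sun = (0.0, math.sin(math.radians(30)), math.cos(math.radians(30)))
tilted = (0.0, -math.sin(math.radians(20)), math.cos(math.radians(20)))
print(lambertian_intensity(flat, sun) > lambertian_intensity(tilted, sun))  # True
```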

  17. Coupled retrieval of water cloud and above-cloud aerosol properties using the Airborne Multiangle SpectroPolarimetric Imager (AirMSPI)

    NASA Astrophysics Data System (ADS)

    Xu, F.; van Harten, G.; Diner, D. J.; Rheingans, B. E.; Tosca, M.; Seidel, F. C.; Bull, M. A.; Tkatcheva, I. N.; McDuffie, J. L.; Garay, M. J.; Davis, A. B.; Jovanovic, V. M.; Brian, C.; Alexandrov, M. D.; Hostetler, C. A.; Ferrare, R. A.; Burton, S. P.

    2017-12-01

    The Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) has been flying aboard the NASA ER-2 high altitude aircraft since October 2010. AirMSPI acquires radiance and polarization data in bands centered at 355, 380, 445, 470*, 555, 660*, 865*, and 935 nm (*denotes polarimetric bands). In sweep mode, georectified images cover an area of 80-100 km (along track) by 10-25 km (across track) between ±66° off nadir, with a map-projected spatial resolution of 25 meters. An efficient and flexible retrieval algorithm has been developed using AirMSPI polarimetric bands for simultaneous retrieval of cloud and above-cloud aerosol microphysical properties. We design a three-step retrieval approach, namely 1) estimating the effective droplet size distribution using polarimetric cloudbow observations and using it as the initial guess for Step 2; 2) combining water cloud and above-cloud aerosol retrieval by fitting polarimetric signals at all scattering angles (e.g. from 80° to 180°); and 3) constructing a lookup table of radiance for a set of cloud optical depth grids using aerosol and cloud information retrieved from Step 2 and then estimating pixel-scale cloud optical depth based on 1D radiative transfer (RT) theory by fitting the AirMSPI radiance. Retrieval uncertainty is formulated by accounting for instrumental errors and constraints imposed on spectral variations of aerosol and cloud droplet optical properties. As the forward RT model, a hybrid approach is developed to combine the computational strengths of Markov-chain and adding-doubling methods to model polarized RT in a coupled aerosol, Rayleigh and cloud system. Our retrieval approach is tested using 134 AirMSPI datasets acquired during the NASA ORACLES field campaign in September 2016, with low to high aerosol loadings.
For validation, the retrieved aerosol optical depths and cloud-top heights are compared to coincident High Spectral Resolution Lidar-2 (HSRL-2) data, and the droplet size parameters including effective radius and effective variance and cloud optical thickness are compared to coincident Research Scanning Polarimeter (RSP) data.
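
    Step 3 of the retrieval builds a lookup table of modelled radiance over a grid of cloud optical depths and inverts it per pixel. A one-dimensional sketch of that inversion by linear interpolation is shown below; the numbers are hypothetical, and the real forward model is the coupled Markov-chain / adding-doubling RT code, not this toy table:

```python
def invert_lut(taus, radiances, observed):
    """Invert a monotonic radiance-vs-optical-depth lookup table by
    piecewise linear interpolation to estimate cloud optical depth."""
    for i in range(len(taus) - 1):
        r0, r1 = radiances[i], radiances[i + 1]
        if min(r0, r1) <= observed <= max(r0, r1):
            w = (observed - r0) / (r1 - r0)
            return taus[i] + w * (taus[i + 1] - taus[i])
    raise ValueError("observed radiance outside the lookup table")

taus = [1.0, 5.0, 10.0, 20.0]
radiances = [0.10, 0.30, 0.45, 0.55]   # hypothetical forward-model output
print(invert_lut(taus, radiances, 0.375))  # ≈ 7.5
```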

  18. Infrared Extinction and the Initial Conditions For Star and Planet Formation

    NASA Technical Reports Server (NTRS)

    Lada, Charles J.

    2003-01-01

    This grant funds a research program to use infrared extinction measurements to probe the detailed structure of dark molecular clouds and investigate the physical conditions which give rise to star and planet formation. The goals of this program are to: 1) acquire deep infrared and molecular-line observations of a carefully selected sample of nearby dark clouds, 2) reduce and analyze the data obtained in order to produce detailed extinction maps of the clouds, 3) prepare results, where appropriate, for publication.

  19. Infrared Extinction and the Initial Conditions for Star and Planet Formation

    NASA Technical Reports Server (NTRS)

    Lada, Charles J.

    2002-01-01

    This grant funds a research program to use infrared extinction measurements to probe the detailed structure of dark molecular clouds and investigate the physical conditions which give rise to star and planet formation. The goals of this program are to: (1) acquire deep infrared and molecular-line observations of a carefully selected sample of nearby dark clouds; (2) reduce and analyze the data obtained in order to produce detailed extinction maps of the clouds; and (3) prepare results, where appropriate, for publication.

  20. Advanced Visualization and Interactive Display Rapid Innovation and Discovery Evaluation Research (VISRIDER) Program Task 6: Point Cloud Visualization Techniques for Desktop and Web Platforms

    DTIC Science & Technology

    2017-04-01

    Report period: October 2013 - September 2014. This task investigated various point cloud visualization techniques for viewing large-scale LiDAR datasets and evaluated their potential use on thick-client desktop and web platforms.

  1. Inventory of File WAFS_blended_2014102006f06.grib2

    Science.gov Websites

    Inventory excerpt (CTP 6-hour forecast, In-Cloud Turbulence [%]):
    004 700 mb spatial ave, code table 4.15=3, #points=1
    005 700 mb spatial max, code table 4.15=3, #points=1
    006 600 mb spatial ave, code table 4.15=3, #points=1
    007 600 mb CTP 6 hour fcst In

  2. Cloud Imagers Offer New Details on Earth's Health

    NASA Technical Reports Server (NTRS)

    2009-01-01

    A stunning red sunset or purple sunrise is an aesthetic treat with a scientific explanation: The colors are a direct result of the absorption or reflectance of solar radiation by atmospheric aerosols, minute particles (either solid or liquid) in the Earth's atmosphere that occur both naturally and because of human activity. At the beginning or end of the day, the Sun's rays travel farther through the atmosphere to reach an observer's eyes and more green and yellow light is scattered, making the Sun appear red. Sunset and sunrise are especially colorful when the concentration of atmospheric particles is high. This ability of aerosols to absorb and reflect sunlight is not just pretty; it also determines the amount of radiation and heat that reaches the Earth's surface, and can profoundly affect climate. In the atmosphere, aerosols are also important as nuclei for the condensation of water droplets and ice crystals. Clouds with fewer aerosols cannot form as many water droplets (called cloud particles), and consequently, do not scatter light well. In this case, more sunlight reaches the Earth's surface. When aerosol levels in clouds are high, however, more nucleation points can form small liquid water droplets. These smaller cloud particles can reflect up to 90 percent of visible radiation to space, keeping the heat from ever reaching Earth's surface. The tendency for these particles to absorb or reflect the Sun's energy - called extinction by astronomers - depends on a number of factors, including chemical composition and the humidity and temperature in the surrounding air; because cloud particles are so small, they are affected quickly by minute changes in the atmosphere. Because of this sensitivity, atmospheric scientists study cloud particles to anticipate patterns and shifts in climate.
Until recently, NASA's study of atmospheric aerosols and cloud particles has been focused primarily on satellite images, which, while granting large-scale atmospheric analysis, limited scientists' ability to acquire detailed information about individual particles. Now, experiments with specialized equipment can be flown on standard jets, making it possible for researchers to monitor and more accurately anticipate changes in Earth's atmosphere and weather patterns.

  3. Observations of the boundary layer, cloud, and aerosol variability in the southeast Pacific near-coastal marine stratocumulus during VOCALS-REx

    NASA Astrophysics Data System (ADS)

    Zheng, X.; Albrecht, B.; Jonsson, H. H.; Khelif, D.; Feingold, G.; Minnis, P.; Ayers, K.; Chuang, P.; Donaher, S.; Rossiter, D.; Ghate, V.; Ruiz-Plancarte, J.; Sun-Mack, S.

    2011-09-01

    Aircraft observations made off the coast of northern Chile in the Southeastern Pacific (20° S, 72° W; named Point Alpha) from 16 October to 13 November 2008 during the VAMOS Ocean-Cloud-Atmosphere-Land Study-Regional Experiment (VOCALS-REx), combined with meteorological reanalysis, satellite measurements, and radiosonde data, are used to investigate the boundary layer (BL) and aerosol-cloud-drizzle variations in this region. On days without predominant synoptic and mesoscale influences, the BL at Point Alpha was typical of a non-drizzling stratocumulus-topped BL. Entrainment rates calculated from the near cloud-top fluxes and turbulence in the BL at Point Alpha appeared to be weaker than those in the BL over the open ocean west of Point Alpha and the BL near the coast of the northeast Pacific. The cloud liquid water path (LWP) varied between 15 g m-2 and 160 g m-2. The BL had a depth of 1140 ± 120 m and, in the absence of predominant synoptic and mesoscale influences, was generally well-mixed and capped by a sharp inversion. The wind direction generally switched from southerly within the BL to northerly above the inversion. On days when a synoptic system and related mesoscale coastal circulations affected conditions at Point Alpha (29 October-4 November), a moist layer above the inversion moved over Point Alpha, and the total-water mixing ratio above the inversion was larger than that within the BL. The accumulation mode aerosol varied from 250 to 700 cm-3 within the BL, and CCN at 0.2 % supersaturation within the BL ranged between 150 and 550 cm-3. The main aerosol source at Point Alpha was horizontal advection within the BL from the south. The average cloud droplet number concentration ranged between 80 and 400 cm-3. While the mean LWP retrieved from GOES was in good agreement with the in situ measurements, the GOES-derived cloud droplet effective radius tended to be larger than that from the aircraft in situ observations near cloud top.
The aerosol and cloud LWP relationship reveals that during the typical well-mixed BL days the cloud LWP increased with the CCN concentrations. On the other hand, meteorological factors and the decoupling processes have large influences on the cloud LWP variation as well.

  4. Assessment of a Static Multibeam Sonar Scanner for 3d Surveying in Confined Subaquatic Environments

    NASA Astrophysics Data System (ADS)

    Moisan, E.; Charbonnier, P.; Foucher, P.; Grussenmeyer, P.; Guillemin, S.; Samat, O.; Pagès, C.

    2016-06-01

    Mechanical Scanning Sonar (MSS) is a promising technology for surveying underwater environments. Such devices comprise a multibeam echosounder attached to a pan & tilt positioner, which allows sweeping the scene in a similar way to Terrestrial Laser Scanners (TLS). In this paper, we report on the experimental assessment of a recent MSS, namely the BlueView BV5000, in a confined environment: lock number 50 on the Marne-Rhin canal (France). To this aim, we hung the system upside-down to scan the lock chamber from the surface, which allowed surveying the scanning positions, up to a horizontal orientation. We propose a geometric method to estimate the remaining angle and register the scans in a coordinate system attached to the site. After reviewing the different errors that impair sonar data, we compare the resulting point cloud to a TLS model that was acquired the day before, while the lock was completely empty for maintenance. While the results exhibit a bias that can be partly explained by an imperfect setup, the maximum difference is less than 15 cm, and the standard deviation is about 3.5 cm. Visual inspection shows that coarse defects of the masonry, such as missing stones or cavities, can be detected in the MSS point cloud, while smaller details, e.g. damaged joints, are harder to notice.

  5. 3D matching techniques using OCT fingerprint point clouds

    NASA Astrophysics Data System (ADS)

    Gutierrez da Costa, Henrique S.; Silva, Luciano; Bellon, Olga R. P.; Bowden, Audrey K.; Czovny, Raphael K.

    2017-02-01

    Optical Coherence Tomography (OCT) makes viable the acquisition of 3D fingerprints from both dermis and epidermis skin layers and their interfaces, exposing features that can be explored to improve biometric identification, such as curvatures and distinctive 3D regions. Scanned images from eleven volunteers allowed the construction of the first OCT 3D fingerprint database, to our knowledge, containing epidermal and dermal fingerprints. 3D dermal fingerprints can be used to overcome cases of Failure to Enroll (FTE) due to poor ridge image quality and skin alterations, cases that affect 2D matching performance. We evaluate three matching techniques, including the well-established Iterative Closest Point algorithm (ICP), the Surface Interpenetration Measure (SIM) and the well-known KH Curvature Maps, all assessed using a 3D OCT fingerprint database, the first one for this purpose. Two of these techniques are based on registration and one on curvatures. These were evaluated, compared, and the fusion of matching scores assessed. We applied a sequence of steps to extract regions of interest (ROIs) named minutiae clouds, representing small regions around distinctive minutiae, usually located at ridge/valley endings or bifurcations. The ROIs are acquired from the epidermis and the dermis-epidermis interface by OCT imaging. A comparative analysis of identification accuracy was explored using different scenarios, and the results obtained show improvements for biometric identification. A comparison against 2D fingerprint matching algorithms is also presented to assess the improvements.

  6. Earth Limb taken by the Expedition 17 Crew

    NASA Image and Video Library

    2008-07-22

    ISS017-E-011603 (22 July 2008) --- Layers of Earth's atmosphere, brightly colored as the sun rises over central Asia, and Polar Mesospheric Clouds (also known as noctilucent clouds) are featured in this image photographed by an Expedition 17 crewmember on the International Space Station. The image was acquired in support of International Polar Year research.

  7. Study into Point Cloud Geometric Rigidity and Accuracy of TLS-Based Identification of Geometric Bodies

    NASA Astrophysics Data System (ADS)

    Klapa, Przemyslaw; Mitka, Bartosz; Zygmunt, Mariusz

    2017-12-01

    Capability of obtaining a multimillion point cloud in a very short time has made Terrestrial Laser Scanning (TLS) a widely used tool in many fields of science and technology. The TLS accuracy matches traditional devices used in land surveying (tacheometry, GNSS - RTK), but like any measurement it is burdened with error which affects the precise identification of objects based on their image in the form of a point cloud. A point's coordinates are determined indirectly by means of measuring the angles and calculating the time of travel of the electromagnetic wave. Each such component has a measurement error which is translated into the final result. The XYZ coordinates of a measuring point are determined with some uncertainty, and the accuracy of determining these coordinates decreases as the distance to the instrument increases. The paper presents the results of an examination of the geometrical stability of a point cloud obtained by means of a terrestrial laser scanner and an accuracy evaluation of solids determined using the cloud. A Leica P40 scanner and two different settings of measuring points were used in the tests. The first concept involved placing a few balls in the field and then scanning them from various sides at similar distances. The second part of the measurement involved placing balls and scanning them a few times from one side but at varying distances from the instrument to the object. Each measurement encompassed a scan of the object with automatic determination of its position and geometry. The desk studies involved a semiautomatic fitting of solids and measurement of their geometrical elements, and comparison of parameters that determine their geometry and location in space. The differences in the measures of the geometrical elements of the balls and the translation vectors of the solids' centres indicate geometrical changes of the point cloud depending on the scanning distance and parameters.
The results indicate changes in the geometry of scanned objects depending on the point cloud quality and distance from the measuring instrument. Varying geometrical dimensions of the same element also suggest that the point cloud does not keep a stable geometry of measured objects.
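
    The semi-automatic fitting of solids (here, balls) to the point cloud can be illustrated with a standard algebraic least-squares sphere fit; this is a generic textbook method shown for illustration, not the software used in the study:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: solve the linear system
    2*c.x + 2*c.y + 2*c.z + (r^2 - |c|^2) = x^2 + y^2 + z^2."""
    pts = np.asarray(points, dtype=float)
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Noise-free points on a sphere of radius 2 centred at (1, 2, 3)
pts = [(3, 2, 3), (-1, 2, 3), (1, 4, 3), (1, 0, 3), (1, 2, 5), (1, 2, 1)]
center, radius = fit_sphere(pts)
# centre ≈ (1, 2, 3), radius ≈ 2
```

    Comparing the fitted centres and radii across scans from different distances is then a direct way to quantify the geometrical stability the paper examines.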

  8. Scientific Infrastructure To Support Manned And Unmanned Aircraft, Tethered Balloons, And Related Aerial Activities At Doe Arm Facilities On The North Slope Of Alaska

    NASA Astrophysics Data System (ADS)

    Ivey, M.; Dexheimer, D.; Hardesty, J.; Lucero, D. A.; Helsel, F.

    2015-12-01

    The U.S. Department of Energy (DOE), through its scientific user facility, the Atmospheric Radiation Measurement (ARM) facilities, provides scientific infrastructure and data to the international Arctic research community via its research sites located on the North Slope of Alaska. DOE has recently invested in improvements to facilities and infrastructure to support operations of unmanned aerial systems for science missions in the Arctic and North Slope of Alaska. A new ground facility, the Third ARM Mobile Facility, was installed at Oliktok Point, Alaska, in 2013. Tethered instrumented balloons were used to make measurements of clouds in the boundary layer, including mixed-phase clouds. A new Special Use Airspace was granted to DOE in 2015 to support science missions in international airspace in the Arctic. Warning Area W-220 is managed by Sandia National Laboratories for the DOE Office of Science/BER. W-220 was successfully used for the first time in July 2015 in conjunction with Restricted Area R-2204 and a connecting Altitude Reservation Corridor (ALTRV) to permit unmanned aircraft to operate north of Oliktok Point. Small unmanned aircraft (DataHawks) and tethered balloons were flown at Oliktok during the summer and fall of 2015. This poster will discuss how principal investigators may apply for use of these Special Use Airspaces, acquire data from the Third ARM Mobile Facility, or bring their own instrumentation for deployment at Oliktok Point, Alaska. The printed poster will include the standard DOE funding statement.

  9. Development of Kinematic 3D Laser Scanning System for Indoor Mapping and As-Built BIM Using Constrained SLAM

    PubMed Central

    Jung, Jaehoon; Yoon, Sanghyun; Ju, Sungha; Heo, Joon

    2015-01-01

    The growing interest in and use of indoor mapping is driving a demand for improved data-acquisition facilities, efficiency and productivity in the era of the Building Information Model (BIM). The conventional static laser scanning method suffers from some limitations on its operability in complex indoor environments, due to the presence of occlusions. Full scanning of indoor spaces without loss of information requires that surveyors change the scanner position many times, which incurs extra work for registration of each scanned point cloud. Alternatively, the kinematic 3D laser scanning system proposed herein uses a line-feature-based Simultaneous Localization and Mapping (SLAM) technique for continuous mapping. Moreover, to reduce the uncertainty of line-feature extraction, we incorporated a constrained adjustment based on an assumption made with respect to typical indoor environments: that the main structures are formed of parallel or orthogonal line features. The superiority of the proposed constrained adjustment lies in its reduction of the uncertainties of the adjusted lines, leading to a successful data-association process. In the present study, kinematic scanning with and without constrained adjustment was comparatively evaluated in two test sites, and the results confirmed the effectiveness of the proposed system. The accuracy of the 3D mapping result was additionally evaluated by comparison with reference points acquired by a total station: the Euclidean average distance error was 0.034 m for the seminar room and 0.043 m for the corridor, which satisfied the error tolerance for point cloud acquisition (0.051 m) according to the guidelines of the General Services Administration for BIM accuracy. PMID:26501292
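
    The parallel-or-orthogonal assumption behind the constrained adjustment can be made concrete by snapping extracted line directions to the building's dominant axis. This is only a sketch of the constraint: the tolerance is an assumption, and the paper applies the constraint inside a least-squares adjustment rather than by hard snapping:

```python
import math

def snap_direction(angle_deg, dominant_deg=0.0, tol_deg=10.0):
    """Snap a 2D line direction (degrees) to the nearest multiple of 90°
    from the dominant building axis, if it lies within a tolerance;
    oblique lines are left unconstrained."""
    rel = (angle_deg - dominant_deg) % 90.0
    if rel > 45.0:
        rel -= 90.0
    if abs(rel) <= tol_deg:
        return angle_deg - rel
    return angle_deg

print(snap_direction(2.5))   # 0.0  (nearly parallel wall)
print(snap_direction(88.0))  # 90.0 (nearly orthogonal wall)
print(snap_direction(47.0))  # 47.0 (oblique feature, unconstrained)
```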

  10. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments which are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images of both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras.
Relative distances between the vehicle cameras deviate by one to ten centimeters from tachymeter reference measurements.

  11. SatCam: A mobile application for coordinated ground/satellite observation of clouds and validation of satellite-derived cloud mask products.

    NASA Astrophysics Data System (ADS)

    Gumley, L.; Parker, D.; Flynn, B.; Holz, R.; Marais, W.

    2011-12-01

    SatCam is an application for iOS devices that allows users to collect observations of local cloud and surface conditions in coordination with an overpass of the Terra, Aqua, or NPP satellites. SatCam allows users to acquire images of sky conditions and ground conditions at their location anywhere in the world using the built-in iPhone or iPod Touch camera at the same time that the satellite is passing overhead and viewing their location. Immediately after the sky and ground observations are acquired, the application asks the user to rate the level of cloudiness in the sky (Completely Clear, Mostly Clear, Partly Cloudy, Overcast). For the ground observation, the user selects their assessment of the surface conditions (Urban, Green Vegetation, Brown Vegetation, Desert, Snow, Water). The sky condition and surface condition selections are stored along with the date, time, and geographic location for the images, and the images are uploaded to a central server. When the MODIS (Terra and Aqua) or VIIRS (NPP) imagery acquired over the user location becomes available, a MODIS or VIIRS true color image centered at the user's location is delivered back to the SatCam application on the user's iOS device. SSEC also proposes to develop a community driven SatCam website where users can share their observations and assessments of satellite cloud products in a collaborative environment. SSEC is developing a server side data analysis system to ingest the SatCam user observations, apply quality control, analyze the sky images for cloud cover, and collocate the observations with MODIS and VIIRS satellite products (e.g., cloud mask). For each observation that is collocated with a satellite observation, the server will determine whether the user scored a "hit", meaning their sky observation and sky assessment matched the automated cloud mask obtained from the satellite observation. The hit rate will be an objective assessment of the accuracy of the user's sky observations. 
Users with high hit rates will be identified automatically and their observations will be used globally to evaluate the performance of the MODIS cloud mask algorithm for Terra and Aqua and the VIIRS cloud mask algorithm for NPP. The user's assessment of the ground conditions will also be used to evaluate the cloud mask accuracy in selecting the correct surface type at the user's location, which is an important element in the decision path used internally by the cloud mask algorithm. This presentation will describe the SatCam application, how it is used, and show examples of SatCam observations.
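
    The server-side "hit" scoring can be sketched as a simple agreement rate between user sky assessments and the collocated satellite cloud mask. The two-class encoding below is a hypothetical simplification of SatCam's four sky categories, for illustration only:

```python
def hit_rate(user_obs, satellite_mask):
    """Fraction of collocated observations where the user's sky
    assessment agrees with the satellite cloud mask."""
    hits = sum(1 for u, s in zip(user_obs, satellite_mask) if u == s)
    return hits / len(user_obs)

# Four collocated observations; the user disagrees with the mask once
users = ["cloudy", "clear", "cloudy", "clear"]
mask  = ["cloudy", "clear", "clear",  "clear"]
print(hit_rate(users, mask))  # 0.75
```

    Users above a hit-rate threshold would then be flagged as reliable, and their observations used to evaluate the cloud mask algorithms globally.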

  12. Comparison of 3D point clouds obtained by photogrammetric UAVs and TLS to determine the attitude of dolerite outcrops discontinuities.

    NASA Astrophysics Data System (ADS)

    Duarte, João; Gonçalves, Gil; Duarte, Diogo; Figueiredo, Fernando; Mira, Maria

    2015-04-01

    Photogrammetric Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) are two emerging technologies that allow the production of dense 3D point clouds of the sensed topographic surfaces. Although image-based stereo-photogrammetric point clouds cannot, in general, compete on geometric quality with TLS point clouds, fully automated mapping solutions based on ultra-light UAVs (or drones) have recently become commercially available at very reasonable accuracy and cost for engineering and geological applications. The purpose of this paper is to compare the point clouds generated by these two technologies, in order to automate the manual tasks commonly used to detect and represent the attitude of discontinuities (stereographic projection: Schmidt net, equal area). This fundamental step in all geological/geotechnical studies applied to the extractive industry and engineering works has to be replaced by a more expeditious and reliable methodology, in order to avoid the difficulties of access and to guarantee safe data-survey conditions. This methodology will provide, in a clearer and more direct way, answers to the needs of rock-mass evaluation by mapping the structures present, which will considerably reduce the associated risks (investment, structure dimensioning, security, etc.). A case study of a dolerite outcrop located in the center of Portugal (the dolerite outcrop is situated in the volcanic complex of Serra de Todo-o-Mundo, Casais Gaiola, intruded in Jurassic sandstones) will be used to assess this methodology. The results obtained show that the 3D point cloud produced by the photogrammetric UAV platform has the appropriate geometric quality for extracting the parameters that define the discontinuities of the dolerite outcrops. Although comparable to the manually extracted parameters, their quality is inferior to that of parameters extracted from the TLS point cloud.
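
    Once a plane has been fitted to a discontinuity in either point cloud, its attitude follows from the plane normal by the standard structural-geology conversion. The sketch below assumes the common convention x = east, y = north, z = up; it illustrates the conversion only, not the paper's full extraction pipeline:

```python
import math

def attitude_from_normal(nx, ny, nz):
    """Dip and dip direction (degrees) of a discontinuity plane from its
    unit normal, assuming x = east, y = north, z = up."""
    if nz < 0:                      # force the normal to point upward
        nx, ny, nz = -nx, -ny, -nz
    dip = math.degrees(math.acos(nz))
    dip_dir = math.degrees(math.atan2(nx, ny)) % 360.0
    return dip, dip_dir

# A plane dipping 30 degrees towards the east (dip direction 090)
n = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
dip, dip_dir = attitude_from_normal(*n)
print(round(dip, 1), round(dip_dir, 1))  # 30.0 90.0
```

    The dip / dip-direction pairs can then be plotted on an equal-area Schmidt net, replacing the manual compass-clinometer workflow the paper seeks to automate.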

  13. Cloud-point detection using a portable thickness shear mode crystal resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansure, A.J.; Spates, J.J.; Germer, J.W.

    1997-08-01

    The Thickness Shear Mode (TSM) crystal resonator monitors the crude oil by propagating a shear wave into the oil. The coupling of the shear wave and the crystal vibrations is a function of the viscosity of the oil. By driving the crystal with circuitry that incorporates feedback, it is possible to determine the change from Newtonian to non-Newtonian viscosity at the cloud point. A portable prototype TSM Cloud Point Detector (CPD) has performed flawlessly during field and lab tests, proving the technique is less subjective or operator-dependent than the ASTM standard. The TSM CPD, in contrast to standard viscosity techniques, makes the measurement in a closed container capable of maintaining up to 100 psi. The closed container minimizes losses of low-molecular-weight volatiles, allowing samples (25 ml) to be retested with the addition of chemicals. By cycling/thermal soaking the sample, the effects of thermal history can be investigated and eliminated as a source of confusion. The CPD is portable, suitable for shipping to field offices for use by personnel without special training or experience in cloud point measurements. As such, it can make cloud point data available without the delays and inconvenience of sending samples to special labs. The crystal resonator technology can be adapted to in-line monitoring of cloud point and deposition detection.

  14. New NASA Images of Irma's Towering Clouds

    NASA Image and Video Library

    2017-09-08

    On Sept. 7, the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite passed over Hurricane Irma at approximately 11:20 a.m. local time. The MISR instrument comprises nine cameras that view the Earth at different angles, and since it takes roughly seven minutes for all nine cameras to capture the same location, the motion of the clouds between images allows scientists to calculate the wind speed at the cloud tops. The animated GIF shows Irma's motion over the seven minutes of the MISR imagery. North is toward the top of the image. This composite image shows Hurricane Irma as viewed by the central, downward-looking camera (left), as well as the wind speeds (right) superimposed on the image. The length of the arrows is proportional to the wind speed, while their color shows the altitude at which the winds were calculated. At the time the image was acquired, Irma's eye was located approximately 60 miles (100 kilometers) north of the Dominican Republic and 140 miles (230 kilometers) north of its capital, Santo Domingo. Irma was a powerful Category 5 hurricane, with wind speeds at the ocean surface up to 185 miles (300 kilometers) per hour, according to the National Oceanic and Atmospheric Administration. The MISR data show that at cloud top, winds near the eye wall (the most destructive part of the storm) were approximately 90 miles per hour (145 kilometers per hour), and the maximum cloud-top wind speed throughout the storm calculated by MISR was 135 miles per hour (220 kilometers per hour). While the hurricane's dominant rotation direction is counter-clockwise, winds near the eye wall are consistently pointing outward from it. This is an indication of outflow, the process by which a hurricane draws in warm, moist air at the surface and ejects cool, dry air at its cloud tops. These data were captured during Terra orbit 94267. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21946

  15. Geomorphological activity at a rock glacier front detected with a 3D density-based clustering algorithm

    NASA Astrophysics Data System (ADS)

    Micheletti, Natan; Tonini, Marj; Lane, Stuart N.

    2017-02-01

    Acquisition of high density point clouds using terrestrial laser scanners (TLSs) has become commonplace in geomorphic science. The derived point clouds are often interpolated onto regular grids and the grids compared to detect change (i.e. erosion and deposition/advancement movements). This procedure is necessary for some applications (e.g. digital terrain analysis), but it inevitably leads to a certain loss of potentially valuable information contained within the point clouds. In the present study, an alternative methodology for geomorphological analysis and feature detection from point clouds is proposed. It rests on the use of the Density-Based Spatial Clustering of Applications with Noise (DBSCAN), applied to TLS data for a rock glacier front slope in the Swiss Alps. The proposed method allows movements to be detected and isolated directly from the point clouds, so that the accuracy of the subsequent volume computations depends only on the actual registered distance between points. We demonstrate that these volume estimates are more conservative than volumes computed by traditional DEM comparison. The results are illustrated for the summer of 2015, a season of enhanced geomorphic activity associated with exceptionally high temperatures.
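As a rough sketch of how DBSCAN isolates dense clusters of change points directly in a point cloud (the parameter values here are illustrative, not those of the study, and a brute-force neighbor search stands in for the spatial index a real TLS cloud would need):

```python
import numpy as np

def dbscan(points, eps=0.5, min_samples=10):
    """Minimal DBSCAN: label points by density-connected cluster, -1 = noise."""
    n = len(points)
    labels = np.full(n, -1)
    # Brute-force neighborhood search; use a KD-tree for real TLS clouds.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(row <= eps) for row in dists]
    core = np.array([len(nb) >= min_samples for nb in neighbors])
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or not core[i]:
            continue
        visited[i] = True
        labels[i] = cluster
        stack = [i]
        while stack:                      # grow the cluster from core points
            j = stack.pop()
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cluster   # border or core point joins cluster
                if core[k] and not visited[k]:
                    visited[k] = True
                    stack.append(k)
        cluster += 1
    return labels
```

Each labelled cluster can then be bounded and its volume estimated directly from the member points, without gridding.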

  16. Metric Scale Calculation for Visual Mapping Algorithms

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.

    2018-05-01

    Visual SLAM algorithms localize the camera by mapping its environment as a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a priori known metric extension, which can be identified in the unscaled point cloud. Extensions of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. Extensions of objects, like traffic signs with a known metric size, are derived by projecting their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It is shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to that of other recent works.
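The abstract does not spell out the fusion rule; a common choice for combining several independent scale estimates of differing reliability is inverse-variance weighting, sketched below (all numbers are hypothetical, not values from the paper):

```python
def fuse_scales(scales, sigmas):
    """Inverse-variance weighted fusion of individual metric scale estimates.
    scales: individual scale values; sigmas: their standard deviations."""
    weights = [1.0 / s ** 2 for s in sigmas]
    fused = sum(v * w for v, w in zip(scales, weights)) / sum(weights)
    fused_sigma = (1.0 / sum(weights)) ** 0.5
    return fused, fused_sigma

# Hypothetical estimates: from the driving lane, the room height, a traffic sign.
scale, sigma = fuse_scales([0.052, 0.050, 0.049], [0.004, 0.002, 0.006])
```

With this rule the fused uncertainty is never larger than the smallest input uncertainty, consistent with the observation that each individual scale value either improves robustness or reduces the error.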

  17. Automated detection of cloud and cloud-shadow in single-date Landsat imagery using neural networks and spatial post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Michael J.; Hayes, Daniel J

    2014-01-01

    Use of Landsat data to answer ecological questions is contingent on the effective removal of cloud and cloud shadow from satellite images. We develop a novel algorithm to identify and classify clouds and cloud shadow, SPARCS: Spatial Procedures for Automated Removal of Cloud and Shadow. The method uses neural networks to determine cloud, cloud-shadow, water, snow/ice, and clear-sky membership of each pixel in a Landsat scene, and then applies a set of procedures to enforce spatial rules. In a comparison to FMask, a high-quality cloud and cloud-shadow classification algorithm currently available, SPARCS performs favorably, with similar omission errors for clouds (0.8% and 0.9%, respectively), substantially lower omission error for cloud shadow (8.3% and 1.1%), and fewer errors of commission (7.8% and 5.0%). Additionally, SPARCS provides a measure of uncertainty in its classification that can be exploited by other processes that use the cloud and cloud-shadow detection. To illustrate this, we present an application that constructs obstruction-free composites of images acquired on different dates in support of algorithms detecting vegetation change.

  18. GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    PubMed

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2018-01-01

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
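The paper's GPU evolution strategy evolves large populations over rotations and translations of labeled point clouds; as a much-reduced illustration of the underlying idea (our naming and parameters, not the authors'), the following toy (1+1)-ES searches for a translation superposing one small cloud onto another, with multiplicative step-size adaptation in place of a full population:

```python
import numpy as np

def es_superpose(src, dst, iters=500, seed=0):
    """Toy (1+1)-evolution strategy: minimize the RMSD between src + t and dst
    over the translation t, adapting the mutation step size as it goes."""
    rng = np.random.default_rng(seed)
    def rmsd(t):
        return np.sqrt(((src + t - dst) ** 2).sum(axis=1).mean())
    t, step = np.zeros(3), 1.0
    best = rmsd(t)
    for _ in range(iters):
        cand = t + step * rng.normal(size=3)   # mutate the parent
        c = rmsd(cand)
        if c < best:                           # selection: keep the better one
            t, best = cand, c
            step *= 1.5                        # expand step on success
        else:
            step *= 0.9                        # shrink step on failure
    return t, best
```

A realistic superpositioning must also search over rotations and respect point labels, and it is exactly that multimodal landscape which motivates the large parallel GPU populations of the paper.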

  19. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations

    PubMed Central

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-01-01

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10–0.20 m, and vertical accuracy was approximately 0.01–0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed. PMID:28398256

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Shawn

    This code consists of Matlab routines which enable the user to perform non-manifold surface reconstruction via triangulation from high dimensional point cloud data. The code is based on an algorithm originally developed in [Freedman (2007), An Incremental Algorithm for Reconstruction of Surfaces of Arbitrary Codimension, Computational Geometry: Theory and Applications, 36(2):106-116]. This algorithm has been modified to accommodate non-manifold surfaces according to the work described in [S. Martin and J.-P. Watson (2009), Non-Manifold Surface Reconstruction from High Dimensional Point Cloud Data, SAND #5272610]. The motivation for developing the code was a point cloud describing the molecular conformation space of cyclooctane (C8H16). Cyclooctane conformation space was represented using points in 72 dimensions (3 coordinates for each atom). The code was used to triangulate the point cloud and thereby study the geometry and topology of cyclooctane. Future applications are envisioned for peptides and proteins.

  1. Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation

    NASA Astrophysics Data System (ADS)

    An, Lu; Guo, Baolong

    2018-03-01

    Recently, illegal constructions have been appearing frequently in our surroundings, seriously restricting the orderly development of urban modernization. 3D point cloud data can be used to identify illegal buildings, addressing this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. Initially, in order to save memory space and reduce processing time, a lossless point cloud compression method based on a minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground are obtained using a region growing method, and as a result, illegal constructions can be marked. The effectiveness of the proposed algorithm is verified using a public data set collected by the International Society for Photogrammetry and Remote Sensing (ISPRS).
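As an illustration of the final clustering step (our sketch, not the authors' implementation), ground-filtered points can be grouped into building clusters by flood-filling occupied voxels, a simple form of region growing:

```python
import numpy as np
from collections import deque

def grow_regions(points, voxel=1.0):
    """Label points by connected component of occupied voxels (26-connectivity)."""
    keys = np.floor(points / voxel).astype(int)
    occupied = {tuple(k) for k in keys}
    label_of, label = {}, 0
    for seed in occupied:
        if seed in label_of:
            continue
        label_of[seed] = label
        queue = deque([seed])
        while queue:                     # grow the region across touching voxels
            x, y, z = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        nb = (x + dx, y + dy, z + dz)
                        if nb in occupied and nb not in label_of:
                            label_of[nb] = label
                            queue.append(nb)
        label += 1
    return np.array([label_of[tuple(k)] for k in keys])
```

Clusters whose footprint or height disagrees with the registered building inventory could then be flagged as candidate illegal constructions.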

  2. Microsecond-scale electric field pulses in cloud lightning discharges

    NASA Technical Reports Server (NTRS)

    Villanueva, Y.; Rakov, V. A.; Uman, M. A.; Brook, M.

    1994-01-01

    From wideband electric field records acquired using a 12-bit digitizing system with a 500-ns sampling interval, microsecond-scale pulses in different stages of cloud flashes in Florida and New Mexico are analyzed. Pulse occurrence statistics and waveshape characteristics are presented. The larger pulses tend to occur early in the flash, confirming the results of Bils et al. (1988) and in contrast with the three-stage representation of cloud-discharge electric fields suggested by Kitagawa and Brook (1960). Possible explanations for the discrepancy are discussed. The tendency for the larger pulses to occur early in the cloud flash suggests that they are related to the initial in-cloud channel formation processes and contradicts the common view found in the atmospheric radio-noise literature that the main sources of VLF/LF electromagnetic radiation in cloud flashes are the K processes which occur in the final, or J type, part of the cloud discharge.

  3. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud computing is becoming increasingly relevant, as it will enable the companies involved in spreading this technology to open the door to Web 3.0. The new categories of services it introduces will slowly replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. This paper integrates a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses of large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  4. Deep Clouds

    NASA Image and Video Library

    2008-05-27

    Bright puffs and ribbons of cloud drift lazily through Saturn's murky skies. In contrast to the bold red, orange and white clouds of Jupiter, Saturn's clouds are overlain by a thick layer of haze. The visible cloud tops on Saturn are deeper in its atmosphere due to the planet's cooler temperatures. This view looks toward the unilluminated side of the rings from about 18 degrees above the ringplane. Images taken using red, green and blue spectral filters were combined to create this natural color view. The images were acquired with the Cassini spacecraft wide-angle camera on April 15, 2008 at a distance of approximately 1.5 million kilometers (906,000 miles) from Saturn. Image scale is 84 kilometers (52 miles) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA09910

  5. LANDSAT world standard catalog, LANDSAT-3. [LANDSAT 3 imagery for October 1978

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Imagery acquired by LANDSAT 3 which was processed and input to the data files during the referenced month is listed. Data such as date acquired, cloud cover, and image quality are given for each scene. The microfilm roll and frame on which the scene may be found are also given.

  6. Multi-temporal Terrestrial Laser Scanner monitoring of coastal instability processes at Coroglio cliff

    NASA Astrophysics Data System (ADS)

    Caputo, Teresa; Somma, Renato; Marino, Ermanno; Matano, Fabio; Troise, Claudia; De Natale, Giuseppe

    2016-04-01

    The Coroglio cliff is a morphological evolution of the caldera rim of the Neapolitan Yellow Tuff (NYT) in the Campi Flegrei caldera (CFc), with an elevation of 150 m a.s.l. and a length of about 200 m. The lithology consists of extremely lithified NYT overlaid by less lithified recent products of Phlegrean volcanism. These materials are highly erodible and, due to the proximity to the sea, wave and wind action causes very strong erosion processes. In recent years the Terrestrial Laser Scanner (TLS) technique has been used for environmental monitoring purposes through the creation of high resolution Digital Surface Models (DSM) and Digital Terrain Models (DTM). This method allows the reconstruction, by means of a dense cloud of points, of a 3D model of the entire investigated area. The scans need to be performed from different points of view in order to ensure good coverage of the area, because a widespread problem is the occurrence of shaded areas. In our study we used a long-range laser scanner, model RIEGL VZ1000®. Numerous surveys (April 2013, June 2014, February 2015) were performed to monitor the morphological evolution of the coastal cliff. An additional survey was executed in March 2015, shortly after a landslide occurrence. To validate the multi-temporal laser scanner monitoring, a "quick" comparison of the acquired point clouds was carried out using a cloud-to-cloud algorithm in order to identify 3D changes. Then 2.5D raster images of the different scans were produced in a GIS environment, in order to allow a map overlay of the produced thematic layers, both raster and vector data (geology, contour map, orthophoto, and so on). The comparison of multi-temporal data has evidenced interesting geomorphological processes on the cliff. 
A very intense (about 6 m) local retreat at the base of the cliff was observed, mainly due to sea wave action during storms, while in cliff sectors characterized by less compact lithologies widespread small (> 50 cm wide) erosion forms have been recognized. Finally, the upper part of the cliff, characterized by loose pyroclastic deposits covered by vegetation, was affected by small collapses, which can involve the public gardens of Virgiliano Park, located at the top of the cliff.

  7. Generation of binary holograms for deep scenes captured with a camera and a depth sensor

    NASA Astrophysics Data System (ADS)

    Leportier, Thibault; Park, Min-Chul

    2017-01-01

    This work presents binary hologram generation from images of a real object acquired from a Kinect sensor. Since hologram calculation from a point-cloud or polygon model presents a heavy computational burden, we adopted a depth-layer approach to generate the holograms. This method enables us to obtain holographic data of large scenes quickly. Our investigations focus on the performance of different methods, iterative and noniterative, to convert complex holograms into binary format. Comparisons were performed to examine the reconstruction of the binary holograms at different depths. We also propose to modify the direct binary search algorithm to take into account several reference image planes. Then, deep scenes featuring multiple planes of interest can be reconstructed with better efficiency.

  8. Catalog of earth photographs from the Apollo-Soyuz test project. [listing cloud photographs and data acquired at time photograph was taken

    NASA Technical Reports Server (NTRS)

    El-Baz, F. (Editor)

    1979-01-01

    Information is given on earth photographs obtained by the Apollo astronauts during the Apollo Soyuz Test Project. The data are arranged in three sections. A map index shows the boundaries of each photograph and is used for a quick survey of the coverage for a given geographical area. A tabular index provides the following data: list of photographs by serial number, description of geographic location, latitude and longitude of the center point of the photograph, date when photograph was taken, ground elapsed time, revolution number of Apollo spacecraft, approximate spacecraft altitude, tilt, sun angle, camera, and lens. The photographic index provides same size black and white prints made from the original color negatives.

  9. Effect of electromagnetic field on Kordylewski clouds formation

    NASA Astrophysics Data System (ADS)

    Salnikova, Tatiana; Stepanov, Sergey

    2018-05-01

    In previous papers the authors suggested an explanation of the phenomenon of appearance and disappearance of the Kordylewski clouds: accumulations of cosmic dust in the vicinity of the triangular libration points of the Earth-Moon system. Under the gravitational and light perturbation of the Sun, the triangular libration points are not points of relative equilibrium. However, there exist stable periodic motions of particles surrounding each of the triangular libration points. Due to this fact we can consider a probabilistic model of dust cloud formation: the clouds move along periodic orbits within a small vicinity of these points. To continue this research we suggest a mathematical model to investigate also the electromagnetic influences that arise when charged dust particles in the vicinity of the triangular libration points of the Earth-Moon system are considered. In this model we take into consideration the self-induced force field within the set of charged particles; the probability distribution density evolves according to the Vlasov equation.

  10. Point Cloud Based Change Detection - an Automated Approach for Cloud-based Services

    NASA Astrophysics Data System (ADS)

    Collins, Patrick; Bahr, Thomas

    2016-04-01

    The fusion of stereo photogrammetric point clouds with LiDAR data or terrain information derived from SAR interferometry has a significant potential for 3D topographic change detection. In the present case study latest point cloud generation and analysis capabilities are used to examine a landslide that occurred in the village of Malin in Maharashtra, India, on 30 July 2014, and affected an area of ca. 44,000 m2. It focuses on Pléiades high resolution satellite imagery and the Airbus DS WorldDEM™ as a product of the TanDEM-X mission. This case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language). Thus, ENVI analytics is running via the object-oriented and IDL-based ENVITask API. The pre-event topography is represented by the WorldDEM™ product, delivered with a raster of 12 m x 12 m and based on the EGM2008 geoid (called pre-DEM). For the post-event situation a Pléiades 1B stereo image pair of the affected AOI was obtained. The ENVITask "GeneratePointCloudsByDenseImageMatching" was implemented to extract passive point clouds in LAS format from the panchromatic stereo datasets: • A dense image-matching algorithm is used to identify corresponding points in the two images. • A block adjustment is applied to refine the 3D coordinates that describe the scene geometry. • Additionally, the WorldDEM™ was input to constrain the range of heights in the matching area, and subsequently the length of the epipolar line. The "PointCloudFeatureExtraction" task was executed to generate the post-event digital surface model from the photogrammetric point clouds (called post-DEM). Post-processing consisted of the following steps: • Adding the geoid component (EGM 2008) to the post-DEM. • Pre-DEM reprojection to the UTM Zone 43N (WGS-84) coordinate system and resizing. • Subtraction of the pre-DEM from the post-DEM. 
• Filtering and threshold based classification of the DEM difference to analyze the surface changes in 3D. The automated point cloud generation and analysis introduced here can be embedded in virtually any existing geospatial workflow for operational applications. Three integration options were implemented in this case study: • Integration within any ArcGIS environment whether deployed on the desktop, in the cloud, or online. Execution uses a customized ArcGIS script tool. A Python script file retrieves the parameters from the user interface and runs the precompiled IDL code. That IDL code is used to interface between the Python script and the relevant ENVITasks. • Publishing the point cloud processing tasks as services via the ENVI Services Engine (ESE). ESE is a cloud-based image analysis solution to publish and deploy advanced ENVI image and data analytics to existing enterprise infrastructures. For this purpose the entire IDL code can be capsuled in a single ENVITask. • Integration in an existing geospatial workflow using the Python-to-IDL Bridge. This mechanism allows calling IDL code within Python on a user-defined platform. The results of this case study allow a 3D estimation of the topographic changes within the tectonically active and anthropogenically invaded Malin area after the landslide event. Accordingly, the point cloud analysis was correlated successfully with modelled displacement contours of the slope. Based on optical satellite imagery, such point clouds of high precision and density distribution can be obtained in a few minutes to support the operational monitoring of landslide processes.
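The differencing and threshold classification step at the end of the post-processing chain can be sketched generically in a few lines (our version; in the case study the equivalent operation runs through the ENVITask API, and the threshold value below is hypothetical):

```python
import numpy as np

def dem_change(pre_dem, post_dem, threshold=2.0):
    """Difference two co-registered DEMs and classify each cell:
    +1 deposition, -1 erosion/depletion, 0 within the noise threshold."""
    diff = post_dem - pre_dem
    classes = np.zeros(diff.shape, dtype=int)
    classes[diff > threshold] = 1
    classes[diff < -threshold] = -1
    return diff, classes
```

The threshold must exceed the combined vertical uncertainty of the two DEMs, otherwise DEM noise is misread as topographic change.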

  11. Observations of the boundary layer, cloud, and aerosol variability in the southeast Pacific coastal marine stratocumulus during VOCALS-REx

    NASA Astrophysics Data System (ADS)

    Zheng, X.; Albrecht, B.; Jonsson, H. H.; Khelif, D.; Feingold, G.; Minnis, P.; Ayers, K.; Chuang, P.; Donaher, S.; Rossiter, D.; Ghate, V.; Ruiz-Plancarte, J.; Sun-Mack, S.

    2011-05-01

    Aircraft observations made off the coast of northern Chile in the Southeastern Pacific (20° S, 72° W; named Point Alpha) from 16 October to 13 November 2008 during the VAMOS Ocean-Cloud-Atmosphere-Land Study-Regional Experiment (VOCALS-REx), combined with meteorological reanalysis, satellite measurements, and radiosonde data, are used to investigate the boundary layer (BL) and aerosol-cloud-drizzle variations in this region. The BL at Point Alpha was typical of a non-drizzling stratocumulus-topped BL on days without predominant synoptic and meso-scale influences. The BL had a depth of 1140 ± 120 m, was well-mixed and capped by a sharp inversion. The wind direction generally switched from southerly within the BL to northerly above the inversion. The cloud liquid water path (LWP) varied between 15 g m-2 and 160 g m-2. From 29 October to 4 November, when a synoptic system affected conditions at Point Alpha, the cloud LWP was higher than on the other days by around 40 g m-2. On 1 and 2 November, a moist layer above the inversion moved over Point Alpha. The total-water specific humidity above the inversion was larger than that within the BL during these days. Entrainment rates (average of 1.5 ± 0.6 mm s-1) calculated from the near cloud-top fluxes and turbulence (vertical velocity variance) in the BL at Point Alpha appeared to be weaker than those in the BL over the open ocean west of Point Alpha and the BL near the coast of the northeast Pacific. The accumulation mode aerosol varied from 250 to 700 cm-3 within the BL, and CCN at 0.2 % supersaturation within the BL ranged between 150 and 550 cm-3. The main aerosol source at Point Alpha was horizontal advection within the BL from the south. The average cloud droplet number concentration ranged between 80 and 400 cm-3, which was consistent with the satellite-derived values. The relationship between cloud droplet number concentration and CCN at 0.2 % supersaturation from 18 flights is Nd = 4.6 × CCN^0.71. 
While the mean LWP retrieved from GOES was in good agreement with the in situ measurements, the GOES-derived cloud droplet effective radius tended to be larger than that from the aircraft in situ observations near cloud top. The aerosol and cloud LWP relationship reveals that during the typical well-mixed BL days the cloud LWP increased with the CCN concentrations. On the other hand, meteorological factors and decoupling processes have a large influence on the cloud LWP variation as well.

  12. Study of cloud properties using airborne and satellite measurements

    NASA Astrophysics Data System (ADS)

    Boscornea, Andreea; Stefan, Sabina; Vajaiac, Sorin Nicolae

    2014-08-01

    The present study investigates cloud microphysical properties using aircraft and satellite measurements. Cloud properties were drawn from data acquired both from in situ measurements with state-of-the-art airborne instrumentation and from satellite products of the MODIS06 System. The aircraft used was ATMOSLAB - Airborne Laboratory for Environmental Atmospheric Research, property of the National Institute for Aerospace Research "Elie Carafoli" (INCAS), Bucharest, Romania, which is specially equipped for this kind of research. The main tool of the airborne laboratory is a Cloud, Aerosol and Precipitation Spectrometer - CAPS (30 bins, 0.51-50 μm). The data were recorded during two flights in the winter of 2013-2014, over a flat region in the south-eastern part of Romania (between Bucharest and Constanta). The analysis of cloud particle size variations and cloud liquid water content provided by CAPS can explain cloud processes, and can also indicate the extent of aerosol effects on clouds. The results, such as cloud coverage and/or cloud types and microphysical parameters of aerosols on the one side, and the cloud microphysics parameters obtained from aircraft flights on the other side, were used to illustrate the importance of cloud microphysical properties for including the radiative effects of clouds in regional climate models.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warner-Schmid, D.; Hoshi, Suwaru; Armstrong, D.W.

    Aqueous solutions of nonionic surfactants are known to undergo phase separations at elevated temperatures. This phenomenon is known as "clouding," and the temperature at which it occurs is referred to as the cloud point. Permethylhydroxypropyl-[beta]-cyclodextrin (PMHP-[beta]-CD) was synthesized, and aqueous solutions containing it were found to undergo similar cloud-point behavior. Factors that affect the phase separation of PMHP-[beta]-CD were investigated. Subsequently, the cloud-point extractions of several aromatic compounds (i.e., acetanilide, aniline, 2,2[prime]-dihydroxybiphenyl, N-methylaniline, 2-naphthol, o-nitroaniline, m-nitroaniline, p-nitroaniline, nitrobenzene, o-nitrophenol, m-nitrophenol, p-nitrophenol, 4-phenazophenol, 3-phenylphenol, and 2-phenylbenzimidazole) from dilute aqueous solution were evaluated. Although the extraction efficiency of the compounds varied, most can be quantitatively extracted if sufficient PMHP-[beta]-CD is used. For those few compounds that are not extracted (e.g., o-nitroacetanilide), the cloud-point procedure may be an effective one-step isolation or purification method. 18 refs., 2 figs., 3 tabs.

  14. Near-Field Deformation Associated with the South Napa Earthquake (M 6.0) Using Differential Airborne LiDAR

    NASA Astrophysics Data System (ADS)

    Hudnut, K. W.; Glennie, C. L.; Brooks, B. A.; Hauser, D. L.; Ericksen, T.; Boatwright, J.; Rosinski, A.; Dawson, T. E.; Mccrink, T. P.; Mardock, D. K.; Hoirup, D. F., Jr.; Bray, J.

    2014-12-01

    Pre-earthquake airborne LiDAR coverage exists for the area impacted by the M 6.0 South Napa earthquake. The Napa watershed data set was acquired in 2003, and data sets were acquired in other portions of the impacted area in 2007, 2010 and 2014. The pre-earthquake data are being assessed and are of variable quality and point density. Following the earthquake, a coalition was formed to enable rapid acquisition of post-earthquake LiDAR. Coordination of this coalition took place through the California Earthquake Clearinghouse; consequently, a commercial contract was organized by Department of Water Resources that allowed for the main fault rupture and damaged Browns Valley area to be covered 16 days after the earthquake at a density of 20 points per square meter over a 20 square kilometer area. Along with the airborne LiDAR, aerial imagery was acquired and will be processed to form an orthomosaic using the LiDAR-derived DEM. The 'Phase I' airborne data were acquired using an Optech Orion M300 scanner, an Applanix 200 GPS-IMU, and a DiMac ultralight medium format camera by Towill. These new data, once delivered, will be differenced against the pre-earthquake data sets using a newly developed algorithm for point cloud matching, which is improved over prior methods by accounting for scan geometry error sources. Proposed additional 'Phase II' coverage would allow repeat-pass, post-earthquake coverage of the same area of interest as in Phase I, as well as an addition of up to 4,150 square kilometers that would potentially allow for differential LiDAR assessment of levee and bridge impacts at a greater distance from the earthquake source. Levee damage was reported up to 30 km away from the epicenter, and proposed LiDAR coverage would extend up to 50 km away and cover important critical lifeline infrastructure in the western Sacramento River delta, as well as providing full post-earthquake repeat-pass coverage of the Napa watershed to study transient deformation.

  15. Improving volcanic sulfur dioxide cloud dispersal forecasts by progressive assimilation of satellite observations

    NASA Astrophysics Data System (ADS)

    Boichu, Marie; Clarisse, Lieven; Khvorostyanov, Dmitry; Clerbaux, Cathy

    2014-04-01

    Forecasting the dispersal of volcanic clouds during an eruption is of primary importance, especially for ensuring aviation safety. As volcanic emissions are characterized by rapid variations of emission rate and height, the (generally) high level of uncertainty in the emission parameters represents a critical issue that limits the robustness of volcanic cloud dispersal forecasts. An inverse modeling scheme, combining satellite observations of the volcanic cloud with a regional chemistry-transport model, allows reconstructing this source term at high temporal resolution. We demonstrate here how a progressive assimilation of freshly acquired satellite observations, via such an inverse modeling procedure, allows for delivering robust sulfur dioxide (SO2) cloud dispersal forecasts during the eruption. This approach provides a computationally cheap estimate of the expected location and mass loading of volcanic clouds, including the identification of SO2-rich parts.

  16. Measuring cloud thermodynamic phase with shortwave infrared imaging spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, David R.; McCubbin, Ian; Gao, Bo Cai

    Shortwave infrared imaging spectroscopy enables accurate remote mapping of cloud thermodynamic phase at high spatial resolution. We describe a measurement strategy to exploit signatures of liquid and ice absorption in cloud-top apparent reflectance spectra from 1.4 to 1.8 μm. This signal is generally insensitive to confounding factors such as solar angles, view angles, and surface albedo. We first evaluate the approach in simulation and then apply it to airborne data acquired in the Calwater-2/ACAPEX campaign of Winter 2015, in which NASA’s “Classic” Airborne Visible Infrared Imaging Spectrometer (AVIRIS-C) remotely observed diverse cloud formations while the U.S. Department of Energy ARM Aerial Facility G-1 aircraft measured cloud integral and microphysical properties in situ. The coincident measurements demonstrate good separation of the thermodynamic phases for relatively homogeneous clouds.

  17. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    PubMed Central

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-01-01

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, a registration step that introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is applied to two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis. PMID:28029121
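    The core invariance behind the baseline method can be sketched with hypothetical data: pairwise distances between feature points do not depend on each scan's arbitrary coordinate frame, so real deformation shows up directly as a change in baseline length, with no registration step.

```python
import numpy as np

def baselines(points):
    """All pairwise distances ("baselines") between feature points."""
    i, j = np.triu_indices(len(points), k=1)
    return np.linalg.norm(points[i] - points[j], axis=1), list(zip(i, j))

# Feature points (e.g. brick centres) observed in each scan's OWN frame.
epoch1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])

# Epoch 2: the third point really moved 5 cm ...
deformed = epoch1.copy()
deformed[2, 1] += 0.05
# ... and the second scan sits in an arbitrary, unregistered frame.
a = np.radians(30)
R = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
epoch2 = deformed @ R.T + np.array([10.0, -3.0, 1.5])

d1, pairs = baselines(epoch1)
d2, _ = baselines(epoch2)
changes = d2 - d1   # baselines touching point 2 change; baseline (0, 1) does not
```

    The rigid scanner motion between epochs leaves baseline (0, 1) unchanged, while the two baselines involving the deformed point change, exactly as the method exploits.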

  19. Study of the thermodynamic phase of hydrometeors in convective clouds in the Amazon Basin

    NASA Astrophysics Data System (ADS)

    Ferreira, W. C.; Correia, A. L.; Martins, J.

    2012-12-01

    Aerosol-cloud interactions are responsible for large uncertainties in climate models. One key factor when studying clouds perturbed by aerosols is determining the thermodynamic phase of hydrometeors as a function of temperature or height in the cloud. Conventional remote sensing can provide information on the thermodynamic phase of clouds over large areas, but it lacks the precision needed to understand how a single, real cloud evolves. Here we present mappings of the thermodynamic phase of droplets and ice particles in individual convective clouds in the Amazon Basin, obtained by analyzing the emerging infrared radiance on cloud sides (Martins et al., 2011). In flights over the Amazon Basin with a research aircraft, Martins et al. (2011) used imaging radiometers with spectral filters to record the emerging radiance on cloud sides at the wavelengths of 2.10 and 2.25 μm. Due to differential absorption and scattering of these wavelengths by hydrometeors in the liquid or solid phase, the intensity ratio between images recorded at the two wavelengths can be used as a proxy for the thermodynamic phase of these hydrometeors. To analyze the acquired dataset we used the MATLAB tools package, developing scripts to handle data files and derive the thermodynamic phase. In some cases parallax effects due to aircraft movement required additional data processing before calculating ratios. Only well-illuminated scenes were considered, i.e. images acquired as close as possible to the backscatter vector of the incident solar radiation. Note that the intensity ratio values corresponding to a given thermodynamic phase can vary from cloud to cloud (Martins et al., 2011); however, within the same cloud the distinction between ice, water and mixed phase is clear. Analyzing histograms of reflectance ratios 2.10/2.25 μm in selected cases, we found averages typically between 0.3 and 0.4 for ice-phase hydrometeors, and between 0.5 and 0.7 for water-phase droplets, consistent with the findings of Martins et al. (2011). Figure 1 shows an example of thermodynamic phase classification obtained with this technique. These experimental results can potentially be used in fast derivations of thermodynamic phase mappings in deep convective clouds, providing useful information for studies of aerosol-cloud interactions.
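    A minimal sketch of the ratio-based classification (thresholds are illustrative, placed between the averages reported above; as the abstract notes, the real values vary from cloud to cloud):

```python
import numpy as np

# Illustrative thresholds placed between the reported averages
# (ice ~0.3-0.4, liquid water ~0.5-0.7); not calibrated values.
ICE_MAX, WATER_MIN = 0.45, 0.50

def classify_phase(r210, r225):
    """Label each pixel from its 2.10/2.25 um reflectance ratio."""
    ratio = r210 / r225
    phase = np.full(ratio.shape, "mixed", dtype=object)
    phase[ratio < ICE_MAX] = "ice"
    phase[ratio > WATER_MIN] = "water"
    return phase

r210 = np.array([0.12, 0.30, 0.33])   # synthetic band reflectances
r225 = np.array([0.35, 0.50, 0.70])
phase = classify_phase(r210, r225)
```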

  20. Digital elevation model of King Edward VII Peninsula, West Antarctica, from SAR interferometry and ICESat laser altimetry

    USGS Publications Warehouse

    Baek, S.; Kwoun, Oh-Ig; Braun, Andreas; Lu, Z.; Shum, C.K.

    2005-01-01

    We present a digital elevation model (DEM) of King Edward VII Peninsula, Sulzberger Bay, West Antarctica, developed using 12 European Remote Sensing (ERS) synthetic aperture radar (SAR) scenes and 24 Ice, Cloud, and land Elevation Satellite (ICESat) laser altimetry profiles. We employ differential interferograms from the ERS tandem mission SAR scenes acquired in the austral fall of 1996, and four selected ICESat laser altimetry profiles acquired in the austral fall of 2004, as ground control points (GCPs) to construct an improved geocentric 60-m resolution DEM over the grounded ice region. We then extend the DEM to include two ice shelves using ICESat profiles via Kriging. Twenty additional ICESat profiles acquired in 2003-2004 are used to assess the accuracy of the DEM. After accounting for radar penetration depth and predicted surface changes, including effects due to ice mass balance, solid Earth tides, and glacial isostatic adjustment, in part to account for the eight-year data acquisition discrepancy, the resulting difference between the DEM and ICESat profiles is -0.57 ± 5.88 m. After removing the discrepancy between the DEM and ICESat profiles for a final combined DEM using a bicubic spline, the overall difference is 0.05 ± 1.35 m. © 2005 IEEE.

  1. Marine Layer Clouds off the California Coast

    NASA Image and Video Library

    2017-12-08

    NASA image acquired September 27, 2012 On September 27, 2012, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite captured this nighttime view of low-lying marine layer clouds along the coast of California. The image was captured by the VIIRS “day-night band,” which detects light in a range of wavelengths from green to near-infrared and uses filtering techniques to observe signals such as gas flares, auroras, wildfires, city lights, and reflected moonlight. An irregularly-shaped patch of high clouds hovers off the coast of California, and moonlight caused the high clouds to cast distinct shadows on the marine layer clouds below. VIIRS acquired the image when the Moon was in its waxing gibbous phase, meaning it was more than half-lit, but less than full. Low clouds pose serious hazards for air and ship traffic, but satellites have had difficulty detecting them in the past. To illustrate this, the second image shows the same scene in thermal infrared, the band that meteorologists generally use to monitor clouds at night. Only high clouds are visible; the low clouds do not show up at all because they are roughly the same temperature as the ground. NASA Earth Observatory image by Jesse Allen and Robert Simmon, using VIIRS Day-Night Band data from the Suomi National Polar-orbiting Partnership. Suomi NPP is the result of a partnership between NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense. Caption by Adam Voiland. Instrument: Suomi NPP - VIIRS Credit: NASA Earth Observatory

  2. Street curb recognition in 3d point cloud data using morphological operations

    NASA Astrophysics Data System (ADS)

    Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino

    2015-04-01

    Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible. The aim is to develop algorithms that can obtain accurate results with the least possible human intervention in the process. Automated curb detection is an important issue in the road maintenance, 3D urban modeling, and autonomous navigation fields. This paper focuses on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud collected by a mobile laser scanner (MLS) system. This work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, and in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud on the XY plane was carried out, passing from the 3D original data to a 2D image. To determine the location of curbs in the image, different image processing techniques such as thresholding and morphological operations were applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and applicable to both laser scanner and stereo vision 3D data, because it is independent of the scanning geometry. The method has been successfully tested on two datasets measured by different sensors. The first dataset is a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero; it comprises more than 6,000,000 points and covers a 400-meter street. The second dataset is a point cloud measured by a RIEGL sensor in the Austrian town of Horn; it comprises 8,000,000 points and represents a 160-meter street. The proposed method provides success rates in curb recognition of over 85% in both datasets.
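    The rasterization-plus-morphology pipeline can be sketched as follows (a simplified stand-in for the paper's method; the cell size, height thresholds, and toy scene are invented, and empty cells would need handling on real data):

```python
import numpy as np
from scipy import ndimage

def rasterize_max_z(points, cell=0.1):
    """Project the 3D points onto the XY plane; keep the max height per cell."""
    xy = np.rint((points[:, :2] - points[:, :2].min(axis=0)) / cell).astype(int)
    img = np.full(xy.max(axis=0) + 1, -np.inf)   # -inf marks empty cells
    np.maximum.at(img, (xy[:, 0], xy[:, 1]), points[:, 2])
    return img

def curb_mask(height_img, lo=0.05, hi=0.30):
    """Flag cells whose local height jump is in the curb range (5-30 cm),
    then clean the binary image with a morphological closing."""
    jump = (ndimage.maximum_filter(height_img, size=3)
            - ndimage.minimum_filter(height_img, size=3))
    return ndimage.binary_closing((jump > lo) & (jump < hi))

# Toy scene: a 2 m x 2 m patch, road at z = 0, a 15 cm sidewalk for y >= 1 m.
xs, ys = np.meshgrid(np.arange(0, 2, 0.1), np.arange(0, 2, 0.1), indexing="ij")
zs = np.where(ys >= 1.0, 0.15, 0.0)
pts = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
mask = curb_mask(rasterize_max_z(pts))   # True only along the curb line
```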

  3. 3D reconstruction from non-uniform point clouds via local hierarchical clustering

    NASA Astrophysics Data System (ADS)

    Yang, Jiaqi; Li, Ruibo; Xiao, Yang; Cao, Zhiguo

    2017-07-01

    Raw scanned 3D point clouds are usually irregularly distributed due to inherent shortcomings of laser sensors, which poses a great challenge for high-quality 3D surface reconstruction. This paper tackles this problem by proposing a local hierarchical clustering (LHC) method to improve the consistency of the point distribution. Specifically, LHC consists of two steps: 1) adaptive octree-based decomposition of 3D space, and 2) hierarchical clustering. The former aims at reducing the computational complexity, and the latter transforms the non-uniform point set into a uniform one. Experimental results on real-world scanned point clouds validate the effectiveness of our method from both qualitative and quantitative aspects.
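    A much-simplified analogue of the uniformization step, using a fixed-size voxel grid instead of the paper's adaptive octree and hierarchical clustering: each occupied cell's points are replaced by their centroid, thinning dense regions while leaving sparse ones intact.

```python
import numpy as np

def uniformize(points, voxel=0.05):
    """Replace the points inside each occupied voxel by their centroid, so
    dense regions are thinned while sparse regions keep their points."""
    keys = np.floor(points / voxel).astype(int)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n = inverse.max() + 1
    sums = np.zeros((n, points.shape[1]))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse, minlength=n).astype(float)
    return sums / counts[:, None]

dense = np.full((10, 3), 0.01)           # ten coincident points in one voxel
sparse = np.array([[1.0, 1.0, 1.0]])     # one isolated point, left untouched
out = uniformize(np.vstack([dense, sparse]))
```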

  4. Seasonal Change in Titan's Cloud Activity Observed with IRTF/SpeX

    NASA Astrophysics Data System (ADS)

    Schaller, Emily L.; Brown, M. E.; Roe, H. G.

    2006-09-01

    We have acquired whole disk spectra of Titan on nineteen nights with IRTF/SpeX over a three-month period in the spring of 2006. The data encompass the spectral range of 0.8 to 2.4 microns at a resolution of 375. These disk-integrated spectra allow us to determine Titan's total fractional cloud coverage and altitudes of clouds present. We find that Titan had less than 0.15% fractional cloud coverage on all but one of the nineteen nights. The near lack of cloud activity in these spectra is in sharp contrast to nearly every spectrum taken from 1995-1999 with UKIRT by Griffith et al. (1998 & 2000) who found rapidly varying clouds covering 0.5% of Titan's disk. The differences in these two similar datasets indicate a striking seasonal change in the behavior of Titan's clouds. Observations of the latitudes, magnitudes, altitudes, and frequencies of Titan's clouds as Titan moves toward southern autumnal equinox in 2009 will help elucidate when and how Titan's methane hydrological cycle changes with season.

  5. Methane, Ethane, and Nitrogen Stability on Titan

    NASA Astrophysics Data System (ADS)

    Hanley, J.; Grundy, W. M.; Thompson, G.; Dustrud, S.; Pearce, L.; Lindberg, G.; Roe, H. G.; Tegler, S.

    2017-12-01

    Many outer solar system bodies are likely to have a combination of methane, ethane and nitrogen. In particular the lakes of Titan are known to consist of these species. Understanding the past and current stability of these lakes requires characterizing the interactions of methane and ethane, along with nitrogen, as both liquids and ices. Our cryogenic laboratory setup allows us to explore ices down to 30 K through imaging, and transmission and Raman spectroscopy. Our recent work has shown that although methane and ethane have similar freezing points, when mixed they can remain liquid down to 72 K. Concurrently with the freezing point measurements we acquire transmission or Raman spectra of these mixtures to understand how the structural features change with concentration and temperature. Any mixing of these two species together will depress the freezing point of the lake below Titan's surface temperature, preventing them from freezing. We will present new results utilizing our recently acquired Raman spectrometer that allow us to explore both the liquid and solid phases of the ternary system of methane, ethane and nitrogen. In particular we will explore the effect of nitrogen on the eutectic of the methane-ethane system. At high pressure we find that the ternary creates two separate liquid phases. Through spectroscopy we determined the bottom layer to be nitrogen rich, and the top layer to be ethane rich. Identifying the eutectic, as well as understanding the liquidus and solidus points of combinations of these species, has implications for not only the lakes on the surface of Titan, but also for the evaporation/condensation/cloud cycle in the atmosphere, as well as the stability of these species on other outer solar system bodies. These results will help interpretation of future observational data, and guide current theoretical models.

  6. Infrared Extinction and the Initial Conditions for Star and Planet Formation

    NASA Technical Reports Server (NTRS)

    Lada, Charles J.

    2004-01-01

    This grant funds a research program to use infrared extinction measurements to probe the detailed structure of dark molecular clouds and investigate the physical conditions that give rise to star and planet formation. The goals of this program are to: 1) acquire deep infrared and molecular-line observations of a carefully selected sample of nearby dark clouds, 2) reduce and analyze the data obtained in order to produce detailed extinction maps of the clouds, and 3) prepare results, where appropriate, for publication. A description of how these goals were met is included.

  7. Balloon borne Antarctic frost point measurements and their impact on polar stratospheric cloud theories

    NASA Technical Reports Server (NTRS)

    Rosen, James M.; Hofmann, D. J.; Carpenter, J. R.; Harder, J. W.; Oltmans, S. J.

    1988-01-01

    The first balloon-borne frost point measurements over Antarctica were made during September and October 1987 as part of the NOZE 2 effort at McMurdo. The results indicate water vapor mixing ratios on the order of 2 ppmv in the 15 to 20 km region, which is significantly smaller than the typical values currently being used in polar stratospheric cloud (PSC) theories. The observed water vapor mixing ratio would correspond to saturated conditions for what is thought to be the lowest stratospheric temperatures encountered over the Antarctic. Through the use of available lidar observations, there appears to be significant evidence that some PSCs form at temperatures higher than the local frost point (with respect to water) in the 10 to 20 km region, thus supporting the nitric acid theory of PSC composition. Clouds near 15 km and below appear to form in regions saturated with respect to water and thus are probably mostly ice water clouds, although they could contain relatively small amounts of other constituents. Photographic evidence suggests that the clouds forming above the frost point probably have an appearance quite different from the lower-altitude iridescent, colored nacreous clouds.

  8. An approach of point cloud denoising based on improved bilateral filtering

    NASA Astrophysics Data System (ADS)

    Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin

    2018-04-01

    An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm that is employed to handle depth images. First, the mobile platform can move flexibly and its control interface is convenient. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process depth images obtained by the Kinect sensor. The results show that the noise-removal effect is improved compared with bilateral filtering. In the off-line condition, the color images and processed images are used to build point clouds. Finally, experimental results demonstrate that our method improves the processing speed of depth images and the quality of the resulting point clouds.
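    For reference, a direct (deliberately slow) implementation of the classic bilateral filter that LBF accelerates; the parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def bilateral_depth(depth, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Classic bilateral filter on a depth image: each output pixel is a
    weighted mean of its neighbours, where weights fall off with both
    spatial distance and depth difference, so depth edges are preserved."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    pad = np.pad(depth, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # fixed kernel
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-(patch - depth[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rangew
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

# A clean step edge in depth survives filtering almost unchanged.
depth = np.concatenate([np.full((8, 4), 1.0), np.full((8, 4), 2.0)], axis=1)
smoothed = bilateral_depth(depth)
```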

  9. Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation

    NASA Astrophysics Data System (ADS)

    Lim, Tae W.

    2015-06-01

    A modeling process to simulate point cloud range data that a lidar (light detection and ranging) sensor produces is presented in this paper in order to support the development of non-cooperative pose (relative attitude and position) estimation approaches which will help improve proximity operation capabilities between two adjacent vehicles. The algorithms in the modeling process were based on the homogeneous transformation, which has been employed extensively in robotics and computer graphics, as well as in recently developed pose estimation algorithms. Using a flash lidar in a laboratory testing environment, point cloud data of a test article was simulated and compared against the measured point cloud data. The simulated and measured data sets match closely, validating the modeling process. The modeling capability enables close examination of the characteristics of point cloud images of an object as it undergoes various translational and rotational motions. Relevant characteristics that will be crucial in non-cooperative pose estimation were identified such as shift, shadowing, perspective projection, jagged edges, and differential point cloud density. These characteristics will have to be considered in developing effective non-cooperative pose estimation algorithms. The modeling capability will allow extensive non-cooperative pose estimation performance simulations prior to field testing, saving development cost and providing performance metrics of the pose estimation concepts and algorithms under evaluation. The modeling process also provides "truth" pose of the test objects with respect to the sensor frame so that the pose estimation error can be quantified.
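    The homogeneous-transformation machinery at the heart of such a simulator can be sketched in a few lines (the pose and the two-point "model" are invented for illustration):

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_points(T, pts):
    """Apply a 4x4 homogeneous transform to an (N, 3) point cloud."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    return (homo @ T.T)[:, :3]

# Illustrative sensor-frame pose of a target: 90 deg yaw, 10 m down-range.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
T = homogeneous(R, [10.0, 0.0, 0.0])
model = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
out = transform_points(T, model)
```

    Because 4x4 transforms compose by matrix multiplication, a chain of relative motions (sensor to vehicle to target) collapses to a single matrix, which is what makes this representation convenient for both simulation and pose-estimation ground truth.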

  10. Quality Assessment and Comparison of Smartphone and Leica C10 Laser Scanner Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, Beril; Lindenbergh, Roderik; Wang, Jinhu

    2016-06-01

    3D urban models are valuable for urban map generation, environment monitoring, safety planning and educational purposes. For 3D measurement of urban structures, generally airborne laser scanning sensors or multi-view satellite images are used as a data source. However, close-range sensors (such as terrestrial laser scanners) and low cost cameras (which can generate point clouds based on photogrammetry) can provide denser sampling of 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive and trained persons are needed to use them for point cloud acquisition. A potential effective 3D modelling can be generated based on a low cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone based 3D model of an example structure with a terrestrial laser scanning point cloud of the structure. This comparison gives us opportunity to discuss the differences in terms of geometrical correctness, as well as the advantages, disadvantages and limitations in data acquisition and processing. We also discuss how smartphone based point clouds can help to solve further problems with 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which do not have color information can be colored using smartphones. The experiments, discussions and scientific findings might be insightful for the future studies in fast, easy and low-cost 3D urban model generation field.

  11. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

    Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach such as Web Ontology Language (OWL), used for formulating the knowledge base and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, and specialists' knowledge of the scene and algorithmic processing.

  12. Interleaved Observation Execution and Rescheduling on Earth Observing Systems

    NASA Technical Reports Server (NTRS)

    Khatib, Lina; Frank, Jeremy; Smith, David; Morris, Robert; Dungan, Jennifer

    2003-01-01

    Observation scheduling for Earth orbiting satellites solves the following problem: given a set of requests for images of the Earth, a set of instruments for acquiring those images distributed on a collection of orbiting satellites, and a set of temporal and resource constraints, generate a set of assignments of instruments and viewing times to those requests that satisfy those constraints. Observation scheduling is often construed as a constrained optimization problem with the objective of maximizing the overall utility of the science data acquired. The utility of an image is typically based on the intrinsic importance of acquiring it (for example, its importance in meeting a mission or science campaign objective) as well as the expected value of the data given current viewing conditions (for example, if the image is occluded by clouds, its value is usually diminished). Currently, science observation scheduling for Earth Observing Systems is done on the ground, for periods covering a day or more. Schedules are uplinked to the satellites and are executed rigorously. An alternative to this scenario is to do some of the decision-making about what images are to be acquired on-board. The principal argument for this capability is that the desirability of making an observation can change dynamically, because of changes in meteorological conditions (e.g. cloud cover), unforeseen events such as fires, floods, or volcanic eruptions, or unexpected changes in satellite or ground station capability. Furthermore, since satellites can only communicate with the ground between 5% to 10% of the time, it may be infeasible to make the desired changes to the schedule on the ground, and uplink the revisions in time for the on-board system to execute them. Examples of scenarios that motivate an on-board capability for revising schedules include the following. First, if a desired visual scene is completely obscured by clouds, then there is little point in taking it. 
In this case, satellite resources, such as power and storage space can be better utilized taking another image that is higher quality. Second, if an unexpected but important event occurs (such as a fire, flood, or volcanic eruption), there may be good reason to take images of it, instead of expending satellite resources on some of the lower priority scheduled observations. Finally, if there is unexpected loss of capability, it may be impossible to carry out the schedule of planned observations. For example, if a ground station goes down temporarily, a satellite may not be able to free up enough storage space to continue with the remaining schedule of observations. This paper describes an approach for interleaving execution of observation schedules with dynamic schedule revision based on changes to the expected utility of the acquired images. We describe the problem in detail, formulate an algorithm for interleaving schedule revision and execution, and discuss refinements to the algorithm based on the need for search efficiency. We summarize with a brief discussion of the tests performed on the system.
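    The kind of on-board revision described above can be sketched as a greedy re-ranking by expected utility; the utility model (priority weighted by the chance the scene is cloud-free) and the request data below are hypothetical.

```python
def revise_schedule(candidates, capacity):
    """Greedy on-board revision: rank requests by expected utility
    (priority * probability the scene is cloud-free), then pack them
    into the remaining resource budget (e.g. power or storage)."""
    ranked = sorted(candidates,
                    key=lambda c: c["priority"] * (1.0 - c["cloud"]),
                    reverse=True)
    plan, used = [], 0.0
    for c in ranked:
        if used + c["cost"] <= capacity:
            plan.append(c["name"])
            used += c["cost"]
    return plan

requests = [
    {"name": "coastal_scene", "priority": 5, "cloud": 0.9, "cost": 2.0},
    {"name": "wildfire",      "priority": 4, "cloud": 0.1, "cost": 2.0},
    {"name": "crop_survey",   "priority": 3, "cloud": 0.2, "cost": 2.0},
]
plan = revise_schedule(requests, capacity=4.0)   # the cloudy scene is dropped
```

    The high-priority but heavily clouded scene loses to the unexpected wildfire, mirroring the rescheduling scenarios discussed above; a flight implementation would add timing constraints and bounded search, as the paper's algorithm does.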

  13. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range

    PubMed Central

    Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming

    2016-01-01

    Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of a suitable sensor on board the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model, using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching that processes the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and generate simulated sensor point cloud data. It also provides the truth pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments are performed; the results demonstrate the algorithm's capability of operating directly on point clouds and handling large pose variations. A field testing experiment is also conducted, and the results show that the proposed method is effective. PMID:27271633
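    One tracking iteration of the Iterative Closest Point idea can be sketched as nearest-neighbour matching followed by the closed-form (Kabsch/SVD) rigid alignment; the cube "model" and the small motion below are invented for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One ICP iteration: match each source point to its nearest target
    point, then solve the best rigid transform in closed form (Kabsch)."""
    _, idx = cKDTree(target).query(source)
    matched = target[idx]
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Toy demo: a cube of model points, and a scan of it under a small rigid motion.
cube = np.array([[x, y, z] for x in (0, 10) for y in (0, 10) for z in (0, 10)], float)
th = 0.01
R0 = np.array([[np.cos(th), -np.sin(th), 0], [np.sin(th), np.cos(th), 0], [0, 0, 1]])
scan = cube @ R0.T + [0.05, 0.0, 0.0]
R, t = icp_step(scan, cube)   # small motion: one step already aligns the scan
```

    With correct correspondences the Kabsch solution is exact in one step; in practice ICP iterates because the nearest-neighbour matches are only approximate for larger motions.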

  14. Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan

    2015-03-01

    Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions of low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of the image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, such as that obtained from low-cost global positioning system and inertial measurement unit sensors.

  15. Comparison of the filtering models for airborne LiDAR data by three classifiers with exploration on model transfer

    NASA Astrophysics Data System (ADS)

    Ma, Hongchao; Cai, Zhan; Zhang, Liang

    2018-01-01

    This paper discusses airborne light detection and ranging (LiDAR) point cloud filtering (a binary classification problem) from the machine learning point of view. We compared three supervised classifiers for point cloud filtering, namely Adaptive Boosting, support vector machine, and random forest (RF). Nineteen features were generated from the raw LiDAR point cloud based on height and other geometric information within a given neighborhood. The test datasets issued by the International Society for Photogrammetry and Remote Sensing (ISPRS) were used to evaluate the performance of the three filtering algorithms; RF showed the best results, with an average total error of 5.50%. The paper also makes a tentative exploration of applying transfer learning theory to point cloud filtering, which, to the authors' knowledge, has not previously been introduced into the LiDAR field. We performed filtering of three datasets from real projects carried out in China with RF models constructed by learning from the 15 ISPRS datasets and then transferred with little to no change of the parameters. Reliable results were achieved, especially in rural areas (overall accuracy of 95.64%), indicating the feasibility of model transfer in the context of point cloud filtering for both easy automation and acceptable accuracy.
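The height-based neighbourhood features driving such filters can be illustrated with a toy sketch. The three features below are a hypothetical subset in the spirit of, but not identical to, the paper's nineteen; names and the brute-force neighbourhood search are my own:

```python
def point_features(points, radius=2.0):
    """Per-point geometric features for ground filtering (illustrative).

    points: list of (x, y, z). Neighbourhoods are found by brute force
    within a horizontal radius."""
    feats = []
    for x, y, z in points:
        neigh = [q[2] for q in points
                 if (q[0]-x)**2 + (q[1]-y)**2 <= radius**2]
        zmin, zmax = min(neigh), max(neigh)
        mean = sum(neigh) / len(neigh)
        feats.append({
            "height_above_min": z - zmin,   # large for roofs/vegetation points
            "height_range": zmax - zmin,    # local relief within the neighbourhood
            "height_dev": z - mean,         # above/below the local average
        })
    return feats
```

A classifier (AdaBoost, SVM, RF) would then be trained on such vectors with ground/non-ground labels.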

  16. Large Scale Ice Water Path and 3-D Ice Water Content

    DOE Data Explorer

    Liu, Guosheng

    2008-01-15

    Cloud ice water concentration is one of the most important, yet most poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM program. In addition to the vertical profiles of temperature, water vapor and condensed water at the model grids, large-scale horizontal advective tendencies of these variables are also required as forcing terms in the single-column models. Observed horizontal advection of condensed water has not been available because the radar/lidar/radiometer observations at the ARM site are single-point measurements and therefore do not provide the horizontal distribution of condensed water. The intention of this product is to provide the large-scale distribution of cloud ice water by merging available surface and satellite measurements. The satellite cloud ice water algorithm uses ARM ground-based measurements as a baseline and produces datasets of 3-D cloud ice water distributions in a 10 deg x 10 deg area near the ARM site. The approach of the study is to expand a (surface) point measurement to a (satellite) areal measurement. That is, the study takes advantage of the high-quality cloud measurements at the ARM site: the cloud characteristics derived from the point measurement are used to guide and constrain the satellite retrieval, and the satellite algorithm is then used to derive the cloud ice water distributions within a 10 deg x 10 deg area centered at the ARM site.

  17. LANDSAT 2 cumulative non-US standard catalog

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The Non-U.S. Standard Catalog lists imagery acquired by LANDSAT 1 and LANDSAT 2 which has been processed and input to the data files during the referenced month. Data such as date acquired, cloud cover, and image quality are given for each scene. The microfilm roll and frame on which the scene may be found are also given.

  18. LANDSAT 3 world standard catalog, 1-30 September 1978. [LANDSAT imagery for September 1978

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Imagery acquired by LANDSAT 3 which was processed and input to the data files during the referenced month is listed. Data such as date acquired, cloud cover, and image quality are given for each scene. The microfilm roll and frame on which the scene may be found are also given.

  19. LANDSAT 2 world standard catalog, 1-30 September 1978. [LANDSAT imagery for September 1978

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Imagery acquired by LANDSAT 2 which was processed and input to the data files during the referenced month is listed. Data such as date acquired, cloud cover, and image quality are given for each scene. The microfilm roll and frame on which the scene may be found are also given.

  20. LANDSAT 1 cumulative US standard catalog, 1976/1977

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The LANDSAT 1 U.S. Cumulative Catalog lists U.S. imagery acquired by LANDSAT 1 which has been processed and input to the data files during the referenced year. Data such as date acquired, cloud cover, and image quality are given for each scene. The microfilm roll and frame on which the scene may be found are also given.

  1. Landsat non-US standard catalog

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The Non-U.S. Standard Catalog lists Non-U.S. imagery acquired by Landsat 1 and 2 which was processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover and image quality are given for each scene. The microfilm roll and frame on which the scene may be found are also given.

  2. Coarse Point Cloud Registration by Egi Matching of Voxel Clusters

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo

    2016-06-01

    Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, several scans are often required to fully cover a scene, so a registration step is needed to transform the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels, and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptors of those voxel clusters are constructed using the significant eigenvectors of each voxel in the cluster. Correspondences between clusters in the source and target data are obtained according to the similarity between their EGI descriptors. The random sample consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. The new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point-to-point distance between the two input point clouds. The two tests presented resulted in mean distances of 7.6 mm and 9.5 mm respectively, which is adequate for fine registration.
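The dimensionality label assigned to each voxel from its PCA eigenvalues can be sketched as follows. The linearity/planarity/sphericity measures are the commonly used ones (Weinmann-style); the exact criterion in the paper may differ:

```python
def dimensionality(eigvals):
    """Classify a voxel from its sorted PCA eigenvalues l1 >= l2 >= l3 > 0.

    Returns 'linear' (edge/pole-like), 'planar' (wall/floor-like) or
    'volumetric' (scatter) depending on which measure dominates."""
    l1, l2, l3 = eigvals
    linearity  = (l1 - l2) / l1   # one dominant direction
    planarity  = (l2 - l3) / l1   # two dominant directions
    sphericity = l3 / l1          # no dominant direction
    return max([("linear", linearity), ("planar", planarity),
                ("volumetric", sphericity)], key=lambda t: t[1])[0]
```

Voxels sharing a label would then be clustered before the EGI descriptors are built.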

  3. Upper-Tropospheric Cloud Ice from IceCube

    NASA Astrophysics Data System (ADS)

    Wu, D. L.

    2017-12-01

    Cloud ice plays important roles in Earth's energy budget and cloud-precipitation processes. Knowledge of global cloud ice and its properties is critical for understanding and quantifying its roles in Earth's atmospheric system, yet it remains a great challenge to measure these variables accurately from space. Submillimeter (submm) wave remote sensing has the capability of penetrating clouds and measuring ice mass and microphysical properties. In particular, 883 GHz is the highest spectral window in the microwave that can be used to fill the sensitivity gap between thermal infrared (IR) and mm-wave sensors in current spaceborne cloud ice observations. IceCube is a cubesat spaceflight demonstration of 883-GHz radiometer technology. Its primary objective is to raise the technology readiness level (TRL) of the 883-GHz cloud radiometer for future Earth science missions. By flying a commercial receiver on a 3U cubesat, IceCube achieved fast-track maturation of space technology, completing its development, integration and testing in 2.5 years. IceCube was successfully delivered to the International Space Station (ISS) in April 2017 and jettisoned from the station in May 2017. The IceCube cloud-ice radiometer (ICIR) has been acquiring data since the jettison, in daytime-only operation. IceCube adopted a simple design without a payload mechanism. It maximizes its use of solar power by spinning the spacecraft continuously about the Sun vector at a rate of 1.2° per second. As a result, the ICIR is operated under limited power (8.6 W without heater) and a widely varying (18°C-28°C) thermal environment. The spinning cubesat also gives ICIR periodic views of the Earth (atmosphere and clouds) and cold space (calibration), from which the first 883-GHz cloud map was obtained. The 883-GHz cloud radiance, sensitive to ice particle scattering, is proportional to the cloud ice amount above 10 km. The ICIR cloud map acquired during June 20-July 2, 2017 shows a clear distribution of the inter-tropical convergence zone (ITCZ), as well as the classic Gill-model pattern over the Western Pacific and Indian monsoon regions. Like the ISS, the coverage of ICIR observations is limited to low-to-mid latitudes. More science results and IceCube experiments with the cubesat operation will be discussed.

  4. Classification of Aerial Photogrammetric 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Becker, C.; Häni, N.; Rosinskaya, E.; d'Angelo, E.; Strecha, C.

    2017-05-01

    We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labelling this kind of data is important for tasks such as environmental modelling, object classification and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on three real-world photogrammetry datasets of varying point densities that were generated with Pix4Dmapper Pro. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than 3 minutes on a desktop computer.

  5. Automatic digital surface model (DSM) generation from aerial imagery data

    NASA Astrophysics Data System (ADS)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide much redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used to reduce the effects of inherent radiometric problems and to optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel-accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales with different land-cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and those derived from the POS.
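The correlation measure at the core of such dense matching can be illustrated with a minimal normalized cross-correlation (NCC) score between two grey-value windows. This is a generic sketch, not the MIG3C formulation:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-size grey-value windows,
    flattened to lists of floats; returns a score in [-1, 1]."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma)**2 for x in a) * sum((y - mb)**2 for y in b))
    return num / den if den else 0.0
```

A matcher slides the template window along the geometrically constrained search range in each image and keeps the position with the maximum NCC; NCC's invariance to linear brightness changes is why it is preferred over a raw sum of squared differences.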

  6. New Hybrid Algorithms for Estimating Tree Stem Diameters at Breast Height Using a Two Dimensional Terrestrial Laser Scanner

    PubMed Central

    Kong, Jianlei; Ding, Xiaokang; Liu, Jinhao; Yan, Lei; Wang, Jianli

    2015-01-01

    In this paper, a new algorithm to improve the accuracy of estimating diameter at breast height (DBH) for tree trunks in forest areas is proposed. First, the information is collected by a two-dimensional terrestrial laser scanner (2DTLS), which emits laser pulses to generate a point cloud. After extraction and filtration, the laser point clusters of the trunks are obtained and optimized by an arithmetic-mean method. Then, an algebraic circle fitting algorithm in polar form is non-linearly optimized by the Levenberg-Marquardt method to form a new hybrid algorithm, which is used to acquire the diameters and positions of the trees. Compared with previous works, the proposed method improves the accuracy of tree diameter estimation significantly and effectively reduces the calculation time. Moreover, the experimental results indicate that the method is stable and suitable even for the most challenging conditions, which has practical significance in improving the operating efficiency of forest harvesters and reducing the risk of accidents. PMID:26147726
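An algebraic circle fit of the kind used to initialize such a hybrid algorithm can be sketched with the classic Kasa least-squares formulation. The paper works in polar form and refines the result with Levenberg-Marquardt; this Cartesian version is only illustrative:

```python
import math

def fit_circle(points):
    """Algebraic (Kasa) circle fit: linear least squares for
    x^2 + y^2 = a*x + b*y + c, giving centre (a/2, b/2) and
    radius sqrt(c + a^2/4 + b^2/4)."""
    # build the 3x3 normal equations A^T A u = A^T v
    ata = [[0.0] * 3 for _ in range(3)]
    atv = [0.0] * 3
    for x, y in points:
        row, v = (x, y, 1.0), x * x + y * y
        for i in range(3):
            atv[i] += row[i] * v
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # Gauss-Jordan elimination with partial pivoting
    m = [ata[i] + [atv[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [m[r][k] - f * m[col][k] for k in range(4)]
    a, b, c = (m[i][3] / m[i][i] for i in range(3))
    return (a / 2, b / 2), math.sqrt(c + a * a / 4 + b * b / 4)
```

The algebraic fit is fast and needs no initial guess, which is exactly why it makes a good starting point for a non-linear geometric refinement.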

  7. Segmentation and classification of road markings using MLS data

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro

    2017-01-01

    Traffic signs are among the most important safety elements in a road network. In particular, road markings provide information about the limits and direction of each road lane, or warn drivers about potential danger. Keeping road markings in optimal condition contributes to better road safety. Mobile Laser Scanning technology can be used for infrastructure inspection and specifically for traffic sign detection and inventory. This paper presents a methodology for the detection and semantic characterization of the most common road markings, namely pedestrian crossings and arrows. The 3D point cloud data acquired by a LYNX Mobile Mapper system are filtered in order to isolate reflective points on the road, and each single element is hierarchically classified using neural networks. State-of-the-art results are obtained for the extraction and classification of the markings, with F-scores of 94% and 96% respectively. Finally, data from the classified markings are exported to a GIS layer, and maintenance criteria based on these data are proposed.

  8. Junocam: Juno's Outreach Camera

    NASA Astrophysics Data System (ADS)

    Hansen, C. J.; Caplinger, M. A.; Ingersoll, A.; Ravine, M. A.; Jensen, E.; Bolton, S.; Orton, G.

    2017-11-01

    Junocam is a wide-angle camera designed to capture the unique polar perspective of Jupiter offered by Juno's polar orbit. Junocam's four-color images include the best spatial resolution ever acquired of Jupiter's cloudtops. Junocam will look for convective clouds and lightning in thunderstorms and derive the heights of the clouds. Junocam will support Juno's radiometer experiment by identifying any unusual atmospheric conditions such as hotspots. Junocam is on the spacecraft explicitly to reach out to the public and share the excitement of space exploration. The public is an essential part of our virtual team: amateur astronomers will supply ground-based images for use in planning, the public will weigh in on which images to acquire, and the amateur image processing community will help process the data.

  9. Studies for the Loss of Atomic and Molecular Species from Io

    NASA Technical Reports Server (NTRS)

    Smyth, William H.

    1998-01-01

    Updated neutral emission rates for electron impact excitation of atomic oxygen and sulfur, based upon the Collisional Radiative Equilibrium (COREQ) model, have been incorporated in the neutral cloud models. An empirical model for the Io plasma torus wake has also been added to the neutral cloud model to describe important enhancements in the neutral emission rates and lifetime rates in this spatial region. New insights into Io's atmosphere and its interaction with the plasma torus are discussed. These insights are based upon an initial comparison of simultaneous Io observations on October 14, 1997: [OI] 6300 Angstrom emissions acquired by ground-based facilities and several ultraviolet emissions acquired by HST/STIS in the form of high-spatial-resolution images of atomic oxygen and sulfur.

  10. Microphysical Processes Affecting the Pinatubo Volcanic Plume

    NASA Technical Reports Server (NTRS)

    Hamill, Patrick; Houben, Howard; Young, Richard; Turco, Richard; Zhao, Jingxia

    1996-01-01

    In this paper we consider microphysical processes which affect the formation of sulfate particles and their size distribution in a dispersing cloud. A model for the dispersion of the Mt. Pinatubo volcanic cloud is described. We then consider a single point in the dispersing cloud and study the effects of nucleation, condensation and coagulation on the time evolution of the particle size distribution at that point.

  11. The structure and phase of cloud tops as observed by polarization lidar

    NASA Technical Reports Server (NTRS)

    Spinhirne, J. D.; Hansen, M. Z.; Simpson, J.

    1983-01-01

    High-resolution observations of the structure of cloud tops have been obtained with a polarization lidar operated from a high-altitude aircraft. Case studies of measurements acquired from cumuliform cloud systems are presented: two from September 1979 observations in the area of Florida and adjacent waters, and a third from the May 1981 CCOPE experiment in southeast Montana. Accurate cloud top height structure and the relative density of hydrometeors are obtained from the lidar return signal intensity. A correlation between the return signal intensity and active updrafts was noted. Thin cirrus overlying developing turrets was observed in some cases. Typical values of the observed backscatter cross section were 0.1-5 (km sr)^-1 for cumulonimbus tops. The depolarization ratio of the lidar signals was a function of the thermodynamic phase of the cloud top areas. An increase of the cloud top depolarization with decreasing temperature was found for temperatures above and below -40 C.

  12. Cloud Properties Derived from Surface-Based Near-Infrared Spectral Transmission

    NASA Technical Reports Server (NTRS)

    Pilewskie, Peter; Twomey, S.; Gore, Warren J. Y. (Technical Monitor)

    1996-01-01

    Surface-based near-infrared cloud spectral transmission measurements from a recent precipitation/cloud physics field study are used to determine cloud physical properties and relate them to other remote sensing and in situ measurements. Asymptotic formulae provide an effective means of closely approximating the qualitative and quantitative behavior of transmission computed by more laborious detailed methods. Relationships derived from the asymptotic formulae are applied to the measured transmission spectra to objectively test the internal consistency of data sets acquired during the field program, and they confirmed the quality of the measurements. These relationships appear to be useful not merely as a quality-control measure, but also as a potentially valuable remote-sensing technique in their own right. Additional benefits of this analysis have been the separation of condensed water (cloud) transmission from water vapor transmission and the development of a method to derive cloud liquid water content.

  13. Gravity Waves Ripple over Marine Stratocumulus Clouds

    NASA Technical Reports Server (NTRS)

    2004-01-01

    In this natural-color image from the Multi-angle Imaging SpectroRadiometer (MISR), a fingerprint-like gravity wave feature occurs over a deck of marine stratocumulus clouds. Similar to the ripples that occur when a pebble is thrown into a still pond, such 'gravity waves' sometimes appear when the relatively stable and stratified air masses associated with stratocumulus cloud layers are disturbed by a vertical trigger from the underlying terrain, or by a thunderstorm updraft or some other vertical wind shear. The stratocumulus cellular clouds that underlie the wave feature are associated with sinking air that is strongly cooled at the level of the cloud-tops -- such clouds are common over mid-latitude oceans when the air is unperturbed by cyclonic or frontal activity. This image is centered over the Indian Ocean (at about 38.9° South, 80.6° East), and was acquired on October 29, 2003.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82° north and 82° south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 20545. The image covers an area of 245 kilometers x 378 kilometers, and uses data from blocks 121 to 122 within World Reference System-2 path 134.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  14. 3D local feature BKD to extract road information from mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Dong, Zhen; Liang, Fuxun; Li, Bijun; Peng, Xiangyang

    2017-08-01

    Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation, and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously affected by varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components that encode the shape and intensity information of the 3D point clouds; the resulting features are fed to a random forest classifier to extract curbs and markings on the road. These are then used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, showing that the BKD is accurate and robust against varying point density and noise.
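The two BKD components can be sketched in one dimension: a Gaussian kernel density estimate over a point attribute, followed by binarization. The mean-thresholding rule shown is an assumption of this sketch; the paper's exact scheme may differ:

```python
import math

def gaussian_kde_grid(values, grid, h=0.2):
    """1-D Gaussian kernel density estimate, sampled on a grid of positions."""
    n = len(values)
    norm = n * h * math.sqrt(2 * math.pi)
    return [sum(math.exp(-((g - v) / h) ** 2 / 2) for v in values) / norm
            for g in grid]

def binarize(densities):
    """BKD-style binarization: emit 1 where the density exceeds the mean
    of the descriptor (illustrative thresholding rule)."""
    m = sum(densities) / len(densities)
    return [int(d > m) for d in densities]
```

Binary descriptors like this are compact and can be compared with cheap Hamming distances, which helps a random forest (or matcher) cope with millions of MLS points.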

  15. Hierarchical Regularization of Polygons for Photogrammetric Point Clouds of Oblique Images

    NASA Astrophysics Data System (ADS)

    Xie, L.; Hu, H.; Zhu, Q.; Wu, B.; Zhang, Y.

    2017-05-01

    Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines; these are laden with topological defects, free of semantic information, and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of 2D polygons from MVS point clouds is a non-trivial task, given that the boundaries of the detected planes are zigzagged and regularities such as parallelism and orthogonality cannot be preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for the photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary point extraction, e.g. using alpha shapes, the local level consolidates the original points by refining their orientation and position using linear priors. The points are then grouped into local segments by forward searching. In the global level, regularities are enforced through a labeling process, which encourages segments to share the same label, where segments with the same label are parallel or orthogonal to each other. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.
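The parallel/orthogonal regularities enforced in the global level can be illustrated with a toy angle-snapping routine. The paper formulates this as an MRF labeling problem solved globally; this sketch simply snaps each segment orientation to the dominant direction or one of its orthogonals when it is close enough:

```python
import math

def regularize_angles(angles, tol=math.radians(10)):
    """Snap segment orientations (radians) to the dominant direction or a
    90-degree multiple of it when within `tol`; leave outliers unchanged.

    Toy assumption: the first angle in the list defines the dominant
    direction (in practice the longest segment would)."""
    dom = angles[0]
    out = []
    for a in angles:
        # candidate directions: dominant direction rotated by k * 90 degrees
        best = min((dom + k * math.pi / 2 for k in range(-2, 3)),
                   key=lambda c: abs(a - c))
        out.append(best if abs(a - best) <= tol else a)
    return out
```

An MRF adds what this sketch lacks: a pairwise smoothness term so that the snapping decisions of neighbouring segments support each other instead of being made independently.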

  16. Quantifying the radiative and microphysical impacts of fire aerosols on cloud dynamics in the tropics using temporally offset satellite observations

    NASA Astrophysics Data System (ADS)

    Tosca, M. G.; Diner, D. J.; Garay, M. J.; Kalashnikova, O.

    2013-12-01

    Anthropogenic fires in Southeast Asia and Central America emit smoke that affects cloud dynamics, meteorology, and climate. We measured the cloud response to direct and indirect forcing from biomass burning aerosols using aerosol retrievals from the Multi-angle Imaging SpectroRadiometer (MISR) and non-synchronous cloud retrievals from the MODerate resolution Imaging Spectroradiometer (MODIS) from collocated morning and afternoon overpasses. Level 2 data from thirty-one individual scenes acquired between 2006 and 2010 were used to quantify changes in cloud fraction, cloud droplet size, cloud optical depth and cloud top temperature from morning (10:30am local time) to afternoon (1:30pm local time) in the presence of varying aerosol burdens. We accounted for large-scale meteorological differences between scenes by normalizing observed changes to the mean difference per individual scene. Elevated AODs reduced cloud fraction and cloud droplet size and increased cloud optical depths in both Southeast Asia and Central America. In mostly cloudy regions, aerosols significantly reduced cloud fraction and cloud droplet sizes, but in clear skies, cloud fraction, cloud optical thickness and cloud droplet sizes increased. In clouds with vertical development, aerosols reduced cloud fraction via semi-direct effects but spurred cloud growth via indirect effects. These results imply a positive feedback loop between anthropogenic burning and cloudiness in both Central America and Southeast Asia, and are consistent with previous studies linking smoke aerosols to both cloud reduction and convective invigoration.

  17. Multiview 3D sensing and analysis for high quality point cloud reconstruction

    NASA Astrophysics Data System (ADS)

    Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard

    2018-04-01

    Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial, and the acquisition of high-quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high-fidelity 3D point cloud generation using low-cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real time by exploiting an efficient composition of several filtering methods, including Radius Outlier Removal (ROR), Weighted Median filtering (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
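Radius Outlier Removal, the first of the filters listed, can be sketched in a few lines. This is a brute-force toy version; real implementations (e.g. in PCL) use spatial indexing to avoid the quadratic scan:

```python
def radius_outlier_removal(points, radius=0.1, min_neighbors=2):
    """Keep only points that have at least `min_neighbors` other points
    within `radius` (Euclidean). points: list of (x, y, z) tuples."""
    r2 = radius * radius
    def n_neigh(p):
        return sum(1 for q in points if q is not p and
                   sum((a - b) ** 2 for a, b in zip(p, q)) <= r2)
    return [p for p in points if n_neigh(p) >= min_neighbors]
```

Isolated returns (sensor noise, interference between the Kinect units) have few neighbours and are dropped, while dense surface points survive.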

  18. Performance testing of 3D point cloud software

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-10-01

    LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available on the market, including open-source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VRMesh, AutoCAD Civil 3D and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in point cloud loading time and CPU usage. However, it is not as strong as the commercial suites in the working-set and commit-size tests.
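A minimal loading-time and memory harness in the spirit of these tests might look as follows, using only the Python standard library. The original study measured OS-level working set and commit size; this sketch approximates memory cost with Python heap statistics from `tracemalloc`:

```python
import time
import tracemalloc

def profile_load(load_fn, *args):
    """Run a point-cloud loading function and report (result, wall-clock
    seconds, peak Python heap bytes allocated during the call)."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = load_fn(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak
```

Running the same harness over identical datasets in each candidate tool (or API) gives the comparable load-time figures the methodology calls for.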

  19. Triangulation Error Analysis for the Barium Ion Cloud Experiment. M.S. Thesis - North Carolina State Univ.

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1973-01-01

    The triangulation method developed specifically for the Barium Ion Cloud Project is discussed. Expressions for the four displacement errors, the three slope errors, and the curvature error in the triangulation solution due to a probable error in the lines-of-sight from the observation stations to points on the cloud are derived. The triangulation method is then used to determine the effect of the following on these errors in the solution: the number and location of the stations, the observation duration, east-west cloud drift, the number of input data points, and the addition of extra cameras to one of the stations. The pointing displacement errors and the pointing slope errors are compared. The displacement errors in the solution due to a probable error in the position of a moving station, plus the weighting factors for the data from the moving station, are also determined.

  20. 3D reconstruction of wooden member of ancient architecture from point clouds

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiju; Wang, Yanmin; Li, Deren; Zhao, Jun; Song, Daixue

    2006-10-01

    This paper presents a 3D reconstruction method for modeling wooden members of ancient architecture from point clouds, based on an improved deformable model. Three steps are taken to recover the shape of a wooden member. Firstly, the Hessian matrix is adopted to compute the axis of the wooden member. Secondly, an initial model of the wooden member is built from contours orthogonal to its axis. Thirdly, an accurate model is obtained through the coupling effect between the initial model and the point cloud of the wooden member, according to the theory of the improved deformable model. Every step and algorithm is studied and described in the paper. Using point clouds captured from the Forbidden City of China, a shaft member and a beam member are taken as examples to test the proposed method. Results show the efficiency and robustness of the method in modeling the wooden members of ancient architecture.

  1. Carbon Transfers and Emissions Following Harvest and Pile Burning in Coastal Douglas-fir Forests Determined from Analysis of High-Resolution UAV Imagery and Point Clouds and from Field Surveys.

    NASA Astrophysics Data System (ADS)

    Trofymow, J. A.; Gougeon, F.; Kelley, J. W.

    2017-12-01

    Forest carbon (C) models require knowledge of C transfers due to intense disturbances such as fire, harvest, and slash burning. In such events, live trees die and C is transferred to detritus or exported as round wood. With burning, live and detrital C is lost as emissions. Burning can be incomplete, leaving wood charred and scattered or in unburnt rings and piles. For harvests, all round wood volume is routinely measured, while dispersed and piled residue volumes are typically assessed in field surveys, scaled to a block. Recently, geospatial methods have been used to determine, for an entire block, piled residues using LiDAR or image point clouds (PC) and dispersed residues by analysis of high-resolution imagery. Second-growth Douglas-fir forests on eastern Vancouver Island were examined: 4 blocks at Oyster River (OR) and 2 at Northwest Bay (NB). OR blocks were cut winter 2011 and piled spring 2011; a field survey, aerial RGB imagery and a LiDAR PC were acquired fall 2011; piles were burned, burn residues surveyed, and post-burn aerial RGB imagery acquired 2012. NB blocks were cut fall 2014 and piled spring 2015; a field survey, UAV RGB imagery and an image PC were acquired summer 2015; piles were burned and burn residues surveyed spring 2016, and post-burn UAV RGB imagery and a PC were acquired fall 2016. Volume-to-biomass conversion used survey species proportions and wood density. At OR, round wood was 261.7 SE 13.1, firewood 1.7 SE 0.3, and dispersed residue by survey 13.8 SE 3.6 tonnes dry mass (t dm) ha-1. Piled residues were 8.2 SE 0.9 from pile surveys vs. 25.0 SE 5.9 t dm ha-1 from LiDAR PC bulk pile volumes and packing ratios. Post-burn, piles lost 5.8 SE 0.5 from survey of burn residues vs. 18.2 SE 4.7 t dm ha-1 from pile volume changes using the 2011 LiDAR PC and 2012 imagery. The percentage of initial merchantable biomass exported as round & fire wood, remaining as dispersed & piled residue, and lost to burning was, respectively, 92.5%, 5.5% and 2% using only field methods vs. 87%, 7% and 6% from dispersed residue surveys and LiDAR PC pile volumes. At NB, preliminary analysis shows that the post-burn difference in 2015-to-2016 UAV PC pile volumes was similar to that obtained using the 2015 UAV PC pile volume and the 2016 orthophoto pile area burned, suggesting the two geospatial methods are comparable. Comparisons will be made for transfers in all 6 blocks using only field survey or geospatial methods.
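
The volume-to-biomass conversion described above is simple arithmetic: a bulk pile volume is reduced to solid-wood volume by a packing ratio, then converted to dry mass with a wood density and normalized by block area. A minimal sketch (the function name and all numbers are illustrative, not values from the study):

```python
def residue_biomass(bulk_volume_m3, packing_ratio, wood_density, area_ha):
    """Convert a bulk pile volume (from PC pile surfaces) to t dm ha-1.

    bulk volume x packing ratio -> solid-wood volume; x wood density
    (t dm per m3) -> tonnes dry mass; / block area -> per-hectare value."""
    return bulk_volume_m3 * packing_ratio * wood_density / area_ha

# Illustrative numbers only (not values from the study): 2000 m3 of piled
# residue, packing ratio 0.25, wood density 0.45 t/m3, 10 ha block.
t_dm_per_ha = residue_biomass(2000.0, 0.25, 0.45, 10.0)
```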

  2. Comparison of photogrammetric point clouds with BIM building elements for construction progress monitoring

    NASA Astrophysics Data System (ADS)

    Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U.

    2014-08-01

    For construction progress monitoring, a planned state of the construction at a certain time (as-planned) has to be compared to the actual state (as-built). The as-planned state is derived from a building information model (BIM), which contains the geometry of the building and the construction schedule. In this paper we introduce an approach for the generation of an as-built point cloud by photogrammetry. Since images on a construction site cannot be taken from every position that would be desirable, we use a structure-from-motion process together with control points to create a scaled point cloud in a consistent coordinate system. Subsequently, this point cloud is used for an as-built vs. as-planned comparison. For that purpose, voxels of an octree are marked as occupied, free or unknown by raycasting based on the triangulated points and the camera positions. This makes it possible to identify building parts that do not exist. To verify the existence of building parts, a second test based on the points in front of and behind the as-planned model planes is performed. The proposed procedure is tested on an inner-city construction site under real conditions.
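
The occupied/free/unknown voxel labelling by raycasting can be sketched as follows (a simplified illustration: a plain dictionary stands in for the octree, rays are sampled at half-voxel steps rather than traversed exactly, and all names are hypothetical):

```python
import numpy as np

FREE, OCCUPIED = 0, 1   # any voxel never touched by a ray stays "unknown"

def carve(points, camera, voxel_size=0.5):
    """Label voxels by casting a ray from the camera to each triangulated
    point: voxels the ray passes through are free, the end voxel occupied."""
    state = {}
    camera = np.asarray(camera, float)
    for p in np.asarray(points, float):
        ray = p - camera
        n_steps = int(np.linalg.norm(ray) / (voxel_size / 2)) + 1
        for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
            v = tuple(np.floor((camera + t * ray) / voxel_size).astype(int))
            if state.get(v) != OCCUPIED:
                state[v] = FREE            # ray traverses empty space here
        end = tuple(np.floor(p / voxel_size).astype(int))
        state[end] = OCCUPIED              # the measured surface point
    return state

camera = np.array([0.0, 0.0, 0.0])
grid = carve([np.array([2.0, 0.0, 0.0])], camera)
```

A planned building part whose voxels end up free can then be flagged as not (yet) existing, while unknown voxels stay inconclusive.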

  3. Localization of Pathology on Complex Architecture Building Surfaces

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, A. A.; Lakakis, K. N.; Mouza, V. K.

    2017-02-01

    The technology of 3D laser scanning is considered one of the most common methods for heritage documentation. The point clouds that are produced provide information of high detail, both geometric and thematic. Various studies examine techniques for the best exploitation of this information. In this study, an algorithm for localizing pathology, such as cracks and fissures, on complex building surfaces is tested. The algorithm makes use of the points' position in the point cloud and tries to separate them into two pattern groups: pathology and non-pathology. The extraction of the geometric information used for recognizing the pattern of the points is accomplished via Principal Component Analysis (PCA) in user-specified neighborhoods across the whole point cloud. The implementation of PCA leads to the definition of the normal vector at each point of the cloud. Two tests that operate separately examine both local and global geometric criteria among the points and conclude which of them should be categorized as pathology. The proposed algorithm was tested on parts of the Gazi Evrenos Baths masonry, located in the city of Giannitsa in northern Greece.
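
The PCA step, which yields the normal vector at a point from its user-specified neighborhood, can be sketched with numpy (a minimal illustration; neighborhood selection is assumed to have been done already, and `estimate_normal` is an illustrative name):

```python
import numpy as np

def estimate_normal(neighborhood):
    """Estimate the surface normal of a point from its neighborhood via PCA.

    The normal is the eigenvector of the neighborhood's covariance matrix
    associated with the smallest eigenvalue (direction of least variance)."""
    pts = np.asarray(neighborhood, dtype=float)
    centered = pts - pts.mean(axis=0)          # move centroid to origin
    cov = centered.T @ centered / len(pts)     # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    return eigvecs[:, 0]                       # smallest-variance direction

# Points scattered on the z = 0 plane: the normal should be (0, 0, +/-1).
rng = np.random.default_rng(0)
patch = np.column_stack([rng.uniform(-1, 1, 50),
                         rng.uniform(-1, 1, 50),
                         np.zeros(50)])
n = estimate_normal(patch)
```

Deviations of these normals from their local or global trend are then what the two pathology tests evaluate.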

  4. Exploring topographic methods for monitoring morphological changes in mountain channels of different size and slope

    NASA Astrophysics Data System (ADS)

    Theule, Joshua; Bertoldi, Gabriele; Comiti, Francesco; Macconi, Pierpaolo; Mazzorana, Bruno

    2015-04-01

    High resolution digital elevation models (DEM) can easily be obtained using either laser scanning technology or photogrammetry with structure from motion (SFM). The scale, resolution, and accuracy can vary according to how the data is acquired, such as by helicopter, drone, or extendable pole. In the Autonomous Province of Bozen-Bolzano (Northern Italy), we had the opportunity to compare several of these techniques at different scales in mountain streams ranging from low-gradient braided rivers to steep debris flow channels. The main objective is to develop protocols for efficient monitoring of morphologic changes in different parts of the river systems. For SFM methods, we used the software "Photoscan Professional" (Agisoft) to generate densified point clouds. Both artificial and natural targets were used to georeference them. In some cases, targets were not even necessary and point clouds could be aligned with older point clouds by using the iterative closest point algorithm in the freeware "CloudCompare". At the Mareit/Mareta River, a restored braided river, an airborne laser scan survey (2011) was compared to an SFM DEM derived from a helicopter photo survey (2014) carried out (by the Autonomous Province of Bolzano) at approximately 100 m above ground. Photogrammetric point clouds had an alignment error of 1.5 cm and three times more data coverage than laser scanning. Indeed, the large spacing and clustering of the 2011 ALS swaths led to areas of no data when a 10-cm grid was developed. In the Gadria basin, a debris flow monitoring catchment, we used a sediment retention basin to compare debris flow volumes resulting from i) a drone survey (by the "Mavtech" company) at 10 m above ground (with a GoPro camera), ii) a 5-m pole-mounted camera (with a Canon EOS 700D) and iii) a 3-m pole-mounted camera (with a GoPro Hero Silver 3+) with iv) a TLS survey.
As the drone had limited load capacity (especially at high elevations), we used the lightweight GoPro Hero 3+, but due to its low image quality and the low survey elevations, more reference points were needed, which became impractical. In contrast, the TLS survey was heavily influenced by the shadowing of the rough surfaces. Even though the pole-mounted camera is lower-technology, it has proven to be accurate (< 2 cm RMSE), with better data coverage than the TLS due to the aerial perspective. Furthermore, the pole-mounted camera is the most field-efficient because it requires only one person (with little training), has few weather limitations, and acquires data five times faster than TLS surveys. A shorter pole with a GoPro camera allows faster and easier surveys, but with the drawback of less precision and more noise. We easily obtained up to 6 DEMs of difference (DoD) within one year in active channel reaches and gullies in the Gadria catchment. This allowed us to capture morphologic changes after important debris flow events, bedload transport, and bank erosion. DoDs also allowed us to monitor damage to structures due to erosional and depositional processes on alluvial fans. Our study highlights the importance of the bird's-eye view, which can be easily obtained by low-cost SFM photogrammetry.
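
A DEM of difference (DoD) as used above is simply the cell-wise subtraction of two co-registered DEMs, usually thresholded by a level of detection before volumes are summed. A hedged numpy sketch (grid size, cell size and threshold are illustrative, not values from the study):

```python
import numpy as np

def dem_of_difference(dem_new, dem_old, cell_size=0.1, lod=0.02):
    """DEM of Difference (DoD): cell-wise elevation change plus the net
    volume change. Changes below the level of detection (lod, metres)
    are zeroed to suppress survey noise."""
    dod = np.asarray(dem_new, float) - np.asarray(dem_old, float)
    dod[np.abs(dod) < lod] = 0.0
    net_volume = dod.sum() * cell_size ** 2   # m3 over the grid
    return dod, net_volume

# Toy grids: uniform 0.5 m of deposition on a 3x3 patch of 10 cm cells.
old = np.zeros((3, 3))
new = np.full((3, 3), 0.5)
dod, net = dem_of_difference(new, old)
```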

  5. Error reduction in three-dimensional metrology combining optical and touch probe data

    NASA Astrophysics Data System (ADS)

    Gerde, Janice R.; Christens-Barry, William A.

    2010-08-01

    Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample's outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data obtained using a touch probe device, a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The location of the defined parting line is sensitive to the polynomial used to fit the experimental data, and extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three error sources are considered: measurement error in the point cloud coordinates; error from fitting a polynomial surface to a point cloud and then extrapolating beyond the data set; and error from the reference frame transformation. These error sources can influence the calculated surface areas. We describe experiments to assess error magnitude, the sensitivity of calculated results to these errors, and ways of minimizing error impact on calculated quantities. Ultimately, we must ensure that statistical error from these procedures is minimized and within acceptance criteria.
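
Fitting a polynomial surface to the insole point cloud and extrapolating it beyond the measured region can be sketched as a linear least-squares problem (a minimal illustration with a quadratic surface; the function names and synthetic data are illustrative):

```python
import numpy as np

def fit_quadratic_surface(xyz):
    """Least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 to
    probe points, so the surface can be evaluated beyond the measured region."""
    x, y, z = np.asarray(xyz, float).T
    A = np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def eval_surface(c, x, y):
    return c[0] + c[1] * x + c[2] * y + c[3] * x ** 2 + c[4] * x * y + c[5] * y ** 2

# Synthetic "insole" samples drawn from z = 1 + 0.1 x^2, then evaluated
# outside the sampled range to mimic extrapolation toward the ESAU mesh.
pts = [(x, y, 1 + 0.1 * x ** 2) for x in range(-3, 4) for y in range(-3, 4)]
c = fit_quadratic_surface(pts)
z_extrap = eval_surface(c, 5.0, 0.0)
```

The sensitivity the abstract warns about shows up here directly: small changes in `coeffs` are amplified by the squared terms as the evaluation point moves beyond the fitted data.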

  6. Photogrammetric DSM denoising

    NASA Astrophysics Data System (ADS)

    Nex, F.; Gerke, M.

    2014-08-01

    Image matching techniques can nowadays provide very dense point clouds and are often considered a valid alternative to LiDAR point clouds. However, photogrammetric point clouds are often characterized by a higher level of random noise compared to LiDAR data and by the presence of large outliers. These problems constitute a limitation in the practical use of photogrammetric data for many applications, and an effective way to enhance the generated point cloud has yet to be found. In this paper we concentrate on the restoration of Digital Surface Models (DSM) computed from dense image matching point clouds. A photogrammetric DSM, i.e. a 2.5D representation of the surface, is still one of the major products derived from point clouds. Four different algorithms devoted to DSM denoising are presented: a standard median filter approach, a bilateral filter, a variational approach (TGV: Total Generalized Variation), as well as a newly developed algorithm, which is embedded into a Markov Random Field (MRF) framework and optimized through graph-cuts. The ability of each algorithm to recover the original DSM has been quantitatively evaluated. To this end, a synthetic DSM has been generated and different typologies of noise have been added to mimic the typical errors of photogrammetric DSMs. The evaluation reveals that standard filters like the median and edge-preserving smoothing through a bilateral filter cannot sufficiently remove the typical errors occurring in a photogrammetric DSM. The TGV-based approach removes random noise much better, but large areas with outliers remain. Our own method, which explicitly models the degradation properties of those DSMs, outperforms the others in all aspects.
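
The simplest of the four approaches, the median filter, can be sketched in a few lines (a pure-numpy illustration rather than a production implementation; it shows how a single gross outlier is removed while the surrounding surface is preserved, but also why, per the evaluation above, it cannot handle large outlier areas):

```python
import numpy as np

def median_filter_dsm(dsm, k=3):
    """k x k median filter for a DSM grid, implemented with a sliding
    window; edges are handled by edge-padding."""
    pad = k // 2
    padded = np.pad(dsm, pad, mode="edge")
    out = np.empty(dsm.shape, dtype=float)
    h, w = dsm.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

dsm = np.zeros((20, 20))
dsm[10, 10] = 50.0                     # a single gross outlier (blunder)
denoised = median_filter_dsm(dsm)
```

An outlier patch larger than half the window survives the median unchanged, which is the failure mode motivating the MRF-based method.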

  7. Scan-to-BIM Output Validation: Towards a Standardized Geometric Quality Assessment of Building Information Models Based on Point Clouds

    NASA Astrophysics Data System (ADS)

    Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.

    2017-11-01

    The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.
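
The micro-scale deviation analysis per BIM object boils down to signed point-to-plane distances, with a maximum-deviation cutoff separating occluded or foreign points from those classified against the LOA bands. A hedged sketch (a planar BIM surface is assumed; names and the 5 cm cutoff are illustrative):

```python
import numpy as np

def point_to_plane_deviations(points, plane_point, plane_normal, max_dev=0.05):
    """Signed point-to-plane distances for points matched to one BIM surface.

    Points farther than max_dev are flagged (occluded zones / foreign
    objects) instead of being classified into an accuracy level."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    d = (np.asarray(points, float) - np.asarray(plane_point, float)) @ n
    return d, np.abs(d) <= max_dev

# Three points against the z = 0 plane: two within 5 cm, one outlier.
pts = np.array([[0.0, 0.0, 0.01], [1.0, 2.0, -0.02], [0.5, 0.5, 0.30]])
dev, keep = point_to_plane_deviations(pts, [0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```

The retained deviations can be binned into LOA ranges and written back to the BIM object as parameters, as the abstract describes.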

  8. High spatial resolution mapping of folds and fractures using Unmanned Aerial Vehicle (UAV) photogrammetry

    NASA Astrophysics Data System (ADS)

    Cruden, A. R.; Vollgger, S.

    2016-12-01

    The emerging capability of UAV photogrammetry combines a simple and cost-effective method of acquiring digital aerial images with advanced computer vision algorithms that compute spatial datasets from a sequence of overlapping digital photographs taken from various viewpoints. Depending on flight altitude and camera setup, sub-centimeter spatial resolution orthophotographs and textured dense point clouds can be achieved. Orientation data for detailed structural analysis can be collected by digitally mapping such high-resolution spatial datasets in a fraction of the time, and with higher fidelity, compared to traditional mapping techniques. Here we describe a photogrammetric workflow applied to a structural study of folds and fractures within alternating layers of sandstone and mudstone at a coastal outcrop in SE Australia. We surveyed this location using a downward-looking digital camera mounted on a commercially available multi-rotor UAV that autonomously followed waypoints at a set altitude and speed to ensure sufficient image overlap, minimal motion blur and an appropriate resolution. The use of surveyed ground control points allowed us to produce a geo-referenced 3D point cloud and an orthophotograph from hundreds of digital images at a spatial resolution < 10 mm per pixel with cm-scale location accuracy. Orientation data of brittle and ductile structures were semi-automatically extracted from these high-resolution datasets using open-source software. This resulted in an extensive and statistically relevant orientation dataset that was used to 1) interpret the progressive development of folds and faults in the region, and 2) generate a 3D structural model that underlines the complex internal structure of the outcrop and quantifies spatial variations in fold geometries. Overall, our work highlights how UAV photogrammetry can contribute new insights to structural analysis.

  9. UAV Photogrammetric Solution Using a Raspberry Pi Camera Module and Smart Devices: Test and Results

    NASA Astrophysics Data System (ADS)

    Piras, M.; Grasso, N.; Jabbar, A. Abdul

    2017-08-01

    Nowadays, smart technologies are an important part of our actions and life, both in indoor and outdoor environments. Several smart devices are very easy to set up, can be integrated and embedded with other sensors, and have a very low cost. The Raspberry Pi allows the installation of an internal camera, the Raspberry Pi Camera Module, available in both RGB and NIR versions. The advantages of this system are its limited cost (< 60 euro), light weight, and simplicity of use and embedding. This paper describes research in which a Raspberry Pi with the Camera Module was installed on a UAV hexacopter based on the ArduCopter system, with the purpose of collecting pictures for photogrammetric applications. Firstly, the system was tested with the aim of verifying the performance of the RPi camera in terms of frames per second/resolution and its power requirements. Moreover, a GNSS receiver (Ublox M8T) was installed and connected to the Raspberry platform in order to collect the real-time position and the raw data, for data processing and to define the time reference. The IMU was also tested to assess the impact of UAV rotor noise on different sensors such as the accelerometer, gyroscope and magnetometer. A comparison of the achieved accuracy on some check points of the point clouds obtained by the camera is also reported, in order to analyse in more depth the main discrepancies in the generated point cloud and the potential of the proposed approach. In this contribution, the assembly of the system is described; in particular, the dataset acquired and the results obtained are analysed.

  10. From Particles and Point Clouds to Voxel Models: High Resolution Modeling of Dynamic Landscapes in Open Source GIS

    NASA Astrophysics Data System (ADS)

    Mitasova, H.; Hardin, E. J.; Kratochvilova, A.; Landa, M.

    2012-12-01

    Multitemporal data acquired by modern mapping technologies provide unique insights into processes driving land surface dynamics. These high resolution data also offer an opportunity to improve the theoretical foundations and accuracy of process-based simulations of evolving landforms. We discuss the development of a new generation of visualization and analytics tools for GRASS GIS designed for 3D multitemporal data from repeated lidar surveys and from landscape process simulations. We focus on data and simulation methods that are based on point sampling of continuous fields and lead to representation of evolving surfaces as series of raster map layers or voxel models. For multitemporal lidar data we present workflows that combine open source point cloud processing tools with GRASS GIS and custom Python scripts to model and analyze the dynamics of coastal topography (Figure 1), and we outline the development of a coastal analysis toolbox. The simulations focus on the particle sampling method for solving continuity equations and its application to geospatial modeling of landscape processes. In addition to the water and sediment transport models already implemented in GIS, the new capabilities under development combine OpenFOAM for wind shear stress simulation with a new module for aeolian sand transport and dune evolution simulations. Comparison of observed dynamics with the results of simulations is supported by a new, integrated 2D and 3D visualization interface that provides highly interactive and intuitive access to the redesigned and enhanced visualization tools. Several case studies illustrate the presented methods and tools, demonstrate the power of workflows built with FOSS, and highlight their interoperability. Figure 1. Isosurfaces representing the evolution of the shoreline and a z=4.5 m contour between the years 1997-2011 at Cape Hatteras, NC, extracted from a voxel model derived from a series of lidar-based DEMs.

  11. UAVs to Assess the Evolution of Embryo Dunes

    NASA Astrophysics Data System (ADS)

    Taddia, Y.; Corbau, C.; Zambello, E.; Russo, V.; Simeoni, U.; Russo, P.; Pellegrinelli, A.

    2017-08-01

    The balance of a coastal environment is particularly complex: the continuous formation of dunes, their destruction as a result of violent storms, the growth of vegetation and the consequent growth of the dunes themselves are phenomena that significantly affect this balance. This work presents an approach to the long-term monitoring of a complex dune system by means of Unmanned Aerial Vehicles (UAVs). Four surveys were carried out between November 2015 and November 2016. Aerial photogrammetric data were acquired during flights by a DJI Phantom 2 and a DJI Phantom 3 with cameras in a nadiral arrangement. GNSS receivers in Network Real Time Kinematic (NRTK) mode were used to frame the models in the European Terrestrial Reference System. Processing of the captured images consisted of the reconstruction of a three-dimensional model using the principles of Structure from Motion (SfM). Particular care was necessary due to the vegetation: filtering of the dense cloud, mainly based on slope detection, was performed to minimize this issue. The final products of the SfM approach were Digital Elevation Models (DEMs) of the sandy coastal environment. Each model was validated by comparison with specially surveyed points. Other analyses were also performed, such as cross sections and the computation of elevation variations over time. The use of digital photogrammetry by UAVs is particularly reliable: fast acquisition of the images, reconstruction of high-density point clouds, high resolution of the final elevation models, as well as flexibility, low cost and accuracy comparable with other available techniques.

  12. Utilizing the Iterative Closest Point (ICP) algorithm for enhanced registration of high resolution surface models - more than a simple black-box application

    NASA Astrophysics Data System (ADS)

    Stöcker, Claudia; Eltner, Anette

    2016-04-01

    Advances in computer vision and digital photogrammetry (i.e. structure from motion) allow for fast and flexible high resolution data supply. Within geoscience applications, and especially in the field of small-scale surface topography, high resolution digital terrain models and dense 3D point clouds are valuable data sources for capturing actual states as well as for multi-temporal studies. However, there are still some limitations regarding robust registration and accuracy demands (e.g. systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Therefore, post-processing of 3D point clouds can greatly enhance data quality. In this context the Iterative Closest Point (ICP) algorithm represents an alignment tool which iteratively minimizes distances between corresponding points within two datasets. Even though the tool is widely used, it is often applied as a black-box application within 3D data post-processing for surface reconstruction. Aiming for precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm, including the sub-steps of point selection, point matching, weighting, rejection, error metric and minimization. To this end, an agriculturally utilized field was investigated simultaneously by terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors two times (once covered with sparse vegetation and once bare soil). Due to the different perspectives, the two data sets differ in terms of shadowed areas and thus gaps, so that data merging would provide a more consistent surface reconstruction. Although photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset relative to the TLS point cloud. In order to achieve the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested.
Statistical analyses of the results show that the final success of registration, and therefore data quality, depends particularly on the parameterization and the choice of error metric, especially for erroneous data sets as in the case of sparse vegetation cover. Here, the point-to-point metric is more sensitive to data "noise" than the point-to-plane metric, which results in considerably higher cloud-to-cloud distances. In conclusion, given the accuracy demands of high resolution surface reconstruction and the fact that ground control surveys can reach their limits both in time expenditure and terrain accessibility, the ICP algorithm represents a great tool to refine a rough initial alignment. The different variants of the registration modules allow for individual application according to the quality of the input data.
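
A minimal point-to-point ICP, showing the sub-steps of point matching, error metric and minimization discussed above, can be sketched as follows (brute-force nearest-neighbour matching and a closed-form SVD/Kabsch solve; a sketch suitable only for small clouds, with all names illustrative):

```python
import numpy as np

def icp_point_to_point(src, dst, n_iter=20):
    """Minimal point-to-point ICP: match each source point to its nearest
    target point, then solve the rigid transform in closed form (Kabsch/SVD).
    Brute-force matching, so only suitable for small clouds."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    dim = src.shape[1]
    R_total, t_total = np.eye(dim), np.zeros(dim)
    for _ in range(n_iter):
        # point selection + matching: nearest neighbour in the target cloud
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        # minimization: closed-form rigid alignment of the matched pairs
        mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.eye(dim)
        D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ D @ U.T                  # reflection-safe rotation
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return src, R_total, t_total

# A cloud offset by a small translation should be recovered almost exactly.
rng = np.random.default_rng(2)
target = rng.uniform(-1.0, 1.0, (100, 3))
shifted = target + np.array([0.05, -0.03, 0.02])
aligned, R, t = icp_point_to_point(shifted, target)
```

Swapping the squared point-to-point residual for a point-to-plane residual (distances along the target normals) changes only the inner minimization step, which is exactly the kind of variant the study compares.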

  13. LANDSAT US standard catalog, 1-31 March 1976

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The U.S. Standard Catalog lists U.S. imagery acquired by LANDSAT 1 and LANDSAT 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover and image quality are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  14. LANDSAT: Non-US standard catalog. [LANDSAT imagery for August 1977

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The non-U. S. Standard Catalog lists non-U. S. imagery acquired by LANDSAT 1 and 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover and image quality are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  15. LANDSAT non-US standard catalog, 1-31 May 1976

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The non-U.S. standard catalog lists non-U.S. imagery acquired by LANDSAT 1 and LANDSAT 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover and image quality are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  16. LANDSAT 2 cumulative US standard catalog. [LANDSAT imagery for January 1976

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The U.S. Standard Catalog lists U.S. imagery acquired by LANDSAT 1 and LANDSAT 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover and image quality, are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  17. LANDSAT: US standard catalog, 1 January 1977 through 31 January 1977

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The U.S. Standard Catalog lists U.S. imagery acquired by LANDSAT 1 and LANDSAT 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover and image quality are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  18. Photogrammetric Processing of IceBridge DMS Imagery into High-Resolution Digital Surface Models (DEM and Visible Overlay)

    NASA Astrophysics Data System (ADS)

    Arvesen, J. C.; Dotson, R. C.

    2014-12-01

    The DMS (Digital Mapping System) has been a sensor component of all DC-8 and P-3 IceBridge flights since 2009 and has acquired over 3 million JPEG images over Arctic and Antarctic land and sea ice. The DMS imagery is primarily used for identifying and locating open leads for LiDAR sea-ice freeboard measurements and documenting snow and ice surface conditions. The DMS is a COTS Canon SLR camera utilizing a 28 mm focal length lens, resulting in a 10 cm GSD and a swath of ~400 meters from a nominal flight altitude of 500 meters. Exterior orientation is provided by an Applanix IMU/GPS which records a TTL pulse coincident with image acquisition. Notable for virtually all IceBridge flights is that parallel grids are not flown, so there is no ability to photogrammetrically tie any imagery to adjacent flight lines. Approximately 800,000 Level-3 DMS Surface Model data products have been delivered to NSIDC, each consisting of a Digital Elevation Model (GeoTIFF DEM) and a co-registered Visible Overlay (GeoJPEG). The absolute elevation accuracy of each individual Elevation Model is adjusted to concurrent Airborne Topographic Mapper (ATM) lidar data, resulting in higher elevation accuracy than can be achieved by photogrammetry alone. The adjustment methodology forces a zero mean difference with respect to the corresponding ATM point cloud integrated over each DMS frame. Statistics calculated for each DMS Elevation Model frame show RMS differences within +/- 10 cm with respect to the ATM point cloud. The DMS Surface Model possesses similar elevation accuracy to the ATM point cloud, but with the following advantages: higher and uniform spatial resolution (40 cm GSD); a 45% wider swath (435 meters vs. 300 meters at 500 meter flight altitude); a visible RGB co-registered overlay at 10 cm GSD; and enhanced visualization through 3-dimensional virtual reality (i.e. video fly-through). Examples will be presented of the utility of these advantages, and a novel use of a cell phone camera for aerial photogrammetry will also be presented.
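
The zero-mean adjustment can be sketched as: sample the photogrammetric DEM at the lidar point locations, compute the mean difference, and subtract it from the whole frame (a minimal illustration; all names and numbers are made up):

```python
import numpy as np

def adjust_to_lidar(dsm, dsm_at_lidar, lidar_z):
    """Shift a photogrammetric DEM so its mean difference to the concurrent
    ATM lidar points over the frame is zero."""
    bias = float(np.mean(np.asarray(dsm_at_lidar) - np.asarray(lidar_z)))
    return dsm - bias, bias

dsm = np.array([[10.2, 10.3], [10.1, 10.4]])
# DEM sampled at three lidar footprints vs. the ATM elevations there
adjusted, bias = adjust_to_lidar(dsm, [10.2, 10.3, 10.1], [10.0, 10.1, 9.9])
```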

  19. Chance Encounter with a Stratospheric Kerosene Rocket Plume From Russia Over California

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Wilson, J. C.; Ross, M. N.; Brock, C. A.; Sheridan, P. J.; Schoeberl, M. R.; Lait, L. R.; Bui, T. P.; Loewenstein, M.; Podolske, J. R.; hide

    2000-01-01

    A high-altitude aircraft flight on April 18, 1997 detected an enormous aerosol cloud at 20 km altitude near California (37 N). Not visually observed, the cloud had high concentrations of soot and sulfate aerosol and was over 180 km in horizontal extent. The cloud was probably produced by a large hydrocarbon-fueled launch vehicle, most likely from rocket motors burning liquid oxygen and kerosene. One of two Russian Soyuz rockets could have produced the cloud: a launch from the Baikonur Cosmodrome, Kazakhstan on April 6, or from Plesetsk, Russia on April 9. Parcel trajectories and long-lived trace gas concentrations suggest the Baikonur launch as the cloud source. Cloud trajectories do not trace the Soyuz plume from Asia to North America, illustrating the uncertainties of point-to-point trajectories. This cloud encounter is the only stratospheric measurement of a plume from a hydrocarbon-fuel-powered rocket.

  20. A Method for the Registration of Hemispherical Photographs and TLS Intensity Images

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Schilling, A.; Maas, H.-G.

    2012-07-01

    Terrestrial laser scanners generate dense and accurate 3D point clouds with minimal effort, which represent the geometry of real objects, while image data contains texture information of object surfaces. Based on the complementary characteristics of both data sets, a combination is very appealing for many applications, including forest-related tasks. In the scope of our research project, independent data sets of a plain birch stand have been taken by a full-spherical laser scanner and a hemispherical digital camera. Previously, both kinds of data sets have been considered separately: Individual trees were successfully extracted from large 3D point clouds, and so-called forest inventory parameters could be determined. Additionally, a simplified tree topology representation was retrieved. From hemispherical images, leaf area index (LAI) values, as a very relevant parameter for describing a stand, have been computed. The objective of our approach is to merge a 3D point cloud with image data in a way that RGB values are assigned to each 3D point. So far, segmentation and classification of TLS point clouds in forestry applications was mainly based on geometrical aspects of the data set. However, a 3D point cloud with colour information provides valuable cues exceeding simple statistical evaluation of geometrical object features and thus may facilitate the analysis of the scan data significantly.

  1. Segmentation of Large Unstructured Point Clouds Using Octree-Based Region Growing and Conditional Random Fields

    NASA Astrophysics Data System (ADS)

    Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.

    2017-11-01

    Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect of the automated procedure from point cloud data to BIM. Current approaches typically segment only a single type of primitive, such as planes or cylinders. Also, current algorithms suffer from oversegmenting the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field, after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments prove that the method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds of up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering are approximately 80%. Overall, oversegmentation is reduced by nearly 22% by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.
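
Greedy region growing conditioned on smooth surfaces can be sketched as follows (a simplified illustration, not the paper's algorithm: a brute-force radius search replaces the octree, growing is conditioned only on normal similarity, and all names and thresholds are illustrative):

```python
import numpy as np

def region_grow(points, normals, radius=0.3, angle_deg=10.0):
    """Greedy region growing: from each unvisited seed, absorb nearby points
    whose normals deviate from the current point's normal by less than
    angle_deg. A brute-force radius search stands in for the octree."""
    pts = np.asarray(points, float)
    nrm = np.asarray(normals, float)
    cos_t = np.cos(np.radians(angle_deg))
    labels = -np.ones(len(pts), dtype=int)   # -1 = not yet assigned
    region = 0
    for seed in range(len(pts)):
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = [seed]
        while queue:
            i = queue.pop()
            near = np.flatnonzero(
                (np.linalg.norm(pts - pts[i], axis=1) < radius)
                & (labels == -1)
                & (np.abs(nrm @ nrm[i]) > cos_t))   # smoothness condition
            labels[near] = region
            queue.extend(near.tolist())
        region += 1
    return labels

# Two parallel planes 1 m apart: growing should yield exactly two clusters.
xy = np.array([[x, y] for x in np.arange(0, 1, 0.1) for y in np.arange(0, 1, 0.1)])
plane_a = np.column_stack([xy, np.zeros(len(xy))])
plane_b = np.column_stack([xy, np.ones(len(xy))])
pts = np.vstack([plane_a, plane_b])
normals = np.tile([0.0, 0.0, 1.0], (len(pts), 1))
labels = region_grow(pts, normals)
```

The resulting (over)segments correspond to the candidate clusters that the Conditional Random Field then merges into the most likely configuration.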

  2. Cloud Motion in the GOCI COMS Ocean Colour Data

    NASA Technical Reports Server (NTRS)

    Robinson, Wayne D.; Franz, Bryan A.; Mannino, Antonio; Ahn, Jae-Hyun

    2016-01-01

    The Geostationary Ocean Colour Imager (GOCI) instrument, on Korea's Communications, Oceans, and Meteorological Satellite (COMS), can produce a spectral artefact arising from the motion of clouds: the cloud is spatially shifted, and the amount of shift varies by spectral band. The length of time it takes to acquire all eight GOCI bands for a given slot (portion of a scene) is sufficient to require that cloud motion be taken into account to fully mask or correct the effects of clouds in all bands. Inter-band correlations can be used to measure the amount of cloud shift, which can then be used to adjust the cloud mask so that the union of all shifted masks can act as a mask for all bands. This approach reduces the amount of masking required versus a simple expansion of the mask in all directions away from clouds. Cloud motion can also affect regions with unidentified clouds: thin or fractional clouds that evade the cloud identification process, yielding degraded quality in retrieved ocean colour parameters. Areas with moving and unidentified clouds require more elaborate masking algorithms to remove these degraded retrievals. Correction for the effects of moving fractional clouds may also be possible. The cloud shift information can be used to determine cloud motion, and thus wind at the cloud levels, on sub-minute timescales. The beneficial and negative effects of moving clouds should be considered for any ocean colour instrument design and associated data processing plans.
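    A minimal sketch of measuring an inter-band shift by phase correlation, one plausible way to implement the correlation measurement described above; GOCI's actual processing is more involved:

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer (row, col) displacement of band image `b`
    relative to `a` by phase correlation: the normalised cross-power
    spectrum peaks at the relative shift."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks past the midpoint back to negative shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```

The recovered per-band shift could then be used to displace the cloud mask before taking the union over bands.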

  3. Genomic cloud computing: legal and ethical points to consider

    PubMed Central

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Burton, Paul; Chisholm, Rex; Fortier, Isabel; Goodwin, Pat; Harris, Jennifer; Hveem, Kristian; Kaye, Jane; Kent, Alistair; Knoppers, Bartha Maria; Lindpaintner, Klaus; Little, Julian; Riegman, Peter; Ripatti, Samuli; Stolk, Ronald; Bobrow, Martin; Cambon-Thomsen, Anne; Dressler, Lynn; Joly, Yann; Kato, Kazuto; Knoppers, Bartha Maria; Rodriguez, Laura Lyman; McPherson, Treasa; Nicolás, Pilar; Ouellette, Francis; Romeo-Casabona, Carlos; Sarin, Rajiv; Wallace, Susan; Wiesner, Georgia; Wilson, Julia; Zeps, Nikolajs; Simkevitz, Howard; De Rienzo, Assunta; Knoppers, Bartha M

    2015-01-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key ‘points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These ‘points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure. PMID:25248396

  4. Genomic cloud computing: legal and ethical points to consider.

    PubMed

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Knoppers, Bartha M

    2015-10-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key 'points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These 'points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure.

  5. Spectroradiometric calibration of the Thematic Mapper and Multispectral Scanner system

    NASA Technical Reports Server (NTRS)

    Slater, P. N.; Palmer, J. M. (Principal Investigator)

    1985-01-01

    The results of analyses of Thematic Mapper (TM) images acquired on July 8 and October 28, 1984, and of a check of the calibration of the 1.22-m integrating sphere at Santa Barbara Research Center (SBRC) are described. The results obtained from the in-flight calibration attempts disagree with the pre-flight calibrations for bands 2 and 4. Considerable effort was expended in an attempt to explain the disagreement. The difficult point to explain is that the differences between the radiances predicted by the radiative transfer code (the code radiances) and the radiances predicted by the pre-flight calibration (the pre-flight radiances) fluctuate with spectral band. Because the spectral quantities measured at White Sands show little change with spectral band, these fluctuations are not anticipated. Analyses of other targets at White Sands, such as clouds, cloud shadows, and water surfaces, tend to support the pre-flight and internal calibrator calibrations. The source of the disagreement has not been identified. It could be due to: (1) a computational error in the data reduction; (2) an incorrect assumption in the input to the radiative transfer code; or (3) incorrect operation of the field equipment.

  6. Analysis of stratocumulus cloud fields using LANDSAT imagery: Size distributions and spatial separations

    NASA Technical Reports Server (NTRS)

    Welch, R. M.; Sengupta, S. K.; Chen, D. W.

    1990-01-01

    Stratocumulus cloud fields in the FIRE IFO region are analyzed using LANDSAT Thematic Mapper imagery. Structural properties such as cloud cell size distribution, cell horizontal aspect ratio, fractional coverage and fractal dimension are determined. It is found that stratocumulus cloud number densities are represented by a power law. Cell horizontal aspect ratio has a tendency to increase at large cell sizes, and cells are bi-fractal in nature. Using LANDSAT Multispectral Scanner imagery for twelve selected stratocumulus scenes acquired during previous years, similar structural characteristics are obtained. Cloud field spatial organization also is analyzed. Nearest-neighbor spacings are fit with a number of functions, with Weibull and Gamma distributions providing the best fits. Poisson tests show that the spatial separations are not random. Second order statistics are used to examine clustering.

  7. Dissipation of Titan's north polar cloud at northern spring equinox

    USGS Publications Warehouse

    Le Mouélic, S.; Rannou, P.; Rodriguez, S.; Sotin, Christophe; Griffith, C.A.; Le Corre, L.; Barnes, J.W.; Brown, R.H.; Baines, K.H.; Buratti, B.J.; Clark, R.N.; Nicholson, P.D.; Tobie, G.

    2012-01-01

    Saturn's moon Titan has a thick atmosphere with a meteorological cycle. We report on the evolution of the giant cloud system covering its north pole, using observations acquired by the Visual and Infrared Mapping Spectrometer onboard the Cassini spacecraft. A radiative transfer model in spherical geometry shows that the clouds are found at an altitude between 30 and 65 km. We also show that the polar cloud system vanished progressively as Titan approached equinox in August 2009, revealing at optical wavelengths the underlying sea known as Kraken Mare. This decrease of activity suggests that the north-polar downwelling has begun to shut off. Such a scenario is compared with the Titan global circulation model of Rannou et al. (2006), which predicts a decrease of cloud coverage in northern latitudes at the same period of time. © 2011 Elsevier Ltd. All rights reserved.

  8. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    NASA Astrophysics Data System (ADS)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
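    The information-criterion comparison can be illustrated with a small sketch, using polynomial degree as a stand-in for the number of B-spline control points; the Gaussian-error AIC/BIC formulas are standard, but the setup is illustrative, not the paper's:

```python
import numpy as np

def aic_bic(y, y_hat, k):
    """Gaussian AIC and BIC from residuals: n*log(RSS/n) + penalty * k,
    where k counts the fitted parameters."""
    n = len(y)
    rss = float(((y - y_hat) ** 2).sum())
    base = n * np.log(rss / n)
    return base + 2 * k, base + np.log(n) * k

def best_order(x, y, orders):
    """Pick the model order minimising AIC; polynomials stand in here
    for B-spline curves with varying numbers of control points."""
    scores = []
    for k in orders:
        y_hat = np.polyval(np.polyfit(x, y, k), x)
        scores.append(aic_bic(y, y_hat, k + 1)[0])
    return orders[int(np.argmin(scores))]
```

The key behaviour: for equal residual error, both criteria prefer the model with fewer parameters, which is what makes the choice of complexity reproducible rather than trial-and-error.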

  9. Comparison of roadway roughness derived from LIDAR and SFM 3D point clouds.

    DOT National Transportation Integrated Search

    2015-10-01

    This report describes a short-term study undertaken to investigate the potential for using dense three-dimensional (3D) point clouds generated from light detection and ranging (LIDAR) and photogrammetry to assess roadway roughness. Spatially cont...

  10. Extractive biodegradation and bioavailability assessment of phenanthrene in the cloud point system by Sphingomonas polyaromaticivorans.

    PubMed

    Pan, Tao; Deng, Tao; Zeng, Xinying; Dong, Wei; Yu, Shuijing

    2016-01-01

    The biological treatment of polycyclic aromatic hydrocarbons is an important issue. Most microbes have limited practical applications because of the poor bioavailability of polycyclic aromatic hydrocarbons. In this study, the extractive biodegradation of phenanthrene by Sphingomonas polyaromaticivorans was conducted by introducing the cloud point system. The cloud point system is composed of a 40 g/L mixture of the nonionic surfactants Brij 30 and Tergitol TMN-3 in equal proportions. After phenanthrene degradation, a higher wet cell weight and a lower phenanthrene residue were obtained in the cloud point system than in the control system. According to the results of high-performance liquid chromatography, the residual phenanthrene partitioned preferentially from the dilute phase into the coacervate phase. The concentration of residual phenanthrene in the dilute phase (below 0.001 mg/L) is lower than its solubility in water (1.18 mg/L) after extractive biodegradation. Therefore, dilute phase detoxification was achieved, indicating that the dilute phase could be discharged without causing phenanthrene pollution. Bioavailability was assessed by introducing the apparent logP in the cloud point system. The apparent logP decreased significantly, indicating that the bioavailability of phenanthrene increased remarkably in the system. This study provides a potential application of biological treatment in water and soil contaminated by phenanthrene.

  11. Quantitative evaluation for small surface damage based on iterative difference and triangulation of 3D point cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong

    2018-03-01

    This paper proposes a non-contact, non-destructive evaluation method for the surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain the 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest neighbours search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which can directly reflect the change trend of the curvature of the point cloud data in the damage region. The extracted damage region is divided into three prism elements by a method of triangulation. The volume and mass of a single element are calculated by the method of geometric segmentation. Finally, the total volume and mass of the damage region are obtained by the principle of superposition. The proposed method is applied to several typical injuries and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which are useful for evaluating the damage in a further research stage.
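    The element-volume superposition described above can be sketched directly; the volume of each vertical triangular prism element is its base-triangle area times the mean vertex depth, and total mass follows from an assumed material density (the function names and numbers below are illustrative):

```python
import numpy as np

def prism_volume(tri_xy, depths):
    """Volume of one vertical triangular prism element: the area of the
    base triangle (shoelace formula) times the mean of its three vertex
    depths."""
    (x1, y1), (x2, y2), (x3, y3) = tri_xy
    area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    return area * float(np.mean(depths))

def damage_mass(triangles, depth_field, density):
    """Total mass of the damage region by superposing element volumes,
    mirroring the principle of superposition used in the paper."""
    return density * sum(prism_volume(t, d)
                         for t, d in zip(triangles, depth_field))
```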

  12. Subtropical Cirrus Properties Derived from GSFC Scanning Raman Lidar Measurements during CAMEX 3

    NASA Technical Reports Server (NTRS)

    Whiteman, D. N.; Wang, Z.; Demoz, B.

    2004-01-01

    The NASA/GSFC Scanning Raman Lidar (SRL) was stationed on Andros Island, Bahamas for the third Convection and Moisture Experiment (CAMEX 3) held in August-September 1998 and acquired an extensive set of water vapor and cirrus cloud measurements (Whiteman et al., 2001). The cirrus data studied here have been segmented by generating mechanism. Distinct differences in the optical properties of the clouds are found depending on whether the cirrus are hurricane-induced or thunderstorm-induced. Relationships of cirrus cloud optical depth, mean cloud temperature, and layer mean extinction-to-backscatter ratio (S) are presented and compared with mid-latitude and tropical results. Hurricane-induced cirrus clouds are found to generally possess lower values of S than thunderstorm-induced clouds. Comparisons of these measurements of S with other studies reveal at times large differences. Given that S is a required parameter for space-based retrievals of cloud optical depth using backscatter lidar, these large differences in S measurements present difficulties for space-based retrievals of cirrus cloud extinction and optical depth.

  13. The FIRE Project

    NASA Technical Reports Server (NTRS)

    Mcdougal, D.

    1986-01-01

    The International Satellite Cloud Climatology Project's (ISCCP) First ISCCP Regional Experiment (FIRE) project is a program to validate the cloud parameters derived by the ISCCP. The 4- to 5-year program will concentrate on clouds in the continental United States, particularly cirrus and marine stratocumulus clouds. As part of the validation process, FIRE will acquire satellite, aircraft, balloon, and surface data. These data (except for the satellite data) will be amalgamated into one common data set. Plans are to generate a standardized format structure for use in the PCDS. Data collection will begin in April 1986, but will not be available to the general scientific community until 1987 or 1988. Additional pertinent data sets already reside in the PCDS. Other qualifications of the PCDS for use in this validation program were enumerated.

  14. Cloud Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)

    2001-01-01

    Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion that explicitly include the vertical acceleration terms, since the vertical and horizontal scales of convection are similar. Such models are also necessary in order to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does not allow such gravity waves to be resolved in this way. In addition, the availability of exponentially increasing computer capabilities has resulted in time integrations increasing from hours to days, domain grid boxes (points) increasing from fewer than 2,000 to more than 2,500,000 grid points with 500 to 1000 m resolution, and 3-D models becoming increasingly prevalent. The cloud resolving model is now at a stage where it can provide reasonably accurate statistical information on the sub-grid, cloud-resolving processes that are poorly parameterized in climate models and numerical prediction models.

  15. Russia

    Atmospheric Science Data Center

    2013-04-16

    article title: Smoke and Clouds over Russia. Multi-angle Imaging SpectroRadiometer (MISR) images of Russia's far east Khabarovsk region, acquired on May 13, 2001 ...

  16. Interactive Classification of Construction Materials: Feedback Driven Framework for Annotation and Analysis of 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Hess, M. R.; Petrovic, V.; Kuester, F.

    2017-08-01

    Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where these data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector or local surface geometry. Multiple case studies are presented here to demonstrate the flexibility and utility of the presented point cloud visualization framework to achieve classification objectives.

  17. Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs

    NASA Technical Reports Server (NTRS)

    Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen

    2015-01-01

    An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
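    The "pseudo-image" generation step can be sketched as a simple rasterisation, storing the mean depth per XY cell so that standard image keypoint detectors can run on the result; the cell size and grid shape are illustrative, and the paper's sonar-specific processing is not reproduced:

```python
import numpy as np

def pseudo_image(points, cell=1.0, grid=(64, 64)):
    """Bin an (N, 3) point cloud into a 2D grid over XY and store the
    mean Z per cell; empty cells are left at zero."""
    h, w = grid
    ix = np.clip((points[:, 0] / cell).astype(int), 0, w - 1)
    iy = np.clip((points[:, 1] / cell).astype(int), 0, h - 1)
    acc = np.zeros(grid)
    cnt = np.zeros(grid)
    np.add.at(acc, (iy, ix), points[:, 2])   # unbuffered accumulation
    np.add.at(cnt, (iy, ix), 1)
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```

Two such rasters from overlapping passes could then be fed to a feature matcher, with surviving matches used to seed ICP as in the abstract.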

  18. Feasibility of Smartphone Based Photogrammetric Point Clouds for the Generation of Accessibility Maps

    NASA Astrophysics Data System (ADS)

    Angelats, E.; Parés, M. E.; Kumar, P.

    2018-05-01

    Accessible cities with accessible services are a long-standing demand of people with reduced mobility. But this demand is still far from becoming a reality, as much work remains to be done. The first step towards accessible cities is to know the real situation of cities and their pavement infrastructure. Detailed maps or databases on street slopes, access to sidewalks, mobility in public parks and gardens, etc. are required. In this paper, we propose to use smartphone-based photogrammetric point clouds as a starting point to create accessibility maps or databases. This paper analyses the performance of these point clouds and the complexity of the image acquisition procedure required to obtain them. The paper proves, through two test cases, that smartphone technology is an economical and feasible solution for gathering the required information, which is often sought by city planners to generate accessibility maps. The proposed approach paves the way to generating, in the near term, accessibility maps from point clouds derived from crowdsourced smartphone imagery.

  19. 3D granulometry: grain-scale shape and size distribution from point cloud dataset of river environments

    NASA Astrophysics Data System (ADS)

    Steer, Philippe; Lague, Dimitri; Gourdon, Aurélie; Croissant, Thomas; Crave, Alain

    2016-04-01

    The grain-scale morphology of river sediments and their size distribution are important factors controlling the efficiency of fluvial erosion and transport. In turn, constraining the spatial evolution of these two metrics offers deep insights into the dynamics of river erosion and sediment transport from hillslopes to the sea. However, the size distribution of river sediments is generally assessed using statistically biased field measurements, and determining the grain-scale shape of river sediments remains a real challenge in geomorphology. Here we determine, with new methodological approaches based on the segmentation and geomorphological fitting of 3D point cloud datasets, the size distribution and grain-scale shape of sediments located in river environments. Point cloud segmentation is performed using either machine-learning algorithms or geometrical criteria, such as local plane fitting or curvature analysis. Once the grains are individualized into several sub-clouds, each grain-scale morphology is determined using a 3D geometrical fitting algorithm applied to the sub-cloud. Although different geometrical models can be conceived and tested, only ellipsoidal models were used in this study. A results-checking phase is then performed to remove grains whose best-fitting model has a low level of confidence. The main benefits of this automatic method are that it provides 1) an unbiased estimate of grain-size distribution over a large range of scales, from centimetres to tens of metres; 2) access to a very large number of data, limited only by the number of grains in the point cloud dataset; and 3) access to the 3D morphology of grains, in turn allowing new metrics characterizing the size and shape of grains to be developed. The main limit of this method is that it can only detect grains with a characteristic size greater than the resolution of the point cloud.
    This new 3D granulometric method is then applied to river terraces both in the Poerua catchment in New Zealand and along the Laonong River in Taiwan, whose point clouds were obtained using both terrestrial lidar scanning and structure-from-motion photogrammetry.
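    The ellipsoid-fitting step can be illustrated with a stripped-down least-squares fit of an origin-centred, axis-aligned ellipsoid; real grains additionally require centring and orientation estimation, so this is only a sketch of the idea:

```python
import numpy as np

def fit_ellipsoid_radii(pts):
    """Least-squares radii (a, b, c) of an origin-centred, axis-aligned
    ellipsoid x^2/a^2 + y^2/b^2 + z^2/c^2 = 1: the model is linear in
    (1/a^2, 1/b^2, 1/c^2), so one lstsq call suffices."""
    A = pts ** 2                               # columns: x^2, y^2, z^2
    coef, *_ = np.linalg.lstsq(A, np.ones(len(pts)), rcond=None)
    return 1.0 / np.sqrt(coef)
```

The fitted radii directly give grain axis lengths, from which size and shape metrics (e.g. elongation, flatness) can be derived for each segmented sub-cloud.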

  20. Venus' night side atmospheric dynamics using near infrared observations from VEx/VIRTIS and TNG/NICS

    NASA Astrophysics Data System (ADS)

    Mota Machado, Pedro; Peralta, Javier; Luz, David; Gonçalves, Ruben; Widemann, Thomas; Oliveira, Joana

    2016-10-01

    We present night-side Venus winds based on coordinated observations carried out with Venus Express' VIRTIS instrument and the Near Infrared Camera (NICS) of the Telescopio Nazionale Galileo (TNG). With the NICS camera, we acquired images in the K-continuum filter at 2.28 μm, which allows motions at Venus' lower cloud level, close to 48 km altitude, to be monitored. We will present final results of cloud-tracked winds from ground-based TNG observations and from coordinated space-based VEx/VIRTIS observations. The Venus lower cloud deck is centred at 48 km altitude, where fundamental dynamical exchanges that help maintain superrotation are thought to occur. The lower Venusian atmosphere is a strong source of thermal radiation, with the gaseous CO2 component allowing radiation to escape in windows at 1.74 and 2.28 μm. At these wavelengths radiation originates below 35 km, and unit opacity is reached at the lower cloud level, close to 48 km. Therefore, it is possible to observe the horizontal cloud structure, with thicker clouds seen silhouetted against the bright thermal background from the low atmosphere. By continuously monitoring the horizontal cloud structure at 2.28 μm (NICS Kcont filter), it is possible to determine wind fields using the technique of cloud tracking. We acquired a series of short exposures of the Venus disk. Cloud displacements on the night side of Venus were computed taking advantage of a semi-automated phase correlation technique. The Venus apparent diameter at the observation dates was greater than 32", allowing high spatial precision. The 0.13" pixel scale of the NICS narrow-field camera allowed ~3-pixel displacements to be resolved. The absolute spatial resolution on the disk was ~100 km/px at disk centre, and the (0.8-1") seeing-limited resolution was ~400 km/px. By co-adding the best images and cross-correlating regions of clouds, the effective resolution was significantly better than the seeing-limited resolution.
    In order to correct for scattered light from the (saturated) day-side crescent into the night side, a set of observations with a Brγ filter was performed. Cloud features are invisible at this wavelength, and this technique allowed a good correction of scattered light.

  1. Surface Fitting Filtering of LIDAR Point Cloud with Waveform Information

    NASA Astrophysics Data System (ADS)

    Xing, S.; Li, P.; Xu, Q.; Wang, D.; Li, P.

    2017-09-01

    Full-waveform LiDAR is an active photogrammetry and remote sensing technology. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point cloud and waveform information can be obtained by waveform decomposition, which can contribute to accurate filtering. A surface fitting filtering method using waveform information is proposed to exploit this advantage. Firstly, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Secondly, ground seed points are selected, and abnormal ones among them are detected using waveform parameters and robust estimation. Thirdly, the terrain surface is fitted and the height difference threshold is determined in consideration of window size and mean square error. Finally, the points are classified gradually as the window size increases; the filtering process finishes once the window size exceeds a threshold. Waveform data over urban, farmland and mountain areas from WATER (Watershed Allied Telemetry Experimental Research) are selected for the experiments. The results show that, compared with the traditional method, the accuracy of point cloud filtering is further improved and the proposed method has high practical value.
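    A much simplified single-pass sketch of the height-difference idea: take the lowest point per grid cell as a ground seed and keep points within a threshold of it. The paper's surface fitting, waveform checks, and progressive window sizes are omitted, and the parameter values are illustrative:

```python
import numpy as np

def ground_filter(points, cell=5.0, dz=0.3):
    """Classify points as ground if they lie within `dz` of the lowest
    point in their XY grid cell (a crude local ground estimate)."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    keys = {tuple(k) for k in ij}
    zmin = {k: points[(ij == k).all(1), 2].min() for k in keys}
    return np.array([p[2] - zmin[tuple(c)] <= dz
                     for p, c in zip(points, ij)])
```

In the full method, this keep/reject test would be repeated with a fitted surface replacing the per-cell minimum and with the window size growing each iteration.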

  2. A cloud mask methodology for high resolution remote sensing data combining information from high and medium resolution optical sensors

    NASA Astrophysics Data System (ADS)

    Sedano, Fernando; Kempeneers, Pieter; Strobl, Peter; Kucera, Jan; Vogt, Peter; Seebach, Lucia; San-Miguel-Ayanz, Jesús

    2011-09-01

    This study presents a novel cloud masking approach for high resolution remote sensing images in the context of land cover mapping. In contrast to traditional methods, the approach does not rely on thermal bands and is applicable to images from most high resolution earth observation remote sensing sensors. The methodology couples pixel-based seed identification and object-based region growing. The seed identification stage relies on pixel value comparison between the high resolution images and cloud-free composites at lower spatial resolution acquired at almost the same time. The methodology was tested taking SPOT4-HRVIR, SPOT5-HRG and IRS-LISS III as high resolution images and cloud-free MODIS composites as reference images. The selected scenes included a wide range of cloud types and surface features. The resulting cloud masks were evaluated through visual comparison. They were also compared with ad-hoc independently generated cloud masks and with the automatic cloud cover assessment algorithm (ACCA). In general the results showed an agreement in detected clouds higher than 95% for clouds larger than 50 ha. The approach produced consistent results, identifying and mapping clouds of different types and sizes over various land surfaces including natural vegetation, agricultural land, built-up areas, water bodies and snow.
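    The seed-then-grow coupling can be sketched as hysteresis thresholding on the difference between the image and a cloud-free reference: strong positive differences become seeds, and the mask grows into moderately bright 4-connected neighbours. The threshold values here are illustrative, not the paper's:

```python
from collections import deque
import numpy as np

def grow_cloud_mask(img, ref, seed_diff=50.0, grow_diff=20.0):
    """Hysteresis-style cloud mask: pixels much brighter than the
    reference seed the mask, which then grows into neighbours that are
    still moderately brighter than the reference."""
    diff = img - ref
    mask = diff >= seed_diff
    queue = deque(zip(*np.nonzero(mask)))
    h, w = img.shape
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and diff[ny, nx] >= grow_diff):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

Isolated moderately bright pixels never join the mask, which is what keeps bright but cloud-free surfaces from being flagged.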

  3. Cloud Point and Liquid-Liquid Equilibrium Behavior of Thermosensitive Polymer L61 and Salt Aqueous Two-Phase System.

    PubMed

    Rao, Wenwei; Wang, Yun; Han, Juan; Wang, Lei; Chen, Tong; Liu, Yan; Ni, Liang

    2015-06-25

    The cloud point of the thermosensitive triblock polymer L61, poly(ethylene oxide)-poly(propylene oxide)-poly(ethylene oxide) (PEO-PPO-PEO), was determined in the presence of various electrolytes (K2HPO4, (NH4)3C6H5O7, and K3C6H5O7). The cloud point of L61 was lowered by the addition of electrolytes and decreased linearly with increasing electrolyte concentration. The efficacy of the electrolytes in reducing the cloud point followed the order K3C6H5O7 > (NH4)3C6H5O7 > K2HPO4. With increasing salt concentration, the aqueous two-phase systems exhibited a phase inversion. In addition, increasing the temperature reduced the salt concentration needed to promote phase inversion. The phase diagrams and liquid-liquid equilibrium data of the L61-K2HPO4/(NH4)3C6H5O7/K3C6H5O7 aqueous two-phase systems (both before and after phase inversion) were determined at T = (25, 30, and 35) °C. The phase diagrams of the aqueous two-phase systems were fitted to a four-parameter empirical nonlinear expression. Moreover, the slopes of the tie-lines and the area of the two-phase region in the diagram tend to rise with increasing temperature. The capacity of different salts to induce aqueous two-phase system formation followed the same order as the ability of the salts to reduce the cloud point.
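    The reported linear dependence of cloud point on electrolyte concentration can be recovered with an ordinary least-squares line; the numbers in the test data below are illustrative, not the paper's measurements:

```python
import numpy as np

def cloud_point_fit(conc, cp):
    """Fit CP = intercept + slope * c by least squares and return
    (slope, intercept); a negative slope reflects the salting-out
    behaviour described in the abstract."""
    slope, intercept = np.polyfit(conc, cp, 1)
    return slope, intercept
```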

  4. LANDSAT US standard catalog, 1-30 September 1977. [LANDSAT imagery for September, 1977

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The U. S. Standard Catalog lists U. S. imagery acquired by LANDSAT 1 and 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover, and image quality are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  5. VirTUal remoTe labORatories managEment System (TUTORES): Using Cloud Computing to Acquire University Practical Skills

    ERIC Educational Resources Information Center

    Caminero, Agustín C.; Ros, Salvador; Hernández, Roberto; Robles-Gómez, Antonio; Tobarra, Llanos; Tolbaños Granjo, Pedro J.

    2016-01-01

    The use of practical laboratories is a key in engineering education in order to provide our students with the resources needed to acquire practical skills. This is specially true in the case of distance education, where no physical interactions between lecturers and students take place, so virtual or remote laboratories must be used. UNED has…

  6. LANDSAT Non-US standard catalog, 1-31 December 1975. [LANDSAT imagery for December 1975

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The Non-U.S. Standard Catalog lists Non-U.S. imagery acquired by LANDSAT 1 and 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover and image quality are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  7. Where Europe meets Africa

    NASA Image and Video Library

    2005-03-23

    Data from a portion of the imagery acquired by NASA Terra spacecraft during 2000-2002 were combined to create this cloud-free natural-color mosaic of southwestern Europe and northwestern Morocco and Algeria.

  8. Quantifying Above-Cloud Aerosols through Integrating Multi-Sensor Measurements from A-Train Satellites

    NASA Technical Reports Server (NTRS)

    Zhang, Yan

    2012-01-01

    Quantifying above-cloud aerosols can help improve the assessment of aerosol intercontinental transport and climate impacts. Large-scale measurements of aerosol above low-level clouds had been generally unexplored until very recently, when the CALIPSO lidar started to acquire aerosol and cloud profiles in June 2006. Despite CALIPSO's unique capability of measuring above-cloud aerosol optical depth (AOD), such observations are substantially limited in spatial coverage because of the lidar's near-zero swath. We developed an approach that integrates measurements from A-Train satellite sensors (including the CALIPSO lidar, OMI, and MODIS) to extend CALIPSO above-cloud AOD observations to substantially larger areas. We first examine relationships between collocated CALIPSO above-cloud AOD and the OMI absorbing aerosol index (AI, a qualitative measure of AOD for elevated dust and smoke aerosol) as a function of MODIS cloud optical depth (COD), using 8 months of data in the Saharan dust outflow and southwest African smoke outflow regions. The analysis shows that for a given cloud albedo, above-cloud AOD correlates positively and linearly with AI. We then apply the derived relationships with MODIS COD and OMI AI measurements to derive above-cloud AOD over the whole outflow regions. In this talk, we will present spatial and day-to-day variations of the above-cloud AOD and the estimated direct radiative forcing by the above-cloud aerosols.
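    The per-COD linear AOD-AI relationship and its regional application can be sketched roughly as follows; the COD bin edges and the use of `np.polyfit` are illustrative assumptions, not the authors' exact regression procedure:

```python
import numpy as np

def fit_aod_ai(ai, aod, cod, cod_bins):
    """For each cloud-optical-depth bin, fit AOD = a*AI + b on collocated
    CALIPSO/OMI/MODIS samples (the abstract reports a linear relation
    for a given cloud albedo)."""
    coeffs = {}
    idx = np.digitize(cod, cod_bins)
    for k in range(1, len(cod_bins)):
        sel = idx == k
        if sel.sum() >= 2:
            coeffs[k] = np.polyfit(ai[sel], aod[sel], 1)  # (slope, intercept)
    return coeffs

def predict_aod(ai, cod, cod_bins, coeffs):
    """Extend above-cloud AOD over the full region from OMI AI and MODIS COD."""
    idx = np.digitize(cod, cod_bins)
    aod = np.full(ai.shape, np.nan)
    for k, (a, b) in coeffs.items():
        sel = idx == k
        aod[sel] = a * ai[sel] + b
    return aod
```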

  9. Intensity-corrected Herschel Observations of Nearby Isolated Low-mass Clouds

    NASA Astrophysics Data System (ADS)

    Sadavoy, Sarah I.; Keto, Eric; Bourke, Tyler L.; Dunham, Michael M.; Myers, Philip C.; Stephens, Ian W.; Di Francesco, James; Webb, Kristi; Stutz, Amelia M.; Launhardt, Ralf; Tobin, John J.

    2018-01-01

    We present intensity-corrected Herschel maps at 100, 160, 250, 350, and 500 μm for 56 isolated low-mass clouds. We determine the zero-point corrections for Herschel Photodetector Array Camera and Spectrometer (PACS) and Spectral Photometric Imaging Receiver (SPIRE) maps from the Herschel Science Archive (HSA) using Planck data. Since these HSA maps are small, we cannot correct them using typical methods. Here we introduce a technique to measure the zero-point corrections for small Herschel maps. We use radial profiles to identify offsets between the observed HSA intensities and the expected intensities from Planck. Most clouds have reliable offset measurements with this technique. In addition, we find that roughly half of the clouds have underestimated HSA-SPIRE intensities in their outer envelopes relative to Planck, even though the HSA-SPIRE maps were previously zero-point corrected. Using our technique, we produce corrected Herschel intensity maps for all 56 clouds and determine their line-of-sight average dust temperatures and optical depths from modified blackbody fits. The clouds have typical temperatures of ∼14–20 K and optical depths of ∼10^-5–10^-3. Across the whole sample, we find an anticorrelation between temperature and optical depth. We also find lower temperatures than what was measured in previous Herschel studies, which subtracted out a background level from their intensity maps to circumvent the zero-point correction. Accurate Herschel observations of clouds are key to obtaining accurate density and temperature profiles. To make such future analyses possible, intensity-corrected maps for all 56 clouds are publicly available in the electronic version. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
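    A modified-blackbody fit of the kind mentioned above can be sketched with a brute-force grid search; the fixed β = 2, the 250 μm reference frequency and the temperature grid are illustrative assumptions rather than the paper's fitting setup:

```python
import numpy as np

H = 6.626e-34   # Planck constant [J s]
K = 1.381e-23   # Boltzmann constant [J/K]
C = 3.0e8       # speed of light [m/s]

def planck(nu, T):
    """Planck function B_nu(T) in SI units."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))

def fit_mbb(wavelengths_um, intensities, beta=2.0, nu0=C / 250e-6):
    """Fit I_nu = tau0 * (nu/nu0)**beta * B_nu(T) with beta fixed.
    T is scanned over a grid; for each T, tau0 has a closed-form
    linear least-squares solution."""
    nu = C / (np.asarray(wavelengths_um) * 1e-6)
    best = (None, None, np.inf)
    for T in np.linspace(5.0, 40.0, 351):
        model = (nu / nu0)**beta * planck(nu, T)
        tau = np.dot(model, intensities) / np.dot(model, model)
        err = np.sum((tau * model - intensities)**2)
        if err < best[2]:
            best = (T, tau, err)
    return best[0], best[1]
```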

  10. Comparing and characterizing three-dimensional point clouds derived by structure from motion photogrammetry

    NASA Astrophysics Data System (ADS)

    Schwind, Michael

    Structure from Motion (SfM) is a photogrammetric technique whereby three-dimensional (3D) structures are estimated from overlapping two-dimensional (2D) image sequences. It is studied in the field of computer vision and utilized in fields such as archeology, engineering, and the geosciences. Currently, many SfM software packages exist that allow for the generation of 3D point clouds. Little work has been done to show how topographic data generated by these software packages differ over varying terrain types and why they might produce different results. This work aims to compare and characterize the differences between point clouds generated by three different SfM software packages: two well-known proprietary solutions (Pix4D, Agisoft PhotoScan) and one open source solution (OpenDroneMap). Five terrain types were imaged utilizing a DJI Phantom 3 Professional small unmanned aircraft system (sUAS). These terrain types include a marsh environment, a gently sloped sandy beach and jetties, a forested peninsula, a house, and a flat parking lot. Each set of imagery was processed with each software package, and the results were then directly compared to each other. Before processing, the software settings were analyzed and chosen in a manner that allowed for the most similar settings across the three packages. This was done in an attempt to minimize point cloud differences caused by dissimilar settings. The characteristics of the resultant point clouds were then compared with each other. Furthermore, a terrestrial light detection and ranging (LiDAR) survey was conducted over the flat parking lot using a Riegl VZ-400 scanner. These data served as ground truth for an accuracy assessment of the sUAS-SfM point clouds. Differences were found between the results, apparent not only in the characteristics of the clouds, but also in their accuracy. This study allows users of SfM photogrammetry to better understand how different processing software packages compare and the inherent sensitivity of SfM automation in 3D reconstruction. Because this study used mostly default settings within the software, further research investigating the effects that changing these parameters have on the fidelity of the resulting point cloud datasets would be beneficial.

  11. Road and Roadside Feature Extraction Using Imagery and LIDAR Data for Transportation Operation

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.; Romero, M. A.; Tarko, A.

    2015-03-01

    Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and features within proximity to the roads as input for evaluating and prioritizing new or improvement road projects. The information needed for a robust evaluation of road projects includes the road centerline, width, and extent, together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing offers a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing data sources and methods available for road extraction, transportation operation requires more than the centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and requires multiple data sources to be integrated. In the presented study, we established a framework that used data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and an existing spatially non-accurate ancillary road network. We were able to extract 90.25% of a total of 23.6 miles of road network, together with the estimated road width, average grade along the road, and cross sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent; 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.

  12. Person detection and tracking with a 360° lidar system

    NASA Astrophysics Data System (ADS)

    Hammer, Marcus; Hebel, Marcus; Arens, Michael

    2017-10-01

    Today it is easy to generate dense point clouds of the sensor environment using 360° LiDAR (Light Detection and Ranging) sensors, which have been available for a number of years. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects like persons or vehicles are of particular interest, for instance for automatic collision avoidance, mobile sensor platforms or surveillance tasks. In the literature there are several approaches for automated person detection in point clouds. While most techniques show acceptable results in object detection, the computation time is often critical. The runtime can be problematic, especially due to the amount of data in panoramic 360° point clouds. On the other hand, most applications need object detection and classification in real time. The paper presents a proposal for a fast, real-time-capable algorithm for person detection, classification and tracking in panoramic point clouds.

  13. Linking Advanced Visualization and MATLAB for the Analysis of 3D Gene Expression Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruebel, Oliver; Keranen, Soile V.E.; Biggin, Mark

    Three-dimensional gene expression PointCloud data generated by the Berkeley Drosophila Transcription Network Project (BDTNP) provides quantitative information about the spatial and temporal expression of genes in early Drosophila embryos at cellular resolution. The BDTNP team visualizes and analyzes PointCloud data using the software application PointCloudXplore (PCX). To maximize the impact of novel, complex data sets, such as PointClouds, the data needs to be accessible to biologists and comprehensible to developers of analysis functions. We address this challenge by linking PCX and Matlab via a dedicated interface, thereby providing biologists seamless access to advanced data analysis functions and giving bioinformatics researchers the opportunity to integrate their analysis directly into the visualization application. To demonstrate the usefulness of this approach, we computationally model parts of the expression pattern of the gene even skipped using a genetic algorithm implemented in Matlab and integrated into PCX via our Matlab interface.

  14. End-group-functionalized poly(N,N-diethylacrylamide) via free-radical chain transfer polymerization: Influence of sulfur oxidation and cyclodextrin on self-organization and cloud points in water

    PubMed Central

    Reinelt, Sebastian; Steinke, Daniel

    2014-01-01

    Summary In this work we report the synthesis of thermo-, oxidation- and cyclodextrin- (CD) responsive end-group-functionalized polymers, based on N,N-diethylacrylamide (DEAAm). In a classical free-radical chain transfer polymerization, using thiol-functionalized 4-alkylphenols, namely 3-(4-(1,1-dimethylethan-1-yl)phenoxy)propane-1-thiol and 3-(4-(2,4,4-trimethylpentan-2-yl)phenoxy)propane-1-thiol, poly(N,N-diethylacrylamide) (PDEAAm) with well-defined hydrophobic end-groups is obtained. These end-group-functionalized polymers show different cloud point values, depending on the degree of polymerization and the presence of randomly methylated β-cyclodextrin (RAMEB-CD). Additionally, the influence of the oxidation of the incorporated thioether linkages on the cloud point is investigated. The resulting hydrophilic sulfoxides show higher cloud point values for the lower critical solution temperature (LCST). A high degree of functionalization is supported by 1H NMR-, SEC-, FTIR- and MALDI–TOF measurements. PMID:24778720

  15. Strategies GeoCape Intelligent Observation Studies @ GSFC

    NASA Technical Reports Server (NTRS)

    Cappelaere, Pat; Frye, Stu; Moe, Karen; Mandl, Dan; LeMoigne, Jacqueline; Flatley, Tom; Geist, Alessandro

    2015-01-01

    This presentation provides a summary of the tradeoff studies conducted for GeoCape by the GSFC team on how to optimize GeoCape observation efficiency. Tradeoffs include total ground scheduling with simple priorities, ground scheduling with cloud forecast, ground scheduling with sub-area forecast, onboard scheduling with onboard cloud detection, and smart onboard scheduling with onboard image processing. The tradeoffs considered optimizing cost, downlink bandwidth, and the total number of images acquired.

  16. A new markerless patient-to-image registration method using a portable 3D scanner.

    PubMed

    Fan, Yifeng; Jiang, Dongsheng; Wang, Manning; Song, Zhijian

    2014-10-01

    Patient-to-image registration is critical to providing surgeons with reliable guidance information in the application of image-guided neurosurgery systems. The conventional point-matching registration method, which is based on skin markers, requires expensive and time-consuming logistic support. Surface-matching registration with facial surface scans is an alternative method, but the registration accuracy is unstable and the error in the more posterior parts of the head is usually large because the scan range is limited. This study proposes a new surface-matching method using a portable 3D scanner to acquire a point cloud of the entire head to perform the patient-to-image registration. A new method for transforming the scan points from the device space into the patient space without calibration and tracking was developed. Five positioning targets were attached to a reference star, and their coordinates in the patient space were measured beforehand. During registration, the authors moved the scanner around the head to scan its entire surface as well as the positioning targets, and the scanner generated a unique point cloud in the device space. The coordinates of the positioning targets in the device space were automatically detected by the scanner, and a spatial transformation from the device space to the patient space could be calculated by registering them to their previously measured coordinates in the patient space. A three-step registration algorithm was then used to register the patient space to the image space. The authors evaluated their method on a rigid head phantom and an elastic head phantom to verify its practicality and to calculate the target registration error (TRE) in different regions of the head phantoms. The authors also conducted an experiment with a real patient's data to test the feasibility of their method in the clinical environment.
In the phantom experiments, the mean fiducial registration error between the device space and the patient space, the mean surface registration error, and the mean TRE of 15 targets on the surface of each phantom were 0.34 ± 0.01 mm and 0.33 ± 0.02 mm, 1.17 ± 0.02 mm and 1.34 ± 0.10 mm, and 1.06 ± 0.11 mm and 1.48 ± 0.21 mm, respectively. When grouping the targets according to their positions on the head, high accuracy was achieved in all parts of the head, and the TREs were similar across different regions. The authors compared their method with current surface registration methods that use only a part of the facial surface on the elastic phantom, and the mean TREs of 15 targets were 1.48 ± 0.21 mm and 1.98 ± 0.53 mm, respectively. In a clinical experiment, the mean TRE of seven targets on the patient's head surface was 1.92 ± 0.18 mm, which was sufficient to meet clinical requirements. The proposed surface-matching registration method provides sufficient registration accuracy even in the posterior area of the head. The 3D point cloud of the entire head, including the facial surface and the back of the head, can be easily acquired using a portable 3D scanner. The scanner does not need to be calibrated beforehand or tracked by the optical tracking system during scanning.
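    The device-to-patient transformation computed from the five positioning targets is, in essence, a rigid point-set registration; a standard SVD-based (Kabsch) solution is sketched below as an illustration, not as the authors' exact algorithm:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    target coordinates measured in the scanner/device space (src) onto
    their pre-measured patient-space coordinates (dst): dst_i ~ R @ src_i + t."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

    With at least three non-collinear targets this yields a unique rotation and translation; more targets simply average out detection noise.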

  17. The effect of short ground vegetation on terrestrial laser scans at a local scale

    NASA Astrophysics Data System (ADS)

    Fan, Lei; Powrie, William; Smethurst, Joel; Atkinson, Peter M.; Einstein, Herbert

    2014-09-01

    Terrestrial laser scanning (TLS) can record a large amount of accurate topographical information with a high spatial accuracy over a relatively short period of time. These features suggest it is a useful tool for topographical survey and surface deformation detection. However, the use of TLS to survey a terrain surface is still challenging in the presence of dense ground vegetation. The bare ground surface may not be illuminated due to signal occlusion caused by vegetation. This paper investigates vegetation-induced elevation error in TLS surveys at a local scale and its spatial pattern. An open, relatively flat area vegetated with dense grass was surveyed repeatedly under several scan conditions. A total station was used to establish an accurate representation of the bare ground surface. Local-highest-point and local-lowest-point filters were applied to the point clouds acquired for deriving vegetation height and vegetation-induced elevation error, respectively. The effects of various factors (for example, vegetation height, edge effects, incidence angle, scan resolution and location) on the error caused by vegetation are discussed. The results are of use in the planning and interpretation of TLS surveys of vegetated areas.
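    The local-lowest-point filter used to approximate the bare ground beneath grass can be sketched as a per-grid-cell minimum; the cell size here is an arbitrary illustrative choice, and the local-highest-point filter for vegetation height is the symmetric maximum case:

```python
import numpy as np

def local_lowest(points, cell=0.5):
    """Keep only the lowest point (minimum z) in each square grid cell of
    the xy-plane: a simple local-lowest-point filter for approximating the
    bare ground surface from a vegetated TLS point cloud."""
    ij = np.floor(points[:, :2] / cell).astype(int)   # cell index per point
    lowest = {}
    for k, key in enumerate(map(tuple, ij)):
        if key not in lowest or points[k, 2] < points[lowest[key], 2]:
            lowest[key] = k
    return points[sorted(lowest.values())]
```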

  18. Comparative Analysis of Data Structures for Storing Massive Tins in a Dbms

    NASA Astrophysics Data System (ADS)

    Kumar, K.; Ledoux, H.; Stoter, J.

    2016-06-01

    Point cloud data are an important source for 3D geoinformation. Modern 3D data acquisition and processing techniques such as airborne laser scanning and multi-beam echosounding generate billions of 3D points for an area of just a few square kilometers. With the size of point clouds exceeding the billion mark for even a small area, there is a need for their efficient storage and management. These point clouds are sometimes associated with attributes and constraints as well. Storing billions of 3D points is currently possible, as confirmed by the initial implementations in Oracle Spatial SDO PC and the PostgreSQL Point Cloud extension. But to be able to analyse and extract useful information from point clouds, we need more than just points, i.e., we require the surface defined by these points in space. There are different ways to represent surfaces in GIS, including grids, TINs, boundary representations, etc. In this study, we investigate database solutions for the storage and management of massive TINs. The classical (face and edge based) and compact (star based) data structures are discussed at length with reference to their structure, advantages and limitations in handling massive triangulations, and are compared with the current solution of PostGIS Simple Feature. The main test dataset is the TIN generated from the third national elevation model of the Netherlands (AHN3), with a point density of over 10 points/m². A PostgreSQL/PostGIS DBMS is used for storing the generated TIN. The data structures are tested with the generated TIN models to account for their geometry, topology, storage, indexing, and loading time in a database. Our study is useful in identifying the limitations of the existing data structures for storing massive TINs and what is required to optimise these structures for managing massive triangulations in a database.

  19. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into images in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of a CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques: on the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % total error, 4.10 % type I error, and 15.07 % type II error. Compared to the previous CNN-based technique and the LAStools software, the proposed method reduces the total error and type I error (while the type II error is slightly higher). The method was also tested on a very high density LIDAR point cloud, resulting in 4.02 % total error, 2.15 % type I error and 6.14 % type II error.
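    Converting the whole point cloud into a single image, rather than one image per point, can be sketched as a one-pass rasterization; storing the per-cell minimum elevation is just one illustrative choice of pixel feature, not necessarily the feature set the authors feed to their FCN:

```python
import numpy as np

def cloud_to_image(points, cell=1.0):
    """Rasterize an entire point cloud into one image in a single pass:
    each pixel keeps the minimum z of the points falling in its cell.
    Empty cells are set to 0 as a placeholder."""
    xy = points[:, :2] - points[:, :2].min(axis=0)   # shift to origin
    ij = np.floor(xy / cell).astype(int)
    h, w = ij.max(axis=0) + 1
    img = np.full((h, w), np.inf)
    np.minimum.at(img, (ij[:, 0], ij[:, 1]), points[:, 2])  # unbuffered min
    img[np.isinf(img)] = 0.0
    return img
```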

  20. Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Guo, Jiateng; Liu, Shanjun; Zhang, Peina; Wu, Lixin; Zhou, Wenhui; Yu, Yinan

    2017-06-01

    Obtaining accurate information on rock mass discontinuities for deformation analysis and the evaluation of rock mass stability is important. Obtaining measurements for high and steep zones with the traditional compass method is difficult. Photogrammetry, three-dimensional (3D) laser scanning and other remote sensing methods have gradually become mainstream methods. In this study, a method that is based on a 3D point cloud is proposed to semi-automatically extract rock mass structural plane information. The original data are pre-treated prior to segmentation by removing outlier points. The next step is to segment the point cloud into different point subsets. Various parameters, such as the normal, dip/direction and dip, can be calculated for each point subset after obtaining the equation of the best fit plane for the relevant point subset. A cluster analysis (a point subset that satisfies some conditions and thus forms a cluster) is performed based on the normal vectors by introducing the firefly algorithm (FA) and the fuzzy c-means (FCM) algorithm. Finally, clusters that belong to the same discontinuity sets are merged and coloured for visualization purposes. A prototype system is developed based on this method to extract the points of the rock discontinuity from a 3D point cloud. A comparison with existing software shows that this method is feasible. This method can provide a reference for rock mechanics, 3D geological modelling and other related fields.
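    The per-subset plane fitting and orientation step described above can be sketched as follows; the SVD-based fit and the dip/dip-direction conventions are common choices assumed here, and the FA/FCM clustering stage is not reproduced:

```python
import numpy as np

def plane_orientation(points):
    """Best-fit plane of a segmented point subset via SVD: the right
    singular vector of the smallest singular value of the centered points
    is the plane normal. Returns the unit normal plus dip and dip
    direction in degrees."""
    c = points - points.mean(axis=0)
    normal = np.linalg.svd(c)[2][-1]     # smallest-singular-value direction
    if normal[2] < 0:
        normal = -normal                 # orient normal upward
    dip = np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))  # from vertical
    dip_dir = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
    return normal, dip, dip_dir
```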

  1. Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data

    NASA Astrophysics Data System (ADS)

    Du, L.; Zhong, R.; Sun, H.; Wu, Q.

    2017-09-01

    An automated method for tunnel deformation monitoring using high density point cloud data is presented. Firstly, the 3D point cloud is projected onto the XOY plane, and the projection of the central axis on the XOY plane, named Uxoy, is calculated by combining the Alpha Shape algorithm with the RANSAC (Random Sample Consensus) algorithm. The projection of the central axis on the YOZ plane, named Uyoz, is then obtained from the highest and lowest points, which are extracted by intersecting straight lines that pass through each point of Uxoy perpendicular to the two-dimensional surface with the tunnel point cloud; Uxoy and Uyoz together form the 3D central axis. Secondly, the buffer of each cross section is calculated by the K-nearest-neighbour algorithm, and the initial cross-sectional point set is quickly constructed by projection. Finally, the cross sections are denoised and the section lines are fitted using iterative ellipse fitting. To improve the accuracy of the cross sections, a fine adjustment method is proposed that rotates the initial sectional plane around the intercept point in the horizontal and vertical directions within the buffer. The proposed method is applied to a Shanghai subway tunnel, and the deformation of each section in the direction of 0 to 360 degrees is calculated. The results show that the cross sections have flattened from regular circles due to the great pressure at the top of the tunnel.
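    The paper fits cross sections with iterative ellipse fitting; as a simplified stand-in (not the authors' method), an algebraic (Kasa) least-squares circle fit already exposes radial deformation of a cross section as residuals about the fitted radius:

```python
import numpy as np

def fit_circle(y, z):
    """Kasa algebraic circle fit: solve y^2 + z^2 = c1*y + c2*z + c3 in a
    linear least-squares sense, then recover centre (cy, cz) and radius r.
    Flattening of a tunnel section shows up as systematic residuals."""
    A = np.column_stack([y, z, np.ones_like(y)])
    b = y**2 + z**2
    c1, c2, c3 = np.linalg.lstsq(A, b, rcond=None)[0]
    cy, cz = c1 / 2.0, c2 / 2.0
    r = np.sqrt(c3 + cy**2 + cz**2)
    return cy, cz, r
```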

  2. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral Lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured by the sensor's receiver, and the return signal, together with the position and orientation information of the sensor, is recorded. These recorded data are combined with GNSS/IMU data in further post-processing, forming high density multispectral 3D point clouds. As the first commercial multispectral airborne Lidar sensor, the Optech Titan system is capable of collecting point cloud data from all three channels: 532 nm visible (green), 1064 nm near infrared (NIR) and 1550 nm intermediate infrared (IR). It has become a new data source for 3D land cover classification. The paper presents an Object-Based Image Analysis (OBIA) approach that uses only multispectral Lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification-index features and by combining the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types; an overall accuracy of over 90% is achieved using multispectral Lidar point clouds for 3D land cover classification.

  3. Accuracy Assessment of a Canal-Tunnel 3d Model by Comparing Photogrammetry and Laserscanning Recording Techniques

    NASA Astrophysics Data System (ADS)

    Charbonnier, P.; Chavant, P.; Foucher, P.; Muzet, V.; Prybyla, D.; Perrin, T.; Grussenmeyer, P.; Guillemin, S.

    2013-07-01

    With recent developments in technology and computer science, conventional surveying methods are being supplanted by laser scanning and digital photogrammetry. These two surveying techniques generate 3D models of real-world objects or structures. In this paper, we consider the application of terrestrial laser scanning (TLS) and photogrammetry to the surveying of canal tunnels. The inspection of such structures requires time, safe access, specific processing and professional operators. Therefore, a French partnership proposes to develop dedicated equipment based on image processing for the visual inspection of canal tunnels. A 3D model of the vault and side walls of the tunnel is constructed from images recorded onboard a boat moving inside the tunnel. To assess the accuracy of this photogrammetric model (PM), a reference model is built using static TLS. We here address the problem of comparing the resulting point clouds. Difficulties arise because of the highly differentiated acquisition processes, which result in very different point densities. We propose a new tool designed to compare differences between pairs of point clouds or surfaces (triangulated meshes). Moreover, dealing with huge datasets requires the implementation of appropriate structures and algorithms. Several techniques are presented: point-to-point, cloud-to-cloud and cloud-to-mesh. In addition, farthest-point resampling, an octree structure and the Hausdorff distance are adopted and described. Experimental results are shown for a 475 m long canal tunnel located in France.
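    The cloud-to-cloud comparison and the Hausdorff distance mentioned above can be sketched by brute force; real implementations, as in this paper, use octrees or k-d trees so the pairwise distance matrix is never formed for huge clouds:

```python
import numpy as np

def cloud_to_cloud(A, B):
    """Per-point nearest-neighbour distance from every point of cloud A to
    cloud B (brute-force pairwise distances; fine for small clouds only)."""
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(axis=2)
    return np.sqrt(d2.min(axis=1))

def hausdorff(A, B):
    """Symmetric Hausdorff distance: the worst-case nearest-neighbour
    distance, taken in both directions."""
    return max(cloud_to_cloud(A, B).max(), cloud_to_cloud(B, A).max())
```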

  4. Terrestrial laser scanning in monitoring of anthropogenic objects

    NASA Astrophysics Data System (ADS)

    Zaczek-Peplinska, Janina; Kowalska, Maria

    2017-12-01

    The registered xyz coordinates in the form of a point cloud captured by a terrestrial laser scanner, and the intensity values (I) assigned to them, make it possible to perform geometric and spectral analyses. Comparison of point clouds registered in different time periods requires conversion of the data to a common coordinate system, and proper data selection is necessary. Factors like the point distribution dependent on the distance between the scanner and the surveyed surface, the angle of incidence, the tasked scan density and the intensity value have to be taken into consideration. A prerequisite for a correct analysis of point clouds registered during periodic measurements with a laser scanner is the ability to determine the quality and accuracy of the analysed data. The article presents a concept of spectral data adjustment based on geometric analysis of a surface, as well as examples of geometric analyses integrating geometric and physical data in one point cloud: point coordinates, recorded intensity values, and thermal images of an object. The experiments described here show multiple possible uses of terrestrial laser scanning data and demonstrate the necessity of multi-aspect and multi-source analyses in anthropogenic object monitoring. The article presents examples of multi-source data analyses with regard to intensity value correction for the beam's incidence angle. The measurements were performed using a Leica Nova MS50 scanning total station, a Z+F Imager 5010 scanner and the integrated Z+F T-Cam thermal camera.

  5. Analysis of Uncertainty in a Middle-Cost Device for 3D Measurements in BIM Perspective

    PubMed Central

    Sánchez, Alonso; Naranjo, José-Manuel; Jiménez, Antonio; González, Alfonso

    2016-01-01

    Medium-cost devices equipped with sensors are being developed to take 3D measurements. Some allow for generating geometric models and point clouds. Nevertheless, the accuracy of these measurements should be evaluated, taking into account the requirements of the Building Information Model (BIM). This paper analyzes the uncertainty in outdoor/indoor three-dimensional coordinate measurements and point clouds (using Spherical Accuracy Standard (SAS) methods) for Eyes Map, a medium-cost tablet manufactured by e-Capture Research & Development Company, Mérida, Spain. To achieve this, in outdoor tests the coordinates of targets were measured with this device at distances from 1 to 6 m and point clouds were obtained. These were then compared with the coordinates of the same targets measured by a Total Station. The average Euclidean distance error was 0.005–0.027 m for measurements by Photogrammetry and 0.013–0.021 m for the point clouds. All of them satisfy the tolerance for point cloud acquisition (0.051 m) according to the BIM Guide for 3D Imaging (General Services Administration); similar results are obtained in the indoor tests, with values of 0.022 m. In this paper, we establish the optimal distances for observations in both the Photogrammetry and 3D Photomodeling modes (outdoor) and point out some working conditions to avoid in indoor environments. Finally, the authors discuss some recommendations for improving the performance and working methods of the device. PMID:27669245
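
    The comparison against the GSA tolerance reduces to computing the mean Euclidean distance between device-measured and Total Station coordinates and checking it against 0.051 m; a sketch under that reading (variable and function names ours):

    ```python
    import numpy as np

    # Point cloud acquisition tolerance from the BIM Guide for 3D
    # Imaging (General Services Administration), in metres.
    GSA_POINT_CLOUD_TOLERANCE_M = 0.051

    def mean_euclidean_error(measured, reference):
        """Mean 3D distance between matched target coordinates (metres)."""
        return float(np.linalg.norm(measured - reference, axis=1).mean())

    def meets_bim_tolerance(measured, reference,
                            tol=GSA_POINT_CLOUD_TOLERANCE_M):
        return mean_euclidean_error(measured, reference) <= tol
    ```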

  6. Analyzing Data Remnant Remains on User Devices to Determine Probative Artifacts in Cloud Environment.

    PubMed

    Ahmed, Abdulghani Ali; Xue Li, Chua

    2018-01-01

    Cloud storage services allow users to store their data online, so that they can remotely access, maintain, manage, and back up data from anywhere via the Internet. Although helpful, this storage creates a challenge for digital forensic investigators and practitioners in collecting, identifying, acquiring, and preserving evidential data. This study proposes an investigation scheme for analyzing data remnants and determining probative artifacts in a cloud environment. Using pCloud as a case study, this research collected the data remnants available on end-user device storage following the storing, uploading, and accessing of data in the cloud storage. Data remnants are collected from several sources, including client software files, directory listings, prefetch, registry, network PCAP, browser, memory, and link files. Results demonstrate that the collected remnant data are beneficial in determining a sufficient number of artifacts about the investigated cybercrime. © 2017 American Academy of Forensic Sciences.

  7. A case study of microphysical structures and hydrometeor phase in convection using radar Doppler spectra at Darwin, Australia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riihimaki, Laura D.; Comstock, Jennifer M.; Luke, Edward

    To understand the microphysical processes that impact diabatic heating and cloud lifetimes in convection, we need to characterize the spatial distribution of supercooled liquid water. To address this observational challenge, vertically pointing active sensors at the Darwin Atmospheric Radiation Measurement (ARM) site are used to classify cloud phase within a deep convective cloud in a shallow-to-deep convection transitional case. The cloud cannot be fully observed by a lidar due to signal attenuation. Thus, we develop an objective method for identifying hydrometeor classes, including mixed-phase conditions, using k-means clustering on parameters that describe the shape of the Doppler spectra from vertically pointing Ka-band cloud radar. This approach shows that multiple, overlapping mixed-phase layers exist within the cloud, rather than a single region of supercooled liquid, indicating complexity in how ice growth and diabatic heating occur in the vertical structure of the cloud.

  8. A case study of microphysical structures and hydrometeor phase in convection using radar Doppler spectra at Darwin, Australia

    NASA Astrophysics Data System (ADS)

    Riihimaki, L. D.; Comstock, J. M.; Luke, E.; Thorsen, T. J.; Fu, Q.

    2017-07-01

    To understand the microphysical processes that impact diabatic heating and cloud lifetimes in convection, we need to characterize the spatial distribution of supercooled liquid water. To address this observational challenge, ground-based vertically pointing active sensors at the Darwin Atmospheric Radiation Measurement site are used to classify cloud phase within a deep convective cloud. The cloud cannot be fully observed by a lidar due to signal attenuation. Therefore, we developed an objective method for identifying hydrometeor classes, including mixed-phase conditions, using k-means clustering on parameters that describe the shape of the Doppler spectra from vertically pointing Ka-band cloud radar. This approach shows that multiple, overlapping mixed-phase layers exist within the cloud, rather than a single region of supercooled liquid. Diffusional growth calculations show that the conditions for the Wegener-Bergeron-Findeisen process exist within one of these mixed-phase microstructures.
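
    The clustering step can be illustrated with a plain NumPy k-means over spectrum-shape feature vectors (for example spectral width and skewness, standardised beforehand); this is a generic sketch, not the authors' implementation:

    ```python
    import numpy as np

    def kmeans(X, k, iters=50, seed=0):
        """Cluster rows of X (n_samples x n_features) into k groups.

        Each row would hold Doppler-spectrum shape descriptors for one
        radar range gate; cluster labels then map to hydrometeor classes.
        """
        rng = np.random.default_rng(seed)
        # Initialise centroids from k distinct data points.
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # Assign each point to its nearest centroid.
            labels = np.linalg.norm(
                X[:, None] - centroids[None], axis=2).argmin(axis=1)
            # Move each centroid to the mean of its assigned points.
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = X[labels == j].mean(axis=0)
        return labels, centroids
    ```

    In the study's setting, the clusters themselves are unlabeled; the hydrometeor class of each cluster (ice, liquid, mixed-phase) would still have to be assigned by inspecting its spectra.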

  9. Remote sensing of smoke, clouds, and radiation using AVIRIS during SCAR experiments

    NASA Technical Reports Server (NTRS)

    Gao, Bo-Cai; Remer, Lorraine; Kaufman, Yoram J.

    1995-01-01

    During the past two years, researchers from several institutes joined together to take part in two SCAR experiments. SCAR-A (Sulfates, Clouds And Radiation - Atlantic) took place in the mid-Atlantic region of the United States in July 1993. Remote sensing data were acquired with the Airborne Visible Infrared Imaging Spectrometer (AVIRIS), the MODIS Airborne Simulator (MAS), and an RC-10 mapping camera from an ER-2 aircraft at 20 km. In situ measurements of aerosol and cloud microphysical properties were made with a variety of instruments on the University of Washington's C-131A research aircraft. Ground-based measurements of aerosol optical depths and particle size distributions were made using a network of sunphotometers. The main purpose of the SCAR-A experiment was to study the optical, physical and chemical properties of sulfate aerosols and their interaction with clouds and radiation. Sulfate particles are believed to affect the energy balance of the earth by directly reflecting solar radiation back to space and by increasing cloud albedo. SCAR-C (Smoke, Clouds And Radiation - California) took place on the west coast during September and October of 1994. Sets of aircraft and ground-based instruments, similar to those used during SCAR-A, were deployed. Remote sensing of fires and smoke from the AVIRIS and MAS imagers on the ER-2 aircraft was combined with a complete in situ characterization of the aerosol and trace gases from the University of Washington's C-131A aircraft and a Cessna aircraft from the U.S. Forest Service. The comprehensive database acquired during SCAR-A and SCAR-C will contribute to a better understanding of the role of clouds and aerosols in global change studies. The data will also be used to develop satellite remote sensing algorithms for MODIS on the Earth Observing System.

  10. Effects of Data Quality on the Characterization of Aerosol Properties from Multiple Sensors

    NASA Technical Reports Server (NTRS)

    Petrenko, Maksym; Ichoku, Charles; Leptoukh, Gregory

    2011-01-01

    Cross-comparison of aerosol properties between ground-based and spaceborne measurements is an important validation technique that helps to investigate the uncertainties of aerosol products acquired using spaceborne sensors. However, it has been shown that even minor differences in the cross-characterization procedure may significantly impact the results of such validation. Of particular consideration is the quality assurance / quality control (QA/QC) information: auxiliary data indicating a "confidence" level (e.g., Bad, Fair, Good, Excellent) conferred by the retrieval algorithms on the produced data. Depending on the treatment of the available QA/QC information, a cross-characterization procedure has the potential to filter out invalid data points, such as uncertain or erroneous retrievals, which tend to reduce the credibility of such comparisons. However, under certain circumstances, even high QA/QC values may not fully guarantee the quality of the data. For example, retrievals in the proximity of a cloud might be particularly perplexing for an aerosol retrieval algorithm, resulting in invalid data that, nonetheless, could be assigned a high QA/QC confidence. In this presentation, we will study the effects of several QA/QC parameters on the cross-characterization of aerosol properties between data acquired by multiple spaceborne sensors. We will utilize the Multi-sensor Aerosol Products Sampling System (MAPSS), which provides a consistent platform for multi-sensor comparison, including collocation with measurements acquired by the ground-based Aerosol Robotic Network (AERONET). The multi-sensor spaceborne data analyzed include those acquired by the Terra-MODIS, Aqua-MODIS, Terra-MISR, Aura-OMI, Parasol-POLDER, and Calipso-CALIOP satellite instruments.
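
    The QA/QC screening described here amounts to mapping the ordered confidence labels onto a numeric scale and discarding retrievals below a chosen threshold; a hypothetical sketch (the field and level names are ours, not MAPSS's):

    ```python
    # Ordered confidence scale, as in the example labels above.
    QA_LEVELS = {"Bad": 0, "Fair": 1, "Good": 2, "Excellent": 3}

    def filter_by_qa(records, min_level="Good"):
        """Keep only retrievals whose QA flag meets the minimum level.

        Each record is assumed to be a dict with a 'qa' label; real
        products encode QA differently per sensor.
        """
        threshold = QA_LEVELS[min_level]
        return [r for r in records if QA_LEVELS[r["qa"]] >= threshold]
    ```

    As the abstract cautions, such a filter cannot catch invalid retrievals that were assigned a high confidence, for instance near clouds.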

  11. Ship Tracks in the Sky

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Because clouds represent an area of great uncertainty in studies of global climate, scientists are interested in better understanding the processes by which clouds form and change over time. In recent years, scientists have turned their attention to the ways in which human-produced aerosol pollution modifies clouds. One area that has drawn scientists' attention is 'ship tracks,' or clouds that form from the sulfate aerosols released by large ships. Although ships are not significant sources of pollution themselves, they do release enough sulfur dioxide in the exhaust from their smokestacks to modify overlying clouds. Specifically, the aerosol particles formed by the ship exhaust in the atmosphere cause the clouds to be more reflective, carry more water, and possibly inhibit them from precipitating. This is one example of how humans have been creating and modifying clouds for generations through the burning of fossil fuels. This image was acquired over the northern Pacific Ocean by the Moderate-resolution Imaging Spectroradiometer (MODIS), flying aboard NASA's Terra satellite, on April 29, 2002. Image courtesy Jacques Descloitres, MODIS Land Rapid Response Team at NASA GSFC

  12. Titan's Methane Hydrological Cycle: Detection of Seasonal Change

    NASA Astrophysics Data System (ADS)

    Schaller, E. L.; Brown, M. E.; Roe, H. G.

    2007-08-01

    We have acquired whole-disk spectra of Titan on over 100 nights with IRTF/SpeX during the 2006-2007 Titan season. The data encompass the spectral range of 0.8 to 2.4 microns at a resolution of 375. These disk-integrated spectra allow us to determine Titan's total fractional cloud coverage and the altitudes of the clouds present. The near lack of tropospheric cloud activity in these spectra is in sharp contrast to nearly every spectrum taken from 1995-1999 with UKIRT by Griffith et al. (1998 and 2000), who found rapidly varying clouds covering 0.5-9% of Titan's disk. The differences between these two similar datasets indicate a striking seasonal change in the behavior of Titan's clouds. Adaptive optics observations from Keck and Gemini also show markedly decreased cloud activity in the late southern summer era compared with the period surrounding southern summer solstice (October 2002). Observations of the latitudes, magnitudes, altitudes, and frequencies of Titan's clouds as Titan moves toward southern autumnal equinox in 2009 will help elucidate when and how Titan's methane hydrological cycle changes with season.

  13. Pre-Dawn Clouds Over Mars

    NASA Image and Video Library

    1997-08-28

    These are more wispy blue clouds from Sol 39 as seen by the Imager for Mars Pathfinder. The bright clouds near the bottom are about 30 degrees above the horizon. The clouds are believed to be at an altitude of 10 to 15 km, and are thought to be made of small water ice particles. The picture was taken about 35 minutes before sunrise. Sojourner spent 83 days of a planned seven-day mission exploring the Martian terrain, acquiring images, and taking chemical, atmospheric and other measurements. The final data transmission received from Pathfinder was at 10:23 UTC on September 27, 1997. Although mission managers tried to restore full communications during the following five months, the successful mission was terminated on March 10, 1998. http://photojournal.jpl.nasa.gov/catalog/PIA00919

  14. Comparison of cloud top heights derived from FY-2 meteorological satellites with heights derived from ground-based millimeter wavelength cloud radar

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Wang, Zhenhui; Cao, Xiaozhong; Tao, Fa

    2018-01-01

    Clouds are currently observed by both ground-based and satellite remote sensing techniques. Each technique has its own strengths and weaknesses depending on the observation method, instrument performance and the methods used for retrieval. It is important to study synergistic cloud measurements to improve the reliability of the observations and to verify the different techniques. The FY-2 geostationary orbiting meteorological satellites continuously observe the sky over China. Their cloud top temperature product can be processed to retrieve the cloud top height (CTH). The ground-based millimeter wavelength cloud radar can acquire information about the vertical structure of clouds, such as the cloud base height (CBH), the CTH and the cloud thickness, and can continuously monitor changes in the vertical profiles of clouds. The CTHs were retrieved using both cloud top temperature data from the FY-2 satellites and the cloud radar reflectivity data for the same time period (June 2015 to May 2016), and the resulting datasets were compared in order to evaluate the accuracy of CTH retrievals using FY-2 satellites. The results show that the concordance rate of cloud detection between the two datasets was 78.1%. Higher consistency was obtained for thicker clouds with larger echo intensity and for more continuous clouds. The average difference in CTH between the two techniques was 1.46 km. The difference in CTH between low- and mid-level clouds was less than that for high-level clouds. The attenuation threshold of the cloud radar for rainfall was 0.2 mm/min; rainfall intensities below this threshold had no effect on the CTH. The satellite CTH can be used to compensate for the attenuation error in the cloud radar data.

  15. Simultaneous multiple view high resolution surface geometry acquisition using structured light and mirrors.

    PubMed

    Basevi, Hector R A; Guggenheim, James A; Dehghani, Hamid; Styles, Iain B

    2013-03-25

    Knowledge of the surface geometry of an imaging subject is important in many applications. This information can be obtained via a number of different techniques, including time of flight imaging, photogrammetry, and fringe projection profilometry. Existing systems may have restrictions on instrument geometry, require expensive optics, or require moving parts in order to image the full surface of the subject. An inexpensive generalised fringe projection profilometry system is proposed that can account for arbitrarily placed components and use mirrors to expand the field of view. It simultaneously acquires multiple views of an imaging subject, producing a cloud of points that lie on its surface, which can then be processed to form a three dimensional model. A prototype of this system was integrated into an existing Diffuse Optical Tomography and Bioluminescence Tomography small animal imaging system and used to image objects including a mouse-shaped plastic phantom, a mouse cadaver, and a coin. A surface mesh generated from surface capture data of the mouse-shaped plastic phantom was compared with ideal surface points provided by the phantom manufacturer; 50% of points were found to lie within 0.1 mm of the surface mesh, 82% within 0.2 mm, and 96% within 0.4 mm.
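
    The reported percentages are cumulative fractions of per-point distances to the reference surface. Assuming such point-to-mesh distances have already been computed, the tally is straightforward (function name ours):

    ```python
    import numpy as np

    def fraction_within(distances_mm, thresholds_mm=(0.1, 0.2, 0.4)):
        # Cumulative fraction of points whose distance to the reference
        # surface is at most each threshold, as in the abstract's
        # 0.1 mm / 0.2 mm / 0.4 mm breakdown.
        d = np.asarray(distances_mm)
        return {t: float((d <= t).mean()) for t in thresholds_mm}
    ```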

  16. The advanced magnetovision system for Smart application

    NASA Astrophysics Data System (ADS)

    Kaleta, Jerzy; Wiewiórski, Przemyslaw; Lewandowski, Daniel

    2010-04-01

    An original method, measurement devices and a software tool for the examination of magneto-mechanical phenomena in a wide range of SMART applications are proposed. In many high-end constructions it is necessary to examine mechanical and magnetic properties simultaneously. Technological processes used in the fabrication of modern materials (for example cutting, premagnetisation and prestress) and advanced concepts for using SMART structures call for a next-generation system for optimising electric and magnetic field distribution. An original fast scanner with multisensor probes and a static resolution of more than a million points has been constructed to measure all components of the magnetic field intensity vector H and to visualise them in a form acceptable to the end user. The scanner can also acquire electric potentials on a surface to work with magneto-piezo devices. Advanced electronic subsystems process the results in the Magscaner Vision System, and the corresponding software, Maglab, has also been developed. The Dipole Contour Method (DCM) is provided for modelling different states of coupled magnetic and electric materials and for visually explaining the experimental data. Dedicated software collaborates with industrial parametric CAD systems. The measurement technique consists of acquiring a cloud of points, similarly to tomography, followed by 3D visualisation. The ongoing verification of the capabilities of the 3D digitizer will enable the inspection of SMART actuators of cylindrical form and miniature pellets designed for oscillation dampers in various constructions, for example in the vehicle industry.

  17. Assessment of different models for computing the probability of a clear line of sight

    NASA Astrophysics Data System (ADS)

    Bojin, Sorin; Paulescu, Marius; Badescu, Viorel

    2017-12-01

    This paper is focused on modeling the morphological properties of cloud fields in terms of the probability of a clear line of sight (PCLOS). PCLOS is defined as the probability that a line of sight between an observer and a given point of the celestial vault passes freely without intersecting a cloud. A variety of PCLOS models assuming hemispherical, semi-ellipsoidal and ellipsoidal cloud shapes are tested. The effective parameters (cloud aspect ratio and absolute cloud fraction) are extracted from high-resolution series of sunshine number measurements. The performance of the PCLOS models is evaluated from the perspective of their ability to retrieve the point cloudiness. The advantages and disadvantages of the tested models are discussed, aiming at a simplified parameterization of the PCLOS models.
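
    One widely used parameterisation in this family of broken-cloud models writes PCLOS at zenith angle θ as (1 − A)^(1 + β·tan θ), where A is the absolute cloud fraction and β the cloud aspect ratio; this exact form is illustrative only and may differ from the geometry-specific models the paper tests:

    ```python
    import math

    def pclos(absolute_cloud_fraction, zenith_angle_rad, aspect_ratio):
        # Illustrative broken-cloud model: at zenith the clear-sight
        # probability equals the clear fraction (1 - A); it decreases
        # toward the horizon as the slant path crosses more cloud sides.
        A, beta = absolute_cloud_fraction, aspect_ratio
        return (1.0 - A) ** (1.0 + beta * math.tan(zenith_angle_rad))
    ```

    Flat clouds (β → 0) make PCLOS independent of zenith angle, which is why the aspect ratio is one of the effective parameters the authors extract.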

  18. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Lohani, B.

    2014-05-01

    A mobile augmented reality system is the next-generation technology for intelligently visualising the 3D real world. The technology is expanding at a fast pace, upgrading the smart phone to an intelligent device. The research problem identified and presented in the current work is to view the actual dimensions of objects captured by a smart phone in real time. The proposed methodology first establishes correspondence between a LiDAR point cloud, stored on a server, and the image captured by the mobile phone. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR points which lie in the viewshed of the mobile camera. A pseudo-intensity image is generated using the LiDAR points and their intensity. The mobile image and the pseudo-intensity image are then registered using the SIFT image registration method, thereby creating a pipeline to locate the point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information for the pairs of points selected on the mobile image and overlays the dimensions on top of the image. This paper describes all steps of the proposed method. The paper uses an experimental setup to mimic the mobile phone and server system and presents some initial but encouraging results.
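
    The two geometric steps, projecting LiDAR points into the camera using the orientation parameters and measuring the real-world distance between a selected pair, can be sketched as follows (a standard pinhole model; matrix and function names are ours, not the paper's):

    ```python
    import numpy as np

    def project(points_world, R, t, K):
        """Pinhole projection of LiDAR points into the mobile image.

        R, t are the exterior orientation (world -> camera rotation and
        translation); K is the interior orientation (intrinsics) matrix.
        """
        cam = R @ points_world.T + t[:, None]  # world -> camera frame
        uvw = K @ cam                          # camera -> homogeneous pixels
        return (uvw[:2] / uvw[2]).T            # divide out depth -> (u, v)

    def dimension(p1, p2):
        """Real-world distance between the two LiDAR points matched to
        the pixels the user selects on the image."""
        return float(np.linalg.norm(np.asarray(p1) - np.asarray(p2)))
    ```

    Only points with positive depth in the camera frame (and inside the image bounds) lie in the viewshed; a full implementation would also cull points occluded by nearer geometry.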

  19. LANDSAT: US standard catalog, 1 February 1977 - 28 February 1977. [LANDSAT imagery for the month of February 1977

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The U.S. Standard Catalog lists U.S. imagery acquired by LANDSAT 1 and LANDSAT 2 which has been processed and input to the data files during the referenced month. Data such as date acquired, cloud cover and image quality are given for each scene. The microfilm roll and frame on which the scene may be found are also given.

  20. LANDSAT non-U.S. standard catalog, 1 January 1977 through 31 January 1977. [LANDSAT imagery January 1977

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The Non-U.S. Standard Catalog lists Non-U.S. imagery acquired by LANDSAT 1 and LANDSAT 2 which was processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover, and image quality are given for each scene. The microfilm roll and frame on which the scene may be found is also given.
